\section{Introduction}
\label{sect:intro}
This is a golden era for the study of galaxy formation and
evolution. Elliptical galaxies offer a good starting point for
this work because they appear to be homogeneous stellar systems
with uniformly old and red populations. In addition, elliptical
galaxies contain negligible amounts of gas and show very little
star formation. It is therefore convenient to study galaxy
formation via ellipticals first. After the significant development
of cosmology (e.g. Peebles 1980), the picture that galaxies formed
in a universe dominated by dark matter became widely accepted, but
the mechanism of elliptical galaxy formation is still debated.
Two competing pictures of elliptical galaxy formation are
currently discussed. On the one hand, some authors suggest that
elliptical galaxies formed in a single intense burst of star
formation at high redshift and that their stellar populations then
evolved passively to the present day. This ``monolithic'' scenario
can explain the dense cores, metallicity gradients (Kormendy 1987;
Thomsen \& Baum 1989; Kormendy \& Djorgovski 1989) and fundamental
scaling relations such as the colour-magnitude relation and the
fundamental plane of elliptical galaxies (Kodama et al. 1998; van
Dokkum \& Stanford 2003), but it cannot explain the different
metallicity levels of halo stars or the wide age range of globular
clusters. On the other hand, based on evidence of strong
gravitational interactions and mergers between disk galaxies,
Toomre \& Toomre (1972) pointed out that elliptical galaxies may
form through the merging of smaller galaxies. This is the
so-called ``hierarchical'' scenario of galaxy formation.
In recent years, the hierarchical picture has come to be regarded
as the most plausible mechanism of galaxy formation and has been
simulated in depth, e.g. by Kauffmann et al. (1993, 1996, 1998)
and the Durham group (Baugh et al. 1996, 1998; Cole et al. 2000).
These studies presented some exciting results, e.g. on the star
formation histories of galaxies (see e.g. Baugh et al. 1998). On
the observational side, however, studies showed some different
trends. Firstly, a significant fraction of early-type galaxies are
found to have recent star formation (Barger et al. 1996).
Secondly, only a small fraction of mass is found to be involved in
the interaction and merger of galaxies. Thirdly, some related
issues have been brought forward, e.g. the super-solar
[$\alpha$/Fe] ratio of massive ellipticals, which suggests that
these galaxies formed on relatively short time-scales and have an
initial mass function skewed towards massive stars. The early
models of the hierarchical formation of elliptical galaxies find
it difficult to explain and reproduce these observed trends
(Thomas 1999). In 2006, De Lucia et al. (2006) brought forward a
new hierarchical model of the formation of elliptical galaxies.
This model, adopting the new $\Lambda$CDM cosmology and a
high-resolution simulation, tried to explain the
``anti-hierarchical'' behavior of the star formation histories of
the elliptical galaxy population and presented some new
predictions. It is therefore now all the more valuable to study
the formation of elliptical galaxies on the basis of this kind of
new model.
To study galaxy formation, stellar population synthesis has become
a very useful and popular technique in recent years, because
different galaxy formation models usually predict different star
formation histories. A series of detailed studies of the stellar
populations of galaxies, on both the observational and the
semi-analytic side, have been carried out (e.g. Trager et al.
2000a, b; Terlevich \& Forbes 2002; van Zee et al. 2004). However,
all these works used single stellar population (SSP) synthesis
models (e.g. Vazdekis et al. 1996, 1997; Vazdekis 1999; Worthey
1994; Bruzual \& Charlot 2003), as no binary stellar population
(BSP) synthesis model was available. As pointed out by Zhang et
al. (2005a, b), however, binary interaction plays an important
role in evolutionary population synthesis, and their work showed
some results that differ from those of SSP models (see Zhang et
al. 2005a for more detail). We therefore ask how well the
hierarchical formation model of elliptical galaxies is supported
once binary interaction is taken into account in stellar
population synthesis. In this paper we look for answers by making
use of the BSP model of Zhang et al. (2005b) and the hierarchical
formation model of De Lucia et al. (2006). We do not intend to
investigate the effects of binary interaction here; that is the
subject of another paper. The structure of the paper is as
follows. In Sect. 2 we introduce our galaxy sample and the BSP
model. In Sect. 3 we give a brief description of the determination
of stellar ages and metallicities and then show the main results.
In Sect. 4 we test the latest hierarchical formation model of
elliptical galaxies, and finally we give our discussion and
conclusions in Sect. 5.
\section{Our data sample and the BSP model}
\label{sect:Obs}
\subsection{The galaxy sample}
We define our main sample by selecting all normal elliptical
galaxies from the sample of Thomas et al. (2005). As a result, 71
normal elliptical galaxies are included, while 51 S0 and 2 cD
galaxies are excluded. The $B-V$ colors and B-band absolute
magnitudes of these galaxies are then supplemented from the
HyperLeda database (http://www.brera.mi.astro.it/hypercat/) where
possible. In the sample, 42 elliptical galaxies reside in
low-density and 29 in high-density environments; the low-density
subsample comprises all galaxies that do not reside in a
high-density environment. These data are well suited to estimating
stellar ages and metallicities because they were selected from
reliable sources (Gonz\'{a}lez 1993; Mehlert et al. 2000, 2003;
Beuing et al. 2002; Lauberts \& Valentijn 1989) and reobserved by
Thomas et al. where necessary (19 objects were reobserved). In
particular, because the absorption-line strengths of the galaxies
are measured as functions of galaxy radius in these sources, we
adopt the central indices measured within r$_{e}$/10 (where
r$_{e}$ is the effective radius), so that the analysis does not
suffer from aperture effects. In this work, we directly use the
reliable Lick indices H$\beta$, Mgb, and
$<$Fe$>$=0.5$\times$(Fe5270+Fe5335). According to Thomas et al.,
the medians of the 1$\sigma$ errors in H$\beta$, Mgb, Fe5270 and
Fe5335 are 0.06, 0.06, 0.07 and 0.08\,\AA, respectively. A further
advantage of these data is that the elliptical galaxies span a
large range in central velocity dispersion, 0 $\leq$
$\sigma$$_0$/(km s$^{-1}$) $\leq$ 340, which makes it convenient
to study the relations between stellar properties and stellar mass
following the results of Thomas et al. (2005). The detailed data
of our sample galaxies are listed in Table 1, which gives the
galaxy name, velocity dispersion, H$\beta$, Mgb, $<$Fe$>$, M$_B$,
$B-V$, environment and the observational uncertainties of the
three line indices. In addition, we select 11 elliptical galaxies
in the Fornax cluster from Kuntschner (2000), but these are only
used for testing the predictions relating to cluster-centric
distance.
\begin{table}[]
\caption[]{The data for low- and high-density ellipticals. In the
table, `$\sigma$$_{\rm 0}$' denotes the velocity dispersion and
`E' the environment; `L' and `H' denote low- and high-density
environments, respectively. All line indices are measured within
the r$_e$/10 aperture.} \label{Tab:1}
\begin{center}\begin{tabular}{lrrrrrrrrrr}
\hline\noalign{\smallskip}
\multicolumn {1} {l} {Name}& \multicolumn {1} {c} {$\sigma$$_{\rm
0}$} &\multicolumn {1} {c} {H$\beta$}&\multicolumn {1} {c}
{$\delta$H$\beta$}&\multicolumn {1} {c} {Mgb}&\multicolumn {1} {c}
{$\delta$Mgb}& \multicolumn {1} {c} {$<$Fe$>$} &\multicolumn {1}
{c} {$\delta$$<$Fe$>$} & \multicolumn {1} {c} {$M_B$}
&\multicolumn {1} {c} {$B-V$}&\multicolumn {1} {c} {E}\\
&\multicolumn {1} {c}{[km s$^{-1}$]} &\multicolumn {1} {c}{[\AA]} &\multicolumn {1} {c}{[\AA]}
&\multicolumn {1} {c}{[\AA]} &\multicolumn {1} {c}{[\AA]}&\multicolumn {1} {c}{[\AA]}
&\multicolumn {1} {c}{[\AA]} &\multicolumn {1} {c}{[mag]}&\multicolumn {1} {c}{[mag]}\\
\hline\noalign{\smallskip}
NGC 0221 & 72.1 &2.31 &0.05 &2.96 &0.03 &2.75 &0.03 &-17.424 &0.800 &L \\
NGC 0315 &321.0 &1.74 &0.06 &4.84 &0.05 &2.88 &0.05 &-22.472 &0.929 &L \\
NGC 0507 &262.2 &1.73 &0.09 &4.52 &0.11 &2.78 &0.10 &-22.121 &0.888 &L \\
NGC 0547 &235.6 &1.58 &0.07 &5.02 &0.05 &2.82 &0.05 &-21.663 & &L \\
NGC 0636 &160.3 &1.89 &0.04 &4.20 &0.04 &3.03 &0.04 &-19.798 &0.908 &L \\
NGC 0720 &238.6 &1.77 &0.12 &5.17 &0.11 &2.87 &0.09 &-20.786 &0.948 &L \\
NGC 0821 &188.7 &1.66 &0.04 &4.53 &0.04 &2.95 &0.04 &-20.753 &0.865 &L \\
NGC 1453 &286.5 &1.60 &0.06 &4.95 &0.05 &2.98 &0.05 &-21.613 &0.911 &L \\
NGC 1600 &314.8 &1.55 &0.07 &5.13 &0.06 &3.06 &0.06 &-22.419 &0.923 &L \\
NGC 1700 &227.3 &2.11 &0.05 &4.15 &0.04 &3.00 &0.04 &-21.903 &0.890 &L \\
NGC 2300 &251.8 &1.68 &0.06 &4.98 &0.05 &2.97 &0.05 &-20.754 &0.966 &L \\
NGC 2778 &154.4 &1.77 &0.08 &4.70 &0.06 &2.85 &0.05 &-19.206 &0.889 &L \\
NGC 3377 &107.6 &2.09 &0.05 &3.99 &0.03 &2.61 &0.03 &-19.169 &0.820 &L \\
NGC 3379 &203.2 &1.62 &0.05 &4.78 &0.03 &2.86 &0.03 &-20.608 &0.927 &L \\
NGC 3608 &177.7 &1.69 &0.06 &4.61 &0.04 &2.94 &0.04 &-19.733 &0.909 &L \\
NGC 3818 &173.2 &1.71 &0.08 &4.88 &0.07 &2.97 &0.06 &-19.400 &0.908 &L \\
NGC 4278 &232.5 &1.56 &0.05 &4.92 &0.04 &2.68 &0.04 &-19.359 &0.895 &L \\
NGC 5638 &154.2 &1.65 &0.04 &4.64 &0.04 &2.84 &0.04 &-19.974 &0.892 &L \\
NGC 5812 &200.3 &1.70 &0.04 &4.81 &0.04 &3.06 &0.04 &-20.450 &0.927 &L \\
NGC 5813 &204.8 &1.42 &0.07 &4.65 &0.05 &2.67 &0.05 &-21.113 &0.916 &L \\
NGC 5831 &160.5 &2.00 &0.05 &4.38 &0.04 &3.05 &0.03 &-19.813 &0.897 &L \\
NGC 6127 &238.9 &1.50 &0.05 &4.96 &0.06 &2.85 &0.05 &-21.352 &0.944 &L \\
NGC 6702 &173.8 &2.46 &0.06 &3.80 &0.04 &3.00 &0.04 &-21.613 &0.839 &L \\
NGC 7052 &273.8 &1.48 &0.07 &5.02 &0.06 &2.84 &0.05 &-21.199 & &L \\
NGC 7454 &106.5 &2.15 &0.06 &3.27 &0.05 &2.48 &0.04 &-19.930 &0.866 &L \\
NGC 7785 &239.6 &1.63 &0.06 &4.60 &0.04 &2.91 &0.04 &-21.375 &0.949 &L \\
ESO 107-04 &147.0 &2.24 &0.25 &3.63 &0.16 &2.97 &0.09 &-20.386 &0.849 &L \\
ESO 148-17 &134.5 &2.26 &0.52 &3.49 &0.32 &2.58 &0.20 &-19.865 &0.875 &L \\
IC 4797 &220.6 &1.92 &0.26 &4.52 &0.18 &2.75 &0.10 &-20.876 &0.908 &L \\
NGC 0312 &254.8 &1.83 &0.09 &4.56 &0.08 &2.48 &0.05 &-21.937 &0.929 &L \\
NGC 0596 &161.8 &2.12 &0.05 &3.95 &0.04 &2.81 &0.03 &-20.424 &0.845 &L \\
NGC 0636 &178.5 &1.86 &0.26 &4.38 &0.17 &2.83 &0.09 &-19.798 &0.908 &L \\
NGC 1052 &202.6 &1.22 &0.04 &5.53 &0.03 &2.77 &0.02 &-20.139 &0.900 &L \\
NGC 1395 &250.0 &1.62 &0.05 &5.21 &0.04 &2.93 &0.03 &-21.211 &0.921 &L \\
NGC 1407 &259.7 &1.67 &0.07 &4.88 &0.06 &2.85 &0.03 &-21.432 &0.946 &L \\
NGC 1549 &203.3 &1.79 &0.03 &4.39 &0.03 &2.88 &0.02 &-19.981 &0.906 &L \\
NGC 2434 &180.4 &1.87 &0.13 &3.72 &0.10 &2.87 &0.07 &-19.828 &0.818 &L \\
NGC 2986 &282.2 &1.48 &0.06 &4.97 &0.05 &2.92 &0.03 &-21.064 &0.891 &L \\
NGC 3078 &268.1 &1.12 &0.09 &5.20 &0.07 &3.16 &0.04 &-20.893 &0.916 &L \\
NGC 3923 &267.9 &1.87 &0.08 &5.12 &0.07 &3.07 &0.04 &-21.151 &0.906 &L \\
NGC 5791 &271.8 &1.60 &0.19 &5.06 &0.15 &3.30 &0.10 &-21.123 &0.89 &L \\
NGC 5903 &209.2 &1.68 &0.10 &4.44 &0.08 &2.90 &0.05 &-21.220 &0.839 &L \\
NGC 4261 &288.3 &1.34 &0.06 &5.11 &0.04 &3.01 &0.04 &-21.299 &0.952 &H \\
NGC 4374 &282.1 &1.51 &0.04 &4.78 &0.03 &2.82 &0.03 &-20.888 &0.931 &H \\
NGC 4472 &279.2 &1.62 &0.06 &4.85 &0.06 &2.91 &0.05 &-21.785 &0.928 &H \\
NGC 4478 &127.7 &1.84 &0.06 &4.33 &0.06 &2.94 &0.05 &-19.564 &0.873 &H \\
\noalign{\smallskip}\hline
\end{tabular}\end{center}
\end{table}
\addtocounter{table} {-1}
\begin{table}[]
\caption[]{-- Continued} \label{Tab:1}
\begin{center}\begin{tabular}{lrrrrrrrrrr}
\hline\noalign{\smallskip}
\multicolumn {1} {l} {Name}& \multicolumn {1} {c} {$\sigma$$_{\rm
0}$} &\multicolumn {1} {c} {H$\beta$}&\multicolumn {1} {c}
{$\delta$H$\beta$}&\multicolumn {1} {c} {Mgb}&\multicolumn {1} {c}
{$\delta$Mgb}& \multicolumn {1} {c} {$<$Fe$>$} &\multicolumn {1}
{c} {$\delta$$<$Fe$>$} & \multicolumn {1} {c} {$M_B$}
&\multicolumn {1} {c} {$B-V$}&\multicolumn {1} {c} {E}\\
&\multicolumn {1} {c}{[km s$^{-1}$]} &\multicolumn {1} {c}{[\AA]}
&\multicolumn {1} {c}{[\AA]} &\multicolumn {1} {c}{[\AA]}
&\multicolumn {1} {c}{[\AA]}&\multicolumn {1} {c}{[\AA]}
&\multicolumn {1} {c}{[\AA]} &\multicolumn {1} {c}{[mag]}&\multicolumn {1} {c}{[mag]}\\
\hline\noalign{\smallskip}
NGC 4489 & 47.2 &2.39 &0.07 &3.21 &0.06 &2.66 &0.05 &-18.189 &0.804 &H \\
NGC 4552 &251.8 &1.47 &0.05 &5.15 &0.03 &2.99 &0.03 &-20.798 &0.936 &H \\
NGC 4697 &162.4 &1.75 &0.07 &4.08 &0.05 &2.77 &0.04 &-21.239 &0.869 &H \\
NGC 7562 &248.0 &1.69 &0.05 &4.54 &0.04 &2.87 &0.04 &-21.416 &0.938 &H \\
NGC 7619 &300.3 &1.36 &0.04 &5.06 &0.04 &3.06 &0.04 &-21.973 &0.925 &H \\
NGC 7626 &253.1 &1.46 &0.05 &5.05 &0.04 &2.83 &0.04 &-21.673 &0.947 &H \\
NGC 4839 &275.5 &1.42 &0.04 &4.92 &0.04 &2.75 &0.04 &-22.263 &0.879 &H \\
NGC 4841A &263.9 &1.53 &0.05 &4.51 &0.05 &2.89 &0.04 &-21.380 & &H \\
NGC 4926 &273.3 &1.50 &0.06 &5.17 &0.06 &2.50 &0.05 &-21.443 &0.954 &H \\
IC 4051 &258.7 &1.42 &0.06 &5.34 &0.07 &2.75 &0.05 &-20.204 &0.933 &H \\
NGC 4860 &280.5 &1.39 &0.06 &5.39 &0.07 &2.85 &0.05 &-20.948 &0.973 &H \\
NGC 4923 &186.0 &1.70 &0.05 &4.43 &0.05 &2.69 &0.04 &-19.983 &0.888 &H \\
NGC 4840 &216.6 &1.63 &0.07 &4.94 &0.07 &2.91 &0.06 &-20.131 &0.954 &H \\
NGC 4869 &188.1 &1.40 &0.05 &4.83 &0.05 &2.90 &0.04 &-20.797 &0.934 &H \\
NGC 4908 &192.4 &1.58 &0.09 &4.58 &0.09 &2.65 &0.07 &-21.075 &0.936 &H \\
IC 4045 &167.3 &1.46 &0.06 &4.70 &0.07 &2.77 &0.05 &-20.282 &0.943 &H \\
NGC 4850 &155.8 &1.57 &0.06 &4.39 &0.06 &2.58 &0.05 &-19.601 &0.956 &H \\
NGC 4872 &171.7 &2.05 &0.05 &4.05 &0.06 &2.82 &0.04 &-20.893 &0.874 &H \\
NGC 4957 &208.4 &1.76 &0.03 &4.53 &0.03 &2.93 &0.02 &-21.177 &0.925 &H \\
NGC 4952 &252.6 &1.71 &0.03 &4.76 &0.03 &2.69 &0.02 &-21.203 & &H \\
GMP 1990 &208.9 &1.40 &0.04 &4.78 &0.04 &2.50 &0.03 & & &H \\
NGC 4827 &243.7 &1.53 &0.03 &4.89 &0.03 &2.80 &0.02 &-21.495 &0.904 &H \\
NGC 4807 &178.5 &1.81 &0.06 &4.39 &0.06 &2.78 &0.05 &-20.703 &0.919 &H \\
ESO 185-54 &277.2 &1.57 &0.06 &5.11 &0.05 &3.07 &0.03 &-21.861 & &H \\
NGC 3224 &155.8 &2.31 &0.14 &3.91 &0.12 &2.92 &0.08 &-20.508 &0.828 &H \\
\noalign{\smallskip}\hline
\end{tabular}\end{center}
\end{table}
\subsection{The BSP model}
In this work, we translate the central line indices of the
galaxies into stellar ages and metallicities via the BSP model of
Zhang et al. (2005b). This model provides high-resolution
(0.3\,\AA) absorption-line indices, defined in the Lick
Observatory Image Dissector Scanner (Lick/IDS) system, for an
extensive set of instantaneous-burst binary stellar populations
with binary interactions. Its stellar populations span an age
range of 1--15\,Gyr and a metallicity range of 0.004--0.03.
\section{Stellar ages and metallicities of elliptical galaxies}
\label{sect:data}
To determine the BSP-equivalent stellar ages and metallicities of
the elliptical galaxies, we use the H$\beta$ and [MgFe]
(Gonz\'{a}lez 1993) indices. The latter is calculated as $\rm
\sqrt{Mgb\times0.5\times(Fe5270+Fe5335)}$. Fig. 1 displays the
H$\beta$ and [MgFe] indices of the 71 elliptical galaxies overlaid
onto the theoretical calibration. The open and filled circles
represent ellipticals in low- and high-density environments,
respectively, and the error bars show the observational
uncertainties of the two indices.
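The [MgFe] combination above follows directly from the tabulated line strengths. As a minimal illustrative sketch (not part of the original analysis), in Python:

```python
def mgfe_index(mgb, fe_mean):
    """[MgFe] = sqrt(Mgb x <Fe>) with <Fe> = 0.5*(Fe5270 + Fe5335);
    all line strengths in Angstroms (Gonzalez 1993)."""
    return (mgb * fe_mean) ** 0.5

# NGC 0221 from Table 1: Mgb = 2.96 A, <Fe> = 2.75 A
print(mgfe_index(2.96, 2.75))  # about 2.85 A
```

The geometric mean of a Mg-dominated and an Fe-dominated index largely cancels the [$\alpha$/Fe] sensitivity, which is why [MgFe] is paired with H$\beta$ in the age--metallicity grids.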
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.1.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Line-strength indices of our sample elliptical galaxies in the BSP model,
measured in the central r$_e$/10 aperture. Solid lines represent constant age
and dashed lines constant metallicity. Open and filled circles represent low- and
high-density ellipticals, respectively. Error bars show the observational
uncertainties of the two indices.}
\label{Fig:1}
\end{center}
\end{figure}
The BSP-equivalent stellar age and metallicity of each elliptical
galaxy are determined by choosing the best-fitting (t, Z) in a
grid of stellar age (t) and metallicity (Z). The grid is
sufficiently fine, created by interpolating the BSP models at
intervals of $\Delta$t = 0.1\,Gyr and $\Delta$Z = 0.0001. To find
the best-fitting (t, Z), we employ a maximum-likelihood fitting
method: we obtain the best-fitting age and metallicity of each
galaxy by minimizing the function:
\begin{equation}
\rm \chi^{2}(t_{\it i},Z_{\it i})=(H\beta_{\it i}-H\beta_{o})^{2}
+({[MgFe]}_{\it i}-{[MgFe]}_{o})^{2},
\end{equation}
where H$\beta$$_{i}$ and [MgFe]$_{i}$ are the H$\beta$ and [MgFe]
indices corresponding to the \emph{i}th pair of stellar age and
metallicity in the BSP model, while H$\beta$$\rm _o$ and
[MgFe]$\rm _o$ are the two observed indices. Moreover, we
calculate the associated uncertainties of the best-fitting stellar
age and metallicity of each galaxy by searching for the
best-fitting (t, Z) for [H$\beta$$\rm _o$$-$error, [MgFe]$\rm
_o$], [H$\beta$$\rm _o$$+$error, [MgFe]$\rm _o$], [H$\beta$$\rm
_o$, [MgFe]$\rm _o$$-$error] and [H$\beta$$\rm _o$, [MgFe]$\rm
_o$$+$error] in turn, and then taking their deviations from the
best-fitting (t, Z) derived from [H$\beta$$\rm _o$, [MgFe]$\rm
_o$]. Although these four (t, Z) pairs do not describe perfectly
the total range of possible age and metallicity values inside the
1$\sigma$ error ellipse, the maximum deviations of stellar age and
metallicity provide a sufficient sampling of the uncertainties
involved when transposing errors from the H$\beta$--[MgFe] plane
to ages or metallicities (Denicol\'{o} et al. 2005). Therefore, in
this work, we take the maximum deviation as the associated
1$\sigma$ uncertainty of the stellar age and metallicity. The
stellar ages, metallicities and their 1$\sigma$ uncertainties for
the main sample galaxies are given in Table 2. The stellar ages of
these ellipticals range from 3.9\,Gyr to older than 15\,Gyr, and
the stellar metallicities span the range from 0.02 to above 0.03.
These ages do not vary as widely as those of Trager et al.
(2000a), whose result is 1.5--18\,Gyr; this should result from the
different stellar population synthesis model adopted in this
paper. We find that about 78\% of the elliptical galaxies have
stellar populations older than 8\,Gyr. The average stellar age of
the elliptical galaxies is 10.37\,Gyr and the average metallicity
is 0.0277; the average 1$\sigma$ uncertainties are 1.58\,Gyr and
0.0015, respectively.
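The grid search and the perturbation-based error estimate described above can be sketched as follows. This is a minimal illustration only: the analytic stand-in for the interpolated BSP index grid, its coefficients, and the (coarser) grid spacing are all hypothetical; the real grid is interpolated from the Zhang et al. (2005b) models at $\Delta$t = 0.1\,Gyr and $\Delta$Z = 0.0001.

```python
def best_fit(hb_obs, mgfe_obs, grid):
    """Pick the (t, Z) grid point minimizing
    chi^2 = (Hbeta_i - Hbeta_o)^2 + ([MgFe]_i - [MgFe]_o)^2  (Eq. 1)."""
    return min(grid, key=lambda tz: (grid[tz][0] - hb_obs) ** 2
                                    + (grid[tz][1] - mgfe_obs) ** 2)

def fit_with_errors(hb, mgfe, dhb, dmgfe, grid):
    """Refit with each index perturbed by its 1-sigma error and adopt
    the maximum deviation from the best (t, Z) as the uncertainty."""
    t0, z0 = best_fit(hb, mgfe, grid)
    fits = [best_fit(h, m, grid) for h, m in
            [(hb - dhb, mgfe), (hb + dhb, mgfe),
             (hb, mgfe - dmgfe), (hb, mgfe + dmgfe)]]
    dt = max(abs(t - t0) for t, _ in fits)
    dz = max(abs(z - z0) for _, z in fits)
    return (t0, z0), (dt, dz)

# Toy stand-in grid: (t [Gyr], Z) -> (Hbeta, [MgFe]); purely illustrative.
grid = {}
for ti in range(10, 151, 5):        # t = 1.0 ... 15.0 Gyr, step 0.5
    for zi in range(40, 301, 10):   # Z = 0.004 ... 0.030, step 0.001
        t, z = ti / 10.0, zi / 10000.0
        grid[(t, z)] = (4.0 - 0.15 * t - 20.0 * z,    # Hbeta falls with t, Z
                        1.0 + 0.10 * t + 100.0 * z)   # [MgFe] rises with t, Z

(t0, z0), (dt, dz) = fit_with_errors(2.4, 3.8, 0.06, 0.05, grid)
```

Because H$\beta$ is mainly age-sensitive and [MgFe] mainly metallicity-sensitive, the two observables pin down a unique (t, Z) on such a grid; the four perturbed refits then map the 1$\sigma$ index errors into age and metallicity uncertainties.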
\begin{table}[]
\caption[]{Stellar ages, metallicities and associated 1$\sigma$
uncertainties of 71 sample elliptical galaxies. The stellar ages
and their uncertainties are in Gyr. } \label{Tab:2}
\begin{center}\begin{tabular}{lrrlrr}
\hline\noalign{\smallskip}
\multicolumn {1} {l} {Name}& \multicolumn {1} {c} {Age}
&\multicolumn {1} {c} {Z}&\multicolumn {1} {l} {Name}&
\multicolumn {1} {c} {Age} &\multicolumn {1} {c} {Z}\\
\hline\noalign{\smallskip}
NGC 0221 & 4.3 $\pm$ 0.3 &0.0230 $\pm$ 0.0008 &NGC 2434 & 8.3 $\pm$ 5.9 &0.0234 $\pm$ 0.0041 \\
NGC 0315 & 9.0 $\pm$ 2.1 &\multicolumn{1}{l}{$\geq$0.03} &NGC 2986 &12.6 $\pm$ 2.0 &0.0284 $\pm$ 0.0011 \\
NGC 0507 & 9.2 $\pm$ 2.3 &0.0268 $\pm$ 0.0023 &NGC 3078 &13.9 $\pm$ 0.5 &\multicolumn{1}{l}{$\geq$0.03}\\
NGC 0547 &11.8 $\pm$ 2.7 &0.0286 $\pm$ 0.0025 &NGC 3923 & 9.0 $\pm$ 2.5 &\multicolumn{1}{l}{$\geq$0.03}\\
NGC 0636 & 7.8 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 5791 &$\geq$15 $\pm$ 3.2 &\multicolumn{1}{l}{$\geq$0.03}\\
NGC 0720 &11.2 $\pm$ 2.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 5903 &10.3 $\pm$ 4.1 &0.0276 $\pm$ 0.0040 \\
NGC 0821 &11.2 $\pm$ 1.7 &0.0280 $\pm$ 0.0012 &NGC 4261 &13.0 $\pm$ 0.3 &$\geq$0.03 $\pm$ 0.0004 \\
NGC 1453 &11.7 $\pm$ 0.3 &$\geq$0.03 $\pm$ 0.0006 &NGC 4374 &13.3 $\pm$ 0.6 &0.0256 $\pm$ 0.0004 \\
NGC 1600 &14.0 $\pm$ 2.0 &0.0286 $\pm$ 0.0014 &NGC 4472 &11.4 $\pm$ 2.6 &0.0296 $\pm$ 0.0032 \\
NGC 1700 & 6.0 $\pm$ 0.1 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4478 & 7.9 $\pm$ 0.7 &$\geq$0.03 $\pm$ 0.0017 \\
NGC 2300 &11.4 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4489 & 3.9 $\pm$ 0.3 &0.0254 $\pm$ 0.0012 \\
NGC 2778 & 8.6 $\pm$ 1.0 &$\geq$0.03 $\pm$ 0.0010 &NGC 4552 &14.3 $\pm$ 1.8 &0.0285 $\pm$ 0.0014 \\
NGC 3377 & 5.4 $\pm$ 0.4 &0.0283 $\pm$ 0.0011 &NGC 4697 & 9.3 $\pm$ 1.1 &0.0229 $\pm$ 0.0014 \\
NGC 3379 &11.5 $\pm$ 2.7 &0.0281 $\pm$ 0.0027 &NGC 7562 & 9.7 $\pm$ 1.7 &0.0283 $\pm$ 0.0017 \\
NGC 3608 & 9.6 $\pm$ 1.8 &0.0298 $\pm$ 0.0017 &NGC 7619 &13.0 $\pm$ 0.4 &$\geq$0.03 $\pm$ 0.0003 \\
NGC 3818 &11.2 $\pm$ 2.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 7626 &13.2 $\pm$ 0.8 &0.0274 $\pm$ 0.0007 \\
NGC 4278 &12.3 $\pm$ 2.2 &0.0258 $\pm$ 0.0023 &NGC 4839 &\multicolumn{1}{l}{$\geq$15.0}&0.0242 $\pm$ 0.0007 \\
NGC 5638 &11.3 $\pm$ 1.6 &0.0272 $\pm$ 0.0015 &NGC 4841A &12.6 $\pm$ 2.3 &0.0252 $\pm$ 0.0020 \\
NGC 5812 &11.3 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4926 &12.9 $\pm$ 2.1 &0.0246 $\pm$ 0.0018 \\
NGC 5813 &$\geq$15 $\pm$ 0.1 &0.0217 $\pm$ 0.0009 &IC 4051 &13.0 $\pm$ 0.6 &0.0285 $\pm$ 0.0009 \\
NGC 5831 & 8.0 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4860 &13.0 $\pm$ 2.0 &$\geq$0.03 $\pm$ 0.0018 \\
NGC 6127 &13.5 $\pm$ 1.1 &0.0268 $\pm$ 0.0019 &NGC 4923 & 9.8 $\pm$ 4.0 &0.0247 $\pm$ 0.0031 \\
NGC 6702 & 4.0 $\pm$ 0.0 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4840 &11.4 $\pm$ 0.5 &$\geq$0.03 $\pm$ 0.0012 \\
NGC 7052 &12.7 $\pm$ 2.0 &0.0278 $\pm$ 0.0010 &NGC 4869 &13.0 $\pm$ 2.0 &0.0272 $\pm$ 0.0027 \\
NGC 7454 & 5.3 $\pm$ 0.8 &0.0200 $\pm$ 0.0021 &NGC 4908 &12.4 $\pm$ 2.5 &0.0234 $\pm$ 0.0022 \\
NGC 7785 &11.4 $\pm$ 2.6 &0.0276 $\pm$ 0.0031 &IC 4045 &$\geq$15 $\pm$ 0.3 &0.0230 $\pm$ 0.0010 \\
ESO 107-04 & 4.6 $\pm$ 1.6 &$\geq$0.03 $\pm$ 0.0022 &NGC 4850 &12.8 $\pm$ 2.1 &0.0210 $\pm$ 0.0016 \\
ESO 148-17 & 4.5 $\pm$ 8.6 &0.0255 $\pm$ 0.0109 &NGC 4872 & 5.8 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03}\\
IC 4797 & 7.6 $\pm$ 6.3 &$\geq$0.03 $\pm$ 0.0067 &NGC 4957 & 8.6 $\pm$ 0.3 &0.0297 $\pm$ 0.0007 \\
NGC 0312 & 8.5 $\pm$ 1.6 &0.0246 $\pm$ 0.0048 &NGC 4952 & 9.4 $\pm$ 1.1 &0.0275 $\pm$ 0.0005 \\
NGC 0596 & 5.3 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &GMP 1990 &\multicolumn{1}{l}{$\geq$15.0}&0.0206 $\pm$ 0.0006 \\
NGC 0636 & 7.7 $\pm$ 4.4 &0.0299 $\pm$ 0.0056 &NGC 4827 &12.4 $\pm$ 1.2 &0.0268 $\pm$ 0.0012 \\
NGC 1052 &13.0 $\pm$ 0.0 &\multicolumn{1}{l}{$\geq$0.03} &NGC 4807 & 8.4 $\pm$ 0.8 &0.0274 $\pm$ 0.0023 \\
NGC 1395 &11.8 $\pm$ 0.2 &\multicolumn{1}{l}{$\geq$0.03} &ESO 185-54 &12.0 $\pm$ 2.0 &$\geq$0.03 $\pm$ 0.0012 \\
NGC 1407 &11.1 $\pm$ 2.1 &$\geq$0.03 $\pm$ 0.0015 &NGC 3224 & 4.4 $\pm$ 0.9 &\multicolumn{1}{l}{$\geq$0.03}\\
NGC 1549 & 8.5 $\pm$ 0.4 &0.0283 $\pm$ 0.0010 \\
\noalign{\smallskip}\hline
\end{tabular}\end{center}
\end{table}
\section{The test of the new hierarchical model}
\label{sect:test} The hierarchical formation of elliptical
galaxies has been simulated with many techniques, for example
N-body and semi-analytic simulations. Different models were
usually carried out in the framework of a cosmological model with
critical matter density and gave different predictions for the
stellar properties. By now, the cosmology used earlier has been
replaced by the $\Lambda$CDM scenario. Against this background, De
Lucia et al. (2006) constructed a new hierarchical formation model
of elliptical galaxies based on the $\Lambda$CDM cosmology and
studied how the star formation histories, ages and metallicities
of elliptical galaxies depend on environment and on stellar mass.
This model makes several specific predictions. Firstly, it
predicts that the populations of ellipticals in high-density
environments are older, more metal rich and redder than those of
field ellipticals. Secondly, it predicts that the most massive
elliptical galaxies have the oldest and most metal rich stellar
populations. Thirdly, it predicts that stellar age, metallicity
and galaxy color increase with increasing stellar mass. Fourthly,
the stellar masses, ages, metallicities and colors of cluster
elliptical galaxies are predicted to decrease on average with
increasing distance from the cluster center. In addition, the
model quantified the effective progenitors of ellipticals as a
function of present stellar mass and thereby predicted the
``down-sizing'' or ``anti-hierarchical'' character of the star
formation histories of ellipticals in a $\Lambda$CDM universe.
This is an important result, because if the model is right we will
understand the formation of elliptical galaxies much better. It is
therefore necessary to test this model, and it is important to
take binary interaction into account in the test, because more
than half of all stars are in binaries. The detailed tests are as
follows.
\subsection{Stellar age, metallicity and galaxy color variation with environments}
A basic prediction of the hierarchical galaxy formation picture is
that the stellar populations of more massive galaxies are on
average older than those of less massive galaxies (e.g. Kauffmann
1996). This is also predicted by the model of De Lucia et al.
(2006). Furthermore, De Lucia et al.'s model predicts that
galaxies in denser environments have more metal rich and redder
populations than field ellipticals. These properties are
attributed to the fact that high-density regions form from the
highest peaks in the primordial field of density fluctuations.
Here we test these predictions with our data.
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.2.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Stellar age distribution of low- and high-density elliptical
galaxies. The dashed and solid lines show the distribution of low- and
high-density ellipticals, respectively.}
\label{Fig:2}
\end{center}
\end{figure}
In Fig. 2, we show the stellar age distributions of both low- and
high-density ellipticals. The dashed lines represent the stellar
age distribution of ellipticals in low-density environments and
the solid lines that of high-density ellipticals. We see that the
stellar populations of high-density ellipticals are indeed older
than those of their low-density counterparts: on average, the
stellar populations of high-density ellipticals are 1.47\,Gyr
older than those of low-density ellipticals.
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.3.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{ Stellar metallicity distribution of low- and high-density elliptical
galaxies. The dashed and solid lines have the same meanings as in Fig. 2. }
\label{Fig:3}
\end{center}
\end{figure}
In Fig. 3, we show the stellar metallicity distributions of low-
and high-density ellipticals. As we see, the plot fails to show
that the stellar populations of high-density ellipticals are more
metal rich than those of low-density ellipticals. This trend is
also clearly visible in Fig. 1, where the ellipticals in
high-density environments (filled circles) actually lie in a
lower-metallicity region than the field ellipticals.
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.4.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{ $B-V$ distribution of low- and high-density elliptical
galaxies. The dashed and solid lines have the same meanings as in Fig. 2.}
\label{Fig:4}
\end{center}
\end{figure}
When we study the $B-V$ color distributions of the two types of
elliptical galaxies, the result is consistent with the De Lucia et
al. (2006) model (see Fig. 4 for more detail). On average,
ellipticals in high-density environments are about 0.02 mag redder
than those in low-density environments.
\subsection{Stellar age, metallicity and galaxy color variation with stellar mass}
The most important result and prediction of the De Lucia model is
that the most massive elliptical galaxies have the oldest and most
metal rich stellar populations. In addition, the model predicts
that stellar age, metallicity and galaxy color increase with
increasing stellar mass. We test these predictions in Figs 5, 6
and 7, respectively. Here the stellar masses of the elliptical
galaxies are calculated with the fitting function suggested by
Thomas et al. (2005):
\begin{equation}
\log(M_{\ast}/M_{\odot}) = 0.63 + 4.52\,\log\,[\sigma_{\rm 0}/({\rm km\ s^{-1}})],
\end{equation}
where $M_\ast$ is the stellar mass and $\sigma_{\rm 0}$ is the
central velocity dispersion. According to previous studies, there
is usually a relation between the mass and luminosity of
elliptical galaxies, e.g. $(M/L)_B$=(5.93$\pm$0.25)$h_{50}$ (van
der Marel 1991); that is, more luminous elliptical galaxies are
more massive. The absolute magnitude is therefore often used as an
indicator of the stellar mass of galaxies (e.g. Terlevich \&
Forbes 2002). To check the reliability of the mass estimates
above, we compare the stellar masses calculated from the fitting
function with the absolute B-band magnitudes of these galaxies,
taken from the HyperLeda database
(http://www.brera.mi.astro.it/hypercat/). We find that more
luminous elliptical galaxies indeed have larger stellar masses, so
that, as a whole, the stellar masses estimated from the fitting
function represent the real stellar masses of the elliptical
galaxies well.
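As a quick numerical sketch of Eq. (2), using the Thomas et al. (2005) coefficients quoted above (illustrative only):

```python
import math

def stellar_mass(sigma0):
    """M*/Msun from the central velocity dispersion sigma0 [km/s]:
    log(M*/Msun) = 0.63 + 4.52 log(sigma0)  (Thomas et al. 2005)."""
    return 10.0 ** (0.63 + 4.52 * math.log10(sigma0))

# NGC 0221 (sigma0 = 72.1 km/s) versus NGC 0315 (321.0 km/s), from Table 1
print(f"{stellar_mass(72.1):.2e}")   # about 1.1e9 Msun
print(f"{stellar_mass(321.0):.2e}")  # about 9e11 Msun
```

The steep slope (4.52) means the factor of ~4.5 in velocity dispersion across the sample translates into roughly three decades in estimated stellar mass.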
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.5.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Stellar age -- mass relation of the 71 elliptical galaxies. Filled
squares with error bars are the predictions of the galaxy formation model.
Arrows mark galaxies whose stellar populations are possibly older than the
maximum age of the BSP model (15\,Gyr). The solid line is a linear
least-squares fit to the data. Open and filled circles represent low- and
high-density ellipticals, respectively.}
\label{Fig:5}
\end{center}
\end{figure}
In Fig. 5, stellar age is plotted as a function of stellar mass.
The filled squares with error bars are the look-back times and
stellar masses predicted by the De Lucia model; the look-back time
of a galaxy is the time corresponding to the redshift at which 50
percent of its stars had formed. Open and filled circles with
arrows show galaxies whose stellar ages are possibly older than
15\,Gyr (the maximum age of the BSP model). A trend is easy to
see: more massive ellipticals have older stellar populations, and
the most massive galaxies have the oldest stars. However, the
variation of stellar age with stellar mass differs from the model
prediction. This is perhaps caused by the somewhat different
definitions of look-back time and stellar age: the stellar age in
the simple BSP model corresponds to the redshift at which all
stars formed at the same time. When we fit the relation between
stellar age and stellar mass, we find a linear relation, age =
3.115\,log($M_\ast$/M$_\odot$) $-$ 24.147, with a correlation
coefficient of 0.656.
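The least-squares fit and correlation coefficient can be reproduced schematically as follows. The six ($\sigma_0$, age) pairs below are read from Tables 1 and 2 purely for illustration, so the fitted numbers will not match the full-sample values of 3.115 and 0.656:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares fit y = a*x + b plus Pearson correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx, sxy / math.sqrt(sxx * syy)

# (sigma0 [km/s], BSP age [Gyr]) for six galaxies from Tables 1 and 2
sample = [(72.1, 4.3), (47.2, 3.9), (107.6, 5.4),
          (314.8, 14.0), (321.0, 9.0), (251.8, 14.3)]
logm = [0.63 + 4.52 * math.log10(s) for s, _ in sample]  # Eq. (2)
ages = [t for _, t in sample]
slope, intercept, r = linfit(logm, ages)  # positive slope; r ~ 0.87 here
```

Even for this small subset the slope is positive, reflecting the age--mass trend in Fig. 5; the smaller full-sample correlation (0.656) reflects the larger scatter of the 71 galaxies.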
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.6.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Stellar metallicity -- mass relation of our sample
elliptical galaxies. Galaxies with stellar metallicities larger than
the maximum metallicity available in the BSP model (0.03) are marked with arrows.
The open pentagons with dashed error bars represent the predictions of the De Lucia model and
the filled pentagons with solid error bars represent the De Lucia model
metallicities increased by 0.074. The open and filled circles have the
same meanings as in Fig. 5.}
\label{Fig:metallicity-mass}
\end{center}
\end{figure}
In Fig. 6, we show the stellar metallicity -- stellar mass
relation of the main sample elliptical galaxies. The open and
filled circles with arrows mark ellipticals whose populations are
possibly more metal rich than 0.03 (the maximum metallicity of the
BSP model). The open pentagons with dashed error bars represent
the predictions of the De Lucia model. From this plot, we see that all
galaxies have higher metallicities than the De Lucia model
predicts. However, if we take into account only ellipticals with stellar
metallicities below 0.03, our data follow a trend
similar to the prediction of the De Lucia model,
which can be obtained by adding 0.074 (the difference between the
maximum metallicity of the De Lucia model and that of the BSP model)
to each stellar metallicity predicted by the De Lucia model. We plot
this trend as filled pentagons with solid error bars, which can be
seen clearly in Fig. 6. This means that our data show the same trend
as the De Lucia model prediction. In fact, the result is
possibly limited by the theoretical models, so the difference
between our data and the prediction of the galaxy formation model is
understandable.
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.7.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{$B-V$ versus stellar mass relation of the sample elliptical galaxies.
The stellar masses are calculated in this work. Filled squares with error
bars represent the values predicted by the model. Open and filled
circles have the same meanings as in Fig. 5.}
\label{Fig:color-mass}
\end{center}
\end{figure}
The relation between galaxy color and stellar mass is plotted in
Fig. 7. The stellar masses are calculated in this work using Eq.
(2). Filled circles in the plot represent ellipticals in
high-density environments and open circles represent field
ellipticals. Filled squares with error bars represent the color
versus stellar mass relation predicted by the model. Our data
agree with the relation predicted by the model very well.
\subsection{Stellar age, metallicity, mass and color variation with cluster-centric distance}
The De Lucia model predicts a clear trend driven by mass
segregation and incomplete mixing of the galaxy population during
cluster assembly. According to this prediction, within
clusters the stellar masses, ages, metal abundances and galaxy colors
should decrease on average with increasing distance from the
cluster center. To test these trends, we select 11 member
elliptical galaxies of the Fornax cluster and determine their stellar
ages and metallicities from the H$\beta$ and [MgFe] line indices
within r$_{e}$/8. The line indices of these galaxies are taken
from Kuntschner (2000), and their coordinates and $B-V$ colors are
taken from the HyperLeda database
(http://www.brera.mi.astro.it/hypercat/). The main
data of the 11 ellipticals are listed in Table 3. Note that we use
the angular distance to the central galaxy NGC 1399 (a
well-studied galaxy, e.g. Loewenstein et al. 2005) instead of the real
cluster-centric distance for each elliptical galaxy, because it is
difficult to determine accurate distances for galaxies with
uncertain peculiar velocities. The main results are shown
in Figs 8, 9 and 10.
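For reference, the angular separation used here in place of the physical cluster-centric distance follows directly from the equatorial coordinates via the spherical law of cosines; the coordinates in the example below are rough placeholders, not values tied to Table 3:

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Angular separation (degrees) between two sky positions.

    All inputs are in degrees; the spherical law of cosines is used.
    """
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Placeholder coordinates (degrees) for a central galaxy and a neighbour.
sep_arcmin = 60.0 * angular_separation_deg(54.62, -35.45, 54.72, -35.59)
print(f"separation = {sep_arcmin:.2f} arcmin")
```

For separations this small, a flat-sky approximation would give nearly the same answer, but the exact formula costs nothing extra.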
\begin{table}[]
\caption[]{Main data for 11 component elliptical galaxies of the
Fornax cluster. In the table, log$M$$_\ast$ is the logarithm of
stellar mass and $\theta$$_{1399}$ is the angular distance to NGC
1399. } \label{Tab:3}
\begin{center}\begin{tabular}{lrrrrr}
\hline\noalign{\smallskip}
\multicolumn {1} {l} {Name}& \multicolumn {1} {c}
{log($M$$_\ast$/\it M$_\odot$\rm)} &\multicolumn {1} {c}
{$\theta$$_{1399}$}&\multicolumn {1} {c} {$B-V$}&
\multicolumn {1} {c} {Age} &\multicolumn {1} {c} {Z}\\
&&\multicolumn {1} {c}{[arcmin]}&\multicolumn {1}
{c}{[mag]}&\multicolumn {1} {c}{[Gyr]}&\\
\hline\noalign{\smallskip}
NGC 1336 & 9.5886 &0.324 &0.809 &14.6 $\pm$ 4.9 &0.0179 $\pm$ 0.0029 \\
NGC 1339 &10.5695 &3.318 &0.903 &13.8 $\pm$ 2.4 &0.0270 $\pm$ 0.0030 \\
NGC 1351 &10.5559 &0.636 &0.844 &14.8 $\pm$ 2.9 &0.0235 $\pm$ 0.0039 \\
NGC 1373 & 9.1050 &0.294 &0.832 & 8.6 $\pm$ 1.9 &0.0212 $\pm$ 0.0038 \\
NGC 1374 &10.8768 &0.240 &0.894 &11.8 $\pm$ 2.3 &0.0299 $\pm$ 0.0041 \\
NGC 1379 &10.1853 &0.036 &0.866 & 9.8 $\pm$ 4.7 &0.0241 $\pm$ 0.0039 \\
NGC 1399 &12.2645 &0 &0.934 &14.1 $\pm$ 0.9 &\multicolumn{1}{l}{$\geq$0.03} \\
NGC 1404 &11.5458 &0.150 &0.941 &$\geq$15 $\pm$ 3.0 &\multicolumn{1}{l}{$\geq$0.03} \\
NGC 1419 & 9.9774 &2.160 &0.863 &14.9 $\pm$ 2.8 &0.0159 $\pm$ 0.0034 \\
NGC 1427 &10.7684 &0.084 &0.885 &11.1 $\pm$ 3.0 &0.0262 $\pm$ 0.0034 \\
IC 2006 &10.2757 &0.588 &0.896 &13.7 $\pm$ 1.0 &0.0294 $\pm$ 0.0010 \\
\noalign{\smallskip}\hline
\end{tabular}\end{center}
\end{table}
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.8.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Elliptical galaxies in the (Z, $\theta$$_{1399}$) plane.
Z and $\theta$$_{1399}$ are the stellar metallicity and the angular distance to
NGC 1399, respectively.}
\label{Fig:Z-distance}
\end{center}
\end{figure}
In Fig. 8, stellar metallicity is plotted as a function of the
angular distance to NGC 1399 ($\theta$$_{1399}$). No clear trend
is apparent over the whole angular distance range, but within a
smaller range, e.g. 2.5\, arcmin, the stellar metallicity seems to
decrease with increasing angular distance.
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.9.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Elliptical galaxies in the ($B-V$, $\theta$$_{1399}$) plane.
$\theta$$_{1399}$ has the same meaning as in Fig. 8.}
\label{Fig:color-distance}
\end{center}
\end{figure}
\begin{figure}
\vspace{2mm}
\begin{center}
\hspace{3mm}\psfig{figure=Fig.10.ps,width=80mm,height=60mm,angle=270.0}
\parbox{180mm}{{\vspace{2mm} }}
\caption{Elliptical galaxies in the [log(M$_{\rm star}$/M{$_\odot$}), $\theta$$_{1399}$] plane.
log(M$_{\rm star}$/M{$_\odot$}) is the stellar mass of a galaxy and $\theta$$_{1399}$ has the same meaning as in Fig. 8.}
\label{Fig:mass-distance}
\end{center}
\end{figure}
The relation between $B-V$ color and angular distance is shown in Fig.
9, and the relation between stellar mass and angular distance in Fig.
10. Neither plot shows clear support for, or opposition to, the
model. In addition, the trend between stellar age and angular
distance, which we do not show here, seems almost random.
In Figs 8, 9 and 10, only 3 galaxies lie at angular distances
greater than 0.6\, arcmin; the scarcity of elliptical galaxies
beyond 0.6\, arcmin must affect all trends relating to
cluster-centric distance. Furthermore, the small sample of
elliptical galaxies we used may also affect the results.
\section{Discussion and conclusion}
\label{sect:discussion}
We determined the stellar ages and metallicities of about 80
elliptical galaxies using the BSP model of Zhang et al. (2005b)
and tested the latest formation model of elliptical galaxies (De
Lucia 2006) for the first time. We find that elliptical galaxies
have stellar populations about 10\, Gyr old and more metal rich
than 0.02 (see also Zhou et al. 1992).
When we analyse our data, we find that the stellar populations of
elliptical galaxies in high-density environments are about 1.5\,
Gyr older and 0.001 less metal rich than those of field
elliptical galaxies. We also find that elliptical galaxies in
high-density environments are about 0.02 mag redder than field
ellipticals. Furthermore, we find that more massive ellipticals
are redder and have older and more metal rich stellar populations
than less massive ones. It also seems that the most massive
ellipticals have the oldest and most metal rich populations.
However, elliptical galaxies in low-density environments show more
metal rich stellar populations than their high-density
counterparts. In fact, this trend is completely opposite to the
prediction of the De Lucia et al. model. When we test the
variation of stellar mass, age, metallicity and galaxy color with
cluster-centric distance, the results show neither clear support
for nor opposition to the model, and they seem to be affected by
our use of the angular distance instead of the cluster-centric
distance and by the small elliptical galaxy sample. Therefore, the
results derived from the BSP model support the $\Lambda$CDM-based
hierarchical model of elliptical galaxy formation, except for the
metallicity distribution with environment and the variation of
stellar properties with cluster-centric distance. However, the
clear conflict between our result and the prediction of the model,
i.e. that low-density elliptical galaxies have more metal rich
populations than high-density elliptical galaxies, deserves
further attention.
\begin{acknowledgements}
We thank the HyperLeda team for supplying us with the photometry of
galaxies on the internet: http://www.brera.mi.astro.it/hypercat/.
We also thank Prof.~Xu Zhou and Prof.~Tinggui Wang for some useful
discussions. This work is supported by the Chinese National
Science Foundation (Grant Nos 10433030, 10521001 and 10303006),
the Chinese Academy of Science (No. KJX2-SW-T06) and Yunnan
Natural Science Foundation (Grant No. 2005A0035Q).
\end{acknowledgements}
\section*{Acknowledgement} We would like to thank Huy Ha, Zeyi Liu, and Mandi Zhao for their helpful feedback and fruitful discussions. This work was supported in part by NSF Awards 2037101 and 2132519, and Toyota Research Institute. Dr. Gan was supported by the DARPA MCS program and gift funding from MERL, Cisco, and Amazon. We would like to thank Google for the UR5 robot hardware. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.
\bibliographystyle{plainnat}
\section{Appendix}
\label{sec:appendix}
\section{Evaluation}
\label{sec:evaluation}
In this section, we first evaluate the cutting performance of our proposed method through comparison with various baselines and variants in simulated environments. We further validate our approach by conducting experiments on a real-world setup. Additionally, we conduct ablation studies to evaluate the contribution of each component of our system to its overall performance.
\subsection{Policy Evaluation in Simulation}
\vspace{-2.5mm}\paragraph{Dataset and Experimental Setup}
We generate 300 and 100 multi-material objects for training and evaluation, respectively. Each object comprises a rigid core surrounded by soft material. The soft material is simulated as elastoplastic material using the following parameters: $\lambda=1388.89Pa$, $\mu=2083.33 Pa$, $\sigma=200 Pa$, $\rho=10^3kg/m^3$, where $\lambda$ and $\mu$ are Lamé parameters, $\sigma$ the yield stress, and $\rho$ the density.
The contour of the cores used in training is parameterized by a cubic spline with 3 equally spaced nodes, with 3 degrees of freedom in total. The horizontal coordinate for each node is sampled from a uniform distribution with range [$-0.035m$, $0.035m$].
We subsequently concatenate a fixed back contour and extrude this 2D polygon into a 3D mesh. The 100 cores used in the evaluation are divided into two sets: 50 cores with the same distribution as the training cores and 50 out-of-distribution cores, which consist of 5 categories with 10 cores each: (1) 2 nodes, (2) 4 nodes, (3) triangle, (4) rectangle, (5) ellipse. Examples of the training and testing cores are shown in Fig. \ref{fig:sim_core}.
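The core-generation procedure can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a single polynomial through the three nodes stands in for the cubic spline, and the uniform vertical node spacing is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_core_contour(n_nodes=3, offset_range=0.035, n_dense=50):
    """Sample a random front contour for a rigid core.

    Three equally spaced nodes receive horizontal offsets drawn
    uniformly from [-offset_range, offset_range]; a smooth curve
    through the nodes (an exact interpolating polynomial here,
    standing in for the paper's cubic spline) is densified into
    n_dense (x, y) points.
    """
    y_nodes = np.linspace(0.0, 1.0, n_nodes)            # vertical positions
    x_nodes = rng.uniform(-offset_range, offset_range, n_nodes)
    coeffs = np.polyfit(y_nodes, x_nodes, deg=n_nodes - 1)  # passes through nodes
    y_dense = np.linspace(0.0, 1.0, n_dense)
    x_dense = np.polyval(coeffs, y_dense)
    return np.column_stack([x_dense, y_dense])

contour = sample_core_contour()
print(contour.shape)   # one (x, y) row per densified point
```

In the actual pipeline this 2D contour would be closed with the fixed back contour and extruded into a 3D mesh.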
\vspace{-2.5mm}\paragraph{Implementation Details}
For trajectory optimization in the demonstration collection process described in Sec. \ref{method:trajectory_optimization}, we use $\eta_{col} = 2\text{e}4$ and $\eta_e=0.15$. We generate one expert trajectory for each core in the training set, where optimizing each trajectory takes 300 gradient-based updates using the Adam optimizer \cite{adam} with a learning rate of $1\text{e}-2$. Examples of the optimization process are shown in Fig. \ref{fig:optimization_process}.
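The gradient-based refinement can be illustrated with a toy version of this loop, using the stated weights ($\eta_{col}=2\text{e}4$, $\eta_e=0.15$) and update schedule. The quadratic loss terms below are stand-ins for the simulator's differentiable objectives, and plain gradient steps stand in for Adam:

```python
import numpy as np

eta_col, eta_e = 2e4, 0.15   # collision and energy weights from the paper
lr, n_steps = 1e-2, 300      # learning rate and update count from the paper

# Toy decision variables: signed clearance of the knife from a flat core
# face at 10 waypoints; target > 0 means "hug the face from outside".
target = np.full(10, 0.02)
x = np.zeros(10)

def grad(x):
    # Gradient of: sum (x - t)^2            (lost cut mass, toy stand-in)
    #            + eta_col * sum relu(x-t)^2 (penetration penalty)
    #            + eta_e   * sum diff(x)^2   (energy / smoothness)
    g = 2.0 * (x - target)
    g += eta_col * 2.0 * np.maximum(0.0, x - target)
    d = np.diff(x)
    g[:-1] -= eta_e * 2.0 * d
    g[1:] += eta_e * 2.0 * d
    return g

for _ in range(n_steps):     # plain gradient steps stand in for Adam
    x -= lr * grad(x)
```

With these weights the waypoints converge to just under the target clearance: the large `eta_col` keeps the trajectory out of the core while the mass term pulls it as close as possible.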
The state estimation network has an input and output size of 256$\times$256. The classification threshold ($S_{thr}$) is set to 0.3. The training data is generated by randomly sampling 0 to 9 collision points. In the adaptive cutting policy, the number of retraction steps after a collision ($R_{dis}$) is set to 8, and the tolerance increment ($\tau^+$) is set to 0.005. To achieve a smooth trajectory, the tolerance value is linearly decayed to 0 within 5 steps after increasing. Both networks are implemented in PyTorch \cite{paszke2019pytorch} and trained using the Adam optimizer with a learning rate of $1\text{e}-4$ and a weight decay of $1\text{e}-6$. A comprehensive ablation study of some critical parameters is presented in Sec. \ref{sec:ablation}.
\vspace{-2.5mm}\paragraph{Metrics.}
We use the following metrics to evaluate the cutting performance:
\begin{itemize}[leftmargin=5mm]
\item {Completion Rate}. To measure the completion rate of each cutting task, we consider an execution as ``completed'' if the knife reaches the chopping board and as ``failed'' if either the number of collisions exceeds 10 or the energy loss value for any single step exceeds 3.0, as defined in Sec. \ref{method:trajectory_optimization}. An episode is terminated as soon as it is considered ``failed'', and all subsequent metrics are evaluated considering only the action sequence executed before the termination.
\item {Cut Mass Ratio}. This metric measures the ratio of the removed mass to the total mass of the soft material originally attached to the rigid core. In case of failed execution, the cut mass only considers the soft material on the right side of the cutting trajectory executed till termination.
\item {Collision Ratio}. To account for variations in trajectory length, we normalize the number of collisions by the length of each trajectory.
\item {Avg / Max Energy}. We also evaluate the energy consumption averaged over all steps, as well as the maximum energy consumption incurred at a single step during the whole trajectory.
\end{itemize}
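A minimal bookkeeping sketch of these metrics, using the thresholds stated above (at most 10 collisions, 3.0 energy per step); the episode values in the example are invented placeholders:

```python
def evaluate_episode(reached_board, step_energies, n_collisions,
                     cut_mass, total_soft_mass, traj_length):
    """Compute the evaluation metrics for one cutting episode.

    An episode counts as completed only if the knife reached the board,
    no single step used more than 3.0 energy, and there were at most
    10 collisions.
    """
    failed = (n_collisions > 10) or any(e > 3.0 for e in step_energies)
    completed = reached_board and not failed
    return {
        "completed": completed,
        "cut_mass_ratio": cut_mass / total_soft_mass,
        "collision_ratio": n_collisions / traj_length,  # length-normalized
        "avg_energy": sum(step_energies) / len(step_energies),
        "max_energy": max(step_energies),
    }

# Hypothetical episode: 40 steps, 2 collisions, 85% of the soft mass removed.
m = evaluate_episode(True, [0.4] * 39 + [1.2], 2, 0.85, 1.0, 40)
```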
\vspace{-2.5mm}\paragraph{Baselines.}
We compare our proposed system to the following alternative approaches:
\begin{itemize}[leftmargin=5mm]
\item RL: A model-free reinforcement learning policy operating within the same observation and action space, as well as using the same collision-retraction mechanism as our method.
This policy is trained using Soft Actor-Critic (SAC) \cite{haarnoja2018soft}, using a dense reward function given by $\mathcal{R} = \mathcal{L}^t_m - \mathcal{L}^{t-1}_m - \eta (\mathcal{L}^{t}_{e} - \mathcal{L}^{t-1}_{e})$, where $\mathcal{L}^t_m$ and $\mathcal{L}^t_e$ are the cumulative cut mass and energy consumption till step $t$, respectively. Upon collision, the cumulative cut mass will decrease due to automatic retraction, resulting in a negative reward, which naturally functions as a collision penalty. The goal of the RL algorithm is to maximize the cumulative reward, which is equivalent to maximizing the total cut mass and meanwhile minimizing the total energy consumption in our setting. Through a grid search over the energy weight $\eta$, we found the best value to be $\eta=0.05$. We train the policy with 3 random seeds and report the best performance.
\item NN. This approach uses a nearest neighbor cutting policy in place of our adaptive cutting policy. At each step, it retrieves the most similar core from the training set based on the current state estimation result, and executes the closest action step selected from the associated demonstration trajectory based on the current knife pose.
\item Greedy. This approach adopts a heuristic policy instead of our adaptive cutting policy. It determines the movement direction and knife rotation at each step in a greedy manner with the aim of maximizing the cut mass while avoiding collision with the estimated core. The primary difference between this method and ours is that it does not factor in energy consumption.
\item Non-Adaptive. This is a non-adaptive variant of our system which uses a fixed tolerance value of 0.
\end{itemize}
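The dense reward used by the RL baseline is a difference of cumulative quantities between consecutive steps; a direct transcription (with placeholder cumulative values) looks like this:

```python
def dense_reward(cut_mass_now, cut_mass_prev, energy_now, energy_prev,
                 eta=0.05):
    """R = (L_m^t - L_m^{t-1}) - eta * (L_e^t - L_e^{t-1}).

    cut_mass_* are cumulative cut masses and energy_* cumulative energy
    consumption; eta = 0.05 is the best energy weight from grid search.
    A collision triggers automatic retraction, which lowers the
    cumulative cut mass and hence yields a negative reward.
    """
    return (cut_mass_now - cut_mass_prev) - eta * (energy_now - energy_prev)

# Normal step: some mass cut, a little energy spent.
r1 = dense_reward(0.30, 0.25, 2.0, 1.6)
# Collision step: retraction undoes cut mass, so the reward is negative.
r2 = dense_reward(0.22, 0.25, 2.6, 2.0)
```

Maximizing the sum of these per-step rewards is equivalent to maximizing total cut mass while penalizing total energy, as stated above.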
\subsection{Results and Analysis}
Qualitative results are summarized in Fig. \ref{fig:cut_comparison_sim} and Fig. \ref{fig:cut_comparison_sim_ood}. Quantitative evaluations are reported in Tab. \ref{tab:simulation_result}.
\vspace{-2.5mm}\paragraph{Comparison to model-free RL.} The cutting task is successfully completed by [RL] in over 50\% of the cases. However, the resulting cut mass is significantly inferior compared to that of [RoboNinja]. As demonstrated in Fig. \ref{fig:cut_comparison_sim} and \ref{fig:cut_comparison_sim_ood}, the conservative cutting trajectories of [RL] maintain a significant distance from the bottom part of the core. In contrast, [RoboNinja] leverages explicit core estimation to follow the contour of the core, resulting in a significantly higher cut mass. Moreover, the cutting trajectories of [RL] display a noticeable level of jitter, deviating from ideal human-like behavior. Additionally, the knife could get stuck occasionally due to the lack of an explicit adaptive mechanism.
\vspace{-2.5mm}\paragraph{Comparison to Greedy.} Compared to [Greedy], [RoboNinja] with a learning-based cutting policy achieves around +44\% improvement in terms of completion rate. While [Greedy] is able to strictly follow the contour of the estimated geometry and maximize the cut mass, the policy does not consider energy consumption. This may result in physically implausible actions, such as abrupt knife rotations.
In contrast, our cutting policy, which imitates trajectories optimized with the energy consumption objective, is able to sacrifice a small amount of cut mass in exchange for much lower energy consumption. This is evident in the energy consumption values in Tab. \ref{tab:simulation_result}, where the average and maximum energy consumption of [RoboNinja] are 40\% and 60\% less than those of [Greedy].
\vspace{-2.5mm}\paragraph{Comparison to NN.} [NN] directly leverages actions from the demonstrations, which reduces the chance of exceeding the energy limit. However, the performance of [NN] relies heavily on the retrieved nearest-neighbor trajectory, making it susceptible to any errors in the state estimation result.
As shown in Fig. \ref{fig:cut_comparison_sim}, in [NN]'s last step, a slight mismatch between the actual core and the retrieved one results in a collision. Afterward, if the state estimate does not change enough after the collision, the policy gets stuck, since the retrieved nearest neighbor remains the same. In contrast, [RoboNinja] adapts its cutting policy to be more conservative when the state estimation is inaccurate, thereby avoiding getting stuck after collisions.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/cut_optimization.pdf}
\caption{\textbf{Trajectories optimized with different weights of energy loss.} Cumulative energy consumption and the cut mass ratio are shown in red and blue, respectively. Without any energy penalty, the trajectory strictly follows the contour of the core and cuts off most of the soft material. However, the knife has to rotate rapidly to avoid collision with the core, which results in large energy consumption. In contrast, a large energy penalty leads to a conservative policy where the rotation of the knife is less noticeable. An appropriate energy weight achieves a balance between energy consumption and cut mass. The one used in our final system [0.15] is able to cut off over 90\% with similar energy consumption as [0.6].}
\vspace{3mm}
\label{fig:cut_optimization}
\vspace{1mm}
\centering
{\footnotesize
\setlength\tabcolsep{9pt}
\begin{tabular}{l|cccc}
\toprule
Energy Weight ($\eta_e$) &0& 0.05& 0.15& 0.6 \\
\midrule
Cut Mass Ratio $\uparrow$ & 0.977 & 0.955 & 0.923 & 0.854 \\
Energy Consumption $\downarrow$ & 53.08 & 36.20 & 32.96 & 30.79 \\
\bottomrule
\end{tabular}
\captionof{table}{\textbf{Effects of different energy weights} \label{tab:optimization_result}}
\vspace{-5.5mm}
}
\end{figure}
\vspace{-2.5mm}\paragraph{Effect of the adaptive cutting policy.} As shown in Fig. \ref{fig:cut_comparison_sim}, [Non-Adaptive] behaves similarly to [NN] and suffers from getting stuck at the same location. Thanks to the adaptability of [RoboNinja], the policy successfully bypasses the peaks of the core with a higher tolerance, which improves the completion rate by over +54\%.
\vspace{-2.5mm}\paragraph{Generalization to novel geometries.}
Despite being trained on a single type of core geometry, our policy is able to handle cores with novel geometries thanks to its adaptive cutting policy. Triangles and rectangles are particularly challenging, as they contain straight edges and sharp corners not present in the training data. The qualitative comparison in Fig. \ref{fig:cut_comparison_sim_ood} shows that other baselines may fail due to exceeding energy consumption ([Greedy]) or getting stuck at one collision location ([RL], [NN], and [Non-Adaptive]). In contrast, [RoboNinja] can iteratively update state estimation after the collision and adjust the cutting policy to avoid getting stuck, achieving a balance between energy consumption and the cut mass.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/cut_ablation.pdf}
\caption{\textbf{Ablations on algorithm parameters.} [Left] summarizes the effects of several critical hyperparameters in the algorithm. Detailed discussion about their effects and trade-off can be found in Sec. \ref{sec:ablation}. [Right-Up] State estimation results with different thresholds. [Right-Bottom] Cutting trajectories with different tolerance increments.}
\vspace{-2mm}
\label{fig:ablation}
\end{figure}
\begin{figure}[t]
\centering
{
\footnotesize
\setlength\tabcolsep{2.5pt}
\begin{tabular}{l|ccc|ccc}
\toprule
& \multicolumn{3}{c|}{In-distribution Geometries} & \multicolumn{3}{c}{Out-of-distribution Geometries} \\
& COMP$\uparrow$ & Cut M$\uparrow$ & COLL$\downarrow$ & COMP$\uparrow$ & Cut M$\uparrow$ & COLL$\downarrow$ \\
\midrule
Non-Adaptive & 0.125 & 0.489 & 0.049 & 0.200 & 0.438 & 0.048 \\
RoboNinja & \textbf{1.000} & \textbf{0.877} & \textbf{0.027} & \textbf{1.000} & \textbf{0.824} & \textbf{0.028} \\
\bottomrule
\end{tabular}
\captionof{table}{\textbf{Cutting performance of real-world evaluation.} Here COMP, Cut M, and COLL represent Completion Rate, Cut Mass Ratio, and Collision Ratio respectively.}
\label{tab:real_result}
}
\vspace{2mm}
\centering
\includegraphics[width=0.98\linewidth]{figures/cut_comparison_real.pdf}
\caption{\textbf{Real-world comparison}. [Non-Adaptive] has the drawback of getting stuck even when the state estimation is very close. In contrast, [RoboNinja] leverages an adaptive cutting policy that allows it to bypass the peak and successfully reach the bottom.}
\vspace{-5mm}
\label{fig:cut_comparison_real}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\linewidth]{figures/cut_fruit.pdf}
\caption{\textbf{Evaluation on real fruits}. The left part illustrates the cutting execution on both an avocado and a mango, with an initial rotation angle of 0\degree, -45\degree, and 45\degree, respectively. The last column displays the final state after one cut and all three cuts.}
\label{fig:cut_fruit}
\centering
\includegraphics[width=0.98\linewidth]{figures/cut_real_sand.pdf}
\caption{\textbf{Real-world evaluation on 3D-printed cores and kinetic sand.} Our simulation-trained policy demonstrates strong generalization capabilities, effectively handling both in-distribution and out-of-distribution cores in a real-world setting. With only a few collisions, it is able to accurately estimate the core geometry and cut off the majority of the sand with a smooth cutting trajectory.}
\vspace{-5mm}
\label{fig:cut_real_per_iter}
\end{figure*}
\subsection{Ablation Studies \label{sec:ablation}}
The following experiments study the effects of a few critical parameters and design choices. The experimental results are summarized in Fig. \ref{fig:cut_optimization} and Fig. \ref{fig:ablation}.
\textbf{Energy weight ($\eta_{e}$).} First, we validate the effect of the energy penalty on trajectory optimization. The qualitative results in Fig. \ref{fig:cut_optimization} show that trajectories optimized with no energy penalty strictly follow the contour of the core, but the knife rotation changes frequently to avoid collisions between the knife spine and the core. In contrast, a large energy weight leads to over-conservative cutting behavior. An appropriate energy weight should strike a good balance between energy consumption and cut mass. The one used in our final system (0.15) cuts off over 90\% of the soft material while consuming a comparable amount of energy to the run with a large energy weight.
\textbf{State estimation threshold ($S_{thr}$).} We evaluate our policy with state estimation thresholds ranging from 0.15 to 0.5. Fig. \ref{fig:ablation} (b) indicates that the estimation results are consistent in the area to the left of the core boundary estimated from collision signals. However, varying the threshold leads to a visible difference in the unexplored area. A small threshold implies a larger estimated geometry, resulting in a more conservative policy with both fewer collisions and less cut mass (blue line in Fig. \ref{fig:ablation} (a)). We choose 0.3 in our system to balance the number of collisions against the cut mass ratio.
\textbf{Tolerance increment ($\tau^+$).} Fig. \ref{fig:ablation} (c) shows cutting trajectories on the same core with different tolerance values. An aggressive cutting policy with a small tolerance increases the cut mass but also the chance of collisions. As for the tolerance adaptation strategy, the green line in Fig. \ref{fig:ablation} (a) shows that a large tolerance increment $\tau^+$ results in a conservative policy with fewer collisions and a smaller cut mass.
\textbf{Retract distance ($R_{dis}$).} Another important design choice is the retract distance, which determines the number of retraction steps after a collision. The yellow line in Fig. \ref{fig:ablation} (a) shows a different trend: from 18 down to 8, the cut mass ratio increases significantly at the cost of a small increase in the number of collisions, but such benefits no longer exist beyond that. Therefore, 8 is selected as the retract distance in the final system.
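Putting the collision-retraction and tolerance-adaptation rules together ($R_{dis}=8$, $\tau^+=0.005$, linear decay to 0 within 5 steps), a schematic scheduler might look as follows; the exact decay bookkeeping is our reading of the text, not the authors' code:

```python
R_DIS = 8         # retraction steps executed after a collision
TAU_PLUS = 0.005  # tolerance increment on collision
DECAY_STEPS = 5   # tolerance decays linearly back to 0 over this many steps

class ToleranceScheduler:
    """Track the safety tolerance added around the estimated core."""

    def __init__(self):
        self.tau = 0.0
        self.decay_per_step = 0.0

    def on_collision(self):
        # Become more conservative and request R_DIS retraction steps.
        self.tau += TAU_PLUS
        self.decay_per_step = self.tau / DECAY_STEPS
        return R_DIS

    def on_step(self):
        # Linearly relax the tolerance toward 0 for a smooth trajectory.
        self.tau = max(0.0, self.tau - self.decay_per_step)
        return self.tau

sched = ToleranceScheduler()
sched.on_collision()
taus = [sched.on_step() for _ in range(6)]   # monotonically decaying to 0
```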
\subsection{Evaluation on a Real-world Setup}
We directly evaluate the trained model on a real-world platform, where a UR5 robot is equipped with a knife and a force sensor described in Sec. \ref{method:real_setup}.
\vspace{-2.5mm}\paragraph{3D printed cores.} In Fig. \ref{fig:real_core}, we 3D print 8 in-distribution and 5 out-of-distribution geometries from the test set. As the soft material, we use kinetic sand as a proxy because of its stable physical properties. Although a 1-DoF force sensor is sufficient for collision detection, it cannot reflect the energy consumption caused by abrupt rotations. Hence, in our evaluations, we only consider the first three metrics, which are \textit{completion rate}, \textit{cut mass ratio}, and \textit{collision ratio}. For the sake of safety, we exclude [Greedy] from real-world evaluations and select [Non-Adaptive] for comparison. The termination criteria for the real-world evaluation are reaching the bottom (completed) or more than 10 collisions (failed). Following the same criteria as in simulation, if the execution fails, the soft material to the right of the knife's trajectory is considered to be cut off. The quantitative results in Tab. \ref{tab:real_result} and the qualitative comparison in Fig. \ref{fig:cut_comparison_real} show that [Non-Adaptive] gets stuck in most cases. In contrast, [RoboNinja] is able to bypass the core after a few collisions and complete all test cases. The more detailed qualitative results in Fig. \ref{fig:cut_real_per_iter} demonstrate that [RoboNinja] is able to accurately estimate core geometry from sparse collision signals alone, and to cut the soft part off cores with both in-distribution and out-of-distribution geometries in real-world scenarios.
\vspace{-2.5mm}\paragraph{Fruits.} We then evaluate our model on real fruits, including avocados, mangos, and plums. To better resemble real-world scenarios, we execute the same policy multiple times with different initial rotation angles about the $y$-axis to cut off more soft material from different directions. Additionally, we augment our vertical cutting trajectory with an additional horizontal and repetitive back-and-forth slicing primitive to effectively cut through fiber-rich material, such as mango\footnote{Note that since the force required to create the initial opening on the mango skin exceeds the force limit of our robot, we had to manually remove a small piece of skin at the top.} and plum skin. Qualitative results on an avocado and a mango are shown in Fig. \ref{fig:cut_fruit}, demonstrating that our model trained with procedurally generated geometries is able to generalize to various fruit cores in the real world. Additionally, the ability to extend the policy allows it to cut off soft material from different directions, making it more practical in the real world.
\vspace{-2.5mm}\paragraph{Meat.} We also evaluate our cutting policy on real bone-in meat, specifically oxtail. To resemble real-world situations more closely, we employ a bimanual setup, where one arm with a parallel-jaw gripper (WSG50) holds the bone and the other arm performs the cutting action. As raw meat is difficult to cut even with the back-and-forth slicing primitive due to the high collagen content inside tendons, we use cooked meat in this experiment. Qualitative results in Fig. \ref{fig:real_meat} demonstrate RoboNinja's strong generalization ability in meat-cutting scenarios with novel bone geometry and soft material properties.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/cut_meat.pdf}
\vspace{-5.5mm}
\caption{\textbf{Evaluation on real meat.} We deploy our cutting policy on a bimanual setup, with one arm using a parallel-jaw gripper to hold the bone and the other arm using a knife to cut the meat. The cutting trajectory and core estimation are shown at the bottom.}
\vspace{-5.5mm}
\label{fig:real_meat}
\end{figure}
\subsection{Limitations and Potential Improvements}
There are several limitations and potential improvements of our system:
(a) The knife may exhibit visible deformations (i.e., bending) in real-world scenarios, causing deviations in behavior compared to simulation and disrupting the accuracy of state estimation due to misleading collision positions. This could be addressed by incorporating more accurate knife modeling in our simulator.
(b) In this work, we consider the scenario where the object being cut is secured using an external fixture, and our primary objective is to optimize the trajectory of the cutting tool. In practice, cutting is a bimanual problem, which typically involves coordination between both arms, one to hold and reorient the object and the other to execute the cutting. In the meat-cutting scenario, we attempt to employ such a bimanual setup; however, this setup uses one arm to fix the meat bone and does not fully consider the coordination required between both arms. In addition, human counterparts use a dexterous hand with a soft exterior to firmly hold the object without damaging it, rather than a rigid gripper. Developing such hardware for a dual-arm robotic system, along with the coordinated control policies to perform cutting more efficiently and safely is a promising direction.
\section{Conclusion}
\label{sec:conclusion}
We introduce RoboNinja, a robotic system for cutting multi-material objects. The system utilizes demonstrations collected in our newly developed differentiable simulator to train an iterative state estimator and an adaptive cutting policy. It enables the robot to cut soft material off the rigid core while optimizing for both collision occurrences and energy consumption. We also present a low-cost real-world cutting system with real-time force feedback and collision detection, which we use to validate our proposed method on a real robotic setup. Our experiments show that our method is able to generalize well to novel core geometries and even real fruits. We hope our experimental findings and the newly developed simulator can inspire future work on robot learning involving interactions with multi-material objects.
\section{Introduction}
\label{sec:introduction}
Imagine slicing a piece of avocado from its seed (Fig. \ref{fig:teaser}) -- we need to carefully slice through the soft outer flesh to locate the rigid seed and then follow the contours of the seed to maximize the volume of the slice. In some cases, we would need to switch the cutting trajectory when the knife collides with the seed.
All of these maneuvers must be performed while adhering to the physical constraints of the knife and its interactions with both the soft and rigid parts of the avocado.
This simple yet intricate task illustrates the challenges of cutting multi-material objects, which is significantly more difficult than cutting through single-material objects, a task that can often be accomplished with an open-loop cutting trajectory \cite{heiden2021disect, long2013robotic, sharma2019learning, mu2019robotic,zhou2006cutting1, zhou2006cutting2}.
In this paper, we are interested in enabling robots to effectively and efficiently perform this task.
Using the above example, we could summarize the unique capabilities required for acquiring such a skill:
\begin{itemize}[leftmargin=5mm]
\item \textbf{Multi-objective optimization under complex physical constraints.} To perform the task, the system needs to simultaneously optimize several objectives -- maximizing the total yield (cut-off mass of the soft material), avoiding collisions with the rigid core, and minimizing energy consumption. Many of these objectives require comprehensive physical reasoning beyond simple geometry analysis.
\item \textbf{Interactive state estimation for extreme partial observability.} In most cases, the rigid core is not observable on the surface. Hence, the system must continuously estimate the location and geometry of the core through interaction, that is, through cutting and sensing collisions. This state estimator continuously informs the cutting policy in a closed-loop manner.
\item \textbf{Adaptive policy for out-of-distribution scenarios.} While the state estimator could infer the core geometry based on contacts, there will always be instances where the shape of the core falls outside the training distribution. An inaccurate estimation could lead to repetitive collisions in the same location. In these scenarios, the cutting policy needs to ``adaptively'' update its cutting strategies to avoid getting stuck.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/cut_teaser.pdf}
\caption{\textbf{RoboNinja} is designed to cut multi-material objects with an interactive state estimator and adaptive cutting policy. Left: When the knife encounters a collision with the invisible core, the algorithm updates the core estimation and re-plans the cutting trajectory after a few retracting actions. Right: We deploy the learned model on a physical robot, allowing it to cut fruits in a way that maximizes the cut-off mass while minimizing collision occurrences.}
\label{fig:teaser}
\vspace{-5mm}
\end{figure}
As the first step towards enabling this new robot capability, we introduce \textbf{RoboNinja}, a learning-based cutting system that combines an interactive state estimator and an adaptive cutting policy.
The interactive state estimator uses sparse contact information to iteratively estimate the position and shape of the core.
The cutting policy, optimized to increase cut-off yield and reduce collision occurrences and energy consumption, produces cutting actions in a closed-loop manner, based on the estimated state and a tolerance value. The tolerance value is a function of past collision events and actively controls the policy conservativeness when encountering new collisions (e.g., keeping a distance from the estimated core location). This adaptivity is critical for handling out-of-distribution scenarios where the state estimation could be inaccurate.
Learning such cutting skills directly on a real-world robot system is challenging and potentially dangerous. However, existing simulators in the literature are limited in simulating multi-material objects, especially the coupling between rigid and soft bodies under forceful manipulations such as cutting. Therefore, we develop a new differentiable simulator for the proposed multi-material object cutting problem, allowing us to use gradient-based optimization for generating trajectories as demonstrations for policy learning.
Finally, when deploying the learned policy, we demonstrate that with the simple collision feedback captured by a low-cost (less than \$10) force sensor, we can successfully transfer the model learned in the simulator directly to real-world scenarios, including out-of-distribution object geometries and materials, thanks to its adaptivity.
In summary, the primary contribution of this paper is RoboNinja~-- the first robotic system demonstrating the capability of multi-material object cutting. To build this system, we make the following technical advancements:
\begin{itemize}[leftmargin=5mm]
\item The formulation of the multi-material cutting task and a differentiable simulator that could perform multi-objective trajectory optimization for collecting demonstrations.
\item A learning-based cutting method with an \textit{interactive} state estimator and an \textit{adaptive} cutting policy.
\item The deployment of our method on a real-world robotic system with low-cost sensory feedback.
\end{itemize}
Videos of experiments, the code for the simulator and the cutting system, as well as the CAD models for the benchmark objects are available at \url{https://roboninja.cs.columbia.edu/}.
\section{Method}
\label{sec:method}
In this work, we study a multi-material object cutting task, where the goal is to manipulate a tool to cut off soft material from a rigid core while maximizing the yield (the total amount of soft material being removed), as well as minimizing the number of collisions with the core and the total energy consumption.
Considering the complex objectives and the real-time nature of this task, we employ the standard teacher-student framework: slow but accurate expert demonstrations act as the teacher (optimized with the differentiable simulator), and a lightweight learning-based policy, as the student, is trained to imitate the teacher's behavior and is deployed at inference time. Specifically, we first build a physics-based differentiable simulator supporting multi-material coupling to gather expert demonstrations via gradient-based trajectory optimization. Next, we train an interactive state estimation network to infer the position and the geometry of the core based on collected collision signals.
Afterward, a cutting policy is trained to generate actions based on the core estimation and the knife state. This policy not only imitates the behavior of the expert demonstrations but also adaptively adjusts the conservativeness to retract from the core after the collision.
An overview of the execution is illustrated in Fig. \ref{fig:method}. The robot first starts the cutting process by executing actions based on an initial estimation from a learned prior. Upon collisions with the core, the state estimation is immediately updated. The robot subsequently retracts a few steps and replans the cutting trajectory using the updated state estimation. In this process, the system progressively updates the estimated state, leading to an accurate estimate of the position and geometry of the invisible core after multiple collision events, which in turn allows a physically plausible cutting trajectory to cut off most of the soft material.
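The closed-loop execution described above can be sketched in Python. This is only a sketch: the interfaces `env`, `estimate_core`, and `policy` are hypothetical placeholders, not the actual RoboNinja implementation.

```python
def negate(action):
    """Reverse a translational/rotational action (dx, dy, dtheta)."""
    return tuple(-x for x in action)

def cutting_episode(env, estimate_core, policy, max_steps=50, retract_steps=3):
    """Closed-loop execution sketch: act from the current core estimate;
    on collision, update the estimate, retract along the recent action
    history, then continue with the re-planned trajectory."""
    collisions, history = [], []
    core = estimate_core(collisions)          # initial estimate from learned prior
    for _ in range(max_steps):
        action = policy(core, env.knife_pose(), tolerance=0.0)
        collided = env.step(action)
        history.append(action)
        if collided:
            collisions.append(env.knife_pose())   # record contact location
            core = estimate_core(collisions)      # refine with the new contact
            for a in reversed(history[-retract_steps:]):
                env.step(negate(a))               # retract a few steps
    return core, len(collisions)
```

A real deployment would additionally schedule the tolerance value after collisions; here it is held at zero for brevity.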
In the following sections, we first present our expert demonstration process in \S \ref{method:trajectory_optimization}, where we develop our differentiable cutting simulator and perform gradient-based trajectory optimization. We then detail the iterative state estimation in \S \ref{method:state_estimation} and the adaptive cutting policy in \S \ref{method:cutting_policy}. Finally, we present a hardware setup that includes a low-cost force sensor for deploying our policy in real-world scenarios (\S \ref{method:real_setup}).
\subsection{Multi-objective Trajectory Optimization with a Differentiable Simulation Environment}
\label{method:trajectory_optimization}
\vspace{-2.5mm}\paragraph{Differentiable cutting simulation environment} We build a cutting simulation environment to support the modeling of both soft and rigid materials, as well as the coupling between them. The soft material in our scene, e.g., the flesh of a fruit, is represented using an elastoplastic continuum model simulated with MLS-MPM \cite{hu2018moving}, with plasticity treated by the von Mises yield criterion. In contrast to FEM \cite{heiden2021disect}, MPM-based methods naturally support arbitrary deformation and topology change. For rigid bodies in the scene, including the rigid core, the knife, and the support surface (a chopping board), we represent them as time-varying signed distance fields (SDFs), converted from imported external meshes. We model the contact between soft and rigid materials by computing surface normals of the SDFs and applying Coulomb friction \citep{stomakhin2013material}. Material separation occurring during the cutting process is handled by MLS-MPM, which inherently supports modeling a sharp and clean split of material points in the soft material. Our simulator is implemented in Python and Taichi. For gradient computation, we use both Taichi's autodiff system for simple operations and analytical gradient computation for complex operations such as the SVD in deformation gradient updates. Additionally, in order to alleviate the computation bottleneck imposed by limited GPU memory, we implement efficient gradient checkpointing to allow gradient flow over long temporal horizons.
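As a minimal illustration of SDF-based contact handling, the sketch below computes signed distances and finite-difference surface normals. The spherical core is a simplifying assumption for brevity; the simulator itself converts arbitrary meshes to SDFs.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def contact_normals(points, center, radius, eps=1e-4):
    """Central-difference surface normals of the SDF; in the simulator such
    normals are used to apply Coulomb friction at soft-rigid contacts."""
    grads = np.zeros_like(points)
    for k in range(points.shape[-1]):
        dp = np.zeros(points.shape[-1])
        dp[k] = eps
        grads[..., k] = (sphere_sdf(points + dp, center, radius)
                         - sphere_sdf(points - dp, center, radius)) / (2 * eps)
    return grads / np.linalg.norm(grads, axis=-1, keepdims=True)
```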
\vspace{-2.5mm}\paragraph{Multi-objective trajectory optimization} At each step, we consider a cutting action parameterized by the knife orientation $\theta$ along the $z$ axis and a displacement $\delta \vec{p} = (\delta x, \delta y)$ within the vertical $x$-$y$ plane (see Fig. \ref{fig:method}). To further ensure the smoothness of the cutting trajectory, we impose an additional constraint on the magnitude of the displacement: $\|\delta \vec{p}\|=2\,\mathrm{mm}$. Since our differentiable simulator allows gradient-based trajectory optimization, we collect cutting trajectories in the simulator assuming access to all ground truth information, optimized using the following objectives:
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/energy_computation.pdf}
\caption{\textbf{Energy Calculation.} [Left] Illustration of energy computation in simulation. Particle position and velocity are changed due to collision with the knife. The cumulative work from the knife to each collided particle is considered as the energy consumption of the agent. [Middle] A plot of energy consumption at each step. [Right] Visualization of knife poses at some representative steps. Note that large energy consumption is incurred around rapid rotations in the trajectory (e.g., steps 1 and 3).}
\label{fig:energy_plot}
\end{figure}
\begin{itemize}[leftmargin=5mm]
\item \textit{Cut mass}: one primary goal of a cutting policy is to maximize the total amount of the soft material being removed by a cutting trajectory. This objective $\mathcal{L}_m$ is computed by accumulating the mass of all material points removed from the rigid core at the end of each episode.
\item \textit{Collision occurrences}: an ideal cutting trajectory should be able to avoid unnecessary collision events during its course. Our simulator represents both the core and the knife by time-varying SDFs. The collision between them is detected by sampling $N$ uniform points on the knife surface and checking whether they penetrate the core. In order to make this optimization process differentiable, we model the discontinuous contact between the knife and the core in a soft manner, following \cite{huang2021plasticinelab, xian2023fluidlab}, and compute a differentiable collision loss for optimization: $\mathcal{L}_{col} = \sum_{i=1}^N \|\max(d_i+\hat{d}, 0)\|^k$, where $d_i$ represents the penetration distance of the $i$-th sampled point, and $\hat{d}$ is an additional safety margin. In practice, we found $N=5$, $k=4$, and $\hat{d}=2\,\mathrm{cm}$ to be sufficient to produce good trajectories.
\item \textit{Energy consumption}: in order to produce a natural and smooth motion trajectory during a cutting process, our system also optimizes energy consumption. We estimate the energy loss $\mathcal{L}_e$ based on the work done by the knife during its motion at each step, which is computed by summing the product of the distance traveled and force experienced by each material point in contact with the knife: $\mathcal{L}_e = \sum_j \frac{m_j\Delta \vec{v}_j}{\Delta t} \cdot \Delta\vec{p}_j$, where $\Delta \vec{v}_j$ and $\Delta \vec{p}_j$ denote the change in velocity and position for each particle $j$, respectively. In Fig. \ref{fig:energy_plot}, we illustrate the energy consumption of each step in an example cutting episode.
\end{itemize}
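The two penalty terms above can be sketched numerically as follows. The default constants mirror the reported values ($\hat{d}=2\,\mathrm{cm}$, $k=4$); the penetration distances and particle quantities would come from the simulator.

```python
import numpy as np

def collision_loss(penetrations, d_hat=0.02, k=4):
    """Soft, differentiable collision penalty:
    L_col = sum_i max(d_i + d_hat, 0)^k, where d_i > 0 means penetration."""
    return np.sum(np.maximum(penetrations + d_hat, 0.0) ** k)

def energy_loss(masses, dv, dp, dt):
    """Work done by the knife on contacted particles:
    L_e = sum_j (m_j * dv_j / dt) . dp_j."""
    forces = masses[:, None] * dv / dt     # per-particle contact force
    return np.sum(forces * dp)             # elementwise product, then sum = dot
```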
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/cut_optimization_process.pdf}
\caption{ \textbf{Optimization Process of Cutting Trajectory.} As illustrated in the first column, the cutting trajectory is initialized with a predesigned collision-free path. After 300 optimization iterations, the knife is able to cut off most of the soft materials with optimized energy consumption.}
\vspace{-2mm}
\label{fig:optimization_process}
\end{figure}
During trajectory optimization, we initialize with a pre-designed, collision-free translational trajectory that traverses down from above the object until it touches the support surface, and optimize the trajectory loss given by the sum of the above objectives: $\mathcal{L}_{total} = \mathcal{L}_m + \eta_{col}\mathcal{L}_{col} + \eta_{e}\mathcal{L}_{e}$,
where $\eta_{col}$ and $\eta_{e}$ are coefficients for balancing different objectives. Examples of the optimization process are shown in Fig. \ref{fig:optimization_process}.
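A minimal sketch of the resulting optimization loop, assuming the differentiable simulator supplies the loss and its gradient with respect to the trajectory parameters. The learning rate, iteration count, and toy interface are assumptions, not the paper's settings.

```python
import numpy as np

def total_loss(L_m, L_col, L_e, eta_col=1.0, eta_e=0.1):
    """L_total = L_m + eta_col * L_col + eta_e * L_e (weights are assumptions)."""
    return L_m + eta_col * L_col + eta_e * L_e

def optimize_trajectory(traj, loss_and_grad, lr=0.1, iters=300):
    """Plain gradient descent on the trajectory parameters; in the paper,
    `loss_and_grad` would be supplied by the differentiable simulator."""
    for _ in range(iters):
        _, grad = loss_and_grad(traj)
        traj = traj - lr * grad            # gradient descent step
    return traj
```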
\subsection{Interactive State Estimation}
\label{method:state_estimation}
The purpose of the state estimation module is to determine the location and shape of the initially invisible core using collision signals collected during the cutting process.
As illustrated in Fig. \ref{fig:method}, the input is a sparse collision map within the $x-y$ plane, where each filled pixel represents a collision point encountered during the trajectory traveled until the current time step.
We employ an 11-layer U-Net architecture to estimate the 2D mask of the rigid core.
To facilitate offline training, we randomly select $k$ points on the contour of the training cores to mimic collision signals that might be received during actual execution. Here $k$ is an integer sampled uniformly from $[0, 9]$. The model is trained using the ground truth geometry as supervision and optimized with a binary cross-entropy loss. During the inference phase, a pre-determined threshold $S_{thr}$ is used to convert the predicted probability map to a binary core mask.
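The offline data-generation step can be sketched as follows. The circular core and the $64\times 64$ grid are simplifying assumptions for illustration; the actual training cores have varied geometry.

```python
import numpy as np

def make_training_sample(grid=64, rng=None):
    """Mimic collision signals offline: pick k ~ U{0..9} points on the
    contour of a (here circular) core and rasterize them into a sparse
    collision map; the ground-truth core mask is the supervision label."""
    rng = rng or np.random.default_rng()
    cx, cy, r = grid / 2, grid / 2, grid / 4
    yy, xx = np.mgrid[0:grid, 0:grid]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2    # ground-truth core
    k = rng.integers(0, 10)                             # k uniform in [0, 9]
    angles = rng.uniform(0, 2 * np.pi, size=k)
    collision_map = np.zeros((grid, grid), dtype=bool)
    px = np.clip((cx + r * np.cos(angles)).astype(int), 0, grid - 1)
    py = np.clip((cy + r * np.sin(angles)).astype(int), 0, grid - 1)
    collision_map[py, px] = True                        # sparse contour hits
    return collision_map, mask
```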
\subsection{Adaptive Cutting Policy}
\label{method:cutting_policy}
The goal of the cutting policy is to generate the cutting action at each step.
The policy network takes the signed distance field of the estimated core mask, the current knife pose, and an additional tolerance value $\tau_t$ as input. It predicts the cutting action $a_t$, parameterized by the translational $\delta \vec{p} = (\delta x, \delta y)$ and rotational movement $\delta\theta$ of the knife.
The tolerance value $\tau_t$ controls how conservative the action is. A lower tolerance value results in the knife getting closer to the estimated core, thus increasing the risk of collision. Conversely, a higher tolerance value leads to a more conservative action, keeping the knife farther from the core to prevent potential collisions. The tolerance is initialized to 0 at each step.
Upon a collision, the robot first retracts the knife for $R_{dis}$ steps based on the history of actions, then executes actions with the tolerance value increased by $\tau^{+}$ for $R_{dis}$ steps, and then linearly decays the tolerance $\tau$ afterwards to produce a smooth trajectory.
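The tolerance schedule just described can be sketched as follows; the concrete values of $\tau^{+}$, $R_{dis}$, and the decay length are illustrative assumptions.

```python
def tolerance_schedule(collision_steps, t, tau_plus=0.005, r_dis=5, decay_steps=20):
    """Tolerance at step t: zero by default; after a collision at step c,
    raised by tau_plus for r_dis steps, then linearly decayed back to zero.
    With several past collisions, the largest contribution wins."""
    tau = 0.0
    for c in collision_steps:
        since = t - c
        if 0 <= since < r_dis:
            tau = max(tau, tau_plus)                      # hold at tau_plus
        elif r_dis <= since < r_dis + decay_steps:
            frac = 1.0 - (since - r_dis) / decay_steps    # linear decay
            tau = max(tau, tau_plus * frac)
    return tau
```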
We use the demonstrations $\{D\}$ collected in the differentiable simulator to train our cutting policy. The tolerance value of the actions in the demonstrations is assumed to be 0, and we use data augmentation to generate training data for tolerance values $\tau$ larger than 0 as follows: for each demonstration $D_i=[s_0, a_0, s_1, a_1, \cdots, s_N, a_N, s_{N+1}]$, where $s_i$ and $a_i$ represent the knife pose and action at each step, we generate another knife trajectory $\hat S = [\hat s_0, \hat s_1, \cdots, \hat s_N, \hat s_{N+1}]$ by moving the current knife trajectory away from the core in the direction of the $x$-axis by $\tau$.
To increase training robustness, Gaussian noise is added to the action sequence.
Finally, the quadruple $(\tau, s^*_0, a_i, \hat s_{i+1})$, along with the core geometry is used for training, where $s^*_0$ is calculated analytically using $a_i$ and $\hat s_{i+1}$. The cutting policy is trained with Mean Squared Error (MSE) loss.
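The augmentation step can be sketched as below; this simplified version shifts poses along $+x$ by $\tau$ and optionally adds Gaussian noise, and the pose layout $(x, y, \theta)$ is an assumption for illustration.

```python
import numpy as np

def augment_demo(states, tau, noise_std=0.0, rng=None):
    """Shift a demonstration trajectory away from the core along +x by tau,
    optionally adding Gaussian noise, to create training data for
    tolerance values tau > 0 (sketch; each state is an (x, y, theta) pose)."""
    rng = rng or np.random.default_rng()
    shifted = np.array(states, dtype=float)
    shifted[:, 0] += tau                       # move away from the core along x
    if noise_std > 0:
        shifted += rng.normal(0.0, noise_std, size=shifted.shape)
    return shifted
```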
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/hardware.pdf}
\caption{ \textbf{Hardware.} We design and construct a compact cutting tool equipped with a force sensor. The force is measured by a strain gauge as an analog electrical signal. The signal is then converted to a digital signal and transmitted to the robot controller through an A/D converter and a Raspberry Pi Zero, respectively.}
\vspace{-4mm}
\label{fig:hardware}
\end{figure}
\subsection{Real-world System Setup}
\label{method:real_setup}
We design and build a low-cost, force feedback system for deploying our method on a real-world UR5 platform. Figure \ref{fig:hardware} shows an image and the exploded view of our hardware system.
A strain gauge load cell measures the shear force experienced by the connected knife as an analog signal. This signal is amplified and converted into digital readings by an HX711 A/D converter at 80 Hz. The digital measurement of the shear force is then processed by a Raspberry Pi Zero and transmitted to the robot controller via Wi-Fi. The entire system is powered by the UR5's tool I/O power, and all these components are compactly assembled inside a 3D-printed container. The load cell and the A/D converter are the core components of our hardware system for achieving real-time force feedback and are priced at \$8 in total\footnote{Amazon link: \url{https://www.amazon.com/gp/product/B08KRV8VYP}}, significantly cheaper than a force-torque sensor. In practice, the cutting friction of different soft materials varies greatly; thus, we manually set the threshold for each material when converting the continuous force signal into a binary collision signal.
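The conversion from the continuous force signal to binary collision events can be sketched as a simple rising-edge threshold detector (the threshold value is set per material, as noted above; the edge-detection logic is our illustrative assumption so that sustained contact counts once).

```python
def detect_collisions(force_readings, threshold):
    """Convert a stream of force readings (e.g., 80 Hz HX711 samples) into
    binary collision events: emit the sample index whenever the force
    rises above the per-material threshold."""
    events, above = [], False
    for i, f in enumerate(force_readings):
        if f > threshold and not above:
            events.append(i)        # collision begins at sample i
        above = f > threshold
    return events
```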
\section{Related Work}
\label{sec:related_work}
\subsection{Differentiable Physics Simulation for Policy Learning}
In recent years, a number of differentiable simulation environments have been proposed to accelerate policy learning. These include 1) simulators parameterized by neural networks \cite{li2019propagation, li2018learning, pfaff2020learning}, which have not been shown to accurately simulate the complex interactions between multi-phase materials required in cutting scenarios, and 2) analytical simulators implemented in a differentiable way, leveraging either automatic differentiation tools \cite{hu2019difftaichi} or analytical gradient computation rules \cite{hu2019chainqueen}. The latter are used to provide gradient information to accelerate policy search in various robotic tasks, including locomotion \cite{xu2022accelerated}, soft robot design \cite{wang2023softzoo}, soft body manipulation \cite{huang2021plasticinelab, xian2023fluidlab, qi2022dough}, etc. To handle realistic sensory input outside of the simulation environment, prior methods distill the knowledge learned in simulators into visuomotor policies that take images~\cite{lin2021diffskill} or point clouds~\cite{lin2022planning} as input. In contrast, the state of multi-material objects in our task, such as the shape of the rigid core inside, is not directly observable. As such, our method learns an interactive state estimation network based on encountered collision events to address this challenge.
\subsection{Simulation Environments for Cutting}
There has been a significant amount of research conducted on simulating the cutting process of different materials. Theoretical analysis is commonly applied in metal cutting \cite{merchant1945mechanics, childs2006friction} and brittle materials research \cite{griffith1921vi, miller1999modeling, zhou2005rate}. Besides, various numerical methods are utilized to simulate the fracture of deformable objects. They could be further classified into mesh-based methods, such as the Finite Element Method (FEM) \cite{heiden2021disect, areias2017steiner, koschier2014adaptive, wu2015survey, paulus2015virtual, jevrabkova2009stable}, and mesh-free methods, including Position-based Dynamics (PBD) \cite{pan2015real, berndt2017efficient} and Material Point Method (MPM) \cite{hu2018moving, wang2019simulation, wolper2019cd, wolper2020anisompm}.
The work most related to ours is ``DiSECt'' by Heiden et al. \cite{heiden2021disect}, where the FEM-based cutting simulator achieves differentiability through continuous contact formulation and damage modeling.
In contrast to these prior works that only simulate the cutting process of a single material, our differentiable simulator stands out by accounting for both the external soft material and the internal rigid core, utilizing MPM for its ability to accurately and efficiently simulate the dynamics of elastoplastic objects and the coupling between different materials.
\subsection{Interactive Perception}
Interactive perception (IP) \cite{bohg2017interactive} leverages physical interaction to gather information about the environment. It is commonly used for scene reconstruction with occluded objects \cite{xu2020learning, kenney2009interactive, schiebener2013integrating, schiebener2011segmentation} and kinematic object structure discovery \cite{jiang2022ditto, gadre2021act, nie2022sfa, lv2022sagci}. The integration of tactile signals becomes increasingly popular in this field, especially in material classification \cite{culbertson2014modeling, chu2015robotic}, object recognition \cite{liu2017recent, xu2022tandem, xu2022tandem3d, schmitz2014tactile, schneider2009object}, and shape reconstruction \cite{allen1990acquisition, bierbaum2008robust, matsubara2017active, lu2022curiosity}. Recently, Xu et al. \cite{xu2022tandem3d} recognize 3D objects with active tactile explorations. Wang et al. \cite{lu2022curiosity} present a curiosity-driven object reconstruction method using the modality of touch. These prior works only consider rigid or articulated objects. Our work advances the field of interactive perception by applying it to a new and challenging task: multi-material object cutting.
\subsection{Robotic Cutting}
Many robotic systems have been developed for cutting tasks in various domains such as meat \cite{long2013robotic}, vegetables \cite{sharma2019learning, mu2019robotic, zhou2006cutting1, zhou2006cutting2, sawhney2021playing}, and dough \cite{lin2021diffskill, lin2022planning}.
Researchers also utilize multimodal haptic sensory data to enhance system robustness \cite{zhang2019leveraging}. As an alternative to traditional cutting using knives, hot wire cutting tools \cite{duenser2020robocut, sondergaard2016robotic} are adopted in various sculpting and industrial applications.
Previous works mostly deal with single-material objects and study how to cut through them, for which open-loop orthogonal cutting is typically sufficient. In this work, we study the problem of cutting multi-material objects with the goal of removing soft material from an invisible rigid core, which requires control under physical constraints, such as avoiding collisions with the rigid core, and continuous state estimation from sensory feedback in a partially observable environment. Therefore, we equip our system with an interactive state estimation network and an adaptive policy to address this task.
\section{Introduction}
Recent years in gravitational physics have been exciting for experimental as well as theoretical physics. The first detection of gravitational waves from a binary black hole merger \cite{Abbot:2016} and from a binary neutron star merger \cite{Abbot:2017} by LIGO/VIRGO marks the starting point of the era of multimessenger astronomy. A few years later the Event Horizon Telescope produced the first picture of a black hole shadow \cite{EHT:2019}. A crucial ingredient for the calculation of the shadow of a black hole is sufficient knowledge about the geodesics of light in an axially symmetric spacetime. For example, for a spherically symmetric spacetime (e.g., the Schwarzschild spacetime) the size of the shadow is given by the size of the innermost circular photon orbit or photon sphere. \\
In four dimensions the event horizons of axisymmetric black holes that are asymptotically flat and obey the dominant energy condition are topologically spheres \cite{Hawking:1972, Hawking:1973}. If one relaxes one or more of these assumptions (e.g., asymptotically anti-de Sitter spacetimes), black hole horizons can have a different topology. One possibility is the Black Spindle spacetime found by Klemm in 2014 \cite{Klemm:2014} as a subclass of the Carter-Pleba\'{n}ski solution \cite{Carter:1968,Plebanski:1975}. The Black Spindle spacetime describes black holes with a noncompact event horizon of finite area. This results in a finite entropy. Topologically the event horizons are spheres with two punctures, and the spacetime represents the ultraspinning limit of the Kerr-Newman-AdS spacetime. This interesting type of solution was first obtained in
\cite{Gnecchi:2013mja} and immediately received much interest, in particular with respect to its thermodynamics (see e.g. \cite{Hennigar:2014cfa,Hennigar:2015cja,Hennigar:2015gan}). Subsequently, further types of black holes with noncompact event horizons of finite area were found: black bottles, whose event horizons are topologically spheres with a single puncture \cite{Chen:2016rjt}. \\
One of the most powerful tools to analyze the structure of a spacetime is its geodesics. The first to solve the geodesic equation for the Schwarzschild black hole was Hagihara \cite{Hagihara:1931}. He solved the equations of motion in terms of the elliptic Weierstra{\ss} $\wp$-function. Elliptic functions have been used to solve the equations of motion for various black hole and wormhole spacetimes \cite{Kagramanova:2010bk,Grunau:2010gd,Kagramanova:2012hw,Hackmann:2013pva,Cebeci:2016,Flathmann:2015,Chatterjee:2019}. When polynomials of higher order ($>4$) are involved in the geodesic equation, the solutions are based on the Jacobi inversion problem \cite{Hackmann:2008a,Hackmann:2008b}. These solutions have already been used for spacetimes whose horizons are not topologically spheres. In \cite{Grunau:2012a} the authors analyzed the geodesics of test particles and light in the Singly Spinning Black Ring spacetime and in \cite{Grunau:2013a} for the (Charged) Doubly Spinning Black Ring spacetime. In addition, the geodesic motion in the (rotating) black string spacetime has been studied in \cite{Grunau:2013b}. \\
\newpage
Therefore, in this article we study the geodesic motion of test particles and light in the Black Spindle spacetime \cite{Klemm:2014} to analyze the structure of the spacetime. The structure of the article is as follows. In Sec. \ref{sec:spacetime} we review the metric of the Black Spindle spacetime and analyze its properties. Sec. \ref{sec:eom} is devoted to the derivation of the equations of motion, and the analysis of possible orbit types can be found in Sec. \ref{sec:class}. The solutions of the equations of motion in terms of the Weierstra{\ss} $\wp$-, $\sigma$- and $\zeta$-functions and of the Kleinian $\sigma$-function are presented in Sec. \ref{sec:sol}. The visualization of the orbits can be found in Sec. \ref{sec:orbits}. We conclude the article in Sec. \ref{sec:conclusion}.
\section{The Black Spindle}\label{sec:spacetime}
The four-dimensional spacetime of a black spindle described by Klemm is given by the metric \cite{Klemm:2014}
\begin{equation}
{\mathrm{d}} s^2= -\frac{{\mathcal{Q}}}{p^2+q^2}\left({\mathrm{d}}\tau-{\mathrm{d}}\sigma\right)^2+\frac{{\mathcal{P}}}{p^2+q^2}\left({\mathrm{d}}\tau+{\mathrm{d}}\sigma\right)^2+\frac{p^2+q^2}{{\mathcal{Q}}}{\mathrm{d}} q^2+\frac{p^2+q^2}{{\mathcal{P}}}{\mathrm{d}} p^2 \,,
\label{eqn:metric}
\end{equation}
where
\begin{align}
{\mathcal{Q}} =&\left(l+\frac{q^2}{l}\right)^2+P^2+Q^2-2mq \\
{\mathcal{P}} =&\frac{\left(p^2-l^2\right)^2}{l^2}\,.
\label{eqn:metricfunctions}
\end{align}
Here $l^2=-\frac{3}{\Lambda}$ is related to the cosmological constant $\Lambda$, $m$ is the mass parameter and $Q$ and $P$ are the electric and magnetic charge of the black hole. For simplification we define the combined electromagnetic charge parameter $C^2=P^2+Q^2$.
The condition $q=p=0$ defines the location of the curvature singularity, whereas for ${\mathcal{Q}}=0$ we find up to two horizons $q_{\pm}$. To guarantee that the singularity is hidden behind the horizons (i.e., that the horizons are real valued), the condition \cite{Klemm:2014,Ling:2015}
\begin{equation}
m \geq 2q_{h,0}\left(\frac{q_{h,0}^2}{l^2}+1\right)\,,
\end{equation}
with
\begin{equation}
q_{h,0}^2=\frac{l^2}{3}\left(-1+\sqrt{4+\frac{3C^2}{l^2}}\right)
\end{equation}
has to be fulfilled. The ``$=$'' sign corresponds to an extremal black hole, where both horizons coincide. To make the spindle shape of the horizons visible, we embed the horizon geometry in cylindrical coordinates. By fixing $\tau=0$ and $q=q_{\pm}$, we can transform the metric in Eqn. \ref{eqn:metric} into
\begin{equation}\label{eqn:metric_embedd}
{\mathrm{d}} s^2=\frac{p^2+q_{\pm}^2}{{\mathcal{P}}}{\mathrm{d}} p^2+\frac{{\mathcal{P}} q_{\pm}^4}{p^2+q_{\pm}^2}{\mathrm{d}}\sigma^2\,.
\end{equation}
A standard transformation (see for example \cite{Ling:2015}) can be used to obtain the angular coordinate $\phi=-l\sigma$. In this case it is possible to write Eqn. \ref{eqn:metric_embedd} as
\begin{equation}
{\mathrm{d}} s^2={\mathrm{d}}\rho^2+\rho^2{\mathrm{d}}\phi^2+{\mathrm{d}} z^2 \,,
\end{equation}
with
\begin{equation*}
\rho^2=\frac{{\mathcal{P}} q_{\pm}^4}{l^2\left(p^2+q_{\pm}^2\right)}
\end{equation*}
and
\begin{equation*}
\left(\frac{{\mathrm{d}} z}{{\mathrm{d}} p}\right)^2 = \frac{p^2+q_{\pm}^2}{{\mathcal{P}}}-\left(\frac{{\mathrm{d}}\rho}{{\mathrm{d}} p}\right)^2\,.
\end{equation*}
We will solve this differential equation numerically. This procedure is often used for wormhole solutions (e.g. \cite{Willenborg:2018}). See Fig. \ref{pic:spindle} for an example plot.
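A minimal numerical sketch of this embedding procedure is given below. The sampling range in $p$ (a fraction of $l$, staying inside $0<p<l$) and the clipping of the square-root argument where the embedding would fail are pragmatic choices, not part of the original derivation.

```python
import numpy as np

def embed_horizon(q_h, l, n=400):
    """Numerically embed the horizon q = q_h in cylindrical coordinates:
    rho(p) from the metric, and z(p) from integrating
    (dz/dp)^2 = (p^2 + q_h^2)/P - (drho/dp)^2, with P = (p^2 - l^2)^2 / l^2."""
    p = np.linspace(0.05 * l, 0.9 * l, n)          # stay inside 0 < p < l
    P = (p ** 2 - l ** 2) ** 2 / l ** 2            # metric function P(p)
    rho = np.sqrt(P * q_h ** 4 / (l ** 2 * (p ** 2 + q_h ** 2)))
    drho_dp = np.gradient(rho, p)                  # finite-difference drho/dp
    arg = (p ** 2 + q_h ** 2) / P - drho_dp ** 2   # (dz/dp)^2
    dz_dp = np.sqrt(np.clip(arg, 0.0, None))       # clip where embedding fails
    dz = 0.5 * (dz_dp[1:] + dz_dp[:-1]) * np.diff(p)   # trapezoidal rule
    z = np.concatenate(([0.0], np.cumsum(dz)))
    return p, rho, z
```

Plotting $\rho$ against $z$ for $q_h=q_\pm$ reproduces the spindle profile shown in Fig. \ref{pic:spindle}.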
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{1.eps}
\caption{Embedding of the horizon in cylindrical coordinates.}
\label{pic:spindle}
\end{figure}
\section{The geodesic equations}\label{sec:eom}
The Hamilton-Jacobi equation
\begin{equation}
\frac{\partial S}{\partial \lambda}+\frac{1}{2}g^{\mu\nu}\frac{\partial S}{\partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}}=0
\end{equation}
can be solved with the ansatz
\begin{equation}
S=\frac{1}{2}\delta\lambda-E\tau+L\sigma+S_q(q)+S_p(p) \,.
\label{eqn:action}
\end{equation}
Here $E$ denotes the energy of the particle, $L$ its angular momentum and $\delta$ is either equal to $1$ for massive particles or vanishes for light. For massive particles, $\lambda$ is related to the proper time and for light it is an affine parameter along the geodesics. The Hamilton-Jacobi equation separates with the help of the Carter constant $K$ \cite{Carter:1968rr}. Therefore we derive the equations of motion for each coordinate
\begin{align}
\frac{{\mathrm{d}} q}{{\mathrm{d}}\gamma} &= \sqrt{X} \nonumber\\
\frac{{\mathrm{d}} p}{{\mathrm{d}}\gamma} &= \sqrt{Y} \nonumber\\
\frac{{\mathrm{d}}\sigma}{{\mathrm{d}}\gamma} &= \frac{L-Ep^2}{{\mathcal{P}}}-\frac{L-Eq^2}{{\mathcal{Q}}}\nonumber\\
\frac{{\mathrm{d}}\tau}{{\mathrm{d}}\gamma} &= \frac{Lp^2-Ep^4}{{\mathcal{P}}}+\frac{Eq^4+Lq^2}{{\mathcal{Q}}}\,.
\label{eqn:EOM}
\end{align}
For simplification we have defined the Mino time $\gamma$ \cite{Mino:2003yg} with ${\mathrm{d}}\lambda=\left(p^2+q^2\right){\mathrm{d}}\gamma$. The functions $X$ and $Y$ are polynomials of order six (order four for $\delta=0$) in $q$ and $p$, respectively,
\begin{align}
X(q) &= E^2q^4+2ELq^2-\left(\delta q^2+K\right){\mathcal{Q}}+L^2 \nonumber\\
Y(p) &= -E^2p^4+2ELp^2+{\mathcal{P}}\left(K-\delta p^2\right)-L^2 \,.
\label{eqn:XY}
\end{align}
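For a given set of parameters, the turning points of the $q$-motion can be obtained directly as the real zeros of $X$. A short numerical sketch (the closed form of ${\mathcal{Q}}$ is inferred from the horizon conditions of the preceding section and should be checked against its definition in the paper; parameter values are illustrative):

```python
import numpy as np

l, C, m = 1.0, 0.5, 1.847             # m close to m_0 + 0.1
K, E, L, delta = 4.0, 0.9, 3.0, 1.0   # particle parameters (delta = 1)

# inferred descending coefficients of Q(q) = q^4/l^2 + 2q^2 - 2mq + C^2 + l^2
Qc = np.array([1.0/l**2, 0.0, 2.0, -2.0*m, C**2 + l**2])

# X(q) = E^2 q^4 + 2EL q^2 - (delta q^2 + K) Q(q) + L^2
Xc = -np.convolve([delta, 0.0, K], Qc)          # -(delta q^2 + K) Q(q)
Xc[-5:] += np.array([E**2, 0.0, 2.0*E*L, 0.0, L**2])

roots = np.roots(Xc)
turning = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
# 'turning' contains the turning points q_i of the q-motion
```

The same construction with $Y$ yields the turning points of the $p$-motion.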
\section{Classification of the geodesics}\label{sec:class}
\subsection{Parametric diagrams}
To investigate the motion of particles and light in the black spindle spacetime, we have to determine the number of zeros of the polynomials $X$ and $Y$, which correspond to the turning points of the motion. The first tool we use is parametric diagrams. The number of zeros of $X$ and $Y$ changes if double roots appear. This is the case for
\begin{align}
X(q)&=0\quad \text{and}\quad \frac{{\mathrm{d}} X(q)}{{\mathrm{d}} q}=0\,, \nonumber \\
Y(p)&=0\quad \text{and}\quad \frac{{\mathrm{d}} Y(p)}{{\mathrm{d}} p}=0\,.
\end{align}
The combination of these four conditions can be found in Fig. \ref{pic:parametricdiagrams}.
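In practice, the region boundaries can also be traced numerically by counting the real zeros of $X$ over a grid in the $(L,E)$ plane; the count changes exactly across the double-root curves. A sketch (again with the inferred form of ${\mathcal{Q}}$, here for $\delta=0$; all numbers are illustrative):

```python
import numpy as np

def real_root_count(coeffs, tol=1e-7):
    """Number of real zeros (with multiplicity) of a real polynomial."""
    coeffs = np.trim_zeros(np.asarray(coeffs, dtype=float), 'f')
    if coeffs.size < 2:
        return 0
    r = np.roots(coeffs)
    return int(np.sum(np.abs(r.imag) < tol))

l, C, m, K, delta = 1.0, 0.5, 1.847, 4.0, 0.0
Qc = np.array([1.0/l**2, 0.0, 2.0, -2.0*m, C**2 + l**2])  # inferred Q(q)

def Xcoeffs(E, L):
    Xc = -np.convolve([delta, 0.0, K], Qc)
    Xc[-5:] += np.array([E**2, 0.0, 2.0*E*L, 0.0, L**2])
    return Xc

Es = np.linspace(0.1, 1.9, 13)        # keeps the leading coefficient nonzero
Ls = np.linspace(-3.0, 3.0, 13)
counts = np.array([[real_root_count(Xcoeffs(E, L)) for L in Ls]
                   for E in Es])
# jumps in 'counts' trace the double-root curves of the parametric diagram
```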
\begin{figure}[!ht]
\centering
\subfigure[$\delta=0$]{
\includegraphics[width=0.45\textwidth]{2a.eps}
}
\subfigure[$\delta=1$]{
\includegraphics[width=0.45\textwidth]{2b.eps}
}
\caption{Combined parametric $L-E$ diagrams for the $q$-motion (blue lines) and the $p$-motion (green lines) for $m=m_0+0.1$, $C=0.5$, $K=4$ and $l=1$. In the shaded area there is no geodesic motion possible because $Y(p)<0$ for all $p$.}
\label{pic:parametricdiagrams}
\end{figure}
The following regions can be found for the $q$-motion ($q_i<q_{i+1}$):
\begin{enumerate}[(I)]
\item $X(q)$ has no real zeros and $X(q)>0$. Only Transit orbits (TrO) are possible.
\item $X(q)$ has two positive zeros $q_1$ and $q_2$ with $X(q)\geq 0$ for $q\in[q_1,q_2]$. Only Many-world bound orbits (MBO) are possible.
\item $X(q)$ has one positive and one negative zero, where $X(q)\leq 0$ for $q\in[q_1,q_2]$. Possible orbits are Escape orbits (EO) and Two-world escape orbits (TEO).
\item $X(q)$ has one positive and one negative zero, where $X(q)\geq 0$ for $q\in[q_1,q_2]$. Only Crossover Many-world bound orbits (CMBO) are possible.
\item $X(q)$ has two negative zeros $q_1$ and $q_2$ with $X(q)\leq 0$ for $q\in[q_1,q_2]$. Possible orbits are EO and Crossover Two-world escape orbits (CTEO).
\item $X(q)$ has one negative $q_1$ and two positive zeros $q_2,q_3$ with $X(q)\leq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Here $q_2\leq q_-$ and $q_3\geq q_+$. EO and MBO are possible.
\item $X(q)$ has one negative $q_1$ and two positive zeros $q_2,q_3$ with $X(q)\leq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Here $q_2,q_3\leq q_-$. EO, BO and TEO are possible.
\item $X(q)$ has three negative $q_1,q_2,q_3$ and one positive zero $q_4$ with $X(q)\geq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Here negative BO and CMBO are possible.
\item $X(q)$ has two negative $q_1,q_2$ and two positive zeros $q_3,q_4$ with $X(q)\geq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Possible orbits are BO and MBO. The BO in this case has always $q\leq 0$.
\item $X(q)$ has one negative $q_1$ and three positive zeros $q_2,q_3,q_4$ with $X(q)\leq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Here $q_3\leq q_-$ and $q_4\geq q_+$. There exist a negative EO, an MBO and a positive EO.
\item $X(q)$ has one negative $q_1$ and three positive zeros $q_2,q_3,q_4$ with $X(q)\leq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. Here $q_3,q_4\leq q_-$. Possible orbits are EO, BO and TEO.
\item $X(q)$ has four positive zeros $q_1,q_2,q_3,q_4$ with $X(q)\geq 0$ for $q\in[q_1,q_2]$ and $q\in[q_3,q_4]$. BO and MBO are possible.
\item $X(q)$ has two negative $q_1,q_2$ and four positive zeros $q_3,q_4,q_5,q_6$ with $X(q)\geq 0$ for $q\in[q_1,q_2]$, $q\in[q_3,q_4]$ and $q\in[q_5,q_6]$. This region is only possible for $\delta=1$. Possible orbits are BO and MBO.
\end{enumerate}
\subsection{Effective potentials}
We can get more insight into the particle motion by rewriting the polynomial $X$ in the following way
\begin{align}
X(q) = q^4\left(E-V_+\right)\left(E-V_-\right)\,.
\end{align}
Then we can define the effective potentials $V_{\pm}$
\begin{align}
V_{\pm} &=-\frac{Ll\pm\sqrt{\left(\delta q^2+K\right)\left(C^2 l^2+l^4-2l^2mq+2l^2q^2+q^4\right)}}{q^2l} \,,
\end{align}
where now the zeros of the polynomial $X$ are given by the intersections of the potentials $V_{\pm}$ with the energy $E$. Fig. \ref{pic:potentials} shows example plots of the effective potentials to visualize the possible orbit combinations given in Tab. \ref{tab:orbit-types}.
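The factorization can be verified numerically. The following snippet checks $X=q^4(E-V_+)(E-V_-)$ at a sample point, with the radicand written as $(\delta q^2+K)(C^2l^2+l^4-2l^2mq+2l^2q^2+q^4)$ so that the identity holds term by term (parameter values are illustrative):

```python
import numpy as np

l, C, m = 1.0, 0.5, 1.847
K, L, delta = 4.0, 3.0, 0.0

def V(q, sign):
    # effective potentials V_+ (sign=+1) and V_- (sign=-1)
    rad = (delta*q**2 + K)*(C**2*l**2 + l**4 - 2*l**2*m*q
                            + 2*l**2*q**2 + q**4)
    return -(L*l + sign*np.sqrt(rad))/(q**2*l)

q, E = 1.3, 0.7
Qq = q**4/l**2 + 2*q**2 - 2*m*q + C**2 + l**2      # inferred Q(q)
X = E**2*q**4 + 2*E*L*q**2 - (delta*q**2 + K)*Qq + L**2
factored = q**4*(E - V(q, +1.0))*(E - V(q, -1.0))
# X equals 'factored', confirming that the zeros of X are the
# intersections of V_± with the energy level E
```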
\begin{figure}[!ht]
\centering
\subfigure[$\delta=0$, $C=0.5$, $l=1$, $m=m_0+0.1$, $K=4$ and $L=3$]{
\includegraphics[width=0.45\textwidth]{3a.eps}
}
\subfigure[$\delta=0$, $C=0.5$, $l=1$, $m=m_0+0.1$, $K=4$ and $L=-1.9$]{
\includegraphics[width=0.45\textwidth]{3b.eps}
}
\subfigure[$\delta=1$, $C=0.5$, $l=1$, $m=m_0+0.1$, $K=4$ and $L=3$]{
\includegraphics[width=0.45\textwidth]{3c.eps}
}
\subfigure[$\delta=1$, $C=0.5$, $l=1$, $m=m_0+0.1$, $K=4$ and $L=-1.9$]{
\includegraphics[width=0.45\textwidth]{3d.eps}
}
\caption{Effective potentials $V_{\pm}$ for the $q$-motion. The blue curve represents $V_+$ and the green curve $V_-$. The dashed vertical lines denote the position of the horizons. The red dashed lines are different energies for different orbit combinations. In the grey area $X(q)<0$; therefore, no geodesic motion is possible.}
\label{pic:potentials}
\end{figure}
To conclude the discussion of the possible orbits, we present a list of all possible orbit types in Tab. \ref{tab:orbit-types}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{|lccll|}\hline
type & zeros & region & range of $q$ & orbit \\
\hline\hline
A & 0 & I &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\end{pspicture}
& TrO
\\ \hline
B & 2 & II &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(0,0)(1.5,0)
\end{pspicture}
& MBO
\\ \hline
C & 2 & III &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-*}(-4,0)(-2.5,0)
\psline[linewidth=1.2pt]{*-}(0,0)(3.5,0)
\end{pspicture}
& EO, TEO
\\
C$_0$ & & &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-*}(-4,0)(-2.5,0)
\psline[linewidth=1.2pt]{*-}(-2,0)(3.5,0)
\end{pspicture}
& EO, TO
\\ \hline
D & 2 & IV &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(-2.5,0)(1.5,0)
\end{pspicture}
& CMBO
\\ \hline
E & 2 & V &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-*}(-4,0)(-3.5,0)
\psline[linewidth=1.2pt]{*-}(-2.5,0)(3.5,0)
\end{pspicture}
& EO, CTEO
\\ \hline
F & 4 & VIII &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(-2.5,0)(1.5,0)
\psline[linewidth=1.2pt]{*-*}(-3.5,0)(-3,0)
\end{pspicture}
& BO, CMBO
\\ \hline
G & 4 & IX &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(0,0)(1.5,0)
\psline[linewidth=1.2pt]{*-*}(-3.5,0)(-2.5,0)
\end{pspicture}
& BO, MBO
\\ \hline
H & 4 & X &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-*}(-4,0)(-2.5,0)
\psline[linewidth=1.2pt]{*-*}(0,0)(1.5,0)
\psline[linewidth=1.2pt]{*-}(2,0)(3.5,0)
\end{pspicture}
& EO, MBO, EO
\\ \hline
I & 4 & XI &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{-*}(-4,0)(-2.5,0)
\psline[linewidth=1.2pt]{*-*}(-1.5,0)(-0.5,0)
\psline[linewidth=1.2pt]{*-}(0,0)(3.5,0)
\end{pspicture}
& EO, BO, TEO
\\ \hline
J & 4 & XII &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(0,0)(1.5,0)
\psline[linewidth=1.2pt]{*-*}(2,0)(3,0)
\end{pspicture}
& MBO, BO
\\ \hline
K & 6 & XIII &
\begin{pspicture}(-4,-0.2)(3.5,0.2)
\psline[linewidth=0.5pt]{-}(-4,0)(3.5,0)
\pscircle[hatchcolor=white,fillstyle=solid](-2,0){0.075}
\psline[linewidth=0.5pt,doubleline=true](0.3,-0.2)(0.3,0.2)
\psline[linewidth=0.5pt,doubleline=true](1.2,-0.2)(1.2,0.2)
\psline[linewidth=1.2pt]{*-*}(0,0)(1.5,0)
\psline[linewidth=1.2pt]{*-*}(-1.5,0)(-0.5,0)
\psline[linewidth=1.2pt]{*-*}(-3.5,0)(-3,0)
\end{pspicture}
& EO, BO, TEO
\\ \hline
\end{tabular}
\caption{Possible types of orbits for the black spindle spacetime. We represent the range of the orbits by thick lines, whereas the dots mark the turning points of the geodesic motion. $q=0$ is denoted by a single vertical line and the position of the horizons by two vertical lines. }
\label{tab:orbit-types}
\end{center}
\end{table}
\section{Solution of the geodesic equation}\label{sec:sol}
\subsection{Solution for lightlike particles}
If we set $\delta=0$, then $X$ and $Y$ reduce to polynomials of fourth order in $q$ and $p$, respectively. Therefore
\begin{align}
X &= \sum_{i=0}^4a_iq^i \nonumber\\
Y &= \sum_{i=0}^4a'_ip^i \,
\end{align}
with the coefficients
\begin{align*}
a_0 &= Kl^2-L^2& \
a'_0 &= -K\left(\mathcal{C}^2+l^2\right)+L^2 \\
a_1 &= 0& \
a'_1 &= 2Km \\
a_2 &= 2EL-2K& \
a'_2 &= 2EL-2K \\
a_3 &= 0& \
a'_3 &= 0 \\
a_4 &= -E^2+\frac{K}{l^2}&
a'_4 &= E^2-\frac{K}{l^2} \,.
\end{align*}
The substitutions $q=\frac{1}{x}+q_{X}$ and $p=\frac{1}{y}+p_{Y}$ where $q_X$ and $p_Y$ are zeros of $X$ and $Y$, respectively, reduce the order of the polynomials to three
\begin{align}
\left(\frac{{\mathrm{d}} x}{{\mathrm{d}}\gamma}\right)^2 &=\sum_{i=0}^3b_ix^i \nonumber\\
\left(\frac{{\mathrm{d}} y}{{\mathrm{d}}\gamma}\right)^2 &=\sum_{i=0}^3b'_iy^i \,,
\label{eqn:poly3}
\end{align}
with adjusted coefficients $b_i$ and $b'_i$. Further substitutions $x=\frac{1}{b_3}\left(4z-\frac{b_2}{3}\right)$ and $y=\frac{1}{b'_3}\left(4w-\frac{b'_2}{3}\right)$ transform Eqn. \ref{eqn:poly3} into the standard Weierstra{\ss} form
\begin{align}
\left(\frac{{\mathrm{d}} z}{{\mathrm{d}}\gamma}\right)^2 &=4z^3-g_2z-g_3=P^{q}_3(z) \nonumber\\
\left(\frac{{\mathrm{d}} w}{{\mathrm{d}}\gamma}\right)^2 &=4w^3-g'_2w-g'_3=P^{p}_3(w) \,.
\label{eqn:standardWeierstrass}
\end{align}
Here $g_2$, $g_3$, $g'_2$ and $g'_3$ denote the Weierstra{\ss} invariants. The solutions of Eqn. \ref{eqn:standardWeierstrass} are given by the elliptic Weierstra{\ss} $\wp$-function \cite{Markushevich:1967}. After resubstitution the solutions for $q$ and $p$ can be written as
\begin{align}
q &=\pm\frac{b_3}{4\wp_{q}\left(\gamma\right)-\frac{b_2}{3}}+q_X \nonumber\\
p &=\pm\frac{b'_3}{4\wp_{p}\left(\gamma\right)-\frac{b'_2}{3}}+p_Y \,.
\label{eqn:subs1}
\end{align}
Here
\begin{align}
\wp_{q}\left(\gamma\right) &:= \wp\left(\gamma-\gamma_{\rm in};g_2,g_3\right) \nonumber\\
\wp_{p}\left(\gamma\right) &:= \wp\left(\gamma-\gamma'_{\rm in};g'_2,g'_3\right) \,,
\end{align}
where
\begin{align}
\gamma_{\rm in} &=\gamma_0+\int_{z_0}^{\infty}\frac{{\mathrm{d}} z}{\sqrt{4z^3-g_2z-g_3}} \nonumber\\
\gamma'_{\rm in} &=\gamma_0+\int_{w_0}^{\infty}\frac{{\mathrm{d}} w}{\sqrt{4w^3-g'_2w-g'_3}} \,
\end{align}
and $\gamma_0$ is the initial value.
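The reduction to the Weierstra{\ss} form can be checked numerically. For an arbitrary cubic $b_3x^3+b_2x^2+b_1x+b_0$, the substitution $x=\frac{1}{b_3}\left(4z-\frac{b_2}{3}\right)$ yields the invariants $g_2=\frac{b_2^2}{12}-\frac{b_1b_3}{4}$ and $g_3=\frac{b_1b_2b_3}{48}-\frac{b_2^3}{216}-\frac{b_0b_3^2}{16}$; the coefficients below are illustrative placeholders:

```python
# coefficients of a reduced cubic (dx/dgamma)^2 = b3 x^3 + b2 x^2 + b1 x + b0
b3, b2, b1, b0 = 2.0, -1.5, 0.7, 0.3        # illustrative values

# Weierstrass invariants implied by the substitution x = (4z - b2/3)/b3
g2 = b2**2/12.0 - b1*b3/4.0
g3 = b1*b2*b3/48.0 - b2**3/216.0 - b0*b3**2/16.0

def lhs(z):
    # (dz/dgamma)^2 obtained by transforming the cubic
    x = (4.0*z - b2/3.0)/b3
    return (b3**2/16.0)*(b3*x**3 + b2*x**2 + b1*x + b0)

def rhs(z):
    # standard Weierstrass form 4 z^3 - g2 z - g3
    return 4.0*z**3 - g2*z - g3
```

Since the identity is polynomial, `lhs` and `rhs` agree for every $z$, which provides a quick consistency check on the computed invariants.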
Next we will solve the $\sigma$- and $\tau$-equation. For simplicity, we will only present the solution of the $\sigma$-equation, since the procedure can easily be adjusted to the $\tau$-equation. With the help of the $q$- and $p$-equations we can rewrite Eqn. \ref{eqn:EOM} in integral form
\begin{equation}
\sigma-\sigma_0=\int_{p_0}^p{\frac{L-Ep^2}{{\mathcal{P}}}\frac{{\mathrm{d}} p}{\sqrt{Y}}}-\int_{q_0}^q{\frac{L-Eq^2}{{\mathcal{Q}}}\frac{{\mathrm{d}} q}{\sqrt{X}}}=I_p-I_q\,.
\end{equation}
We start with $I_q$. First we apply the same substitution as before to transform $X$ into the Weierstra{\ss} form. A partial fraction decomposition leads to
\begin{align}\label{eqn:eom_parfrac}
I_q &=\int_{z_0}^z\left(K_0+\sum_{n=1}^4\frac{K_n}{z-p^q_n}\right)\frac{{\mathrm{d}} z}{\sqrt{P^q_3(z)}} \\
I_p &=\int_{w_0}^w\left(K'_0+\sum_{n=1}^2\frac{K'_n}{w-p^p_n}+\sum_{n=1}^2\frac{K''_n}{\left(w-p^p_n\right)^2}\right)\frac{{\mathrm{d}} w}{\sqrt{P^p_3(w)}} \,.
\end{align}
Here $p^q_n$ and $p^p_n$ are first-order poles and the $K_n$, $K'_n$ and $K''_n$ are constants arising from the partial fraction decomposition. These elliptic integrals of the third kind can be solved with the help of the $\wp$, $\zeta$ and $\sigma$ functions \cite{Kagramanova:2010bk,Grunau:2010gd,Enolski:2011id} in terms of the integrals $I_1$ and $I_2$ of Refs.~\cite{Willenborg:2018,Lawden:2008}
\begin{align}
I^{q}_1(v_p) &=\frac{1}{\wp_q'(v_p)}\left[2\zeta_q(v_p)(v-v_p)+\ln{\frac{\sigma_q(v-v_p)}{\sigma_q(v+v_p)}}-\ln{\frac{\sigma_q(v_0-v_p)}{\sigma_q(v_0+v_p)}}\right] \\
I^{q}_2(v_p) &=-\frac{\wp_q''(v_p)}{\wp_q'(v_p)^2}I^q_1-\frac{1}{\wp_q'(v_p)^2}\left[2\wp_q(v_p)(v-v_0)+2\zeta_q(v)+2\zeta_q(v_0)+\frac{\wp_q'(v)}{\wp_q(v)-\wp_q(v_p)}+\frac{\wp_q'(v_0)}{\wp_q(v_0)-\wp_q(v_p)}\right]\,,
\end{align}
with $\wp_q(v_{p})=p^q$ and $\zeta_q(z)=-\int{\wp_q(z)\,{\mathrm{d}} z}=\frac{{\mathrm{d}}}{{\mathrm{d}} z}\ln{\sigma_q(z)}$.
With the help of these integrals we can write the solution for $\sigma$ as
\begin{equation}\label{eqn:solsigma}
\sigma-\sigma_0=(K_0-K'_0)(v-v_0)+\sum_{n=1}^4K_nI_1^q(v_{p_n})-\sum_{n=1}^2K'_nI_1^p(v_{p_n})-\sum_{n=1}^2K''_nI_2^p(v_{p_n})\,.
\end{equation}
\subsection{Solution for massive particles}
For $\delta=1$ the functions $X$ and $Y$ are polynomials of order 6 in $q$ and $p$, respectively. In this section we solve the equation for the coordinate $q$; the procedure can easily be adjusted for the coordinate $p$. As a reminder, Eqn. \ref{eqn:EOM} can be written as
\begin{equation}
\left(\frac{{\mathrm{d}} q}{{\mathrm{d}}\gamma}\right)^2=\sum_{i=0}^6a_iq^i=P_6(q)\,,
\end{equation}
where the coefficients $a_i$ depend on the parameters of the black hole and of the particle. By substituting
\begin{equation}
q=\pm\frac{1}{x}+q_6\,,
\end{equation}
we can reduce the order of the polynomial by $1$. Here $q_6$ is a zero of the polynomial of order $6$. Then we can write
\begin{equation}
\left(x\frac{{\mathrm{d}} x}{{\mathrm{d}}\gamma}\right)^2=P_5(x)\,.
\end{equation}
The right hand side of this equation is now a polynomial of order $5$ and the whole equation can be written as a hyperelliptic integral of the first kind
\begin{equation}
\gamma-\gamma_{\text{in}}=\int_{x_{\text{in}}}^x\frac{x'{\mathrm{d}} x'}{\sqrt{P_5(x')}} \,.
\end{equation}
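Although the analytic inversion requires the Kleinian $\sigma$-function, the hyperelliptic integral itself is straightforward to evaluate numerically; a sketch with an illustrative quintic that is positive on the integration range:

```python
import numpy as np
from scipy.integrate import quad

# illustrative quintic P_5(x), positive on [0, 1]
def P5(x):
    return (x + 2.0)*(x + 1.0)*(x + 0.5)*(x**2 + 1.0)

x_in, x_fin = 0.0, 1.0
gamma, err = quad(lambda x: x/np.sqrt(P5(x)), x_in, x_fin)
# gamma is the Mino-time interval corresponding to the motion from
# x_in to x_fin
```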
By using derivatives of the Kleinian $\sigma$-function $\sigma_i=\frac{\partial \sigma(\vec{z})}{\partial z_i}$ \cite{Hackmann:2008a}, where the $z_i$ are the components of the argument of $\sigma$, we can write the solution as
\begin{equation}
x=-\frac{\sigma_1(\vec{\gamma}_{\infty})}{\sigma_2(\vec{\gamma}_{\infty})}\,.
\end{equation}
The vector $\vec{\gamma}_{\infty}$ can be calculated with the help of the initial value $\gamma_{\text{in}}$ by
\begin{equation}
\vec{\gamma}_{\infty}=\left(-\int_{x}^{\infty}{\frac{{\mathrm{d}} x}{\sqrt{P_5(x)}}}, \gamma - \gamma_{\text{in}}- \int_{x_{\text{in}}}^{\infty}{{\frac{x {\mathrm{d}} x}{\sqrt{P_5(x)}}}}\right)^T \, .
\end{equation}
A resubstitution leads to the solution of Eqn. \ref{eqn:EOM}
\begin{equation}
q(\gamma)=\mp\frac{\sigma_2(\vec{\gamma}_{\infty})}{\sigma_1(\vec{\gamma}_{\infty})}+q_6\,.
\end{equation}
Similarly to Eqn. \ref{eqn:eom_parfrac}, the remaining equations for the coordinates $\sigma$ and $\tau$ include integrals of the form
\begin{equation}
\int_{w_0}^w\frac{1}{\left(w'-p\right)^2}\frac{{\mathrm{d}} w'}{\sqrt{P_5(w')}} \,,
\end{equation}
where $P_5(w)$ is a polynomial of order $5$ in $w$. Unfortunately, the current methods cannot be used to integrate these terms analytically.
\section{The orbits}\label{sec:orbits}
The orbits we show in this section are visualized in coordinates where the spindle shape of the spacetime is visible. For the embedding of the horizon, we use the procedure from section \ref{sec:spacetime}. First we set the coordinate $q$ equal to a constant and then we solve Eqn. \ref{eqn:EOM} for one of the constants of motion. Then we embed the orbit in cylindrical coordinates. The resulting orbits are shown in Fig. \ref{pic:orbit1} and Fig. \ref{pic:orbit2}.
\newpage
\begin{figure}[!ht]
\centering
\subfigure[$3$D]{\includegraphics[width=0.45\textwidth]{4a.eps}
}
\subfigure[$2$D]{\includegraphics[width=0.45\textwidth]{4b.eps}
}
\caption{Bound orbit for $\delta=0$, $C=0.5$, $l=1$, $m=m_0+0.1$. The blue line represents the orbit; note that $q=1$ holds along the entire orbit. The grey structures denote the horizons, which are embedded in cylindrical coordinates.}
\label{pic:orbit1}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[$3$D]{\includegraphics[width=0.45\textwidth]{4c.eps}
}
\subfigure[$2$D]{\includegraphics[width=0.45\textwidth]{4d.eps}
}
\caption{Bound orbit for $\delta=0$, $C=0.001$, $l=0.1$, $m=m_0+0.5$. The blue line represents the orbit; note that $q=1$ holds along the entire orbit. The grey structures denote the horizons, which are embedded in cylindrical coordinates.}
\label{pic:orbit2}
\end{figure}
\newpage
\section{Conclusion}\label{sec:conclusion}
In this article we used the Hamilton-Jacobi formalism to derive the equations of motion for test particles and light in the black spindle spacetime. We analysed the geodesic motion of massive test particles and light and solved the equations of motion in terms of elliptic functions and, in part, in terms of hyperelliptic functions. The equations of motion for light can be used to calculate observables like the light deflection angle and the shadow of the black spindle. \\
The analysis could be extended to the case of charged particles and it would be interesting to find a solution for the remaining equations for massive particles.
\section{Acknowledgements}
The authors thank Saskia Grunau and Jutta Kunz for interesting discussions and remarks. K.F. gratefully acknowledges financial support by the DFG, within the research training group 1620: Models of Gravity.
\bibliographystyle{unsrt}
A diffraction grating, defined as a periodic optical structure with infinite extent in one direction, diffracts waves incident on its surface~\cite{1}.
Owing to the periodicity of the grating, which can be of the order of a free-space wavelength or greater, an incident wave is scattered into propagating diffraction orders only in certain directions.
Concerning their applications, diffraction gratings have been widely used in laser resonators to tune and narrow lasing bandwidth~\cite{2,3}.
Blazed or echelette gratings~\cite{4,5,6} capable of scattering an incident wave into a specific diffraction order have been applied in frequency-scanning reflector antennas~\cite{7,8,9,10} and for radar cross section (RCS) reduction~\cite{11,12} at microwave frequencies and in Littrow mount external cavity lasers in optics~\cite{13}.
Classical blazed gratings are three-dimensional (3D) structures that generally take the form of right-angle sawtooths~\cite{14} and rectangular grooves~\cite{5}.
In the last few years, 2D metamaterials, also known as metasurfaces~\cite{15}, have been applied to mimic blazed gratings functionality~\cite{16}.
Metasurfaces, being very thin structures, have been proposed as planar alternatives to bulk metamaterials for light manipulation in various frequency domains, extending from microwave to visible frequencies.
The local magnitude and phase of the reflection and/or transmission coefficients of a metasurface can be controlled and thus used to manipulate the scattered wavefront of an incident beam.
As such, metasurfaces have been used to perform functions including anomalous reflection and refraction~\cite{17,18,19,20,21,22}, deflection~\cite{23,24,25,26}, lensing~\cite{23,24,27,28,29,32}, thin-film cloaking~\cite{33,34,35}, coupling of propagating waves to surface waves~\cite{36,PhysRevB.97.115447}, optical vortex beams generation~\cite{37,38,39,40}, and holographic imaging ~\cite{41,42,43,44,45,46}, to name a few.
\begin{figure}[tb]
\includegraphics[width=0.99\linewidth]{1.png}
\caption{\label{fig:1} Schematic diagram of the system under consideration: a periodic array of loaded thin wires (pink cylindrical lines) placed on PEC-backed dielectric substrate having permittivity $\varepsilon_s$ and thickness $h$. The array is excited by a TE-polarized plane wave incident at angle $\theta$.}
\end{figure}
Most of the metasurface-based wavefront manipulation geometries rely on the generalized laws of reflection and refraction presented in Ref.~\cite{17}.
However, several studies have shown that this approach suffers from low efficiency, particularly in configurations where extreme wave manipulation is considered (see, e.g., Refs.~\cite{Asadchy2016_SpatiallyDispMS,Alu2016_RHMS,49}).
Moreover, the implementation of field transformations in physical metasurface structures can prove highly challenging, and drawbacks remain concerning optimization time, the design complexity of subwavelength periodically arranged resonant meta-atoms, and material losses.
Very recently, the concept of metagratings evolved from classical diffraction gratings has been proposed as an interesting alternative to metasurfaces for boosting wavefront manipulation efficiency~\cite{Alu2017_mtg}.
They are designed for diffraction engineering by cancelling a finite number of undesired propagating diffraction orders and allowing desired ones to radiate.
In general, a metagrating is an array of scatterers (polarizable particles) separated by a distance of the order of the operating wavelength $\lambda$.
The sparse arrangement of scatterers does not allow one to describe metagratings in terms of local reflection and transmission coefficients (or surface impedances), as is done for metasurfaces.
In terms of meta-atoms, a metagrating consists of a limited number of meta-atoms in a supercell (period), compared to a metasurface, which is composed of supercells incorporating numerous meta-atoms with subwavelength periodicity.
Although metagratings can be considered as relatively simple systems in comparison to metasurfaces, functionalities such as perfect anomalous reflection and perfect beam splitting have been demonstrated in Refs.~\cite{Alu2017_mtg,Epstein2017_mtg,Epstein2018_ieee_mtg,Epstein2019_mtg_exp}, where three propagating diffraction orders were considered at most and were handled by only two degrees of freedom.
In Refs.~\cite{Popov2018,Popov2019_perfect}, the concept was generalized and the possibility to fully control an arbitrary number of propagating diffraction orders by means of a specific number of degrees of freedom was demonstrated.
Essentially, metagratings can be understood by considering the example of a 1D metagrating represented by a periodic array of supercells composed of $N$ thin wires each.
An incident wave excites polarization line currents in the wires, resulting in a scattered field represented by Floquet-Bloch modes which are defined by the period $L$ of the array (i.e., the length of the supercell).
In particular, the diffraction angles of the propagating diffraction orders can be found via the grating formula $L(\sin[\theta_m]-\sin[\theta_i])=m\lambda$, where $m$ is the order number and $\theta_i$ is the incidence angle of an impinging plane wave.
Furthermore, a line current is mathematically represented by the 2D Dirac delta function $\delta(y,z)$, which allows one to find the scattered field analytically, i.e., to know the complex amplitudes of all diffraction orders (propagating and nonpropagating).
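As an illustration, the propagating orders and their angles follow directly from the grating formula; for instance, for normal incidence and $L=2\lambda$ there are five propagating orders (the specific numbers below are for illustration only):

```python
import numpy as np

lam = 3e8/10e9                      # free-space wavelength at 10 GHz
Lper = 2.0*lam                      # grating period (illustrative)
theta_i = np.deg2rad(0.0)           # normal incidence

ms = np.arange(-5, 6)
s = np.sin(theta_i) + ms*lam/Lper   # sin(theta_m) from the grating formula
prop = ms[np.abs(s) <= 1.0]         # propagating diffraction orders
angles = np.degrees(np.arcsin(np.sin(theta_i) + prop*lam/Lper))
```

Here the propagating orders are $m=-2,\dots,+2$, diffracted at $-90^\circ$, $-30^\circ$, $0^\circ$, $30^\circ$ and $90^\circ$.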
In what follows, we deal with a particular configuration of 1D metagratings in which thin wires are placed on top of a metal-backed dielectric substrate, as illustrated in Fig.~\ref{fig:1}. A plane-wave illumination is assumed and the wires interact only with the TE-polarized field.
As it was shown in the theoretical study~\cite{Popov2018}, the complex amplitudes $A_m^{\textup{TE}}$ of the electric field of the reflected plane waves are given by the following expression:
\begin{eqnarray}\label{eq:Am}
A_m^{\textup{TE}}&=&-\frac{k\eta}{2L}\frac{(1+R_m^{\textup{TE}})e^{j\beta_mh}}{\beta_m}\sum_{q=1}^{N}I_q e^{j\xi_m (q-1)d}\nonumber\\&+&\delta_{m0}R_0^{\textup{TE}}e^{2j\beta_0h}
\end{eqnarray}
where $k$ and $\eta$ are, respectively, the wavenumber and the characteristic impedance outside the substrate, $\xi_m=k\sin[\theta_i]+2\pi m/L$ and $\beta_m=\sqrt{k^2-\xi_m^2}$ represent, respectively, the tangential and normal components of the wavevector of the plane waves, and $R_m^{\textup{TE}}$ is the corresponding Fresnel reflection coefficient.
Equation~\eqref{eq:Am} suggests that complex amplitudes of all $M$ propagating diffraction orders can be set arbitrarily if there are at least $N=M$ line currents $I_q$ in a supercell.
Other parameters of the system, such as the parameters of the substrate and the distances between the line currents, are assumed to be fixed, in contrast to the previously mentioned studies in Refs.~\cite{Alu2017_mtg,Epstein2017_mtg,Epstein2018_ieee_mtg,Epstein2019_mtg_exp}.
Although here we focus on the TE polarization and reflective configuration of metagratings, the case of TM polarization can be studied similarly by means of duality relations (see, e.g., Refs.~\cite{felsen1994radiation,Popov2019_LPA}).
The mathematical approach used in~\cite{Popov2018} to derive Eq.~\eqref{eq:Am} can be straightforwardly generalized to transmissive-type metagratings, such as the particular configuration studied in~\cite{Epstein2018_mtg_refraction}.
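A direct numerical evaluation of Eq.~\eqref{eq:Am} is straightforward. In the sketch below, the Fresnel coefficients $R_m^{\textup{TE}}$, the geometry and the currents $I_q$ are placeholders (in an actual design they follow from the substrate parameters and the load impedances):

```python
import numpy as np

lam = 0.03                            # 10 GHz
k, eta = 2.0*np.pi/lam, 376.73
Lper, d, h = 1.2*lam, 0.012, 0.005    # period, wire spacing, height (assumed)
theta_i = 0.0
ms = np.array([-1, 0, 1])             # the three propagating orders here
R = np.full(3, 0.3 + 0.4j)            # placeholder for R_m^TE

xi = k*np.sin(theta_i) + 2.0*np.pi*ms/Lper
beta = np.sqrt(k**2 - xi**2 + 0j)

def amplitudes(I):
    """Complex amplitudes A_m^TE of the reflected propagating orders."""
    phase = np.exp(1j*np.outer(xi, np.arange(len(I))*d))   # e^{j xi_m (q-1) d}
    A = -(k*eta/(2.0*Lper))*(1.0 + R)*np.exp(1j*beta*h)/beta*(phase @ I)
    A[ms == 0] += R[ms == 0]*np.exp(2j*beta[ms == 0]*h)    # specular term
    return A

A = amplitudes(np.array([1e-3 + 2e-4j, -5e-4j, 8e-4]))     # assumed currents
```

Setting all currents to zero leaves only the specular reflection of the bare substrate, which is a convenient sanity check.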
\begin{figure}[tb]
\includegraphics[width=0.99\linewidth]{2.png}
\caption{\label{fig:2} (a) Schematic illustration of a capacitive unit cell: printed capacitance on top of a grounded dielectric substrate. (b) Load-impedance density of the printed capacitance extracted from specular reflection. $d\approx 11.6$ mm for the first sample and $d\approx 15.7$ mm in the case of the second and third samples. The geometrical parameters are $w=0.25$ mm, $B=3$ mm and $h=5$ mm, and the operating frequency is set to $10$ GHz.}
\end{figure}
In this work, based on the theoretical study of Ref.~\cite{Popov2018}, we present the design of simplified metagratings composed of as many loaded wires per supercell as there are propagating diffraction orders.
The load-impedance densities of the wires are calculated and engineered from subwavelength wire elements. Measurements are performed on fabricated samples to experimentally validate the theoretical results.
The rest of the paper is organized as follows.
In Section~\ref{sec:2} we provide the design methodology of the reflective-type metagratings.
Section~\ref{sec:3} is devoted to the discussion of the experimental results and their comparison to simulation results.
In the same Section we discuss the mechanism behind the observed wide-band response of the proposed metagratings (see also Refs.~\cite{Epstein2018_ieee_mtg,Popov2018}).
Section~\ref{sec:4} concludes the paper.
\begin{figure*}[tb]
\includegraphics[width=0.99\linewidth]{3.png}
\caption{\label{fig:3} (a)--(c) Schematics of prescribed diffraction patterns established by the three different designed metagratings with: (a) nonspecular reflection at an angle of $60^\circ$ with $N=3$ unit cells per period, (b) nonspecular reflection at an angle of $23^\circ$ with $N=5$ unit cells per period, and (c) equal excitation of the $-2^\textup{nd}$ and $+1^\textup{st}$ orders out of five diffraction orders, respectively. The green and red beams correspond to excited and suppressed diffraction orders, respectively. (d)--(f) Measurement results of the scattered power in the [6 GHz -- 18 GHz] frequency range. (g)--(i) Power management in the excited diffraction orders and scattering losses; the Roman numerals correspond to the highest propagating diffraction order in a given frequency range.}
\end{figure*}
\section{Design procedure}
\label{sec:2}
In order to control the diffraction pattern with a metagrating, one has to carefully engineer it.
An appropriate dielectric substrate for a given frequency range is required. Its thickness $h$ and relative permittivity $\varepsilon_s$ should be carefully chosen in order to avoid the excitation of waveguide modes~\cite{Popov2018}.
These waveguide modes are the analog of the surface plasmon polaritons responsible for well-known grating anomalies (or Wood's anomalies).
Moreover, the presence of waveguide modes leads to the divergence of certain Fresnel reflection coefficients $R_m^{\textup{TE}}$ in Eq.~\eqref{eq:Am}, which manifests itself in significant numerical errors.
Thus, in order to select a good substrate for a given metagrating period $L$, one can plot the absolute value of the first few Fresnel reflection coefficients corresponding to nonpropagating diffraction orders as a function of the substrate's parameters (thickness and permittivity) and avoid poles.
As a rule of thumb, a substrate with low permittivity and thickness of the order of $\lambda/(4\sqrt{\varepsilon_s})$ is a good candidate for the design of metagratings.
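As a quick numerical check of this rule of thumb (a sketch only; the 10 GHz design frequency and the $\varepsilon_s=2.2$ substrate used for the samples below are taken as inputs):

```python
import math

c = 299_792_458.0   # speed of light, m/s
f = 10e9            # design frequency, Hz
eps_s = 2.2         # substrate relative permittivity

lam = c / f                          # free-space wavelength
h = lam / (4 * math.sqrt(eps_s))     # rule-of-thumb substrate thickness

print(f"lambda = {lam*1e3:.1f} mm, h = {h*1e3:.2f} mm")  # lambda = 30.0 mm, h = 5.05 mm
```

The result is close to the $h=5$ mm substrate thickness chosen for the fabricated samples.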
\begin{figure}[tb]
\includegraphics[width=0.99\linewidth]{4.png}
\caption{\label{fig:4} (a)--(c) Photographs of the first, second and third samples (from top to bottom, respectively). (d), (e) Schematics of the experimental setup: to measure the scattering over the range of angles from $-90^\circ$ to $90^\circ$, the experiment is performed in the two steps illustrated in figures (d) and (e).}
\end{figure}
After selecting a suitable substrate, the characteristics of the scatterers composing the metagrating have to be calculated.
An incident plane wave excites polarization line currents $I_q$ in the loaded thin wires, which can be characterized by load-impedance $Z_q$ and input-impedance $Z_{in}$ densities.
Each configuration of the diffraction pattern requires a different set of load-impedance densities, found from Ohm's law:
\begin{equation}\label{eq:Z}
Z_qI_q=E_{q}-Z_{in}I_q-\sum_{p=1}^N Z_{qp}^{(m)}I_p.
\end{equation}
The right-hand side of Eq.~\eqref{eq:Z} represents the total electric field at the location of the $q^\textup{th}$ wire (including the self-action $Z_{in}I_q$).
Thus, $E_q=(1+R_0^{\textup{TE}})\exp[j\beta_0h-j\xi_0(q-1)d]$ is the external electric field created by the incident wave $e^{-j k \sin \theta y - j k \cos \theta z + j \omega t}$ reflected from the substrate and the sum $\sum_{p=1}^N Z_{qp}^{(m)}I_p$ takes into consideration the mutual interactions of the $q^\textup{th}$ wire with the rest of the wires (infinite number) and the grounded substrate.
The quantities $Z_{qp}^{(m)}$ are called mutual-impedance densities.
Generally, the load-impedance densities calculated from Eq.~\eqref{eq:Z} require an active and/or lossy response, i.e., $\Re[Z_q]\neq 0$.
For instance, in order to perform large-angle nonspecular reflection by means of an $N=M$ metagrating, one has to cancel two propagating diffraction orders out of the $M=3$ available (as in the case of normal incidence).
Then, the conditions $A_{-1}^{\textup{TE}}=0$ and $A_{0}^{\textup{TE}}=0$ leave only a single free variable (the phase of $A_1^{\textup{TE}}$), which cannot be used to satisfy the three equations
\begin{equation}\label{eq:reactive}
\Re\left[\left(E_q-\sum_{p=1}^N Z_{qp}^{(m)}I_p\right)I_q^*\right]-\Re[Z_{in}]|I_q|^2=0
\end{equation}
providing reactive load-impedance densities (the asterisk stands for the complex conjugate).
Equation~\eqref{eq:Am} relates the complex amplitudes and currents, i.e. Eq.~\eqref{eq:reactive} can be rewritten in terms of $A_m$, with $m$ numbering propagating diffraction orders.
In order to deal with $N=M$ passive and lossless metagratings, equation~\eqref{eq:reactive} has to be satisfied.
To that end, spurious scattering in undesired propagating diffraction orders has to be permitted.
By introducing scattering losses, we sacrifice efficiency for the sake of a design that requires only reactive elements.
An optimal configuration is achieved by numerically maximizing the power scattered in desired propagating diffraction orders while minimizing the left-hand-side of Eq.~\eqref{eq:reactive}.
It is worth noting that reflecting metasurfaces face the same difficulty (see, e.g., Refs.~\cite{Asadchy2016_SpatiallyDispMS,Alu2016_RHMS}), with the notable exception of Refs.~\cite{Tretyakov2017_perfectAR,Epstein2016_AuxiliryFields}, which are rather special cases.
Generally, the efficiency of nonspecular reflection is used to evaluate the performance of conventional reflectarrays; this efficiency decreases when the angle of nonspecular reflection increases.
However, highly efficient multichannel reflection can still be achieved as we demonstrate further.
\begin{table}[tb]
\caption{\label{tab:1}Parameters of the fabricated metagratings. The indices correspond to the numbered unit cells in Figs.~\ref{fig:3} (a)--(c).}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Loads ($\eta/\lambda$) &$Z_1$ & $Z_2$ & $Z_3$ & $Z_4$ & $Z_5$ \\
\hline
Sample 1 & $-j30.3$& $-j6.35$& $-j1.57$ & - & - \\
\hline
Sample 2 & $-j3.77$& $-j0.43$& $-j31.2$& $-j7.06$& $-j5.27$\\
\hline
Sample 3 & $-j3.75$& $-j4.84$& $j0.05$ &$-j2.94$& $-j8.86$ \\
\hline
Arm's length (mm) & $A_1$ & $A_2$ & $A_3$ & $A_4$ & $A_5$\\
\hline
Sample 1 & $0.37$ & $3.25$ & $8.70$ & - & -\\
\hline
Sample 2 & $5.23$ & $11.2$ & $0.33$ & $2.91$ & $3.90$\\
\hline
Sample 3 & $5.25$ & $4.22$ & $12.3$ & $6.28$ & $2.27$\\
\hline
\end{tabular}%
}
\end{table}
Once Eq.~\eqref{eq:reactive} is satisfied and corresponding complex amplitudes are found, load-impedance densities are calculated from Eq.~\eqref{eq:Z} and implemented by wire elements engineered at subwavelength scale.
Although in a general case both capacitive and inductive loads might be required~\cite{Popov2019_perfect}, only capacitive elements are necessary in the examples considered further for an operating frequency set to $10$ GHz.
It is assumed that the samples would be fabricated by means of the conventional printed-circuit-board (PCB) technology.
Thus, thin wires are represented by metallic strips of width $w\ll \lambda$, thickness of $t_m=35$ $\mu$m,
and input-impedance density $Z_{in}=k\eta H_0^{(2)}(kw/4)/4$ as given in Ref.~\cite{tretyakov2003analytical}.
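For illustration, the quoted strip input-impedance density can be evaluated numerically with the Hankel function of the second kind; this is a sketch only, using the strip width $w=0.25$ mm and the $10$ GHz design frequency quoted elsewhere in the paper, and the free-space impedance $\eta\approx 377\,\Omega$:

```python
import numpy as np
from scipy.special import hankel2  # Hankel function of the second kind, H_0^(2)

eta = 376.73          # free-space wave impedance, Ohm
lam = 0.03            # wavelength at 10 GHz, m
k = 2 * np.pi / lam   # free-space wavenumber, 1/m
w = 0.25e-3           # strip width, m (w << lambda)

# Z_in = k * eta * H_0^(2)(k w / 4) / 4, an impedance per unit length (Ohm/m)
Z_in = k * eta * hankel2(0, k * w / 4) / 4
```

For $kw/4\ll 1$ the imaginary part dominates, i.e. the narrow strip is strongly reactive, which is why the printed capacitances are needed to tune the total load.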
Capacitive loads are obtained by means of microstrip printed capacitances, as illustrated in Fig.~\ref{fig:2} (a).
The load-impedance density of printed capacitances can be approximated analytically using formulas for the sheet impedance of a patch array~\cite{Tretyakov_patches_2008}, as done in Refs.~\cite{Epstein2017_mtg,Popov2018,Epstein2018_ieee_mtg,Popov2019_perfect}.
Although the analytical model represents a simple tool for designing metagratings, it takes into account the mutual coupling with adjacent loaded wires via a phenomenological scaling parameter which is found by means of 3D full-wave simulations of an entire supercell and, thus, is not unique.
On the other hand, a recently developed simulation-based approach~\cite{Popov2019_LPA} allows one to construct a metagrating unit cell by unit cell.
Instead of performing computations on a whole supercell, it deals with a single unit cell and takes the interaction between adjacent wires into account analytically to retrieve the load-impedance density.
Additionally, simulation-based approaches are advantageous for being able to consider all practical aspects of meta-atoms, such as finite thickness of the metal cladding and conduction and dielectric losses.
\begin{figure}[tb]
\includegraphics[width=0.99\linewidth]{5.png}
\caption{\label{fig:5} (a) Computational results of the normalized power scattered by a reflective metagrating (having three reactive wires per period $L=\lambda/\sin(60^\circ)$) in the $+1^\textup{st}$ diffraction order vs. the frequency. Normally incident plane wave is assumed. Optimal reactive load-impedance densities are found at each frequency. (b) Absolute value of the Fresnel's reflection coefficient corresponding to the second (evanescent in the considered frequency range) diffraction order.}
\end{figure}
We design three experimental samples of metagratings to operate at $10$ GHz ($\lambda\approx 30$ mm) and we assume a normally incident plane-wave illumination ($\theta=0$) for all three configurations.
The functionalities of these three samples are schematically illustrated in Figs.~\ref{fig:3} (a)--(c).
The $h=5$ mm thick F4BM220 dielectric substrate having permittivity $\varepsilon_s=2.2(1-j10^{-3})$ is selected as a good candidate for the proposed designs.
The first sample deals with three diffraction orders, maximizing the power scattered in the $+1^\textup{st}$ order and suppressing scattering in the $-1^\textup{st}$ and $0^\textup{th}$ orders. Hence, it is composed of three unit cells per supercell, which has a length $L=\lambda/\sin(60^\circ)$ at $10$ GHz. This metagrating achieves anomalous reflection at $60^\circ$.
The second and third samples each have five unit cells per supercell of length $L=2\lambda/\sin(50^\circ)$, which allows one to control five diffraction orders: $-2^\textup{nd}$, $-1^\textup{st}$, $0^\textup{th}$, $+1^\textup{st}$ and $+2^\textup{nd}$.
The second sample maximizes the power scattered in the $+1^\textup{st}$ propagating diffraction order and thus performs small angle anomalous reflection, corresponding approximately to $23^\circ$ at $10$ GHz.
The third sample equally excites the $-2^\textup{nd}$ and $+1^\textup{st}$ orders while suppressing the three others.
On the basis of these specifications, the required load-impedance densities are calculated by means of Eqs.~\eqref{eq:Am} -- \eqref{eq:reactive} and are presented in Table~\ref{tab:1}. Only two capacitive unit cells of length $d=\lambda/[3\sin(60^\circ)]$ and $d=2\lambda/[5\sin(50^\circ)]$ need to be simulated for the design of the three metagratings.
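The supercell and unit-cell dimensions follow directly from these design choices; a small arithmetic sketch using the nominal $\lambda\approx 30$ mm at $10$ GHz:

```python
import math

lam = 30.0  # nominal wavelength at 10 GHz, mm

# Sample 1: N = 3 cells, anomalous reflection at 60 degrees
L1 = lam / math.sin(math.radians(60))       # supercell length, ~34.6 mm
d1 = L1 / 3                                 # unit-cell length, ~11.5 mm

# Samples 2 and 3: N = 5 cells, period set by 2*lambda/sin(50 deg)
L2 = 2 * lam / math.sin(math.radians(50))   # supercell length, ~78.3 mm
d2 = L2 / 5                                 # unit-cell length, ~15.7 mm
```

With $L_2\approx 78.3$ mm the $+1^\textup{st}$ order of samples 2 and 3 leaves at $\sin^{-1}(\lambda/L_2)\approx 23^\circ$, consistent with the reflection angle stated above.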
A schematic of the unit cell is shown in Fig.~\ref{fig:2} (a).
In 3D full-wave simulations performed with COMSOL MULTIPHYSICS, periodic boundary conditions are applied to the side faces of the unit cell and the model is excited with a periodic port.
Parameters $w$ and $B$ are fixed to $0.25$ mm and $3$ mm, respectively.
The arm's length $A$ of the printed capacitance is used as a tuning parameter for the load-impedance density.
The load-impedance densities are first extracted from the $S_{11}$ parameter of the unit cell, as detailed by the procedure in Ref.~\cite{Popov2019_LPA}, and are plotted as a function of $A$ in Fig.~\ref{fig:2} (b).
Although built for two different values of the parameter $d$, the two curves in Fig.~\ref{fig:2}(b) almost coincide.
This proves that the analytical model used in Ref.~\cite{Popov2019_LPA} to take into account the interaction between adjacent wires and the substrate allows one to obtain the load-impedance density of a wire itself, and not of a corresponding array.
Eventually, the load-impedance densities are used to tailor the geometrical parameters of the microstrip printed capacitances listed in Table~\ref{tab:1}.
Photographs of the fabricated samples are displayed in Figs.~\ref{fig:4} (a)--(c); their physical size is approximately $480$ mm ($y$-direction) by $160$ mm ($x$-direction).
\section{Experimental results}
\label{sec:3}
In this section we demonstrate experimentally the control of diffraction patterns with the proposed and fabricated metagrating designs.
The samples are tested in an anechoic chamber dedicated to radar-cross-section bistatic measurements, where transmitting and receiving horn antennas are mounted on a circular track of $5$ m radius.
A schematic representation of the experimental setup is shown in Figs.~\ref{fig:4} (d) and (e).
In the current experiments, the transmitter is fixed while the receiver moves in $1^\circ$ steps; the minimum angle between the transmitter and the receiver during scanning is $4^\circ$.
In order to be able to measure the specular reflection, the transmitter is fixed at $\mp2^\circ$.
Thus, the experiments are conducted in two steps: when the transmitter is fixed at $\mp2^\circ$, the receiver moves from $\pm2^\circ$ to $\pm90^\circ$, as illustrated in Figs.~\ref{fig:4} (d) and (e).
Figures~\ref{fig:3}(d)--(f) show the measured angular distribution of the scattered power in the frequency range spanning from $6$ to $18$ GHz.
It is clearly observed that the positions of the main lobes (corresponding to diffraction orders) are in perfect agreement with the results given by the grating formula $\theta_m=\sin^{-1}(m c/(\nu L)+\sin[\theta_i])$ (represented by black dashed curves). Here, $c$ is the speed of light in vacuum and $\nu$ is the frequency.
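The grating formula is easily tabulated; the sketch below assumes normal incidence and takes $L$ as the nominal sample-1 period, so the returned angles are approximate:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def theta_m(m, nu, L, theta_i=0.0):
    """Diffraction angle (degrees) of order m at frequency nu (Hz), period L (m)."""
    s = m * C / (nu * L) + math.sin(math.radians(theta_i))
    if abs(s) > 1:
        return None  # order m is evanescent at this frequency
    return math.degrees(math.asin(s))

L = 0.03 / math.sin(math.radians(60))  # nominal sample-1 period at 10 GHz
print(theta_m(1, 10e9, L))   # ~60 degrees: the designed anomalous-reflection angle
print(theta_m(2, 10e9, L))   # None: only orders -1, 0, +1 propagate at 10 GHz
```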
However, the spectrum of waves scattered from a finite-size sample in the far-field is much more complex than just a few plane waves representing propagating diffraction orders.
Thus, in order to estimate the performance of the samples, we proceed in the following steps, as in Ref.~\cite{Epstein2019_mtg_exp}.
First, we localize each diffraction order between the angles $\theta_m^{(1)}$ and $\theta_m^{(2)}$, which correspond to a $3$ dB power attenuation with respect to the maximum power of the lobe.
The maximum is found near the angle $\theta_m=\sin^{-1}(m c/(\nu L)+\sin[\theta_i])$.
Finally, the normalized power $f_m(\nu)$ scattered in a given diffraction order $m$ at the frequency $\nu$ is estimated by means of the following integral formula
\begin{equation}\label{eq:perf}
f_m(\nu)=\frac{\int_{\theta_{m}^{(1)}}^{\theta_{m}^{(2)}} P(\nu,\theta)d\theta}{\sum_{m}\int_{\theta_{m}^{(1)}}^{\theta_{m}^{(2)}} P(\nu,\theta)d\theta},
\end{equation}
where $P(\nu,\theta)$ is the absolute power scattered in the receiving angle $\theta$ at the frequency $\nu$.
The summation in the denominator is performed over all propagating diffraction orders at the frequency $\nu$.
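A minimal numerical sketch of this estimate, applied to a synthetic two-lobe pattern (the Gaussian lobes and their weights are invented test data, not measurement results; the 3 dB windows are located as described above):

```python
import numpy as np

def f_m(theta, P, lobe_angles):
    """Normalized power per diffraction order using 3 dB integration windows."""
    dtheta = theta[1] - theta[0]          # uniform angular grid step
    powers = []
    for t0 in lobe_angles:
        i0 = int(np.argmin(np.abs(theta - t0)))  # sample nearest the lobe maximum
        half = P[i0] / 2                          # -3 dB level of this lobe
        lo = hi = i0
        while lo > 0 and P[lo - 1] >= half:       # walk left to theta_m^(1)
            lo -= 1
        while hi < len(P) - 1 and P[hi + 1] >= half:  # walk right to theta_m^(2)
            hi += 1
        powers.append(np.sum(P[lo:hi + 1]) * dtheta)  # rectangle-rule integral
    total = sum(powers)
    return [p / total for p in powers]

theta = np.linspace(-90, 90, 1801)
# synthetic pattern: 80% of the power at +60 deg, 20% in the specular direction
P = 0.8 * np.exp(-((theta - 60) / 3) ** 2) + 0.2 * np.exp(-(theta / 3) ** 2)
print(f_m(theta, P, [60.0, 0.0]))  # ~[0.8, 0.2]
```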
Figures~\ref{fig:3}(g)--(i) show the performance of the experimental samples (solid curves, obtained by means of Eq.~\eqref{eq:perf}) as a function of frequency; scattering losses represent the power scattered in undesired diffraction orders.
The dashed curves demonstrate the results obtained from 3D full-wave simulations (a supercell with imposed periodic boundary conditions and excited by a periodic port).
By comparing the solid and dashed curves, one can observe a good agreement between the experimental and simulation results.
Although the samples were designed to operate at a single frequency ($10$ GHz), it is seen that the scattering losses remain low in a wide range of frequencies.
One of the most important factors affecting the operating frequency range is the frequency response of the unit cells. Resonant elements generally decrease the operating frequency range significantly (see, e.g., Refs.~\cite{Popov2019_perfect,Popov2019_LPA}).
As demonstrated by Fig.~\ref{fig:2}(b), the unit cells used to construct the experimental samples do not exhibit resonances at $10$ GHz.
Since the designed metagratings possess a number of degrees of freedom equal to the number of propagating diffraction orders, it is expected that the scattering losses increase when approaching frequencies where the number of propagating diffraction orders changes (corresponding to the different areas in Figs.~\ref{fig:3}(g)--(i) labeled with Roman numerals).
While this is the case for the second and third samples, the performance of the first one degrades well before the appearance of the second propagating diffraction order; see Figs.~\ref{fig:3}(g)--(i).
This unveils yet another crucial factor influencing the operating frequency range: the excitation of waveguide modes discussed at the very beginning of Section~\ref{sec:2}.
Although we avoid waveguide modes around the design frequency of $10$ GHz, they may appear at lower or higher frequencies and this is exactly what happens with the first sample, as we further present in Fig.~\ref{fig:5}.
A waveguide mode is excited at the frequency where the Fresnel's reflection coefficient $R_2^{\textup{TE}}$ diverges, leading to a drastic decrease of the performance of the metagrating, as can be clearly observed in Fig.~\ref{fig:5} when comparing two different thicknesses of the dielectric substrate.
In the experimental and simulation data, the waveguide mode manifests itself as the resonance observed around $16.4$ GHz; see Figs.~\ref{fig:3}(d) and (g).
Figure~\ref{fig:5}(a) presents the computational results of maximizing the power of a normally incident plane wave coupled to the $+1^\textup{st}$ propagating diffraction order in the three-unit-cells-per-period metagrating, assuming purely reactive load-impedance densities. As demonstrated, the excitation of the waveguide mode can be suppressed by choosing a thinner substrate (e.g., $2.5$ mm instead of $5$ mm), which enables restoring the performance over the entire range of frequencies where there are three propagating diffraction orders (see blue curves in Fig.~\ref{fig:5}).
\section{Conclusion}
\label{sec:4}
To conclude, we have described in detail the design procedure of reflective metagratings and tested three experimental samples capable of establishing prescribed diffraction patterns over a wide frequency range.
The experimental results have demonstrated a good agreement with 3D full-wave simulations.
Thus, we have experimentally verified the concept of metagratings for controlling multiple beams with as few as one degree of freedom (represented by a reactive load) per propagating diffraction order.
We have identified the main factors affecting the operating frequency range of metagratings which should facilitate the development of wide-band beamforming devices in the future.
\section*{Acknowledgements}
The authors acknowledge the help of Anil Cheraly (ONERA) in conducting the experiments.
\bibliographystyle{IEEEtran}
\section{Introduction}
Spin-orbitronics\cite{spin-orbitronics1,spin-orbitronics2} has attracted a lot of attention recently as a new subfield of spintronics\cite{Spintronics review1,Spintronics review2} in which the relativistic spin-orbit interaction (SOI) plays a central role. Spin-orbitronics includes the generation and detection of spin-polarized currents through the spin Hall effect,\cite{SHE-review1,SHE-review2} the induction of non-equilibrium spin accumulations in non-magnetic materials through the Edelstein effect,\cite{edelestein effect1,edelestein effect2} the triggering of magnetization dynamics in single magnetic systems through spin-orbit torques (SOTs),\cite{SOT review1,SOT review2,SOT review3} and magnonic charge pumping by means of inverse SOTs.\cite{Inverse SOT} Spin-orbitronics is believed to ultimately enable the faster and more efficient magnetization switching needed for high-density data storage and information processing, thereby providing novel solutions to the essential challenges of spintronics.
In this paper we investigate the microscopic origin of SOTs in a two-dimensional (2D) metallic ferromagnet with spin-orbit coupling.
The magnetization dynamics in ferromagnets is governed by the seminal Landau-Lifshitz-Gilbert (LLG) equation,\cite{STT review1,STT review2,STT review3}
\begin{equation}
\label{LLG}
\frac{\partial \boldsymbol{m}}{\partial t}=-\gamma\, {\boldsymbol{m}} \times {\boldsymbol{H}}_{\textrm{eff}}+\alpha_G\, {\boldsymbol{m}}\times \frac{\partial {\boldsymbol{m}}}{\partial t}+{\boldsymbol{T}} ,
\end{equation}
where $\boldsymbol{m}$ is a unit vector along the magnetization direction, $|{\boldsymbol{m}}|=1$, $\gamma$ is the gyromagnetic ratio, $\alpha_G$ is the Gilbert damping constant, and ${\boldsymbol{H}}_{\textrm{eff}}$ is an effective field which includes the effects of the external magnetic field, exchange interactions, and dipole and anisotropy fields. The first term on the right-hand side of Eq.~(\ref{LLG}) describes the precession of the magnetization vector $\boldsymbol{m}$ around the effective field, while the second term describes the relaxation of the magnetization to its equilibrium orientation. Furthermore, $\boldsymbol{T}$ is a sum of different magnetization torques not contained in the effective field or damping.
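The interplay of the precession and damping terms can be made concrete by integrating the LLG equation (with $\boldsymbol{T}=0$) numerically. The sketch below uses the mathematically equivalent explicit (Landau-Lifshitz) form $\partial_t\boldsymbol{m}=-\tfrac{\gamma}{1+\alpha_G^2}[\boldsymbol{m}\times\boldsymbol{H}+\alpha_G\,\boldsymbol{m}\times(\boldsymbol{m}\times\boldsymbol{H})]$; the dimensionless units, $\gamma=1$, $\alpha_G=0.1$, the field direction, and the step size are all illustrative choices:

```python
import numpy as np

def llg_rhs(m, h_eff, gamma=1.0, alpha=0.1):
    """Explicit (Landau-Lifshitz) form of the LLG right-hand side, T = 0."""
    mxh = np.cross(m, h_eff)
    return -gamma / (1 + alpha**2) * (mxh + alpha * np.cross(m, mxh))

m = np.array([1.0, 0.0, 0.0])   # start perpendicular to the field
h = np.array([0.0, 0.0, 1.0])   # effective field along z
dt = 1e-3
for _ in range(20_000):
    m = m + dt * llg_rhs(m, h)
    m /= np.linalg.norm(m)      # enforce the constraint |m| = 1

print(m[2])  # close to 1: damping has pulled m most of the way toward h
```

Without the damping term ($\alpha_G=0$) the trajectory would precess around $\boldsymbol{H}$ indefinitely.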
The spin-polarized current-induced magnetization dynamics in magnetic materials arises as a result of spin transfer torques (STTs).\cite{STT review1,STT review2,STT review3} It is well known that STT may induce magnetization dynamics in spin-valve structures, and that the exchange interaction between the spin-polarized current and local spins leads, e.g., to domain-wall motion. In uniformly magnetized single-domain systems the transfer of spin angular momentum from the spin-current density $\boldsymbol{j}_s$ to the local magnetization is modelled by two different STT terms: i)~an anti-damping-like (ADL) or Slonczewski in-plane torque $\boldsymbol{T}\propto \boldsymbol{m}\times \boldsymbol{m}\times \boldsymbol{j}_s$, and ii) an out-of-plane field-like (FL) torque $\boldsymbol{T}\propto \boldsymbol{m}\times \boldsymbol{j}_s$, which is typically negligible in conventional metallic spin valves. On the other hand, in ferromagnets with magnetic domains, in which spin textures such as domain walls are necessarily present, the STT also includes reactive, $\boldsymbol{T}\propto\left(\boldsymbol{j}_s\cdot \boldsymbol{\nabla}\right) \boldsymbol{m}$, and dissipative, $\boldsymbol{T}\propto \boldsymbol{m}\times (\boldsymbol{j}_s\cdot\boldsymbol{\nabla}) \boldsymbol{m}$, torques.\cite{STT review1,STT review2,STT review3}
Recently, it was demonstrated both theoretically and experimentally that the current-induced nonequilibrium spin polarization\cite{edelestein effect1,edelestein effect2} in (anti-)ferromagnets with inversion asymmetry may exert a so-called SOT on localized spins and, consequently, may lead to a non-trivial magnetization dynamics.\cite{Garello-switching, SOT1,SOT2,SOT3,SOT4,anti-damping SOT1,anti-damping SOT2,anti-damping SOT-QKE,SOT exp,SOT DW Miron1,SOT DW Miron2,SOT exp Miron1,SOT exp Miron2,SOT exp Miron3,SOT AFM1,SOT AFM2,Linder} Unlike STT, the SOT phenomenon does not require an injection of spin current or the presence of spatial inhomogeneities in the magnetization. The magnetization switching due to SOTs may be achieved with current pulses as short as $\sim 180$\,ps, while the critical charge current density can be as low as $\sim 10^7$\,A\,cm$^{-2}$.\cite{Garello-switching}
Quite generally Rashba SOTs can be classified as either ADL or FL torques \cite{intristic and DW SOT}. The first theoretical and experimental studies of SOT have demonstrated that the ADL SOT is proportional to the disorder strength and can always be regarded as a small correction to the FL SOT.\cite{SOT1,SOT2,SOT3,SOT4, anti-damping SOT1,anti-damping SOT2, anti-damping SOT-QKE, SOT exp} On the other hand, in some recent experiments the opposite statement is made: the torques with ADL symmetry are more likely to be the main source of the observed magnetization behavior.\cite{SOT exp Miron1,SOT exp Miron2,SOT exp Miron3,SHE torque exp,SHE-SOT-thickness exp1,SHE-SOT-thickness exp2,SOT exp Yaroslav} These experiments are performed with ferromagnetic metals grown on top of a heavy metal with strong SOI and may, in principle, be explained by the spin Hall effect which induces a spin-polarized current. This spin current, in turn, exerts a torque on the magnetic layer via the STT mechanism\cite{SHE-SOT-theo, SHE-SOT-thickness exp1,SHE-SOT-thickness exp2} so that the ADL symmetry term plays the major role in the effect as discussed above.
It is, however, a serious experimental challenge to distinguish between SOT and spin-Hall STT in bilayers, since both torques have the same symmetry.\cite{SHE-SOT-thickness exp1,SHE-SOT-thickness exp2} Very recently Kurebayashi \textit{et al.}\cite{intristic SOT exp1} conducted an experiment on the bulk of strained GaMnAs, which has an intrinsic crystalline asymmetry. In these experiments, the contribution of a possible spin-Hall-effect STT was completely eliminated, while sizable ADL torques were nevertheless detected. This provides a strong argument in favour of the ADL-SOT nature of the observed torque. The authors of Ref.~\onlinecite{intristic SOT exp1} attribute this torque to an intrinsic Berry curvature, and estimate a scattering-independent, i.e. intrinsic, ADL-SOT.\cite{intristic SOT exp1,intristic SOT exp2,Blugel} This intrinsic ADL-SOT has also been reported by van der Bijl and Duine.\cite{intristic and DW SOT}
In this paper we calculate both FL- and ADL-SOTs in a 2D Rashba ferromagnetic metal microscopically by using a functional Keldysh theory approach.\cite{functional keldysh} By calculating the first vertex correction we show that the intrinsic ADL-SOT vanishes unless the impurity scattering is spin dependent.
The rest of this paper is organized as follows. Section II introduces the model and method. In Sec. III we calculate
SOTs with and without vertex corrections. We conclude our work in Sec. IV.
\section{Model Hamiltonian and method}
We start with the 2D mean-field Hamiltonian ($\hbar=c=1$),
\begin{equation}
\label{Ham}
\mathcal{H}[\psi^\dagger, \psi]=\int d^2\boldsymbol{r}\;\psi^\dagger_{\boldsymbol{r},t} \left[H_0+V_{\textrm{imp}}+\hat{\boldsymbol{j}}\cdot\boldsymbol{A}_t\right] \psi_{\boldsymbol{r},t},
\end{equation}
where ${\boldsymbol{\psi}}^\dag=({\boldsymbol{\psi}}_\uparrow^*,{\boldsymbol{\psi}}_\downarrow^*)$ is the Grassmann coherent-state spinor. Here, $H_0$ is the Hamiltonian density of the 2D conducting ferromagnet in the presence of Rashba SOI,\cite{Rashba review}
\begin{equation}
\label{H0}
H_0= \frac{{\boldsymbol{p}}^2}{2m_e}+\alpha_R\,(\boldsymbol{\sigma}\times\hat{\boldsymbol{z}})\cdot\boldsymbol{p} -\tfrac{1}{2}\Delta\,\boldsymbol{\sigma}\cdot\boldsymbol{n}_{\boldsymbol{r},t}-\tfrac{1}{2}\Delta_\textrm{B} \sigma_z,
\end{equation}
where $\boldsymbol{p}$ is the 2D momentum operator, $\alpha_R$ is the strength of the SOI, $\Delta$ and $\Delta_{B}$ are the exchange energy and the Zeeman
splitting due to an external field in the $z$-direction, respectively, ${\boldsymbol{n}}_{{\bf{r}},t}$ is an arbitrary unit vector that determines the quantization axis, and $\boldsymbol{\sigma}$ is the three-dimensional vector of Pauli matrices.
The vector potential $\boldsymbol{A}_t= \boldsymbol{E}e^{-i\Omega t}/i\Omega$ is included in Eq.~(\ref{Ham}) to model a dc electric field in the limit $\Omega\to 0$. It is coupled to the current density operator, which is given by $\hat{\boldsymbol{j}}=(ie/2m_e)(\overleftarrow{\boldsymbol{\nabla}} -\overrightarrow{\boldsymbol{\nabla}}) -e \alpha_R\,\boldsymbol{\sigma}\times\hat{\boldsymbol{z}}$, where $e$ is the electron charge and $m_e$ is the electron effective mass. Finally, the impurity potential $V_{\mathrm{imp}}$ is of the form
\begin{equation}
V_{\mathrm{imp}}(\boldsymbol{r})=\begin{pmatrix} V_\uparrow & 0 \\ 0 & V_\downarrow \end{pmatrix}
\sum_{i}\delta(\textbf{r}-\textbf{R}_i),
\end{equation}
where $V_{\uparrow(\downarrow)}$ is the strength of the spin-up (spin-down) disorder, and the index $i$ labels the impurity centers $\textbf{R}_i$. More specifically, we restrict ourselves to the Gaussian limit of the disorder potential.
The impurity-averaged retarded Green's function in the Born approximation is given by\cite{green function1,green function2,green function3}
\begin{equation}
G^{+}_{{\boldsymbol{k}},\varepsilon}=\left(g^{-1}_\downarrow \sigma^\uparrow+g^{-1}_\uparrow\sigma^\downarrow+\alpha_R(\sigma_yk_x-\sigma_xk_y)\right)^{-1},
\end{equation}
where $g^{-1}_s=\varepsilon -\varepsilon_k+ s M+i\gamma_s$, for $s=\uparrow\!\!\!(+)$ or $\downarrow\!\!\!(-)$, $\sigma^s=(\sigma_0+s \sigma_z)/2$, ${\boldsymbol{k}}$ and $\varepsilon$ are the wavevector and energy, respectively, $\varepsilon_k=k^2/2m_e$, and $M=(\Delta+\Delta_{B})/2$. We have also introduced the spin-dependent scattering rate $\gamma_{s}=\pi \nu_0 n_{\mathrm{imp}}V^2_{s}$, where $\nu_0=m_e/2\pi$ is the density of states per spin of the 2D electron gas, and $n_{\mathrm{imp}}$ denotes the impurity concentration. Here we have assumed that both spin-orbit-split bands are occupied, i.e., the Fermi energy is larger than the magnetization splitting, $\varepsilon_F> M$.
Following Ref.~\onlinecite{functional keldysh} we minimize the effective action on the Keldysh contour with respect to quantum fluctuations of $\boldsymbol{n}$. This procedure gives us directly the LLG equation which contains torque terms in linear response with respect to the external field $\boldsymbol{E}$. The effective action is given by $S=\int_{\mathcal{C}^K} dt L_F(t)$, where $\mathcal{C}^K$ stands for the Keldysh contour and $L_F(t)=\int d^2\boldsymbol{r}(\hat{\boldsymbol{\psi}}^\dagger_{\boldsymbol{r},t}i \frac{\partial}{\partial t}\hat{\boldsymbol{\psi}}_{\boldsymbol{r},t}-\mathcal{H})$ is the mean-field Lagrangian.
We further assume that we are dealing with a ferromagnetic metal which is uniformly magnetized in the $z$-direction. Thus, we can approximate the vector $\boldsymbol{n}$ as
\begin{equation}
\boldsymbol{n}_{\boldsymbol{r},t}\simeq
\begin{pmatrix} \delta n^x_{{\boldsymbol{r}},t} \\ \delta n^y_{{\boldsymbol{r}},t} \\ 1-\frac{1}{2}(\delta n^x_{{\boldsymbol{r}},t})^2-\frac{1}{2}(\delta n^y_{{\boldsymbol{r}},t})^2 \\
\end{pmatrix}.
\end{equation}
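A quick numerical check (illustrative only; the fluctuation values are arbitrary small numbers) confirms that this expansion preserves the constraint $|\boldsymbol{n}|=1$ up to fourth order in the fluctuations:

```python
import numpy as np

dnx, dny = 0.01, 0.02  # small transverse fluctuations (arbitrary test values)
n = np.array([dnx, dny, 1 - 0.5 * dnx**2 - 0.5 * dny**2])

# |n|^2 = 1 + (dnx^2 + dny^2)^2 / 4, i.e. unity up to O(dn^4)
err = abs(np.linalg.norm(n) - 1.0)
print(err)  # ~3e-8 for these values, far below the retained O(dn^2) accuracy
```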
In order to derive the LLG equation with torque terms it is sufficient to expand the effective action up to second order in $\delta \boldsymbol{n}$ and up to first order in the vector potential: $S_{\mathrm{eff}}=S_{\mathrm{SOT}}[\mathcal{O}(\delta {\bf{n}}), \mathbf{A}]+ S_{\mathrm{rest}}[\mathcal{O}(\delta {\bf{n}}^2),\mathbf{A}=0]$. A straightforward calculation gives
\begin{equation}
S_{\mathrm{SOT}}
=\int_{\mathcal{C}^K}\!\!\!dt\int_{\mathcal{C}^K}\!\!\!dt'\int \!d^2{\boldsymbol{r}}
\int \!d^2{\boldsymbol{r}}'\, {\chi}_{a;\boldsymbol{r}-\boldsymbol{r}';t,t'}
\delta n^a_{{\boldsymbol{r}'},t'},
\end{equation}
where $\chi_{a}$ ($a=\{x,y\}$) is the response function,
\begin{equation}
{\chi}_{a;\boldsymbol{r}-\boldsymbol{r}';t,t'}=\frac{i\Delta}{4}\left\langle {\boldsymbol{j}}_{\boldsymbol{r},t}\;\psi^\dagger_{\boldsymbol{r}',t'}\sigma_a\psi_{\boldsymbol{r}',t'}\right\rangle \cdot \boldsymbol{A}_{t}, \label{response-function}
\end{equation}
and $\boldsymbol{j}=\psi^\dagger\hat{\boldsymbol{j}}\psi$ is the charge current density.
Note that in the absence of SOI the term $S_{\mathrm{SOT}}$ vanishes and only the second-order terms $S_{\mathrm{rest}}$~\cite{functional keldysh} remain.
The field $\delta \boldsymbol{n}$ can be split into the physical magnetization field $\delta {\bf m}$ and a quantum fluctuation field $\bm{\xi}$ as $\delta n^a_{\boldsymbol{r},t_{\pm}}=\delta m^a_{\boldsymbol{r},t}\pm\xi^a_{\boldsymbol{r},t}/2$, where $+$ corresponds to the upper and $-$ to the lower branch of the Keldysh contour. At first order with respect to the quantum component we obtain
\begin{equation}
S_{\mathrm{SOT}}=\int\! dt\int\! dt'\!\int\! d^2{\boldsymbol{r}}'\!\! \int\! d^2{\boldsymbol{r}}\; \chi^{-}_{a;\mathbf{r}-\mathbf{r}';t,t'}\xi^a_{\mathbf{r}',t'},
\end{equation}
where $\chi^{-}$ is the advanced component of the correlator, and the sum over repeated indices $a$ is assumed.
The LLG equation is then derived by minimizing the effective action with respect to the quantum fluctuations, $\delta S_{\mathrm{eff}}/\delta \boldsymbol{\xi}=0$. Thus, the transverse components of the LLG equation in Fourier space are given by
\begin{equation}
\label{LLG0}
\mathcal{F}\left[\frac{\delta S_{\mathrm{rest}}}{\delta\xi_a}\right]_{{\bf{q}}=0,\varepsilon}+\chi^{-}_{a;{\bf{q}}=0,\varepsilon=0}=0,
\end{equation}
where $\mathcal{F}[...]$ represents the Fourier transformation operator. The functional derivative in Eq.~(\ref{LLG0}) gives the precession and Gilbert
damping terms of the LLG equation,\cite{functional keldysh} while the second term describes the SOT. The dependence of Gilbert damping on SOI is second order in $\alpha_R$,\cite{intristic and DW SOT} and we focus below on SOT which is of first order in $\alpha_R$. The appearance of the zero-momentum response function $\chi_{a;{\bf{q}}=0,\varepsilon=0}$ in the LLG equation shows that the SOT is finite even for spatially uniform magnetization, in contrast to the (non-)adiabatic STT which is of the first order in the gradient of magnetization.
\begin{figure}
\centering
\includegraphics[width=6cm]{fig.eps}
\caption{Feynman diagrams related to the spin-torque response function of Eq.~(\ref{response-function}): (a) the undressed response function, and (b) the first vertex correction. The solid line corresponds to an electron propagator in the Born approximation, the wiggly line to the coupling between the vector potential and the current, and the dashed line represents a spin fluctuation. The vertical dotted line describes the averaging over impurity positions.}\label{diagram}
\end{figure}
\section{Calculation of SOTs}
In what follows we evaluate the spin-torque response function of Eq.~(\ref{response-function}), shown diagrammatically in Fig.~\ref{diagram}, to derive the SOT in the ballistic limit $\gamma_s \ll k_B T$, where $k_B T$ is the thermal energy. We first calculate the bare (undressed) part of the response function, $\chi^{(0)}$, depicted in Fig.~\ref{diagram}a. The final result for the spin torque is then obtained by adding the first vertex correction, $\chi^{(1)}$, depicted in Fig.~\ref{diagram}b. Throughout the calculation we assume that $\gamma_s \ll k_B T \ll \alpha_R k_F\ll M$, where $k_F$ is the Fermi wavevector. The condition $ \alpha_R k_F\ll M$ is normally fulfilled in the metallic ferromagnets of interest. Whether or not the condition $\gamma_s\ll k_B T\ll \alpha_Rk_F$ is fulfilled depends strongly on the sample quality. The analysis of spin torques in the diffusive regime $k_B T\ll \gamma_s$ will require the calculation of the full vertex correction and will be done elsewhere.
\subsection{Undressed response function}
The spin-torque response function of Eq.~(\ref{response-function}) without vertex corrections is given by
\begin{equation}
\label{chi0}
{\chi}^{(0)}_{a;t,t'}=\frac{e\Delta}{4i} \int\frac{d^2\boldsymbol{k}}{(2\pi)^2}\,\mathrm{Tr}[\boldsymbol{v}_{\boldsymbol{k}}\check{G}_{{\boldsymbol{k}};t,t'}\sigma_a\check{G}_{{\boldsymbol{k}};t',t}]\cdot\mathbf{A}_{t},
\end{equation}
where $\boldsymbol{v}_{\boldsymbol{k}}=\boldsymbol{k}/m_e-\alpha_R \boldsymbol{\sigma}\times \hat{\boldsymbol{z}}$ is the velocity vector, and $\check{G}$ is the Green's function on the Keldysh contour. From Eq.~(\ref{chi0}) we find retarded and advanced components of the response function in the limit of zero frequency and momentum as
\begin{eqnarray}
\label{chi0pm}
{\chi}^{(0)\pm}_{a}
&=&\frac{e\Delta}{4i} \lim_{\Omega\rightarrow 0}\int\!\frac{d^2{\boldsymbol{k}}}{(2\pi)^2}
\int\!\!\!\!\int d\omega\,d\omega'\, \frac{f_{\omega'}-f_{\omega}}{\Omega+\omega'-\omega\pm i0} \nonumber \\
&\times&
\frac{1}{\Omega}\mathrm{Tr}[({\boldsymbol{v}}_{{\boldsymbol{k}}}\cdot\boldsymbol{E})\mathcal{A}_{{\boldsymbol{k}},\omega}\sigma_a
\mathcal{A}_{{\boldsymbol{k}},\omega'}],
\end{eqnarray}
where $\mathcal{A}_{{\boldsymbol{k}},\omega}=i(G^{+}_{\boldsymbol{k},\omega}-G^{-}_{\boldsymbol{k},\omega})/2\pi$ is the spectral function and $f_\omega=[e^{(\omega - \epsilon_F)/k_B T}+1]^{-1}$ stands for the Fermi distribution function.
In the limit of weak disorder, we can decompose the response function into two parts: the intrinsic part $\chi_{\textrm{in}}$, which turns out not to depend on the scattering rate and describes interband transitions, and the extrinsic part $\chi_{\textrm{ex}}$, which depends essentially on disorder and corresponds to intraband contributions. The intrinsic part corresponds to the principal-value integration in Eq. (\ref{chi0pm}), while the extrinsic part is given by the corresponding delta-function contribution. To leading order in $\alpha_R$ we find
\begin{subequations}
\begin{eqnarray}
\label{inrinsic-chi}
{\chi}^{(0)-}_{\textrm{in},a} &=& \frac{e\alpha_R\Delta}{8 M} \nu_0 E_{a},\\
{\chi}^{(0)-}_{\textrm{ex},a} &=&\frac{e\alpha_R\Delta}{8 M}\nu_0\left[\frac{\varepsilon_F\!-\!M}{\gamma_\downarrow}\!-\!\frac{\varepsilon_F\!+\!M}{\gamma_\uparrow}\right](\hat{\boldsymbol{z}}\times {\boldsymbol{E}})_a.\qquad
\label{extrinsic-chi}
\end{eqnarray}
\end{subequations}
The corresponding expressions for the SOTs, which do not yet include vertex corrections, are the ADL contribution $\boldsymbol{T}^{(0)}_{\mathrm{ADL}}$ and the FL contribution $\boldsymbol{T}^{(0)}_{\mathrm{FL}}$,
\begin{subequations}
\begin{eqnarray}
\boldsymbol{T}^{(0)}_{\mathrm{ADL}}&=& -2e\alpha_R \nu_0 \boldsymbol{m}\times\boldsymbol{m}\times(\hat{\boldsymbol{z}}\times\boldsymbol{E}),
\label{intrinsic-SOT}\\
\boldsymbol{T}^{(0)}_{\mathrm{FL}}&=& -\frac{e\alpha_R \Delta \nu_0}{M}
\left[\!\frac{\varepsilon_F\!+\!M}{\gamma_\uparrow}-\frac{\varepsilon_F\!-\!M}{\gamma_\downarrow}\!\right]\boldsymbol{m}\times(\hat{\boldsymbol{z}}\times\boldsymbol{E}).\qquad
\label{extrinsic-SOT}
\end{eqnarray}
\end{subequations}
Hence, we find that the ADL SOT in the absence of vertex corrections has an intrinsic origin, i.e., is disorder-independent.
\subsection{Vertex correction}
Let us now turn to the calculation of the first vertex correction to the spin-torque response function depicted in Fig.~\ref{diagram}b. For the corresponding response function on the Keldysh contour we find
\begin{eqnarray}
&&\chi^{(1)}_{a;t,t'}=
\frac{e\Delta}{4i}\int\frac{d{\boldsymbol{k}}_1}{(2\pi)^2} \int\frac{d{\boldsymbol{k}}_2}{(2\pi)^2}\int_{c^K}\!\!\! dt_1\int_{c^K}\!\!\! dt_2
\tr[\mathbf{A}_{t}\cdot{\boldsymbol{v}}_{{\boldsymbol{k}}_1} \nonumber \\
&&\times \check{G}_{\boldsymbol{k}_1;t,t_1}\langle V_{\mathrm{imp}}
\check{G}_{\boldsymbol{k}_2;t_1,t'}\sigma_a\check{G}_{{\boldsymbol{k}}_2;t',t_2}V_{\mathrm{imp}}\rangle \check{G}_{{\bf{k}}_1;t_2,t}].
\end{eqnarray}
The advanced component of $\chi^{(1)}$ at zero energy and momentum is, then, given by
\begin{eqnarray}
\chi^{(1)-}_{a}\!\!\!&=&\frac{e\Delta}{4i} \eta_b\!\int\!\!\frac{d^2\boldsymbol{k}_1}{(2\pi)^2}\!\int\!\!\frac{d^2\boldsymbol{k}_2}{(2\pi)^2}\!
\int\!\!\!\!\int \!\!d\omega\,d\omega'\, \frac{f_{\omega'}-f_{\omega}}{\Omega\!+\!\omega\!-\!\omega'\!-\!i0} \nonumber \\
&\times&\frac{1}{\Omega}\mathrm{Tr}\big[\boldsymbol{E}\cdot{\boldsymbol{v}}_{{\bf{k}}_1}\big(G^{+}_{{\boldsymbol{k}}_1,\omega}\sigma_{b}\mathcal{A}_{{\boldsymbol{k}}_2,\omega}\sigma_aG^{+}_{{\boldsymbol{k}}_2,\omega'}\sigma_{b}\mathcal{A}_{{\boldsymbol{k}}_1,\omega'}\nonumber\\
&&\qquad+\,G^{+}_{{\boldsymbol{k}}_1,\omega}\sigma_{b}\mathcal{A}_{{\boldsymbol{k}}_2,\omega}\sigma_a\mathcal{A}_{{\boldsymbol{k}}_2,\omega'}\sigma_{b}G^{-}_{{\boldsymbol{k}}_1,\omega'}\nonumber\\
&&\qquad+\,\mathcal{A}_{{\boldsymbol{k}}_1,\omega}\sigma_{b}G^{-}_{{\boldsymbol{k}}_2,\omega}\sigma_aG^{+}_{{\boldsymbol{k}}_2,\omega'}\sigma_{b}\mathcal{A}_{{\boldsymbol{k}}_1,\omega'} \nonumber\\
&&\qquad+\,\mathcal{A}_{{\boldsymbol{k}}_1,\omega}\sigma_{b}G^{-}_{{\boldsymbol{k}}_2,\omega}\sigma_a\mathcal{A}_{{\boldsymbol{k}}_2,\omega'}\sigma_{b}G^{-}_{{\boldsymbol{k}}_1,\omega'}\big)\big],
\end{eqnarray}
where the summation over the index $b=\{0,z\}$ and the limit $\Omega \to 0$ are assumed. We have also used the notations $\eta_0=n_{\mathrm{imp}}(V_\uparrow+V_\downarrow)^2/4$ and $\eta_z=n_{\mathrm{imp}}(V_\uparrow-V_\downarrow)^2/4$. Using the same approximations as for the undressed part of the response function we obtain the intrinsic contribution as
\begin{equation}
\chi^{(1)-}_{\textrm{in},a} = -\frac{e\alpha_R\Delta}{8 M}\nu_0\frac{\gamma_\uparrow+\gamma_\downarrow}{2(\gamma_\uparrow
\gamma_\downarrow)^{\frac{1}{2}}}E_{a},
\end{equation}
while the corresponding extrinsic contribution is of the second order in scattering rates and can be neglected.
Thus, we obtain the FL and ADL torques in the limit $\gamma_s\ll\alpha_Rk_F\ll M$ to the leading order in the SOI as
\begin{eqnarray}
\boldsymbol{T}_{\mathrm{FL}}\!\! &= &\!\!\frac{e\alpha_R \nu_0 \Delta}{M}\left[\frac{\varepsilon_F-M}{\gamma_\downarrow}-\frac{\varepsilon_F+M}{\gamma_\uparrow}\right]
\boldsymbol{m}\!\times\!(\hat{\boldsymbol{z}}\!\times\!\boldsymbol{E}),\label{FL-SOT}\\
\boldsymbol{T}_{\mathrm{ADL}}\!\!&= &\!\!\left[\frac{\gamma_\uparrow+\gamma_\downarrow}{2\sqrt{\gamma_\uparrow
\gamma_\downarrow}}-1\right]2e\alpha_R \nu_0\boldsymbol{m}\!\times\!\left(\boldsymbol{m}\!\times\!(\hat{\boldsymbol{z}}\!\times\!\boldsymbol{E})\right).\qquad
\label{ADL-SOT}
\end{eqnarray}
These expressions provide the main result of this paper.
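The scattering-rate dependence of the ADL torque in Eq.~(\ref{ADL-SOT}) is easy to verify numerically. Below is a minimal sketch (the function name and the dimensionless-rate convention are ours, not from the derivation):

```python
from math import sqrt

def adl_prefactor(gamma_up, gamma_dn):
    """Bracketed prefactor of the ADL torque:
    (gamma_up + gamma_dn) / (2*sqrt(gamma_up*gamma_dn)) - 1."""
    return (gamma_up + gamma_dn) / (2.0 * sqrt(gamma_up * gamma_dn)) - 1.0

# Spin-independent scattering (gamma_up == gamma_dn): the intrinsic ADL
# contribution is cancelled exactly by the vertex correction.
print(adl_prefactor(1.0, 1.0))  # -> 0.0

# Spin-dependent scattering: by the AM-GM inequality the prefactor is
# non-negative, so a finite ADL torque survives only for unequal rates.
print(adl_prefactor(2.0, 0.5))  # -> 0.25
```

With equal rates the bracket vanishes identically, reproducing the cancellation discussed in the Conclusions.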
\section{Conclusions}
The SOT mechanism is based on the exchange of angular momentum between the crystal lattice and the local magnetization via spin-orbit coupling. Here, we found the FL- and ADL-SOTs microscopically, Eqs. (\ref{FL-SOT}) and (\ref{ADL-SOT}). The FL-SOT originates from the Fermi-surface contribution of the response function Eq. (\ref{response-function}), while the ADL-SOT acquires contributions from the entire bands. Our main result in Eq. (\ref{ADL-SOT}) immediately shows that the intrinsic contribution to the ADL-SOT is completely canceled in the presence of spin-independent scattering, $\gamma_\uparrow=\gamma_\downarrow$. That is, the intrinsic
component of the ADL SOT, which originates from virtual interbranch
transitions, is canceled by the vertex correction when weak spin-independent impurity scattering is taken into account. Our result, therefore, explicitly elucidates the interplay between intrinsic and extrinsic contributions to ADL SOT. This result resembles the suppression of both spin Hall conductivity in nonmagnetic metals and anomalous Hall conductivity in magnetic metals, in the presence of spin-independent disorder.\cite{green function1,green function2,green function3,Gerrit-AHE,Sakai} In these effects the cancelation is model dependent, and occurs for parabolic band dispersion and linear-in-momentum SOI. We expect a similar scenario for intrinsic SOT.
The existence of a Rashba effect on the interface between an ultrathin ferromagnet and a heavy metal is the subject of intense discussion. Our results show that the amplitudes of the FL and ADL SOTs can be of the same order of magnitude depending on the relative strengths of the SOI, spin-dependent scattering rates, and exchange interaction. Our results may qualitatively describe Co/Pt interfaces which are characterised by particularly large Rashba SOI of the magnitude of $1$\,eV\,{\AA}.\cite{SOT exp Miron1,SOT exp Miron2,SOT exp Miron3, rashba strength} Relating the strength of the Rashba coupling to the magnitude of the SOTs, however, would require ab-initio modeling and additional experimental information. Which of our results apply to more general models and band structures will be the subject of future investigation.
\section*{ACKNOWLEDGEMENT}
We acknowledge Hiroshi Kohno and Dmitry Yudin for useful discussions. The work was supported by Dutch Science Foundation NWO/FOM 13PR3118 and by EU Network FP7-PEOPLE-2013-IRSES Grant No.\ 612624 ``InterNoM''. R.D. is a member of the D-ITP consortium,
a program of the Netherlands Organisation for Scientific Research (NWO), which is funded by the Dutch Ministry of Education, Culture and Science (OCW).
\section{Introduction}
\label{Introduction}
Relativistic jet outflows from radio galaxies are a primary mechanism for
energy extraction from supermassive black holes in active galactic nuclei
(AGN) and an important source of
energy input to the intergalactic medium (IGM) in groups and clusters
(e.g. \citealt[and references therein]{McN07}).
We are studying relativistic jet kinematics and dynamics in nearby
low-luminosity radio galaxies with Fanaroff-Riley Class I (FR\,I -
\citealt{FR74}) morphology for which we have obtained radio imaging and
polarimetry at high angular resolution transverse to the jets as well as along
their lengths.
We have developed procedures
for deriving three-dimensional variations of
intrinsic jet parameters -- velocity field, emissivity and
magnetic-field ordering -- from an analysis of {\sl systematic} asymmetries
between the jets and counter-jets \citep{LB02a,CL,CLBC,LCBH06}. We compare the observed asymmetries in
images of total intensity, degree of linear
polarization and apparent magnetic field direction with the predicted effects of
relativistic aberration on synchrotron emission from particles in
partially-ordered magnetic fields in model outflows and deduce the
distributions of intrinsic properties within the jets. We have found that a
generic property of the jet outflows in FR\,I radio galaxies is that they
decelerate from relativistic speeds ($\beta = v/c \approx 0.8$ -- 0.9) near the AGN to
subrelativistic speeds a few kiloparsecs away, and that the outflows are
systematically faster on-axis than at their edges.
It is critical for such an analysis to distinguish patterns of asymmetry in the jets
produced by relativistic aberration from any that are intrinsic to the
outflows or
which result from interactions between the outflows and anisotropic
environments, e.g.\ from pressure gradients or winds in the IGM. One asymmetry
in FR\,I radio jets that has proven instructive in some sources and
problematic in others is the {\sl systematic difference between transverse
intensity profiles in the brighter jets and weaker counter-jets} when
observed at high sensitivity and angular resolution.
This difference correlates with indicators of the
orientation of the jets to the line of sight.
A statistical study of FR\,I jets in the B2 sample by
\cite{LPdRF} found
that the ratio of jet to counter-jet FWHM measured by Gaussian fitting at the
same distance from the nucleus on both sides is strongly anticorrelated with the average
jet/counter-jet brightness ratio
and with the ratio of core\footnote{The 'core' is defined as an unresolved
component coincident with the AGN. The core/extended flux-density ratio is
a statistical indicator of orientation.} to extended flux density.
This anticorrelation is qualitatively as expected for intrinsically symmetrical
relativistic outflows which are faster on-axis than at their edges. In
this case, relativistic aberration makes the transverse brightness profiles of the
approaching, hence apparently brighter, jet more centrally peaked than those
of the receding counter-jet. Gaussian fitting to the jet and counter-jet FWHM
then yields smaller values of the width for the apparently brighter jets even if
the (slower moving) outer boundaries of the jets appear identical on both sides
of the AGN.
The amplitude of the effect found in the B2 source sample by
\cite{LPdRF} is, however, surprisingly large. Modelling of the
anticorrelation requires velocities $\beta_{\rm on-axis}
\approx 0.7$ and $\beta_{\rm edge} \approx 0.1$ \citep{LPdRF} in order
to reproduce the spread of width ratios. Two lines of argument suggest
that such large velocity ratios are not typical of the FR\,I
population. Firstly, the ratio $\beta_{\rm edge} / \beta_{\rm
on-axis}$ required to explain the effect is quantitatively
inconsistent with the brightness and polarization distributions in
four of the five individual FR\,I sources we have modelled
(\citealt{LB02a,CL,CLBC} -- the exception is 3C\,296;
\citealt{LCBH06}). Secondly, the smallest values of $\beta_{\rm edge}
/ \beta_{\rm on-axis}$ are required only to generate the unusually
small values of jet/counter-jet width ratio $\approx 0.6$ in a few
members of the B2 source sample with particularly high jet/counter-jet
brightness ratios, whose jets are thought to be highly inclined to the
plane of the sky \citep{LPdRF}.
Thus far, our results would be consistent with the idea that all FR\,I jets are
symmetrical outflows, but that only a few have very large transverse velocity
gradients. Even this hypothesis fails for two of the B2 sample members, B2\,0206+35
and B2\,0755+37 \citep{LGBPB}\footnote{From now on we drop the B2.}. These
sources are unusual in that the {\sl lower} isophotes of their brighter jets
{\sl also} appear narrower than those of the counter-jets at the same distance
from the AGN in images of moderate resolution and sensitivity
(e.g.\ \citealt{Bondi00}) - even though the jets clearly exhibit the basal
asymmetries associated with symmetrical decelerating relativistic outflows.
Apparent width asymmetry in the fainter jet emission cannot generally be
explained by relativistic effects alone if the jets are both {\em symmetrical}
and {\em purely outflowing}\footnote{We discuss a special magnetic-field
configuration for which this is not the case in Appendix~\ref{toroidal}.}. On
the other hand, if the asymmetry is attributed to intrinsic or environmental
differences on the two sides of the AGN (e.g.\ \citealt{Bondi00}), there should
be no systematic trend for the wider jet to be on the receding side as it is in
the (albeit small) sample of \citet{LPdRF}.
In this paper, we explore an alternative explanation for the transverse
brightness profile asymmetries of the jets and counter-jets in 0206+35 and
0755+37. This work was motivated by new deep imaging of these sources showing:
(a) that their counter-jets have minima in their emission profiles with the same
widths as the main jets at similar distances from the nucleus and (b) that the
main jets are surrounded by faint emission resembling the broader outer
emission in the counter-jets \citep[and Section~\ref{images}, below]{LGBPB}.
The new imaging data lead us to model the jets in these sources as intrinsically
symmetrical outflows near the jet axis surrounded by broader features from {\sl
backflowing} material. If backflow in the broader features can be
approximately symmetrical and mildly relativistic, then aberration
can make its emission appear slightly brighter on the {\sl counter-jet}
side, producing differences in isophotal width between the jets similar to those
observed.
Backflow is a reasonable hypothesis a priori for FR\,I sources like
0206+35 and 0755+37 {\sl whose jets appear to propagate within well-defined
lobes}.
It has been an acknowledged ingredient of models of lobed FR\,II
sources since the first attempts to simulate their hydrodynamics
\citep{Norman82}.
FR\,I sources cannot form lobes without similar deflection of jet material and
\cite{LGBPB} showed that FR\,I lobes resemble those of FR\,II sources in many
respects. If FR\,I jets are much lighter than their surroundings and
initially fast
(e.g.\ \citealt{LB02b}), we should not be surprised if some large-scale
post-jet flow in
FR\,I lobes is marginally relativistic. We also note that mildly relativistic
backflow
extends almost all the way back
to the centre of the host galaxy in simulations of relativistic FR\,I jets
with initial dynamical
flow parameters matching those deduced from our observations of 3C\,31 and
realistic pressure and density profiles for the surrounding IGM
\citep{LB02b,PM07}.
In this paper, we show that a fully symmetrical model in which a decelerating
axisymmetric outflow is surrounded by a slower (but still slightly
relativistic) backflow
is {\sl quantitatively} consistent with the detailed brightness and
polarization distributions
of the jets and counter-jets in 0206+35 and 0755+37. It is not obvious a
priori
that conditions needed to produce {\sl symmetrical} backflow are likely to be
realised in lobed FR\,I radio galaxies. Nevertheless, our results suggest that
mildly relativistic backflow contributes significantly to the observed jet vs
counter-jet width relationships and we suggest ways in which this (perhaps
unexpected) ingredient
of FR\,I source structure could be investigated further.
In Section~\ref{obs}, we summarize the optical and large-scale radio properties
of the sources and discuss the additional image processing required to separate
jet and lobe emission. Section~\ref{fits} describes our modelling procedure and
Section~\ref{compare} gives a comparison between models and data. The model
parameters are presented in Section~\ref{intrinsic}. A brief discussion is given
in Section~\ref{discuss}. Section~\ref{summary-further} summarizes our
conclusions and suggests further work. Finally, Appendix~\ref{toroidal} demonstrates
that a toroidally-magnetized outflow can, in special circumstances, produce jet/counter-jet
sidedness ratios significantly less than unity.
We adopt a concordance cosmology with Hubble constant, $H_0$ =
70\,$\rm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_\Lambda = 0.7$ and $\Omega_M =
0.3$.
\section{Observations and images}
\label{obs}
\subsection{The sources: optical data and large-scale radio structures}
\label{sources}
The galaxy identifications, redshifts and linear scales for the two sources studied
here are given in Table~\ref{tab:sources}. Their radio structures have
been described in detail by \citet{LGBPB}, from which the images
in Fig.~\ref{fig:allsources} are taken.
\begin{center}
\begin{table}
\caption{Names, redshifts, linear scales and associated references for the
sources in this paper.\label{tab:sources} }
\begin{tabular}{lllll}
\hline
Name & Galaxy & Redshift & Scale & Reference \\
& name & & kpc & \\
& & & arcsec$^{-1}$ & \\
\hline
0206+35 & UGC\,1651 & 0.03773 & 0.748 & 1 \\
0755+37 & NGC\,2484 & 0.04284 & 0.845 & 2 \\
\hline
\multicolumn{5}{l}{\scriptsize References: (1) \citet{Miller02}; (2) \citet{Falco99}.}
\end{tabular}
\end{table}
\end{center}
\begin{figure}
\begin{center}
\epsfxsize=8.5cm
\epsffile{allsources.eps}
\caption{Grey-scale images of the sources \citep{LGBPB}. The boxes mark the
areas shown in later plots and the grey-scale ranges, in mJy\,beam$^{-1}$, are indicated
by the labelled wedges. (a) 0206+35 at 4.9\,GHz, 1.2\,arcsec FWHM. (b)
0755+37 at 4.9\,GHz, 1.3\,arcsec FWHM.
\label{fig:allsources}
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=8.5cm
\epsffile{col_cont.eps}
\caption{False-colour images of total intensity for 0206+35 and 0755+37 over the
areas outlined in Fig.~\ref{fig:allsources}. On the right-hand side of each
panel, we have plotted a single contour to outline the brightest emission of
the main (brighter) jet. On the left-hand side, this contour (rotated through
180\degr) is plotted on the counter-jet emission. (a) 0206+35 at 0.35-arcsec
FWHM resolution. (b) 0755+37 at 1.3-arcsec FWHM resolution.
\label{fig:colcont}
}
\end{center}
\end{figure}
\subsection{Images}
\label{images}
\begin{center}
\begin{table}
\caption{Parameters of the sub-images used for modelling and spectral
analysis. Col. 1: source name; col. 2: observing frequency (an asterisk indicates
that the image was used for modelling); col.
3: resolution (FWHM); col. 4: rms off-source noise level in $I$; col.5:
average noise level in $Q$ and $U$; col. 6: sub-image position angle; col. 7:
sub-image sizes parallel and perpendicular to the jet axis.
\label{tab:images}}
\begin{tabular}{lllrrrr}
\hline
Source & $\nu$ & Res &\multicolumn{2}{c}{rms} & Rot & Size \\
& GHz & arc- &\multicolumn{2}{c}{$\mu$Jy\,b$^{-1}$}& deg & arcsec$^2$\\
& & sec & $\sigma_I$ & $\sigma_P$ & &\\
\hline
0206+35 & 1.425 & 1.20 & 19 & $-$ & $-$41.0&$22 \times 20$\\
0206+35 & 4.860 & 1.20 & 12 & $-$ & $-$41.0&$22 \times 20$\\
0206+35 & 4.860 & 0.35*& 7.2 & 7.1 & $-$41.0&$22 \times 20$\\
0755+37 & 1.425 & 1.30 & 20 & $-$ & 158.5&$66 \times 66$\\
0755+37 & 4.860 & 1.30*& 7.8 & 7.9 & 158.5&$66 \times 66$\\
0755+37 & 4.860 & 0.40*& 8.0 & 7.1 & 158.5&$20 \times 16$\\
\hline
\end{tabular}
\end{table}
\end{center}
Table~\ref{tab:images} summarizes the relevant parameters of the
high-resolution sub-images which we model or use for spectral analysis
in this paper (details of the observations and data reduction are
given by \citealt{LGBPB}). The ${\bf E}$-vector position angles of
linear polarization at 4.860\,GHz have been corrected for Faraday
rotation using multifrequency imaging \citep{Bands,0755RM,LGBPB} and
residual depolarization is predicted to be negligible at this
frequency. The areas plotted in later figures are outlined on
Fig.~\ref{fig:allsources}.
Fig.~\ref{fig:colcont} shows rotated sub-images. On the right-hand side of each
panel, we have drawn a single contour to outline the brightest part of the main
jet. On the left-hand side, this contour is rotated through 180\degr\ and plotted
on the counter-jet emission. This diagram emphasizes the points made earlier
that the minima in the counter-jet emission have roughly the same widths as the
main jets and that the main jets are in turn surrounded by fainter emission.
In order to model jets that appear superimposed on lobes, we must
try to separate the two emission components in all Stokes parameters. There is no unique
way to do this when their spectra and intensities vary
independently across the field of view. Any approach to isolating jet emission
in a lobed FR\,I source therefore entails some simplifying assumption about the
variations in intensity $I$ or in spectral index\footnote{We define spectral
index $\alpha$ in the sense $I(\nu) \propto \nu^{-\alpha}$.} $\alpha$ of the
lobes or jets over the region to be modelled. We have attempted to separate the
jets and lobes for these sources in a way that optimizes the resolution and
signal-to-noise of the jet emission while letting us check for systematic errors
resulting from the assumptions made while doing the lobe-jet separation, as
follows.
One approach to separating jet and lobe emission observed at two
frequencies is based on their systematic {\it spectral} differences: the
jets have characteristic spectral indices close to $\alpha = 0.55$, whereas the lobes
have $\alpha \ga 0.8$ near the centres of the sources \citep{LGBPB}. If the
spectral index of the lobe emission close to the jet is reasonably constant,
we can use a variant of the `spectral tomography' method \citep{K-SR,KSetal,LCCB06}
by assuming that what is observed can be described as the sum of two components: a
jet and a lobe with constant spectral indices $\alpha_{\rm j}$ and $\alpha_{\rm
l}$, respectively. The brightnesses observed at a given point at two
frequencies $\nu_0$ and $\nu_1$ are then:
\begin{eqnarray*}
I(\nu_0) & = & B_{\rm j}\nu_0^{-\alpha_{\rm j}} + B_{\rm l}\nu_0^{-\alpha_{\rm
l}},\\
I(\nu_1) & = & B_{\rm j}\nu_1^{-\alpha_{\rm j}} + B_{\rm l}\nu_1^{-\alpha_{\rm
l}}.
\end{eqnarray*}
We can scale and subtract the two brightness distributions to estimate the jet brightness
at the modelling frequency $\nu_0$:
\begin{eqnarray*}
B_{\rm j}\nu_0^{-\alpha_{\rm j}} & = & \frac{\nu_0^{\alpha_{\rm l}}I(\nu_0) -
\nu_1^{\alpha_{\rm l}}I(\nu_1)}{\nu_0^{\alpha_{\rm l}}-\nu_1^{\alpha_{\rm
l}}(\nu_0/\nu_1)^{\alpha_{\rm j}}}.
\end{eqnarray*}
Once we know $\alpha_{\rm l}$, the method can also
be applied to Stokes $Q$ and $U$ provided that we correct the images at both
frequencies for Faraday rotation before subtraction, and that depolarization is
negligible (as is the case for these sources). Note that the spectral index of
the jets, $\alpha_{\rm j}$, must be both constant and known in order to scale the
result correctly.
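The spectral subtraction above amounts to solving a two-component linear system at each pixel. A minimal numerical sketch (function and variable names are ours; a synthetic pixel with known jet and lobe amplitudes is used to check that the jet component is recovered exactly):

```python
import numpy as np

def spectral_subtract(I0, I1, nu0, nu1, alpha_j, alpha_l):
    """Two-component spectral separation at two frequencies.

    Solves I(nu) = B_j*nu**-alpha_j + B_l*nu**-alpha_l per pixel and
    returns the jet brightness B_j*nu0**-alpha_j at the modelling
    frequency nu0.
    """
    num = nu0**alpha_l * I0 - nu1**alpha_l * I1
    den = nu0**alpha_l - nu1**alpha_l * (nu0 / nu1)**alpha_j
    return num / den

# Synthetic pixel built from known jet and lobe amplitudes.
nu0, nu1 = 4.860, 1.425          # GHz
alpha_j, alpha_l = 0.55, 0.90    # values adopted for 0206+35
B_j, B_l = 3.0, 1.5
I0 = B_j * nu0**-alpha_j + B_l * nu0**-alpha_l
I1 = B_j * nu1**-alpha_j + B_l * nu1**-alpha_l
jet0 = spectral_subtract(I0, I1, nu0, nu1, alpha_j, alpha_l)
print(np.isclose(jet0, B_j * nu0**-alpha_j))  # -> True
```

The same routine can be applied to Faraday-corrected $Q$ and $U$ maps, since the operation is linear in the brightness.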
In practice, we estimated the lobe spectral index for $\nu_0 = 4.860$\,GHz and
$\nu_1 = 1.425$\,GHz by performing the subtraction
for various trial values of $\alpha_{\rm l}$ and selecting that which minimized
the residual lobe emission in jet-free regions.
Spectral subtraction can remove even rather complicated
lobe emission if the spectral index is constant, but it has two serious
flaws for our purposes: (a) the signal-to-noise ratio of the corrected image is
lower than that of the deep high-frequency image alone and (b) our
highest-resolution data for 0206+35 and 0755+37 are only at one frequency.
The alternative of {\it spatial} subtraction assumes that the lobe {\sl intensity}
varies only slowly across the jet. This approach can be best applied at high
angular resolution where the lobe brightness is low and the spatial variation of jet
emission is clearest.
To separate the two types of emission spatially in $I$, $Q$ and $U$, we
define two background regions parallel to the jet axis and just outside the
maximum transverse extent of the jet as estimated from spectral-index images,
i.e.\ using both the intensity and spectral properties of the jet emission to
guide our choice of the background regions. We then smooth the background
brightness distributions parallel to the jet axis with a boxcar function to
improve their signal-to-noise ratio and interpolate linearly between
them under the jet.\footnote{Higher-order interpolation works poorly for these
brightness distributions.} We refer to this approach as generating `interpolated
images'.
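The interpolation step can be sketched as follows. This is a minimal illustration for a straight jet along the image x-axis with single-row background strips (function and array names are ours; the real procedure uses rotated sub-images and strips of finite width):

```python
import numpy as np

def interpolated_background(image, y_lo, y_hi, box):
    """Estimate lobe emission under a jet running along the x-axis.

    Takes two background rows at transverse positions y_lo and y_hi just
    outside the jet, boxcar-smooths each along the jet axis, and
    interpolates linearly between them at every transverse position.
    """
    kernel = np.ones(box) / box
    lo = np.convolve(image[y_lo], kernel, mode='same')
    hi = np.convolve(image[y_hi], kernel, mode='same')
    frac = (np.arange(image.shape[0])[:, None] - y_lo) / (y_hi - y_lo)
    return lo[None, :] * (1.0 - frac) + hi[None, :] * frac

# A linear transverse lobe gradient is reproduced exactly (away from the
# convolution edges, where 'same'-mode zero padding lowers the smoothed rows).
ny, nx = 11, 50
lobe = np.linspace(1.0, 2.0, ny)[:, None] * np.ones((1, nx))
model = interpolated_background(lobe, 0, ny - 1, box=5)
print(np.allclose(model[:, 2:-2], lobe[:, 2:-2]))  # -> True
```

Subtracting the returned background model from the observed image then leaves the jet emission, as in Figs~\ref{fig:0206sub}(d) and \ref{fig:0755sub}(d).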
For 0206+35 and 0755+37 we first used spectral
subtraction to verify the total extent of the jet emission and to set
appropriate reference regions for interpolation, then constructed interpolated images for the
final modelling.
\begin{table}
\caption{Interpolation parameters for lobe subtraction. Col. 1: source name;
col. 2: resolution (FWHM); col. 3: background region distances from jet axis; col. 4: width of boxcar
smoothing function parallel to the axis.\label{tab:interp}}
\begin{tabular}{llrr}
\hline
Source & FWHM & Background & Smooth \\
& (arcsec) & (arcsec) & (arcsec)\\
\hline
0206+35& 0.35 & 9 -- 10 & 1.0 \\
0755+37&1.30 & 30 -- 45 & 3.0 \\
\hline
\end{tabular}
\end{table}
In Figs~\ref{fig:0206sub} and \ref{fig:0755sub}, we show the results of both
subtraction methods for the two sources. Figs~\ref{fig:0206sub}(a) and
\ref{fig:0755sub}(a) show the images at the resolution used for modelling before
subtraction. We found best-fitting lobe spectral indices between 1.425 and
4.860\,GHz of 0.90 and 0.81 for
0206+35 and 0755+37, respectively. In Fig.~\ref{fig:0206sub}(b), we show the spectral
subtraction for 0206+35 at lower resolution. Although not useful for modelling,
this image outlines the total extent of the flatter-spectrum emission associated
with the jets. The spectral subtraction for 0755+37 at the lower of the two
resolutions used for modelling, shown in Fig.~\ref{fig:0755sub}(b), has little
trace of residual lobe emission but low signal-to-noise.
Guided by the spectral subtraction, we set the interpolation parameters as in
Table~\ref{tab:interp} and computed interpolated images at 1.425 and
4.860\,GHz, from which we in turn derived the spectral-index images shown in
Figs~\ref{fig:0206sub}(c) and \ref{fig:0755sub}(c). These are blanked on the
error in spectral index, as noted in the captions. We then estimated integrated
spectral indices for the jets by summing the interpolated $I$ images over all
pixels which are unblanked on the spectral-index images, excluding the cores. We
found $\langle \alpha_{\rm j}\rangle = 0.55$ for 0206+35 and 0.53 for 0755+37. We used these
values to scale the spectral subtractions. Variations across
the modelled regions are small, with $0.50 \leq \alpha_{\rm j} \leq 0.62$ in both
sources. Finally, we show the 4.860-GHz
interpolated images at the resolutions used for modelling in
Figs~\ref{fig:0206sub}(d) and \ref{fig:0755sub}(d).
We are confident that the interpolated images represent the jet emission
accurately in both sources. The lobe emission in 0206+35 is quite faint at
0.35-arcsec FWHM resolution, and after subtraction, the area around the jets appears
devoid of residual emission in all Stokes parameters (e.g.\
Fig.~\ref{fig:0206sub}d). The lower-resolution (1.3 arcsec FWHM) image of
0755+37 proved to be more of a challenge, because the lobe emission is bright
and irregular (Fig.~\ref{fig:0755sub}a). The spectral subtraction gave a clean
image of the jet with negligible background emission, but amplified noise
(Fig.~\ref{fig:0755sub}b). In contrast, interpolation (Fig.~\ref{fig:0755sub}d)
failed to remove the small-scale lobe emission accurately but retained the full
signal-to-noise ratio of our high-frequency images. Comparison of the two
corrected $I$ images showed that they are accurately consistent wherever $I >
100$\,$\mu$Jy\,beam$^{-1}$. We therefore used the interpolated images for
modelling (in which the faint residual lobe emission has low weight). Modelling the spectrally-subtracted image
(and its counterparts in $Q$ and $U$) gave consistent but
less well constrained results. In the intensity and polarization profiles
plotted below, we compare the results from both subtraction methods.
At 0.4-arcsec FWHM resolution, used for modelling the inner jets of 0755+37, the lobe
brightness is negligible and we did not attempt to subtract it.
\begin{figure}
\begin{center}
\epsfxsize=8.5cm
\epsffile{0206_sub.eps}
\caption{False-colour images showing the results of lobe subtraction for
0206+35. The $I$ intensity colour range (0 -- 3\,mJy\,beam$^{-1}$) is the same for
panels (a) and (d). (a) No subtraction at 0.35-arcsec resolution. (b) Subtraction at 1.2-arcsec resolution
assuming a constant spectral index for the lobe. (c) Spectral index distribution
over the jet and counter-jet at 1.2-arcsec resolution after interpolated
subtraction,
blanked where $\sigma_{\alpha} > 0.03$
(the colour range for spectral index is shown by the labelled wedge).
(d) Subtraction by linear interpolation between background strips parallel to
the jet axis. The resolution is 0.35\,arcsec.
\label{fig:0206sub}
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=8.5cm
\epsffile{0755_sub.eps}
\caption{False-colour images showing results of lobe subtraction in 0755+37 at
1.3-arcsec resolution.
The $I$ intensity colour range (0 -- 2.5\,mJy\,beam$^{-1}$) is the same for
panels (a), (b) and (d). (a) No subtraction. (b) Subtraction assuming a constant
spectral index for the lobe, as described in the text. (c) Spectral index distribution
over the jet and counter-jet after interpolated lobe subtraction,
blanked where $\sigma_{\alpha} > 0.1$ (the colour range for spectral index is
indicated by the labelled wedge).
(d) Subtraction by
linear interpolation between background strips parallel to the jet axis.
\label{fig:0755sub}
}
\end{center}
\end{figure}
\section{Model fits}
\label{fits}
\subsection{Assumptions}
\label{model-assumptions}
To model the jet emission, we make the following assumptions.
\begin{enumerate}
\item The jets are intrinsically symmetrical, axisymmetric and
antiparallel. They can be treated, on average, as laminar, stationary flows.
\item The radio emission is from relativistic particles with a power-law energy
spectrum $n(E) = n_0 E^{-(2\alpha+1)}$ ($\alpha$ is the spectral index). We
use the integrated values for the modelled regions after lobe subtraction: $\langle \alpha\rangle =
0.55$ for 0206+35 and 0.53 for 0755+37.
The corresponding maximum degree of polarization is
$p_0 = (3\alpha+3)/(3\alpha+5) = 0.70$ in both cases and the variations of
spectral index across the modelled regions are small enough to be ignored (Section~\ref{images}).
\item The magnetic field is tangled on small scales, but anisotropic.
\item The effects of Faraday rotation on the observed emission are
corrected completely. This is an extremely good approximation for 0206+35 and
0755+37 \citep{Bands,0755RM,LGBPB}.
\end{enumerate}
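The quoted maximum degrees of polarization follow directly from the integrated spectral indices of assumption (ii); a minimal numerical check (the function name is ours):

```python
def p_max(alpha):
    """Maximum fractional linear polarization of synchrotron emission
    from a power-law electron energy spectrum with spectral index alpha."""
    return (3.0 * alpha + 3.0) / (3.0 * alpha + 5.0)

# Both sources round to p_0 = 0.70
print(round(p_max(0.55), 2), round(p_max(0.53), 2))
```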
\subsection{Outline of method}
\label{model-outline}
For a symmetrical, outflowing jet with velocity $v = \beta c$, emitting isotropically in
the rest frame and inclined by an angle $\theta$ to the line of sight, a measurement of the observed jet/counter-jet intensity ratio
\begin{eqnarray*}
I_{\rm j}/I_{\rm cj} &=& [(1+\beta\cos\theta)/(1-\beta\cos\theta)]^{2+\alpha} \\
\end{eqnarray*}
does not allow us to determine the velocity and inclination separately. The key
to our method is the use of linear polarization to break this degeneracy. The
relation between the angles to the line of sight in the rest frame of the outflow,
$\theta^\prime$, and in the observed frame, $\theta$, is (where $\Gamma = (1-\beta^2)^{-1/2}$ is the bulk Lorentz factor):
\begin{eqnarray*}
\sin\theta^\prime_{\rm j} & = & [\Gamma(1-\beta\cos\theta)]^{-1}\sin\theta
\makebox{~~~~~(main jet)} \\
\sin\theta^\prime_{\rm cj} & = & [\Gamma(1+\beta\cos\theta)]^{-1}\sin\theta
\makebox{~~(counter-jet)} \\
\end{eqnarray*}
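These two relations are straightforward to evaluate numerically. The following sketch (our own illustrative code, not part of the published analysis) returns the intensity ratio and the rest-frame sines for given $\beta$, $\theta$ and $\alpha$:

```python
import math

def jet_counterjet_ratio(beta, theta, alpha):
    """Observed jet/counter-jet intensity ratio for a symmetric,
    isotropically emitting outflow; theta in radians."""
    return ((1.0 + beta * math.cos(theta)) /
            (1.0 - beta * math.cos(theta))) ** (2.0 + alpha)

def rest_frame_sines(beta, theta):
    """sin(theta') for the approaching jet and the receding
    counter-jet (relativistic aberration)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    sin_j = math.sin(theta) / (gamma * (1.0 - beta * math.cos(theta)))
    sin_cj = math.sin(theta) / (gamma * (1.0 + beta * math.cos(theta)))
    return sin_j, sin_cj
```

For a non-relativistic flow ($\beta \rightarrow 0$) the ratio tends to unity and both rest-frame angles coincide with $\theta$, as required.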
The emission in all three Stokes parameters depends on $\theta^\prime$, since the magnetic
field is in general anisotropic. If the flow is significantly relativistic, we
effectively observe the two jets at different values of $\theta^\prime$ and can
use the differences in polarization for the approaching and receding jets as an
additional constraint to separate $\beta$ and $\theta$. For backflow, the
argument is identical with the roles of jet and counter-jet interchanged.
The principal steps in our method \citep{LB02a,CL,CLBC,LCBH06} are as follows.
\begin{enumerate}
\item Build a parameterized model of the geometry, the velocity field and the
variations of
emissivity ($\propto n_0
B^{1+\alpha}$) and magnetic-field anisotropy in the rest frame of the emitting
plasma.
\item Calculate the observed-frame emission in $I$, $Q$ and $U$, taking account
of relativistic aberration and anisotropic emission in the rest frame.
\item Integrate along the line of sight, normalize to the measured total flux
density and convolve with the observing beam.
\item Calculate and sum $\chi^2$ over the $I$, $Q$ and $U$ images. This is our
measure of goodness of fit.
\item Optimize the parameters using the downhill simplex method of Nelder \& Mead \citep{NR}.
\end{enumerate}
We explored a wide range of starting simplexes in order
to be sure of locating the global minimum in $\chi^2$.
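Steps (iv) and (v) can be sketched as follows (illustrative only: `predict`, the image dictionaries and the parameter vectors stand in for the actual model machinery):

```python
import numpy as np
from scipy.optimize import minimize

def chi_squared(params, data, sigma, predict):
    """Step (iv): sum chi^2 over the I, Q and U images.
    `predict` maps a parameter vector to model I, Q, U images."""
    model = predict(params)
    return sum(np.sum(((data[s] - model[s]) / sigma[s]) ** 2)
               for s in ("I", "Q", "U"))

def fit(starting_points, data, sigma, predict):
    """Step (v): downhill simplex (Nelder-Mead), restarted from a
    range of starting simplexes to locate the global minimum."""
    best = None
    for p0 in starting_points:
        res = minimize(chi_squared, p0, args=(data, sigma, predict),
                       method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
```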
\subsection{Fitting functions}
\label{fit-funcs}
The parameterized model that we fit to the VLA observations is a simplified
version of
those in our previous work \citep{LB02a,CL,CLBC,LCBH06}, with the addition of a few extra terms to describe
the backflow. The functional forms are
given explicitly in Table~\ref{tab:functions}. A critical
discussion of fitting functions will be given elsewhere (Laing \& Bridle,
in preparation).
\begin{table*}
\caption{Coordinate definitions and functional forms for geometry, velocity, proper emissivity and
magnetic-field ordering.\label{tab:functions}}
\begin{tabular}{llll}
\hline
&&&\\
Description & Quantity & Functional form & Distance range\\
&&&\\
\hline
&&&\\
Distance coordinate &
$r$ & $\frac{zr_0}{(r_0 + A)\cos\xi -A}$&$r \leq r_0$ \\
(outflow and backflow)&& $\frac{z+A}{\cos\xi} - A$& $r \geq r_0$\\
&& $A = x_0/\sin\xi_0-r_0 = x_{\rm b}/\sin\xi_{\rm b}-r_0$&\\
&&&\\
Outflow streamline index & $s$ & by continuity & $r \leq r_0$ \\
&& $\xi/\xi_0$ & $r \geq r_0$ \\
&&&\\
Outflow radius & $x(z,s)$ & $a_2(s)z^2 + a_3(s)z^3$ & $r \leq r_0$ \\
&& $(z-r_0+x_0/\sin\xi_0)\tan(\xi_0s)$& $r \geq r_0$ \\
&&&\\
Outflow velocity &$\beta(r,s)$ & $\beta_1 \exp(s^2\ln v_1)$ & $r \leq r_{v1}$ \\
&& $\beta_1 \exp(s^2\ln v_1)\left (\frac{r_{v0} - r}{r_{v0} -r_{v1}}
\right ) + \beta_0 \exp(s^2\ln v_0)\left (\frac{r - r_{v1}}{r_{v0} -r_{v1}}
\right )$ & $r_{v1} \leq r \leq r_{v0}$ \\
&& $\beta_0 \exp(s^2\ln v_0)$ & $r \geq r_{v0}$ \\
&&&\\
Outflow proper emissivity &$\epsilon(r,s)$&$g_1 r^{-E_{\rm in}}\exp(\ln e_1 s^2)$&$r \leq r_{e1}$\\
&&$~~r^{-E_{\rm mid}}\exp \left[\ln \left
(\frac{e_1(r_{e0}-r)+e_0(r-r_{e1})}{r_{e0}-r_{e1}} \right )
s^2\right ]$&$r_{e1} \leq r \leq r_{e0}$\\
&&$g_0 r^{-E_{\rm out}}\exp(\ln e_0 s^2)$&$r \geq r_{e0}$\\
&&&\\
Outflow $\langle B_r^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$ &
$j(r)$&$j_1 $&$r \leq r_{B1}$\\
&&$\frac{j_1(r_{B0}-r)+j_0(r-r_{B1})}{r_{B0}-r_{B1}}$&$r_{B1} \leq r \leq r_{B0}$\\
&&$j_0 $&$r \geq r_{B0}$\\
Outflow $\langle B_l^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$ &
$k(r)$&$k_1 $&$r \leq r_{B1}$\\
&&$\frac{k_1(r_{B0}-r)+k_0(r-r_{B1})}{r_{B0}-r_{B1}}$&$r_{B1} \leq r \leq r_{B0}$ \\
&&$k_0 $&$r \geq r_{B0}$\\
&&&\\
Backflow streamline index& $t$ & by continuity & $r \leq r_0$ \\
&& $(\xi-\xi_0)/(\xi_{\rm b}-\xi_0)$& $r
\geq r_0$\\
&&&\\
Backflow radius & $x(z,t)$ & $a_2(t)z^2 + a_3(t)z^3$ & $r \leq r_0$ \\
&& $(z-r_0+x_0/\sin\xi_0)\tan[\xi_0+(\xi_{\rm b}-\xi_0)t]$& $r \geq r_0$ \\
&&&\\
Backflow velocity &
$\beta(t)$ & $\beta_{\rm b, in} + t(\beta_{\rm b, out}-\beta_{\rm b, in})$\\
&&&\\
Backflow proper emissivity &
$\epsilon(r,t)$& 0 & $r < r_b$\\
&& $n_{\rm b}(r/r_0)^{-E_{\rm b}}\exp(\ln e_{\rm b}t^2)$& $r \geq r_b$ \\
&&&\\
Backflow $\langle B_r^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$&
$j$& $j_{\rm b}$ &\\
Backflow $\langle B_l^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$& $k$& $k_{\rm b}$ &\\
&&&\\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\begin{center}
\epsfxsize=6cm
\epsffile{sketch.eps}
\caption{Sketch of the assumed geometry. The blue and green curves show the
outer boundaries of the outflowing jet and backflow emission, respectively.
Representative streamlines in the two parts of the flow are shown in red. The
fiducial distances and angles are defined in Section~\ref{geom-functions}.
\label{fig:sketch}
}
\end{center}
\end{figure}
\subsubsection{Geometry}
\label{geom-functions}
We use coordinates $(z,x)$ in a plane containing the jet axis, with $z$ measured
along the axis and $x$ perpendicular to it. The jet is divided into a {\em
flaring region}, where the flow first expands and then recollimates, and a
conical {\em outer region}, as sketched in Fig.~\ref{fig:sketch}. The edge of
the outflow is fully defined by the distance of the transition between the two
regions measured along the axis, $r_0$, the radius, $x_0$, and the opening angle
of expansion in the outer region, $\xi_0$. Individual streamlines in the outer
region are straight, so we can define a streamline index $s = \xi/\xi_0$, where $\xi$
is the angle between the streamline and the axis. $s$ ranges from 0 on-axis to 1
at the edge of the outflow. The two coefficients $a_2(s)$ and $a_3(s)$ of the
cubic expression for the streamline radius in the flaring region (Table~\ref{tab:functions}) are
defined by the conditions that the streamline radius $x(z)$ and its first
derivative $x^\prime(z)$ are continuous at the flaring-outer region boundary.
We also define a distance coordinate $r$ which is continuous along a given
streamline from 0 at the nucleus to $r_0$ at the flaring-outer region boundary
and which thereafter increases as the distance from the boundary surface. The
functional forms for $r$ in the two regions are given in terms of $z$ and
$s$ in Table~\ref{tab:functions}.
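The continuity conditions that fix $a_2(s)$ and $a_3(s)$ reduce to a $2\times2$ linear system. A minimal sketch (our notation: the streamline meets the region boundary at axial distance $z_t$ with radius $x_t$ and slope $m_t$):

```python
def cubic_coefficients(z_t, x_t, m_t):
    """Solve a2*z_t**2 + a3*z_t**3 = x_t (radius continuous) and
    2*a2*z_t + 3*a3*z_t**2 = m_t (slope continuous) for (a2, a3)."""
    a2 = (3.0 * x_t - m_t * z_t) / z_t ** 2
    a3 = (m_t * z_t - 2.0 * x_t) / z_t ** 3
    return a2, a3
```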
For simplicity, the backflow is assumed to follow the same streamline family as
the jet, extended away from the axis. The edge of the backflow in the outer
region is defined by the radius $x_{\rm b}$ at the region boundary and the
opening angle $\xi_{\rm b}$. These are not independent: $x_{\rm
b}/x_0 = \sin\xi_{\rm b}/\sin\xi_0$. The backflow streamline index $t$ ranges from 0
at the backflow/outflow interface to 1 at the outer edge of the backflow. The backflow
streamline radii have the same functional form as their outflow equivalents with
the coefficients $a_2(t)$ and $a_3(t)$ again defined by continuity at the region
boundary.
The assumed backflow geometry is ad hoc, but gives a reasonable match
to the observed extent of the emission.
\subsubsection{Velocity}
\label{vel-functions}
The on-axis velocity profile in the outflow is divided into three parts: (a)
constant with a high velocity close to the nucleus; (b) a linear
decrease and (c) constant with a low
velocity at large distances. The velocity along any off-axis streamline is calculated
using the same expressions but with truncated Gaussian transverse profiles. The velocity
profiles, given explicitly in Table~\ref{tab:functions}, depend on two
transition distances, $r_{v1}$ and $r_{v0}$, the on-axis velocities $\beta_1$
and $\beta_0$ and the fractional edge velocities $v_1$ and $v_0$ (which are
required to be $\leq 1$).
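The resulting velocity law of Table~\ref{tab:functions} can be written compactly as follows (illustrative code; parameter names follow the table):

```python
import math

def beta_outflow(r, s, beta1, beta0, v1, v0, r_v1, r_v0):
    """Outflow velocity: constant inner and outer on-axis values with
    a linear join, and a truncated Gaussian transverse profile
    exp(s**2 * ln v) reaching the fractional edge value v at s = 1."""
    inner = beta1 * math.exp(s * s * math.log(v1))
    outer = beta0 * math.exp(s * s * math.log(v0))
    if r <= r_v1:
        return inner
    if r >= r_v0:
        return outer
    w = (r_v0 - r) / (r_v0 - r_v1)   # linear interpolation weight
    return w * inner + (1.0 - w) * outer
```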
We experimented with several functional forms for the backflow velocity. The
most satisfactory has no dependence on $r$, but varies linearly with streamline
index from $\beta_{\rm b, in}$ at the interface with the outflow to $\beta_{\rm b, out}$ at
the outer edge of the backflow.
\subsubsection{Emissivity}
\label{em-functions}
We write the proper emissivity as $\epsilon f$, where $\epsilon$ is
the emissivity in Stokes $I$ for a magnetic field perpendicular to the line of
sight and $f$ depends on the field geometry (defined in
Section~\ref{mag-functions}, below). $\epsilon$, to which we refer loosely as
`the emissivity', is a function only of the total rms magnetic-field strength
and the normalizing constant of the radiating electron energy distribution.
The on-axis emissivity profile in the outflow is also divided into three
regions, each with a power-law profile. The profile is allowed to be
discontinuous at each of the region boundaries. Off-axis, the profile is
multiplied by a truncated Gaussian function of the streamline index, with
values at the jet edge which are constants in the inner and outer emissivity regions and
vary linearly between them. The free parameters for the emissivity profiles are
the transition distances $r_{e0}$ and $r_{e1}$; the power-law indices $E_{\rm in}$,
$E_{\rm mid}$ and $E_{\rm out}$; $g_1$ and $g_0$, which measure the
discontinuities at the region boundaries; and the edge emissivities $e_1$
and $e_0$. Note that $e_1$ and $e_0$ may be $>1$ (in which case the jet is
limb-brightened), $= 1$ (uniformly filled) or $<1$ (centre-brightened).
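A sketch of this emissivity law (our illustrative code; parameters as in Table~\ref{tab:functions}):

```python
import math

def emissivity_outflow(r, s, E_in, E_mid, E_out, g1, g0,
                       e1, e0, r_e1, r_e0):
    """Three power-law regions with discontinuities g1, g0 at the
    boundaries; the truncated-Gaussian edge value interpolates
    linearly from e1 to e0 in the middle region."""
    if r <= r_e1:
        return g1 * r ** (-E_in) * math.exp(s * s * math.log(e1))
    if r >= r_e0:
        return g0 * r ** (-E_out) * math.exp(s * s * math.log(e0))
    e_mid = (e1 * (r_e0 - r) + e0 * (r - r_e1)) / (r_e0 - r_e1)
    return r ** (-E_mid) * math.exp(s * s * math.log(e_mid))
```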
The backflow emissivity is assumed to be zero within a given distance and to have a
power-law dependence on $r$ with a single index and a truncated Gaussian
dependence on streamline index $t$ elsewhere. The fitted parameters are the
index $E_{\rm b}$, the fractional edge emissivity $e_{\rm b}$, the inner
distance $r_{\rm b}$ and the emissivity ratio between outflow and backflow at
the boundary between the flaring and outer regions, $n_{\rm b}$.
\subsubsection{Magnetic-field structure}
\label{mag-functions}
We define the rms components of the magnetic field to be $\langle
B_l^2\rangle^{1/2}$ (longitudinal, parallel to a streamline), $\langle
B_r^2\rangle^{1/2}$ (radial, orthogonal to the streamline and outwards from the
jet axis) and $\langle B_t^2\rangle^{1/2}$ (toroidal, orthogonal to the
streamline in an azimuthal direction). The rms total field strength is $B =
\langle B_l^2 + B_r^2 + B_t^2\rangle^{1/2}$. The magnetic-field structure is
parameterized by the ratio of rms radial/toroidal field, $j = \langle
B_r^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$ and the longitudinal/toroidal
ratio $k =\langle B_l^2\rangle^{1/2}/\langle B_t^2\rangle^{1/2}$. For the
outflow models in the present paper, these depend only on $r$, being constant
close to and far from the nucleus and varying linearly at intermediate
distances. The free parameters are the fiducial distances $r_{B1}$ and $r_{B0}$
and the field ratios at these distances, $j_1$, $j_0$, $k_1$ and $k_0$.
For the backflow, we assume constant field ratios $j_{\rm b}$ and $k_{\rm
b}$.
\subsection{Modelling of individual sources}
\label{model-specifics}
We estimated the noise levels for each resolution and Stokes parameter based on
the deviations of the brightness distributions from those expected for
axisymmetry, as follows.
\begin{enumerate}
\item Calculate Stokes parameters $Q$ and $U$ in a coordinate system with
position angle 0 along the jet axis.
\item For $I$ and $Q$, take the noise level to be $1/\sqrt{2}$ times the rms
difference
between the image and a copy of itself
reflected across the jet axis.
\item For $U$, take the sum rather than the difference.
\end{enumerate}
These values can be substantially larger than the
off-source rms, but include the effects of small-scale structure (which we do
not attempt to model) and deconvolution errors.
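With the jet axis aligned along one image axis, steps (ii) and (iii) amount to the following (illustrative numpy sketch; we assume array axis 0 is transverse to the jet):

```python
import numpy as np

def symmetry_noise(image, use_sum=False):
    """Noise estimate from departures of the brightness distribution
    from mirror symmetry about the jet axis: 1/sqrt(2) times the rms
    of the difference (I, Q) or sum (U) of the image and its
    reflection."""
    mirrored = image[::-1, :]            # reflect across the jet axis
    resid = image + mirrored if use_sum else image - mirrored
    return np.sqrt(np.mean(resid ** 2) / 2.0)
```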
For 0206+35, we fit to images at the highest available resolution, 0.35\,arcsec
FWHM, using different noise levels for the high-brightness emission close to the
nucleus and the fainter regions farther out. For 0755+37, we fit to 0.4-arcsec
FWHM images of the bright inner jets and 1.3-arcsec images elsewhere. Small
regions around the cores were excluded from the fits, since we model only
optically-thin emission. The model images given below include point sources with
the appropriate observed flux densities at the locations of the cores.
The values of $\chi^2$ summed over all Stokes parameters and resolutions were
8012 over 6696 independent points for 0206+35 and 8022 over 5816 points for
0755+37.
\begin{figure*}
\begin{center}
\epsfxsize=13cm
\epsffile{0206_compare.eps}
\caption{Comparison between the observed and modelled total intensities $I$ and
sidedness ratios $I_{\rm j}/I_{\rm cj}$ for 0206+35. (a) observed and (b)
model false-colour images of $I$. (c) profiles of observed (full/red) and
model (dashed/blue) $I$ along the axis of the jet. (d) and (e) images of
$I_{\rm j}/I_{\rm cj}$. The white contours represent $I_{\rm j}/I_{\rm cj} =
1$: outside the contours, $I_{\rm j}/I_{\rm cj} < 1$. (f) profiles of observed
(full/red) and model (dashed/blue) $I_{\rm j}/I_{\rm cj}$ along the jet axis.
\label{fig:0206comp}
}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\epsfxsize=15cm
\epsffile{0206_compare_pol.eps}
\caption{Comparison between the observed and modelled linear polarization of
0206+35. (a) and (b) colour images of degree of polarization $p = P/I$ in the
range 0 -- 0.7, as indicated by the labelled wedge. Blanked areas are
grey. (a) observed; (b) model. (c) profiles of observed (full/red) and model
(dashed/blue) $p$ along the axis of the jet. (d) and (e) vectors with lengths
proportional to $p$ and directions along the apparent magnetic field
superimposed on colour images of $I$. (d) observed, (e) model.
\label{fig:0206polcomp}
}
\end{center}
\end{figure*}
The quoted uncertainties were also derived as in our earlier
work by varying an individual parameter until $\chi^2$ increased by an amount
corresponding to the formal 99 per cent confidence level, leaving the rest of
the model unchanged. These values
are crude (they neglect coupling between parameters), but in practice give a
good impression of the range of reasonable models. As an additional check, we
also performed a series of optimizations at fixed values of $\theta$ and
tabulate the range over which acceptable solutions could be found.
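The single-parameter error scan can be sketched as follows (our illustrative code; $\Delta\chi^2 = 6.63$ is the 99 per cent level for one interesting parameter, and `chi2` stands in for the full image-plane $\chi^2$):

```python
def confidence_bound(chi2, params, i, chi2_min,
                     delta=6.63, step=0.01, tol=1e-4):
    """Increase parameter i from its best-fitting value until chi^2
    exceeds chi2_min + delta, then bisect; all other parameters are
    held fixed (coupling between parameters is neglected)."""
    def excess(x):
        trial = list(params)
        trial[i] = x
        return chi2(trial) - (chi2_min + delta)

    best = params[i]
    lo, hi = best, best + step
    while excess(hi) < 0.0:                 # bracket the crossing
        lo, hi = hi, best + 2.0 * (hi - best)
    while hi - lo > tol:                    # bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```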
\section{Model-data comparisons}
\label{compare}
\subsection{General}
\label{gencompare}
\begin{figure*}
\begin{center}
\epsfxsize=14cm
\epsffile{trans_0206.eps}
\caption{Transverse profiles of total intensity, $I$, jet/counter-jet sidedness
ratio, $I_{\rm j}/I_{\rm cj}$ and $Q/I$ for 0206+35. The data have been
averaged parallel to the jet axis over three ranges of distance from the
nucleus: 3 -- 5, 5 -- 7 and 7 -- 9\,arcsec, as indicated in the captions
(Section~\ref{gencompare}). Full/red line: observations; dashed/blue line:
model. $Q/I > 0$ and $Q/I < 0$ correspond to transverse and longitudinal
apparent field, respectively.
\label{fig:0206trans}
}
\end{center}
\end{figure*}
In Figs~\ref{fig:0206comp} -- \ref{fig:0206trans} and \ref{fig:0755comp} --
\ref{fig:0755trans}, we show various comparisons between the observed and model
images of the two sources. The images have been rotated by the angles given in
Table~\ref{tab:images} so that the main (approaching) jet points to the right
and the core is either at the centre or the left-hand edge of a plot. The types
of plot are as follows.
\begin{enumerate}
\item False-colour images of total intensity. The angular scale is given on the
accompanying profiles and the brightness range (in mJy\,beam$^{-1}$) is
indicated by the labelled wedges.
\item Longitudinal profiles of total intensity.
\item Images of jet/counter-jet sidedness ratio $I_{\rm j}/I_{\rm cj}$ derived
by dividing the $I$ image by a copy of itself rotated by 180$^\circ$. These
images are blanked (grey) where $I < 3\sigma_I$ on either side of the core (Table~\ref{tab:images}). The
contours show $I_{\rm j}/I_{\rm cj} = 1$. Angular scales are again shown on
the accompanying profiles.
\item Longitudinal profiles of sidedness ratio.
\item Images of degree of polarization, $p = P/I$. These are blanked wherever $I
< 5\sigma_I$. The angular scale is given on the accompanying profiles and the
range is indicated by the labelled wedges. $p$ has been corrected for Ricean
bias \citep{WK}.
\item Profiles of $p$ along the jet axis.
\item Vectors with lengths proportional to $p$ and directions along the
apparent magnetic field, superposed on false-colour images of $I$. The angular
and vector scales are indicated by labelled bars.
\item Averaged transverse profiles of total intensity, $I$, sidedness ratio
$I_{\rm j}/I_{\rm cj}$, and $Q/I$ over selected regions where the brightness
and polarization distributions vary slowly with distance from the
nucleus. Stokes $Q$ is defined in a coordinate system with its axis along the
jet: $Q/I > 0$ for an apparent magnetic field transverse to the axis; $Q/I <
0$ for a longitudinal field. In the flaring region, these profiles were
derived by averaging along radii from the nucleus, in which case they are
plotted against angle from the jet axis. For the outer region, they are
averages along lines parallel to the jet axis and are plotted against angular
distance from the axis. In order to make a fair comparison, only pixels which
were not blanked on the observed images were used in the averages.
\end{enumerate}
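The sidedness image of item (iii) and the Ricean debiasing of item (v) can be sketched as follows (illustrative numpy code; the core is assumed to lie at the image centre, and the debiasing is the usual first-order form):

```python
import numpy as np

def sidedness(I):
    """Jet/counter-jet ratio: divide the I image by a copy of itself
    rotated through 180 degrees about the image centre (the core)."""
    return I / np.rot90(I, 2)

def degree_of_polarization(P, I, sigma_P):
    """First-order Ricean debiasing: p = sqrt(P**2 - sigma_P**2) / I,
    clipped to zero where P < sigma_P."""
    return np.sqrt(np.maximum(P ** 2 - sigma_P ** 2, 0.0)) / I
```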
In general the fits are very good. We examine the correspondence between model
and observed brightness distributions in detail in the next two sub-sections.
\subsection{0206+35}
\label{0206fit}
We show images and longitudinal profiles of total intensity and sidedness ratio
in Fig.~\ref{fig:0206comp} and of degree and direction of polarization in
Fig.~\ref{fig:0206polcomp}. Averaged transverse profiles of $I$, $I_{\rm
j}/I_{\rm cj}$ and $Q/I$ are given in Fig.~\ref{fig:0206trans}.
The model accurately reproduces the main features of the brightness and
polarization distributions of 0206+35, including the following.
\begin{enumerate}
\item The main (approaching) jet has a bright base, with a peak at
$\approx$2\,arcsec from the nucleus (Figs~\ref{fig:0206comp}a -- c).
\item The peak sidedness ratio of $I_{\rm j}/I_{\rm cj} \approx 37$ is
at a distance of $\approx$0.6\,arcsec from the nucleus (Figs~\ref{fig:0206comp}d -- f), close to the position
of the flaring point as determined from high-resolution MERLIN observations
\citep{LGBPB}.
\item At low isophotes, the counter-jet appears wider than the main jet (Figs~\ref{fig:0206comp}a and b).
\item The counter-jet has a limb-brightened structure, which is brightest
between 2.5 and 6\,arcsec from the nucleus, whereas the main jet appears
narrower and is centrally peaked (Figs~\ref{fig:0206comp}a and b; Figs~\ref{fig:0206trans}a -- f).
\item The longitudinal profile of degree of polarization shows the
characteristic asymmetry we have noted in other FR\,I jets: the main jet has a
polarization minimum at $\approx$2.5\,arcsec from the nucleus, corresponding to
the transition between longitudinal and transverse apparent field, whereas the
counter-jet shows a high degree of polarization with a transverse apparent
field, reaching an average of $p \approx 0.5$ at 10\,arcsec (Fig.~\ref{fig:0206polcomp}c).
\item There is a transition in the field direction between transverse on-axis
and aligned with the jet boundaries at the edges on both sides of the
nucleus. This is clear within 2 or 3\,arcsec of the ridge line in the main and
counter-jets, respectively (Fig.~\ref{fig:0206polcomp} and Figs~\ref{fig:0206trans}j -- o). The signal-to-noise ratio in the data is too low to
determine the edge field direction accurately at larger distances, so
discrepancies between observed and predicted $Q/I$ transverse profiles
should not be taken too seriously.
\item Close to the nucleus, the apparent field wraps around the edges of both
jets, with a high degree of polarization, especially on the counter-jet side
(Figs~\ref{fig:0206polcomp}d and e).
\end{enumerate}
\begin{figure}
\begin{center}
\epsfxsize=7cm
\epsffile{outback0206.eps}
\caption{Predicted brightness distributions for the outflowing and backflowing
parts of the model for 0206+35. (a) outflow; (b) backflow.
\label{fig:outback0206}
}
\end{center}
\end{figure}
The main deficiency of the model is that it underpredicts the brightness of the
counter-jet $\ga$5\,arcsec from the axis and overpredicts that
of the main jet between 1.5 and 4\,arcsec. These effects
lead to a model sidedness ratio which is too high off-axis, although
still significantly $<$1. This discrepancy is most obvious between 5 and 7\,arcsec
from the nucleus (Figs~\ref{fig:0206trans}b, e and h), but is restricted to
regions where the brightness is $\la$200\,$\mu$Jy\,beam$^{-1}$. The model is
also constrained to have monotonic deceleration in the outflow and velocity
independent of distance from the nucleus in the backflow, so it cannot
reproduce the increase in sidedness ratio between 8 and 10\,arcsec from the
nucleus. The surface brightness is low at these distances so uncertainties
in lobe subtraction may be significant.
Fig.~\ref{fig:outback0206} shows the predicted brightness distributions for the
outflowing and backflowing parts of the model separately. The former is
similar to the pure outflow models we have derived for other
sources
\citep{LB02a,CL,CLBC,LCBH06}. In the model of 0206+35, the limb-brightening of the counter-jet
is due to a combination of outflow and backflow. In the outflow, the on-axis
velocity remains high, so the edges of the outflowing counter-jet material
appear relatively brighter
because they suffer less Doppler dimming than the on-axis material. This effect is
reinforced by emission from the backflow, which adds a thin shell of emission immediately
surrounding the outflow. Most of the asymmetry is due to the outflow: the
backflow is only slightly brighter on the counter-jet side.
\subsection{0755+37}
\label{0755fit}
We compare model and observed total intensity images and profiles for 0755+37 in
Fig.~\ref{fig:0755comp}; the corresponding polarization comparisons are shown in
Fig.~\ref{fig:0755polcomp} and Fig.~\ref{fig:0755trans} gives averaged
transverse profiles of $I$, $I_{\rm j}/I_{\rm cj}$ and $Q/I$. Note that the
fainter emission is affected by imperfect lobe subtraction, as discussed in
Section~\ref{images}. This is particularly obvious at large distances from the
jet axis in images of ratios such as $I_{\rm j}/I_{\rm cj}$ and $p$.
The following
features of the brightness and polarization distributions are reproduced.
\begin{enumerate}
\item The main jet has a brightness peak at 1.3\,arcsec from the core
(Figs~\ref{fig:0755comp}i -- k). Farther out, the profile declines rapidly with
distance.
\item There is a rapidly-expanding, triangular region of roughly uniform
brightness at the base of the counter-jet (Figs~\ref{fig:0755comp}a and b).
\item The jet base structure is initially very asymmetric, with a peak sidedness ratio
$\approx$80 at 1.9\,arcsec from the core, decreasing rapidly with distance to
reach an asymptotic value $\approx$1 at 15\,arcsec (Figs~\ref{fig:0755comp}e
-- h).
\item At faint brightness levels, the counter-jet appears significantly
wider than the main jet, with a large opening angle (Figs~\ref{fig:0755comp}a
and b).
\item The counter-jet brightness profiles are more flat-topped or edge-brightened
than those of the main jet at most distances from the core (Figs~\ref{fig:0755comp}a and b and Figs~\ref{fig:0755trans}a -- h).
\item A prominent arc of emission crosses the counter-jet at $\approx$26\,arcsec
from the nucleus (Figs~\ref{fig:0755comp}a
and b).
\item There is also a bar of emission crossing the counter-jet at
$\approx$12\,arcsec from the nucleus (Figs~\ref{fig:0755comp}a, b and d).
\item The profiles of degree and direction of polarization along the axis show
the same characteristic asymmetry seen in 0206+35 and other FR\,I jets.
There is a change in apparent field direction at $\approx$5\,arcsec from the
nucleus in the approaching jet, but not in the counter-jet, whose apparent
magnetic field is always transverse (Figs~\ref{fig:0755polcomp}g and h, Figs~\ref{fig:0755trans}q -- t). The degree of polarization in the
counter-jet rises monotonically with distance from the nucleus,
reaching large values ($p \approx 0.5$) far from the nucleus
(Figs~\ref{fig:0755polcomp}a -- c).
\item The degree of polarization in the main jet base is low, and the apparent
field is longitudinal (Figs~\ref{fig:0755polcomp}d -- f, i and j).
\item There are minima in the degree of polarization on either side of the axis
in both jets, corresponding to the transition between transverse and
longitudinal apparent field (Figs~\ref{fig:0755polcomp}a, b, g, h;
\ref{fig:0755trans}m -- t).
\item There is a region of high polarization with a circumferential magnetic
field around the base of the counter-jet (Figs~\ref{fig:0755polcomp}g and h).
\item Determination of the observed polarization in the faint
regions far from the axis is complicated by imperfect subtraction of lobe
emission, but the apparent field is primarily parallel to the edges of both
jets (Figs \ref{fig:0755trans}m -- t).
\end{enumerate}
\begin{figure*}
\begin{center}
\epsfxsize=15cm
\epsffile{0755_compare.eps}
\caption{Comparison between the observed and modelled total intensities $I$ and
sidedness ratios $I_{\rm j}/I_{\rm cj}$ for 0755+37. (a) and (b) colour images
of $I$. (a) observed; (b) model. (c) and (d) profiles of observed and model
$I$ along the axis of the jet. (e) and (f) images of $I_{\rm j}/I_{\rm
cj}$. The white contours represent $I_{\rm j}/I_{\rm cj} =
1$: outside the contours, $I_{\rm j}/I_{\rm cj} < 1$. (g) and (h) profiles of observed and model $I_{\rm
j}/I_{\rm cj}$ along the jet axis. The resolution for panels (a) -- (h) is
1.3\,arcsec FWHM. (i) and (j) colour images of $I$ in the range 0 --
5\,mJy\,beam$^{-1}$ for the base of the main jet. (i) observed, (j) model. (k)
profile of observed and model $I$ along the
jet axis. The resolution for panels (i) -- (k) is 0.4\,arcsec FWHM. In the
profile plots, the full and dotted (red) lines are from observed images with lobe
subtraction by interpolation (as for the colour plots) and spectrum,
respectively. The dashed/blue line is the model.
\label{fig:0755comp}
}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\epsfxsize=18cm
\epsffile{0755_compare_pol.eps}
\caption{Comparison between the observed and modelled linear polarization
for 0755+37 at resolutions of 1.3 and 0.4\,arcsec
FWHM. (a) and (b) colour images
of $p = P/I$ in the range 0 -- 0.7 at 1.3\,arcsec FWHM. (a) observed; (b) model. (c)
profiles of observed (full/red) and model (dashed/blue) $p$ along the axis of the
jet. Only the profile of $p$ derived from interpolated images is plotted; the
equivalent for spectral subtraction is very noisy. (d) -- (f): as (a) -- (c)
but for the main jet only at 0.4\,arcsec FWHM.
(g) and (h): vectors with lengths
proportional to $p$ and directions along the apparent magnetic field
superimposed on colour images of $I$. The resolution is 1.3\,arcsec FWHM and
the vector scale is indicated by the
labelled bar. (g) observed, (h) model. (i) and (j): as (g) and (h), but for
the main jet at 0.4\,arcsec FWHM.
\label{fig:0755polcomp}
}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\epsfxsize=17cm
\epsffile{trans_0755.eps}
\caption{Transverse profiles of total intensity, $I$, jet/counter-jet sidedness
ratio, $I_{\rm j}/I_{\rm cj}$ and $Q/I$ for 0755+37. The data have been
averaged along radii from the nucleus from 8.1 -- 12.6\,arcsec and from 12.6
-- 17.1\,arcsec and parallel to the jet axis from 18 -- 21\,arcsec and 24 --
30\,arcsec, as indicated in the captions (Section~\ref{gencompare}). Full and
dotted (red) lines both represent observations, with lobe subtraction by
interpolation and spectral methods, respectively. Dashed/blue lines show the
model. $Q/I > 0$ and $Q/I < 0$ correspond to transverse and longitudinal
apparent field, respectively.
\label{fig:0755trans}
}
\end{center}
\end{figure*}
Features which are not fitted well by the model are as follows.
\begin{enumerate}
\item The observed brightness distribution of the bright main jet base is
slightly more centre-brightened than the model and the observed degree of
polarization is higher than predicted at its edges (Figs~\ref{fig:0755comp}i,
j; \ref{fig:0755polcomp}d, e, i, j).
\item The observed transverse total-intensity profiles are significantly more
limb-brightened than the model in some places, and in particular between 18
and 21\,arcsec on both sides of the nucleus (Figs~\ref{fig:0755trans}c and g).
\item The inner bar crossing the counter-jet is both straighter and slightly
farther from the nucleus in the observed image ($\approx$13\,arcsec compared
with $\approx$11\,arcsec for the model; Figs~\ref{fig:0755comp}a, b, d). The
fit may be affected by the limb-brightening in this region, however.
\item As in 0206+35, the off-axis brightness of the main jet is slightly overestimated
close to the nucleus. The difference is, however, exaggerated by the look-up table in
Figs~\ref{fig:0755comp}(a) and (b) and is more accurately represented by the
profile in Fig.~\ref{fig:0755trans}(a).
\end{enumerate}
Fig.~\ref{fig:outback0755} shows the outflow and backflow components of the
model intensity distribution. As for 0206+35, the outflow appears similar to
that in other FR\,I radio galaxies, but the backflow is relatively stronger in
0755+37. The prominent curved arc crossing the counter-jet $\approx$26\,arcsec
from the nucleus is modelled as the projection of the inner edge of the backflow
at $r = r_{\rm b}$. This is roughly elliptical in shape, with an axial ratio
of $\sec\theta = 1.22$ and there is good correspondence between model and
data. As mentioned earlier, the fit to the bar crossing the counter-jet closer to the nucleus is less
successful. In the model, this is the other half of the projected inner edge of
the backflow, so there is no freedom to adjust its location or curvature to match the
observed feature more closely. A
similar problem afflicts the main jet: the projection of the inner edge of the
backflow appears slightly too bright, causing the excess off-axis emission close
to the nucleus.
\section{Derived parameters}
\label{intrinsic}
The best-fitting parameters for our models of 0206+35 and 0755+37 are listed
in Tables~\ref{tab:outflow} (outflow) and \ref{tab:backflow} (backflow).
\subsection{Geometry}
Both sources are fairly close to the line of sight, as expected from their high
jet/counter-jet sidedness ratios and bright cores. We derive $\theta = 40^\circ$
for 0206+35 and $35^\circ$ for 0755+37. The outflow geometries are typical of
those we have determined for other FR\,I jets, with the boundaries between
flaring and outer regions at 5.3 and 13.9\,kpc from the nucleus for 0206+35 and
0755+37, respectively. The corresponding half-opening angles in the outer
regions are 3\fdg 9 and 7\fdg 4.
In 0206+35, the backflow has a half-opening angle of 11\degr\ in the outer
region and its emission extends back into the flaring region, with a cut-off at $r_{\rm b}
= 2.7$\,kpc. For 0755+37, on the other hand, the backflow emission is truncated within
the outer region ($r_{\rm b} = 23$\,kpc), where its half-opening angle is
16\degr.
\subsection{Velocity}
\label{vel-parms}
\begin{figure}
\begin{center}
\epsfxsize=7cm
\epsffile{outback0755.eps}
\caption{Predicted brightness distributions for the outflowing and backflowing
parts of the model for 0755+37. (a) outflow; (b) backflow.
\label{fig:outback0755}
}
\end{center}
\end{figure}
Velocity images derived from our model fits are shown in Fig.~\ref{fig:vels}.
The initial velocities of both outflow components are similar ($\beta_1 = 0.86$
for 0206+35 and 0.88 for 0755+37) and the associated transverse velocity
profiles are close to uniform. 0206+35 shows little on-axis deceleration,
reaching an asymptotic velocity $\beta_0 = 0.68$ after 4\,kpc. Its transverse
velocity profile evolves much more, and the fractional edge velocity is 0.04 at
large distances. In both these respects, the source resembles 3C\,296
\citep{LCBH06}. 0755+37, on the other hand, appears to decelerate rapidly, to
$\beta_0 = 0.25$ by 18.5\,kpc, with a fractional edge velocity of 0.26. This
estimate should be treated with caution since the emission in the outer
counter-jet is dominated by the backflow component, making it difficult to
assess the intensity or polarization of the outflow there.
The backflow velocities increase away from the source axis, from $\beta = 0.05$
to 0.20 for 0206+35 and from 0.25 to 0.35 for 0755+37.
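These velocity estimates can be placed in context using the standard Doppler-beaming expression for intrinsically symmetric, smooth twin jets, $I_{\rm j}/I_{\rm cj} = [(1+\beta\cos\theta)/(1-\beta\cos\theta)]^{2+\alpha}$. The sketch below is an illustration only: it uses the on-axis fitted values and ignores the transverse velocity structure and emissivity weighting of the full models.

```python
import math

def sidedness_ratio(beta, theta_deg, alpha=0.55):
    """Jet/counter-jet intensity ratio for symmetric twin jets moving at
    speed beta*c at angle theta to the line of sight (continuous-jet
    exponent 2 + alpha)."""
    mu = math.cos(math.radians(theta_deg))
    return ((1 + beta * mu) / (1 - beta * mu)) ** (2 + alpha)

# Near the flaring point of 0206+35 (beta_1 = 0.86, theta = 40 deg): ~56
print(sidedness_ratio(0.86, 40.0))
# Decelerated outer jet of 0755+37 (beta_0 = 0.25, theta = 34.8 deg): ~2.9
print(sidedness_ratio(0.25, 34.8))
```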
\subsection{Emissivity}
Model images of $n_0 B^{1+\alpha}$ (proportional to the emissivity function
$\epsilon$) are shown in Fig.~\ref{fig:emiss}.
The model outflow components again show properties very similar to those in
other FR\,I jets. The locations of the flaring points (0.82 and 1.55\,kpc from
the nucleus for 0206+35 and 0755+37, respectively) are well determined and
consistent with higher-resolution observations \citep{LGBPB}. The emissivity
variations in the faint and poorly resolved inner jets upstream of the flaring
points are not well constrained. In the flaring and outer regions, the gradient
of the emissivity profile flattens with distance in both sources, as is usual in
FR\,I jets. 0755+37 requires a sudden decrease in emissivity with distance at $r
= r_{e0}$ whereas 0206+35 does not.
The observed limb-brightening in both sources shows side-to-side symmetry. This
cannot result from a transverse velocity gradient in the sense we have inferred,
which would lead to limb-brightening only in the counter-jet. In agreement with
this qualitative argument, the best-fitting transverse emissivity profiles are
higher at the edges than on-axis. This effect is slight in 0206+35, where the
profile is consistent with a uniformly-filled cylinder everywhere. In 0755+37,
however, limb-brightening is required over much of the outer region
(Fig.~\ref{fig:emiss}b). As noted in Section~\ref{0755fit}, the observed
transverse intensity profiles in this source are significantly more
limb-brightened than the model predicts, suggesting that there is a narrow
enhancement in emissivity at the boundary between the outflow and backflow. The
functional form we assume for the transverse variation of emissivity does not
allow for such narrow features.
The backflow emissivity decreases with distance at similar rates in the two
sources ($\propto r^{-1.66}$ in 0206+35 and $\propto r^{-1.81}$ in 0755+37). It
is centre-brightened in 0206+35 ($e_{\rm b} = 0.02$) but closer to uniform in
0755+37 ($e_{\rm b} = 0.79$).
\subsection{Field ordering}
The fractional components of magnetic field, $\langle B^2_t \rangle^{1/2}/B$
(toroidal), $\langle B^2_l \rangle^{1/2}/B$ (longitudinal), and $\langle B^2_r
\rangle^{1/2}/B$ (radial) are plotted in Fig.~\ref{fig:bfield}.
In both sources, the field close to the nucleus in the outflow is close
to isotropic, with the longitudinal component just exceeding the other
two. At larger distances, the toroidal component dominates, with significant
longitudinal and radial contributions in 0206+35 and 0755+37, respectively. As
for velocity and emissivity, the field components in the outer parts of 0755+37
may have larger systematic errors because of the dominance of backflow emission.
The field in the backflow is toroidally dominated in both sources, with
non-negligible radial components in both cases and some longitudinal field in
0206+35.
\begin{center}
\begin{table}
\caption{Model parameters which are common to outflow and backflow, or which
apply only to the outflow. Col.\,1: parameter; col.\,2: unit;
cols 3 and 4: values for 0206+35 and 0755+37. The parameters are defined in
Section~\ref{fit-funcs} and listed in
Table~\ref{tab:functions}. $\Delta\theta$ is the range of angles to the line
of sight for which any acceptable solutions can be obtained.\label{tab:outflow}}
\begin{tabular}{lrrr}
\hline
&&&\\
\multicolumn{2}{c}{Variable} &0206+35&0755+37\\
&&&\\
\hline
&&&\\
\multicolumn{4}{c}{Geometry (common to outflow and backflow)}\\
&&&\\
$\theta$&\degr &$40.0_{-0.3}^{+0.3}$ &$34.8_{-0.8}^{+0.7}$ \\
$\Delta\theta$&\degr&$34 - 43$ &$32.5 - 37.5$ \\
$r_0$&kpc &$5.3_{-0.1}^{+0.1}$ &$13.9_{-0.3}^{+0.3}$ \\
&&&\\
\multicolumn{4}{c}{Outflow geometry}\\
&&&\\
$\xi_0$&\degr &$3.9_{-0.2}^{+0.2}$ &$7.4_{-0.1}^{+0.2}$ \\
$x_0$&kpc &$1.32_{-0.04}^{+0.02}$&$3.88_{-0.06}^{+0.08}$\\
&&&\\
\multicolumn{4}{c}{Velocity}\\
&&&\\
$r_{v1}$&kpc&$1.8_{-0.3}^{+0.3}$ &$3.6_{-1.5}^{+1.6}$ \\
$r_{v0}$&kpc&$4.1_{-0.2}^{+0.3}$ &$18.5_{-1.5}^{+2.3}$ \\
$\beta_1$& &$0.86_{-0.07}^{+0.08}$&$0.88_{-0.04}^{+0.05}$\\
$\beta_0$& &$0.68_{-0.05}^{+0.09}$&$0.25_{-0.05}^{+0.07}$\\
$v_1$& &$0.95_{-0.13}^{+0.05}$&$1.00_{-0.06} $\\
$v_0$& &$0.04_{-0.01}^{+0.02}$&$0.26_{-0.11}^{+0.19}$\\
&&&\\
\multicolumn{4}{c}{Emissivity}\\
&&&\\
$r_{e1}$&kpc &$0.82_{-0.02}^{+0.02}$&$1.55_{-0.03}^{+0.04}$\\
$r_{e0}$&kpc &$2.04_{-0.06}^{+0.07}$&$10.2_{-0.3}^{+0.1}$ \\
$E_{\rm in}$& &$\approx 3.1$ &$\approx 2.4$ \\
$E_{\rm mid}$&&$2.59_{-0.08}^{+0.09}$&$3.76_{-0.04}^{+0.02}$\\
$E_{\rm out}$&&$2.13_{-0.06}^{+0.08}$&$1.16_{-0.09}^{+0.05}$\\
$e_1$& &$1.2_{-0.5}^{+0.6}$ &$1.0_{-0.2}^{+0.3}$ \\
$e_0$& &$1.14_{-0.16}^{+0.16}$&$2.2_{-0.3}^{+0.5}$ \\
$g_1$& &$1.7_{-1.3}^{+0.8}$ &$1.7_{-0.4}^{+0.5}$ \\
$g_0$& &$1.05_{-0.09}^{+0.08}$&$0.52_{-0.03}^{+0.06}$\\
&&&\\
\multicolumn{4}{c}{Field component ratios}\\
&&&\\
$r_{B1}$&kpc &$<1.4$ &$8.8_{-2.0}^{+2.8}$ \\
$r_{B0}$&kpc &$4.6_{-0.5}^{+0.5}$ &$15.4_{-3.2}^{+2.5}$ \\
$j_1$ &&$1.50_{-0.22}^{+0.34}$&$0.96_{-0.09}^{+0.13}$\\
$j_0$ &&$0.11_{-0.11}^{+0.13}$&$0.44_{-0.15}^{+0.12}$\\
$k_1$ &&$1.36_{-0.13}^{+0.13}$&$1.15_{-0.07}^{+0.08}$\\
$k_0$ &&$0.64_{-0.04}^{+0.05}$&$0.08_{-0.08}^{+0.22}$\\
&&&\\
\hline
\end{tabular}
\end{table}
\end{center}
\begin{center}
\begin{table}
\caption{Model parameters for backflow (Section~\ref{fit-funcs} and Table~\ref{tab:functions}). Col.\,1: parameter; col.\,2: unit;
cols 3 and 4: values for 0206+35 and 0755+37.\label{tab:backflow}}
\begin{tabular}{lrrr}
\hline
&&&\\
\multicolumn{2}{c}{Variable} &0206+35&0755+37\\
&&&\\
\hline
&&&\\
\multicolumn{4}{c}{Geometry}\\
&&&\\
$\xi_{\rm b}$&\degr&$10.9_{-0.5}^{+0.5}$&$15.6_{-0.1}^{+0.5}$\\
$r_{\rm b}$&kpc &$2.7_{-0.2}^{+0.1}$ &$23.2_{-0.7}^{+0.8}$\\
&&&\\
\multicolumn{4}{c}{Velocity}\\
&&&\\
$\beta_{\rm b, in}$ &&$0.02_{-0.02}^{+0.03}$&$0.25_{-0.07}^{+0.04}$\\
$\beta_{\rm b, out}$&&$0.20_{-0.07}^{+0.06}$&$0.35_{-0.05}^{+0.05}$\\
&&&\\
\multicolumn{4}{c}{Emissivity}\\
&&&\\
$n_{\rm b}$&$\times 100$&$2.3_{-0.2}^{+0.2}$ &$0.094_{-0.010}^{+0.000}$\\
$E_{\rm b}$& &$1.66_{-0.07}^{+0.06}$&$1.81_{-0.05}^{+0.07}$\\
$e_{\rm b}$& &$0.05_{-0.01}^{+0.02}$&$0.79_{-0.14}^{+0.13}$\\
&&&\\
\multicolumn{4}{c}{Field component ratios}\\
&&&\\
$j_{\rm b}$&&$0.24_{-0.07}^{+0.08}$&$0.38_{-0.07}^{+0.07}$\\
$k_{\rm b}$&&$0.38_{-0.09}^{+0.08}$&$0.03_{-0.03}^{+0.15}$\\
&&&\\
\hline
\end{tabular}
\end{table}
\end{center}
\begin{figure}
\begin{center}
\epsfxsize=6.0cm
\epsffile{vels.eps}
\caption{The model values of velocity $\beta$ in units of $c$ in planes
containing the jet axes. Positive and negative values of $\beta$ denote
outflow and backflow, respectively. (a) 0206+35, (b) 0755+37.
\label{fig:vels}
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=6.0cm
\epsffile{emiss.eps}
\caption{The model values of $\log (n_0 B^{1+\alpha})$ in planes containing the
jet axes ($n_0$ and $B$ are in SI units). (a) 0206+35, (b) 0755+37.
\label{fig:emiss}
}
\end{center}
\end{figure}
\subsection{Backflow spectral index}
\label{spectra}
We can also constrain the spectral index of the radio emission from
the backflows. The spectral indices at the edges of the
jets, where the line of sight is mainly through backflow emission after the lobe
subtraction, are much closer to those of the jets themselves than to the values
elsewhere in the lobes. We can estimate the spectrum of the backflow emission
directly from the images shown in Figures~\ref{fig:0206sub}(c) and
\ref{fig:0755sub}(c) or, more accurately, by integrating total intensity at
1.425 and 4.860\,GHz over pixels which are unblanked in these images. The latter
method gives mean spectral indices of 0.50 for 0206+35 and 0.57 for 0755+37,
compared with 0.55 and 0.53 for the sum of outflow and backflow emission.
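The two-frequency estimate amounts to the usual two-point spectral index with the $S_\nu \propto \nu^{-\alpha}$ convention used here; a minimal helper (a sketch, not the reduction pipeline):

```python
import math

def two_point_alpha(s1, nu1, s2, nu2):
    """Two-point spectral index alpha with the convention S ~ nu**(-alpha)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# A backflow with alpha = 0.50 between 1.425 and 4.860 GHz has a flux ratio
# S(1.425)/S(4.860) = (4.860/1.425)**0.50, i.e. about 1.85
```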
\section{Discussion}
\label{discuss}
\subsection{Testing the hypothesis}
It is clear that the initial jet base asymmetries of most FR\,I jets are
produced by relativistic aberration \citep{LB02a,CL,CLBC,LCBH06}. If 0206+35
and 0755+37 prove to be typical -- in that counter-jets consistently appear
wider than the main jets at a given isophote in lobed FR\,I sources whose jet
base asymmetries are large -- then the jet/counter-jet width asymmetry must also
be correlated with jet orientation. The models presented in
Section~\ref{compare} show that mildly relativistic backflow offers a possible
cause for such an orientation-dependent effect.
There is an alternative explanation for $I_{\rm j}/I_{\rm cj}$ becoming $<$1
in some parts of a source which also preserves the orientation-dependence of
the effect. For the special case where the magnetic field is purely toroidal
and the edge velocity is $\approx \cos\theta$, it is possible for relativistic
aberration to give an off-axis jet/counter-jet sidedness ratio $<$1 even for a
pure symmetrical {\em outflow}. We analyse this special case in
Appendix~\ref{toroidal}, where we show that it is {\em inconsistent} with the
polarization imaging of 0206+35 and 0755+37. The mechanism inevitably produces
degrees of polarization close to the theoretical maximum of $p_0 \approx 0.7$
with a transverse apparent field. It is therefore unlikely to be important in
the majority of observed jets, but it may be relevant in a few objects like
3C\,296 (Appendix~\ref{toroidal}).
If the jets are intrinsically symmetrical, then the backflow hypothesis remains
the most plausible explanation for the observed brightness and polarization
asymmetries, but (with only two clear-cut cases analysed in such detail so far)
it is important to test it by looking at more objects. We reviewed the rest of
the B2 low-luminosity source sample \citep{Parma87} to see if any other data
support (or contest) the interpretation given here. \cite{LPdRF} found that the
source B2\,0844+31 also has both a small jet to counter-jet width ratio and a
high intensity ratio $I_{\rm j}/I_{\rm cj}$. Unfortunately, there is no imaging
for that source of the high quality we now have for 0206+35 and 0755+37 so we
cannot test models of its asymmetries at the same level of detail. Nor can we
classify its large scale structure definitively as `lobed' or `plumed': deeper
imaging sensitive to its most extended structure is needed. Although lack of
high-quality imaging precludes us from finding other good examples of these
phenomena in the B2 sample, we note that there are no clear {\sl
counter}-examples -- either of sources in which the brighter jet appears to be
wider than the counter-jet at low intensity levels, or of a large
jet/counter-jet width asymmetry in a source that lacks `lobed' structure or with
only a small jet/counter-jet {\sl intensity} asymmetry at its base.
As noted by \citet[see their fig.~15]{LCBH06}, the jets in the lobed FR\,I
source 3C\,296 show $I_{\rm j}/I_{\rm cj} < 1$ at their edges. The emission
there is faint, but the effect is consistently present in the flaring and outer
regions. The transverse variations of linear polarization are also very
different in the two jets \citep[Figs~18g and h]{LCBH06}: the counter-jet shows
a prominent parallel-field edge, whereas the main jet does not. The model
described by \citet{LCBH06}, while giving a good overall fit to the brightness
and polarization distributions of 3C\,296, was not consistent with the
observation of $I_{\rm j}/I_{\rm cj} < 1$ and did not fully reproduce the flat
profile of $p$ with transverse apparent field in the approaching jet. We have
examined possible backflow models for 3C\,296 and find that they are
qualitatively inconsistent with the polarization distribution, although they can
easily fit the edge sidedness ratios. The combination of sidedness ratio and
polarization is more reminiscent of the predictions of the outflow model
analysed in Appendix~\ref{toroidal}.
Emission from backflow such as that modelled here would be hard to recognise
in lobed FR\,I sources whose jets are close to the plane of the sky. The backflow emission in
such sources would be almost indistinguishable from faint outer edges of their jets and
only unusually precise spectral index measurements could distinguish it from low level
brightness enhancements of the lobes near the jets.
\begin{figure*}
\begin{center}
\epsfxsize=15cm
\epsffile{bcomps.eps}
\caption{The fractional magnetic field components for the three sources. (a),
(d) toroidal, $\langle B_t^2/B^2\rangle^{1/2}$; (b), (e)
longitudinal, $\langle B_l^2/B^2\rangle^{1/2}$; (c), (f) radial, $\langle
B_r^2/B^2\rangle^{1/2}$. (a) -- (c) 0206+35, (d) -- (f) 0755+37.
\label{fig:bfield}
}
\end{center}
\end{figure*}
\subsection{Should we expect backflows in FR\,I sources?}
Light jets propagating into dense media can be expected to terminate in one of
two ways. They may decelerate and transition into `plumes' or `tails' that are
deflected away from the AGN by external pressure gradients or by winds in the
IGM. Alternatively, they may deflect before reaching a contact discontinuity
with the denser external medium, thus accumulating a `cocoon' around the
outflow. The first process is thought to underlie the formation of plumed or
tailed FR\,I radio sources such as 3C\,31 while the second is thought to form
the `classical double' lobed radio sources such as Cygnus A and is often
associated with FR\,II morphology. Lobes in the (generally more luminous)
FR\,II sources also frequently contain discrete radio `hot spots' that are
identified with strong shocks where well-collimated (supersonic) outflows are
slowed and begin to supply lobe material. There is no reason to suppose,
however, that discrete hot spot formation is a requisite for cocoon (or radio
lobe) formation -- momentum balance alone requires the deflection of the light
outflow if it cannot escape along its initial path owing to development of a
high pressure region downstream. Cocoons without hot spots are indeed seen in
simulations of relativistic jets which are much lighter than their surroundings
\citep{PM07,Rossi08}, in which the jet flows are transonic where they terminate.
The majority of FR\,I sources form radio lobes
whose detailed morphologies, spectral characteristics and polarization properties strongly
resemble those of higher-power FR\,II lobes \citep{PDF96,Parma99,LGBPB}. Their lobes
have sharp outer brightness gradients,
circumferential magnetic fields, and spectral indices that steepen towards the centre
of the source on the largest angular scales -- but {\sl without} hot spots. Furthermore,
outflows in lobed FR\,I sources can deflect
through large angles without losing their identities: \cite{LGBPB} found regions
where emission with jet-like spectral index $\approx$0.6 had displaced steeper-spectrum
emission within FR\,I lobes. These results suggest that ongoing large-scale flow is
present in these lobes well beyond the clearly recognisable jets.
There is therefore both theoretical and observational support for
supposing that jet outflows containing relativistic particles and
magnetic fields may be redirected through large angles in lobed FR\,I
sources. The additional ingredient suggested by our modelling of
0206+35 and 0755+37 is that a component of such an outflow in an FR\,I
source can return to the vicinity of the AGN as mildly relativistic
backflow. As we noted in the introduction to this paper, this idea is
supported by the presence of backflow with $\beta \ga 0.2$ around the
jets in some numerical simulations of the propagation of light,
relativistic jets. The simulation by \citet{PM07} used initial
conditions for the jet derived from our FR\,I source models
\citep{LB02a,LB02b} and realistic density and pressure gradients in
the surrounding galactic and group atmosphere \citep{Hard02}. In
particular, the velocity at injection was $\beta = 0.87$ and the
initial density contrast (the ratio of the density of the jet to that
of its surroundings) was $\eta = 10^{-5}$. Although the jet had
propagated only $\approx$15\,kpc by the end of the simulation, the
structure already resembled a lobed FR\,I source of the type discussed
here, with a cocoon of backflowing, mixed jet and external plasma
surrounding the jet. The jet was transonic at its termination, so no
hot spot was formed. Typical backflow velocities in the cocoon were
$\beta \approx 0.15$, with values reaching $\beta \approx 0.4$ close
to the nucleus. The use of an open boundary condition in the symmetry
plane at the base of the jet can cause the backflow speed to be
over-estimated \citep{Saxton02}, although \citet{PM07} argued that this
effect was small in their simulation because the flow through the open
boundary was negligible. One other possible concern is that the
simulation by \cite{PM07} was axisymmetric: the speed and extent of
fast backflow appear to be smaller in some fully three-dimensional
simulations compared with the equivalent axisymmetric cases
\citep{Norman96,Aloy99}. We note, however, that the comparison may not
be relevant to lobed FR\,I sources because the density contrast, $\eta
= 0.01$, was much higher in these two examples, leading to cocoons
which were far longer and thinner than those observed. The
three-dimensional simulation of a relativistic jet with $\eta =
10^{-4}$ by \citet{Rossi08} indeed showed fast backflow with $\beta
\approx 0.4$, despite the use of symmetric boundary conditions at the
jet inlet. The initial conditions (jet Lorentz factor $\Gamma = 10$)
and the assumption of a uniform external density are probably more
appropriate to smaller physical scales than we consider here, however.
Thus, although the assumptions and initial conditions of the
simulations by \citet{PM07} and \cite{Rossi08} are not realistic
enough to permit a quantitative comparison with our results, they do
suggest that the idea of fast backflow is a reasonable one provided
that the density contrast is very small ($\la 10^{-4}$).
The simulations discussed above are entirely hydrodynamic. We also note that
backflow is an expected ingredient of models of magnetic hoop stress collimation
of current-carrying jets because such models must provide a return current path --
although it is unclear that such return paths need be as close to the jet outflow
boundary as the backflow we have described here.
\section{Summary and Further Work}
\label{summary-further}
\subsection{Summary}
\label{summary}
We have shown that many aspects of the intensity and linear polarization distributions
over the inner jets and counter-jets in the lobed FR\,I radio sources 0206+35 and
0755+37 are accounted for by an intrinsically symmetrical decelerating
relativistic jet model that includes (mildly) relativistic backflow around both jets.
We have estimated properties of this backflow subject to the simplifying assumptions
that it is symmetrical across the AGN, axisymmetric, and that its streamlines are
similar in shape to those of the outflow. Although these assumptions are likely
to be too simple a priori, we nevertheless find that the quality of the $IQU$ fits
obtained with the models including such symmetric backflow is
similar to that obtained with pure decelerating outflow models of other
FR\,I jets \citep{LB02a,CL,CLBC,LCBH06}. Furthermore, the outflow components of
the models we have fitted to 0206+35 and 0755+37 are quite similar to those
obtained for other FR\,I sources. The addition of backflow to the models
therefore suffices to explain the otherwise anomalous jet/counter-jet
asymmetries of both sources and eliminates the need to invoke ad hoc
environmental (or other intrinsic) side-to-side asymmetries.
The salient features of backflow inferred from this procedure are as follows.
\begin{enumerate}
\item The backflow velocities are mildly relativistic, in the range $0.05 \la \beta \la 0.35$ (Fig.~\ref{fig:vels}).
\item The backflows are approximately symmetric around the outflows and their radio
emission comes from a hollow cone surrounding the jet axis with additional half-opening
angles $\approx 8\degr$.
\item They can be traced to considerable distances from the AGN (at least 15\,kpc
for 0206+35 and 50\,kpc for 0755+37) but the emission close to the ends of the
jets in both sources is chaotic, and it is not clear where the backflows
begin.
\item They do not emit synchrotron radiation all the way in to the AGN (Fig.~\ref{fig:emiss}).
\item The backflows emit with a spectral index $\alpha \approx 0.55$
(Section~\ref{spectra}). This spectral index is lower than that of the
nearby lobes and comparable with those of the outflows.
\item Their magnetic fields are mostly toroidal and their emissivities decrease
with distance roughly as $r^{-1.7}$ (Figs~\ref{fig:emiss} and \ref{fig:bfield}).
\end{enumerate}
These are the only two lobed FR\,I sources for which we have deep enough imaging and
polarimetry to reveal the `two-component' aspect of the jets and counter-jets that
motivated this study. The generality of our results could thus be called into question by a
{\sl single} new example of an FR\,I source with strong jet-width asymmetries in which
either (a) the axis is inferred to be close to the plane of the sky or (b) the apparently wider features are
associated with the brighter jet. With only two examples of possible backflow features
we also cannot address whether {\sl all} lobed FR\,I sources might contain backflow or
(conversely) whether backflow exists {\sl only} in lobed sources.
The interpretation including backflow will continue to
be preferable to any involving intrinsic side-to-side width differences if further studies
find the apparently wider features only on the counter-jet side, and only in lobed sources
for which inclination indicators suggest that the jets are at moderately large angles to the
plane of the sky.
\subsection{Open questions and further work}
\label{open}
Our observations and models give no clue about the ultimate fate of
the backflow or how it may interact with the outflow, but they raise a
number of questions which could be addressed by deeper,
higher-resolution observations of 0206+35 and 0755+37.
\begin{enumerate}
\item Where does the backflow originate? Does it start in a high-pressure
region at the end of the outflow?
\item Does the backflow shield the jet from entrainment or interaction with the
lobe plasma?
\item Does the presence of the backflow perturb the jet structure in any way?
\item Where does the backflow ultimately go: sideways or even closer to the AGN?
\item Why does the backflow radiate strongly where it does and stop radiating
close to the AGN?
\item Can the backflow really be faster than the asymptotic velocity of the
outflow, as appears at first sight to be the case in 0755+37\footnote{The
asymptotic outflow velocity is poorly constrained (Section~\ref{vel-parms}),
so this difference may not be real.}?
\end{enumerate}
Additional questions which could be answered by observations of a
sample of FR\,I sources include the following.
\begin{enumerate}
\item Do jets in other FR\,I sources with large jet-width asymmetries
also have the two-component jet and counter-jet structure found here
in 0206+35 and 0755+37 (i.e.\ a strongly peaked, centre-brightened
main jet and a centre-darkened counter-jet near the axis, and
counter-jet emission consistently brighter than that of the main jet
further from the axis)?
\item Does the counter-jet/jet width asymmetry indeed correlate well
with orientation indicators -- counter-jet/jet intensity ratios and
normalized core power -- as expected in a relativistic backflow
model of this asymmetry? If so, the tightness of the correlation
with orientation indicators could be used to constrain the intrinsic
symmetry of the backflow.
\item Does the width asymmetry indeed occur only in {\sl lobed}
FR\,I's? It will be important to obtain images which are
sufficiently sensitive to extended structure to detect faint lobe
emission in any sources whose structural classification is dubious.
\end{enumerate}
The high sensitivity and resolution of the imaging needed to address
all of these issues and to test backflow models of the type we have
proposed will require the use of the Jansky (Expanded) Very Large
Array and {\sl e-MERLIN}.
Given the similarities between the extended emission in FR\,II and
lobed FR\,I sources, it would also be interesting to search for
evidence of backflow in the former class. The jets in FR\,II sources
are usually much narrower than those we have imaged in the present
study and are thought to be highly supersonic where they terminate in
compact hot spots. Backflow is predicted by simulations of FR\,II
dynamics, but it is unclear how its properties might depend on density
contrast, Mach number, magnetization and source age. It may be that
observations of FR\,II sources without prominent hot spots will offer
the best chance of detecting backflows. Counter-jets in FR\,II sources
are faint and difficult to distinguish from filamentary lobe emission,
so identification of any backflow component may be even more
challenging than in FR\,I's.
Three-dimensional simulations of very light, relativistic jets
propagating in realistic external density and pressure distributions
would be extremely valuable in understanding the backflow
phenomenon in FR\,I sources. To be realistic, such simulations should
be bipolar, with initial density contrasts $\approx$10$^{-5}$. The
effects of magnetic fields (ordered or disordered) on the flow also
remain to be investigated.
\section*{Acknowledgements}
The National Radio Astronomy Observatory is a facility of the National Science
Foundation operated by Associated Universities, Inc. under co-operative
agreement with the National Science Foundation. We are grateful to the referee
for a very careful reading of the paper. RAL would also like to thank Alan and
Mary Bridle for hospitality.
\section{Introduction}
In this paper we will consider the expansion properties of a random binomial simplicial complex past the threshold
for cohomological connectivity.
This model was introduced by Linial and Meshulam~\cite{ar:LinMesh2006} and it is
a generalisation of the binomial random graph $G(n,p)$.
Let $Y(n,p;d)$ denote the random $d$-dimensional simplicial complex on $[n]:=\{1,\ldots, n\}$ where all possible
faces of dimension up to $d-1$ are present but each subset of $[n]$ of size $d+1$ becomes a face with probability $p=p(n) \in [0,1]$, independently of every other subset of size $d+1$. When $d=1$, the model reduces to the binomial random graph on $[n]$ with edge probability equal to $p$.
In their seminal paper, Linial and Meshulam~\cite{ar:LinMesh2006} considered the \emph{cohomological connectivity} of $Y(n,p;2)$, that is, whether the first cohomology group $H^1(Y(n,p;2);\mathbb Z_2)$ over $\mathbb Z_2$ is trivial.
They discovered a threshold function for the 2-face probability $p$:
$$\lim_{n\to \infty} \Prob{H^1(Y(n,p;2);\mathbb Z_2) \ \mbox{is trivial}} =
\begin{cases}
1, & \mbox{if $p = \frac{2\log n + \omega(n)}{n}$} \\
0, & \mbox{if $p = \frac{2\log n - \omega (n)}{n}$}
\end{cases},
$$
where $\omega: \mathbb N \to \mathbb R_+$ is an arbitrary function such that $\omega (n) \to \infty$ as $n\to \infty$ and
$\log$ is the natural logarithm.
This generalises the classic theorem of Gilbert~\cite{ar:Gilbert59} and
Erd\H{o}s and R\'enyi~\cite{ar:ErdosRenyi60} regarding the (graph) connectivity of $G(n,p)$ ($G(n,m)$, respectively). In particular, let $\omega : \mathbb N \to \mathbb R_+$
be a function such that $\omega (n) \to \infty $ as $n \to \infty$;
if $p = \frac{\log n + \omega (n)}{n}$, then w.h.p. $G(n,p)$ is connected, whereas if $p = \frac{\log n - \omega (n)}{n}$, then w.h.p. $G(n,p)$ has an isolated vertex.
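The graph statement is easy to probe empirically. The following sketch (an illustration added here, not part of the original argument) samples $G(n,p)$ and tests connectivity by breadth-first search; averaging over many samples with $p$ slightly above or below $\log n / n$ exhibits the threshold.

```python
import random
from collections import deque

def gnp_connected(n, p, rng):
    """Sample G(n, p) and test connectivity via breadth-first search."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n
```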
The theorem of Linial and Meshulam was extended by Meshulam and Wallach~\cite{meshulamwallach} to any dimension $d \geq 2$, with $\mathbb Z_2$ replaced by any finite group $R$:
$$\lim_{n\to \infty} \Prob{H^{d-1}(Y(n,p;d);R) \ \mbox{is trivial}} =
\begin{cases}
1, & \mbox{if $p = \frac{d\log n + \omega(n)}{n}$} \\
0, & \mbox{if $p = \frac{d\log n - \omega (n)}{n}$}
\end{cases},
$$
where $\omega$ is as above.
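For $d = 2$ and coefficients in $\mathbb Z_2$, this can be explored numerically: since the $1$-skeleton of $Y(n,p;2)$ is complete, $\dim H^1(Y;\mathbb Z_2) = \binom{n}{2} - \operatorname{rank}\partial_2 - (n-1)$, where $\partial_2$ is the boundary matrix from triangles to edges. The following Python sketch (an illustration added here, computing the $\mathrm{GF}(2)$ rank with an XOR linear basis) samples the complex and tests triviality.

```python
import itertools
import random

def h1_trivial(n, p, rng=None):
    """Sample Y(n, p; 2) (complete 1-skeleton; each triangle kept with
    probability p) and test whether H^1(Y; Z_2) is trivial."""
    rng = rng or random.Random(0)
    edge_index = {e: i for i, e in
                  enumerate(itertools.combinations(range(n), 2))}
    # Each kept triangle is a row of the boundary matrix d_2 over GF(2),
    # encoded as a bitmask over the three edges it contains.
    rows = []
    for u, v, w in itertools.combinations(range(n), 3):
        if rng.random() < p:
            mask = 0
            for e in ((u, v), (u, w), (v, w)):
                mask |= 1 << edge_index[e]
            rows.append(mask)
    # Rank of d_2 over GF(2) via an XOR linear basis.
    basis, rank = [], 0
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)
        if r:
            basis.append(r)
            rank += 1
    # With a complete 1-skeleton: dim H^1 = (#edges - rank d_2) - (n - 1).
    return len(edge_index) - rank - (n - 1) == 0
```

For example, `h1_trivial(5, 1.0)` is `True` (the full 2-skeleton is simply connected), while `h1_trivial(5, 0.0)` is `False`; averaging over many samples with $p$ near $2\log n/n$ exhibits the threshold.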
Linial and Meshulam~\cite{ar:LinMesh2006} asked whether this can be extended to $\mathbb Z$ (for random
$2$-complexes).
For $d=2$, {\L}uczak and Peled~\cite{ar:LuczakPeled2017} proved a hitting time version of this result,
considering a generalisation of the random graph process. This is a random process in which one constructs a
random simplicial complex on $n$ vertices with complete skeleton, where in each step
a new 2-dimensional face is added, selected uniformly at random.
They showed that w.h.p. the $1$-homology group over $\mathbb Z$ becomes trivial the very moment all edges (1-faces)
lie in at least one 2-face. This was proved for $\mathbb Z_2$ by Kahle and Pittel~\cite{ar:KahlePittel2016}.
These results generalise the classic result of Bollob\'as and Thomason~\cite{ar:BolThom83} on the w.h.p. coincidence of the hitting times of connectivity with that of having minimum degree at least 1.
For $d\geq3$, Hoffman et al.~\cite{ar:HoffKahlePaq2017} provided a partial answer showing that the 1-statement holds for $H^{d-1} (Y(n,p;d);\mathbb Z)$ provided that $np \geq 80 d \log n$.
Furthermore, Gundert and Wagner~\cite{gundert2016eigenvalues} showed that $H^{d-1} (Y(n,p;d); \mathbb R)$ is
trivial provided that $np \geq C \log n$ where $C$ is a sufficiently large constant. Their approach was extended
by Hoffman, Kahle and Paquette~\cite{hoffman2012spectral} to any $p$ such that $np\geq (1+\varepsilon) d\log n$. Very recently, Cooley, del Guidice, Kang and Spr\"ussel~\cite{ar:CdGKS2020} considered
the cohomological connectivity of a generalised version of the Linial-Meshulam model in which random selection
of faces takes place at all levels and not merely at the top level.
In this paper, we will study the expansion properties of $Y(n,p;d)$ for $p$ as in the supercritical regime
of the Linial-Meshulam-Wallach theorem. To be more precise, we will consider the case $np = (1+\varepsilon)d\log n$,
for an arbitrary fixed $\varepsilon >0$,
and we will deduce sharp concentration results about the spectral gap of the (combinatorial) Laplace operator as well as the Cheeger constant of $Y(n,p;d)$.
For a sequence of events $(\mathcal {E}_n )_{n \in \mathbb N}$, where $\mathcal {E}_n$ is an event in the probability space of
$Y(n,p;d)$, we say that they occur \emph{with high probability (w.h.p.)}, if $\Prob{\mathcal {E}_n} \to 1$ as $n\to \infty$.
(We will use the same term for events in the probability space of $G(n,p)$.)
If $X_n$ is a random variable defined on the probability space of $Y(n,p;d)$ and $c \in \mathbb R$, we write
$X_n = c (1+o_p(1))$ if, for every fixed $\eta >0$, $\Prob{|X_n - c| > \eta } \to 0$ as $n\to \infty$; loosely speaking,
$X_n \to c$ in probability as $n\to \infty$.
\subsection{Measures of expansion: the spectral gap and the Cheeger constant}
The definition and the use of the discrete Laplace operator in quantifying expansion properties of graphs
dates back to Alon and Milman~\cite{ar:AlonMilman85}.
For a graph $G=(V,E)$, the (combinatorial) Laplace operator $\Delta^+_G$ is defined as the difference
$D_G - A_G$, where $D_G=\mathrm{diag} ( \dgr{v})_{v\in V}$ and $A_G$ is the adjacency matrix of $G$.
In~\cite{ar:AlonMilman85}, Alon and Milman showed how the smallest positive
eigenvalue of the Laplace operator is linked to the structure of the graph as a metric space and,
in particular, to the distribution of distances between disjoint sets and the diameter of the graph.
The Laplace operator had been considered in graph theory earlier~\cite{tr:AndMor71,bk:Biggs74, ar:Fiedler73}
in relation to the number of spanning trees, the girth and connectivity of a graph.
Let $\lambda (G)$ denote the smallest positive eigenvalue of $\Delta^+_G$, also known as the \emph{spectral gap}
of $\Delta^+_G$. It is a consequence of
a more general result in~\cite{ar:AlonMilman85} that $\lambda (G)$ is bounded from above
(up to some multiplicative constant) by the \emph{edge expansion} of $G$.
This is defined as
$$c (G):= \min_{A \subset V \ : \ 0 < |A|\leq |V|/2} \frac{e(A, V\setminus A)}{|A|},$$ where
$e(A, V \setminus A)$ denotes the number of edges with one endpoint in $A$ and the other in $V\setminus A$;
it is called the \emph{Cheeger constant} of $G$.
Lemma 2.1 in~\cite{ar:AlonMilman85}
implies that for any non-empty (proper) subset $A \subset V$ we have
\begin{equation} \label{eq:graph_Cheeger}
\lambda (G) \leq \frac{|V| \cdot e(A, V\setminus A)}{|A| |V\setminus A|} = h(A;G).
\end{equation}
If $|A| \leq |V|/2$, then the above is at most $2c(G)$.
This is the discrete analogue of an inequality proved by Cheeger in~\cite{ar:Cheeger1970}. Setting
$h(G) = \min_{A \ : \ 0 < |A| \leq |V|/2} h(A;G)$ one can complete the above inequality with a lower bound
(proved by Dodziuk~\cite{ar:Dodziuk84}) and get
\begin{equation} \label{eq:Laplace-Cheeger-graphs}
\frac{h^2 (G)}{8 d_{\max} (G)} \leq \lambda (G) \leq h(G) \leq 2c(G),
\end{equation}
where $d_{\max} (G)$ is the maximum degree of $G$.
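As a quick numerical sanity check (not part of the formal arguments), the chain of inequalities in~\eqref{eq:Laplace-Cheeger-graphs} can be verified by brute force on a small graph. The following Python sketch, our own illustration using the 5-cycle, does this with NumPy:

```python
import itertools
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplace operator D - A of a graph on {0, ..., n-1}."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return np.diag(A.sum(axis=1)) - A

def spectral_gap(n, edges):
    """Smallest positive Laplace eigenvalue (second smallest, for a connected graph)."""
    return np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]

def cheeger_quantities(n, edges):
    """Brute-force h(G) and the edge expansion c(G) over all proper vertex subsets."""
    h = c = float("inf")
    for r in range(1, n):
        for A in itertools.combinations(range(n), r):
            S = set(A)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            h = min(h, n * cut / (len(S) * (n - len(S))))
            if 2 * len(S) <= n:
                c = min(c, cut / len(S))
    return h, c

n, edges = 5, [(i, (i + 1) % 5) for i in range(5)]   # the 5-cycle
lam = spectral_gap(n, edges)
h, c = cheeger_quantities(n, edges)
d_max = 2
assert h ** 2 / (8 * d_max) <= lam <= h <= 2 * c
```

For the 5-cycle one gets $\lambda = 2 - 2\cos(2\pi/5) \approx 1.382$ and $h = 5/3$, so every inequality in the chain is strict here.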
Another consequence of this result is that if $d_{\min} (G)$ denotes the minimum degree of $G$,
then $\lambda (G) \leq \frac{|V|}{|V|-1} d_{\min} (G)$ (this was also proved in~\cite{ar:Fiedler73}).
Moreover, if $G$ is disconnected, then $\lambda (G)=0$. Thus, sometimes $\lambda (G)$ is called
the \emph{algebraic connectivity} of $G$.
Further properties of expander graphs in relation to the smallest positive eigenvalue of the Laplace
operator were obtained by Alon in~\cite{ar:Alon86}.
More generally, the spectrum of the Laplace operator of a graph also determines how the edges between subsets
of vertices are distributed. This is expressed through the well-known \emph{Expander-Mixing Lemma}.
Roughly speaking, it states that if the entire non-trivial spectrum of the Laplace operator of a graph $G=(V,E)$
is close to $d$, then the density of edges between any two non-empty subsets $A, B \subset V$ is about $d/n$. Such an estimate about the number of edges within any given subset $A \subset V$
was proved by Alon and Chung~\cite{ar:AlonChung88}, in the case where $G$ is a $d$-regular graph.
It was generalised by Friedman and Pippenger in~\cite{ar:FriedmanPipp87}.
The spectral gap of $\Delta^+(G(n,p))$ was considered recently by Kolokolnikov, Osting and von Brecht~\cite{kolokolnikov2014algebraic}.
They considered $p = c \log n /n$, for $c > 1$ (that is, above the connectivity threshold), and showed that in this case w.h.p.
\begin{equation}\label{eq:lambda_Gnp}
|\lambda (G(n,p))- d_{\min} (G(n,p))| < C \sqrt{\log n}.
\end{equation}
One of the results of our paper is to generalise this result to higher dimensions in the context of the Linial-Meshulam
random simplicial complex $Y(n,p;d)$.
\subsection{High dimensional Laplace operators}
Let $Y$ be a $d$-dimensional simplicial complex on a set $V$ with $|V|< \infty$. We let $Y^{(j)}$ denote the
set of $j$-dimensional faces in $Y$, that is, the faces containing exactly $j+1$ vertices, where $-1\leq j\leq d$.
It is customary to set $Y^{(-1)} =\{ \varnothing \}$. Also, note that $Y^{(0)} = V$.
For abbreviation, we will be calling
a $j$-dimensional face a \emph{$j$-face}.
If all possible $j$-faces are present in $Y$, for $j \leq d-1$, then $Y$ is said to have \emph{complete skeleton}.
Furthermore, for a $d-1$-face $\sigma$ in $Y$ its \emph{degree} $\dgr{\sigma}$ is the co-degree of $\sigma$
in $Y$, that is, $\dgr{\sigma} = |\{ v \in Y^{(0)} \ : \ \{v\} \cup \sigma \in Y^{(d)} \}|$.
We let $\delta (Y) = \min_{\sigma \in Y^{(d-1)}} \dgr{\sigma}$
be the minimum co-degree of a $d-1$-face in $Y$. If $d=1$, then $\delta (Y)$ coincides with $d_{\min}$ of
the associated graph.
In our context a $j$-face for $j\geq 1$ has two orientations, which are
the two equivalence classes of all permutations of its vertices which have the same sign. In other words,
two permutations correspond to the same orientation if we can derive one from the other applying an even
number of transpositions.
If $\sigma$ is an oriented $j$-face, we denote by $\bar{\sigma}$ the opposite orientation of it, and by $Y^{(j)}_{\pm}$ we denote the set of oriented $j$-faces. Finally for an oriented $j$-face $\rho = [v_0,\dots, v_j]$,
with $1 \leq j \leq d$, we let $\partial \rho$ denote the \emph{boundary of $\rho$}, which is the set of oriented $j-1$-faces
$(-1)^i \rho \setminus v_i := (-1)^i [v_0,\ldots, v_{i-1},v_{i+1},\ldots, v_j]$, for $i=0,\ldots, j$.
The space of \emph{$j$-forms}, which we denote by $\Omega^{(j)}(Y;\mathbb R)$ is the vector space over $\mathbb R$ of all
skew-symmetric functions on oriented $j$-faces. In other words, for $j \geq 1$ we define
$$\Omega^{(j)} (Y;\mathbb R) = \{ f: Y^{(j)}_{\pm} \to \mathbb R \ : \ f(\bar{\sigma}) = - f(\sigma), \ \forall \sigma \in Y^{(j)}_{\pm} \}, $$
whereas $\Omega^{(0)} (Y;\mathbb R)$ is just the set of all real-valued functions on $V$ and $\Omega^{(-1)} (Y;\mathbb R)$ is defined
as the set of all functions from $Y^{(-1)} = \{\varnothing\}$ to $\mathbb R$, which can be identified with $\mathbb R$.
The space $\Omega^{(j)} (Y;\mathbb R)$ is endowed with the inner product:
$$\langle f,g \rangle = \sum_{\sigma \in Y^{(j)}} w(\sigma) f(\sigma) g(\sigma), \ \mbox{for $f,g \in \Omega^{(j)} (Y;\mathbb R)$}, $$
where $w: Y \to (0,\infty)$ is a weight function.
If $\sigma = [v_0,\ldots, v_j]$ is an oriented $j$-face and $v\in V$ not a member of $\sigma$, then we set
$v\sigma = [v,v_0,\ldots, v_j]$. Furthermore, if $v\in V$ and $\sigma \in Y$, we write $v\sim \sigma$, if
$\{ v\} \cup \sigma \in Y$ too.
For $j=1,\ldots, d$, we define the $j$th \emph{boundary operator} $\partial_j : \Omega^{(j)} (Y;\mathbb R) \to \Omega^{(j-1)} (Y;\mathbb R)$:
for $f \in \Omega^{(j)} (Y;\mathbb R)$ and $\sigma \in Y^{(j-1)}$ we set
$$(\partial_j f) (\sigma) = \sum_{v: v \sim \sigma} f(v\sigma). $$
It is well-known and easy to verify that for $j \geq 1$, we have $\partial_{j}\partial_{j+1} = 0$, whereby
${\rm Im} \partial_{j+1} \subseteq \mathrm{Ker} \partial_{j}$. We set $Z_j (Y)= \mathrm{Ker} \partial_{j}$ and $B_j (Y)= {\rm Im} \partial_{j+1}$.
The set $Z_j (Y)$ is the set of $j$\emph{-cycles}, whereas $B_j(Y)$ is the set of $j$\emph{-boundaries}.
Note that both $Z_{j}, B_{j} \subseteq \Omega^{(j)} (Y;\mathbb R)$ and furthermore $(\Omega^{(j)} (Y;\mathbb R), \partial_j)_{j=1}^d$ is
a chain complex. The group $H_j(Y;\mathbb R) = Z_j (Y)/B_j(Y)$ is the $j$th \emph{homology group} over $\mathbb R$.
Similarly, one defines the $j$th \emph{coboundary operator} $\delta_{j}: \Omega^{(j)} \to \Omega^{(j+1)}$ as
follows: if $\sigma= [v_0,\ldots, v_{j+1}]$ is an oriented $j+1$-face and $f \in \Omega^{(j)}$, then
$$(\delta_j f) (\sigma) = \frac{1}{w(\sigma)}\sum_{i=0}^{j+1} (-1)^i w(\sigma \setminus v_i) f(\sigma \setminus v_i), $$
where $\sigma \setminus v_i = [v_0,\ldots, v_{i-1}, v_{i+1},\ldots, v_{j+1}]$.
It is not hard to show that $\delta_j \delta_{j-1} = 0$, whereby ${\rm Im} \delta_{j-1} \subseteq \mathrm{Ker} \delta_{j}$.
We set $Z^j (Y) = \mathrm{Ker} \delta_{j}$ (the set of closed $j$-forms) and $B^j (Y)= {\rm Im} \delta_{j-1}$ and
$H^j (Y;\mathbb R) = Z^j(Y)/ B^j (Y)$, the $j$th \emph{cohomology group} over $\mathbb R$.
A straightforward calculation shows that $\delta_{j-1}$ is the adjoint operator of $\partial_j$:
for $f_1 \in \Omega^{(j-1)}(Y;\mathbb R)$ and $f_2 \in
\Omega^{(j)}(Y;\mathbb R)$
\begin{equation} \label{eq:adjoint}
\langle \delta_{j-1} f_1, f_2 \rangle = \langle f_1, \partial_j f_2\rangle.
\end{equation}
Note that $B^{j}(Y) = Z_{j}(Y)^{\perp}$ and $B_j(Y) = Z^j (Y)^\perp$ and
$$\Omega^{(d-1)} (Y;\mathbb R) = B^{d-1} (Y) \oplus Z_{d-1} (Y) = B_{d-1} (Y) \oplus Z^{d-1} (Y).$$
\subsubsection*{The Laplace operator and the spectral gap}
The Laplace operator associated with $Y$ is the operator
$\Delta : \Omega^{(d-1)} (Y;\mathbb R) \to \Omega^{(d-1)} (Y;\mathbb R)$
defined as
$$ \Delta = \Delta^+ + \Delta^-, $$
where
$$\Delta^+ = \partial_d \delta_{d-1} \ (\mbox{upper Laplacian}), \ \mbox{and} \ \Delta^- = \delta_{d-2}\partial_{d-1} \
(\mbox{lower Laplacian}).$$
Note that~\eqref{eq:adjoint} implies that $\mathrm{Ker} \Delta^+ = Z^{d-1}(Y)$ whereas $\mathrm{Ker} \Delta^- = Z_{d-1}(Y)$.
The partial Laplace operators decompose the space $\Omega^{(d-1)} (Y; \mathbb R)$:
$$\Omega^{(d-1)} (Y;\mathbb R) = B^{d-1} (Y) \oplus Z_{d-1} (Y) = B_{d-1} (Y) \oplus Z^{d-1} (Y).$$
The subspace $\mathcal {H}_{d-1} (Y)=\mathrm{Ker} \Delta$ is called the space of \emph{harmonic $d-1$-forms}.
Note that for any $f \in \mathcal {H}_{d-1}(Y)$, the fact that $\delta_{j-1}$ is the adjoint of $\partial_j$ implies that
$$\langle \partial_{d-1} f , \partial_{d-1} f \rangle = \langle \delta_{d-1} f , \delta_{d-1} f \rangle = 0,$$
whereby $f \in Z^{d-1} (Y) \cap Z_{d-1}(Y)$. Also, note that the definitions of $Z^{d-1}(Y), Z_{d-1}(Y)$ imply
that $Z^{d-1} (Y) \cap Z_{d-1}(Y) \subseteq \mathrm{Ker} \Delta$. Thus, in fact $\mathcal {H}_{d-1} (Y)=Z^{d-1}(Y) \cap Z_{d-1} (Y)$.
The \emph{discrete Hodge decomposition} is due to Eckmann~\cite{ar:Eckmann44}:
\begin{eqnarray} \label{eq:Hodge}
\Omega^{(d-1)} (Y;\mathbb R) &=& B^{d-1}(Y) \oplus \underbrace{\mathcal {H}_{d-1}(Y) \oplus B_{d-1}(Y)}_{Z_{d-1}(Y)}.
\end{eqnarray}
It can be shown (see~\cite{parzanchevski2016isoperimetric}, p.~203) that
\begin{equation} \label{eq:coh-isomorphism}
H^{d-1}(Y;\mathbb R) \cong \mathcal {H}_{d-1}(Y) \cong H_{d-1} (Y;\mathbb R).
\end{equation}
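These identities, including the dimension count $\dim \mathcal{H}_{d-1}(Y) = \dim H_{d-1}(Y;\mathbb R)$, are easy to confirm by machine on a toy example. The Python sketch below (a sanity check only; the sorted-tuple orientation convention is our choice for the illustration) assembles the boundary matrices of a small $2$-complex with first Betti number $1$, confirms $\partial_1 \partial_2 = 0$, and recovers the dimension of the space of harmonic $1$-forms with unit weights:

```python
import itertools
import numpy as np

def boundary_matrix(faces, subfaces):
    """Matrix of the boundary operator, orienting each face by its sorted vertex tuple."""
    idx = {s: i for i, s in enumerate(subfaces)}
    B = np.zeros((len(subfaces), len(faces)))
    for j, f in enumerate(faces):
        for i in range(len(f)):
            B[idx[f[:i] + f[i + 1:]], j] = (-1) ** i
    return B

# a 2-complex on 4 vertices with complete skeleton and two filled triangles;
# its first Betti number is 1 (Euler characteristic 4 - 6 + 2 = 0, b_0 = 1, b_2 = 0)
vertices = [(v,) for v in range(4)]
edges = sorted(itertools.combinations(range(4), 2))
triangles = [(0, 1, 2), (0, 1, 3)]

B1 = boundary_matrix(edges, vertices)    # matrix of partial_1
B2 = boundary_matrix(triangles, edges)   # matrix of partial_2
assert np.allclose(B1 @ B2, 0)           # the boundary of a boundary vanishes

# with unit weights: Delta^+ = B2 B2^T, Delta^- = B1^T B1, Delta = Delta^+ + Delta^-
Delta = B2 @ B2.T + B1.T @ B1
harmonic_dim = int(np.sum(np.abs(np.linalg.eigvalsh(Delta)) < 1e-9))
```

Here `harmonic_dim` equals $1$, matching the first Betti number of this complex.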
A quantity that is of interest is the \emph{spectral gap} $\lambda (Y)$ of a $d$-dimensional complex $Y$.
This is defined as the minimal eigenvalue of the Laplacian or the upper Laplacian over $Z_{d-1}$ (the
set of $d-1$-cycles). (Note that the two operators $\Delta^+$ and $\Delta$ coincide on $Z_{d-1}= \mathrm{Ker} \partial_{d-1}$.) We define
\begin{equation}\label{eq:gap-def}
\lambda (Y) := \min {\rm Spec} (\Delta |_{Z_{d-1}(Y)}) = \min {\rm Spec} (\Delta^+|_{Z_{d-1} (Y)}).
\end{equation}
Horak and Jost~\cite{ar:JostHorrack} developed the theory of the Laplace operator for general weight
functions. We will focus on special weighting schemes that give rise to generalisations of the well-studied
combinatorial Laplace operator as well as the normalised Laplace operator.
\subsubsection*{The combinatorial Laplace operator and the Cheeger constant}
In the case where $w(\sigma ) = 1$ for all $\sigma \in Y$, the operator $\Delta^+$ is called the
\emph{combinatorial (upper) Laplace operator} associated with $Y$.
An algebraic manipulation can give explicitly the combinatorial Laplace operator:
for $f \in \Omega^{(d-1)} (Y)$ and $\sigma = [v_0,\ldots, v_{d-1}] \in Y_{\pm}^{(d-1)}$, we have (as in (3.1)
in~\cite{parzanchevski2016isoperimetric}), taking $v_d = v$,
\begin{eqnarray*}
(\Delta^+ f) (\sigma ) &=& \sum_{v : v\sigma \in Y_\pm^{(d)}} (\delta_{d-1}f) (v\sigma) =
\sum_{v : v\sigma \in Y_\pm^{(d)}} \sum_{i=0}^d (-1)^i f(v \sigma \setminus v_i ) \\
&=& \sum_{v : v\sigma \in Y_\pm^{(d)}} \left( f(\sigma) - \sum_{i=0}^{d-1} (-1)^i f(v \sigma \setminus v_i ) \right) \\
&=& \dgr{\sigma} f(\sigma ) - \sum_{v : v\sigma \in Y_\pm^{(d)}} \sum_{i=0}^{d-1} (-1)^i f(v \sigma \setminus v_i ).
\end{eqnarray*}
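As a sanity check of this formula (again with a sorted-tuple orientation convention of our choosing, and not part of any proof), one can confirm numerically that the matrix of $\Delta^+$ carries the co-degrees on its diagonal, while each off-diagonal entry is $\pm 1$ or $0$ according to whether the two $d-1$-faces lie in a common $d$-face:

```python
import itertools
import numpy as np

def boundary_matrix(faces, subfaces):
    """Boundary operator matrix with faces oriented by their sorted vertex tuples."""
    idx = {s: i for i, s in enumerate(subfaces)}
    B = np.zeros((len(subfaces), len(faces)))
    for j, f in enumerate(faces):
        for i in range(len(f)):
            B[idx[f[:i] + f[i + 1:]], j] = (-1) ** i
    return B

n, d = 4, 2
edges = sorted(itertools.combinations(range(n), d))            # the (d-1)-faces
triangles = sorted(itertools.combinations(range(n), d + 1))    # complete 2-complex
B2 = boundary_matrix(triangles, edges)
L = B2 @ B2.T                                                  # matrix of Delta^+

codeg = [sum(1 for t in triangles if set(e) <= set(t)) for e in edges]
assert np.allclose(np.diag(L), codeg)          # deg(sigma) on the diagonal
for a, b in itertools.combinations(range(len(edges)), 2):
    shared = sum(1 for t in triangles
                 if set(edges[a]) <= set(t) and set(edges[b]) <= set(t))
    assert abs(L[a, b]) == (1.0 if shared else 0.0)
```

For the complete $2$-complex on $4$ vertices every co-degree is $2$, and disjoint edges (which span no common triangle) give zero entries.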
If $Y$ is a finite simplicial complex with $|Y^{(0)}|=n$ vertices, we define
\begin{equation} \label{eq:Cheeger_def}
h(Y) =\min_{Y^{(0)} = A_0 \uplus \cdots \uplus A_d}
\frac{n \cdot |F(A_0,\ldots, A_d)|}{\prod_{i=0}^d |A_i|},
\end{equation}
where the minimum is taken over all partitions of $Y^{(0)}$ into $d+1$ non-empty parts $A_0, \ldots, A_d$
and $F(A_0, \ldots, A_d)$ is the set of $d$-faces with exactly one vertex in each one of the parts.
The following theorem was proved by Parzanchevski, Rosenthal, and Tessler~\cite[Theorem 1.2]{parzanchevski2016isoperimetric}.
\begin{thm}
\label{thm:cheegerIneq}
For a finite complex $Y$ with a complete skeleton,
$$\lambda(Y) \leq h(Y).$$
\end{thm}
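For intuition, Theorem~\ref{thm:cheegerIneq} can be checked numerically on the complete $2$-complex on $4$ vertices, where both sides turn out to equal $n = 4$. The Python sketch below (a brute-force toy check, with sorted-tuple orientations of our choosing) computes $\lambda(Y)$ by restricting $\Delta^+$ to $Z_{d-1} = \mathrm{Ker}\,\partial_{d-1}$ and $h(Y)$ by enumerating all partitions of the vertex set into $d+1$ parts:

```python
import itertools
import numpy as np

def boundary_matrix(faces, subfaces):
    """Boundary operator matrix with faces oriented by their sorted vertex tuples."""
    idx = {s: i for i, s in enumerate(subfaces)}
    B = np.zeros((len(subfaces), len(faces)))
    for j, f in enumerate(faces):
        for i in range(len(f)):
            B[idx[f[:i] + f[i + 1:]], j] = (-1) ** i
    return B

n, d = 4, 2
vertices = [(v,) for v in range(n)]
edges = sorted(itertools.combinations(range(n), d))
triangles = sorted(itertools.combinations(range(n), d + 1))  # complete 2-complex

B1 = boundary_matrix(edges, vertices)
B2 = boundary_matrix(triangles, edges)

# orthonormal basis of Z_1 = Ker(partial_1) via SVD, then restrict Delta^+ to it
_, s, Vt = np.linalg.svd(B1)
Q = Vt[np.sum(s > 1e-9):].T
lam = np.linalg.eigvalsh(Q.T @ (B2 @ B2.T) @ Q).min()

# h(Y): enumerate surjections of the vertex set onto d + 1 = 3 parts
h = float("inf")
for colours in itertools.product(range(d + 1), repeat=n):
    if len(set(colours)) < d + 1:
        continue
    sizes = [colours.count(i) for i in range(d + 1)]
    F = sum(1 for t in triangles if len({colours[v] for v in t}) == d + 1)
    h = min(h, n * F / np.prod(sizes))

assert lam <= h + 1e-9   # the inequality of the theorem
```

Both quantities come out as $4$ here, so the inequality of the theorem is tight on this example.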
Furthermore, the authors also derive an expander mixing lemma for complexes with complete skeleton.
This assumption was removed by Parzanchevski~\cite{ar:Parzan17}.
In~\cite{parzanchevski2016isoperimetric}, the authors discuss the existence of a lower bound in the spirit
of the lower bound in~\eqref{eq:Laplace-Cheeger-graphs}. They observe (cf. Section 4.2 in~\cite{parzanchevski2016isoperimetric}), that a bound of the form $C \cdot h(Y)^m \leq \lambda (Y)$,
for some $C, m>0$ cannot hold, providing as counterexample the minimal triangulation of a M\"obius strip,
which has $\lambda (Y)=0$ but $h(Y) >0$.
Parzanchevski et al.~\cite{parzanchevski2016isoperimetric} conjecture that an inequality of the form
$C \cdot h(Y)^2 - c \leq \lambda (Y)$ should hold, where $C, c>0$ depend on the maximum degree of any
$d-1$-face of $Y$ as well as on the dimension of $Y$.
Furthermore, they showed~\cite{parzanchevski2016isoperimetric} that for every $D>0$ there exists $\gamma = \gamma (D)$, with $\gamma = O(\sqrt{D})$ as $D \to \infty$, such that
if $np = D \cdot \log n$, then w.h.p. ${\rm Spec} (\Delta^+|_{Z_{d-1}}) \subset [(D-\gamma) \log n, (D+\gamma) \log n]$. This implies that w.h.p. $\lambda (Y(n,p;d)) = (D \pm O(\sqrt{D})) \log n$ if $np = D \log n$ and $D$ is sufficiently large.
Our results strengthen the latter, showing that if $np = (1+ \varepsilon)d \log n$ and $\varepsilon >0$ is fixed, the upper bound $\lambda (Y(n,p;d)) \leq h(Y(n,p;d))$ which follows from Theorem~\ref{thm:cheegerIneq} becomes tight in
that $\lambda (Y(n,p;d)) = h(Y(n,p;d)) (1+o_p(1))$.
Furthermore, we show that $\lambda (Y(n,p;d)) / np$ converges in probability as $n\to \infty$
to a certain constant which depends on $\varepsilon$ and $d$.
Recall that $\delta (Y(n,p;d))$ denotes the minimum co-degree among all $d-1$-dimensional faces of $Y(n,p;d)$.
\begin{thm} \label{thm:Cheeger_mincodeg} Let $p = \frac{(1+\varepsilon)d \log n}{n}$, where $\varepsilon >0$ is fixed. There exists $C>0$ such that w.h.p.
$$\delta (Y(n,p;d)) - C \sqrt{\log n}\leq \lambda (Y(n,p;d) ) \leq h( Y(n,p;d)) \leq \delta (Y(n,p;d)). $$
Furthermore, w.h.p.
$$ |\delta (Y(n,p;d)) - (1+\varepsilon )a d \log n| < C \sqrt{\log n}, $$
where $a = a(\varepsilon)$ is the solution to
$$ \varepsilon = (1+\varepsilon) (1- \log a ) a. $$
\end{thm}
The above theorem not only strengthens the results of Parzanchevski et al.~\cite{parzanchevski2016isoperimetric}
as far as the range of $p$ is concerned, but it also gives more precise asymptotics for large $\varepsilon$.
Remark 1.2 in~\cite{kolokolnikov2014algebraic} states that as $\varepsilon \to \infty$, $a (\varepsilon )= 1 - \sqrt{\frac{2}{1+\varepsilon}} + O\left( \frac{1}{\varepsilon} \right)$; note that $a(\varepsilon)$ does not depend on $d$.
Hence, if we write $D = (1+\varepsilon )d$, then for any $D$ sufficiently large we have w.h.p.
$$\lambda (Y(n,p;d)), h(Y(n,p;d)) = \left( D - \sqrt{2 d D} + O(1) \right)\log n + O(\sqrt{\log n}). $$
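The constant $a(\varepsilon)$ is easy to compute numerically: $g(a) = (1-\log a)a$ is increasing on $(0,1)$ (its derivative is $-\log a > 0$), so bisection applies. The Python sketch below is our own illustration, not taken from~\cite{kolokolnikov2014algebraic}; it solves the defining equation and checks the identity $(1+\varepsilon)\mathcal{H}(a(\varepsilon)) = -1$ used in Section~\ref{sec:min_codeg}:

```python
import math

def a_of_eps(eps, tol=1e-12):
    """Solve eps = (1 + eps) * (1 - log a) * a for the unique root a in (0, 1).

    g(a) = (1 - log a) * a increases from 0 to 1 on (0, 1), so bisection
    on g(a) = eps / (1 + eps) finds the unique solution.
    """
    target = eps / (1 + eps)
    lo, hi = 1e-15, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (1 - math.log(mid)) * mid < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def H(c):
    return c - c * math.log(c) - 1

a1 = a_of_eps(1.0)
assert 0 < a1 < 1
assert abs((1 + 1.0) * H(a1) + 1) < 1e-9          # (1 + eps) H(a) = -1
assert a_of_eps(0.5) < a_of_eps(2.0)              # a(eps) increases with eps
# large-eps behaviour, consistent with Remark 1.2 of Kolokolnikov et al.:
assert abs(a_of_eps(1000.0) - (1 - math.sqrt(2 / 1001.0))) < 5e-3
```

For instance, $a(1) \approx 0.187$, so for $\varepsilon = 1$ the minimum co-degree is roughly $0.37\, d \log n$.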
The proof of the above theorem has three parts. We start with the result on $\delta (Y(n,p;d))$
in Section~\ref{sec:min_codeg} (cf. Lemma~\ref{lem:minDegree}). Thereafter, in Section~\ref{sec:cheeger}
we show that $h(Y(n,p;d)) \leq \delta (Y(n,p;d))$ (cf. Theorem~\ref{thm:mainCheeger}).
Hence, the upper bound on $\lambda (Y(n,p;d))$ follows from Theorem~\ref{thm:cheegerIneq}.
For the lower bound on $\lambda (Y(n,p;d))$ in Section~\ref{sec:laplacians} we follow an approach similar to that of Gundert and Wagner~\cite{gundert2016eigenvalues}.
The lower bound is derived through a decomposition, essentially due to Garland~\cite{garland1973p}, of the Laplace operator $\Delta^+$ of a simplicial complex $Y$ into the sum of the (combinatorial) graph Laplace operators of the link graphs defined by the $d-2$-faces of $Y$. We show that the positive eigenvalues of these operators are bounded from below by $\delta (Y)$, from which the lower bound in Theorem~\ref{thm:Cheeger_mincodeg} follows.
However, this approximation incurs a term which involves the adjacency matrix of these link graphs. As we will see in Section~\ref{sec:laplacians}, in the case where $Y$ is $Y(n,p;d)$ these graphs are distributed as $G(n-d+1,p)$.
At this point we use sharp results of Feige and Ofek~\cite{ar:FeigeOfek2005} to show that this term has no
essential contribution.
\subsection{The normalised Laplace operator}
Under a weighting scheme where
\begin{equation} \label{eq:normalised_weights}
w(\sigma ) = \begin{cases} 1 & \mbox{if $\sigma \in Y \setminus Y^{(d-1)}$} \\
\frac{1}{\dgr{\sigma}} & \mbox{if $\sigma \in Y^{(d-1)}$}
\end{cases},
\end{equation}
the operator $\Delta^+$ is called the \emph{normalised Laplace operator} associated with $Y$.
An explicit calculation as above (cf. (2.6) in~\cite{ar:ParzRosen2017}) shows that for
$f \in \Omega^{(d-1)} (Y;\mathbb R)$
and $\sigma = [v_0,\ldots, v_{d-1}] \in Y_\pm^{(d-1)}$ we have
\begin{eqnarray*}
(\Delta^+ f) (\sigma ) &=& f(\sigma) - \sum_{v : v\sigma \in Y_\pm^{(d)}} \sum_{i=0}^{d-1} \frac{(-1)^i f(v\sigma \setminus v_i)}{\dgr{v\sigma \setminus v_i}} \\
&=& f(\sigma) - \sum_{\sigma' : \sigma'\sim \sigma} \frac{f(\sigma')}{\dgr{\sigma'}},
\end{eqnarray*}
where for two oriented faces $\sigma, \sigma' \in Y_{\pm}^{(d-1)}$, we write $\sigma \sim \sigma'$ if there exists
$\rho \in Y_\pm^{(d)}$ such that $\sigma, \overline{\sigma'} \in \partial \rho$.
For graphs, the normalised Laplace operator acts on functions on the vertex set of a graph $G=(V,E)$ and is defined
as $\mathcal{L}_G= D_G^{-1/2} \Delta^+ D_G^{-1/2} = I - D_G^{-1/2} A_G D_G^{-1/2}$, where $I$ is the identity operator. However, note that for $d=1$ the definition of $\Delta^+$ yields the operator $\Delta^+= I - D_G^{-1} A_G$.
This has the same spectrum as $\mathcal{L}_G$, provided that $\dgr{\sigma}>0$, for all $\sigma \in Y^{(0)}$.
Furthermore, the constant function on $V$ is an eigenfunction corresponding to eigenvalue 0, whereas all other eigenvalues are positive.
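The claimed spectral equivalence holds because $\mathcal{L}_G = D_G^{1/2}(I - D_G^{-1}A_G)D_G^{-1/2}$ is a similarity transform, and it is easy to confirm numerically. Below is a small Python check on one fixed graph with all degrees positive (our own sanity check, not tied to any graph appearing in the text):

```python
import numpy as np

# a path 0-1-2-3 plus the chord {0, 2}: degrees 2, 2, 3, 1 (all positive)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)

L_sym = np.eye(4) - np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)
L_rw = np.eye(4) - np.diag(1 / deg) @ A          # Delta^+ for d = 1

ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
ev_rw = np.sort(np.linalg.eigvals(L_rw).real)    # real, as L_rw is similar to L_sym

assert np.allclose(ev_sym, ev_rw)                # identical spectra
assert abs(ev_rw[0]) < 1e-9 and ev_rw[1] > 0     # 0 is simple; the rest positive
assert np.allclose(L_rw @ np.ones(4), 0)         # constants lie in the kernel
```

The last assertion is exactly the statement that the constant function is an eigenfunction of $\Delta^+$ with eigenvalue $0$.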
Gundert and Wagner~\cite{gundert2016eigenvalues} showed that w.h.p. the non-trivial eigenvalues of
the normalised Laplacian of $Y(n,p;d)$ are close to $1$, for $p$ such that $np \geq C \log n$. This implies that
$H^{d-1}(Y(n,p;d); \mathbb R)$ is trivial for such $p$.
Hoffman, Kahle and Paquette~\cite{hoffman2012spectral} extended this argument for $p$ such that
$n p \geq (1/2 + \delta) \log n$, showing that
w.h.p. all non-trivial eigenvalues are within $C/\sqrt{np}$ from 1.
The argument of Gundert and Wagner~\cite{gundert2016eigenvalues} relies on proving the sharp concentration of the non-trivial eigenvalues of the normalised Laplacian of $G(n,p)$ around 1. Hence, the sharpening of Hoffman et al.~\cite{hoffman2012spectral} follows from their main result about the eigenvalues of the Laplacian of $G(n,p)$ for any $p$ such that $np \geq (1/2 + \delta) \log n$, for arbitrary fixed $\delta>0$.
For the denser regime where $np =\Omega (\log^2 n)$, this was proved by Chung, Lu and Vu~\cite{ar:ChungLuVu2003}. However, for sparser regimes ($np$ bounded) this fact has been proved by Coja-Oghlan~\cite{ar:Coja-Oghlan2007} for the Laplace operator restricted to the core of $G(n,p)$, although for $G(n,p)$
itself the spectral gap is $o_p(1)$.
\subsection{Random walks on $Y(n,p;d)$ and expansion}
This part of the paper is motivated by the notion of a random walk on $Y$ introduced by Parzanchevski and Rosenthal~\cite{ar:ParzRosen2017}.
This is in fact a random walk on $Y^{(d-1)}_\pm$ and, more precisely, on the
graph $(Y^{(d-1)}_\pm, E^{(d-1)}_\pm)$, where $\sigma \sigma' \in E^{(d-1)}_\pm$ if and only if $\sigma \sim \sigma'$, for distinct $\sigma, \sigma'$.
For example, if $Y$ is a 2-dimensional complex, then this is a walk on the oriented edges (1-faces) of $Y$.
If $[v,u]$ is such a face, then the walk can move to any edge $[v',u]$ or $[v,v']$ provided that $[v,u,v'] \in Y_\pm^{(2)}$.
One may consider the projection of such a walk on $Y^{(d-1)}$.
For distinct $\sigma, \sigma' \in Y^{(d-1)}$, we also write $\sigma \sim \sigma'$, if there exists $\rho \in Y^{(d)}$ such that both $\sigma, \sigma' \subset \rho$. Suppose that for all $\sigma \in Y^{(d-1)}$ we have $\dgr{\sigma}>0$.
If $(X_0,X_1,\ldots)$ denotes this Markov chain,
then for any step $t\geq 1$ the transition probabilities are
$\Prob{X_t = \sigma' \mid X_{t-1}=\sigma} = \frac{1}{d \cdot \dgr{\sigma}}$, provided that
$\sigma \sim \sigma'$; otherwise
$\Prob{X_t = \sigma' \mid X_{t-1}=\sigma} = 0$.
In a more general setting, one may consider a
\emph{$\gamma$-lazy} version of this random walk, for $\gamma \in (0,1)$,
where $\Prob{X_t = \sigma \mid X_{t-1}=\sigma} = \gamma$
and $\Prob{X_t = \sigma' \mid X_{t-1}=\sigma} = \frac{1-\gamma}{d \cdot \dgr{\sigma}}$, for $\sigma \sim \sigma'$.
In this Markov chain, the stationary distribution on $Y^{(d-1)}$, denoted by $\pi$, is such that $\pi (\sigma)$
is proportional to $\dgr{\sigma}$. Note that $\sum_{\sigma \in Y^{(d-1)}}\dgr{\sigma} = (d+1) \cdot |Y^{(d)}|$.
For any $\sigma \in Y^{(d-1)}$ we have $\pi (\sigma) = \frac{\dgr{\sigma}}{(d+1) \cdot |Y^{(d)}|}$.
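For a concrete check, note that two distinct $d-1$-faces with $d-1$ common vertices lie in at most one common $d$-face, so the transition probabilities above are well defined. The Python sketch below (a toy sanity check of ours) builds the projected walk on the complete $2$-complex on $4$ vertices and verifies that the transition matrix is stochastic and that $\pi$ is stationary:

```python
import itertools
import numpy as np

n, d = 4, 2
faces_d = sorted(itertools.combinations(range(n), d + 1))    # the d-faces
faces_dm1 = sorted(itertools.combinations(range(n), d))      # the (d-1)-faces
idx = {s: i for i, s in enumerate(faces_dm1)}
deg = {s: sum(1 for f in faces_d if set(s) <= set(f)) for s in faces_dm1}

# transition matrix of the projected walk: P(sigma, sigma') = 1 / (d * deg(sigma))
# whenever sigma and sigma' lie in a common d-face
P = np.zeros((len(faces_dm1), len(faces_dm1)))
for f in faces_d:
    for s, t in itertools.permutations(itertools.combinations(f, d), 2):
        P[idx[s], idx[t]] = 1 / (d * deg[s])

pi = np.array([deg[s] for s in faces_dm1], dtype=float)
pi /= (d + 1) * len(faces_d)

assert np.allclose(P.sum(axis=1), 1)     # rows sum to 1
assert abs(pi.sum() - 1) < 1e-12         # pi is a distribution
assert np.allclose(pi @ P, pi)           # pi is stationary
```

On this complex every co-degree equals $2$, so $\pi$ is uniform and the chain is in fact reversible.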
We consider the mixing of such a random walk in the case where $Y$ is $Y(n,p;d)$
with $np \geq (1+\varepsilon) d\log n$. In particular, we will consider the \emph{conductance} of this Markov chain
which we denote by $\Phi_{Y}$.
First for any non-empty proper subset $S \subset Y^{(d-1)}$ we define
$$\Phi_Y(S) = \frac{Q(S,\overline{S})}{\pi (S) \pi (\overline{S})},$$
where $\overline{S}= Y^{(d-1)} \setminus S$ and $Q(S,\overline{S}) = \sum_{\sigma \in S} \sum_{\sigma' \in \overline{S} : \sigma' \sim \sigma} \pi(\sigma) \cdot \frac{1}{d \cdot \dgr{\sigma}}$ and
$\pi (S) = \sum_{\sigma \in S} \pi (\sigma )$.
The conductance $\Phi_Y$ is defined as
$$\Phi_Y = \min_{S\subset Y^{(d-1)} : 0< \pi (S) \leq 1/2} \Phi_Y(S). $$
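On a small complex, $\Phi_Y$ can be computed exactly by enumerating all subsets. The Python sketch below (a toy check of ours, under the conventions above) does this for the complete $2$-complex on $4$ vertices, for which the minimum is attained and equals $1$:

```python
import itertools
import numpy as np

n, d = 4, 2
faces_d = sorted(itertools.combinations(range(n), d + 1))
faces_dm1 = sorted(itertools.combinations(range(n), d))
idx = {s: i for i, s in enumerate(faces_dm1)}
deg = {s: sum(1 for f in faces_d if set(s) <= set(f)) for s in faces_dm1}

m = len(faces_dm1)
P = np.zeros((m, m))
for f in faces_d:
    for s, t in itertools.permutations(itertools.combinations(f, d), 2):
        P[idx[s], idx[t]] = 1 / (d * deg[s])
pi = np.array([deg[s] for s in faces_dm1]) / ((d + 1) * len(faces_d))

Phi = float("inf")
for r in range(1, m):
    for S in itertools.combinations(range(m), r):
        S = list(S)
        piS = pi[S].sum()
        if piS <= 0 or piS > 0.5:
            continue
        Sbar = [i for i in range(m) if i not in S]
        Q = (pi[S, None] * P[np.ix_(S, Sbar)]).sum()   # Q(S, S-bar)
        Phi = min(Phi, Q / (piS * (1 - piS)))

assert Phi > 0
```

The minimising set here consists of the three edges of a triangle of $K_4$; this already illustrates that $\Phi_Y$ stays bounded away from $0$ on well-connected complexes.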
Note that if $np> (1+\varepsilon) d \log n$, then a first moment argument shows that w.h.p. $\dgr{\sigma} > 0$, for all
$\sigma \in Y^{(d-1)} (n,p;d)$.
We show that w.h.p. the conductance of $Y(n,p;d)$ is bounded away from 0.
\begin{thm}
\label{thm:conductance}
Let $Y = Y(n,p;d)$ where $np = (1+\varepsilon) d \log n$ and $\varepsilon >0$ is fixed.
Then there exists $\delta > 0$ such that w.h.p.
$$\Phi_{Y(n,p;d)} > \delta.$$
\end{thm}
The first author and Reed~\cite{ar:FounReed2008} showed the analogous result for the largest connected component
of $G(n,p)$ when $np = \Omega (\log n)$.
We prove Theorem~\ref{thm:conductance} in Section~\ref{sec:conductance}. Its proof is based on a double counting argument that is facilitated by a weak version of the Kruskal-Katona theorem (cf. Theorem~\ref{thm:kk}).
\subsubsection{Tools: concentration inequalities}
In our proofs, we make use of the following variant of the Chernoff bounds (see \cite[Chapter~4]{Chernoffcite}).
\begin{lem}\label{feelthechern}
Let $p \in (0,1)$, $N \in \mathbb N,$ and $\varepsilon > 0$. Then
\begin{equation}
\label{eqn:ChernoffUpper}
\Prob{\mathrm{Bin}(N,p) \ge (1+\varepsilon)Np} \le e^{-\varepsilon^2 Np / 3}
\end{equation}
and
\begin{equation}
\label{eqn:ChernoffLower}
\Prob{\mathrm{Bin}(N,p) \le (1-\varepsilon)Np} \le e^{-\varepsilon^2 Np / 2}.
\end{equation}
\end{lem}
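These bounds are straightforward to check against the exact binomial tail; the Python snippet below does so at one (arbitrary) parameter choice, using only the standard library:

```python
import math

def pmf(N, p, i):
    """Probability mass function of Bin(N, p) at i."""
    return math.comb(N, i) * p**i * (1 - p) ** (N - i)

N, p, eps = 200, 0.1, 0.5
Np = N * p  # = 20, so (1 + eps) Np = 30 and (1 - eps) Np = 10 are integers

upper_tail = sum(pmf(N, p, i) for i in range(30, N + 1))  # P(Bin >= (1+eps)Np)
lower_tail = sum(pmf(N, p, i) for i in range(0, 11))      # P(Bin <= (1-eps)Np)

assert upper_tail <= math.exp(-eps**2 * Np / 3)   # as in (ChernoffUpper)
assert lower_tail <= math.exp(-eps**2 * Np / 2)   # as in (ChernoffLower)
```

At this parameter point both exact tails are in fact an order of magnitude below the respective exponential bounds.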
\section{The minimum (co)degree of $Y(n,p;d)$} \label{sec:min_codeg}
The following lemma builds very strongly on the work of Kolokolnikov, Osting, and Von Brecht \cite{kolokolnikov2014algebraic}, who obtained very sharp bounds on the minimum vertex degree in $G(n,p)$, and its relation to the spectral gap of the graph just above the connectivity threshold, in particular, when $p = (1+\varepsilon) \tfrac{\log n}{n}$ for $\varepsilon > 0$. (More specifically, see Lemmas 3.3 and 3.4 in~\cite{kolokolnikov2014algebraic}.)
\begin{lem}
\label{lem:minDegree}
Let $p = (1+\varepsilon) \frac{d \log n}{n}$, and let $a = a(\varepsilon)$ denote the solution to
\begin{equation}
\label{eqn:aDefn}
\varepsilon = (1+\varepsilon)(1-\log a)a.
\end{equation}
Let $Y = Y(n,p;d)$ and let
\[
\delta (Y) = \min_{\sigma \in Y^{(d-1)}} |\{v \in [n] \setminus \sigma^{(0)} : \sigma \cup \{v\} \in Y^{(d)} \}|
\]
be the minimum co-degree of a $(d-1)$-dimensional face in $Y$. Then there exists a constant $C > 0$ such that w.h.p. we have that
\[
|\delta (Y) - (1+\varepsilon) a d \log n | \leq C \sqrt{\log n}.
\]
\end{lem}
\begin{proof}
For a random variable $X$ following the binomial distribution $\mathrm{Bin}(n,p)$ and $c > 0$, let
\[
f_n(p,c) = \Prob{X \leq c n p} = \sum_{i=0}^{\lfloor c n p \rfloor} \binom{n}{i} p^i (1-p)^{n-i}.
\]
For $c > 0$, set
\begin{equation}
\label{eqn:HDefn}
\mathcal {H}(c) = c-c \log c - 1.
\end{equation}
Observe that by \eqref{eqn:aDefn} and \eqref{eqn:HDefn} we have that $(1+\varepsilon)\mathcal {H}(a(\varepsilon)) = -1$.
It can be shown (see Lemma 3.3 in \cite{kolokolnikov2014algebraic}) that for $p = \Theta(\log n / n)$
there exist constants $c_1, c_2 > 0$ such that
\begin{equation}
\label{eqn:fBounds}
\frac{c_1 e^{np \mathcal {H}(c)}}{\sqrt{np}} \leq f_n(p, c) \leq c_2 \sqrt{np} e^{np \mathcal {H}(c)}.
\end{equation}
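The order of these bounds is easy to probe numerically. The Python check below (one arbitrary parameter point, with deliberately generous constants) confirms that the exact lower tail $f_n(p,c)$ sits within a $\sqrt{np}$-sized multiplicative window around $e^{np\mathcal{H}(c)}$:

```python
import math

def f(n, p, c):
    """P(Bin(n, p) <= c n p), the lower-tail probability from the proof."""
    k = math.floor(c * n * p)
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def H(c):
    return c - c * math.log(c) - 1

n = 2000
p = math.log(n) / n          # so np = log n, of the order considered here
c = 0.5
centre = math.exp(n * p * H(c))
ratio = f(n, p, c) / centre

# the sandwich c1 / sqrt(np) <= f / e^{np H(c)} <= c2 sqrt(np), with room to spare
assert 1 / (10 * math.sqrt(n * p)) <= ratio <= 10 * math.sqrt(n * p)
```

Note that $\mathcal{H}(c) < 0$ on $(0,1)$, so $e^{np\mathcal{H}(c)}$ is exponentially small in $np$, matching the exponentially small tail.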
Recall that for a set $\sigma \subset [n]$ of $d$ vertices, $\dgr{\sigma}$ denotes the number of $d$-dimensional faces in $Y$ containing $\sigma$. Clearly, $\dgr{\sigma}$ follows the binomial distribution $\mathrm{Bin}(n-d,p)$.
The application of~\eqref{eqn:fBounds} to the random variable $\dgr{\sigma}$ yields:
\begin{equation}
\label{eqn:factualBounds}
\frac{c_1 e^{np \mathcal {H}(c)}}{2\sqrt{np}} \leq \frac{c_1 e^{(n-d)p \mathcal {H}(c)}}{\sqrt{(n-d)p}} \leq f_{n-d}(p, c) \leq c_2 \sqrt{(n-d)p} e^{(n-d)p \mathcal {H}(c)} \leq c_2 \sqrt{np} e^{np \mathcal {H}(c)}.
\end{equation}
Furthermore,
\[
a n p \pm \sqrt{np} = a(n-d)p \pm \sqrt{np} + a dp= \left ( a \pm \frac{\sqrt{np}}{(n-d)p} +\frac{a d}{n-d}\right ) (n-d)p.
\]
Since $a(\varepsilon) \in (0,1)$ for all $\varepsilon > 0$ (see Remark 1.3 in \cite{kolokolnikov2014algebraic}), by taking
\[
c_0^\pm = a \pm \frac{\sqrt{np}}{(n-d)p} + \frac{ad}{n-d}
\]
we can use \eqref{eqn:factualBounds} to see that
\begin{align*}
\Prob{\dgr{\sigma} \leq a n p - \sqrt{np} } & \leq c_2 \sqrt{np} e^{np \mathcal {H}(c_0^-)} \\
& = c_2 \sqrt{(1+\varepsilon)d \log n} \cdot \exp ((1+\varepsilon)d \log n \mathcal {H}(c_0^-))
\end{align*}
and
\begin{align*}
\Prob{\dgr{\sigma} \leq a n p + \sqrt{np} } & \geq \frac{c_1 e^{np \mathcal {H}(c_0^+)}}{2\sqrt{np}} \\
& = \frac{c_1 \exp((1+\varepsilon)d \log n \mathcal {H}(c_0^+))}{2\sqrt{(1+\varepsilon)d \log n}}.
\end{align*}
Since $\mathcal {H}(c)$ is continuously differentiable on $(0,1)$, with $\mathcal {H}'(c) = -\log c$ positive there, we have that
\[
\mathcal {H}(c_0^\pm) = \mathcal {H}(a) \pm \frac{(1+o(1))\mathcal {H}'(a)}{\sqrt{(1+\varepsilon)d \log n}}.
\]
We start by showing that w.h.p. we have $\delta (Y) \geq a n p - \sqrt{np}$.
For a fixed subset $\sigma$ of size $d$ we have that
\begin{eqnarray*}
\lefteqn{\Prob{\dgr{\sigma} \leq a n p - \sqrt{np} } \leq}\\
&& c_2 \sqrt{(1+\varepsilon)d \log n} \exp \left ( (1+\varepsilon)d \log n \left ( \mathcal {H}(a) - \frac{(1+o(1))\mathcal {H}'(a)}{\sqrt{(1+\varepsilon)d \log n}} \right ) \right ) \\
& &= c_2 \sqrt{(1+\varepsilon)d \log n} \exp \left (- d \log n - \Theta \left ( \sqrt{\log n} \right ) \right ) \\
& &= n^{-d} \exp \left (- \Theta \left ( \sqrt{\log n} \right ) \right ).
\end{eqnarray*}
Therefore the expected number of $d$-element sets $\sigma$ with $\dgr{\sigma} \leq a n p - \sqrt{np}$ is at most
\[
\binom{n}{d} n^{-d} \exp \left (- \Theta \left ( \sqrt{\log n} \right ) \right ) = o(1),
\]
and consequently w.h.p. we have $\delta (Y) \geq a n p - \sqrt{np}$.
To bound $\delta (Y)$ from above, we first bound from below the probability that the co-degree of a fixed $d$-set is small.
Using that $(1+\varepsilon) \mathcal {H}(a)=-1$, we have
\begin{align*}
\Prob{\dgr{\sigma} \leq a n p + \sqrt{np} } & \geq \frac{c_1\exp \left ( (1+\varepsilon)d \log n \left ( \mathcal {H}(a) + \frac{(1+o(1))\mathcal {H}'(a)}{\sqrt{(1+\varepsilon)d \log n}}\right ) \right )}{2\sqrt{(1+\varepsilon)d \log n}} \\
& = \frac{1}{2\sqrt{(1+\varepsilon)d \log n}} c_1\exp \left (- d \log n + \Theta \left ( \sqrt{\log n} \right ) \right ) \\
& = n^{-d} \exp \left (\Theta \left ( \sqrt{\log n} \right ) \right ).
\end{align*}
Let $X_\sigma = \indicator{\dgr{\sigma} \leq a n p + \sqrt{np}}$ and let $N_0 = \sum_{\sigma \in Y^{(d-1)}} X_\sigma$ denote the number of $d$-element subsets $\sigma$ with $\dgr{\sigma} \leq a n p + \sqrt{np}$. Hence, letting $\mu = \E{N_0}$ we have
\begin{equation}
\label{eqn:mu}
\mu = \binom{n}{d} f_{n-d} (p, c_0^+) \geq \exp \left (\Theta \left ( \sqrt{\log n} \right ) \right ) \to \infty
\end{equation}
as $n \to \infty$.
By Chebyshev's inequality we then have
\begin{equation}
\label{eqn:chebyshev}
\Prob{|N_0-\mu| > \mu/2} \leq \frac{4\mathop{\rm Var}\nolimits(N_0)}{\mu^2}.
\end{equation}
The co-degrees of two subsets $\sigma, \sigma'$ are independent whenever $|\sigma \cap \sigma'| \neq d-1$. Thus the variance of $N_0$ satisfies
\begin{align*}
\mathop{\rm Var}\nolimits(N_0) & = \sum_{\sigma \in Y^{(d-1)}} \mathop{\rm Var}\nolimits(X_\sigma) + \sum_{\sigma, \sigma' \in Y^{(d-1)} : |\sigma \cap \sigma'| = d-1} \mathop{\rm Cov}\nolimits(X_\sigma, X_{\sigma'}) \\
& \leq \binom{n}{d} f_{n-d} (p, c_0^+) + n^{d+1} \mathop{\rm Cov}\nolimits(X_\sigma, X_{\sigma'}),
\end{align*}
where $\sigma, \sigma'$ are two fixed sets satisfying $|\sigma \cap \sigma'| = d-1$. Since
\begin{equation}
\label{eqn:covariance}
\mathop{\rm Cov}\nolimits(X_\sigma, X_{\sigma'}) = \Prob{X_\sigma = X_{\sigma'} = 1} - (f_{n-d} (p, c_0^+))^2,
\end{equation}
we focus on the value of $\Prob{X_\sigma = X_{\sigma'} = 1}$. Let $\dg{\sigma}{\setminus \sigma'}$ denote the number of $d$-dimensional faces that contain a $d$-subset $\sigma$ but do not contain the $d$-subset $\sigma'$. Using the law of total probability, conditioning on the presence or absence of the unique face that contains both $\sigma$ and $\sigma'$, and
since $\dg{\sigma}{\setminus \sigma'}$ and $\dg{\sigma'}{\setminus \sigma}$ are identically distributed,
we have
\begin{eqnarray} \label{eq:covs}
\lefteqn{\Prob{X_\sigma = X_{\sigma'} = 1} =} \nonumber \\
& & \Prob {\dg{\sigma}{\setminus \sigma'} + 1 \leq a n p + \sqrt{np} }
\Prob {\dg{\sigma'}{\setminus \sigma} + 1 \leq a n p + \sqrt{np} } p \nonumber \\
& &+ \Prob{\dg{\sigma}{\setminus \sigma'} \leq a n p + \sqrt{np} } \Prob {\dg{\sigma'}{\setminus \sigma} \leq a n p + \sqrt{np} } (1-p) \nonumber \\
& &\leq \left[ \Prob {\dg{\sigma}{\setminus \sigma'} + 1 \leq a n p + \sqrt{np}} \right ]^2 p +
\left[ \Prob{\dg{\sigma}{\setminus \sigma'} \leq a n p + \sqrt{np} } \right]^2.
\end{eqnarray}
Note that the random variable $\dg{\sigma}{\setminus \sigma'}$ follows the binomial distribution
$\mathrm{Bin} (n-d-1,p)$ and is therefore stochastically dominated by $\mathrm{Bin} (n-d,p)$.
So
\begin{eqnarray*}
\lefteqn{\Prob {\dg{\sigma}{\setminus \sigma'} + 1 \leq a n p + \sqrt{np}} =} \\
& & \Prob {\mathrm{Bin} (n-d-1,p) \leq a n p + \sqrt{np}-1} \leq \Prob{\mathrm{Bin} (n-d,p) \leq a np + \sqrt{np}}
= f_{n-d} (p, c_0^+).
\end{eqnarray*}
For the second term in~\eqref{eq:covs}, we have
\begin{align*}
\Prob{\dg{\sigma}{\setminus \sigma'} \leq a n p + \sqrt{np} } & = \sum_{j=0}^{\lfloor a n p + \sqrt{np} \rfloor} \binom{n-d-1}{j} p^j (1-p)^{n-d-1-j} \\
& \leq \sum_{j=0}^{\lfloor a n p + \sqrt{np} \rfloor} \binom{n-d}{j} p^j (1-p)^{n-d-1-j} \\
& = \left ( 1 + \frac{p}{1-p} \right ) \sum_{j=0}^{\lfloor a n p + \sqrt{np} \rfloor} \binom{n-d}{j} p^j (1-p)^{n-d-j} \\
&\stackrel{1-p> 1/2}{\leq}(1+2p) f_{n-d} (p, c_0^+).
\end{align*}
Hence for $n$ large enough we have
\[
\Prob{X_\sigma = X_{\sigma'} = 1} \leq (p + (1+2p)^2) \left( f_{n-d} (p, c_0^+) \right )^2
\leq (1+6p) \left ( f_{n-d} (p, c_0^+) \right )^2,
\]
and consequently
\begin{equation}
\label{eqn:covIneq}
\mathop{\rm Cov}\nolimits(X_\sigma, X_{\sigma'}) \leq 6p (f_{n-d} (p, c_0^+))^2.
\end{equation}
Thus by \eqref{eqn:chebyshev}, \eqref{eqn:covariance}, and \eqref{eqn:covIneq}, we obtain
\begin{align*}
\Prob{|N_0-\mu| > \mu/2} & \leq \frac{\mu + 6p n^{d+1} (f_{n-d} (p, c_0^+))^2}{\mu^2} \\
& = \frac{1}{\mu} + O(n^{-d} \log n) = o(1),
\end{align*}
since we have $\mu \to \infty$. Thus with high probability we have that the minimum co-degree of a $d$-element set is at most $a n p + \sqrt{np}$ and the lemma holds.
\end{proof}
For $k \in \mathbb Z$, let $W_k(x)$ be the $k$-th branch of the Lambert $W$ function, defined as
\[
W_k(x) e^{W_k(x)} = x.
\]
We now discuss some further properties of the function $a(\varepsilon)$ defined in \eqref{eqn:aDefn}.
The following lemma shows that for small $\varepsilon$ the function $a(\varepsilon)$ is bounded above by a linear function of $\varepsilon$.
We will use this bound in the next section.
\begin{lem} \label{lem:a_bound}
We have
\begin{equation}
\label{eqn:aformula}
a = a(\varepsilon) = \exp \left ( 1 + W_{-1} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right ) \right ) < \min \left \{1, 0.33 \varepsilon \right \}
\end{equation}
for all $\varepsilon > 0$.
\end{lem}
\begin{proof}
First, note that we can rewrite \eqref{eqn:aDefn} as
\[
\frac{\varepsilon}{1+\varepsilon} = (1-\log a)a = - e(\log a-1) \exp(\log a - 1),
\]
so we have that $\log a - 1 = W_k(-\frac{\varepsilon}{e(1+\varepsilon)})$ for some $k \in \mathbb Z$, and consequently that
\[
a (\varepsilon) = \exp \left ( 1 + W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right ) \right ).
\]
Since only the $0$th and $(-1)$th branches of the Lambert $W$ function take real values, we must have $k=0$ or $k=-1$. Finally, for $-1/e<x<0$ we have $W_{-1}(x) < -1 < W_{0}(x)$, and since $a(\varepsilon) \in (0,1)$ forces $\log a - 1 < -1$, we conclude that $k = -1$ for all $\varepsilon > 0$.
We now move on to showing that $a(\varepsilon) < \min \{1, 0.33\varepsilon \}$. The bound $a(\varepsilon) < 1$ follows from Remark 1.3 in \cite{kolokolnikov2014algebraic}. Hence we focus on the bound $a(\varepsilon) < 0.33\varepsilon$.
First, using the property that for any branch of the Lambert $W$ function and any $z \in (-e^{-1}, 0)$ we have $W'(z) = \frac{W(z)}{z(1+W(z))}$, and that $e^{W(z)} = z/W(z)$, we obtain
\begin{align*}
a'(\varepsilon) & = \exp \left ( 1 + W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right ) \right ) \frac{W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right )}{- \frac{\varepsilon}{e(1+\varepsilon)}\left(1+W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right )\right)} \frac{-1}{e(1+\varepsilon)^2} \\
& = - \frac{\exp \left ( W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right ) \right )}{-\frac{\varepsilon}{e(1+\varepsilon)}} \frac{W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right )}{\left(1+W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right)\right)} \frac{1}{(1+\varepsilon)^2} \\
& = - \frac{1}{(1+\varepsilon)^2 \left(1+W_{k} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right) \right)}.
\end{align*}
It was shown by Chatzigeorgiou \cite{chatzigeorgiou2013lambert} that for $u > 0$ we have
\[
W_{-1} \left ( -e^{-u-1} \right ) < -1 -\sqrt{2u} - \frac{2u}{3}.
\]
Taking $u = \log \frac{1+\varepsilon}{\varepsilon}$ leads to the bound
\[
a'(\varepsilon) < \frac{1}{(1+\varepsilon)^2 \left ( \sqrt{2 \log \frac{1+\varepsilon}{\varepsilon}} + \frac{2}{3} \log \frac{1+\varepsilon}{\varepsilon} \right )}.
\]
(Recall that $W_{-1}(z) < -1$, so $1+W_{-1}(z)$ is negative; hence $a'(\varepsilon)$ is positive and the displayed bound follows by taking absolute values.) Since $(1+\varepsilon)^2$ is increasing in $\varepsilon$, and $\sqrt{2 \log \frac{1+\varepsilon}{\varepsilon}} + \frac{2}{3} \log \frac{1+\varepsilon}{\varepsilon}$ is decreasing in $\varepsilon$, for $0 < \varepsilon \leq 1/5$ we have
\[
a'(\varepsilon) < \frac{1}{\left ( \sqrt{2 \log \frac{1.2}{0.2}} + \frac{2}{3} \log \frac{1.2}{0.2} \right )} < 0.33.
\]
Also, $a''(\varepsilon)$ is positive for $\varepsilon < \varepsilon_0 < 0.189$, and negative for $\varepsilon > \varepsilon_0$, so the maximum value of $a'(\varepsilon)$ is obtained for some $0< \varepsilon < 0.189$, where $a'(\varepsilon) < 0.33$. Since we have $W_{-1} \left ( - \frac{\varepsilon}{e(1+\varepsilon)} \right ) \to -\infty$ as $\varepsilon \to 0$, and consequently $a(\varepsilon) \to 0$ as $\varepsilon \to 0$, we have that $a(\varepsilon) < 0.33\varepsilon$ for all $\varepsilon > 0$.
\end{proof}
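The fixed point $a(\varepsilon)$ and the bound just proved can also be checked numerically: since $a \mapsto (1-\log a)a$ is strictly increasing on $(0,1)$ (its derivative is $-\log a > 0$), bisection on the defining equation applies. This is a small sanity-check sketch (our own naming, Python standard library only), not part of the proof.

```python
import math

def a_of(eps):
    # Solve (1 - log a) * a = eps / (1 + eps) for a in (0, 1) by bisection;
    # the left-hand side increases from 0 to 1 on (0, 1).
    target = eps / (1 + eps)
    lo, hi = 1e-300, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if (1 - math.log(mid)) * mid < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Check a(eps) < min(1, 0.33 * eps) across several scales of eps.
for eps in (0.01, 0.1, 0.5, 1.0, 3.0, 10.0, 100.0):
    a = a_of(eps)
    assert 0 < a < min(1.0, 0.33 * eps)
```

For instance, $a(0.5) \approx 0.101 < 0.165$ and $a(100) \approx 0.865 < 1$, consistent with the lemma.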
\section{Cheeger constant}
\label{sec:cheeger}
In this section we consider a measure of expansion of a simplicial complex called its \emph{Cheeger constant}, which was defined in~\eqref{eq:Cheeger_def}.
As the main result in this section we prove the following theorem.
\begin{thm}
\label{thm:mainCheeger}
Let $p = (1+\varepsilon) \frac{d \log n}{n}$, let $Y = Y(n,p;d)$, and let $a=a(\varepsilon)$ be as in~\eqref{eqn:aDefn}. There exists a constant $C > 0$ such that w.h.p. we have
\begin{equation}
\label{eqn:mainCheeger}
(1+\varepsilon) a d \log n - C\sqrt{\log n} \leq h(Y) \leq \delta (Y) \leq (1+\varepsilon) a d \log n + C\sqrt{\log n} .
\end{equation}
\end{thm}
\begin{proof}
The upper bound on $h(Y)$ follows immediately from Lemma \ref{lem:minDegree}. Indeed, if $A = \{a_0, \ldots, a_{d-1} \}$ is a $d$-element set with the minimum co-degree, then w.h.p. taking $A_i = \{ a_i \}$ for $0 \leq i \leq d-1$ and $A_d = [n] \setminus A$ gives us a partition with the desired value of
$|F(A_0, A_1, \ldots, A_d)| = \delta (Y)$.
Thus, $h(Y) \leq \delta (Y)$.
So now we focus on lower-bounding $h(Y)$. First, observe that for a given set of values of $|A_0|, |A_1|, \ldots, |A_d|$, the value of $|F(A_0, A_1, \ldots, A_d)|$ follows the binomial distribution $\mathrm{Bin} (\prod_{i=0}^d |A_i|, p)$.
Without loss of generality let us assume that $|A_0| \leq |A_1| \leq \ldots \leq |A_d|$; note that this implies that $|A_d| \geq n/(d+1)$. We shall consider three possible cases, depending on the size of the second largest set $A_{d-1}$.
First, let us assume that $|A_{d-1}| \geq n/(\log n)^{1/2}$, and set $|A_d| = \alpha n$ for some $\alpha \in [1/(d+1),1]$. In this case by \eqref{eqn:ChernoffLower} we have
\begin{align*}
& \Prob{n \cdot |F(A_0, A_1, \ldots, A_d)| \leq (1+\varepsilon) \prod_{i=0}^d |A_i| \log n} \\
& \qquad = \Prob{\mathrm{Bin} \left (\alpha n \prod_{i=0}^{d-1} |A_i|, (1+\varepsilon) \frac{d \log n}{n} \right ) \leq (1+\varepsilon) \alpha \prod_{i=0}^{d-1} |A_i| \log n} \\
& \qquad \leq \exp \left ( - \left ( \frac{(1+\varepsilon)d - (1+\varepsilon)}{(1+\varepsilon)d} \right ) ^2 (1+\varepsilon) \alpha d \log n \prod_{i=0}^{d-1} |A_i| / 2 \right ) \\
& \qquad = \exp \left (- \Omega \left (n (\log n)^{1/2} \right ) \right ),
\end{align*}
where the last equality follows from $|A_{d-1}| \geq n/(\log n)^{1/2}$. Since there are at most $(d+1)^n$ partitions of $V$ into $d+1$ disjoint sets, by the union bound we see that with probability $1-o(1)$ every partition with $|A_{d-1}| \geq n/(\log n)^{1/2}$ has
\[
\frac{n \cdot |F(A_0, A_1, \ldots, A_d)|}{\prod_{i=0}^d |A_i|} > (1+\varepsilon) \log n > (1+\varepsilon) a(\varepsilon) \log n,
\]
where the last inequality follows from $a(\varepsilon) < 1$.
Now, assume that $C(d, \varepsilon) \leq |A_{d-1}| < n/(\log n)^{1/2}$ for some constant $C(d, \varepsilon) > 0$ to be determined later. By the assumption that $|A_0| \leq |A_1| \leq \ldots \leq |A_d|$, this implies that $|A_d| = (1-o(1))n$. Bounding crudely, there are at most $n^{d}$ possible choices of the sizes $|A_0|, |A_1|, \ldots, |A_d|$ (after selecting $|A_0|, |A_1|, \ldots, |A_{d-1}|$, the last size is determined as $|A_d| = n - (|A_0| + |A_1| + \ldots + |A_{d-1}|)$), and for each such choice there are at most $n^{|A_0|+|A_1|+ \ldots + |A_{d-1}|}$ possible partitions. Hence it suffices to show that, for a given partition, the probability that our lower bound on the Cheeger constant fails is
\[
o \left ( n^{-(d+|A_0|+|A_1|+ \ldots + |A_{d-1}|)} \right ).
\]
We will show that in this case with probability $1-o \left ( n^{-(d+|A_0|+|A_1|+ \ldots + |A_{d-1}|)} \right )$ we have
\[
\frac{n \cdot |F(A_0, A_1, \ldots, A_d)|}{\prod_{i=0}^d |A_i|} \geq (1+\varepsilon) \min \left \{1, 0.33\varepsilon \right \} \log n
\stackrel{Lemma~\ref{lem:a_bound}}{\geq} (1+\varepsilon) a(\varepsilon) \log n.
\]
Since $|A_d| = (1-o(1))n$, we have
\begin{align*}
& \Prob{n \cdot |F(A_0, A_1, \ldots, A_d)| \leq (1+\varepsilon) \min \left \{1, 0.33\varepsilon \right \} \prod_{i=0}^d |A_i| \log n} \\
& \qquad \leq \Prob{\mathrm{Bin} \left ((1-o(1)) n \prod_{i=0}^{d-1} |A_i|, (1+\varepsilon) \frac{d \log n}{n} \right ) \leq (1+\varepsilon) \min \left \{1, 0.33\varepsilon \right \} \prod_{i=0}^{d-1} |A_i| \log n} \\
& \qquad \leq \exp \left ( - \left ( \frac{(1-o(1))(1+\varepsilon)d - (1+\varepsilon) \min \left \{1, 0.33\varepsilon \right \}}{(1-o(1))(1+\varepsilon)d} \right ) ^2 (1+\varepsilon) d \log n \prod_{i=0}^{d-1} |A_i| / 2 \right ) \\
& \qquad = \exp \left ( - \left ( \frac{d - (1+o(1)) \min \left \{1, 0.33\varepsilon \right \}}{d} \right ) ^2 (1+\varepsilon) d \log n \prod_{i=0}^{d-1} |A_i| / 2 \right ).
\end{align*}
We note that, since $d \geq 2$, if $\varepsilon > 3.01$ then for $n$ large enough we can bound
\begin{align*}
\left ( \frac{d - (1+o(1)) \min \left \{1, 0.33\varepsilon \right \}}{d} \right ) ^2 (1+\varepsilon) & \geq \left ( \frac{d - (1+o(1))}{d} \right ) ^2 (1+\varepsilon) \\
& \geq \left ( \frac{1-o(1)}{2} \right ) ^2 \cdot 4.01 > 1.001.
\end{align*}
On the other hand, if $\varepsilon \in (0,3.01]$ then we have that
\begin{align*}
\left ( \frac{d - \min \left \{1, 0.33\varepsilon \right \}}{d} \right ) ^2 (1+\varepsilon) - 1 & = \left ( 1 - 0.33\varepsilon/d \right ) ^2 (1+\varepsilon) - 1 \\
& \geq \left ( 1 - 0.33\varepsilon/2 \right ) ^2 (1+\varepsilon) - 1.
\end{align*}
As a polynomial in $\varepsilon$, this is positive on $(0,3.04)$. So for any $\varepsilon \in (0,3.01]$, if $n$ is large enough then we also have $\left ( \frac{d - (1+o(1)) \min \left \{1, 0.33\varepsilon \right \}}{d} \right ) ^2 (1+\varepsilon) = 1+\delta_\varepsilon > 1$.
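The positivity claim is elementary; as a quick numerical check (a throwaway sketch, not part of the proof), one can sample the polynomial $q(\varepsilon) = (1-0.33\varepsilon/2)^2(1+\varepsilon)-1$ on a fine grid:

```python
def q(eps):
    # The polynomial (1 - 0.33*eps/2)^2 * (1 + eps) - 1 from the d = 2 case.
    return (1 - 0.165 * eps) ** 2 * (1 + eps) - 1

# q factors as eps * (0.67 - 0.302775*eps + 0.027225*eps^2); the smaller
# positive root of the quadratic is approximately 3.049, so q > 0 on (0, 3.04].
assert all(q(k / 1000) > 0 for k in range(1, 3041))
assert q(3.05) < 0
```

The check at $\varepsilon = 3.05$ shows that the interval $(0, 3.04)$ is close to best possible for this bound.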
Since for all $0 \leq i \leq d-2$ we have $|A_i| \geq 1$, we can set $|A_0|+|A_1|+ \ldots + |A_{d-1}| = d-1+M$ for some $M \geq C(d, \varepsilon)$. Given that, the minimum value of $\prod_{i=0}^{d-1} |A_i|$ is attained when $|A_0| = \ldots = |A_{d-2}| = 1$ and $|A_{d-1}| = M$. This gives
\begin{align*}
\min \{ 1.001, 1+\delta_\varepsilon \} \frac{d}{2} \prod_{i=0}^{d-1} |A_i| & \geq \min \{ 1.001, 1+\delta_\varepsilon \} \frac{dM}{2} \\
& \geq \min \{ 1.001, 1+\delta_\varepsilon \} M \geq 2d + M
\end{align*}
when $M \geq \max \{2000d, 2d/\delta_\varepsilon\}$; this condition holds once we fix $C(d, \varepsilon) = \max \{2000d, 2d/\delta_\varepsilon\}$, since $M \geq |A_{d-1}| \geq C(d, \varepsilon)$. Hence we obtain
\begin{eqnarray*}
&&\Prob{n \cdot |F(A_0, A_1, \ldots, A_d)| \leq (1+\varepsilon) \min \left \{1, 0.33\varepsilon \right \} \prod_{i=0}^d |A_i| \log n}\leq \\
& & \hspace{5cm} \exp \left ( - \left (d + 1 +\sum_{i=0}^{d-1} |A_i| \right ) \log n\right ).
\end{eqnarray*}
Hence, the union bound completes the proof for this case.
The final remaining case is when $|A_{d-1}| < C(d, \varepsilon)$ for $C(d, \varepsilon) > 0$ constant. In this case we can use Lemma \ref{lem:minDegree}. Since we know that w.h.p. the minimum co-degree of a $d$-element set is at least $(1+\varepsilon)a(\varepsilon)d \log n - C \sqrt{\log n}$,
it follows that w.h.p. for any selection of elements $a_0 \in A_0, \ldots, a_{d-1} \in A_{d-1}$ there are at least $(1+\varepsilon)a(\varepsilon)d \log n - C \sqrt{\log n} - d C(d, \varepsilon)$ faces $\{a_0, \ldots, a_{d-1}, a_d \}$ with $a_d \in A_d = [n] \setminus \bigcup_{i=0}^{d-1} A_i$. Thus w.h.p. for any such partition we have
\begin{align*}
\frac{n \cdot |F(A_0, A_1, \ldots, A_d)|}{\prod_{i=0}^d |A_i|} & \geq \frac{n \prod_{i=0}^{d-1} |A_i| ((1+\varepsilon)a(\varepsilon)d \log n - C \sqrt{\log n} - d C(d, \varepsilon))}{n \prod_{i=0}^{d-1} |A_i|} \\
& \geq (1+\varepsilon)a(\varepsilon)d \log n - 2C \sqrt{\log n}.
\end{align*}
This completes the proof of Theorem \ref{thm:mainCheeger}.
\end{proof}
\section{Algebraic expansion}
\label{sec:laplacians}
Recall from \eqref{eq:gap-def} that for a $d$-dimensional simplicial complex $Y$ with complete skeleton, the \emph{spectral gap} $\lambda(Y)$ is the minimal eigenvalue of the upper Laplacian on $(d-1)$-cycles, i.e.,
\[
\lambda(Y) = \min \mathrm{Spec} \left ( \Delta^+ |_{Z_{d-1}} \right ).
\]
As our main result in this section, we prove the following theorem about the spectral gap of $Y(n,p;d)$.
\begin{thm}
\label{thm:laplacians}
Let $Y = Y(n,p;d)$ for $d \geq 2$ and $p = \frac{(1+\varepsilon)d \log n}{n}$, where $\varepsilon >0$.
There exists a constant $C > 0$ such that w.h.p. the spectral gap of $Y$ satisfies
\begin{equation}
\label{eqn:spectral}
\delta (Y) - C \sqrt{\log n} \leq \lambda(Y) \leq \delta (Y).
\end{equation}
\end{thm}
\begin{proof}
The upper bound in \eqref{eqn:spectral} follows immediately from Theorem \ref{thm:cheegerIneq}, which states that
$\lambda (Y) \leq h(Y)$, together with the observation at the beginning of the proof of Theorem~\ref{thm:mainCheeger} that $h(Y) \leq \delta (Y)$.
We now focus on the lower bound on $\lambda(Y)$. We will give a lower bound on $\langle \Delta^+ f, f\rangle$ for $f \in Z_{d-1}(Y)$ which relies on a decomposition of $\Delta^+$ into the Laplace operators
of the link graphs of all $(d-2)$-faces of $Y$.
For a $(d-2)$-dimensional face $\tau$ in $Y^{(d-2)}$, let $\mathrm{lk} \tau$ be the \emph{link graph} of face $\tau$, i.e., the graph with vertex set $Y^{(0)} \setminus \tau$, with $u,v \in Y^{(0)} \setminus \tau$ forming an edge if $\tau \cup \{u,v\} \in Y^{(d)}$. Note that when $\tau$ is a $(d-2)$-dimensional face in $Y(n,p;d)$, then $\mathrm{lk} \tau$ is a random graph with the $G(n-d+1,p)$ distribution.
For $\sigma \in Y^{(d-1)}$, we let $\dgg{\tau}{\sigma} = |\{\sigma' \in Y^{(d-1)} \ : \ \sigma' \sim \sigma, \ \tau \subset \sigma' \}|$. For convenience we write $\Omega^{(j)}$ for $\Omega^{(j)} (Y;\mathbb R)$.
We define the localised (on $\tau \in Y^{(d-2)}$) upper Laplace operator
$\Delta_\tau^+ : \Omega^{(d-1)} \to \Omega^{(d-1)}$ ;
for any $f \in \Omega^{(d-1)}$ and any $\sigma \in Y^{(d-1)}$ we set
\begin{equation} \label{eq:def_loc_Lapl}
(\Delta_\tau^+ f)(\sigma) = \begin{cases}
\dgg{\tau}{\sigma} f(\sigma) - \sum_{\sigma' : \sigma' \sim \sigma, \tau\subset \sigma'} f(\sigma') &
\mbox{if $\tau \subset \sigma$} \\
0 & \mbox{if $\tau \not \subset \sigma$}
\end{cases}.
\end{equation}
Furthermore, for a form $f \in \Omega^{(d-1)}$ and a face $\tau \in Y^{(d-2)}$, we define
$f_\tau : (\mathrm{lk} \tau)^{(0)} \to \mathbb R$ as $f_\tau (v) = f(v\tau)$.
The operator $\Delta_\tau^+$ has the same effect as the Laplace operator associated with
$\mathrm{lk} \tau$ (see 2. in the following theorem).
This is made precise in the following result by Garland \cite{garland1973p} (we state it as in Lemma 4.2
in~\cite{parzanchevski2016isoperimetric}).
\begin{thm}
\label{thm:garland}
Let $X$ be a $d$-dimensional simplicial complex and $f \in \Omega^{(d-1)}$.
Then the following hold.
\begin{enumerate}
\item[1.] $\Delta^+ = \sum_{\tau \in X^{(d-2)}} \Delta_\tau^+ - (d-1)D$, where
$(Df)(\sigma) = \dgr{\sigma} f(\sigma)$.
\item[2.] For any $\tau \in X^{(d-2)}$, \ $\langle \Delta_{\tau}^+ f,f\rangle = \langle\Delta^+_{\mathrm{lk} \tau} f_\tau , f_\tau \rangle$.
\item[3.] If $f \in Z_{d-1}(X)$, then $f_\tau \in Z_0 (\mathrm{lk} \tau)$.
\item[4.] $\sum_{\tau \in X^{(d-2)}} \langle f_\tau, f_\tau\rangle = d \langle f,f \rangle$.
\end{enumerate}
\end{thm}
For $f\in Z_{d-1} (Y)$ with $f \not = 0$, we seek a lower bound on $\langle \Delta^+ f,f\rangle$.
We will use the decomposition of $\Delta^+$
in terms of the local Laplace operators as in 1. of the above theorem. To make use of this, we will rewrite
the operator $D$ as a sum of localised versions of it. In particular, we write
$D_{\mathrm{lk} \tau} : \Omega^{(0)} (\mathrm{lk} \tau;\mathbb R) \to
\Omega^{(0)} (\mathrm{lk} \tau;\mathbb R)$ for the operator such that for $f' \in \Omega^{(0)} (\mathrm{lk} \tau;\mathbb R)$ and $v \in \mathrm{lk} \tau^{(0)}$
we have $(D_{\mathrm{lk} \tau} f')(v) = \dgg{\tau}{v\tau} f'(v)$. So, in particular, if $f \in \Omega^{(d-1)}$, then
$(D_{\mathrm{lk} \tau} f_\tau )(v) = \dgg{\tau}{v\tau} f(v\tau)$.
The following holds.
\begin{clm} \label{claim:D_decomp}
For any $f \in \Omega^{(d-1)} (Y)$, we have
$$ \langle Df,f\rangle = \frac{1}{d} \sum_{\tau \in Y^{(d-2)}} \langle D_{\mathrm{lk} \tau}f_\tau,f_\tau \rangle. $$
\end{clm}
\begin{proof} Let $f \in \Omega^{(d-1)} (Y)$. We write
\begin{eqnarray}
\langle Df,f \rangle &=& \sum_{\sigma \in Y^{(d-1)}} \dgr{\sigma} f^2(\sigma) =
\frac{1}{d} \sum_{\tau \in Y^{(d-2)}} \sum_{v \in \mathrm{lk} \tau^{(0)}} \dgr{v\tau} f^2 (v\tau) \nonumber \\
&=& \frac{1}{d} \sum_{\tau \in Y^{(d-2)}} \sum_{v \in \mathrm{lk} \tau^{(0)}} \dgg{\tau}{v\tau} f_\tau ^2 (v) \nonumber \\
&=& \frac{1}{d} \sum_{\tau \in Y^{(d-2)}} \langle D_{\mathrm{lk} \tau} f_\tau, f_\tau \rangle. \nonumber
\end{eqnarray}
\end{proof}
Note that if $A_{\mathrm{lk} \tau}$ is the adjacency matrix of $\mathrm{lk} \tau$, then
$$ \Delta^+_{\mathrm{lk} \tau} = D_{\mathrm{lk} \tau} - A_{\mathrm{lk} \tau}. $$
Hence using Theorem~\ref{thm:garland} parts 1. and 2., for any $f \in \Omega^{(d-1)}$ we can write
\begin{eqnarray}
\langle \Delta^+ f, f\rangle &\stackrel{Thm~\ref{thm:garland}~1.}{=}& \sum_{\tau \in Y^{(d-2)}}\langle \Delta_\tau^+f,f\rangle - (d-1)\langle Df,f\rangle \nonumber \\
&\stackrel{Thm~\ref{thm:garland}~2., \ Claim~\ref{claim:D_decomp}}{=}& \sum_{\tau \in Y^{(d-2)}}
\langle \Delta_{\mathrm{lk} \tau}^+ f_\tau,f_\tau \rangle - \frac{d-1}{d} \sum_{\tau \in Y^{(d-2)}} \langle D_{\mathrm{lk} \tau}f_\tau,f_\tau\rangle \nonumber \\
&=& \sum_{\tau \in Y^{(d-2)}} \langle (D_{\mathrm{lk} \tau} - A_{\mathrm{lk} \tau}) f_\tau,f_\tau\rangle - \frac{d-1}{d} \sum_{\tau \in Y^{(d-2)}} \langle D_{\mathrm{lk} \tau}f_\tau,f_\tau\rangle \nonumber \\
&=& \sum_{\tau \in Y^{(d-2)}} \left(\langle (D_{\mathrm{lk} \tau} - A_{\mathrm{lk} \tau}) f_\tau,f_\tau\rangle -
\frac{d-1}{d} \langle D_{\mathrm{lk} \tau}f_\tau,f_\tau\rangle \right) \nonumber \\
&=& \sum_{\tau \in Y^{(d-2)}} \left(\frac{1}{d} \langle D_{\mathrm{lk} \tau} f_\tau, f_\tau\rangle - \langle A_{\mathrm{lk} \tau} f_\tau, f_\tau \rangle \right).
\label{eq:Laplace_decomp}
\end{eqnarray}
But note that
$$ \langle D_{\mathrm{lk} \tau} f_\tau, f_\tau\rangle = \sum_{v \in \mathrm{lk} \tau^{(0)}} \dgg{\tau}{v\tau} f_\tau (v)^2 \geq \delta (Y) \langle f_\tau, f_\tau \rangle. $$
Thereby,
\begin{eqnarray}
\frac{1}{d} \sum_{\tau \in Y^{(d-2)}} \langle D_{\mathrm{lk} \tau} f_\tau, f_\tau \rangle &\geq& \delta(Y) \cdot \frac{1}{d}
\sum_{\tau \in Y^{(d-2)}} \langle f_\tau, f_\tau\rangle \stackrel{Thm~\ref{thm:garland}~4.}{=} \delta (Y) \frac{d}{d} \langle f,f\rangle \nonumber \\
&=& \delta(Y) \langle f,f \rangle. \label{eq:Ds_lowerbound}
\end{eqnarray}
Now, we will give an upper bound on $\sum_{\tau \in Y^{(d-2)}} \langle A_{\mathrm{lk} \tau} f_\tau, f_\tau \rangle$, for $Y=Y(n,p;d)$ and
$f\in Z_{d-1}$.
As we observed above, $\mathrm{lk} \tau$ is distributed as a $G(n- d+1,p)$ random graph for any
$\tau \in Y^{(d-2)} (n,p;d)$. Of course, these random graphs are not independent.
Hence, the simplest way to bound this sum from above is to give an upper bound on $\langle A_{\mathrm{lk} \tau} f_\tau, f_\tau\rangle$
that holds with probability $1- o(n^{-(d-1)})$ and apply the union bound over all $\tau \in Y^{(d-2)} (n,p;d)$, of which
there are $O(n^{d-1})$.
To this end, we will use some results by Feige and Ofek~\cite{ar:FeigeOfek2005} on the spectrum of the
adjacency matrix of $G(n,p)$.
\subsubsection*{The adjacency matrix of $G(n,p)$ and its spectral gap}
In brief, Feige and Ofek~\cite{ar:FeigeOfek2005} showed that the second largest eigenvalue of the adjacency matrix of $G(n,p)$ is
$O(\sqrt{np})$, provided that $np = \Omega (\log n)$. Let $A$ be the adjacency matrix of $G(n,p)$.
If $G(n,p)$ were regular, then the all-1s vector $\mathbf{1}$ would span the eigenspace of the leading eigenvalue
and the above result would imply that $\langle Af,f\rangle = O(\sqrt{np}) \langle f,f\rangle$ for any function $f$ on the vertex set of $G(n,p)$ such that $f \perp \mathbf{1}$. However, $G(n,p)$ is not regular but almost regular
in the sense that for any $\varepsilon >0$ w.h.p. most vertices have degrees $np(1\pm \varepsilon)$.
Feige and Ofek~\cite{ar:FeigeOfek2005} proved that despite this, the bound on this quadratic form still
holds w.h.p.
To state this more precisely, let
$S=\{ f : [n] \to \mathbb R : f \perp \mathbf{1} , \langle f , f\rangle \leq 1\} $ and fixing $0 < \delta <1$ we let
$$ T= \{ f : [n] \to \mathbb R : f \in \left\{ \frac{\delta}{\sqrt{n}} \mathbb Z \right\}^{[n]} , \ \langle f , f\rangle \leq 1\}. $$
The main theorem in~\cite{ar:FeigeOfek2005} is as follows.
\begin{thm}[Theorem 2.5 in~\cite{ar:FeigeOfek2005}] \label{thm:FeigeOfek_main}
Let $A$ be the adjacency matrix of $G(n,p)$, where $c_0 \frac{\log n}{n} \leq p \leq \frac{n^{1/3}}{n(\log n)^{1/3}}
$. For every $c>0$, there exists $c'>0$ such that with probability at least $1 - n^{-c}$, for any
$f,f' \in T$ we have
$$| \langle Af, f'\rangle | \leq c' \sqrt{np}. $$
\end{thm}
Furthermore, the authors state and prove the following claim.
\begin{clm} [Claim 2.4 in~\cite{ar:FeigeOfek2005}]\label{clm:Feige-Ofek 2.4}
Suppose that for some $c >0$ we have $|\langle Af , f'\rangle |< c \sqrt{np}$, for any $f,f' \in T$.
Then for any $f \in S$, we have $\langle Af, f \rangle < \frac{c}{(1-\delta)^2} \sqrt{np}$.
\end{clm}
From the above two statements the following result is deduced.
\begin{thm} \label{thm:adj_Gnp}
Let $A$ be the adjacency matrix of $G(n,p)$, where $c_0 \frac{\log n}{n} \leq p \leq \frac{n^{1/3}}{n(\log n)^{1/3}}
$. For every $c>0$, there exists $c''>0$ such that with probability at least $1 - n^{-c}$, for any
$f \in S$ we have
$$ \langle Af, f \rangle \leq c'' \sqrt{np}. $$
\end{thm}
\subsubsection*{Deducing the lower bound on $\lambda (Y(n,p;d))$}
We apply this in our setting, recalling that for any $\tau \in Y^{(d-2)}$, the link graph
$\mathrm{lk} \tau$ is distributed as $G(n-d+1,p)$ with $p = \frac{(1+\varepsilon) d \log n}{n}$.
Taking $c=d$ in Theorem~\ref{thm:adj_Gnp}, we deduce that for some constant $c_d''>0$
with probability at least $1- n^{-d}$ for any
$f_\tau \in Z_0 (\mathrm{lk} \tau)$, we have
$$\langle A_{\mathrm{lk} \tau} f_\tau, f_\tau \rangle \leq c_d'' \sqrt{np} \langle f_\tau, f_\tau \rangle. $$
The union bound (over all $\tau \in Y^{(d-2)}$) implies that for any $f\in Z_{d-1}(Y)$ we have w.h.p.
\begin{equation} \label{eq:adjacency_sum}
\sum_{\tau \in Y^{(d-2)}} \langle A_{\mathrm{lk} \tau}f_\tau, f_\tau \rangle \leq
c_d''\sqrt{np} \sum_{\tau \in Y^{(d-2)}} \langle f_\tau, f_\tau \rangle \stackrel{Thm~\ref{thm:garland}~4.}{=}
d c_d'' \sqrt{np} \langle f, f\rangle.
\end{equation}
Using the lower bound of~\eqref{eq:Ds_lowerbound} and the upper bound of~\eqref{eq:adjacency_sum},
Equation~\eqref{eq:Laplace_decomp} yields that w.h.p. for any $f\in Z_{d-1} (Y)$ we have
$$ \langle \Delta^+ f, f \rangle \geq \left(\delta (Y) - d c_d'' \sqrt{np} \right) \langle f, f \rangle.$$
But Lemma~\ref{lem:minDegree} states that $\delta (Y) \geq (1+ \varepsilon) a d \log n - C\sqrt{\log n}$ w.h.p.
for some $C>0$, where $a=a(\varepsilon)$ is the solution of~\eqref{eqn:aDefn}.
Hence, for some $C'>0$ w.h.p. for any $f \in Z_{d-1} (Y)$ such that $f \not = 0$ we have
$$ \frac{ \langle \Delta^+ f, f \rangle}{ \langle f, f \rangle} \geq \delta (Y) - C' \sqrt{\log n}. $$
Therefore, w.h.p.
$$\lambda (Y) \geq \delta (Y) - C' \sqrt{\log n}.$$
\end{proof}
\section{Combinatorial expansion} \label{sec:conductance}
Our lower bound on $\Phi_Y$ will rely on a lower bound on the edge expansion of the graph $(Y^{(d-1)}, E^{(d-1)})$, where the edge set $E^{(d-1)}$ consists of those distinct pairs $\sigma , \sigma' \in Y^{(d-1)}$ for which there exists
a $d$-face $\rho \in Y^{(d)}$ containing both $\sigma$ and $\sigma'$; in this case we write $\sigma \sim \sigma'$.
Hence, given a subset $S \subset Y^{(d-1)}$, we would like to bound from below the number of $d$-faces which
contain at least one $(d-1)$-face in $S$ and a $(d-1)$-face not in $S$. In other words, we would like to provide
a lower bound on the number of $d$-faces which are potential \emph{exits} for a random walk that starts
inside $S$.
To express this more precisely, we introduce some relevant notation.
For a $d$-dimensional complex $Y$ and $0 \leq k \leq d-1$, given $S \subset Y^{(k)}$ let
\[
\partial^+ S = \{\rho \ : \ |\rho| = k+2, \ \mbox{there exists \ } \sigma \in S \mbox{ such that } \sigma \subset
\rho \},
\]
that is, $\partial^+ S$ is the collection of $(k+1)$-dimensional faces of the complete complex on the vertex set of $Y$ that contain an element of $S$.
Analogously to the oriented case, for $\sigma \in Y^{(k)}$ where $1\leq k\leq d$ we define
\[
\partial \sigma = \{\tau \in Y^{(k-1)} : \tau \subset \sigma \}.
\]
For $S\subset Y^{(d-1)}$ we let $\overline{S} = Y^{(d-1)} \setminus S$. We write
\begin{eqnarray*}
Q(S, \overline{S}) &=& \sum_{\sigma \in S} \sum_{\sigma' \in \overline{S} : \sigma' \sim \sigma} \pi(\sigma) \cdot \frac{1}{d \cdot \dgr{\sigma}} = \frac{1}{d(d+1)\cdot |Y^{(d)}|}\sum_{\sigma \in S} \sum_{\sigma' \in \overline{S} : \sigma' \sim \sigma} \dgr{\sigma} \cdot \frac{1}{\dgr{\sigma}} \\
&=& \frac{1}{d(d+1) \cdot |Y^{(d)}|}\sum_{\sigma \in S} \sum_{\sigma' \in \overline{S} : \sigma' \sim \sigma} 1.
\end{eqnarray*}
With $B_S = \{ \sigma \in \partial^+ S : \partial \sigma \subset S \}$,
we bound
$$ \sum_{\sigma \in S} \sum_{\sigma' \in \overline{S} : \sigma' \sim \sigma} 1 \geq |\partial^+S \setminus B_S|.$$
The latter is the number of $d$-faces that are exits out of the set $S$.
Furthermore,
$$ \pi (S) = \frac{\sum_{\sigma \in S} \dgr{\sigma}}{(d+1) \cdot |Y^{(d)}|} \leq \frac{(d+1)\cdot | \partial^+S |}{(d+1) \cdot |Y^{(d)}|} = \frac{| \partial^+S|}{|Y^{(d)}|},$$
since each $d$-face in $\partial^+S$ contains at most $d+1$ of the $(d-1)$-faces in $S$.
Hence,
$$ \Phi_Y(S) \geq \frac{Q(S,\overline{S})}{\pi (S)} \geq \frac{1}{d(d+1)} \cdot \frac{|\partial^+S\setminus B_S|}{| \partial^+S|},$$
whereby
\begin{equation} \label{eq:conductance_lower}
\Phi_Y \geq \frac{1}{d(d+1)} \cdot \min_{S\subset Y^{(d-1)}: 0 < \pi (S) \leq \frac12} \frac{|\partial^+S\setminus B_S|}{| \partial^+S|}.
\end{equation}
In our proof, we will in fact use an upper bound on the number of $d$-faces in $\partial^+ S$ which are not exits.
These are the $d$-faces whose $(d-1)$-subsets all belong to $S$. To bound their number from above,
we will use a weak version of the Kruskal-Katona theorem, which provides an upper bound on the number of
complete subhypergraphs of a hypergraph with a given number of hyperedges. To apply this to our context, we consider the $d$-uniform hypergraph spanned by the $(d-1)$-faces in $S$. The number of complete subhypergraphs on $d+1$ vertices of this hypergraph is an upper bound on the number of $d$-faces in $\partial^+S$ which
are not exits.
Note that two $(d-1)$-faces $\sigma, \sigma' \in Y^{(d-1)}$ which are \emph{neighbours}, that is,
$\sigma \sim \sigma'$, satisfy $|\sigma \cap \sigma'| = d-1$. Given $X \subseteq Y^{(d-1)}$, we say that $X$ is \emph{tightly connected} if for any distinct $\sigma, \sigma' \in X$ there exist $m \geq 1$ and a sequence $\sigma = \delta_0, \ldots, \delta_m = \sigma' \in X$ such that $\delta_i \sim \delta_{i+1}$ for all $0 \leq i \leq m-1$.
As we observed earlier, the set of states visited by the random walk we consider is tightly connected.
To bound the quantity on the right-hand side of~\eqref{eq:conductance_lower},
we start with the following theorem, which
considers only subsets $S \subset Y^{(d-1)}$ which are tightly connected and $\pi (S) \leq 1/2$.
In fact, it suffices to consider those subsets $S$ with $|S| \leq \frac{1}{2}{n \choose d}$.
To see this, recall that Lemma~\ref{lem:minDegree} implies that w.h.p. $\dgr{\sigma} \geq d+1$ for every $\sigma \in Y^{(d-1)}$.
Hence, if $\pi (S) \leq 1/2$, then
$$ (d+1) |S| \leq \sum_{\sigma \in S} \dgr{\sigma}\leq \frac{d+1}{2} |Y^{(d)}| \leq \frac{d+1}{2} {n \choose d}. $$
This directly implies that $|S| \leq \frac{1}{2} {n \choose d}$.
\begin{thm}
\label{thm:mainExpansion}
Let $Y = Y(n,p;d)$ where $np=(1+\varepsilon)d\log n$ for $\varepsilon>0$ fixed. There exists $\delta > 0$ such that w.h.p. the following holds. For any tightly connected set $S \subset Y^{(d-1)}$ with $|S| \leq \tfrac{1}{2} \binom{n}{d}$ and
\[
B_S = \{ \sigma \in \partial^+ S : \partial \sigma \subset S \}
\]
we have
\begin{equation}
\label{eqn:mainExpansion}
| (\partial^+S \setminus B_S)\cap Y^{(d)} | \geq \delta | \partial^+S \cap Y^{(d)}|.
\end{equation}
\end{thm}
\begin{proof}
We shall assume that $n$ is large enough for the estimates in the proof to hold. Given $S \subset Y^{(d-1)}$ and $1 \leq i \leq d+1$, let
\[
F_i(S) = \{ \rho \in \partial^+S : | \partial \rho \cap S | = i \},
\]
and set $f_i(S) = |F_i(S)|$. Denoting $|S| = m$, by double counting, we have that
\begin{equation}
\label{eqn:fisum}
\sum_{i=1}^{d+1} i f_i(S) = m(n-d).
\end{equation}
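The double-counting identity \eqref{eqn:fisum} can be verified by brute force on a small instance; the following sketch (our own, Python standard library only) takes $d = 2$, $n = 7$ and a random set $S$ of $(d-1)$-faces:

```python
import itertools
import random

n, d = 7, 2
random.seed(1)

# S is a random set of (d-1)-faces, i.e. d-element subsets of [n].
all_faces = list(itertools.combinations(range(n), d))
S = set(random.sample(all_faces, 9))
m = len(S)

# f[i] = number of potential d-faces rho (i.e. (d+1)-element subsets)
# with exactly i of their d-element subsets lying in S.
f = {i: 0 for i in range(1, d + 2)}
for rho in itertools.combinations(range(n), d + 1):
    i = sum(1 for sigma in itertools.combinations(rho, d) if sigma in S)
    if i >= 1:
        f[i] += 1

# The identity sum_i i * f_i = m * (n - d): each sigma in S extends to
# exactly n - d potential d-faces.
assert sum(i * f[i] for i in f) == m * (n - d)
```

The assertion holds for any choice of $S$, since both sides count pairs $(\sigma, \rho)$ with $\sigma \in S$ and $\sigma \subset \rho$.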
For any $t \geq 2$, let $K_{t+1}^{(t)}$ be the complete $t$-uniform hypergraph on $t+1$ vertices, and for a $t$-uniform hypergraph $G$, let $K_{t+1}^{(t)}(G)$ denote the number of copies of $K_{t+1}^{(t)}$ in $G$. Viewing $S$ as a $d$-uniform hypergraph, we have $f_{d+1}(S) = K_{d+1}^{(d)}(S)$, i.e., $f_{d+1}(S) = |B_S|$. Hence, we shall use the following weak form of the Kruskal-Katona theorem (see Lov{\'a}sz \cite{lovaszcpe}).
\begin{thm}
\label{thm:kk}
Suppose $r \geq 1$ and $G$ is an $r$-uniform hypergraph with
\[
m = \binom{x_m}{r} = \frac{x_m(x_m-1)\ldots(x_m-r+1)}{r!}
\]
hyperedges, for some real number $x_m \geq r$. Then $K^{(r)}_{r+1}(G) \leq \binom{x_m}{r+1}$, with equality if and only if $x_m$ is an integer and $G = K^{(r)}_{x_m}$.
\end{thm}
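As a sanity check of Theorem \ref{thm:kk} (not used in the proof), one can compare $K^{(r)}_{r+1}(G)$ with $\binom{x_m}{r+1}$ by brute force on small random $r$-uniform hypergraphs. This sketch uses our own helper names and only the Python standard library; $x_m$ is recovered by bisection on the generalized binomial coefficient.

```python
import itertools
import math
import random

def gen_binom(x, r):
    # Generalized binomial coefficient x(x-1)...(x-r+1)/r! for real x.
    prod = 1.0
    for i in range(r):
        prod *= x - i
    return prod / math.factorial(r)

def x_of(m, r):
    # The real x_m >= r with binom(x_m, r) = m, found by bisection.
    lo, hi = float(r), float(r)
    while gen_binom(hi, r) < m:
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if gen_binom(mid, r) < m:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def count_complete(edges, r):
    # Number of (r+1)-vertex sets all of whose r-subsets are hyperedges.
    es = set(edges)
    verts = sorted(set(v for e in edges for v in e))
    return sum(
        all(f in es for f in itertools.combinations(T, r))
        for T in itertools.combinations(verts, r + 1)
    )

random.seed(0)
r, n = 3, 8
all_edges = list(itertools.combinations(range(n), r))
for _ in range(20):
    edges = random.sample(all_edges, random.randint(1, len(all_edges)))
    bound = gen_binom(x_of(len(edges), r), r + 1)
    assert count_complete(edges, r) <= bound + 1e-9
```

The equality case is also visible here: for $G = K^{(3)}_5$ we have $m = 10$, $x_m = 5$, and the count equals $\binom{5}{4} = 5$.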
Note that we can rewrite \eqref{eqn:fisum} to get
\begin{equation}
\label{eqn:fiineq}
\sum_{i=1}^{d} f_i(S) \geq \frac{1}{d} \left ( m(n-d) - (d+1)f_{d+1}(S) \right ) = \frac{nm}{d} \left ( 1-\frac{d}{n} - \frac{(d+1)f_{d+1}(S)}{nm} \right ) .
\end{equation}
Observe that, for a fixed set $S$, the random variable $| (\partial^+S \setminus B_S) \cap Y^{(d)}|$ follows the binomial distribution $\mathrm{Bin}(\sum_{i=1}^{d} f_i(S),p)$.
By Theorem \ref{thm:kk} we have
\[
\frac{f_{d+1}(S)}{nm} \leq \frac{1}{n} \frac{\binom{x_m}{d+1}}{\binom{x_m}{d}} = \frac{x_m-d}{n(d+1)}.
\]
Moreover, $x_m$ satisfies $m = \binom{x_m}{d} \geq \frac{(x_m-d)^d}{d!}$; thus, we obtain $x_m-d \leq (md!)^{1/d}$. This yields
\begin{equation}
\label{eqn:fd+1bound}
\frac{f_{d+1}(S)}{nm} \leq \frac{(md!)^{1/d}}{n(d+1)}.
\end{equation}
By \eqref{eqn:fiineq} and \eqref{eqn:fd+1bound} we obtain
\begin{equation}
\label{eqn:fi2ndineq}
\sum_{i=1}^{d} f_i(S) \geq \frac{nm}{d} \left ( 1-\frac{d}{n} - \frac{(md!)^{1/d}}{n} \right ) = \frac{nm}{d} \left ( 1 - \frac{(md!)^{1/d}}{n} - o(1) \right ) .
\end{equation}
For a fixed $S$ with $|S| = m$ we will bound from above the probability of the event that $| (\partial^+S \setminus B_S) \cap Y^{(d)} | \leq \delta | \partial^+S \cap Y^{(d)} |$. It can be easily seen that $| \partial^+S \cap Y^{(d)} |$ is stochastically
dominated by a random variable that has the $\mathrm{Bin}(nm,p)$ distribution. So by \eqref{eqn:ChernoffUpper} we have that
\begin{align}
\label{eqn:RHSbound}
\Prob{| \partial^+S \cap Y^{(d)} | \leq 3nmp} & \geq \Prob{\mathrm{Bin}(nm,p) \leq 3nmp} \nonumber \\
& \geq 1 - \exp \left ( -\frac{4nmp}{3} \right ) \nonumber \\
& = 1-\exp \left ( - \frac{(1+\varepsilon) 4 d m \log n}{3}\right ).
\end{align}
Hence, we will now aim to bound the probability that $| (\partial^+S \setminus B_S) \cap Y^{(d)} | \leq \delta 3nmp$. We are interested in sets $S$ of $(d-1)$-dimensional faces of size
\[
m \leq \frac{1}{2}\binom{n}{d} \leq \frac{n^d}{2d!},
\]
which implies that
\[
\frac{nm}{d} \left ( 1 - \frac{(md!)^{1/d}}{n} - o(1) \right ) \geq \frac{nm}{d} \left ( 1 -2^{-1/d} - o(1) \right ).
\]
Hence by \eqref{eqn:fi2ndineq}, the random variable $| (\partial^+S \setminus B _S)\cap Y^{(d)}|$ is stochastically
bounded from below by a random variable distributed as
$\mathrm{Bin} \left ( \lceil \frac{nm}{d} \left ( 1 -2^{-1/d} - o(1) \right ) \rceil, p\right )$.
Thus,
\begin{align*}
\Prob{| (\partial^+S \setminus B _S)\cap Y^{(d)}| \leq \delta 3nmp} \leq \Prob{\mathrm{Bin} \left (\left\lceil \frac{nm}{d} \left ( 1 -2^{-1/d} - o(1) \right )\right\rceil, p\right ) \leq \delta 3nmp }.
\end{align*}
Let
\[
\mu =\left\lceil \frac{nm}{d} \left ( 1 -2^{-1/d} - o(1) \right ) \right\rceil p\geq (1+\varepsilon) \left ( 1 -2^{-1/d} - o(1) \right ) m \log n.
\]
We have
\begin{align*}
\frac{\mu - \delta 3nmp}{\mu} \geq 1 - \frac{3 \delta d}{1 -2^{-1/d}} - o(1) > 0
\end{align*}
for
\[
\delta < \frac{ 1 -2^{-1/d} }{ 3 d }.
\]
Hence, by the Chernoff bound \eqref{eqn:ChernoffLower} we obtain
\begin{equation}
\label{eqn:LHSbound}
\Prob{|( \partial^+S \setminus B_S) \cap Y^{(d)} | \leq \delta 3nmp} \leq \exp \left ( - \frac{\left ( 1 -2^{-1/d} - 3 \delta d - o(1) \right )^2 }{2} \mu \right ).
\end{equation}
We shall consider two cases: 1. $n^{d(1-\alpha)} \leq m \leq \tfrac{1}{2} \binom{n}{d}$, and 2. $m < n^{d(1-\alpha)}$, for some $0 < \alpha < 1$ to be specified later.
Let us consider the case $m \geq n^{d(1-\alpha)}$ first. The number of sets $S$ of size $m$ is then at most
\begin{align*}
\binom{\binom{n}{d}}{m} & \leq \left ( \frac{e \binom{n}{d}}{m} \right )^m \leq \exp \left ( m + m \log \frac{n^d}{m} \right ) \leq \exp \left ( m + m \log n^{\alpha d} \right ) \\
& = \exp \left ( (1+ o(1)) \alpha d m \log n \right ).
\end{align*}
There are at most $n^d$ possible values of $m$ in that region. Hence, if
\[
\alpha < \frac{(1+\varepsilon) \left ( 1 -2^{-1/d} - o(1) \right ) \left ( 1 -2^{-1/d} - 3 \delta d - o(1) \right )^2 }{2d},
\]
then by \eqref{eqn:RHSbound}, \eqref{eqn:LHSbound}, and the union bound, we have that \eqref{eqn:mainExpansion} holds with probability $1-o(1)$ for all sets $S$ of size at least $n^{d(1-\alpha)}$.
Next, we consider the case when $m < n^{d(1-\alpha)}$, i.e., when $m^{1/d}/n < n^{-\alpha}$. By \eqref{eqn:fd+1bound} we have
\[
|B_S| = f_{d+1}(S) \leq mn \frac{(md!)^{1/d}}{(d+1)n} \leq mn^{1-\alpha} \frac{d}{d+1} \leq mn^{1-\alpha}.
\]
So $|B_S \cap Y^{(d)}|$ is stochastically bounded from above by a random variable
that is distributed as $\mathrm{Bin} (\lfloor mn^{1-\alpha}\rfloor ,p)$.
Let $k_0 = \lceil 3mn^{1-\alpha/2}p \rceil$. By the Chernoff bound~\eqref{eqn:ChernoffUpper} we have that
\begin{align}
\label{eqn:smallB}
\Prob{| B_S \cap Y^{(d)}| \geq k_0 } & \leq \Prob{\mathrm{Bin}(\lfloor mn^{1-\alpha}\rfloor,p) \geq (1+2n^{\alpha/2}) mn^{1-\alpha}p} \nonumber \\
& \leq \exp \left ( -\frac{4 n^\alpha n^{1-\alpha} mp}{3} \right ) \nonumber \\
& = \exp \left ( - \frac{(1+\varepsilon) 4d m \log n}{3}\right ).
\end{align}
By \eqref{eqn:fi2ndineq} and the fact that $m=o(n^d)$ we obtain
\[
\sum_{i=1}^{d} f_i(S) \geq \left ( 1 - o(1) \right ) \frac{nm}{d},
\]
which implies that $| ( \partial^+S \setminus B_S )\cap Y^{(d)} |$ stochastically dominates the $\mathrm{Bin} \left ( \frac{nm}{d} \left ( 1 - o(1) \right ), p\right )$ distribution. Since
\[
k_0 = o \left ( \E{\mathrm{Bin} \left ( \frac{nm}{d} \left ( 1 - o(1) \right ), p\right )} \right ),
\]
we then have that
\begin{align}
\label{eqn:largeShadow}
\Prob{| (\partial^+S \setminus B_S) \cap Y^{(d)} | \leq k_0} & \leq \Prob{\mathrm{Bin} \left ( \frac{nm}{d} \left ( 1 - o(1) \right ), p\right ) \leq k_0 } \nonumber \\
& = \sum_{k=0}^{k_0} \Prob{\mathrm{Bin} \left ( \frac{nm}{d} \left ( 1 - o(1) \right ), p\right ) = k } \nonumber \\
& \leq 2 k_0 \Prob{\mathrm{Bin} \left ( \frac{nm}{d} \left ( 1 - o(1) \right ), p\right ) = k_0 } \nonumber \\
& \leq 2 k_0 \binom{ \frac{nm}{d} \left ( 1 - o(1) \right )}{k_0} p^{k_0} (1-p)^{\frac{nm}{d} \left ( 1 - o(1) \right )-k_0} \nonumber \\
& \leq 2 k_0 \left ( \frac{ 3nmp }{dk_0} \right )^{k_0} (1-p)^{\frac{nm}{d} \left ( 1 - o(1) \right )-k_0} \nonumber \\
& \leq 2 k_0 \left ( \frac{ 3nmp }{dk_0} \right )^{k_0} \exp \left ( - \frac{nmp}{d} \left ( 1 - o(1) \right )+k_0p \right ) \nonumber \\
& = 2 k_0 \left ( \frac{ 3nmp }{dk_0} \right )^{k_0} \exp \left ( - (1 - o(1) ) (1+\varepsilon) m \log n \right ).
\end{align}
Now,
\begin{align}
\label{eqn:fewCases}
\left ( \frac{ 3nmp }{d} \right )^{k_0} & = \exp \left ( k_0 \log \frac{ 3nmp }{d} \right ) \nonumber \\
& \leq \exp \left ( \frac{4 d m \log n}{ n^{\alpha/2}} \log (m \log n) \right ) \nonumber \\
& = \exp \left ( o ( m \log n) \right ).
\end{align}
Combining \eqref{eqn:smallB}, \eqref{eqn:largeShadow} and \eqref{eqn:fewCases} together we obtain
\begin{align}
\label{eqn:boundSmallSets}
& \Prob{| (\partial^+S \setminus B_S) \cap Y^{(d)} | \geq | \partial^+S \cap Y^{(d)} | / 2 } \nonumber \\
& \qquad \geq \Prob{| B_S \cap Y^{(d)}| \leq k_0 \mbox{ and } | (\partial^+S \setminus B_S) \cap Y^{(d)}|
\geq k_0 } \nonumber \\
& \qquad \geq 1 - \exp \left ( - (1 - o(1) ) (1+\varepsilon) m \log n \right ).
\end{align}
The bound in \eqref{eqn:boundSmallSets} is not strong enough to apply it with the union bound over all sets $S$ of size $m$. However, we assumed that $S$ is tightly connected and we will now exploit this assumption. We bound the number of tightly connected sets $S \subset Y^{(d-1)}$ with $|S| = m$ as follows. Order the set $Y^{(d-1)}$ of $(d-1)$-faces in an arbitrary way; for example, identifying every face $\sigma$ with an ordered tuple $(\sigma_0, \ldots, \sigma_{d-1})$ where $\sigma_0 < \ldots < \sigma_{d-1}$, we could order the faces increasingly in the lexicographic order. We can pick the first face $\sigma \in S$ in $\binom{n}{d}$ many ways. We then perform a breadth-first search on $S$: we first find all neighbours of $\sigma$, i.e., faces that share $d-1$ vertices with $\sigma$. Exploring these faces according to the selected order, we then find all yet unexplored faces that share $d-1$ vertices with consecutive neighbours of $\sigma$. Then, we move to the second neighbourhood of $\sigma$ and find all of their still unexplored neighbours. Since, having picked $\sigma$, we have to discover a total of $m-1$ faces and these will be first found as one of the offspring of one of $m$ faces, we see that there are at most $\binom{2m-2}{m-1} \leq 4^m$ many ways to assign the numbers of offspring to consecutive faces
(a collection of $m$ non-negative integers which sum up to $m-1$).
Any $(d-1)$-dimensional face $\sigma$ has at most $dn$ neighbours, as we have $d$ vertices in $\sigma$ we can drop, and at most $n$ vertices not in $\sigma$ can be added to form the neighbour. Hence, for any choice of the numbers of neighbours first explored by consecutive faces, there are at most $(dn)^m$ ways to pick these neighbours. Thus, the number of tightly connected sets of size $m$ is at most
\[
4^m (dn)^m = \exp((1+o(1))m \log n).
\]
As again the number of values of $m$ we have to consider is at most $n^d$, by \eqref{eqn:boundSmallSets} and the union bound we see that with probability $1-o(1)$, \eqref{eqn:mainExpansion} holds for all sets $S$ with $|S| \leq n^{d(1-\alpha)}$. This completes the proof of Theorem~\ref{thm:mainExpansion}.
\end{proof}
We can now easily show that the assumption that $S$ is tightly connected can be dropped in Theorem~\ref{thm:mainExpansion} yielding a lower bound on $\Phi_{Y(n,p;d)}$ and completing the
proof of Theorem~\ref{thm:conductance}.
\begin{cor}
Let $Y = Y(n,p;d)$ with $np=(1+\varepsilon)d\log n$ for $\varepsilon>0$ fixed.
There exists $\delta > 0$ such that w.h.p. the following holds. For any set $S \subset Y^{(d-1)}$
we have
\begin{equation}
\label{eqn:mainAllSets}
| (\partial^+S \setminus B_S)\cap Y^{(d)} | \geq \delta | \partial^+S \cap Y^{(d)}|.
\end{equation}
\end{cor}
\begin{proof}
First, observe that if $U, V$ are distinct maximal tightly connected sets in $Y^{(d-1)}$ then $\partial^+U \cap \partial^+ V = \emptyset$. Indeed, if $\rho \in \partial^+ U \cap \partial^+ V$ then there exist $\sigma_1 \in U, \sigma_2 \in V$ such that $|\rho \cap \sigma_1| = |\rho \cap \sigma_2| = d$, but that implies $|\sigma_1 \cap \sigma_2| = d-1$, so $\sigma_1, \sigma_2$ are incident; a contradiction.
Hence, let $S$ be a union of maximal tightly connected sets $S_1, \ldots, S_q$. For $1 \leq i \leq q$, let
\[
B_i = \{ \rho \in \partial^+ S_i : \partial \rho \subset S_i \}
\]
and $B= \cup_{i=1}^q B_i$.
Set $\alpha_i = |(\partial^+S_i \cap Y^{(d)}) \setminus B_i |$ and $\beta_i = | \partial^+S_i \cap Y^{(d)} |$. By Theorem \ref{thm:mainExpansion}, with probability $1-o(1)$ we have $\alpha_i / \beta_i \geq \delta$ for all $i$.
Now observe that for positive reals $a,b,c,e$, the inequality $a/b \geq c/e$ implies $a \geq cb/e$ and therefore
\[
\frac{a+c}{b+e} \geq \frac{c\frac{b}{e}+c}{b+e} = \frac{c\frac{b+e}{e}}{b+e} = \frac{c}{e}.
\]
Thus by the disjointness of the sets $\partial^+S_1, \ldots, \partial^+ S_q$ we deduce that with probability $1-o(1)$ we have
\[
\frac{ | (\partial^+S \cap Y^{(d)}) \setminus B | }{ | \partial^+S \cap Y^{(d)} |} \geq \min_{1 \leq i \leq q} \frac{\alpha_i}{\beta_i} \geq \delta.
\]
This completes the proof.
\end{proof}
\section{Conclusions} This paper is a study of various measures of expansion in the Linial-Meshulam random complex $Y(n,p;d)$ past the cohomological connectivity threshold. We considered the spectral gap of the combinatorial
Laplace operator and showed that w.h.p. it is very close to the Cheeger constant associated with the simplicial complex. Furthermore, we showed that both quantities are w.h.p. very close to the minimum co-degree of the random simplicial complex. We determined the latter explicitly using the large deviations theory of the binomial distribution.
Finally, we considered a random walk on the $(d-1)$-faces of the random simplicial complex, which generalises the standard random walk on graphs. In particular, we considered the conductance of such a random walk and showed that w.h.p. it is bounded away from zero.
The above results were obtained for $p$ such that $np = (1+\varepsilon )d \log n$, for any $\varepsilon >0$ fixed.
Our proofs seem to work when $\varepsilon =\varepsilon (n) \to 0$ as $n\to \infty$ slowly enough.
A natural next step would be to consider these quantities for $p$ that is closer to the threshold $d \log n /n$. Indeed, the supercritical regime is for $p$ such that $np = d \log n + \omega (n)$, where $\omega(n) \to \infty$ as
$n \to \infty$ arbitrarily slowly. We believe it would be interesting to extend the analysis to this range of $p$ as well.
This would complete the picture of the evolution of the expansion properties of $Y(n,p;d)$.
\section{Acknowledgements} We would like to thank Eoin Long for suggesting to us the use of the Kruskal-Katona theorem in the context of bounding the combinatorial expansion of $Y(n,p;d)$.
\bibliographystyle{plain}
\section{Introduction}
The $N$th singular value~\cite{Watson-sing,B-C-Z}
is the algebraic number $k_N\in[0,1]$ for which
\begin{equation}
{\rm AGM}\left(1,\sqrt{1-k_N^2}\right)=\sqrt{N}\,{\rm AGM}(1,k_N)
\label{k-N}
\end{equation}
where the arithmetic-geometric mean (AGM) is obtained by
iterating the rapidly convergent process~\cite{AGM}
${\rm AGM}(a,b)={\rm AGM}\left((a+b)/2,\sqrt{a b}\right)$.
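As a numerical illustration (not part of the argument), $k_N$ can be recovered from (\ref{k-N}) by bisection, since the left side of (\ref{k-N}) decreases and the right side increases as $k$ grows from 0 to 1; the classical values $k_1=1/\sqrt2$ and $k_3=(\sqrt6-\sqrt2)/4$ serve as checks:

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean, iterating (a, b) -> ((a+b)/2, sqrt(a*b))."""
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

def singular_value(N):
    """Solve AGM(1, sqrt(1-k^2)) = sqrt(N) * AGM(1, k) for k in (0, 1).
    The left side decreases and the right side increases in k, so the
    unique root is found by bisection."""
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        k = 0.5 * (lo + hi)
        if agm(1.0, math.sqrt(1.0 - k * k)) > math.sqrt(N) * agm(1.0, k):
            lo = k   # left side still too large: k must grow
        else:
            hi = k
    return 0.5 * (lo + hi)
```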
For square-free $N\equiv3$~mod~8, with $N$ coprime
to 3,
\begin{equation}
k_N^2=\frac12-\sqrt{\frac14-\frac{16}{r^{24}}}
\label{r}
\end{equation}
is determined by an algebraic number $r>2^{\frac14}$
that is given by a Weber function~\cite{Weber,Atkin-M,Y-Z,Hajir-V,LMS,Schertz}
and has a minimal polynomial of degree $3h$, where
$h=h(-N)$ is the class number of the imaginary quadratic field $Q(\sqrt{-N})$.
For square-free $N\equiv3$~mod~4,
the complete elliptic integral
\begin{equation}
K_N=\int_0^1\frac{{\rm d}x}{\sqrt{(1-x^2)(1-k_N^2x^2)}}
=\frac{\pi}{2}\,\frac{1}{{\rm AGM}\left(1,\sqrt{1-k_N^2}\right)}
\label{K-N}
\end{equation}
is reducible to the $\Gamma$ values~\cite{C-S-1,Chowla-S,Zucker} in
\begin{equation}
G_N=\prod_{k=1}^{N}\left[\Gamma\left(\frac{k}{N}\right)\right]
^{\left(\frac{-N}{k}\right)}
\label{G-N}
\end{equation}
with exponents given by the Legendre--Jacobi--Kronecker symbol
$\left(\frac{-N}{k}\right)$. For $N>3$,
this reduction takes the form
\begin{equation}
K_N=\left(\frac{r}{2}\right)^2\sqrt{\frac{2\pi}{N}
\left(\lambda^4G_N\right)^{\frac{1}{h}}}
\label{w}
\end{equation}
where $\lambda>0$ is an algebraic number.
As noted in~\cite[Eq.~8]{C-S-1}, $\lambda=1$ when $h(-N)=1$.
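For the classical case $N=3$, where $h(-3)=1$, such a reduction can be checked numerically against the well-known evaluation $K_3=3^{1/4}\,\Gamma(1/3)^3/(2^{7/3}\pi)$, with $k_3=(\sqrt6-\sqrt2)/4$. A double-precision sketch (illustrative only):

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean."""
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0

# K_N = (pi/2) / AGM(1, sqrt(1 - k_N^2)), evaluated at the singular value k_3
k3 = (math.sqrt(6.0) - math.sqrt(2.0)) / 4.0
K3 = (math.pi / 2.0) / agm(1.0, math.sqrt(1.0 - k3 * k3))

# classical Gamma-value evaluation of the same complete elliptic integral
K3_gamma = 3.0 ** 0.25 * math.gamma(1.0 / 3.0) ** 3 / (2.0 ** (7.0 / 3.0) * math.pi)
```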
Moreover, I conjecture in this paper
that $\lambda$ is an algebraic {\em unit} of the Hilbert class field
when $h(-N)$ is odd, i.e.~for prime $N>3$ congruent to 3 modulo 4.
I shall describe how $r$ and $\lambda$ were reduced to radicals
in the case $N=2317723$, with class number $h(-N)=105$.
To achieve this reduction,
I construct, in Section~4.2, a pair of class invariants,
one of which appears to outperform the {\tt quadhilbert}
procedure of {\em Pari-GP}, in regard of the economy
with which it generates the class field.
\section{Chowla--Selberg formula}
It is not necessary to compute $N$ values of the
$\Gamma$ function to evaluate $G_N$ at high precision. Instead
we may use $h$ values of the Dedekind eta function
\begin{equation}
\eta(z)=\exp(\pi{\rm i}z/12)\prod_{k=1}^\infty(1-\exp(2\pi{\rm i}k z))
=\sum_{n=-\infty}^\infty(-1)^n\exp((6n+1)^2\pi{\rm i}z/12)
\label{eta}
\end{equation}
to evaluate $G_N$ using
the Chowla--Selberg formula~\cite[Eq.~2, p.~110]{Chowla-S}
\begin{equation}
\prod_{k=1}^{N}\left[\Gamma\left(\frac{k}{N}\right)\right]
^{\left(\frac{-N}{k}\right)}
=(2\pi N)^h\prod_{[a,b,c]\in H}\frac{1}{a}
\left|\eta\left(\frac{b+\sqrt{-N}}{2a}\right)\right|^4
\label{C-S}
\end{equation}
for square-free $N\equiv3$~mod~4 and $N>3$.
For other cases, including non-fundamental discriminants,
see~\cite{Huard-K-W}.
In~(\ref{C-S}), the product runs over the strict equivalence classes
$[a,b,c]$ of primitive integral binary quadratic forms
$a x^2+b x y+c y^2$ with discriminant $b^2-4a c=-N$.
These equivalence classes form an Abelian
group $H$, by Gauss's composition of quadratic forms, and the order of
$H$ is the class number $h=h(-N)$. It is remarked in~\cite{Huard-W}
that publication of this striking formula was delayed for 18 years,
between its discovery at the time of the Chowla--Selberg
paper~\cite{C-S-1} of 1949 and its appearance in the
Selberg--Chowla paper~\cite{Chowla-S} of 1967. For precursors
of this formula, see~\cite{J-N}.
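The simplest case covered by (\ref{C-S}) is $N=7$, where $h(-7)=1$ and the only reduced class is $[a,b,c]=[1,1,2]$, so the formula reads $G_7=14\pi\,|\eta((1+\sqrt{-7})/2)|^4$. The following double-precision sketch (illustrative only) checks this, evaluating $\eta$ from the rapidly convergent sum in (\ref{eta}):

```python
import cmath
import math

def eta(z):
    """Dedekind eta from the series over (6n+1)^2; converges rapidly for Im(z) > 0."""
    return sum((-1) ** n * cmath.exp((6 * n + 1) ** 2 * cmath.pi * 1j * z / 12.0)
               for n in range(-8, 9))

# G_7: Gamma(k/7) raised to the power (-7/k), the quadratic character mod 7
G7 = 1.0
for k in (1, 2, 4):          # quadratic residues mod 7
    G7 *= math.gamma(k / 7.0)
for k in (3, 5, 6):          # non-residues mod 7
    G7 /= math.gamma(k / 7.0)

# Chowla-Selberg right side: h(-7) = 1, single reduced class [a,b,c] = [1,1,2]
rhs = (2 * math.pi * 7) * abs(eta((1 + 1j * math.sqrt(7.0)) / 2.0)) ** 4
```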
\subsection{A conjecture for prime discriminants}
For square-free positive $N\equiv3$~mod~4, I define
\begin{equation}
\lambda=\prod_{[a,b,c]\in H}a^{\frac14}\left|\frac
{\eta\left(\frac{1+\sqrt{-N}}{2 }\right)}
{\eta\left(\frac{b+\sqrt{-N}}{2a}\right)}\right|
\label{lambda}
\end{equation}
where the product runs over the equivalence classes
for discriminant $b^2-4a c=-N$.\\[5pt]
{\bf Conjecture 1}: For prime $N\equiv3$~mod~4, $\lambda$ is a unit
of the Hilbert class field of $Q(\sqrt{-N})$.\\[5pt]
Remarks:\begin{enumerate}
\item I have verified this in the 155 cases with $N<2000$.
\item For each of these cases, the minimal polynomial $L(x)$ of
$\lambda$ is available\footnote{From the directory
{\tt http://paftp.open.ac.uk/pub/staff\_ftp/dbroadhu/K2317723/}~.}
in a file {\tt lambdaCS.txt} which is read by {\tt lambdaCS.gp}
with output in {\tt lambdaCS.out} that confirms,
at a precision of 15,000 digits, that $L(\lambda)=0$
and that $L$ generates the same field as the {\tt quadhilbert}
procedure of {\em Pari-GP}.
\item In each of these cases, $L(x)$ is a monic polynomial with $L(0)=-1$
and hence $\lambda$ is a unit of the class field.
\item For $N=2317723$, the Hilbert class group
is cyclic and is generated by the equivalence class $[a,b,c]=[151,-91,3851]$,
with order $h(-N)=105$. In Section~5, I describe how 15,000 digits of
$\lambda$ were used to reduce it to a unit, which was then checked
at 40,000 digits of precision.
\item John Zucker and I have investigated some composite discriminants,
finding that $\lambda^2$ is a unit
when $N$ is a product of distinct primes greater than 3.
I have verified this for square-free $N<2000$ with $N\equiv3$~mod~4
and coprime to 3. I have not yet found a simple criterion
that determines why $N=7\times11\times13\times19=19019$
yields $\lambda$ as a unit, while for $N=7\times11\times23=1771$
one must take $\lambda^2$ to form a unit.
\end{enumerate}
\section{Hilbert class field}
The Hilbert class field of $Q(\sqrt{-N})$ is generated by the
polynomial~\cite[Th.~7.2.14]{Cohen}
\begin{equation}
P(x)=\prod_{[a,b,c]\in H}\left(x-
j\left(\frac{b+\sqrt{-N}}{2a}\right)\right)
\label{Hilbert}
\end{equation}
where
\begin{equation}
j(z)=\left(\left(\frac{\eta(z/2)}{\eta(z)}\right)^{16}
+16\left(\frac{\eta(z)}{\eta(z/2)}\right)^8\right)^3.
\label{Klein}
\end{equation}
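As a quick double-precision check of (\ref{Klein}) (illustrative only), evaluating $\eta$ from its rapidly convergent series reproduces the classical values $j(i)=1728$ and $j\left((1+\sqrt{-7})/2\right)=-3375$:

```python
import cmath
import math

def eta(z):
    """Dedekind eta from the series sum_n (-1)^n exp((6n+1)^2 pi i z / 12)."""
    return sum((-1) ** n * cmath.exp((6 * n + 1) ** 2 * cmath.pi * 1j * z / 12.0)
               for n in range(-8, 9))

def klein_j(z):
    """Klein's j-invariant from the eta quotient in the text."""
    w = (eta(z / 2) / eta(z)) ** 8
    return (w ** 2 + 16.0 / w) ** 3

j_i = klein_j(1j)                                # classical value 1728
j_7 = klein_j((1 + 1j * math.sqrt(7.0)) / 2.0)   # classical value -3375
```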
As shown in~\cite[Sect.~125, p.~461]{Weber},
a real root of $P(x)$ is supplied by
\begin{equation}
\left(\frac{256}{r^{16}}-r^8\right)^3=j\left(\frac{1+\sqrt{-N}}{2}\right).
\label{real}
\end{equation}
For $N=2317723$, $P(x)$ is a polynomial of degree 105,
whose integer coefficients have up to 3050 decimal digits,
making it rather difficult to reduce its roots to a set
of simple radicals.
Fortunately, we do not need to use $P(x)$. A more convenient
polynomial that generates the same number field will serve our purpose.
Using a class invariant defined in Section~4.2,
I found that the Hilbert class field
of $Q(\sqrt{-2317723})$ is generated by a compositum
of three polynomials that generate its sub-fields of prime degree, namely
\begin{eqnarray}
Q_7(x)&=&x^7-323x^5-6057x^4-35434x^3-186299x^2-1450032x-19143360\qquad{}
\label{Q7}\\
Q_5(y)&=&y^5-y^4-339y^3-7879y^2+146334y-566316
\label{Q5}\\
Q_3(z)&=&z^3-z^2-59z-322
\label{Q3}
\end{eqnarray}
where $Q_p$ has discriminant
\begin{equation}
D_p=f_p^2(-N)^{\frac{p-1}{2}}
\label{disc}
\end{equation}
with an index $f_p$. The indices
\begin{equation}
f_3=1,\qquad
f_5=2^{4}\times3\times5^2\times11\times17\times47,\qquad
f_7=2^{10}\times3^2\times19^2\times61\times f_5
\label{indices}
\end{equation}
fortunately contain no prime greater than 61.
\section{An efficient pair of class invariants}
The algebraic number $r$ in~(\ref{r}) is the real root of a
monic cubic polynomial
with coefficients in the Hilbert class field.
These coefficients are algebraically constrained by the
condition that~\cite{Weber}
\begin{equation}
\gamma_2=\frac{256}{r^{16}}-r^8
\label{gamma2}
\end{equation}
generates the Hilbert class field, while the minimal polynomial
for $r$ has degree $3h$ and generates a cubic relative extension.
For each of the 198 primes congruent to 3 modulo 8
and less than 6000, I found that
the cubic relative extension takes
the form~\cite{nmbrthry}
\begin{equation}
r^3-2(f r^2+g r+1)=0
\label{f-g}
\end{equation}
where $f$ and $g$ are algebraic integers of the Hilbert class field.
I then found that these algebraic integers obey the constraint
\begin{equation}
2f^4-16f^3g^2+20f^2g^4-12f^2g-8fg^6+16fg^3-2f+g^8-4g^5+3g^2=0
\label{zero}
\end{equation}
which indeed ensures that $r$ does not appear in
\begin{equation}
-\frac{\gamma_2}{32}=
8f^8+32f^6g+16f^5+40f^4g^2+32f^3g+16f^2g^3+6f^2+12fg^2+g^4+2g
\label{integer}
\end{equation}
as may be confirmed by using~(\ref{f-g}) to eliminate powers
$r^j$ with $j\ge3$ from~(\ref{gamma2}) and then using~(\ref{zero})
to eliminate powers $g^k$ with $k\ge8$.
A particularly simple example~\cite[Table VI, p.~725]{Weber}
is provided by $N=163$, the largest number for which $h(-N)=1$,
where the integer pair $[f,g]=[3,-2]$ determines the well-known
18-digit integer~\cite[Sect. 7.2.3]{Cohen}
\begin{equation}
-j\left(\frac{1+\sqrt{-163}}{2}\right)=262537412640768000
\label{j-163}
\end{equation}
that differs from $\exp(\pi\sqrt{163})$ by less than 3 parts
in $10^{30}$ and is here obtained by evaluating $-\gamma_2^3$,
using~(\ref{integer}). Hence $[f,g]=[3,-2]$ is a Diophantine
solution of~(\ref{zero}).
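These Diophantine statements are quick to confirm in exact integer arithmetic: substituting $[f,g]=[3,-2]$ into (\ref{zero}) gives zero, (\ref{integer}) gives $-\gamma_2/32=20010$, so $\gamma_2=-640320$ and $-\gamma_2^3$ is the 18-digit integer in (\ref{j-163}). A sketch:

```python
# the integer pair for N = 163 from Weber's Table VI
f, g = 3, -2

# the constraint (zero) that eliminates r from gamma_2
constraint = (2*f**4 - 16*f**3*g**2 + 20*f**2*g**4 - 12*f**2*g - 8*f*g**6
              + 16*f*g**3 - 2*f + g**8 - 4*g**5 + 3*g**2)

# -gamma_2/32 from the polynomial (integer) in f and g
minus_gamma2_over_32 = (8*f**8 + 32*f**6*g + 16*f**5 + 40*f**4*g**2 + 32*f**3*g
                        + 16*f**2*g**3 + 6*f**2 + 12*f*g**2 + g**4 + 2*g)
gamma2 = -32 * minus_gamma2_over_32
j_163 = gamma2 ** 3   # equals j((1 + sqrt(-163))/2)
```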
\subsection{A signature for $N\equiv3\mbox{ mod }8$}
I began my investigations by considering prime values of $N\equiv3$~mod~8,
since those yield a Chowla--Selberg unit, according to Conjecture~1.
Studying such primes, I discovered a signature, comprising a triplet of signs
$[S_1,S_2,S_3]$ that eventually enabled me to construct a pair of class
invariants for any number congruent to 3 modulo 8 and coprime to 3.
I arrived at this signature by using~(\ref{f-g}) to eliminate $f$
from~(\ref{zero}), obtaining an octic equation for $g$.
After some manipulations, I was able to solve this by taking 3 square roots.
The general solution for the octic has the form
\begin{equation}
g=-\frac{1}{r}
+S_1\left(r
+S_2\left(\frac{r^2}{2}
+S_3\left(\frac{r^{4}}{8}-\frac{1}{r^8}
\right)^{\frac12}
\right)^{\frac12}
\right)^{\frac12}
\label{signs}
\end{equation}
with signs $S_j=\pm1$.
By conjecture, precisely one of the 8 choices of signs
gives an algebraic integer of the Hilbert class field of $Q(\sqrt{-N})$.
If we know this signature, the problem of
identifying $k_N$ as an algebraic number becomes {\em much}
more tractable than previously supposed, since
instead of having to find an integer relation between
$3h+1$ numbers, namely $r$ and an integral basis
for a cubic relative extension of the Hilbert class field,
we now need a pair of relations between merely $h+2$ numbers,
namely $[f,g]$ and an integral basis for the Hilbert class field
itself. At large $N$, the coefficients in the minimal polynomial
of $g=O(\sqrt{r})$ have, typically, 48 times fewer digits than those
in the Hilbert polynomial~(\ref{Hilbert}).
I determined the signatures of the 198 primes
$N\equiv3$~mod~8 with $N<6000$ by trial and error,
using the {\tt lindep} procedure of {\em Pari-GP}
to search for a integer relation between the unique real
embedding of the integral basis {\tt nfinit(quadhilbert(-N)).zk}
and numerical evaluations of~(\ref{signs}) in each of 8 possible cases.
For each prime, I found precisely one valid signature.
Then I listed the first 12 primes for each signature, obtaining the sequences
\begin{eqnarray*}
[-1,-1,-1]&:&163,\,227,\,419,\,547,\,739,\,1123,\,1187,\,1571,\,
1699,\,2083,\,2339,\,2467\\{}
[-1,-1,+1]&:&11,\,139,\,331,\,523,\,587,\,907,\,971,\,1163,\,
1291,\,1483,\,1867,\,1931\\{}
[-1,+1,-1]&:&179,\,307,\,499,\,563,\,691,\,883,\,947,\,1459,\,
1523,\,1907,\,2099,\,2803\\{}
[-1,+1,+1]&:&59,\,251,\,379,\,443,\,571,\,827,\,1019,\,1531,\,
1723,\,1787,\,1979,\,2683\\{}
[+1,-1,-1]&:&3,\,67,\,131,\,643,\,1091,\,1283,\,1667,\,1987,\,
2179,\,2243,\,2371,\,2819\\{}
[+1,-1,+1]&:&43,\,107,\,491,\,619,\,683,\,811,\,1259,\,1451,\,
1579,\,2027,\,2347,\,2411\\{}
[+1,+1,-1]&:&19,\,83,\,211,\,467,\,659,\,787,\,1171,\,1427,\,
1619,\,1747,\,1811,\,2003\\{}
[+1,+1,+1]&:&283,\,347,\,859,\,1051,\,1307,\,1499,\,1627,\,
2011,\,2203,\,2267,\,2459,\,2843
\end{eqnarray*}
which led me to conjecture, as these 8 lists were slowly growing,
that the signature of a prime congruent to 3 modulo 8
is uniquely determined by its residue modulo 64,
as indeed turned out to be the case for the rest
of the sample of 198 primes.
I then checked that this is also
the case for all the composite integers less than 3500
that are congruent to 3 modulo 8 and coprime to 3,
using the {\tt nfisisom} routine of {\em Pari-GP}
in situations for which {\tt quadhilbert} did not furnish
a polynomial with a real root. (I thank Karim Belabas
for this workaround.)
Thus, for each square-free positive integer $N$
that is congruent to 3 modulo 8 and is coprime to 3
(and also for $N=3$ itself)
there appears to be a unique signature $[S_1,S_2,S_3]$,
determined by the residue of $N$ modulo 64,
such that~(\ref{signs}) yields an algebraic integer
of the class field.
\subsection{Construction and conjecture modulo 64}
For positive integer $N$ congruent to 3 modulo 8, I define
a signature
\begin{equation}
[S_1,S_2,S_3]=\left\{\begin{array}{l}
[-1,-1,-1]\mbox{ for }N\equiv35\mbox{ mod }64\\{}
[-1,-1,+1]\mbox{ for }N\equiv11\mbox{ mod }64\\{}
[-1,+1,-1]\mbox{ for }N\equiv51\mbox{ mod }64\\{}
[-1,+1,+1]\mbox{ for }N\equiv59\mbox{ mod }64\\{}
[+1,-1,-1]\mbox{ for }N\equiv 3\mbox{ mod }64\\{}
[+1,-1,+1]\mbox{ for }N\equiv43\mbox{ mod }64\\{}
[+1,+1,-1]\mbox{ for }N\equiv19\mbox{ mod }64\\{}
[+1,+1,+1]\mbox{ for }N\equiv27\mbox{ mod }64\end{array}\right.
\label{sig}
\end{equation}
and a pair of algebraic numbers
\begin{equation}
[f,\,g]=\left[\frac{r}{2}-\frac{s}{\sqrt{r}},\,
-\frac{1}{r}+s\sqrt{r}\right]
\label{pair}
\end{equation}
where
\begin{eqnarray}
r&=&\exp(-\pi{\rm i}/24)\,
\frac{\eta\left(\frac{1+\sqrt{-N}}{2}\right)}{\eta\left(\sqrt{-N}\right)}
\label{Weber-f}\\
s&=&S_1\left(1+S_2\left(\frac{1}{2}
+S_3\left(\frac{1}{8}-\frac{1}{r^{12}}
\right)^{\frac12}\right)^{\frac12}\right)^{\frac12}.
\label{s}
\end{eqnarray}
{\bf Conjecture 2}: For every square-free positive integer
$N$ congruent to 3 modulo 8 and coprime to 3,
the Hilbert class field of $Q(\sqrt{-N})$
is generated by at least one of $[f,g]$
and for $N>1099$ it is generated by both.\\[5pt]
Remarks:\begin{enumerate}
\item I have checked that the minimal polynomials of $f=f(N)$
and $g=g(N)$ have degree $h=h(-N)$ for all of the cases in Conjecture~2
with $1099<N<100000$.
\item There are 7 cases with $N\le1099$ in which only
one of $[f,g]$ generates the Hilbert class field, while the
other generates a sub-field.
\item Five of these yield the integers
$f(83)=1$, $f(91)=1$, $g(331)=-1$, $g(427)=1$, $g(907)=-2$
and were noted in~\cite{Russell}, with three cases appearing
in~\cite[Table~5]{Y-Z}.
\item For $N=715$, with $h=4$,
the minimal polynomial of $g$ is $x^2 + x - 1$.
\item For $N=1099$, with $h=6$, it is $x^3 + x^2 - x + 6$.
\item In the cases $N=11, 19, 43, 67, 163$, with $h=1$,
the $[f,g]$ pairs are $[1, -1]$, $[0, 1]$, $[1, 0]$, $[1, 1]$,
$[3, -2]$, all of which were noted in~\cite[Table~VI]{Weber}.
\item Apart from the 10 cases noted above,
no other value of $N<1000000$ produces an integer.
(The integers $f(3)=g(3)=f(27)=0$ do not fall within Conjecture~2.)
\item For $N<3500$, I have verified that whenever the minimal
polynomial of $f$ or $g$ has degree $h$ the field which it
generates is isomorphic to that generated by the
{\tt quadhilbert} procedure of {\em Pari-GP}.
\item I have performed the same tests for prime $N<6000$.
\item At large $N$, the minimal polynomial of $g$ provides a
rather economical generator of the field. For $N=2317723$,
it may be computed in less than 100 milliseconds and
has a height less than the {\em cube} root of the height of the
{\tt quadhilbert} polynomial.
\end{enumerate}
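As a numerical sanity check on this construction (a double-precision sketch, illustrative only), one can evaluate (\ref{Weber-f}), (\ref{s}) and (\ref{pair}) for $N=163$: since $163\equiv35$~mod~64, the signature is $[-1,-1,-1]$, and the construction should reproduce Weber's classical pair $[f,g]=[3,-2]$, together with $\gamma_2=-640320$ from (\ref{gamma2}):

```python
import cmath
import math

def eta(z):
    """Dedekind eta from the series in (eta); rapidly convergent for Im(z) > 0."""
    return sum((-1) ** n * cmath.exp((6 * n + 1) ** 2 * cmath.pi * 1j * z / 12.0)
               for n in range(-8, 9))

N = 163                  # 163 mod 64 = 35, so the signature is [-1, -1, -1]
S1, S2, S3 = -1, -1, -1
rootN = math.sqrt(N)

# r from the eta quotient (Weber-f); the value is real and positive
r = (cmath.exp(-1j * cmath.pi / 24) * eta((1 + 1j * rootN) / 2) / eta(1j * rootN)).real

# s from the nested radicals (s), then the pair (f, g) from (pair)
s = S1 * math.sqrt(1 + S2 * math.sqrt(0.5 + S3 * math.sqrt(0.125 - r ** -12)))
f = r / 2 - s / math.sqrt(r)
g = -1 / r + s * math.sqrt(r)
```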
\subsection{Minimal polynomials}
The algebraic numbers $f$ and $g$ are, by construction,
roots of the polynomials
\begin{eqnarray}
F(x)&=&\prod_{j=1}^h
\left(x-\frac{r_{j,1}}{2}-\frac{r_{j,2}}{2}-\frac{r_{j,3}}{2}\right)
\label{F}\\
G(x)&=&\prod_{j=1}^h
\left(x+\frac{1}{r_{j,1}}+\frac{1}{r_{j,2}}+\frac{1}{r_{j,3}}\right)
\label{G}
\end{eqnarray}
where $r_{j,k}$ is a labelling of the roots
of the minimal polynomial of $r$ such that
\begin{equation}
\gamma_2\left(r_{j,1}\right)=
\gamma_2\left(r_{j,2}\right)=
\gamma_2\left(r_{j,3}\right)
\label{same}
\end{equation}
with $\gamma_2(r)=256/r^{16}-r^8$.
Conjecture~2 asserts, {\em inter alia},
that at least one of these polynomials
is irreducible and generates the Hilbert class field.
To compute the polynomials, we may use Reinier Br\"{o}ker's
fine formula~\cite[Th.~6.3, p.~106]{Broker} for the root
associated to the equivalence class $[a,b,c]$
of binary quadratic forms with discriminant $b^2-4ac=-4N$. Denoting
$z=(b/2+\sqrt{-N})/a$, this root is
\begin{equation}
R(a,b,c)=\left\{\begin{array}{ll}
-(-1)^{\frac{a^2-1}{8}}\exp\left(-\frac{b(a c^2-a-2c)}{48}\,\pi{\rm i}\right)
\,\frac{\eta(z/2)}{\eta(z)}&\mbox{ if $c$ is even}\\
-(-1)^{\frac{c^2-1}{8}}\exp\left(-\frac{b(c-a-5a c^2)}{48}\,\pi{\rm i}\right)
\,\sqrt{2}\,\frac{\eta(2z)}{\eta(z)}&\mbox{ if $a$ is even}\\
\exp\left(-\frac{b(c-a-a^2 c)+2}{48}\,\pi{\rm i}\right)
\,\frac{\eta((1+z)/2)}{\eta(z)}&\mbox{ otherwise}
\end{array}\right.
\label{R-a-b-c}
\end{equation}
where I have written the Weber functions as explicit eta quotients.
I remark that $R(1,0,N)=r$ determines the $N$th singular value~(\ref{r})
and that at least one of $[a,c]$ is odd, since $b$ is even.
When the class group for discriminant $-4N$ is generated
cyclically, by a single class with order $3h$, there is a very simple
procedure to generate the roots with a labelling
that respects the condition~(\ref{same}): we may compute $r_{j,k}$
by applying~(\ref{R-a-b-c}) to the reduced form obtained
by raising the generator to the power $j+(k-1)h$. If there
are sub-groups, a little book-keeping is required to ensure that the
roots are slotted into~(\ref{F},\ref{G}) in a manner that respects
condition~(\ref{same}). I ordered the roots by size of the
real parts of their $\gamma_2$ values and then inspected the
signs of the imaginary parts of $\gamma_2$.
For $N>1099$, the minimal polynomial $G$ is a rather economical generator
of the Hilbert class field.
In the rather simple example of $N=1571$, with $h=17$, I obtained
\begin{eqnarray}
G(x)&=&x^{17} + 14x^{16} + 38x^{15} + 19x^{14} + 83x^{13}
+ 440x^{12} + 275x^{11} -507x^{10} + 384x^9\nonumber\\
&+& 541x^8 - 1343x^7 - 88x^6 + 712x^5 + 585x^4
- 1254x^3 + 852x^2 - 304x + 64\qquad{}
\label{min-g}
\end{eqnarray}
whose index
\begin{equation}
2^{37}\times13^2\times17^2\times41\times43\times139\times2083\times34259
=117388472496907896691997278208
\label{index-g}
\end{equation}
has merely 30 digits. By contrast the polynomial obtained
in~\cite[p.~152]{Broker}, using a double
eta-quotient~\cite{Schertz-units,Schertz,Enge-S} of the form
\begin{equation}
w_{p,q}(z)=\frac{\eta\left(\frac{z}{p}\right)\eta\left(\frac{z}{q}\right)}
{\eta(z)\eta\left(\frac{z}{p q}\right)},
\label{p-q}
\end{equation}
with $[p,q]=[5,7]$,
has a 52-digit index, while {\tt quadhilbert}
yields a 60-digit index, using $[p,q]=[29,31]$.
The economy of $G$ is also reflected
in the storage for the integral basis obtained by
outputting {\tt nfinit(G).zk} from {\em Pari-GP},
which produces a file of less than 12 kilobytes, while
{\tt nfinit(quadhilbert(-1571)).zk} produces more than 29 kilobytes.
This is because large divisors of the index occur in the
denominators of the rational elements of the matrix
that transforms powers of the root to an integral basis.
\section{Reduction to simple radicals for $N=2317723$}
For $N=2317723$, I used the generator $[a,b,c]=[604, 422, 3911]$,
with order 315, to obtain the polynomials $[F,G]$
from~(\ref{F},\ref{G}) in 90 milliseconds.
Their indices in the class field have
10,756 and 5,815 digits, respectively.
By way of comparison, the {\tt quadhilbert}
routine of {\em Pari-GP} gave an index with 20,075 digits.
The height of $G$ has 65 digits, while a 204-digit height was
produced by {\tt quadhilbert}. Using $G$,
I found the sub-fields~(\ref{Q7},\ref{Q5},\ref{Q3}).
\subsection{The elliptic integral $K_{2317723}$}
Inspired by the results in~\cite[pp.~238--247]{EMA},
obtained by Jon Borwein and John Zucker for
elliptic integrals $K_N$ with $N\le 100$,
my goal was to reduce the elliptic integral $K_{2317723}$
to $\Gamma$ values and the simplest possible radicals,
which I took to be those generated by the
polynomials $Q_7$, $Q_5$ and $Q_3$ in~(\ref{Q7},\ref{Q5},\ref{Q3}),
whose indices in sub-fields of the Hilbert class field contain no prime
greater than $61$. By contrast, a compositum of these polynomials
gave a 7419-digit index.
Nonetheless, I found it convenient to construct, for intermediate
purposes, a local integral basis from this compositum and then to use
{\tt lindep} to obtain the coefficients of $[f,g,\lambda]$
in this basis. The reason is simple: this is a triplet of
algebraic integers, so by using an integral basis we ensure that
no large denominator may leak into the $Q$-linear relations
and thereby inflate the typical size of numerators in
rational coefficients.
Hence the results were, in the first instance, in terms of a rather
unwieldy (yet computationally effective) integral basis,
occupying 74 megabytes of disk space. However, it was possible
to shrink this data set very dramatically.
\subsection{Reduction to monomials}
Next, I transformed $[f,g,\lambda]$ from the integral basis
to the 105 monomials $x^i y^j z^k$,
with $i<7$, $j<5$ and $k<3$, where $x$, $y$ and $z$
are the unique real roots of $Q_7(x)=0$, $Q_5(y)=0$ and $Q_3(z)=0$.
Then {\em Pari-GP} found that the {\tt content} of $g$ is $1/C$, where
\begin{equation}
C=2^{8}\times3^2\times5^3\times11^2\times17^2\times19^2
\times47^2\times61\times2317723=1135455149209896386784000
\label{C}
\end{equation}
has 25 digits. The resulting compact integer data for the
vector $V=[f,g,\lambda]$ is available (see the first footnote)
in the form of a 32-kilobyte file {\tt K2317723.txt}
that achieves a 2300-fold compression of the data from the integral basis.
I remark that my intermediate use of an integral basis had
the merit of reducing the working precision required for the reduction
of $\lambda$ to radicals by roughly 2,500 decimal digits,
i.e.~by about 25 digits per term in the reduction of the unit
$\lambda$ to an integral basis of the class field.
It seemed to me to be beyond reasonable expectation that {\em Pari-GP}
might determine a system of fundamental units
for the class field of $Q(\sqrt{-N})$ with $N=2317723$.
Hence I used only {\tt nfinit} at $N=2317723$,
while the more time-consuming procedure {\tt bnfinit} was used
to good effect for $N<6000$.
\subsection{Solution of sub-fields by radicals}
To complete the reduction to simple radicals,
I needed to determine the real roots
of the equations $Q_7(x)=0$, $Q_5(y)=0$, $Q_3(z)=0$
and then, from $f$ and $g$, the real root $r$ of the cubic~(\ref{f-g}).
It is elementary to solve a cubic by radicals. In particular,
\begin{equation}
z=\frac13
+\left(\frac{9227}{54}+\sqrt{\frac{2317723}{108}}\right)^{\frac13}
+\left(\frac{9227}{54}-\sqrt{\frac{2317723}{108}}\right)^{\frac13}
\label{z}
\end{equation}
is the unique real root of $Q_3(z)=0$.
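A short numerical sketch of~(\ref{z}), in Python: the two cube roots multiply to the rational number $178/9$, so clearing the radicals by hand yields an integer cubic satisfied by $z$; that reconstructed cubic (my derivation, which should agree with $Q_3$) is checked in floating point.

```python
import math

# Evaluate the radical formula for z. The two cube roots multiply to
# the rational 178/9, so z satisfies an integer cubic; the cubic below
# is reconstructed from the radicals and should agree with Q_3.
s = math.sqrt(2317723 / 108)
a = 9227 / 54
u, v = (a + s) ** (1 / 3), (a - s) ** (1 / 3)
z = 1 / 3 + u + v
assert abs(u * v - 178 / 9) < 1e-9
assert abs(z**3 - z**2 - 59 * z - 322) < 1e-6   # z is near 10.05
```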
To solve the quintic, we may compute the real parts
\begin{eqnarray}
u_n&=&\Re\bigg[
-(650272782-564880\sqrt{-2317723})\exp(2\pi{\rm i}n/5)\nonumber\\&&
-(1703074422-359490\sqrt{-2317723})\exp(4\pi{\rm i}n/5)\bigg]\label{u}
\end{eqnarray}
for $n=1\ldots4$, using $4\cos(\pi/5)=1+\sqrt{5}$. Then
\begin{equation}
y=\frac{1+u_1^{\frac15}+u_2^{\frac15}-(-u_3)^{\frac15}+u_4^{\frac15}}{5}
\label{y}
\end{equation}
is the unique real root of $Q_5(y)=0$.
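A hedged numerical check of~(\ref{u},\ref{y}), sketched in Python: since the nontrivial fifth roots of unity sum to $-1$ in each column, the four $u_n$ must sum to the sum of the real parts of the two coefficients, and $u_3$ should be the single negative term, matching the sign pattern in~(\ref{y}).

```python
import cmath

# Check (u): the nontrivial fifth roots of unity sum to -1 in each
# column, so u_1 + ... + u_4 must equal 650272782 + 1703074422, and
# u_3 is the single negative term, as the sign pattern in (y) needs.
s = cmath.sqrt(-2317723)
A = 650272782 - 564880 * s
B = 1703074422 - 359490 * s
u = [(-A * cmath.exp(2j * cmath.pi * n / 5)
      - B * cmath.exp(4j * cmath.pi * n / 5)).real for n in range(1, 5)]
assert abs(sum(u) - (650272782 + 1703074422)) < 1e-3
assert u[2] < 0 < min(u[0], u[1], u[3])
y = (1 + u[0] ** 0.2 + u[1] ** 0.2 - (-u[2]) ** 0.2 + u[3] ** 0.2) / 5
```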
To solve the septic, we may compute the real parts
\begin{eqnarray}
v_n&=&\Re\bigg[
-(1959346982341+140861987\sqrt{-2317723})\exp(2\pi{\rm i}n/7)\nonumber\\&&
-(686210881202-650234914\sqrt{-2317723})\exp(4\pi{\rm i}n/7)\nonumber\\&&
-(1670361863821+547274245\sqrt{-2317723})\exp(6\pi{\rm i}n/7)\bigg]\label{v}
\end{eqnarray}
for $n=1\ldots6$, using
\begin{equation}
6\cos(\pi/7)=1
+\left(\frac{-7+7\sqrt{-27}}{2}\right)^{\frac13}
+\left(\frac{-7-7\sqrt{-27}}{2}\right)^{\frac13}
\label{cos}
\end{equation}
and then
\begin{equation}
x=\frac{v_1^{\frac17}-(-v_2)^{\frac17}+v_3^{\frac17}
+v_4^{\frac17}+v_5^{\frac17}+v_6^{\frac17}}{7}
\label{x}
\end{equation}
is the unique real root of $Q_7(x)=0$.
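Both~(\ref{cos}) and the structure of~(\ref{v}) admit quick floating-point checks, sketched here in Python: the principal complex cube roots happen to give the right branch in~(\ref{cos}), and summing~(\ref{v}) over $n=1\ldots6$ collapses each root-of-unity column to $-1$, so the $v_n$ sum to the sum of the real parts of the three coefficients.

```python
import cmath

# Check (cos): principal complex cube roots give the right branch.
rhs = 1 + ((-7 + 7 * cmath.sqrt(-27)) / 2) ** (1 / 3) \
        + ((-7 - 7 * cmath.sqrt(-27)) / 2) ** (1 / 3)
assert abs(6 * cmath.cos(cmath.pi / 7) - rhs) < 1e-12

# Check (v): each column of seventh roots of unity sums to -1.
s = cmath.sqrt(-2317723)
coeffs = [1959346982341 + 140861987 * s,
          686210881202 - 650234914 * s,
          1670361863821 + 547274245 * s]
v = [sum((-c * cmath.exp(2j * cmath.pi * k * n / 7)).real
         for k, c in enumerate(coeffs, start=1)) for n in range(1, 7)]
assert abs(sum(v) - (1959346982341 + 686210881202 + 1670361863821)) < 1.0
```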
The algebraic integers in $u_n$ and $v_n$
were found at 38-digit precision, using the method
outlined in~\cite[Chap.~3.1]{Morain} and there exemplified
by the quintic that generates the Hilbert class field
of $Q(\sqrt{-47})$.
As remarked in~\cite[VI-5]{CM}, that quintic
was solved by G.P.\ Young~\cite{Young} in 1888.
For G.N.\ Watson's comments on Young, see~\cite{B-S-W}.
For J.M.\ Whittaker's comments on Watson, see~\cite{Watson-obit}.
For the inspirational role of Srinivasa Ramanujan, see~\cite{B-C-Z}.
\subsection{Numerical checks}
At no stage in the reduction of $[f,g,\lambda]$ to such simple
radicals was it necessary to use a working precision
above 15,000 digits. The results were then checked at a precision
of 40,000 digits. For the singular value, that is very easy,
since we need only take seventh, fifth, cube and square roots
and check the relation between a pair of AGMs in~(\ref{k-N}).
To check the elliptic integral, I evaluated the Chowla--Selberg
formula~(\ref{C-S}) at a precision of 40,000 digits.
As a final check that no stray
factor had been overlooked in going from the $\Gamma$ values
in~(\ref{G-N}) to the $\eta$ values in~(\ref{C-S}),
I evaluated 2,317,723 values of the $\Gamma$ function,
at 38-digit precision,
and combined them with the Kronecker symbol, obtaining
agreement with~(\ref{w}). The checking programme {\tt K2317723.gp}
and its output {\tt K2317723.out} are in the same directory as the
monomial coefficients, with a URL given in the first footnote.
\section{Comments and conclusion}
As announced in~\cite{nmbrthry,LL2008}, I had earlier reduced the
elliptic integrals $K_{34483}$ and $K_{1242763}$ to algebraic
numbers and $\Gamma$ values, following
the identification of elliptic integrals at singular values
in quantum field theory~\cite{B3G,contour}.
However, that was done more laboriously, without benefit of the
novel construction in~(\ref{sig}--\ref{s}).
The discoveries reported here stemmed from my persistent belief that
(notwithstanding well-intentioned advice to the contrary)
the problem of a polynomial with degree $3h$, for
singular values $k_N$ with $N\equiv3$~mod~8,
ought (at bottom) to be no more difficult than the problem with
degree $h$, for $N\equiv7$~mod~8.
It was thus rather gratifying to discover that 3~mod~8 is, in fact,
far preferable to 7~mod~8. In particular, I remark that:
\begin{enumerate}
\item The polynomial $G$ in~(\ref{G}) generates the Hilbert class field
with great (perhaps unprecedented) economy for large $N\equiv3$~mod~8
and coprime to 3, since it is precisely the trebling of roots
of the Weber polynomial that allowed me to combine their
reciprocals, three at a time. Thus we may avoid the large-$N$ growth
of $r=\exp(\pi\sqrt{N}/24)+o(1)$, using a level-48 class invariant with
growth
\begin{equation}
g=\alpha(N)\exp(\pi\sqrt{N}/48)+o(1)
\label{growth}
\end{equation}
where the asymptotic prefactor $\alpha(N)\in[-\sqrt2,\sqrt2]$ is given
by the signature~(\ref{sig}) as
\begin{equation}
\alpha(N)=\left\{\begin{array}{ll}{}
-\sqrt{1-\beta_-}\,=\,\sqrt2\cos(11\pi/16)&\mbox{ for }N\equiv35\mbox{ mod }64\\{}
-\sqrt{1-\beta_+}\,=\,\sqrt2\cos( 9\pi/16)&\mbox{ for }N\equiv11\mbox{ mod }64\\{}
-\sqrt{1+\beta_-}\,=\,\sqrt2\cos(13\pi/16)&\mbox{ for }N\equiv51\mbox{ mod }64\\{}
-\sqrt{1+\beta_+}\,=\,\sqrt2\cos(15\pi/16)&\mbox{ for }N\equiv59\mbox{ mod }64\\{}
+\sqrt{1-\beta_-}\,=\,\sqrt2\cos( 5\pi/16)&\mbox{ for }N\equiv 3\mbox{ mod }64\\{}
+\sqrt{1-\beta_+}\,=\,\sqrt2\cos( 7\pi/16)&\mbox{ for }N\equiv43\mbox{ mod }64\\{}
+\sqrt{1+\beta_-}\,=\,\sqrt2\cos( 3\pi/16)&\mbox{ for }N\equiv19\mbox{ mod }64\\{}
+\sqrt{1+\beta_+}\,=\,\sqrt2\cos( \pi/16)&\mbox{ for }N\equiv27\mbox{ mod }64
\end{array}\right.
\label{alpha}
\end{equation}
with
\begin{equation}
\beta_{\pm}=\sqrt{\frac12\pm\sqrt{\frac18}}
\label{beta}
\end{equation}
obtained from~(\ref{s}) in the limit $r\to\infty$.
\item I find it notable that a novel solution to a problem relating
to elliptic integrals was suggested, almost by accident,
by typing merely 3 primes into Neil Sloane's wonderful search
engine for integer sequences~\cite{OEIS}, which shrewdly informed me
of a common residue.
\item The challenge of increasing the value of $N$, a square-free number
for which the complete elliptic integral $K_N$ has been successfully
reduced to explicit radicals and $\Gamma$ values, is now seen
to be {\em far} easier for $N\equiv3$~mod~8 than for $N\equiv7$~mod~8,
since the minimum value of $h(-N)$ accessible using the residue 3~mod~8
is approximately 3 times smaller than that for 7~mod~8,
for comparable $N$.
\item The cause is clear: we know the result for
the sum of Kronecker symbols in~\cite[Cor.~5.3.13]{Cohen}
\begin{equation}
\sum_{k=1}^{\frac{N-1}{2}}\left(\frac{-N}{k}\right)
=\left\{\begin{array}{rl}
3h(-N)&\mbox{for }N\equiv3\,{\rm mod}\,8\\
h(-N)&\mbox{for }N\equiv7\,{\rm mod}\,8\end{array}\right.
\label{by-3}
\end{equation}
and have very little reason to expect the left-hand side
of this equation to favour one residue of $N$ over another, on average.
\item Indeed it does not. The smallest known odd class number $h(-N)$
for $N>2100000$ and $N\equiv3$~mod~8 is $h(-2317723)=105$, while
the smallest for $N\equiv7$~mod~8 is $h(-2140807)=309$. As
expected, from the right-hand side of~(\ref{by-3}), the latter
is close to 3 times the former. It might have been thought, heretofore, that
what we gained on Kronecker's swings, by choosing 3~mod~8,
would be lost on Weber's
roundabouts, so to speak\footnote{The colloquial saying seems to
be: ``What's lost upon the roundabouts, we pull up on the swings.''},
where we are confronted by a Weber polynomial with degree $3h$
for the residue 3~mod~8.
\item However, I have demonstrated that nothing is lost, thanks
to the construction in~(\ref{sig}--\ref{s}) which
gives a pair of class invariants, both of whose minimal polynomials
have (conjecturally) degree $h$ for all square-free $N>1099$
with $N\equiv3$~mod~8 and $N$ coprime to 3.
One of these appears to outperform the double eta-quotient method.
\item It is understandable why the residue 3 modulo 8 was
discarded~\cite[Sect.\ 7.2.2, p.\ 46]{Atkin-M} in the early days
of elliptic curve primality proving: the factor $3$ in
the degree $3h$ of the Weber polynomial appeared to be a
considerable hindrance. Yet it is, in reality, a great {\em help} in
generating the class field of degree $h$, with true economy.
\item For $N=9760387\equiv3$~mod~8, mentioned in an
update~\cite[Table~3]{fastECPP} on progress~\cite{C-C}
with elliptic curve primality proving,
the minimal polynomial $G$ of the level-48 class invariant
$g$ in~(\ref{pair}) has a height whose {\em logarithm} is less than 37\%
of the logarithmic height generated by the double eta-quotient
used in {\em Pari-GP}. Moreover, the far simpler
polynomial $G$ was generated by~(\ref{R-a-b-c})
in less than 60\% of the time taken by
{\tt quadhilbert(-9760387)} in {\em Pari-GP}.
\item After completing this work, I found that the cubic
relative extension~(\ref{f-g}) had been analyzed
in~\cite{Russell,Watson-sing,Y-Z} in cases with class number $h(-N)\le5$.
\end{enumerate}
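Before concluding, I note that the eight prefactor identities in~(\ref{alpha}) are easy to confirm numerically; a minimal floating-point sketch, in Python:

```python
import math

# Check the eight prefactor identities: each value of the form
# (+/-)sqrt(1 -/+ beta_(+/-)) equals sqrt(2)*cos(k*pi/16), for the
# listed k; note beta_- = sin(pi/8) and beta_+ = cos(pi/8).
bm = math.sqrt(0.5 - math.sqrt(0.125))   # beta_-
bp = math.sqrt(0.5 + math.sqrt(0.125))   # beta_+
cases = [(-math.sqrt(1 - bm), 11), (-math.sqrt(1 - bp), 9),
         (-math.sqrt(1 + bm), 13), (-math.sqrt(1 + bp), 15),
         (+math.sqrt(1 - bm), 5),  (+math.sqrt(1 - bp), 7),
         (+math.sqrt(1 + bm), 3),  (+math.sqrt(1 + bp), 1)]
for val, k in cases:
    assert abs(val - math.sqrt(2) * math.cos(k * math.pi / 16)) < 1e-12
```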
I conclude by remarking that
negative discriminants $D=-N$ with $N\equiv3$~mod~8
have recently been used to good effect in the
construction of elliptic curves of prime order~\cite{Broker}
as well as in elliptic curve primality proving~\cite{Primo,fastECPP}.
It may be that the class invariants $[f,g]$ constructed
in~(\ref{pair}) have something to offer researchers in these
and other fields. To that end, I append a polynomial, derived
from~(\ref{zero},\ref{integer}), that relates $g$ to the $j$-invariant.
\section{Appendix}
With $J=j((1+\sqrt{-N})/2)$ and $[f,g]$ defined in~(\ref{pair})
for $N\equiv3$~mod~8, I obtained
{\scriptsize\begin{eqnarray*}
&&4722366482869645213696g^{192}
+906694364710971881029632g^{189}\\&&{}
+83642555144587156024983552g^{186}
+4939066436035567493262082048g^{183}\\&&{}
+209846732144453295821190856704g^{180}
+6836790472875669456820597948416g^{177}\\&&{}
+177760660111365660798399713116160g^{174}
+3790405367998157254338394567213056g^{171}\\&&{}
+67599317184302478754990860798001152g^{168}
+1023330374861490173762756220786049024g^{165}\\&&{}
+13300167538995234485451503275754913792g^{162}
+149751357319880860617353032541637967872g^{159}\\&&{}
+1471242473645701356762195242184643444736g^{156}
+12686152623120457776559166922665911910400g^{153}\\&&{}
+96465713862314370555819332777575421313024g^{150}
+649387270593628934858171069925186898755584g^{147}\\&&{}
+(31230955333453581854030430208J
+3882453146659327990928087554832136180596736)g^{144}\\&&{}
+(3112575311968735739497345449984J
+20668528534099939341664223139973218586066944)g^{141}\\&&{}
+(146491081273850273193327964717056J
+98181174566531282177050821993847942140657664)g^{138}\\&&{}
+(4334567835473120225746709693595648J
+416862949310707523391004696461020551773683712)g^{135}\\&&{}
+(90573700669027853953409791435997184J
+1584081987345225328419300608733679840703545344)g^{132}\\&&{}
+(1423343973438783107899395237834915840J
+5392827180734138390120670122880709527544528896)g^{129}\\&&{}
+(17493275962926368294467182339498704896J
+16459794382816811643862933127629261242032455680)g^{126}\\&&{}
+(172648676792410562129703110693458280448J
+45061910572250059933411888109783903591347519488)g^{123}\\&&{}
+(1394283125785794590584373780949472641024J
+110684289672788641685181184738158837724230975488)g^{120}\\&&{}
+(9342192287286270079567239190370043559936J
+243939817239193661299082038559564687476650934272)g^{117}\\&&{}
+(52479612331578998117553933098199803756544J
+482336992597299364139466938834793244884021018624)g^{114}\\&&{}
+(249139788648436660109159830308175969517568J
+855388454309670556943328874006671088228310712320)g^{111}\\&&{}
+(1005715446292108629597040700247216865935360J
+1359908516549423968069669760282605503992983191552)g^{108}\\&&{}
+(3468496180439370615221376712976872425652224J
+1936722218974592826488979701653464696769575124992)g^{105}\\&&{}
+(10256774471149943627485447340354884299915264J
+2468005608190905568464422101020736836421097095168)g^{102}\\&&{}
+(26077249048140483395956375421712523558125568J
+2809141400052941240710393397868547027567342780416)g^{99}\\&&{}
+(74434605568023196281142352281600J^2
+57114891944394614356435459851412372728053760J\\&&{}
+2847596261406655579330927459837021685211463680000)g^{96}
+(2477554196157529183058923534417920J^2\\&&{}
+107908793049662694542903803183229293591265280J
+2557916576434113812829574035094728838099498958848)g^{93}\\&&{}
+(37777050131731872831172646094766080J^2
+176007812462339450094643683054165114857979904J\\&&{}
+2018094067316585479424095293307588682510015397888)g^{90}
+(350315392609385120206628736534052864J^2\\&&{}
+247919707199861830348868071022250230292676608J
+1375480968524779058279375724416284358713081331712)g^{87}\\&&{}
+(2212954931060628055518534721471512576J^2
+301530529184884050981024118577114960445308928J\\&&{}
+782912651925726045755575568707697403306226745344)g^{84}
+(10113655478296307547676853277516890112J^2\\&&{}
+316467192335142296745902365499603998948196352J
+342002381493088167709431759538867103984517120000)g^{81}\\&&{}
+(34686160560417913758497734880697253888J^2
+286296268001635388526891108596990972415442944J\\&&{}
+80670074215058098900212634195910829748908982272)g^{78}
+(91454819947811608348373674411331420160J^2\\&&{}
+222854932560540399953035451922185320440791040J
-32576013459072580759743422218374296690436866048)g^{75}\\&&{}
+(188538486053819166866726313869734576128J^2
+148855954137712460111900655737028200724692992J\\&&{}
-55144112538101344960539779845043749378943090688)g^{72}
+(307688231969241589959784881772377931776J^2\\&&{}
+84952975352400749448181433286283669083783168J
-39652008878894560091506319036249518347078598656)g^{69}\\&&{}
+(401243764881273752344676821704124661760J^2
+41126233908050171152515472191400945995743232J\\&&{}
-18737893972438524931145834580573490329621102592)g^{66}
+(421140463268410135952689102182263291904J^2\\&&{}
+16667390579331302318343971161533721635454976J
-5174077032622993644412698123126373574057656320)g^{63}\\&&{}
+(357737370032797306669416932583241940992J^2
+5506258143473539813062973804040786515329024J\\&&{}
+356706547547601649023766058702724252706013184)g^{60}
+(246929031550304841851998761113119358976J^2\\&&{}
+1391923871860051857163468765410511483305984J
+1324523691432456084761088145116209617060233216)g^{57}\\&&{}
+(138863115636906600095701410448505044992J^2
+218444669720290975310036847391841483489280J\\&&{}
+806529800268765371684542515696435618323103744)g^{54}
+(63697798479377155669175763462382419968J^2\\&&{}
-5462276197319471266819439416672115490816J
+266980239729130899108909660949569834722525184)g^{51}\\&&{}
+(698176579929963364344659968J^3
+23826118069257453400721993188748820480J^2\\&&{}
-13455662359513885691724184271536483467264J
+25376524169352783195973261633263764798177280)g^{48}\\&&{}
+(2550985327389109428468842496J^3
+7253560775325956917535952133876088832J^2\\&&{}
-1613591086910774630703152810299985231872J
-24065084574987736253843103599575646487969792)g^{45}\\&&{}
+(4174917207070705118814928896J^3
+1790785455303357581601357069218217984J^2\\&&{}
+1787431768056365804275779798027674320896J
-14211863708323917478342629778282407593508864)g^{42}\\&&{}
+(4040205466878976552201093120J^3
+356547255740949811729208872745828352J^2\\&&{}
+1125486365607802615981912345130657906688J
-3441206898378612596310745579245155244834816)g^{39}\\&&{}
+(2570697570474836080757047296J^3
+56806582737949670581168990750507008J^2\\&&{}
+226614072500397724462829613101904560128J
-112335117391643625302204517357601891024896)g^{36}\\&&{}
+(1131315371638572737196195840J^3
+7167453070918708350458449780277248J^2\\&&{}
-87239165909372962210788016406434676736J
+192249493128040752962179905607616239239168)g^{33}\\&&{}
+(352740455777859128457691136J^3
+706517742750046477475912431435776J^2\\&&{}
-64776713258997636457464555907753967616J
+66506990142748686364267453679736522276864)g^{30}\\&&{}
+(78521574805087093933473792J^3
+53633593765131740994404867899392J^2\\&&{}
-10038379384210031276177488830957355008J
+7410964829180529718510440126722403729408)g^{27}\\&&{}
+(12416913717118929289347072J^3
+3135707043050900857899932712960J^2\\&&{}
+1593542960922142737017862254701314048J
-659117895483763817020809120277963210752)g^{24}\\&&{}
+(1371359778179842251423744J^3
+108551055367292136549255217152J^2\\&&{}
+405810458730443210149413338518388736J
-482780631328626439347360470569198813184)g^{21}\\&&{}
+(102588738647821241548800J^3
+64988007965336090461895393280J^2\\&&{}
-1063100916132737610564329604644864J
-65330166994701834714296174655572017152)g^{18}\\&&{}
+(4951224026747224719360J^3
-38854100569579258684962242560J^2\\&&{}
-1959825089179216758355828729184256J
-2155988932398684231421595412269629440)g^{15}\\&&{}
+(142903607317254504448J^3
+7056016192482886441475506176J^2\\&&{}
-32318350469538093301391589113856J
+1183943345005433116201363571887570944)g^{12}\\&&{}
+(2182827387064418304J^3
-415803546176586840262311936J^2\\&&{}
+1949335548919313469500521709568J
+185598558328963647368433135255552000)g^9\\&&{}
+(14241167385034752J^3
+6745596914666936897372160J^2\\&&{}
-90486773832711112570699776000J
+9948227935453805037037289472000000)g^6\\&&{}
+(25348472307712J^3
-19845426622060560384000J^2\\&&{}
+4189061192520522792960000000J
+181543631801412552228864000000000)g^3\\&&{}
+J^4
+2654208000J^3
+2348273369088000000J^2
+692533995824480256000000000J\;=\;0\,.\end{eqnarray*}}
The corresponding polynomial relation for $f$ has degree 8
in $J$ and is likewise available in the file {\tt PhiJFG.txt}
with checks provided by {\tt PhiJFG.gp} and {\tt PhiJFG.out}
from the URL in the first footnote.
\section*{Acknowledgements}
I am very grateful to David Bailey, Karim Belabas,
John Bolton, Jon Borwein, Reinier Br\"{o}ker, Chris Caldwell,
Larry Glasser, Marcel Martin, Neil Sloane and John Zucker for their
generous advice and gentle encouragement.
\section{Introduction}
\label{sec:intro}
Publishing new research in journals/conferences is a common practice in the scientific community. It is noticed that papers of a few authors consistently get accepted in journals whereas papers of certain other authors rarely get accepted\footnote{https://www.sciencemag.org/careers/2018/12/yes-it-getting-harder-publish-prestigious-journals-if-you-haven-t-already}. An intriguing question thus is what makes the papers of certain authors almost always eligible for acceptance. Is there a special recipe that they follow in preparing their manuscripts? Does it depend on their position in the collaboration/citation network? Does their experience or their $h$-index matter? Does the diversity in the topics that they work on help escalate the acceptance? The present paper attempts to delve into some of these questions and characterise authors based on their paper acceptance profile. We base our investigations on a dataset obtained from the Journal of High Energy Physics that has information about authors, papers written by them, citations obtained by them and the review reports written by expert referees for each of their accepted papers. The overall peer review workflow for this journal is illustrated in Figure~\ref{fig:peer_review_system}. In a nutshell, the workflow is as follows -- once an author submits a paper, the system allocates the submission to an editor based on a simple keyword matching technique. The editor then handles the paper and chooses one or more competent referees who are experts in the area and can judge the technical merit of the paper. The referee(s) in turn read the paper and send their review report(s) to the editor. The editor reads the review(s) and takes a decision to either accept, reject or invite the authors to revise and resubmit. A revise and resubmit decision re-instantiates the workflow described above, and the cycle continues until the paper is eventually accepted or rejected.
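The revise-and-resubmit cycle described above can be sketched as a simple loop; the decision function and the cap on rounds here are hypothetical stand-ins, not the journal's actual editorial policy:

```python
# Minimal sketch of the revise-and-resubmit loop: each round an
# editorial decision is taken; "revise" sends the paper back to the
# authors and re-enters the same workflow.
def review_round(decide, paper, max_rounds=5):
    for round_no in range(1, max_rounds + 1):
        decision = decide(paper, round_no)
        if decision in ("accept", "reject"):
            return decision, round_no
        paper = paper + "+revised"   # authors revise and resubmit
    return "reject", max_rounds
```

For example, a decision function that accepts on the third round yields `("accept", 3)`.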
We categorize the authors in this dataset into three classes based on the fraction of their papers accepted to the journal. We calculate the \textit{acceptance rate} ($ACC$) of an author as the ratio of the number of papers accepted to the number of papers submitted by the author to the journal. For each of the three categories (discussed below) we analyse a bunch of interesting features that are drawn from the collaboration/citation network of an author as well as the peer reviews received by the different accepted papers of the authors. We find that these features are considerably different across the three $ACC$ classes.
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.65\hsize,height=5.5cm]{Figures/EDITOR_REVIEWER.jpg} \\
\end{tabular}
\caption{The JHEP peer review workflow.}
\label{fig:peer_review_system}
\end{figure}
\vspace{-0.2cm}
\subsection{Our contributions}
We categorize the authors into three classes based on their acceptance rate. Authors whose papers are consistently accepted for publication and have a high $ACC$ are placed in the class $ACC_{high}$; authors whose papers are rarely accepted and have a low $ACC$ are placed in the class $ACC_{low}$; and authors who are in neither $ACC_{high}$ nor $ACC_{low}$ and have a moderate $ACC$ are placed in $ACC_{mid}$. We explain the process of author categorization in detail in section~\ref{sec:categorization}.
Our main contributions are threefold.
\begin{enumerate}
\item Rigorous analysis of the profile and peer review based features of authors belonging to each category.
\item Analyzing inter-category and intra-category interaction and network-centric properties obtained from three different networks -- (i) the co-reviewer network ($CRN$), (ii) the collaboration network ($CON$) and (iii) the co-citation network ($CCN$).
\item Early prediction of an author's category based on the profile, peer review data and network-centric features.
\end{enumerate}
Toward the first objective, we extract various features representing an author. These features are divided into two types -- (i) author profile based features ($AP_f$) and (ii) features based on peer review data ($PE_f$). The author profile based features ($AP_f$) comprise the citation index ($C_{ind}$), topic diversity ($T_{div}$), experience ($E_{cnt}$) and $h$-index~\cite{Hirsch:2005} ($H_{ind}$). The peer review based features ($PE_f$) consist of the sentiment of the review text ($SNT_{r}$), the length of the review text ($L_r$), reviewer diversity ($R_{div}$) and editor diversity ($Ed_{div}$).
In addition, we extract various features -- centrality values, clustering coefficients, core-periphery structure, etc. -- from the three different types of networks mentioned above. These networks are defined below.
\noindent \textbf{(i) Co-reviewer network ($CRN$)}: In this article, we introduce a co-reviewer network. Each author is considered as a node in the network and two authors are connected by an edge if their papers are reviewed by the same reviewer. In addition, we also prepare an induced co-reviewer graph for the three different author categories.\\
\noindent \textbf{(ii) Collaboration network ($CON$)}: Each author in this network is considered as a node and two authors are connected by an edge if they have co-authored a paper. We also prepare the induced collaboration networks of the authors of each category.\\
\noindent \textbf{(iii) Co-citation network ($CCN$)}: In this directed network, each author is considered as a node and two authors are connected by an edge ($a_{i} \longrightarrow a_{j}$) if author $a_i$ has cited an article authored by $a_j$. There is a bidirectional edge ($a_{i} \longleftrightarrow a_{j}$) if authors $a_i$ and $a_j$ cite each other.
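A minimal sketch of how the co-reviewer network edges could be assembled from paper metadata; the dictionary layout and identifiers are illustrative, not the actual JHEP schema:

```python
import itertools
from collections import defaultdict

# Toy paper metadata: each paper lists its authors and its reviewers.
papers = {
    "p1": {"authors": ["a1", "a2"], "reviewers": ["r1"]},
    "p2": {"authors": ["a3"], "reviewers": ["r1", "r2"]},
}

# Group authors by the reviewer who handled their papers.
authors_by_reviewer = defaultdict(set)
for meta in papers.values():
    for reviewer in meta["reviewers"]:
        authors_by_reviewer[reviewer].update(meta["authors"])

# CRN edge: two authors whose papers share at least one reviewer.
crn_edges = set()
for group in authors_by_reviewer.values():
    crn_edges.update(itertools.combinations(sorted(group), 2))
```

The collaboration and co-citation networks can be built analogously from author lists and citation links.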
For our experiments, we consider the authors who submitted their papers to the Journal of High Energy Physics (JHEP) between 1997 and 2015. We consider approx. 29k papers and more than 24k authors. We also have approx. 70k unique review reports.
\subsection{Key results}
A nuanced analysis shows that authors in the class $ACC_{high}$ usually receive more citations than the other two categories. We also note that papers of the $ACC_{low}$ authors receive more citations if they have coauthored with $ACC_{high}$ authors in some paper. $ACC_{high}$ authors always receive more positive reviews than the other two categories. An intriguing observation is that the set of referees and editors to whom the papers of the $ACC_{high}$ class are assigned is found to be less diverse than for the other two classes. The $ACC_{high}$ authors are more `central' in all the networks. We finally make early predictions of the $ACC$ category of an author and obtain 0.82 - 0.95 precision and 0.82 - 0.91 recall. In a follow-up discussion we describe how, apart from the author characteristics, the peer review system itself can potentially facilitate discrimination in the editing and reviewing of papers in the three categories, which could reinforce the distinction between the authors of these categories and calls for further investigation by the system administrators.
\subsection{Outline}
The rest of the paper is organised as follows. Section~\ref{sec:datasets} describes the dataset used in this paper. Section~\ref{sec:categorization} details the method for author categorization. Sections~\ref{sec:authors_feature} and~\ref{sec:peer_review_features} describe the author profile features and peer review based features respectively. In section~\ref{sec:graph_based}, we discuss the network features of the three categories of authors. In section~\ref{prediction}, we predict the category of the authors. In section~\ref{sec:catwise_authors_editor_reviewer_analysis}, we discuss the potential role of the peer review system in enhancing the distinction among the three categories of authors. Section~\ref{sec:related_work} presents a brief literature review. Finally, we conclude in section~\ref{conc}.
\section{Dataset Description}\label{sec:datasets}
In our article, we consider papers submitted to the Journal of High Energy Physics (JHEP)\footnote{https://jhep.sissa.it/jhep/} between 1997 and 2015. JHEP is one of the leading journals in the domain of high energy physics. In JHEP, the identity of the referee remains confidential. This dataset contains a total of 28871 papers, where the numbers of accepted and rejected papers are 20384 and 6190 respectively. We also have 70000 unique peer review reports. For each paper we have the title, author names, broad topics that the paper covers, publication date (in case it was accepted) and the number of citations for the accepted papers. In addition, this dataset contains the review text, number of review rounds, and editor and reviewer ids (anonymised) of each paper. We also have the citation links among the papers. For the rejected papers, we collected the arXiv\footnote{http://arxiv.org} id using the Inspire\footnote{https://inspirehep.net} search engine. We consider the cumulative number of citations obtained at the end of 2015. We present brief statistics of the dataset in Table~\ref{tab:dataset}.
\vspace{-0.2cm}
\begin{table}[tbhp]
\centering
\caption{\label{tab:dataset}Dataset description.}
\begin{adjustbox}{width=0.35\textwidth}
\begin{tabular}{|c|c|c|c|} \hline
{\bf Basic Information} & {\bf Count} \\ \hline
\#papers & 26574\\ \hline
\#unique authors & 24868 \\ \hline
\#papers (accepted) & 20384 \\ \hline
\#papers (rejected) & 6190 \\ \hline
Average \#citations (accepted papers) & 31.88 \\ \hline
Average \#citations (rejected papers) & 9.45 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\section{Author Categorization}
\label{sec:categorization}
In this section, we categorize the authors into three categories based on their article acceptance rate ($ACC$) -- (i) authors with high acceptance ($ACC_{high}$), (ii) authors with moderate acceptance ($ACC_{mid}$) and (iii) authors with low acceptance ($ACC_{low}$). The acceptance rate of an author is calculated as the ratio of the number of papers accepted to the number of papers submitted by that author. We calculate the article acceptance rate of each author for every year. For the $ACC_{high}$ category, we consider only those authors who have a high acceptance rate $(>0.7)$ in at least 70\% of their active years. The $ACC_{low}$ category contains authors who have a very low acceptance rate $(<0.4)$ in at least 80\% of their active years. We keep the rest of the authors (those not falling in the other two categories) in the $ACC_{mid}$ category. Statistics of the unique authors are given in Table~\ref{tab:category_stat}.
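The categorization rule above can be sketched as follows, assuming a hypothetical mapping from each of an author's active years to that year's acceptance rate:

```python
# Sketch of the per-year categorization rule; `yearly_acc` is a
# hypothetical {year: acceptance rate} mapping for one author.
def categorize(yearly_acc):
    rates = list(yearly_acc.values())
    # ACC_high: rate > 0.7 in at least 70% of the active years.
    if sum(r > 0.7 for r in rates) >= 0.7 * len(rates):
        return "ACC_high"
    # ACC_low: rate < 0.4 in at least 80% of the active years.
    if sum(r < 0.4 for r in rates) >= 0.8 * len(rates):
        return "ACC_low"
    # Everyone else is placed in ACC_mid.
    return "ACC_mid"
```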
The number of accepted and rejected papers in each class are noted in Figure~\ref{fig:categorywise_percentageOfPapers}. The papers of authors in the $ACC_{high}$ class almost always get accepted.
\begin{table}[tbhp]
\centering
\caption{Statistics of author categorization.}\label{tab:category_stat}
\begin{adjustbox}{width=0.3\textwidth}
\begin{tabular}{|c|c|} \hline
{\bf Author Categories} & {\bf \#Authors} \\ \hline
$ACC_{high}$ & 3688 \\ \hline
$ACC_{mid}$ & 10359 \\ \hline
$ACC_{low}$ & 9644 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=.45\textwidth]{Figures/categorywise_percentageOfPapers.jpg}\\
\end{tabular}
\caption{Percentage of accepted and rejected papers of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low) authors. }
\label{fig:categorywise_percentageOfPapers}
\end{figure}
\begin{figure*}[!tbh]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.45\hsize, height=7.5cm]{Figures/coauth_all.jpg}
\includegraphics[width=0.45\hsize, height=7.5cm]{Figures/coauth_high_mid_final.jpg} \\
\includegraphics[width=5cm,height=0.5cm,right]{Figures/AUTH_COLORS_FINAL.jpg}
\end{tabular}
\caption{(Left) This collaboration network includes $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$ authors. (Right) This collaboration network includes $ACC_{high}$ and $ACC_{mid}$ authors only for better visualisation.}
\label{fig:coauthorship}
\end{figure*}
\section{Author profile based features ($AP_f$)}
\label{sec:authors_feature}
\subsection{Citation index ($C_{ind}$)}
The citation count of each author is computed as the total number of citations the author received in his/her active period. For each category, we define the citation index as the \textit{standard deviation of the citation counts} of all the authors in that category. We compute $C_{ind}$ for the three categories $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$ (see Figure~\ref{fig:author_features}(d)). There is a stark difference in the values of $C_{ind}$ among the three categories. Authors in the class $ACC_{high}$ have a low $C_{ind}$ (approx. 50) whereas authors in the class $ACC_{low}$ have a high $C_{ind}$ (approx. 101). Thus, the citation counts in the class $ACC_{high}$ are far more uniform across authors compared to the $ACC_{low}$ class.
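As a concrete sketch, $C_{ind}$ for a class reduces to the standard deviation of the per-author citation totals (we use the population standard deviation here; the text does not specify which variant was used):

```python
from statistics import pstdev

def citation_index(citation_counts):
    """C_ind of a class: standard deviation of the authors'
    total citation counts (population variant assumed)."""
    return pstdev(citation_counts)
```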
\subsection{Experience ($E_{cnt}$)}
The experience of an author is defined as the number of papers he/she has published. For each category, we compute the experience of all the authors and report the mean of these $E_{cnt}$ values. We observe that $ACC_{high}$ has the highest mean experience, $ACC_{mid}$ has a moderate mean experience, while $ACC_{low}$ has a very low mean experience (see Figure~\ref{fig:author_features}(a)). From this, it is clear that the $ACC_{low}$ category contains authors who are either new to research or have very few publications.
\if{0}
\begin{table}[tbhp]
\centering
\caption{Mean experience ($E_{cnt}$) of authors in different categories}\label{tab:experience}
\begin{adjustbox}{width=0.3\textwidth}
\begin{tabular}{|c|c|} \hline
{\bf Author Categories} & {\bf Mean Experience} \\ \hline
$ACC_{high}$ & 8.49 \\ \hline
$ACC_{mid}$ & 5.35 \\ \hline
$ACC_{low}$ & 1.72 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\fi
\subsection{Topic diversity ($T_{div}$)}
We consider a topic set for each author, containing all the topics on which the author has published papers. For each author we compute the \textit{topic ratio} as the ratio of the total number of topics on which he/she has written a paper to the total number of papers he/she has published. For each category, we take the mean of the \textit{topic ratio} over all the authors to compute the topic diversity ($T_{div}$) (see Figure~\ref{fig:author_features}(e)). Interestingly, $ACC_{high}$ category authors have a lower $T_{div}$ (1.03) than the other two categories ($ACC_{mid}$ has 1.36 and $ACC_{low}$ has 1.57). We observe that $ACC_{low}$ category authors publish papers on many topics whereas $ACC_{high}$ authors focus on a relatively smaller number of topics and publish a large number of papers on those topics.
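A minimal sketch of the per-author \textit{topic ratio} (assuming, for simplicity, one topic label per paper; the function name is ours):

```python
def topic_ratio(paper_topics):
    """Topic ratio of one author: distinct topics / number of published papers.
    paper_topics: one topic label per paper (a simplifying assumption)."""
    return len(set(paper_topics)) / len(paper_topics)
```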
\subsection{$h$-index ($H_{ind}$)}
The $h$-index~\cite{Hirsch:2005} is defined as the maximum value of $h$ such that an author has published $h$ papers that have each been cited at least $h$ times. For all three categories, we consider the mean $H_{ind}$ of all the authors. From Figure~\ref{fig:author_features}(b), it is clear that $ACC_{high}$ authors have a very high mean $H_{ind}$ compared to the other two categories. Thus the $ACC_{high}$ class usually comprises the high impact authors.
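The definition above can be computed directly from an author's citation list; a standard sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    # walk papers from most to least cited; rank i is feasible while c >= i
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h
```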
\subsection{Team size ($TS$)}
Team size of an author is calculated as the number of contributing co-authors averaged across all the papers that the particular author has written. We examine mean team size for each category (see Figure~\ref{fig:author_features}(c)). $ACC_{high}$ and $ACC_{mid}$ authors have mean team sizes of 2.44 and 2.15. The typical team sizes for both these classes are very similar. On the other hand, we find that the mean team size of $ACC_{low}$ is $\sim1.61$ which is quite low compared to the other two classes.
\begin{figure*}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=1\hsize]{Figures/categorywise_auth1.jpg} \\
\end{tabular}
\caption{(a) The mean experience ($E_{cnt}$) of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low). (b) The mean $h$-index for the three categories. (c) The mean team size ($TS$) for the three categories. (d) Citation index ($C_{ind}$) of the three categories. (e) Topic diversity ($T_{div}$) for the three categories.}
\label{fig:author_features}
\end{figure*}
\section{Peer review text based features ($PF_f$)}
\label{sec:peer_review_features}
\subsection{Sentiment of review text ($SNT_{r}$)}
We compute a sentiment score in $[-1,1]$ for each review text of each paper\footnote{https://textblob.readthedocs.io/en/dev/}. For every author we compute the average review sentiment across all the papers (s)he has written. For every class, we take the mean of these average values across all the authors in that class (see Figure~\ref{fig:review_features}(a)). Among the three classes, the review text bears the highest positive sentiment (0.15) in the $ACC_{high}$ class. This is followed by the $ACC_{mid}$ class where the overall sentiment is 0.05. Finally, the review texts corresponding to the $ACC_{low}$ class indicate the presence of a strong negative sentiment $(-0.26)$.
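The two-level averaging described above (per-author mean first, then the class mean) can be sketched as follows; the sentiment scores themselves come from TextBlob's polarity (see footnote) and are taken as given here:

```python
def class_sentiment(reviews_by_author):
    """reviews_by_author: {author: [sentiment scores in [-1, 1], one per review]}.
    Average per author first, then average the per-author means over the class."""
    author_means = [sum(s) / len(s) for s in reviews_by_author.values()]
    return sum(author_means) / len(author_means)
```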
\if{0}
\begin{figure*}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=.33\hsize]{Figures/BiHighAccepted.jpg} &
\includegraphics[width=.33\hsize]{Figures/BiMediumAccepted.jpg}&
\includegraphics[width=.33\hsize]{Figures/BiLowAccepted.jpg}\\
(a) $ACC_{high}$ accepted papers & (b) $ACC_{mid}$ accepted papers & (c) $ACC_{low}$ accepted papers\\
\includegraphics[width=.33\hsize]{Figures/BiHighRejected.jpg} &
\includegraphics[width=.33\hsize]{Figures/BiMediumRejected.jpg}&
\includegraphics[width=.33\hsize]{Figures/BiLowRejected.jpg}\\
(d) $ACC_{high}$ rejected papers & (e) $ACC_{mid}$ rejected papers & (f) $ACC_{low}$ rejected papers\\
\end{tabular}
\caption{Wordclouds of frequent bi-grams in review text for accepted and rejected papers. (a) Review text of accepted papers of $ACC_{high}$ is positive and review contains positive bi-grams like \em{well written}, \em{I recommend} etc. (b) Review text of accepted papers of $ACC_{mid}$ contains positive words like \em{I recommend}, \em{recommended paper} but it also contain few negative bi-grams such as \em{can not}, \em{not clear} etc.}
\label{fig:review_text_emotion}
\end{figure*}
\fi
\subsection{Length of review text ($L_{r}$)}
The length of a review text is computed as the number of words present in the review text excluding stop-words (see Figure~\ref{fig:review_features}(b)). Surprisingly, we find that the $ACC_{high}$ category receives relatively lengthier reviews (2368 words on average) compared to $ACC_{low}$ (1305). It is therefore quite clear that papers in the $ACC_{high}$ class typically receive more detailed feedback from the referees compared to the $ACC_{low}$ class.
\iffalse
\begin{table}[tbhp]
\centering
\caption{Overall review rounds}\label{tab:review_rounds}
\begin{adjustbox}{width=0.45\textwidth}
\begin{tabular}{|c|c|c|} \hline
{\bf Categories}&{\bf Accepted papers} & {\bf Rejected papers} \\ \hline
$ACC_{high}$ & 2.33 & 2.05 \\ \hline
$ACC_{mid}$ & 2.27 & 1.83 \\ \hline
$ACC_{low}$ & 1.93 & 1.60 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\fi
\subsection{Reviewer diversity ($R_{div}$)}\label{ref_div}
We use the Shannon index~\cite{Spellerberg:2003} to calculate the reviewer diversity. For each author in a particular category, we extract the reviewer ids of all his/her published papers and add them to a global list. Thus we have one global list for each of the three categories. Next, for each category, we compute the entropy of this global list. Let the size of the global list for a category be $N$ and let the number of occurrences of a reviewer $r_i$ in the list be $f_i$. Then the entropy is $-\sum_{i}\frac{f_i}{N}\log\left(\frac{f_i}{N}\right)$. If the value of this entropy is low, the number of reviewers to whom the papers of a class go for review is very limited. In contrast, if this value is high for a class, many reviewers are assigned as referees for the papers in that class (see Figure~\ref{fig:review_features}(c)). Surprisingly, we notice that $ACC_{high}$ has a lower reviewer diversity $(\sim 6.83)$ than the other two categories. The $ACC_{mid}$ and $ACC_{low}$ categories have reviewer diversities of 7.36 and 7.34 respectively. This possibly indicates that for the $ACC_{high}$ class the set of referees is relatively more fixed, and papers of authors from this group usually go to other peer authors (in the role of referees) mostly from this group itself for review. This, we believe, is a sign of unhealthy reviewing practice. We shall discuss more about this in section~\ref{sec:catwise_authors_editor_reviewer_analysis}.
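The Shannon-index computation for $R_{div}$ (and likewise $Ed_{div}$) can be sketched as follows; the natural logarithm is assumed, since the base is not specified in the text:

```python
from collections import Counter
from math import log

def shannon_diversity(ids):
    """Shannon entropy of a list of reviewer (or editor) ids for one class:
    -sum (f_i/N) * log(f_i/N) over distinct ids."""
    n = len(ids)
    return -sum((f / n) * log(f / n) for f in Counter(ids).values())
```

A class whose papers all go to the same referee gets entropy 0; a class whose papers are spread uniformly over $k$ referees gets $\log k$.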
\subsection{Editor diversity ($Ed_{div}$)}\label{ed_div}
Once again we use the Shannon index~\cite{Spellerberg:2003} to calculate editor diversity. We compute this metric exactly as $R_{div}$ with the exception that here the three global lists are composed of editor ids to whom the papers are assigned (as opposed to reviewer ids in the previous case). Here also we observe that the editor diversity of $ACC_{high}$ is quite low ($\sim 3.94$); on the other hand, the editor diversities of the $ACC_{mid}$ and $ACC_{low}$ classes are relatively higher ($\sim 4.07$ and $\sim 4.01$ respectively, see Figure~\ref{fig:review_features}(d)). It seems that the same set of editors handles the papers of the $ACC_{high}$ class.
\subsection{Linguistic quality indicator ($LQI$)}
Here we analyze the different emotions (positive, optimism, cheerfulness, confusion and contentment) reflected by the words present in the review text\footnote{https://github.com/Ejhfast/empath-client}. We take the mean of the emotion values of the words present in a particular review text and average it over all authors in a class. We find quite a few interesting results. There are more positive emotion words in the review texts of the $ACC_{high}$ class (0.018) compared to the $ACC_{low}$ class (0.015). Further, there are more optimism related words in the review texts of the $ACC_{high}$ class (0.01) compared to the $ACC_{low}$ class (0.004). There are more cheerfulness related words in the review texts of the $ACC_{high}$ class (0.0017) compared to the $ACC_{low}$ class (0.0014). There are fewer confusion related words in the review texts of the $ACC_{high}$ class (0.0026) compared to the $ACC_{low}$ class (0.0036). Lastly, there are more contentment related words in the review texts of the $ACC_{high}$ class (0.0079) compared to the $ACC_{low}$ class (0.0059).
\begin{figure*}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=1\hsize]{Figures/categorywise_review1.jpg} \\
\end{tabular}
\caption{(a) The sentiment of review text ($SNT_{r}$) of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low). (b) The length of the review text ($L_{r}$) for the three categories. (c) Reviewer diversity ($R_{div}$) of the three categories. (d) Editor diversity ($Ed_{div}$) for the three categories.}
\label{fig:review_features}
\end{figure*}
\section{Network analysis based features ($NE_{f}$)}
\label{sec:graph_based}
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.8\hsize,height=8cm]{Figures/DEG_BET_allgraph_properties.jpg} \\
\end{tabular}
\caption{(a) The average degree centrality of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low) for the three networks ($CRN$, $CON$ and $CCN$). (b) The average betweenness centrality of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low) for three networks.}
\label{fig:deg_bet_cc_plot}
\end{figure}
\if{0}
\subsection{\textbf{Density}}
Density~\cite{} of a network can be defined as the degree of connections among nodes. Density is calculated as the number of edges present in the network divided by number of possible edges. To understand how well each category authors are connected among themselves, we calculate density of the whole network as well as for each category induced network.
\subsection{\textbf{Assortativity}}
Assortativity~\cite{Newman:2003} is defined as a metric to measure the preference for nodes to attach to others that are similar in some way. For example, a network is called assortative if high degree nodes are connected to high degree nodes and vice versa. We compute assortativity of overall and induced co-reviewer graph. The overall assortativity of whole co-reviewer graph is 0.63.
\subsection{\textbf{Reciprocity}}
Reciprocity is defined as a ratio of number of connections that are mutually linked to total number of possible connections. This measure is for directed network. We computed reciprocity for co-citation network and its induced subgraphs.
\subsection{Degree centrality}
Degree centrality of a node is defined as the ratio of the number of neighbors it has and the total number of possible neighbors. The node with high degree centrality is most important node in the network. We compute degree centrality for nodes of three different networks.{\color{blue}Add the plots}
\subsection{Betweenness centrality}
Betweenness centrality~\cite{Freeman:1977} of a node is defined by ratio of the number of shortest path passes through the node to the total number of shortest path from source to destination. The node with high betweenness centrality has more information than the other nodes. We compute betweenness centrality for all the three types of networks.
\subsection{Clustering coefficient}
Clustering coefficient~\cite{Giorgio:2007} of a node is computed as the ratio of the number of edges among its neighbors to the number of possible edges among its neighbors. Clustering coefficient shows how well a node's neighbors are connected among themselves. We also compute clustering coefficient of nodes for each network.
\subsection{Closeness centrality}
Closeness centrality~\cite{Freeman:1978} of a node is the inverse of the sum of all the shortest path between the node and all other nodes in the network. The node with highest closeness centrality is most central node in the network and closest to all other nodes.
\subsection{Page rank}
Page rank is an algorithm which estimates importance of a node in the network. The main assumption of this algorithm is the more important nodes likely to have more incoming edges than others. We computed page rank on $CCN$ and $CRN$ network.
\fi
In this section, we study the properties of the three different networks in detail.
\subsection{Analysis of the co-reviewer network ($CRN$)}
Recall that in a co-reviewer network each node corresponds to an author and two authors are connected if their papers have been co-reviewed by the same referee. We run a series of analyses on this network to investigate the differences between the three categories.
\subsubsection{Centrality measures}
Here we compute four centrality measures of the whole co-reviewer network.
\noindent{\bf Degree centrality}: We compute the average degree centrality of the authors for each category (see Figure~\ref{fig:deg_bet_cc_plot}(a)). We observe that the average degree centrality of the authors of the $ACC_{high}$ category is high (0.019) whereas the average degree centralities of the authors for $ACC_{mid}$ ($\sim 0.011$) and $ACC_{low}$ ($\sim 0.002$) are relatively lower.
\noindent{\bf Betweenness centrality}: We compute the average betweenness centrality of the authors of each category. The average betweenness centrality (see Figure~\ref{fig:deg_bet_cc_plot}(b)) of the $ACC_{high}$ category is marginally higher ($\sim 0.00019$) than that of the other two categories.
\noindent{\bf Closeness centrality}: We calculate the average closeness centrality of the authors for each category. The average closeness centrality (see Figure~\ref{fig:clos_pg_plot}(a)) of the $ACC_{high}$ category is higher ($\sim 0.362$) than that of the other two categories.
\noindent{\bf PageRank}: We calculate the average PageRank score of the authors for each category. The average PageRank (see Figure~\ref{fig:clos_pg_plot}(b)) of the $ACC_{high}$ category is marginally higher ($\sim 0.0000719$) than that of the other two categories.
\subsubsection{Core periphery analysis}
Here we perform a $k$-shell decomposition of the network and inspect four different shells -- the innermost ($k=180$), the inner-mid ($k=140$), the outer-mid ($k=90$) and the outermost ($k=1$). As noted in Table~\ref{tab:core_periphery}, we observe that the innermost and inner-mid shells contain a larger fraction of nodes from the $ACC_{high}$ and $ACC_{mid}$ classes compared to the $ACC_{low}$ class. In contrast, the outermost shell contains the largest fraction of nodes from the $ACC_{low}$ class.
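The $k$-shell decomposition can be obtained by iterative peeling: repeatedly remove all nodes of degree at most $k$, assigning shell index $k$ to the removed nodes. A generic sketch on an adjacency-set representation (the shells in Table~\ref{tab:core_periphery} are, of course, computed on the full co-reviewer network):

```python
def core_numbers(adj):
    """Shell index (core number) of every node of an undirected graph.
    adj: {node: set of neighbours}."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    core, k = {}, 0
    while adj:
        low = [u for u in adj if len(adj[u]) <= k]
        if not low:
            k += 1          # nothing peelable at this k; raise the threshold
            continue
        while low:           # peel, cascading to neighbours whose degree drops
            u = low.pop()
            core[u] = k
            for v in adj.pop(u):
                if v in adj:
                    adj[v].discard(u)
                    if len(adj[v]) <= k and v not in low:
                        low.append(v)
    return core
```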
\begin{table}[tbhp]
\centering
\caption{Core periphery analysis of the co-reviewer network.}\label{tab:core_periphery}
\begin{adjustbox}{width=0.45\textwidth}
\begin{tabular}{|c|c|c|c|c|} \hline
{\bf Shell}&{\bf \# Authors} & {\bf \% $ACC_{high}$} & {\bf \% $ACC_{mid}$} & {\bf \% $ACC_{low}$}\\ \hline
Innermost (180) & 167 & \cellcolor{green!20}29.9 & \cellcolor{green!20}49.1 & 20.3 \\ \hline
Inner-mid (140)& 37 & \cellcolor{green!20}13.5 & \cellcolor{green!20}78.3 & 8.1 \\ \hline
Outer-mid (90) & 116 & 11.2 & 48.2 & 36.2 \\ \hline
Outermost (1) & 227 & 7 & 14.5 & \cellcolor{red!20}62 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\subsubsection{Induced co-reviewer network}
Here we construct three induced co-reviewer networks comprising the authors in the classes $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$ respectively.
\noindent{\bf Density}: We calculate the density of each induced graph to observe how densely the authors are connected among themselves through the common reviewers. Density of the $ACC_{high}$ induced graph is higher (0.047) than others. Density of $ACC_{mid}$ and $ACC_{low}$ are 0.016 and 0.001 respectively.
\noindent{\bf Assortativity coefficient}: We compute the assortativity coefficient of the three induced networks. While this coefficient for the $ACC_{high}$ induced graph is as high as 0.82, the same for the $ACC_{mid}$ and the $ACC_{low}$ induced graphs are 0.66 and 0.24 respectively. This indicates that the $ACC_{high}$ induced graph is much more homophilic compared to the other two graphs.
\noindent{\bf Edge transitions}: We finally study the edge transitions among the three induced graphs, i.e., given a pair of induced graphs we find the fraction of edges going from one of them to the other from the original co-reviewer network. We find that $ACC_{high}$ and $ACC_{mid}$ share almost 34.7\% edges whereas $ACC_{high}$ and $ACC_{low}$ share only 4.3\% edges. The fraction of edges between $ACC_{mid}$ and $ACC_{low}$ is around 10.4\%.
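The edge-transition fractions can be computed directly from the edge list and the author-to-class mapping; a sketch (function name and input format are ours; single-element keys denote within-class edges):

```python
from collections import Counter

def edge_transitions(edges, cls):
    """Fraction of edges between each pair of author classes.
    edges: iterable of (u, v) author pairs; cls: {author: class label}."""
    # an unordered class pair per edge; a frozenset of size 1 means within-class
    pairs = Counter(frozenset({cls[u], cls[v]}) for u, v in edges)
    total = sum(pairs.values())
    return {tuple(sorted(p)): c / total for p, c in pairs.items()}
```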
\vspace{-0.3cm}
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.8\hsize,height=8cm]{Figures/CLOS_PG_allgraph_properties.jpg} \\
\end{tabular}
\caption{(a) The average closeness centrality of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low) for three networks ($CRN$, $CON$ and $CCN$). (b) The average PageRank of $ACC_{high}$ (High), $ACC_{mid}$ (Moderate) and $ACC_{low}$ (Low) for three networks.}
\label{fig:clos_pg_plot}
\end{figure}
\vspace{-0.2cm}
\subsection{Analysis of co-citation network ($CCN$)}
Recall that the co-citation network has authors as its nodes and there is an edge from author $a_i$ to $a_j$ if $a_i$ cites a paper of $a_j$. If both $a_i$ and $a_j$ cite each other in some of their papers then there is a bidirectional edge between them.
\subsubsection{Centrality measures}
We compute four centrality measures in the co-citation network.
\noindent{\bf Degree centrality}: We compute the average degree centrality of the authors (see Figure~\ref{fig:deg_bet_cc_plot}(a)) for each category. We observe that the average degree centrality of the authors of $ACC_{high}$ category is high compared to the average degree centrality of authors for $ACC_{mid}$ and $ACC_{low}$ categories.
\noindent{\bf Betweenness centrality}: We compute the average betweenness centrality of the authors for each category. The average betweenness centrality (see Figure~\ref{fig:deg_bet_cc_plot}(b)) of the authors of $ACC_{high}$ category is marginally higher ($\sim 0.0004$) than the other two categories.
\noindent{\bf Closeness centrality}: We calculate the average closeness centrality of the authors of each category. The average closeness centrality (see Figure~\ref{fig:clos_pg_plot}(a)) of $ACC_{high}$ category authors is higher ($\sim 0.092458$) than the other two categories.
\noindent{\bf PageRank}: We calculate the average PageRank score of the authors of each category. The average PageRank (see Figure~\ref{fig:clos_pg_plot}(b)) of $ACC_{high}$ and $ACC_{mid}$ categories are marginally higher than the $ACC_{low}$ category.
\subsubsection{Induced co-citation network}
Here again we construct three induced co-citation networks comprising the authors from the three classes -- $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$.
\noindent{\bf Cross citations}: We find the fraction of citations running in between the classes. Notably, the largest fraction of citation edges run between $ACC_{high}$ and $ACC_{mid}$ induced graphs (45\%). Fraction of citation edges running between $ACC_{high}$ and $ACC_{low}$ induced graphs, on the other hand, is the least (1\%).
\noindent{\bf Self citations}: Fraction of citation edges running within the $ACC_{high}$ induced graph is the highest $(\sim 33.2\%)$. This fraction for the $ACC_{mid}$ and $ACC_{low}$ are 17.9\% and 0.4\% respectively.
\noindent{\bf Reciprocity}: We compute the reciprocity within and across all the three induced networks. Reciprocity within the $ACC_{high}$ induced network is the highest (0.61); reciprocity in the $ACC_{mid}$ induced network is 0.20 and the same for the $ACC_{low}$ induced network is 0.08 which is the least among the three.
The reciprocity between the $ACC_{high}$ and $ACC_{mid}$ induced networks is 0.34, which is higher than that between $ACC_{mid}$ and $ACC_{low}$ (0.11) as well as between $ACC_{high}$ and $ACC_{low}$ (0.12).
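Reciprocity within an induced network can be sketched as the fraction of directed citation edges whose reverse edge also exists (a simplified reading of the measure; self-loops and the cross-network variant are omitted):

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse (v, u) also exists."""
    edge_set = set(edges)
    mutual = sum((v, u) in edge_set for (u, v) in edge_set)
    return mutual / len(edge_set)
```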
\subsubsection{$ACC_{low}$ authors that are cited by $ACC_{high}$ authors}
Although such cases are rare, here we examine how citations from $ACC_{high}$ authors affect the fate of the papers written by $ACC_{low}$ authors. We separately consider those papers which are cited by $ACC_{high}$ authors and observe the characteristics of their authors. We find that the mean citation of papers written by $ACC_{low}$ authors and cited by $ACC_{high}$ authors is roughly double $(\sim 57.91)$ the mean citation of papers $(\sim 28.9)$ written by $ACC_{low}$ authors that are never cited by the $ACC_{high}$ authors.
We further notice that the mean citation of those $ACC_{low}$ authors $(\sim 50.07)$ whose papers are cited by $ACC_{high}$ authors is higher than the mean citation of the other $ACC_{low}$ authors $(\sim 28.94)$.
\if{0}
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=1\hsize]{Figures/cdf_plot.png} \\
\end{tabular}
\caption{Cumulative distribution function (CDF) of citations received by the authors of $ACC_{high}$,$ACC_{mid}$ and $ACC_{low}$}
\label{fig:log_citation_count}
\end{figure}
\fi
\subsubsection{$ACC_{low}$ authors cited by $ACC_{mid}$ authors}
In this section, we investigate the characteristics of those $ACC_{low}$ authors whose papers are cited by $ACC_{mid}$ authors. Once again, we observe that the mean citation of papers $(\sim 50.46)$ written by $ACC_{low}$ authors and cited by $ACC_{mid}$ authors is much higher than the mean citation of papers $(\sim 28.9)$ written by $ACC_{low}$ authors but never cited by the $ACC_{mid}$ authors.
\subsection{Analysis of collaboration network ($CON$)}
Recall that in the collaboration network each node is an author and two authors are connected if they have co-authored a paper together. We present a visualisation of the collaboration network in Figure~\ref{fig:coauthorship}. The left sub-figure shows the authors in the three categories as nodes of different colours: the blue nodes correspond to the authors in the $ACC_{high}$ category, the red nodes to the authors in the $ACC_{mid}$ category and the yellow nodes to the authors in the $ACC_{low}$ category. The blue nodes are concentrated mostly in the center of the network while the red and the yellow nodes are scattered all across the network. This becomes clearer when we draw the network of the authors corresponding to the $ACC_{high}$ and the $ACC_{mid}$ categories only (right sub-figure); the blue nodes are largely concentrated at the center of the network.
\subsubsection{Centrality measures}
We compute four centrality measures from the collaboration network.
\noindent{\bf Degree centrality}: We compute the average degree centrality of the authors (see Figure~\ref{fig:deg_bet_cc_plot}(a)) for each category. We observe that the average degree centrality of the authors in the $ACC_{high}$ category is higher (0.0039) than the average degree centrality of the authors in the other two categories.
\noindent{\bf Betweenness centrality}: We compute the average betweenness centrality of the authors of each category. The average betweenness centrality (see Figure~\ref{fig:deg_bet_cc_plot}(b)) of $ACC_{high}$ category is higher ($\sim 0.00088$) than the other two categories.
\noindent{\bf Closeness centrality}: We calculate the average closeness centrality of the authors of each category. The average closeness centrality (see Figure~\ref{fig:clos_pg_plot}(a)) of the $ACC_{high}$ category is higher ($\sim 0.105$) than that of the other two categories.
\noindent{\bf PageRank}: We calculate the average PageRank score of the authors of each category. The average PageRank (see Figure~\ref{fig:clos_pg_plot}(b)) of the $ACC_{high}$ category is marginally higher ($\sim 0.000119$) than that of the other two categories.
\subsubsection{Class wise collaborations}
The fraction of collaboration edges between the $ACC_{high}$ and $ACC_{mid}$ authors is 38.9\% which is much higher than either the fraction of collaboration edges between $ACC_{mid}$ and $ACC_{low}$ authors (1.0\%) or $ACC_{high}$ and $ACC_{low}$ authors (0.2\%).
On the other hand, the fraction of collaboration edges within the $ACC_{high}$ authors is 26.4\%, while this is 31.3\% for the $ACC_{mid}$ authors and 0.7\% for the $ACC_{low}$ authors.
\subsubsection{$ACC_{low}$ authors collaborating in papers primarily written by $ACC_{high}$ authors}
In this section, we focus on those $ACC_{low}$ authors who get a chance to collaborate with $ACC_{high}$ authors. In particular, we consider those papers which are written by a mix of 20\% $ACC_{low}$ authors and 80\% $ACC_{high}$ authors (i.e., papers predominantly written by authors with a high acceptance ratio).
We compute the various features discussed earlier for these $ACC_{low}$ authors, both when they write papers with $ACC_{high}$ authors and when they write papers without them. The feature values are noted in Table~\ref{tab:collaborate_ACC_high_80}. Collaboration with the $ACC_{high}$ authors seems to heavily benefit the $ACC_{low}$ authors in terms of accrued citations as well as the review sentiments obtained from the referees.
\begin{table}[tbhp]
\centering
\caption{Properties of $ACC_{low}$ authors who collaborate with a high number of $ACC_{high}$ authors.}\label{tab:collaborate_ACC_high_80}
\begin{adjustbox}{width=0.48\textwidth}
\begin{tabular}{|c|c|c|} \hline
{\bf Features}&{\bf Collaborated} & {\bf Not collaborated}\\
& {\bf with $ACC_{high}$} & {\bf with $ACC_{high}$ } \\ \hline
Mean \#papers & 1.1 & 1.9 \\ \hline
Team size ($TS$) &4.3 & 3.1 \\ \hline
Citation ($C_{cnt}$) & \cellcolor{green!20}30 & 12 \\ \hline
Review text sentiment ($SNT_r$) & \cellcolor{green!20}0.23 & -0.13 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\subsubsection{$ACC_{high}$ authors collaborating in papers primarily written by $ACC_{low}$ authors}
In this section, we analyze those cases where papers are written by 80\% $ACC_{low}$ and 20\% $ACC_{high}$ authors. We analyze the profile features of these $ACC_{low}$ authors both when they write papers with $ACC_{high}$ authors and when they write without them. Table~\ref{tab:collaborate_ACC_high_20} enumerates the important features and shows that even having a small fraction of $ACC_{high}$ authors on a paper can increase the citation count and reduce the negative sentiment in the reviews of the $ACC_{low}$ authors.
\begin{table}[tbhp]
\centering
\caption{Analysis of $ACC_{low}$ authors who collaborate with a low number of $ACC_{high}$ authors.}\label{tab:collaborate_ACC_high_20}
\begin{adjustbox}{width=0.48\textwidth}
\begin{tabular}{|c|c|c|} \hline
{\bf Features}&{\bf Collaborated} & {\bf Not collaborated}\\
& {\bf with $ACC_{high}$} & {\bf with $ACC_{high}$ } \\ \hline
Mean \#papers & 1.0 & 1.8 \\ \hline
Team size ($TS$) &4.12 & 2.85 \\ \hline
Citation ($C_{cnt}$) & \cellcolor{green!20}53.7 & 27.7 \\ \hline
Review text sentiment ($SNT_r$) & \cellcolor{green!20}-0.39 & -0.54 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\section{Author category prediction}\label{prediction}
\subsection{Classification model}
In our classification model, we use the $AP_{f}$, $PF_{f}$ and $NE_{f}$ features computed over the first three years of each author's career as the training data. For example, if an author published their first paper in 1996 then we consider the papers published between 1996 and 1998 for training. For testing, we leave a gap of two years after the training window to prevent any data leakage, and predict the author's category at the end of the fifth year. We use two different classifiers -- XGBoost~\cite{Chen:2016} and random forest~\cite{Breiman:2001}. To evaluate the models, we compute class wise precision and recall as well as the F1-score. Precision is the fraction of authors predicted to be in a class who truly belong to it; recall is the fraction of authors of a class that the classifier correctly identifies.
\noindent{\bf Features}: We use the author profile features ($AP_{f}$), peer review based features ($PF_{f}$) as well as network features ($NE_{f}$).
\noindent{\bf Results}: The class wise precision and recall for the XGBoost model are noted in Table~\ref{tab:XGBoost_precision_recall}. The F1-score for the model is 0.84. The confusion matrix is tabulated in Table~\ref{tab:XGBoost}.
The class wise precision and recall for the random forest model are noted in Table~\ref{tab:random_forest_precision_recall}. F1-score for this model is 0.89. We report the confusion matrix in Table~\ref{tab:RandomForest}. The random forest model outperforms the XGBoost model.
\begin{table}[tbhp]
\centering
\caption{Class wise precision and recall of the XGBoost model.}\label{tab:XGBoost_precision_recall}
\begin{adjustbox}{width=0.35\textwidth}
\begin{tabular}{|c|c|c|} \hline
{\bf Categories}&{\bf Precision} & {\bf Recall} \\ \hline
{\bf $ACC_{high}$} & 0.78 & 0.75 \\ \hline
{\bf $ACC_{mid}$} & 0.84 & 0.88 \\ \hline
{\bf $ACC_{low}$} & 0.92 & 0.88 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\begin{table}[tbhp]
\centering
\caption{Confusion matrix of the XGBoost model.}\label{tab:XGBoost}
\begin{adjustbox}{width=0.35\textwidth}
\begin{tabular}{|c|c|c|c|} \hline
{\bf Categories}&{\bf $ACC_{high}$} & {\bf $ACC_{mid}$} & {\bf $ACC_{low}$}\\ \hline
{\bf $ACC_{high}$} & 2622 & 718 & 174 \\ \hline
{\bf $ACC_{mid}$} & 715 & 8669 & 502 \\ \hline
{\bf $ACC_{low}$} & 18 & 987 & 7389\\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\begin{table}[tbhp]
\centering
\caption{Class wise precision and recall of the random forest model.}\label{tab:random_forest_precision_recall}
\begin{adjustbox}{width=0.35\textwidth}
\begin{tabular}{|c|c|c|} \hline
{\bf Categories}&{\bf Precision} & {\bf Recall} \\ \hline
{\bf $ACC_{high}$} & 0.82 & 0.82 \\ \hline
{\bf $ACC_{mid}$} & 0.87 & 0.91 \\ \hline
{\bf $ACC_{low}$} & 0.95 & 0.91 \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\vspace{-0.3cm}
\begin{table}[tbhp]
\centering
\caption{Confusion matrix of the random forest model.}\label{tab:RandomForest}
\begin{adjustbox}{width=0.35\textwidth}
\begin{tabular}{|c|c|c|c|} \hline
{\bf Categories}&{\bf $ACC_{high}$} & {\bf $ACC_{mid}$} & {\bf $ACC_{low}$}\\ \hline
{\bf $ACC_{high}$} & 2889 & 527 & 98 \\ \hline
{\bf $ACC_{mid}$} & 604 & 9006 & 276 \\ \hline
{\bf $ACC_{low}$} & 20 & 727 & 7647\\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\noindent{\bf Feature importance}: Some of the important features for both models are the degree centrality of $CCN$, sentiment of review text ($SNT_{r}$), PageRank of $CCN$, citation count, team size ($TS$), degree centrality of $CRN$, core number, PageRank of $CRN$, reciprocity of $CCN$, experience, $h$-index ($H_{ind}$), closeness centrality of $CCN$, betweenness centrality of $CCN$ and reviewer diversity ($R_{div}$). The individual sets of features that are important for the two models are shown in Figure~\ref{fig:random_forest_feature_imp} (random forest) and Figure~\ref{fig:xgboost_feature_imp} (XGBoost).
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=1\hsize]{Figures/random_forest_feature_importance_wocc.jpg} \\
\end{tabular}
\caption{Important features for the random forest model.}
\label{fig:random_forest_feature_imp}
\end{figure}
\vspace{-0.3cm}
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=1\hsize]{Figures/xgboost_feature_importance_wocc.jpg} \\
\end{tabular}
\caption{Important features for XGBoost model.}
\label{fig:xgboost_feature_imp}
\end{figure}
\vspace{-0.2cm}
\section{The role of the peer review system}
\label{sec:catwise_authors_editor_reviewer_analysis}
So far we have investigated author characteristics that could act as early indicators of the acceptance rate of the authors. However, recall the reviewer and editor diversity measures presented in sections~\ref{ref_div} and~\ref{ed_div} respectively. In fact, these features are also found to have strong predictive power in section~\ref{prediction}. Although we have used these features in profiling the authors, it can be easily reasoned that they are based on the functioning of the peer review system itself. In this section we shall therefore discuss the role of the peer review system (if any) in reinforcing the distinction among the three categories of authors.
To this purpose, we characterize the authors of different categories in terms of the set of editors and reviewers who have ever edited/reviewed their papers. We consider pairs of authors from each category and compute the Jaccard overlap ($J$) of their reviewer and editor sets respectively. Next, for each category, we calculate the average pairwise $J$ values. Interestingly, for the reviewer set we observe that the average value of $J$ for $ACC_{high}$ authors is noticeably higher (0.0202) compared to $ACC_{mid}$ (0.0016) and $ACC_{low}$ (0.0008) authors. For the editor set, the average value of $J$ for $ACC_{high}$ is 0.0302 whereas the average values for $ACC_{mid}$ and $ACC_{low}$ are similar (0.0137 and 0.0105 respectively). This again indicates that there is less diversity in the editors and reviewers who are assigned to the $ACC_{high}$ category. However, one might argue that this could as well be an artefact of the authors in the $ACC_{high}$ category collaborating more heavily among themselves than those in the other two categories, in which case they would naturally tend to have more overlap in their reviewer and editor sets. In order to verify whether this is actually an artefact, we next consider, for each category, the pairs of authors who have never collaborated (i.e., never co-authored a paper together). For such pairs of authors in a category, we again calculate the $J$ of their editor and reviewer sets. In particular, we identify the \% of author pairs having $J$ in the range $[0.6, 1]$ and author pairs having $J$ exactly 1. We note the percentage overlap values in Table~\ref{tab:jaccard_overlap_editor_reviewer}. For both the editor and the reviewer sets we observe that even if the authors have never collaborated they tend to get more similar referees and editors in the $ACC_{high}$ category compared to the other two categories.
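The pairwise overlap computation described above amounts to the following sketch (with toy reviewer sets in place of the actual JHEP data):

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard overlap J of two sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy stand-in for the map: author -> set of reviewers who ever reviewed them.
reviewer_sets = {
    "author1": {"r1", "r2", "r3"},
    "author2": {"r2", "r3", "r4"},
    "author3": {"r5"},
}

pairs = list(combinations(reviewer_sets, 2))
avg_j = sum(jaccard(reviewer_sets[a], reviewer_sets[b]) for a, b in pairs) / len(pairs)
# Here J(author1, author2) = 2/4 = 0.5 and the other two pairs share no
# reviewer, so avg_j = 0.5 / 3.
```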
This result indicates that our initial observation was not an artefact and that the peer-review system indeed assigns a less diverse referee and editor set to the $ACC_{high}$ authors. We present a visualisation of this phenomenon in Figure~\ref{fig:top_cited_hundred_authors}. In Figure~\ref{fig:top_cited_hundred_authors} (Up), the green nodes represent the reviewers and the blue, red and yellow nodes correspond to the authors in the $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$ categories respectively. There is a directed edge from a reviewer to an author if the reviewer had reviewed one or more papers of the author (i.e., a directed bipartite network). The visualisation again indicates that there are `patches' of clusters of unique reviewers around authors of the $ACC_{high}$ category. Similarly, in Figure~\ref{fig:top_cited_hundred_authors} (Down) the sky blue nodes represent the editors and the blue, red and yellow nodes correspond to the authors in the $ACC_{high}$, $ACC_{mid}$ and $ACC_{low}$ categories respectively. There is a directed edge from an editor to an author if the editor had edited one or more papers of the author. Similar patches of clusters also appear here. Overall, we believe that this might lead to potential discrimination and unfairness and should therefore be further investigated by the journal administrators.
\begin{figure}[!tbh]
\centering
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0.8}%
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=8cm, height=7cm,left]{Figures/topauths_reviewers_reingold.jpg}\\
\includegraphics[width=6cm, height=0.45cm]{Figures/top_hundred_auth_reviewer_legend.jpg}\\
\includegraphics[width=8cm, height=8cm,left]{Figures/top_hundred_editors_final.jpg}\\
\includegraphics[width=6cm, height=0.3cm]{Figures/top_auth_editor_legend.jpg}\\
\end{tabular}
\caption{(Up) Relationship between the hundred most cited authors and their reviewers across the three categories. There is a directed edge from a reviewer to an author if the reviewer had reviewed one or more papers of the author. (Down) Relationship between the hundred most cited authors and their editors across the three categories. There is a directed edge from an editor to an author if the editor had edited one or more papers of the author.}
\label{fig:top_cited_hundred_authors}
\end{figure}
\begin{table}[tbhp]
\centering
\caption{Percentage of author pairs having Jaccard overlap of editor and reviewer set in $[0.6, 1]$ and exactly 1. }\label{tab:jaccard_overlap_editor_reviewer}
\begin{adjustbox}{width=0.45\textwidth}
\begin{tabular}{|c|c|c|c|c|} \hline
{\bf Categories}&\multicolumn{2}{c|}{\bf Editor set} & \multicolumn{2}{c|}{\bf Reviewer set} \\ \hline
{} & $J$ ($[0.6,1]$) & $J$ ($= 1$) & $J$ ($[0.6,1]$) & $J$ ($= 1$) \\ \hline
{\bf $ACC_{high}$} & 1.94\% & 1.86\% & 0.46\% &0.31\%\\ \hline
{\bf $ACC_{mid}$} & 1.36\% & 1.31\% & 0.32\% & 0.13\%\\ \hline
{\bf $ACC_{low}$} & 1.03\% & 1.02\% & 0.04\% & 0.03\%\\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
As an additional investigation we choose author pairs across categories and observe how their editor sets overlap. If we choose author pairs with one author from $ACC_{high}$ and the other from $ACC_{mid}$, the fraction of pairs with the editor-set $J$ in $[0.6, 1]$ is 0.13\%. Similarly, for pairs with one author from $ACC_{mid}$ and the other from $ACC_{low}$, this fraction is 0.57\%. However, what is most intriguing is that for pairs with one author from $ACC_{high}$ and the other from $ACC_{low}$, the fraction is 0\%. This indicates that the editors who are assigned to the $ACC_{high}$ category of authors are almost never assigned to the $ACC_{low}$ category of authors. Once again this could be indicative of a potential unfairness in the peer-review system and needs to be investigated further.
\vspace{-0.2cm}
\section{Related Work}
\label{sec:related_work}
The peer review system plays an important role in the acceptance of a research paper in a journal. A high-quality peer review system helps authors improve their work. There has been much debate on the quality~\cite{Jefferson:2002} and bias\footnote{https://www.nature.com/news/let-s-make-peer-review-scientific-1.20194} of peer review systems~\cite{Huisman:2017,Falkenberg:2018,Sikdar:2016}. Jefferson {et al.}~\cite{Jefferson:2002} investigated the quality of editorial peer review and argued that measuring this quality requires extensive co-operation from authors. Sikdar {et al.}~\cite{Sikdar:2017} studied the reviewer-reviewer interaction network to predict the long term citation of a paper and examined whether the peer review system can be improved. In~\cite{Sikdar:2016}, the authors investigated anomalies in a peer review system by computing different features from the available editor and reviewer information. In~\cite{helmer17} the authors investigated the existence of gender bias in a peer review system. Another interesting study by Tomkins {et al.}~\cite{tomkins17} showed that a single blind reviewing system gives a disproportionate advantage to the papers of famous authors and authors from highly reputed institutions. Along similar lines, the authors in~\cite{alina19} proposed how to improve a single blind review process.
Earlier research also explored various author profile based features such as experience, citation count, $h$-index and research topic diversity to quantify the research productivity/success of an author~\cite{Bu:2018}. The productivity of an author~\cite{Abramo:2018} has been defined as the extent of his/her contribution (publications) to the scientific community. Most of the earlier research focused on whether such author profile based features are sufficient to explain one's research productivity. In~\cite{Bremholm:2005}, the authors explored the productivity of authors and their citations considering publications in the Proceedings of the Oklahoma Academy of Science (POAS). They found that authors with high productivity are not highly cited. Bayer {et al.}~\cite{Alan:1966} used citation counts to measure productivity and found that they are only weakly correlated with the quality of a researcher's academic career and uncorrelated with his/her IQ.
Our work is very different from the above studies. We utilise author profile information, peer review information and three different networks to predict the class of an author based on his/her acceptance rate.
\vspace{-0.3cm}
\section{Conclusion}
\label{conc}
We categorize the authors into three classes based on their acceptance rate in the journal. We characterise these classes of authors based on their profile, the peer reviews their papers received and three different networks. The authors with high acceptance rate seem to be markedly different in terms of many of these characteristic features. Finally, using these features we show that it is possible to predict the acceptance rate class early for any author.
In future we would like to investigate in more detail the reasons for the differences in reviewer and editor diversities across the classes. Specifically, this problem can be posed as anomaly/bias detection, where we plan to use state-of-the-art techniques to understand the precise reasons for such uneven diversity across the classes.
\vspace{-0.25cm}
\section{Acknowledgements}
We thank Media Lab SISSA for providing us with the necessary JHEP data for the analysis. RH and AM thank Simons Foundation for financial support through the Simons Associateship Programme.
\vspace{-0.25cm}
\bibliographystyle{ACM-Reference-Format}
\section{acknowledgments}
We gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft through the research unit FOR 1162 (projects Ge1855/10-2 and Fa222/5-2) as well as experimental support by R. H{\"{o}}lldobler.
\section{Introduction}
Let $\Omega\subset \mathbb{R}^n$ be a bounded domain with the boundary $\partial\Omega$ of
$C^2$ class. We set $Q=\Omega \times (0,T)$, where $T>0$. We use notations
$\partial_t= \frac{\partial}{\partial t}$, $\partial_i=\frac{\partial}{\partial x_i}$ ($i=1,2,\ldots, n$).
We also use the multi index $\alpha=(\alpha_1,\alpha_2, \ldots, \alpha_n)$
with $\alpha_j \in \mathbb{N}\cup\{0\}$ ($j=1,2,\ldots,n$),
$\partial_x^\alpha =\partial_1^{\alpha_1}\partial_2^{\alpha_2} \cdots \partial_n^{\alpha_n}$,
$|\alpha|=\alpha_1+\alpha_2+\cdots+\alpha_n$. Let $\nu =\nu (x)$ be the
outwards unit normal vector to $\partial\Omega$ at $x$ and let
$\partial_\nu =\nu \cdot \nabla$. In general,
the $\beta$th order Caputo-type fractional derivative is defined by
\begin{equation*}
\partial_t^{\beta}u(x,t):=
\frac{1}{\Gamma\left(n-\beta\right)}\int_0^t\frac{1}{(t-\tau )^{\beta+1-n}}
\frac{\partial^nu(x,\tau)}{\partial\tau^n}\,d\tau,
\quad (x,t)\in Q,
\end{equation*}
for $n-1<\beta<n$, $n\in\mathbb{N}$ (See e.g., \cite{Caputo1967, Pod}).
Here, $\Gamma$ is the gamma function.
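Numerically, Caputo derivatives of order $\beta\in(0,1)$ are commonly approximated by the classical L1 scheme. The following sketch (illustrative only, not part of the analysis below) checks the scheme against the exact half-order derivative of $u(t)=t$, which is $t^{1/2}/\Gamma(3/2)=2\sqrt{t/\pi}$:

```python
import math

def caputo_l1(u, dt, beta):
    """L1 discretization of the Caputo derivative of order beta in (0,1),
    evaluated at the last grid point t_n, given samples u[k] = u(k*dt)."""
    n = len(u) - 1
    coef = dt ** (-beta) / math.gamma(2.0 - beta)
    acc = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1.0 - beta) - k ** (1.0 - beta)
        acc += b_k * (u[n - k] - u[n - k - 1])
    return coef * acc

n_steps = 10000
t_final = 1.0
dt = t_final / n_steps
u = [k * dt for k in range(n_steps + 1)]    # u(t) = t on the grid
approx = caputo_l1(u, dt, 0.5)
exact = 2.0 * math.sqrt(t_final / math.pi)  # exact half-order derivative at t = 1
```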
We consider the following first- and half-order time-fractional diffusion
equation
\begin{align}
\label{eq:1+1/2eq}
&
(\rho_1 \partial_t+\rho_2\partial_t^{\frac12} - L)u(x,t)=g(x,t),
&
(x,t)\in Q,\\
\label{eq:b_condi}
&
u(x,t)=h_1(x,t),
&
(x,t)\in\partial\Omega\times(0,T),
\\
\label{eq:ini_condi}
&
u(x,0)=h_2(x),& x\in \Omega,
\end{align}
where $\rho_1>0$, $\rho_2 \neq 0$ are constants, and $L$ is a symmetric
uniformly elliptic operator given by
\begin{equation}
\nonumber
L u(x,t) :=\sum_{i,j=1}^n \partial_i (a_{ij}(x) \partial_j u(x,t))
-\sum_{j=1}^n b_j (x)\partial_j u(x,t)
- c(x)u(x,t),\ (x,t) \in Q.
\end{equation}
We assume that $a_{ij}\in C^3(\overline{\Omega})$,
$a_{ij}=a_{ji}$ ($ i,j=1,2,\ldots, n$),
$b_j \in C^2(\overline{\Omega})$ ($j=1,2,\ldots, n$),
$c \in C^2(\overline{\Omega})$, and moreover there exists a constant
$\mu>0$ such that
\begin{equation}
\nonumber
\frac1{\mu} |\xi|^2
\leq
\sum_{i,j=1}^n a_{ij}(x) \xi_i \xi_j
\leq
\mu |\xi|^2,
\quad
\xi=(\xi_1,\ldots, \xi_n) \in \mathbb{R}^n,\
x \in \overline{\Omega}.
\end{equation}
In fluid dynamics, (\ref{eq:1+1/2eq}) appears in the Basset problem
\cite{Basset1910} when the motion of a particle in a nonuniform flow is
considered \cite{Brennen05,Langlois-Farazmand-Haller15,Maxey-Riley83}.
The first- and half-order time-fractional equation (\ref{eq:1+1/2eq}) also appears in porous media.
Starting with the microscopic diffusion in a heterogeneous medium which has
two length scales: the microscopic length scale of a typical porous block and
the relative fracture width, a diffusion equation with the first- and
half-order time derivatives is obtained at the large scale limit by the
homogenization process \cite{Amaziane-etal04,APP}.
The first- and half-order equation (\ref{eq:1+1/2eq}) is one of the parabolic
equations with multiple time-fractional terms, i.e., the time-derivative part
in the equation is given by $\sum_{j=0}^{\ell}p_j\partial_t^{\alpha_j}$, where
$0<\alpha_{\ell}<\cdots<\alpha_1<\alpha_0\le 1$ and coefficients $p_j$
generally depend on $x$. Initial-boundary-value problems for multi-term
time-fractional diffusion equations were considered in \cite{Luchko11}.
In the case that all time-derivatives are non-integer order and the
time-derivative part is given by $\sum_{j=1}^{\ell}p_j\partial_t^{\alpha_j}$,
the well-posedness was investigated \cite{Li-Liu-Yamamoto15} and moreover
the uniqueness in inverse boundary-value problems was proven
\cite{Li-Imanuvilov-Yamamoto16}. An exact solution was obtained in the
special case of a two-term time-fractional diffusion equation
\cite{Bazhlekova-Dimovski14}.
The uniqueness for two kinds of inverse problems of identifying fractional
orders in diffusion equations with multiple time-fractional derivatives was
proved \cite{Li-Yamamoto15}. The uniqueness in determining the spatial
component of the source term from interior observation was established
\cite{Jiang-Li-Liu-Yamamoto17}. The maximum principle and uniqueness were considered in \cite{Liu17} for the determination of the temporal component of the source term from a single-point observation. The unique continuation was also considered for multi-term time-fractional diffusion equations \cite{Lin-Nakamura18}.
In \cite{Kwa}, the H\"{o}lder stability is proven for the inverse
source problem of (\ref{eq:1+1/2eq}) (See also \cite{Li-Huang-Yamamoto}).
In this paper, we further prove the
Lipschitz stability not only for the inverse source problem but also for
the inverse coefficient problem for (\ref{eq:1+1/2eq}).
The methodology of our stability analysis is based on the technique of the
Carleman estimate \cite{Carleman1939}, which was pioneered by
Bukhgeim and Klibanov \cite{Bukhgeim-Klibanov81} when they proved the global
uniqueness in inverse problems. See also \cite{Klibanov84,Klibanov92}, recent
reviews \cite{Klibanov13,Yamamoto09}, and textbooks
\cite{Isakov06,Klibanov-Timonov04}. The Carleman estimate is a weighted $L^2$
inequality for a solution of a partial differential equation. In the case
of parabolic equations with one first-order time derivative, the global
Lipschitz stability was proven by using this method of Carleman estimates
\cite{Imanuvilov-Yamamoto98}. In this paper we make use of the Carleman
estimate for parabolic equations.
The Carleman estimates have been used for
differential equations with a single term time-fractional derivative
\cite{Kawamoto18, Xu-Cheng-Yamamoto11,Yamamoto-Zhang12}.
This paper is organized as follows. In \S\ref{stability}, inverse source
problems are considered. In \S\ref{porous}, inverse coefficient problems
are considered. In \S\ref{carleman}, the Carleman estimate necessary for
our paper is established. Finally, proofs of the main theorems are given
in \S\ref{proof}.
\section{Inverse source problems}
\label{stability}
We consider the inverse problems of determining the time-independent source factor of \eqref{eq:1+1/2eq}
from spatial data and two types of observations.
One is the boundary observation and the other is the interior observation.
Let $t_0\in (0,T)$ be an arbitrarily fixed time.
Let $\gamma$ be an arbitrarily fixed open connected sub-boundary of $\partial\Omega$
and let $\omega$ be an arbitrarily fixed sub-domain of $\Omega$ such that $\omega\Subset\Omega$.
We set $\Sigma=\gamma \times (0,T)$ and $Q_\omega=\omega\times (0,T)$.
Moreover we choose $\delta>0$ such that
\begin{equation*}
0<t_0-\delta <t_0 <t_0+\delta <T,
\end{equation*}
and we set $Q_\delta =\Omega\times (t_0-\delta,t_0+\delta)$, $\Sigma_\delta=\gamma\times (t_0-\delta,t_0+\delta)$,
$Q_{\omega,\delta}=\omega \times (t_0-\delta,t_0+\delta)$.
We assume that
\begin{equation}
\label{eq:R}
\left\{
\begin{aligned}
&R\in
C([0,T);C(\overline{\Omega}))\cap
C^2((0,T);C^2(\overline{\Omega}))\cap C^3((0,T);C(\overline{\Omega})), \\
&\partial_t^{\frac12} R \in C^2((0,T);C(\overline{\Omega}))
\text{ and}\ |R(x,t_0)|>0,\ x\in\overline{\Omega}.
\end{aligned}
\right.
\end{equation}
Furthermore we define
\begin{equation*}
\mathcal{U}=L^2(0,T;H^4(\Omega))\cap H^1(0,T;H^2(\Omega))\cap H^2(0,T;L^2(\Omega)).
\end{equation*}
Let us assume that $g(x,t)$ in (\ref{eq:1+1/2eq}) has the form
\begin{equation}
\label{eq:gfR}
g(x,t)=f(x)R(x,t),
\end{equation}
and set $h_1=0$ in $Q$, $h_2=0$ in $\Omega$.
We consider
\begin{align}
\label{eq:eq01}
&
(\rho_1\partial_t +\rho_2\partial_t^{\frac12} - L)u(x,t)=f(x)R(x,t),&
(x,t)\in Q,
\\
\label{eq:eq03}
&
u(x,t)=0,& (x,t)\in \partial\Omega \times (0,T),
\\
\label{eq:eq02}
&
u(x,0)=0,& x\in \Omega,
\end{align}
and we investigate two kinds of inverse problems depending on the type of observation.
In the inverse source problem via boundary observation, we determine $f(x)$,
$x\in \Omega$ by spatial data $u(x,t_0)$, $x\in \Omega$ and boundary data on
$\Sigma$. In the inverse source problem via interior observation, we
determine $f(x)$, $x\in \Omega$ by spatial data $u(x,t_0)$, $x\in \Omega$ and
interior data in $Q_\omega$. The main theorems, Theorems \ref{thm:ispb} and
\ref{thm:ispi}, are stated as follows.
\begin{thm}
\label{thm:ispb}
Let us assume that $u,\partial_tu,\partial_t^2u\in\mathcal{U}$ and $u$ satisfies \eqref{eq:eq01}--\eqref{eq:eq02}. We suppose that $f\in H^2(\Omega)$ with $f=0$ on $\partial\Omega$ and $\nabla f=0$ on $\gamma$, and that $R$ satisfies \eqref{eq:R}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{eq:seb}
\| f \|_{H^2(\Omega)}
\leq
C
\|u(\cdot, t_0)\|_{H^4(\Omega)}+CB,
\end{equation}
where
\begin{equation*}
B=\|\nabla \partial_t^3 u\|_{L^2(\Sigma_\delta)}
+\|\nabla \partial_t^{\frac52} u\|_{L^2(\Sigma_\delta)}
+\|\nabla \partial_t^2 u\|_{L^2(\Sigma_\delta)}
+\|\nabla \partial_t^{\frac32} u\|_{L^2(\Sigma_\delta)}
+\|\nabla \partial_t u\|_{L^2(\Sigma_\delta)}.
\end{equation*}
\end{thm}
\begin{thm}
\label{thm:ispi}
Let us assume that $u,\partial_t u,\partial_t^2u\in\mathcal{U}$ and $u$ satisfies \eqref{eq:eq01}--\eqref{eq:eq02}.
We suppose that $f\in H^2(\Omega)$ with $f=0$ on $\partial\Omega$ and $f=0$ in $\omega$
and $R$ satisfies \eqref{eq:R}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{eq:sei}
\| f \|_{H^2(\Omega)}
\leq
C
\|u(\cdot, t_0)\|_{H^4(\Omega)}+CI,
\end{equation}
where
\begin{equation*}
I=\| \partial_t^3 u\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^{\frac52} u\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^2 u\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^{\frac32} u\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t u\|_{L^2(Q_{\omega,\delta})}.
\end{equation*}
\end{thm}
\begin{rmk}
There is another approach to obtaining the Lipschitz stability in inverse source problems from final observation data.
In \cite{Sakamoto-Yamamoto11},
Sakamoto and Yamamoto considered a perturbation of
single-term time-fractional diffusion equations with a parameter as the diffusion coefficient
and obtained a stability estimate
by means of analytic perturbation theory
under appropriate assumptions on the parameter.
In our case, however, we may not adopt their methodology directly
since we consider the diffusion coefficient without the perturbation.
\end{rmk}
\section{Inverse coefficient problems}
\label{porous}
\subsection{Determination of the zeroth-order coefficient}
Let us consider the inverse problem of determining the zeroth-order coefficient.
In \eqref{eq:1+1/2eq}, we consider two coefficients $c_k(x)$, $x\in \Omega$ ($k=1,2$), where $c_k\in C^2(\overline{\Omega})$, $c_k(x)\ge0$, $x\in\Omega$ ($k=1,2$). Let $u_k(x,t)$ be the corresponding solutions. We write $L$ as
\begin{equation}
Lu_k(x,t)=Au_k(x,t)-c_k(x)u_k(x,t),
\label{eq:LAck}
\end{equation}
where $A$ is defined as
\[
Au(x,t)=\sum_{i,j=1}^n\partial_i(a_{ij}(x)\partial_ju(x,t))
-\sum_{j=1}^nb_j(x)\partial_ju(x,t), \quad (x,t) \in Q.
\]
By subtraction we obtain
\[
\left\{\begin{aligned}
&
\left(\rho_1\partial_t+\rho_2\partial_t^{\frac12}-A+c_1(x)\right)u(x,t)=f(x)R(x,t),
& (x,t)\in Q,
\\
&u(x,t)=0,
& (x,t)\in \partial\Omega\times(0,T),
\\
&u(x,0)=0,
& x\in \Omega,
\end{aligned}\right.
\]
where
\[
u(x,t)=u_1(x,t)-u_2(x,t),\quad
f(x)=c_1(x)-c_2(x),\quad
R(x,t)=-u_2(x,t)
\]
for $(x,t)\in Q$.
Thus we arrive at the following two theorems as a direct consequence of the
inverse source problems. In both cases the Lipschitz stability is obtained.
Theorem \ref{thm:homo1} is proved using the inverse source problem via
boundary observation stated in Theorem \ref{thm:ispb}. Theorem
\ref{thm:homo2} is proved using the inverse source problem via interior
observation stated in Theorem \ref{thm:ispi}.
\begin{thm}[boundary observation]
\label{thm:homo1}
Let $u_k,\partial_tu_k,\partial_t^2u_k\in\mathcal{U}$ ($k=1,2$) and $u_1,u_2$ satisfy \eqref{eq:1+1/2eq}--\eqref{eq:ini_condi} with \eqref{eq:LAck}. We suppose that $c_1,c_2\in C^2(\overline{\Omega})$ with $c_1=c_2$ on $\partial\Omega$ and $\nabla c_1=\nabla c_2$ on $\gamma$, and that $R=-u_2$ satisfies \eqref{eq:R}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{porous:seb}
\|c_1-c_2\|_{H^2(\Omega)}
\leq
C
\|u_1(\cdot,t_0)-u_2(\cdot,t_0)\|_{H^4(\Omega)}+CB,
\end{equation}
where
\begin{align*}
B&=\|\nabla\partial_t^3(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t^{\frac52}(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t^2(u_1-u_2)\|_{L^2(\Sigma_\delta)}
\\
&
+\|\nabla\partial_t^{\frac32}(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t(u_1-u_2)\|_{L^2(\Sigma_\delta)}.
\end{align*}
\end{thm}
\begin{thm}[interior observation]
\label{thm:homo2}
Let $u_k,\partial_tu_k,\partial_t^2u_k\in\mathcal{U}$ ($k=1,2$) and $u_1,u_2$ satisfy \eqref{eq:1+1/2eq}--\eqref{eq:ini_condi} with \eqref{eq:LAck}. We suppose that $c_1,c_2\in C^2(\overline{\Omega})$
with $c_1=c_2$ on $\partial\Omega \cup \omega$, and that $R=-u_2$ satisfies \eqref{eq:R}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{porous:sei}
\|c_1-c_2\|_{H^2(\Omega)}
\leq
C\|u_1(\cdot,t_0)-u_2(\cdot,t_0)\|_{H^4(\Omega)}+CI,
\end{equation}
where
\begin{align*}
I&=\|\partial_t^3(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^{\frac52}(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^2(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
\\
&+\|\partial_t^{\frac32}(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}.
\end{align*}
\end{thm}
\begin{rmk}
In the case of diffusion in porous media, the condition $|u_2(x,t_0)|=|R(x,t_0)|>0$ for $x\in\overline{\Omega}$ means that the concentration of the target particles is nonzero at the macroscopic scale.
\end{rmk}
\subsection{Determination of the diffusion coefficient}
\label{df}
We consider diffusion coefficients $a_k$ ($k=1,2$) and corresponding solutions $u_k$. Let us express $L$ as
\begin{equation}
Lu(x,t)=\mathcal{A}_ku(x,t),
\label{eq:LAk}
\end{equation}
where $\mathcal{A}_k$ is defined as
\[
\mathcal{A}_k u(x,t)=\dd (a_k (x)\nabla u(x,t))
-\mathbf{b}(x) \cdot\nabla u(x,t)
-c(x)u(x,t), \quad (x,t) \in Q,
\]
for $k=1,2$. We suppose that $a_k \in C^4(\overline{\Omega})$ $(k=1,2)$, $\mathbf{b}=(b_1,b_2,\ldots,b_n) \in \left\{ C^3(\overline{\Omega})\right\}^n$ and $c\in C^3(\overline{\Omega})$. Moreover we assume that there exists a constant $m>0$ such that $a_k(x)\geq m$, $x\in\Omega$ $(k=1,2)$. We investigate the inverse problems of determining the diffusion coefficients $a_k$ ($k=1,2$) by boundary observations and interior observations.
Set
\[
u(x,t)=u_1(x,t)-u_2(x,t),\quad
a(x)=a_1(x)-a_2(x),\quad
r(x,t)=u_2(x,t)
\]
for $(x,t)\in Q$. Then by subtracting the equations for $k=2$ from ones for $k=1$, we obtain
\begin{equation}
\label{eq:dfeq01}
\left\{\begin{aligned}
&\left(\rho_1\partial_t+\rho_2\partial_t^{\frac12}-\mathcal{A}_1 \right)u(x,t)=\dd (a(x) \nabla r(x,t)),
&(x,t)\in Q,
\\
&u(x,t)=0,
&(x,t)\in \partial\Omega\times(0,T),
\\
&u(x,0)=0,
&x\in\Omega.
\end{aligned}\right.
\end{equation}
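The source term on the right-hand side of \eqref{eq:dfeq01} arises from the elementary splitting

```latex
\mathcal{A}_1u_1-\mathcal{A}_2u_2
=\mathcal{A}_1(u_1-u_2)+(\mathcal{A}_1-\mathcal{A}_2)u_2
=\mathcal{A}_1u+\dd\bigl(a\nabla r\bigr),
```

since $\mathcal{A}_1$ and $\mathcal{A}_2$ differ only in the diffusion coefficients $a_1$ and $a_2$.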
We assume that
\begin{equation}
\label{eq:rr}
\left\{
\begin{aligned}
&r\in
C([0,T);C^3 (\overline{\Omega}))\cap
C((0,T);C^5 (\overline{\Omega}))\\
&\qquad
\cap\
C^2((0,T);C^4(\overline{\Omega}))\cap
C^3((0,T);C^2(\overline{\Omega})), \\
&\partial_t^{\frac12} r \in
C((0,T);C^3(\overline{\Omega})) \cap
C^2((0,T);C^2(\overline{\Omega})) .
\end{aligned}
\right.
\end{equation}
Let us introduce weight functions for the Carleman estimates established in \S\ref{carleman}. According to the two types of observations considered in this paper,
we prepare two kinds of distance functions $d_1$ and $d_2$. We choose
$d_1\in C^2(\overline{\Omega})$ such that
\begin{align*}
&d_1(x)>0, \ x\in\Omega, \quad
|\nabla d_1(x)|>\sigma_1, \ x\in\overline{\Omega},\\
&\sum_{i,j=1}^n a_{ij}(x)\partial_i d_1\nu_j \leq 0,\ x\in \partial\Omega\setminus \gamma,
\end{align*}
where $\sigma_1>0$ is a constant. Let $\omega_0$ be an arbitrarily fixed
sub-domain of $\Omega$ such that $\omega_0\Subset\omega$. We take
$d_2 \in C^2(\overline{\Omega})$ such that
\begin{equation*}
d_2(x)>0, \ x\in\Omega, \quad
|\nabla d_2(x)|>\sigma_2, \ x\in\overline{\Omega\setminus \omega_0},\quad
d_2(x)=0, \ x\in \partial\Omega,
\end{equation*}
where $\sigma_2>0$ is a constant. The existence of the distance functions
$d_1$ and $d_2$ is proved in \cite{FIm, Im, Imanuvilov-Yamamoto98}. Then
we introduce weight functions $\varphi_k,\psi_k$ ($k=1,2$) as
\begin{equation*}
\varphi_k(x,t)=\frac{e^{\lambda d_k(x)}}{\ell(t)}, \quad
\psi_k(x,t)=\frac{e^{\lambda d_k(x)}-e^{2\lambda \|d_k\|_{C(\overline{\Omega})}}}{\ell(t)}, \quad (x,t) \in Q,
\end{equation*}
where $\ell(t)=t(T-t)$. Moreover we assume that there exists a constant $m_1>0$ such that
\begin{equation}
\label{eq:r1}
|\nabla r (x,t_0) \cdot \nabla d_1(x)| \geq m_1, \quad x \in \overline{\Omega},
\end{equation}
or that there exists a constant $m_2>0$ such that
\begin{equation}
\label{eq:r2}
|\nabla r (x,t_0) \cdot \nabla d_2(x)| \geq m_2, \quad x \in \overline{\Omega\setminus\omega}.
\end{equation}
Let $D^\prime$ be an arbitrary sub-domain such that $\omega \Subset D^\prime \Subset \Omega$.
Set $D=\Omega \setminus D^\prime$.
Henceforth we suppose that $a\equiv 0$ in $D$.
Now we are ready to state our main results.
\begin{thm}[boundary observation]
\label{thm:df1}
Let $u_k,\partial_tu_k,\partial_t^2u_k,\nabla u_k\in\mathcal{U}$ ($k=1,2$) and $u_1,u_2$ satisfy \eqref{eq:1+1/2eq}--\eqref{eq:ini_condi} with \eqref{eq:LAk}. We suppose that $a_1,a_2\in C^4(\overline{\Omega})$ with $a_1=a_2$ in $D$, and that
$r=u_2$ satisfies \eqref{eq:rr} and \eqref{eq:r1}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{df:seb}
\|a_1-a_2\|_{H^3(\Omega)}
\leq
C
\|u_1(\cdot,t_0)-u_2(\cdot,t_0)\|_{H^5(\Omega)}+CB,
\end{equation}
where
\begin{align*}
B&=\|\nabla\partial_t^3(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t^{\frac52}(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t^2(u_1-u_2)\|_{L^2(\Sigma_\delta)}
\\
&
+\|\nabla\partial_t^{\frac32}(u_1-u_2)\|_{L^2(\Sigma_\delta)}
+\|\nabla\partial_t(u_1-u_2)\|_{L^2(\Sigma_\delta)}.
\end{align*}
\end{thm}
\begin{thm}[interior observation]
\label{thm:df2}
Let $u_k,\partial_tu_k,\partial_t^2u_k,\nabla u_k\in\mathcal{U}$ ($k=1,2$) and let $u_1,u_2$ satisfy \eqref{eq:1+1/2eq}--\eqref{eq:ini_condi} with \eqref{eq:LAk}. We suppose that $a_1,a_2\in C^4(\overline{\Omega})$ with $a_1=a_2$ in $D\cup \omega$ and that
$r=u_2$ satisfies \eqref{eq:rr} and \eqref{eq:r2}.
Then there exists a constant $C>0$ such that
\begin{equation}
\label{df:sei}
\|a_1-a_2\|_{H^3(\Omega)}
\leq
C\|u_1(\cdot,t_0)-u_2(\cdot,t_0)\|_{H^5(\Omega)}+CI,
\end{equation}
where
\begin{align*}
I&=\|\partial_t^3(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^{\frac52}(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t^2(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
\\
&+\|\partial_t^{\frac32}(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}
+\|\partial_t(u_1-u_2)\|_{L^2(Q_{\omega,\delta})}.
\end{align*}
\end{thm}
\begin{rmk}
In the one-dimensional case in space, we may relax some assumptions on $u_k$ ($k=1,2$);
this depends on the assumptions of the Carleman estimates for third-order partial differential equations
(Lemmas \ref{lem:ce3rd1} and \ref{lem:ce3rd2}).
See also \cite{Ren-Xu14}.
\end{rmk}
\section{Carleman estimate}
\label{carleman}
In this section, we establish the Carleman estimates for \eqref{eq:1+1/2eq}.
We transform \eqref{eq:1+1/2eq} into an integer-order partial differential
equation. The calculation is similar to \cite{Xu-Cheng-Yamamoto11}. Let us
begin with the following lemma.
\begin{lem}[Lemma 3.1 in \cite{Kwa}]
\label{lem:halftoone}
If $u \in C([0,T];H^4(\Omega))\cap C^1((0,T);H^2(\Omega))\cap C^2((0,T);L^2(\Omega))$ satisfies \eqref{eq:1+1/2eq} through \eqref{eq:ini_condi},
then $u$ satisfies
\begin{equation}
\label{eq:lem01}
\rho_2^2 \partial_t u(x,t)- (\rho_1\partial_t - L)^2 u(x,t)=G(x,t),\quad (x,t)\in Q
\end{equation}
where
\begin{equation}
\label{eq:lem02}
G(x,t)=\left[ \rho_2\partial_t^\frac12- (\rho_1\partial_t - L) \right]g(x,t) +\frac{\rho_2 g(x,0)}{\sqrt{\pi t}},\quad (x,t)\in Q.
\end{equation}
\end{lem}
Although $\partial_t^{\frac12}\partial_t^{\frac12} \neq \partial_t$ in general,
we may obtain the above lemma by applying $\rho_2 \partial_t^{\frac12}-(\rho_1 \partial_t -L)$
to both sides of \eqref{eq:1+1/2eq} and using $u(x,0)=0$, $x\in \Omega$.
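Let us sketch the formal computation behind Lemma \ref{lem:halftoone} (the rigorous proof is given in \cite{Kwa}). Writing \eqref{eq:1+1/2eq} as $\rho_2\partial_t^{\frac12}u+(\rho_1\partial_t-L)u=g$ and applying $\rho_2\partial_t^{\frac12}-(\rho_1\partial_t-L)$ to both sides, we obtain
\begin{equation*}
\rho_2^2\partial_t^{\frac12}\partial_t^{\frac12}u
+\rho_1\rho_2\left(\partial_t^{\frac12}\partial_t-\partial_t\partial_t^{\frac12}\right)u
-(\rho_1\partial_t-L)^2u
=\left[\rho_2\partial_t^{\frac12}-(\rho_1\partial_t-L)\right]g,
\end{equation*}
since $L$ contains no time derivative and hence commutes with $\partial_t^{\frac12}$. By $u(x,0)=0$ we have $\partial_t^{\frac12}\partial_t^{\frac12}u=\partial_t u$, and
\begin{equation*}
\partial_t^{\frac12}\partial_t u-\partial_t\partial_t^{\frac12}u
=-\frac{\partial_t u(x,0)}{\sqrt{\pi t}},
\end{equation*}
while letting $t\to0$ in \eqref{eq:1+1/2eq} formally yields $\rho_1\partial_t u(x,0)=g(x,0)$. Moving the resulting term $-\rho_2 g(x,0)/\sqrt{\pi t}$ to the right-hand side gives \eqref{eq:lem01} with $G$ as in \eqref{eq:lem02}.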
Now we are ready to state our Carleman estimates.
\begin{thm}[Carleman estimate for \eqref{eq:1+1/2eq} with boundary data]
\label{thm:ce0b}
Let $p\geq 0$.
Suppose that $g(x,t)=0$, $(x,t)\in \partial \Omega\times (0,T)$ and $\nabla g(x,t)=0$, $(x,t)\in \Sigma$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can choose
$s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align}
\label{eq:pceb}
&
\int_Q
\Biggl[
(s\varphi_1)^{p-1}
\left(
|\partial_t^2 u|^2
+
\sum_{i,j=1}^n|\partial_t\partial_i \partial_j u|^2
\right)
+
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2
\\
&\qquad
+
(s\varphi_1)^{p+2}
|\nabla (\rho_1 \partial_t -L)u|^2
+
(s\varphi_1)^{p+3}
\left( |\partial_t u|^2
+\sum_{i,j=1}^n|\partial_i \partial_j u|^2
\right)
\nonumber \\
&\qquad
+
(s\varphi_1)^{p+5}
|\nabla u|^2
+
(s\varphi_1)^{p+7}
|u|^2
\Biggr]
e^{2s\psi_1}\,dxdt
\nonumber\\
&
\leq
C
\int_Q (s\varphi_1)^{p+1} \left|\left[\rho_2^2\partial_t - (\rho_1\partial_t - L)^2\right]u\right|^2 e^{2s\psi_1}\,dxdt
\nonumber \\
&\quad
+C
\int_{\Sigma}
\left[
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2
+
(s\varphi_1)^{p+2}
|\nabla \partial_t^{\frac12} u|^2
+
(s\varphi_1)^{p+5}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt ,
\nonumber
\end{align}
for all $s> s_0$ and all $u\in\mathcal{U}$ satisfying \eqref{eq:1+1/2eq}
with $u(x,t)=0$, $(x,t)\in \partial\Omega \times (0,T)$ and $u(x,0)=0$, $x\in \Omega$.
\end{thm}
\begin{thm}[Carleman estimate for \eqref{eq:1+1/2eq} with interior data]
\label{thm:ce0i}
Let $p\geq 0$.
Suppose that $g(x,t)=0$, $(x,t)\in \partial \Omega\times (0,T)$ and $g(x,t)=0$, $(x,t)\in Q_\omega$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can choose
$s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align}
\label{eq:pcei}
&
\int_Q
\Biggl[
(s\varphi_2)^{p-1}
\left(
|\partial_t^2 u|^2
+
\sum_{i,j=1}^n|\partial_t\partial_i \partial_j u|^2
\right)
+
(s\varphi_2)^{p+1}
|\nabla \partial_t u|^2
\\
&\qquad
+
(s\varphi_2)^{p+2}
|\nabla (\rho_1 \partial_t -L)u|^2
+
(s\varphi_2)^{p+3}
\left( |\partial_t u|^2
+\sum_{i,j=1}^n|\partial_i \partial_j u|^2
\right)
\nonumber \\
&\qquad
+
(s\varphi_2)^{p+5}
|\nabla u|^2
+
(s\varphi_2)^{p+7}
|u|^2
\Biggr]
e^{2s\psi_2}\,dxdt
\nonumber\\
&
\leq
C
\int_Q (s\varphi_2)^{p+1} \left|\left[\rho_2^2\partial_t - (\rho_1\partial_t - L)^2\right]u\right|^2 e^{2s\psi_2}\,dxdt
\nonumber \\
&\quad
+C
\int_{Q_\omega}
\left[
(s\varphi_2)^{p+3}
| \partial_t u|^2
+
(s\varphi_2)^{p+4}
|\partial_t^{\frac12} u|^2
+
(s\varphi_2)^{p+7}
|u|^2
\right]
e^{2s\psi_2}
\,dxdt,
\nonumber
\end{align}
for all $s> s_0$ and all $u\in\mathcal{U}$ satisfying \eqref{eq:1+1/2eq}
with $u(x,t)=0$, $(x,t)\in \partial\Omega \times (0,T)$ and $u(x,0)=0$, $x\in \Omega$.
\end{thm}
To prove Theorems \ref{thm:ce0b} and \ref{thm:ce0i}, we start with
the global Carleman estimates for parabolic equations (see e.g.,
\cite{Im, Yamamoto09}) stated in Lemmas \ref{lem:celempb} and
\ref{lem:celempi} below.
\begin{lem}
\label{lem:celempb}
Let $p\geq 0$.
There exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can choose
$s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_Q
\left[
(s\varphi_1)^{p-1}
\left(
|\partial_t v|^2
+
\sum_{i,j=1}^n|\partial_i \partial_j v|^2
\right)
+
(s\varphi_1)^{p+1}
|\nabla v|^2
+
(s\varphi_1)^{p+3}
|v|^2
\right]
\!
e^{2s\psi_1}\,dxdt \\
&
\leq
C\int_Q
(s\varphi_1)^{p}
|(\rho_1\partial_t -L) v|^2 e^{2s\psi_1}\,dxdt
+
C
\int_{\Sigma}
(s\varphi_1)^{p+1}|\nabla v|^2
e^{2s\psi_1}
\,dSdt,
\end{align*}
for all $s> s_0$ and all $v \in L^2(0,T;H^2(\Omega))\cap H^1(0,T;L^2(\Omega))$ satisfying $v (x,t)=0$, $(x,t)\in \partial\Omega\times(0,T)$.
\end{lem}
\begin{lem}
\label{lem:celempi}
Let $p\geq 0$. There exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can
choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_Q
\left[
(s\varphi_2)^{p-1}
\left(
|\partial_t v|^2
+
\sum_{i,j=1}^n|\partial_i \partial_j v|^2
\right)
+
(s\varphi_2)^{p+1}
|\nabla v|^2
+
(s\varphi_2)^{p+3}
|v|^2
\right]
\!
e^{2s\psi_2}\,dxdt \\
&
\leq
C\int_Q
(s\varphi_2)^{p}
|(\rho_1\partial_t -L) v|^2 e^{2s\psi_2}\,dxdt
+
C
\int_{Q_\omega}
(s\varphi_2)^{p+3}|v|^2
e^{2s\psi_2}
\,dxdt,
\end{align*}
for all $s> s_0$ and all $v \in L^2(0,T;H^2(\Omega))\cap H^1(0,T;L^2(\Omega))$ satisfying $v (x,t)=0$, $(x,t)\in \partial\Omega\times(0,T)$.
\end{lem}
\begin{proof}[Proof of Theorem \ref{thm:ce0b}]
Throughout the proof, we assume that $s>1$ is large enough to satisfy $s\varphi_1>1$ in $Q$.
Equation \eqref{eq:lem01} yields
\begin{equation}
\label{eq:ce01}
\rho_1\partial_t w(x,t)-Lw(x,t)=\rho_2^2\partial_tu(x,t)-G(x,t),\quad(x,t)\in Q,
\end{equation}
where
\begin{equation}
w(x,t)=\rho_1\partial_t u(x,t)-Lu(x,t),\quad(x,t)\in Q.
\label{eq:ce02}
\end{equation}
Since $u(x,t)=0$, $(x,t)\in \partial\Omega\times (0,T)$ and
$g(x,t)=0$, $(x,t) \in \partial\Omega\times (0,T)$, we have by \eqref{eq:1+1/2eq},
\begin{equation*}
w(x,t)
=\rho_1\partial_t u(x,t) - L u(x,t)
=g(x,t)-\rho_2\partial_t^{\frac12} u(x,t)
=0, \quad
(x,t)\in\partial\Omega\times (0,T).
\end{equation*}
Applying Lemma \ref{lem:celempb} to \eqref{eq:ce01}, we obtain
\begin{align}
\label{eq:ce03}
&\int_Q
\left[
(s\varphi_1)^{p_1-1} |\partial_t w|^2
+
(s\varphi_1)^{p_1+1}
|\nabla w|^2
+
(s\varphi_1)^{p_1+3}
|w|^2
\right]e^{2s\psi_1}\,dxdt
\\
&\leq
C\int_Q (s\varphi_1)^{p_1}|\partial_t u|^2 e^{2s\psi_1}\,dxdt
+C\int_Q (s\varphi_1)^{p_1}|G|^2 e^{2s\psi_1}\,dxdt
\nonumber \\
&\quad
+C
\int_{\Sigma}
(s\varphi_1)^{p_1+1}|\nabla w|^2 e^{2s\psi_1}
\,dSdt,
\nonumber
\end{align}
for $p_1\ge0$. Next by applying Lemma \ref{lem:celempb} to \eqref{eq:ce02}, we obtain
\begin{align}
\label{eq:ce04}
&\int_Q
\!
\Biggl[
(s\varphi_1)^{p_2-1}
\left(
|\partial_t u|^2
+\sum_{i,j=1}^n |\partial_i \partial_j u|^2
\right) \\
&\qquad
+(s\varphi_1)^{p_2+1}
|\nabla u|^2
+(s\varphi_1)^{p_2+3}
|u|^2
\Biggr]
\!
e^{2s\psi_1}\,dxdt
\nonumber \\
&\leq
C\int_Q (s\varphi_1)^{p_2}|w|^2 e^{2s\psi_1}\,dxdt
+C
\int_{\Sigma}
(s\varphi_1)^{p_2+1} |\nabla u|^2
e^{2s\psi_1}
\,dSdt,
\nonumber
\end{align}
for $p_2\geq 0$.
Putting $p_2=p_1+1$ and substituting the estimate of
$|\partial_t u|^2$ in \eqref{eq:ce04} into the right-hand side of \eqref{eq:ce03},
we obtain
\begin{align}
&\int_Q
\left[
(s\varphi_1)^{p_1-1}
|\partial_t w|^2
+
(s\varphi_1)^{p_1+1}
|\nabla w|^2
+
(s\varphi_1)^{p_1+3}
|w|^2
\right]e^{2s\psi_1}\,dxdt
\nonumber \\
&\leq
C\int_Q (s\varphi_1)^{p_1+1}|w|^2 e^{2s\psi_1}\,dxdt
+C\int_Q (s\varphi_1)^{p_1}
|G|^2 e^{2s\psi_1}\,dxdt
+C B_{1,p_1},
\nonumber
\end{align}
where
\begin{equation*}
B_{1,p_1}=
\int_{\Sigma}
\left[
(s\varphi_1)^{p_1+1}
|\nabla w|^2
+
(s\varphi_1)^{p_1+2}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt
.
\end{equation*}
Taking sufficiently large $s>0$, we can absorb the first term on the right-hand side of the above inequality into the left-hand side and we have
\begin{align}
\label{eq:ce05}
&\int_Q
\left[
(s\varphi_1)^{p_1-1}
|\partial_t w|^2
+
(s\varphi_1)^{p_1+1}
|\nabla w|^2
+
(s\varphi_1)^{p_1+3}
|w|^2
\right]e^{2s\psi_1}\,dxdt \\
&\leq
C\int_Q (s\varphi_1)^{p_1}
|G|^2 e^{2s\psi_1}\,dxdt
+C B_{1,p_1}.
\nonumber
\end{align}
By \eqref{eq:ce04} with $p_2=p_1+3$ and \eqref{eq:ce05}, we obtain
\begin{align}
\label{eq:ce06}
&\int_Q
\Biggl[
(s\varphi_1)^{p_1+1}
|\nabla (\rho_1 \partial_t -L)u|^2
+
(s\varphi_1)^{p_1+2}
\left(
|\partial_t u|^2
+\sum_{i,j=1}^n |\partial_i \partial_j u|^2
\right) \\
&\qquad
+(s\varphi_1)^{p_1+4}
|\nabla u|^2
+(s\varphi_1)^{p_1+6}
|u|^2
\Biggr]
e^{2s\psi_1}\,dxdt \nonumber \\
&
\leq
C\int_Q (s\varphi_1)^{p_1} |G|^2 e^{2s\psi_1}\,dxdt
+CB_{2,p_1},
\nonumber
\end{align}
where
\begin{equation*}
B_{2,p_1}=
\int_{\Sigma}
\left[
(s\varphi_1)^{p_1+1}
|\nabla w|^2
+
(s\varphi_1)^{p_1+4}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt.
\end{equation*}
Let us choose $p_1=p+1$ in \eqref{eq:ce05}. Then from \eqref{eq:ce02} and \eqref{eq:ce05}, we have
\begin{equation*}
\int_Q
(s\varphi_1)^{p}
|\partial_t (\rho_1\partial_t u -L u)|^2
e^{2s\psi_1}\,dxdt
\leq
C\int_Q (s\varphi_1)^{p+1} |G|^2 e^{2s\psi_1}\,dxdt
+CB_{1,p+1}.
\end{equation*}
Setting $u_0=\partial_t u$, we obtain
\begin{equation}
\label{eq:ce10}
\int_Q
(s\varphi_1)^{p}|\rho_1\partial_t u_0 -L u_0|^2e^{2s\psi_1}\,dxdt
\leq
C\int_Q (s\varphi_1)^{p+1} |G|^2 e^{2s\psi_1}\,dxdt
+CB_{1,p+1}.
\end{equation}
Using Lemma \ref{lem:celempb} with $v=u_0$ and applying \eqref{eq:ce10},
we obtain
\begin{align*}
&
\int_Q
\Biggl[
(s\varphi_1)^{p-1}
\left(
|\partial_t u_0|^2
+
\sum_{i,j=1}^n |\partial_i\partial_j u_0|^2
\right) \\
&\qquad
+
(s\varphi_1)^{p+1}
|\nabla u_0|^2
+
(s\varphi_1)^{p+3}
|u_0|^2
\Biggr]e^{2s\psi_1}\,dxdt\\
&\leq
C\int_Q (s\varphi_1)^{p+1} |G|^2 e^{2s\psi_1}\,dxdt
+C
\int_{\Sigma}
(s\varphi_1)^{p+1}
|\nabla u_0|^2
e^{2s\psi_1}
\,dSdt
+CB_{1,p+1}.
\end{align*}
Recalling $u_0=\partial_tu$, we have
\begin{align}
\nonumber
&\int_Q
\Biggl[
(s\varphi_1)^{p-1}
\left(
|\partial_t^2 u|^2
+
\sum_{i,j=1}^n |\partial_t\partial_i\partial_j u|^2
\right) \\
&\qquad
+
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2
+
(s\varphi_1)^{p+3}
|\partial_t u|^2
\Biggr]e^{2s\psi_1}\,dxdt
\nonumber \\
&\leq
C\int_Q
(s\varphi_1)^{p+1} |G|^2 e^{2s\psi_1}\,dxdt
+C B_{3,p},
\nonumber
\end{align}
where
\begin{equation*}
B_{3,p}=
\int_{\Sigma}
\left[
(s\varphi_1)^{p+1}
|\nabla\partial_t u|^2
+
(s\varphi_1)^{p+2}
|\nabla w|^2
+
(s\varphi_1)^{p+3}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt.
\end{equation*}
Hence using \eqref{eq:ce06}, we obtain
\begin{align*}
&
\int_Q
\Biggl[
(s\varphi_1)^{p-1}
\left(
|\partial_t^2 u|^2
+
\sum_{i,j=1}^n|\partial_t\partial_i \partial_j u|^2
\right)
+
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2
\\
&\qquad
+
(s\varphi_1)^{p+2}
|\nabla (\rho_1 \partial_t -L)u|^2
+
(s\varphi_1)^{p+3}
\left( |\partial_t u|^2
+\sum_{i,j=1}^n|\partial_i \partial_j u|^2
\right)
\\
&\qquad+
(s\varphi_1)^{p+5}
|\nabla u|^2
+
(s\varphi_1)^{p+7}
|u|^2
\Biggr]
e^{2s\psi_1}\,dxdt \\
&
\leq
C
\int_Q (s\varphi_1)^{p+1} \left|G\right|^2 e^{2s\psi_1}\,dxdt
+C B_{4,p},
\end{align*}
where
\begin{equation*}
B_{4,p}=
\int_{\Sigma}
\left[
(s\varphi_1)^{p+1}
|\nabla\partial_t u|^2
+
(s\varphi_1)^{p+2}
|\nabla w|^2
+
(s\varphi_1)^{p+5}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt.
\end{equation*}
Finally, we consider the boundary term $B_{4,p}$. Since $\nabla g=0$ on $\Sigma$ is assumed, we have $\nabla w=\nabla g-\rho_2\nabla \partial_t^{\frac12} u=-\rho_2\nabla\partial_t^{\frac12}u$ on $\Sigma$. Hence we have
\begin{align}
\nonumber
&
\int_Q
\Biggl[
(s\varphi_1)^{p-1}
\left(
|\partial_t^2 u|^2
+
\sum_{i,j=1}^n|\partial_t\partial_i \partial_j u|^2
\right)
+
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2 \\
&\qquad
+
(s\varphi_1)^{p+2}
|\nabla (\rho_1 \partial_t -L)u|^2
+
(s\varphi_1)^{p+3}
\left( |\partial_t u|^2
+\sum_{i,j=1}^n|\partial_i \partial_j u|^2
\right)
\nonumber \\
&\qquad
+
(s\varphi_1)^{p+5}
|\nabla u|^2
+
(s\varphi_1)^{p+7}
|u|^2
\Biggr]
e^{2s\psi_1}\,dxdt
\nonumber\\
&
\leq
C
\int_Q (s\varphi_1)^{p+1} \left|G\right|^2 e^{2s\psi_1}\,dxdt
\nonumber \\
&\quad
+C
\int_{\Sigma}
\left[
(s\varphi_1)^{p+1}
|\nabla \partial_t u|^2
+
(s\varphi_1)^{p+2}
|\nabla \partial_t^{\frac12} u|^2
+
(s\varphi_1)^{p+5}
|\nabla u|^2
\right]
e^{2s\psi_1}
\,dSdt .
\nonumber
\end{align}
Thus we obtain \eqref{eq:pceb}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ce0i}]
By using Lemma \ref{lem:celempi} instead of Lemma \ref{lem:celempb}, we can
prove Theorem \ref{thm:ce0i} in the same way as Theorem \ref{thm:ce0b}.
\end{proof}
Furthermore we need Carleman estimates for elliptic equations in the proof of
the stability estimates in inverse source problems which we will develop in \S\ref{proof}.
Let us assume that $\widetilde{a}_{ij}\in C^1(\overline{\Omega})$, $\widetilde{a}_{ij}= \widetilde{a}_{ji}$ ($i,j=1,\ldots,n$),
$\widetilde{b}_j \in C(\overline{\Omega})$ ($j=1,\ldots,n$),
$\widetilde{c} \in C(\overline{\Omega})$, and
that there exists a constant $\widetilde{\mu}>0$ such that
\begin{equation*}
\frac1{\widetilde{\mu}} |\xi|^2
\leq
\sum_{i,j=1}^n \widetilde{a}_{ij}(x) \xi_i \xi_j
\leq
\widetilde{\mu} |\xi|^2,
\quad
\xi=(\xi_1,\ldots, \xi_n) \in \mathbb{R}^n,\
x \in \overline{\Omega}.
\end{equation*}
We consider the following symmetric uniformly elliptic operator.
\begin{equation*}
\widetilde{L} \widetilde{v}(x) :=\sum_{i,j=1}^n \partial_i (\widetilde{a}_{ij}(x) \partial_j \widetilde{v}(x))
-\sum_{j=1}^n \widetilde{b}_j (x)\partial_j \widetilde{v}(x)
- \widetilde{c}(x)\widetilde{v}(x),\ x\in \Omega.
\end{equation*}
Set $\widetilde{\varphi}_k(x):=\varphi_k(x,t_0)$, $x\in \Omega$ and
$\widetilde{\psi}_k(x):=\psi_k(x,t_0)$, $x\in \Omega$ for $k=1,2$.
Then we have the following lemmas.
\begin{lem}
\label{lem:celemeb}
Let $p \geq 0$. There exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can
choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_\Omega
\left[
(s\widetilde{\varphi}_1)^{p-1}
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{p+1}
|\nabla \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{p+3}
|\widetilde{v}|^2
\right]
e^{2s \widetilde{\psi}_1}\,dx
\\
&
\leq
C\int_\Omega (s\widetilde{\varphi}_1)^{p} |\widetilde{L} \widetilde{v}|^2 e^{2s \widetilde{\psi}_1}\,dx
+
C
\int_{\gamma}
(s\widetilde{\varphi}_1)^{p+1}|\nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dS,
\end{align*}
for all $s> s_0$ and all $\widetilde{v} \in H^2(\Omega)$ satisfying $\widetilde{v} (x)=0$, $x\in \partial\Omega$.
\end{lem}
\begin{lem}
\label{lem:celemei}
Let $p \geq 0$. There exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$, we can
choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_\Omega
\left[
(s\widetilde{\varphi}_2)^{p-1}
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}|^2
+
(s\widetilde{\varphi}_2)^{p+1}
|\nabla \widetilde{v}|^2
+
(s\widetilde{\varphi}_2)^{p+3}
|\widetilde{v}|^2
\right]
e^{2s \widetilde{\psi}_2}\,dx
\\
&
\leq
C\int_\Omega (s\widetilde{\varphi}_2)^{p} |\widetilde{L} \widetilde{v}|^2 e^{2s \widetilde{\psi}_2}\,dx
+
C
\int_{\omega}
(s\widetilde{\varphi}_2)^{p+3}|\widetilde{v}|^2 e^{2s\widetilde{\psi}_2}
\,dx,
\end{align*}
for all $s> s_0$ and all $\widetilde{v} \in H^2(\Omega)$ satisfying $\widetilde{v} (x)=0$, $x\in \partial\Omega$.
\end{lem}
These lemmas can be shown in the same manner as in the parabolic case by means of integration by parts.
Hence we omit their proofs here.
We conclude this section by introducing Carleman estimates for third-order partial differential equations, which we use in the proofs of the stability estimates for the inverse problem of determining the diffusion coefficients.
Let $\mathbf{p}=(p_1, \ldots, p_n)\in \{C^1(\overline{\Omega})\}^n$.
\begin{lem}
\label{lem:ce3rd1}
We assume that there exists $m_1 >0$ such that
$|\mathbf{p} (x)\cdot \nabla d_1(x)|\geq m_1$, $x\in \overline{\Omega}$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$,
we can choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_\Omega
\Biggl[
s\widetilde{\varphi}_1
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{2}
|\nabla \triangle \widetilde{v}|^2
\\
&\qquad
+
(s\widetilde{\varphi}_1)^{3}
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}|^2
+(s\widetilde{\varphi}_1)^{5}
\left(|\nabla \widetilde{v}|^2+ |\widetilde{v}|^2 \right)
\Biggr]
e^{2s \widetilde{\psi}_1}\,dx
\\
&
\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \triangle \widetilde{v})|^2 + |\mathbf{p} \cdot \nabla \triangle \widetilde{v}|^2\right) e^{2s \widetilde{\psi}_1}\,dx,
\end{align*}
for all $s> s_0$ and all $\widetilde{v} \in H^4(\Omega)$ satisfying
$|\widetilde{v}(x)|=|\nabla \widetilde{v}(x)|=|\triangle \widetilde{v}(x)|=|\nabla \triangle \widetilde{v}(x)|=0$, $x\in \partial\Omega$
and
$|\nabla \partial_k \widetilde{v}(x)|=0$, $x\in \gamma$ ($k=1,2,\ldots, n$).
\end{lem}
\begin{lem}
\label{lem:ce3rd2}
We assume that there exists $m_2 >0$ such that
$|\mathbf{p} (x)\cdot \nabla d_2(x)|\geq m_2$, $x\in \overline{\Omega\setminus \omega}$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$,
we can choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{align*}
&
\int_\Omega
\Biggl[
s\widetilde{\varphi}_2
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k \widetilde{v}|^2
+
(s\widetilde{\varphi}_2)^{2}
|\nabla \triangle \widetilde{v}|^2
\\
&\qquad
+
(s\widetilde{\varphi}_2)^{3}
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}|^2
+(s\widetilde{\varphi}_2)^{5}
\left(|\nabla \widetilde{v}|^2+ |\widetilde{v}|^2 \right)
\Biggr]
e^{2s \widetilde{\psi}_2}\,dx
\\
&
\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \triangle \widetilde{v})|^2 + |\mathbf{p} \cdot \nabla \triangle \widetilde{v}|^2\right) e^{2s \widetilde{\psi}_2}\,dx,
\end{align*}
for all $s> s_0$ and all $\widetilde{v} \in H^4(\Omega)$ satisfying
$|\widetilde{v}(x)|=|\nabla \widetilde{v}(x)|=|\triangle \widetilde{v}(x)|=|\nabla \triangle \widetilde{v}(x)|=0$, $x\in \partial\Omega$
and
$\widetilde{v}(x)=0$, $x\in \omega$.
\end{lem}
To establish the Carleman estimate for $\mathbf{p} \cdot \nabla \triangle \widetilde{v}$,
we first prove a Carleman estimate for the first-order partial differential operator $\mathbf{p} \cdot \nabla$.
\begin{lem}
\label{lem:ce1st1}
We assume that there exists $m_1 >0$ such that
$|\mathbf{p} (x)\cdot \nabla d_1(x)|\geq m_1$, $x\in \overline{\Omega}$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$,
we can choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{equation*}
\int_\Omega
(s\widetilde{\varphi}_1)^{2}
\left(
|\nabla \widetilde{v}|^2
+
|\widetilde{v}|^2
\right)
e^{2s \widetilde{\psi}_1}\,dx
\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \widetilde{v})|^2 + |\mathbf{p} \cdot \nabla \widetilde{v}|^2\right) e^{2s \widetilde{\psi}_1}\,dx,
\end{equation*}
for all $s> s_0$ and all $\widetilde{v} \in H^2(\Omega)$ satisfying
$|\widetilde{v}(x)|=|\nabla \widetilde{v}(x)|=0$, $x\in \partial\Omega$.
\end{lem}
\begin{lem}
\label{lem:ce1st2}
We assume that there exists $m_2 >0$ such that
$|\mathbf{p} (x)\cdot \nabla d_2(x)|\geq m_2$, $x\in \overline{\Omega\setminus \omega}$.
Then there exists $\lambda_0>0$ such that for any $\lambda>\lambda_0$,
we can choose $s_0(\lambda)>0$ for which there exists $C=C(s_0,\lambda)>0$ such that
\begin{equation*}
\int_\Omega
(s\widetilde{\varphi}_2)^{2}
\left(
|\nabla \widetilde{v}|^2
+
|\widetilde{v}|^2
\right)
e^{2s \widetilde{\psi}_2}\,dx
\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \widetilde{v})|^2 + |\mathbf{p} \cdot \nabla \widetilde{v}|^2\right) e^{2s \widetilde{\psi}_2}\,dx,
\end{equation*}
for all $s> s_0$ and all $\widetilde{v} \in H^2(\Omega)$ satisfying
$|\widetilde{v}(x)|=|\nabla \widetilde{v}(x)|=0$, $x\in \partial\Omega$
and $\widetilde{v}(x)=0$, $x\in\omega$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:ce1st1}]
Setting $\widetilde{w}=\widetilde{v} e^{s\widetilde{\psi}_1}$ in $\Omega$, we have
\begin{equation*}
e^{s\widetilde{\psi}_1}(\mathbf{p}\cdot \nabla \widetilde{v})
=\mathbf{p}\cdot \nabla \widetilde{w} -s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1)\widetilde{w}
\quad \text{in $\Omega$}.
\end{equation*}
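Here we used $\nabla \widetilde{\psi}_1=\lambda\widetilde{\varphi}_1\nabla d_1$; indeed,
\begin{equation*}
\mathbf{p}\cdot \nabla \widetilde{w}
=\mathbf{p}\cdot \nabla \bigl(\widetilde{v}e^{s\widetilde{\psi}_1}\bigr)
=e^{s\widetilde{\psi}_1}(\mathbf{p}\cdot \nabla \widetilde{v})
+s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1)\widetilde{w}
\quad \text{in } \Omega.
\end{equation*}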
Taking the weighted $L^2$ norm and integrating by parts, we obtain
\begin{align*}
&
\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx \\
&=
\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{w}|^2
\,dx
+
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
(\mathbf{p}\cdot \nabla d_1)^2
|\widetilde{w}|^2
\,dx \\
&\quad
-2
\int_\Omega
s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1) \sum_{j=1}^n p_j \widetilde{w} \partial_j \widetilde{w}
\,dx \\
&\geq
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
(\mathbf{p}\cdot \nabla d_1)^2
|\widetilde{w}|^2
\,dx
-
\int_\Omega
s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1) \sum_{j=1}^n p_j \partial_j (\widetilde{w})^2
\,dx \\
&=
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
(\mathbf{p}\cdot \nabla d_1)^2
|\widetilde{w}|^2
\,dx
+
\int_\Omega
s\lambda^2 \widetilde{\varphi}_1
(\mathbf{p}\cdot \nabla d_1)^2
|\widetilde{w}|^2
\,dx
\\
&\quad
+
\int_\Omega
s\lambda\widetilde{\varphi}_1
\left[
(\mathbf{p}\cdot \nabla d_1)
(\operatorname{div} \mathbf{p})
+
\mathbf{p}\cdot \nabla (\mathbf{p}\cdot \nabla d_1)
\right]
|\widetilde{w}|^2
\,dx.
\end{align*}
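The last equality above follows by integration by parts: since $\widetilde{w}=0$ on $\partial\Omega$,
\begin{equation*}
-\int_\Omega
s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1) \sum_{j=1}^n p_j \partial_j (\widetilde{w})^2
\,dx
=\int_\Omega
\operatorname{div}\left( s\lambda\widetilde{\varphi}_1 (\mathbf{p}\cdot \nabla d_1) \mathbf{p}\right)
|\widetilde{w}|^2
\,dx,
\end{equation*}
and $\nabla \widetilde{\varphi}_1=\lambda\widetilde{\varphi}_1\nabla d_1$ produces the term $s\lambda^2 \widetilde{\varphi}_1(\mathbf{p}\cdot \nabla d_1)^2|\widetilde{w}|^2$.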
Hence we have
\begin{equation*}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\widetilde{w}|^2
\,dx
\leq
C\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx
+C
\int_\Omega
\left(s\lambda^2 \widetilde{\varphi}_1 + s\lambda \widetilde{\varphi}_1 \right)
|\widetilde{w}|^2
\,dx .
\end{equation*}
Taking sufficiently large $s>0$,
we may absorb the second term on the right-hand side of the above inequality into the left-hand side and we see that
\begin{equation*}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\widetilde{w}|^2
\,dx
\leq
C\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx ,
\end{equation*}
that is,
\begin{equation}
\label{eq:ce1st01}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx
\leq
C\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx .
\end{equation}
Set $\widetilde{v}_k =\partial_k \widetilde{v}$ in $\Omega$ for $k=1,2,\ldots, n$.
We consider
\begin{equation*}
\mathbf{p}\cdot \nabla \widetilde{v}_k
=
\partial_k (\mathbf{p}\cdot \nabla \widetilde{v})
-(\partial_k \mathbf{p}) \cdot \nabla \widetilde{v}.
\end{equation*}
Applying the estimate \eqref{eq:ce1st01} to the above equation,
we may obtain
\begin{align*}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\widetilde{v}_k|^2 e^{2s\widetilde{\psi}_1}
\,dx
&\leq
C\int_\Omega
|\mathbf{p}\cdot \nabla \widetilde{v}_k|^2 e^{2s\widetilde{\psi}_1}
\,dx \\
&\leq
C\int_\Omega
|\partial_k (\mathbf{p}\cdot \nabla \widetilde{v})|^2 e^{2s\widetilde{\psi}_1}
\,dx
+
C\int_\Omega
|(\partial_k \mathbf{p}) \cdot \nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx .
\end{align*}
Hence we have
\begin{align*}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx
&\leq
C\int_\Omega
|\nabla (\mathbf{p}\cdot \nabla \widetilde{v})|^2 e^{2s\widetilde{\psi}_1}
\,dx
+
C\int_\Omega
|\nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx .
\end{align*}
Choosing sufficiently large $s>0$,
we can absorb the second term on the right-hand side of the above inequality into the left-hand side and we may get
\begin{equation*}
\int_\Omega
s^2\lambda^2 \widetilde{\varphi}_1^2
|\nabla \widetilde{v}|^2 e^{2s\widetilde{\psi}_1}
\,dx
\leq
C\int_\Omega
|\nabla (\mathbf{p}\cdot \nabla \widetilde{v})|^2 e^{2s\widetilde{\psi}_1}
\,dx.
\end{equation*}
Combining this with \eqref{eq:ce1st01},
we obtain the Carleman estimate of Lemma \ref{lem:ce1st1}.
This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:ce1st2}]
By an argument similar to that used in the proof of Lemma \ref{lem:ce1st1}, we may obtain Lemma \ref{lem:ce1st2}.
\end{proof}
\begin{rmk}
In one spatial dimension, the assumption $|\nabla \widetilde{v}|=0$ on $\partial\Omega$ is not necessary in Lemmas \ref{lem:ce1st1} and \ref{lem:ce1st2}. In this case, integration by parts yields the following Carleman estimate:
\begin{equation*}
\int_\Omega
(s\widetilde{\varphi}_k)^{2}
\left(
|\partial_1 \widetilde{v}|^2
+
|\widetilde{v}|^2
\right)
e^{2s \widetilde{\psi}_k}\,dx
\leq
C\int_\Omega |p_1\partial_1 \widetilde{v}|^2 e^{2s \widetilde{\psi}_k}\,dx,
\end{equation*}
for $k=1,2$.
\end{rmk}
Now we are ready to prove Lemma \ref{lem:ce3rd1} and Lemma \ref{lem:ce3rd2}.
\begin{proof}[Proof of Lemma \ref{lem:ce3rd1}]
Set $\widetilde{y}=\triangle \widetilde{v}$ in $\Omega$.
By the assumptions $|\triangle \widetilde{v}(x)|=|\nabla \triangle \widetilde{v}(x)|=0$, $x\in \partial\Omega$, we see that
$|\widetilde{y}(x)|=|\nabla \widetilde{y}(x)|=0$, $x\in \partial\Omega$.
By Lemma \ref{lem:ce1st1}, we obtain
\begin{equation*}
\int_\Omega
(s\widetilde{\varphi}_1)^{2}
\left(
|\nabla \widetilde{y}|^2
+
|\widetilde{y}|^2
\right)
e^{2s \widetilde{\psi}_1}\,dx
\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \widetilde{y})|^2 + |\mathbf{p} \cdot \nabla \widetilde{y}|^2\right) e^{2s \widetilde{\psi}_1}\,dx,
\end{equation*}
that is,
\begin{align}
\label{eq:ce3rd01}
&\int_\Omega
(s\widetilde{\varphi}_1)^{2}
\left(
|\nabla \triangle \widetilde{v}|^2
+
|\triangle \widetilde{v}|^2
\right)
e^{2s \widetilde{\psi}_1}\,dx \\
&\leq
C\int_\Omega \left( |\nabla (\mathbf{p} \cdot \nabla \triangle \widetilde{v})|^2 + |\mathbf{p} \cdot \nabla \triangle \widetilde{v}|^2\right) e^{2s \widetilde{\psi}_1}\,dx. \nonumber
\end{align}
Next we use the Carleman estimates for elliptic equations to estimate
the remaining derivatives of $\widetilde{v}$ by the left-hand side of \eqref{eq:ce3rd01}.
By Lemma \ref{lem:celemeb} with $p=2$,
we have
\begin{align}
\label{eq:ce3rd02}
&
\int_\Omega
\left[
s\widetilde{\varphi}_1
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{3}
|\nabla \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{5}
|\widetilde{v}|^2
\right]
e^{2s \widetilde{\psi}_1}\,dx
\\
&
\leq
C\int_\Omega (s\widetilde{\varphi}_1)^{2} |\triangle \widetilde{v}|^2 e^{2s \widetilde{\psi}_1}\,dx
. \nonumber
\end{align}
Setting $\widetilde{v}_k =\partial_k \widetilde{v}$ in $\Omega$ for $k=1,2,\ldots, n$
and using Lemma \ref{lem:celemeb} again,
we see that
\begin{align*}
&
\int_\Omega
\left[
s\widetilde{\varphi}_1
\sum_{i,j=1}^n|\partial_i \partial_j \widetilde{v}_k|^2
+
(s\widetilde{\varphi}_1)^{3}
|\nabla \widetilde{v}_k|^2
+
(s\widetilde{\varphi}_1)^{5}
|\widetilde{v}_k|^2
\right]
e^{2s \widetilde{\psi}_1}\,dx
\\
&
\leq
C\int_\Omega (s\widetilde{\varphi}_1)^{2} |\triangle \widetilde{v}_k|^2 e^{2s \widetilde{\psi}_1}\,dx,
\end{align*}
that is,
\begin{align}
\label{eq:ce3rd03}
&
\int_\Omega
\left[
s\widetilde{\varphi}_1
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{3}
\sum_{i,j=1}^n
|\partial_i \partial_j \widetilde{v}|^2
+
(s\widetilde{\varphi}_1)^{5}
|\nabla \widetilde{v}|^2
\right]
e^{2s \widetilde{\psi}_1}\,dx
\\
&
\leq
C\int_\Omega (s\widetilde{\varphi}_1)^{2} |\nabla \triangle \widetilde{v}|^2 e^{2s \widetilde{\psi}_1}\,dx.
\nonumber
\end{align}
Summing up the inequalities \eqref{eq:ce3rd01}--\eqref{eq:ce3rd03},
we obtain the Carleman estimate of Lemma \ref{lem:ce3rd1}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:ce3rd2}]
Using Lemma \ref{lem:ce1st2} and Lemma \ref{lem:celemei}
instead of Lemma \ref{lem:ce1st1} and Lemma \ref{lem:celemeb},
we may prove Lemma \ref{lem:ce3rd2} in the same way as Lemma \ref{lem:ce3rd1}.
\end{proof}
\section{Proof of stability estimates}
\label{proof}
Hereafter we let $C$ denote a generic constant which is independent of $s,x,t$ and let
$C(s)$ denote a generic constant which is independent of $x,t$ but depends on $s$.
\subsection{Stability for the zeroth-order coefficient}
\begin{proof}[Proof of Theorem \ref{thm:ispb}]
Applying Lemma \ref{lem:halftoone} to \eqref{eq:eq01}, we obtain
\begin{equation}
\label{eq:pr01}
\rho_2^2\partial_t u (x,t)
-(\rho_1\partial_t -L)^2 u(x,t)
=F(x,t),\quad (x,t)\in Q,
\end{equation}
where we introduced $F(x,t)$ as
\begin{align}
\label{eq:pr02}
F(x,t)
&=\left[ \rho_2\partial_t^\frac12- (\rho_1\partial_t - L) \right] \left(f(x)R(x,t)\right) +\rho_2 f(x)\frac{R(x,0)}{\sqrt{\pi t}}\\
&=
R(x,t)
\sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_j f(x)) \nonumber\\
&\quad
+\sum_{j=1}^n \left( 2 \sum_{i=1}^n a_{ij}(x)\partial_i R(x,t) - b_j(x) R(x,t)\right) \partial_j f(x) \nonumber\\
&\quad
+\Biggl[
\rho_2\partial_t^{\frac12}R(x,t)-\rho_1\partial_t R(x,t)
+\sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_j R(x,t)) \nonumber \\
&\qquad\quad
-\sum_{j=1}^n b_j(x) \partial_j R(x,t)-c(x) R(x,t)
+
\frac{\rho_2 R(x,0)}{\sqrt{\pi t}}
\Biggr]f(x),\ (x,t) \in Q
\nonumber .
\end{align}
Let us set $y=\partial_t u$, $z=\partial_t^2 u$ in $Q$. By differentiating
\eqref{eq:pr01} with respect to $t$, we have
\begin{align}
\label{eq:pr06}
&
\rho_2^2\partial_t y (x,t)
-(\rho_1\partial_t -L)^2 y(x,t)
=\partial_t F(x,t), & (x,t)\in Q,\\
\label{eq:pr07}
&
\rho_2^2\partial_t z (x,t)
-(\rho_1\partial_t -L)^2 z(x,t)
=\partial_t^2 F(x,t), & (x,t)\in Q.
\end{align}
Since $u(x,t)=0$, $(x,t)\in\partial\Omega\times(0,T)$, we see that
\begin{equation*}
y(x,t)=z(x,t)=0,\quad (x,t)\in\partial\Omega\times(0,T).
\end{equation*}
To use the Carleman estimate in $Q_\delta$, we introduce the weight functions.
Set
\begin{equation*}
\varphi_{\delta,1}(x,t)=\frac{e^{\lambda d_1(x)}}{\ell_\delta(t)}, \quad
\psi_{\delta,1}(x,t)=\frac{e^{\lambda d_1(x)}-e^{2\lambda \|d_1\|_{C(\overline{\Omega})}}}{\ell_\delta(t)}, \quad (x,t) \in Q_\delta,
\end{equation*}
where $\ell_\delta(t)=(t-t_0+\delta)(t_0+\delta-t)$.
Fixing $\lambda>0$ and applying Theorem \ref{thm:ce0b} ($p=0$) to \eqref{eq:pr06} and \eqref{eq:pr07} in $Q_\delta$, we have
\begin{align}
\label{eq:pr08}
&
\int_{Q_\delta}
\Biggl[
(s\varphi_{\delta,1})^3
\left(
|\partial_t y|^2
+
|\partial_t z|^2
+\sum_{i,j=1}^n|\partial_i \partial_j y|^2
+\sum_{i,j=1}^n|\partial_i \partial_j z|^2
\right) \\
&\qquad
+
(s\varphi_{\delta,1})^5
\left(
|\nabla y|^2
+ |\nabla z|^2
\right)
+
(s\varphi_{\delta,1})^7
\left(
|y|^2
+
|z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}\,dxdt
\nonumber\\
&
\leq
C
\int_{Q_\delta} s\varphi_{\delta,1} \left(
|\partial_t F|^2
+
|\partial_t^2 F|^2
\right) e^{2s\psi_{\delta,1}}\,dxdt
+C \widetilde{B},
\nonumber
\end{align}
where
\begin{align*}
\widetilde{B}=& \int_{\Sigma_\delta}
\Biggl[
s\varphi_{\delta,1}
\left(
|\nabla \partial_t y|^2
+
|\nabla \partial_t z|^2
\right)\\
&\qquad
+
(s\varphi_{\delta,1})^2
\left(
|\nabla \partial_t^{\frac12} y|^2
+
|\nabla \partial_t^{\frac12} z|^2
\right)
+
(s\varphi_{\delta,1})^5
\left(
|\nabla y|^2
+
|\nabla z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}
\,dSdt.
\end{align*}
We can estimate $\widetilde{B}$ by $C(s)B^2$.
We note that
$\partial_t^{\frac12}\partial_t^m=\partial_t^{m+\frac12}$, $m\in\mathbb{N}$.
Since there exist constants $C_k(s)>0$ such that $\varphi_{\delta,1}^ke^{2s\psi_{\delta,1}} \leq C_k(s)$ on $\Sigma_\delta$ for $k=0,1,2,\ldots$, we have
\begin{align*}
\widetilde{B}&\leq
C
\int_{\Sigma_\delta}
\Biggl[
s\varphi_{\delta,1}
|\nabla \partial_t^3 u|^2
+
(s\varphi_{\delta,1})^2
\left(
|\nabla \partial_t^{\frac12} \partial_t u|^2
+
|\nabla \partial_t^{\frac12} \partial_t^2 u|^2
\right) \\
&\qquad\qquad
+
(s\varphi_{\delta,1})^5
\left(
|\nabla \partial_t u|^2
+
|\nabla \partial_t^2 u|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}
\,dSdt \\
&\leq C(s) B^2.
\end{align*}
Note that
\begin{equation*}
\int_{Q_\delta} s\varphi_{\delta,1} \left(
|\partial_t F|^2
+
|\partial_t^2 F|^2
\right) e^{2s\psi_{\delta,1}}\,dxdt
\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1} \sum_{|\alpha| \leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt.
\end{equation*}
This together with \eqref{eq:pr08} gives
\begin{align}
\label{eq:pr09}
&
\int_{Q_\delta}
\Biggl[
(s\varphi_{\delta,1})^3
\left(
|\partial_t y|^2
+
|\partial_t z|^2
+\sum_{i,j=1}^n|\partial_i \partial_j y|^2
+\sum_{i,j=1}^n|\partial_i \partial_j z|^2
\right) \\
&\qquad
+
(s\varphi_{\delta,1})^5
\left(
|\nabla y|^2
+ |\nabla z|^2
\right)
+
(s\varphi_{\delta,1})^7
\left(
|y|^2
+
|z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}\,dxdt
\nonumber\\
&
\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
+C(s)B^2.
\nonumber
\end{align}
Let us expand the left-hand side of \eqref{eq:pr01}. We have
\begin{equation*}
\rho_2^2\partial_t u (x,t)
-\rho_1^2\partial_t^2 u(x,t) +2\rho_1\partial_t L u(x,t) -L^2 u(x,t)
=F(x,t),
\quad
(x,t)\in Q.
\end{equation*}
In particular at $t=t_0$, we have
\begin{equation}
\label{eq:pr04}
\rho_2^2\partial_t u (x,t_0)
-\rho_1^2\partial_t^2 u(x,t_0) +2\rho_1\partial_t L u(x,t_0) -L^2 u(x,t_0)
=F(x,t_0),\quad x\in \Omega.
\end{equation}
Taking the weighted $L^2$ norm of \eqref{eq:pr04} in $\Omega$, we obtain
\begin{align}
\label{eq:pr05}
&
\int_{\Omega} \varphi_{\delta,1}(x,t_0) |F(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx
\\
&\leq
C \sum_{k=1}^3 J_k+
C
\int_{\Omega} \varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 4} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber
\end{align}
where
\begin{align}
J_1&=
\int_{\Omega} \varphi_{\delta,1}(x,t_0) |\partial_t u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber \\
J_2&=
\int_{\Omega} \varphi_{\delta,1}(x,t_0) |\partial_t^2 u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber \\
J_3&=
\int_{\Omega} \varphi_{\delta,1}(x,t_0) |\partial_t L u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx.
\nonumber
\end{align}
Let us estimate $J_1,J_2,J_3$. We assume that $s>1$ is large enough to satisfy $s\varphi_{\delta,1}>1$ in $Q_\delta$.
We note that $\partial_t \psi_{\delta,1}(x,t)=(e^{2\lambda (\| d_1\|_{C(\overline{\Omega})} - d_1(x))} -e^{-\lambda d_1(x)})\,2(t_0-t)\,\varphi_{\delta,1}^2(x,t)$ for $(x,t) \in Q_\delta$.
\begin{align*}
J_1
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( \varphi_{\delta,1}|y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left[ \varphi_{\delta,1}^2|y|^2+
\varphi_{\delta,1}|\partial_t y||y| +
s\varphi_{\delta,1}^3|y|^2 \right]e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^3
\left(
|y|^2+|z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt .
\end{align*}
Combining this with \eqref{eq:pr09} to estimate the right-hand side of the above inequality, we obtain
\begin{equation}
\label{eq:pr13}
J_1
\leq
\frac{C}{s^5}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
+C(s)B^2.
\end{equation}
Similarly, we obtain
\begin{align*}
J_2
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( \varphi_{\delta,1}|\partial_t y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left[ \varphi_{\delta,1}^2|\partial_ty|^2+
\varphi_{\delta,1}|\partial_t^2 y||\partial_ty| +
s\varphi_{\delta,1}^3|\partial_t y|^2 \right]e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^3
\left(
|\partial_t y|^2+|\partial_t z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
Putting this together with \eqref{eq:pr09}, we see that
\begin{equation}
\label{eq:pr14}
J_2
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Moreover we have
\begin{align*}
J_3
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( \varphi_{\delta,1}|L y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left[ \varphi_{\delta,1}^2|L y|^2+
\varphi_{\delta,1}|\partial_t L y||L y| +
s\varphi_{\delta,1}^3|L y|^2 \right]e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^3
\left(
|L y|^2+|L z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^3
\left(
\sum_{|\alpha| \leq 2}
|\partial_x^\alpha y|^2
+
\sum_{|\alpha|\leq 2}
|\partial_x^\alpha z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
This together with \eqref{eq:pr09} gives
\begin{equation}
\label{eq:pr15}
J_3
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
By \eqref{eq:pr05} through \eqref{eq:pr15}, we have
\begin{align}
\label{eq:pr16}
&\int_{\Omega} \varphi_{\delta,1}(x,t_0) |F(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx \\
&
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
\nonumber \\
&\quad
+
C
\int_{\Omega} \varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 4} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s) B^2.
\nonumber
\end{align}
We will estimate the left-hand side of the inequality \eqref{eq:pr16} from
below using the Carleman estimate for the elliptic equation stated in
Lemma \ref{lem:celemeb} ($p=1$). By \eqref{eq:pr02} at $t=t_0$, we have
\begin{align}
\label{eq:pr17}
&
\sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_j \widetilde{f}(x)) \\
&\quad
+\frac{1}{R(x,t_0)}\sum_{j=1}^n \left( 2 \sum_{i=1}^n a_{ij}(x)\partial_i R(x,t_0) - b_j(x) R(x,t_0) \right) \partial_j \widetilde{f}(x) \nonumber\\
&\quad
+\frac{1}{R(x,t_0)}\Biggl[
\rho_2\partial_t^{\frac12}R(x,t_0)-\rho_1\partial_t R(x,t_0)
+\sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_j R(x,t_0)) \nonumber \\
&\qquad\qquad\qquad
-\sum_{j=1}^n b_j(x) \partial_j R(x,t_0)-c(x) R(x,t_0)
+
\frac{\rho_2 R(x,0)}{\sqrt{\pi t_0}}
\Biggr]\widetilde{f}(x) \nonumber \\
&=
\frac{F(x,t_0)}{R(x,t_0)}, \quad x\in \Omega.
\nonumber
\end{align}
We note that $f(x)=0$, $x\in \partial\Omega$ and $\nabla f(x)=0$, $x\in \gamma$
are assumed. Applying Lemma \ref{lem:celemeb} to \eqref{eq:pr17} in
$\Omega$, we obtain
\begin{align}
\label{eq:pr18}
&
\frac1{s}
\int_{\Omega}
\varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 2} | \partial_x^\alpha f(x) |^2 e^{2s\psi_{\delta,1}(x,t_0)}
\,dx \\
&\leq
\frac{C}{s}
\int_{\Omega}
\sum_{|\alpha|\leq 2} | \partial_x^\alpha f(x) |^2 e^{2s\psi_{\delta,1}(x,t_0)}
\,dx \nonumber \\
&\leq
\frac{C}{s}
\int_{\Omega}
\Biggl(
\sum_{i,j=1}^n
|\partial_i \partial_j f(x)|^2
\nonumber \\
&\qquad \qquad
+
(s\varphi_{\delta,1}(x,t_0))^2
|\nabla f(x)|^2
+
(s\varphi_{\delta,1}(x,t_0))^4
|f(x)|^2
\Biggr)
e^{2s\psi_{\delta,1}(x,t_0)}\,dx \nonumber \\
&\leq
C
\int_{\Omega}
\varphi_{\delta,1}(x,t_0)
\left|
\frac{F(x,t_0)}{R(x,t_0)}
\right|^2
e^{2s\psi_{\delta,1}(x,t_0)}\,dx
\nonumber \\
&\leq
C
\int_{\Omega}
\varphi_{\delta,1}(x,t_0)
\left|
F(x,t_0)
\right|^2
e^{2s\psi_{\delta,1}(x,t_0)}\,dx.
\nonumber
\end{align}
By \eqref{eq:pr16} and \eqref{eq:pr18}, we obtain
\begin{align}
\label{eq:pr19}
&
\frac1{s}
\int_{\Omega}
\varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 2} | \partial_x^\alpha f(x) |^2 e^{2s\psi_{\delta,1}(x,t_0)}
\,dx \\
&
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
\nonumber \\
&\quad
+
C
\int_{\Omega} \varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 4} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s) B^2.
\nonumber
\end{align}
Let us estimate the first integral term on the right-hand side of \eqref{eq:pr19}.
\begin{equation*}
\int_{Q_\delta}
\varphi_{\delta,1} \sum_{|\alpha |\leq 2} |\partial_x^\alpha f|^2e^{2s\psi_{\delta,1}}\,dxdt
\leq
\int_{\Omega} \varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 2} | \partial_x^\alpha f |^2 e^{2s\psi_{\delta,1}(x,t_0)} h_s(x)\,dx,
\end{equation*}
where
\begin{equation*}
h_s(x)=
\frac{1}{\varphi_{\delta,1}(x,t_0)}\int_{t_0-\delta}^{t_0+\delta} \varphi_{\delta,1}e^{-2s(\psi_{\delta,1}(x,t_0)-\psi_{\delta,1}(x,t))}\,dt.
\end{equation*}
Since $\psi_{\delta,1}(x,t_0)-\psi_{\delta,1}(x,t) \geq 0$,
$(x,t)\in Q_\delta$, $h_s$ converges pointwise to $0$ in $\Omega$ as
$s\to\infty$ by Lebesgue's dominated convergence theorem. Moreover by
Dini's theorem, we see that $h_s$ converges uniformly to $0$ in $\Omega$ as
$s\to \infty$. Hence, taking sufficiently large $s>0$, we can absorb the
first term on the right-hand side of \eqref{eq:pr19}
into the left-hand side and obtain
\begin{align}
\label{eq:pr20}
&\frac1{s}
\int_{\Omega}
\varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 2} | \partial_x^\alpha f(x) |^2 e^{2s\psi_{\delta,1}(x,t_0)}
\,dx \\
&
\leq
C
\int_{\Omega} \varphi_{\delta,1}(x,t_0) \sum_{|\alpha|\leq 4} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s) B^2.
\nonumber
\end{align}
Fix $s>0$. Noting that $ \varphi_{\delta,1}(\cdot,t_0)e^{2s\psi_{\delta,1}(\cdot,t_0)}$ is bounded above and below by positive constants on $\overline{\Omega}$, we see that
\begin{equation*}
\|f \|_{H^2(\Omega)}
\leq C \| u (\cdot, t_0) \|_{H^4(\Omega)} +CB.
\end{equation*}
Thus we obtain the stability estimate.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ispi}]
We may prove Theorem \ref{thm:ispi} by an argument similar to that used in the proof of Theorem \ref{thm:ispb}. In the proof,
Theorem \ref{thm:ce0i} and Lemma \ref{lem:celemei} are used
instead of Theorem \ref{thm:ce0b} and Lemma \ref{lem:celemeb}.
\end{proof}
\subsection{Stability for the diffusion coefficient}
Next we prove Theorem \ref{thm:df1} and Theorem \ref{thm:df2}.
The proofs are very similar to those of Theorem \ref{thm:ispb} and Theorem \ref{thm:ispi}.
\begin{proof}[Proof of Theorem \ref{thm:df1}]
Applying Lemma \ref{lem:halftoone} to \eqref{eq:dfeq01}, we obtain
\begin{equation}
\label{eq:dfpr01}
\rho_2^2\partial_t u (x,t)
-(\rho_1\partial_t -\mathcal{A}_1)^2 u(x,t)
=\widetilde{F}(x,t),\quad (x,t)\in Q,
\end{equation}
where
\begin{align}
\label{eq:dfpr02}
\widetilde{F}(x,t)
&=\left[ \rho_2\partial_t^\frac12- (\rho_1\partial_t - \mathcal{A}_1) \right] \left(\dd (a(x)\nabla r(x,t))\right)
+
\frac{\rho_2 \dd (a(x)\nabla r(x,0))}{\sqrt{\pi t}}
\\
&=
a_1(x) \nabla r(x,t)\cdot \nabla \triangle a(x)
+2a_1(x) \sum_{i,j=1}^n (\partial_i \partial_j r(x,t)) (\partial_i\partial_j a(x))
\nonumber \\
&\quad
+a_1(x) \triangle r(x,t) \triangle a(x)
+(\nabla a_1(x)-\mathbf{b}(x))\cdot (\nabla r(x,t)\cdot \nabla)\nabla a(x)
\nonumber \\
&\quad
+\Biggl[
(\rho_2 \partial_t^{\frac12} -\rho_1 \partial_t)\nabla r(x,t)
+3 a_1(x)\nabla \triangle r(x,t)
+\left(\triangle r(x,t)\right) \nabla a_1(x)
\nonumber \\
&\qquad
-\left(\triangle r(x,t)\right) \mathbf{b}(x)
-c(x) \nabla r(x,t)
+\frac{\rho_2 \nabla r(x,0)}{\sqrt{\pi t}}
\Biggr]\cdot \nabla a(x)
\nonumber \\
&\quad
+(\nabla a_1(x)-\mathbf{b}(x))\cdot (\nabla a(x)\cdot \nabla) \nabla r(x,t)
\nonumber \\
&\quad
+\Biggl[
(\rho_2 \partial_t^{\frac12} -\rho_1 \partial_t)\triangle r(x,t)
+\left(\nabla a_1(x)\cdot \nabla \triangle r(x,t)\right)
+a_1(x)\triangle^2 r(x,t)
\nonumber \\
&\qquad
-\left(\mathbf{b}(x) \cdot \nabla\triangle r(x,t) \right)
-c(x) \triangle r(x,t)
+
\frac{\rho_2 \triangle r(x,0)}{\sqrt{\pi t}}
\Biggr]a(x),\quad
(x,t) \in Q
\nonumber .
\end{align}
Setting $y=\partial_t u$, $z=\partial_t^2 u$ in $Q$ and differentiating
\eqref{eq:dfpr01} with respect to $t$, we have
\begin{align}
\label{eq:dfpr06}
&
\rho_2^2\partial_t y (x,t)
-(\rho_1\partial_t -\mathcal{A}_1)^2 y(x,t)
=\partial_t \widetilde{F}(x,t), & (x,t)\in Q,\\
\label{eq:dfpr07}
&
\rho_2^2\partial_t z (x,t)
-(\rho_1\partial_t -\mathcal{A}_1)^2 z(x,t)
=\partial_t^2 \widetilde{F}(x,t), & (x,t)\in Q.
\end{align}
Since $u(x,t)=0$, $(x,t)\in\partial\Omega\times(0,T)$, we see that
\begin{equation*}
y(x,t)=z(x,t)=0,\quad (x,t)\in\partial\Omega\times(0,T).
\end{equation*}
Fixing $\lambda>0$ and applying Theorem \ref{thm:ce0b} ($p=1$) to \eqref{eq:dfpr06} and \eqref{eq:dfpr07} in $Q_\delta$, we have
\begin{align}
\label{eq:dfpr08}
&
\int_{Q_\delta}
\Biggl[
(s\varphi_{\delta,1})^2
\left(
|\nabla\partial_t y|^2
+
|\nabla\partial_t z|^2
\right)
\\
&\qquad
+
(s\varphi_{\delta,1})^3
\left(
|\nabla(\rho_1\partial_t - \mathcal{A}_1)y|^2
+
|\nabla(\rho_1\partial_t - \mathcal{A}_1)z|^2
\right)
\nonumber \\
&\qquad
+
(s\varphi_{\delta,1})^4
\left(
|\partial_t y|^2
+
|\partial_t z|^2
+\sum_{i,j=1}^n|\partial_i \partial_j y|^2
+\sum_{i,j=1}^n|\partial_i \partial_j z|^2
\right) \nonumber\\
&\qquad
+
(s\varphi_{\delta,1})^6
\left(
|\nabla y|^2
+ |\nabla z|^2
\right)
+
(s\varphi_{\delta,1})^8
\left(
|y|^2
+
|z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}\,dxdt
\nonumber\\
&
\leq
C
\int_{Q_\delta}
(s\varphi_{\delta,1})^2
\left(
|\partial_t \widetilde{F}|^2
+
|\partial_t^2 \widetilde{F}|^2
\right) e^{2s\psi_{\delta,1}}\,dxdt
+C \widehat{B},
\nonumber
\end{align}
where
\begin{align*}
\widehat{B}=& \int_{\Sigma_\delta}
\Biggl[
(s\varphi_{\delta,1})^2
\left(
|\nabla \partial_t y|^2
+
|\nabla \partial_t z|^2
\right)\\
&\qquad
+
(s\varphi_{\delta,1})^3
\left(
|\nabla \partial_t^{\frac12} y|^2
+
|\nabla \partial_t^{\frac12} z|^2
\right)
+
(s\varphi_{\delta,1})^6
\left(
|\nabla y|^2
+
|\nabla z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}
\,dSdt.
\end{align*}
As we have seen in the proof of Theorem \ref{thm:ispb}, we may obtain $\widehat{B}\leq C(s) B^2$.
Note that
\begin{align*}
&\int_{Q_\delta}
(s\varphi_{\delta,1})^2
\left(
|\partial_t \widetilde{F}|^2
+
|\partial_t^2 \widetilde{F}|^2
\right) e^{2s\psi_{\delta,1}}\,dxdt \\
&
\leq
C
\int_{Q_\delta}
(s\varphi_{\delta,1})^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt.
\end{align*}
This together with \eqref{eq:dfpr08} gives
\begin{align}
\label{eq:dfpr09}
&
\int_{Q_\delta}
\Biggl[
(s\varphi_{\delta,1})^2
\left(
|\nabla\partial_t y|^2
+
|\nabla\partial_t z|^2
\right)
\\
&\qquad
+
(s\varphi_{\delta,1})^3
\left(
|\nabla(\rho_1\partial_t - \mathcal{A}_1)y|^2
+
|\nabla(\rho_1\partial_t - \mathcal{A}_1)z|^2
\right)
\nonumber \\
&\qquad
+
(s\varphi_{\delta,1})^4
\left(
|\partial_t y|^2
+
|\partial_t z|^2
+\sum_{i,j=1}^n|\partial_i \partial_j y|^2
+\sum_{i,j=1}^n|\partial_i \partial_j z|^2
\right) \nonumber\\
&\qquad
+
(s\varphi_{\delta,1})^6
\left(
|\nabla y|^2
+ |\nabla z|^2
\right)
+
(s\varphi_{\delta,1})^8
\left(
|y|^2
+
|z|^2
\right)
\Biggr]
e^{2s\psi_{\delta,1}}\,dxdt
\nonumber\\
&
\leq
C
\int_{Q_\delta}
(s\varphi_{\delta,1})^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\nonumber
\end{align}
Let us expand the left-hand side of \eqref{eq:dfpr01}. We have
\begin{equation*}
\rho_2^2\partial_t u (x,t)
-\rho_1^2\partial_t^2 u(x,t) +2\rho_1\partial_t \mathcal{A}_1 u(x,t) -\mathcal{A}_1^2 u(x,t)
=\widetilde{F}(x,t),
\quad
(x,t)\in Q.
\end{equation*}
Moreover we have
\begin{equation*}
\rho_2^2\nabla\partial_t u (x,t)
-\rho_1^2\nabla\partial_t^2 u(x,t) +2\rho_1\nabla\partial_t \mathcal{A}_1 u(x,t) -\nabla\mathcal{A}_1^2 u(x,t)
=\nabla\widetilde{F}(x,t),
\
(x,t)\in Q.
\end{equation*}
In particular at $t=t_0$, we have
\begin{equation}
\label{eq:dfpr03}
\rho_2^2\partial_t u (x,t_0)
-\rho_1^2\partial_t^2 u(x,t_0) +2\rho_1\partial_t \mathcal{A}_1 u(x,t_0) -\mathcal{A}_1^2 u(x,t_0)
=\widetilde{F}(x,t_0),\ x\in \Omega,
\end{equation}
and
\begin{equation}
\label{eq:dfpr04}
\rho_2^2\nabla\partial_t u (x,t_0)
-\rho_1^2\nabla\partial_t^2 u(x,t_0) +2\rho_1\nabla\partial_t \mathcal{A}_1 u(x,t_0) -\nabla\mathcal{A}_1^2 u(x,t_0)
=\nabla\widetilde{F}(x,t_0),\ x\in \Omega.
\end{equation}
Taking the weighted $L^2$ norm of \eqref{eq:dfpr03} and \eqref{eq:dfpr04} in $\Omega$, we obtain
\begin{align}
\label{eq:dfpr05}
&
\int_{\Omega} \left(|\widetilde{F}(x,t_0)|^2+|\nabla\widetilde{F}(x,t_0)|^2\right)e^{2s\psi_{\delta,1}(x,t_0)}\,dx
\\
&\leq
C \sum_{k=1}^6 \widetilde{J}_k+
C
\int_{\Omega} \sum_{|\alpha|\leq 5} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber
\end{align}
where
\begin{align}
&\widetilde{J}_1=
\int_{\Omega} |\partial_t u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
&&\widetilde{J}_2=
\int_{\Omega} |\partial_t^2 u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber \\
&\widetilde{J}_3=
\int_{\Omega} |\partial_t \mathcal{A}_1 u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
&&\widetilde{J}_4=
\int_{\Omega} |\nabla \partial_t u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
\nonumber \\
&\widetilde{J}_5=
\int_{\Omega} |\nabla \partial_t^2 u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx,
&&\widetilde{J}_6=
\int_{\Omega} |\nabla \partial_t \mathcal{A}_1 u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx.
\nonumber
\end{align}
Henceforth we estimate $\widetilde{J}_1$ through $\widetilde{J}_6$ by using the Carleman estimate.
We assume that $s>1$ is large enough to satisfy $s\varphi_{\delta,1}>1$ in $Q_\delta$.
We note that $\partial_t \psi_{\delta,1}(x,t)=(e^{2\lambda (\| d_1\|_{C(\overline{\Omega})} - d_1(x))} -e^{-\lambda d_1(x)})\,2(t_0-t)\,\varphi_{\delta,1}^2(x,t)$ for $(x,t) \in Q_\delta$.
\begin{align*}
\widetilde{J}_1
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
|\partial_t y||y| +
s\varphi_{\delta,1}^2
|y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|y|^2+|z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt .
\end{align*}
Combining this with \eqref{eq:dfpr09} to estimate the right-hand side of the above inequality, we obtain
\begin{equation}
\label{eq:dfpr101}
\widetilde{J}_1
\leq
\frac{C}{s^5}
\int_{Q_\delta}
\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Similarly, we obtain
\begin{align*}
\widetilde{J}_2
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |\partial_t y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
|\partial_t^2 y||\partial_ty| +
s\varphi_{\delta,1}^2
|\partial_t y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\partial_t y|^2+|\partial_t z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
Putting this together with \eqref{eq:dfpr09}, we see that
\begin{equation}
\label{eq:dfpr102}
\widetilde{J}_2
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Moreover we have
\begin{align*}
\widetilde{J}_3
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |\mathcal{A}_1 y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
| \mathcal{A}_1\partial_t y||\mathcal{A}_1 y| +
s\varphi_{\delta,1}^2
|\mathcal{A}_1 y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\mathcal{A}_1 y|^2+|\mathcal{A}_1 z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
\sum_{|\alpha| \leq 2}
|\partial_x^\alpha y|^2
+
\sum_{|\alpha|\leq 2}
|\partial_x^\alpha z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
This together with \eqref{eq:dfpr09} gives
\begin{equation}
\label{eq:dfpr103}
\widetilde{J}_3
\leq
\frac{C}{s}
\int_{Q_\delta}
\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
We have
\begin{align*}
\widetilde{J}_4
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |\nabla y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
|\nabla \partial_t y||\nabla y| +
s\varphi_{\delta,1}^2|\nabla y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla y|^2+|\nabla z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt .
\end{align*}
Combining this with \eqref{eq:dfpr09} to estimate the right-hand side of the above inequality, we obtain
\begin{equation}
\label{eq:dfpr104}
\widetilde{J}_4
\leq
\frac{C}{s^3}
\int_{Q_\delta}
\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Similarly, we obtain
\begin{align*}
\widetilde{J}_5
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |\nabla \partial_t y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
|\nabla \partial_t^2 y||\nabla \partial_ty| +
s\varphi_{\delta,1}^2|\nabla \partial_t y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \partial_t y|^2+|\nabla \partial_t z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
Putting this together with \eqref{eq:dfpr09}, we see that
\begin{equation}
\label{eq:dfpr105}
\widetilde{J}_5
\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Moreover we have
\begin{align*}
\widetilde{J}_6
&=
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\partial_t \left( |\nabla \mathcal{A}_1 y|^2 e^{2s\psi_{\delta,1}}\right)
\,dxdt \\
&\leq
C
\int_{t_0-\delta}^{t_0}
\int_{\Omega}
\left(
|\nabla \mathcal{A}_1\partial_t y||\nabla \mathcal{A}_1 y| +
s\varphi_{\delta,1}^2|\nabla\mathcal{A}_1 y|^2 \right)e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \mathcal{A}_1 y|^2+|\nabla \mathcal{A}_1 z|^2
\right)
e^{2s\psi_{\delta,1}}
\,dxdt \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\Bigl(
|\nabla \partial_t y|^2
+
|\nabla (\rho_1\partial_t-\mathcal{A}_1) y|^2
\\
&\qquad \qquad \qquad \quad
+
|\nabla \partial_t z|^2
+
|\nabla (\rho_1\partial_t-\mathcal{A}_1) z|^2
\Bigr)
e^{2s\psi_{\delta,1}}
\,dxdt.
\end{align*}
This together with \eqref{eq:dfpr09} gives
\begin{equation}
\label{eq:dfpr106}
\widetilde{J}_6
\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)
e^{2s\psi_{\delta,1}}\,dxdt
+C(s) B^2.
\end{equation}
Summing up the estimates \eqref{eq:dfpr05} through \eqref{eq:dfpr106}, we have
\begin{align}
\label{eq:dfpr16}
&
\int_{\Omega} \left(|\widetilde{F}(x,t_0)|^2+|\nabla\widetilde{F}(x,t_0)|^2\right)e^{2s\psi_{\delta,1}(x,t_0)}\,dx
\\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)e^{2s\psi_{\delta,1}}\,dxdt
\nonumber \\
&\quad
+
C
\int_{\Omega} \sum_{|\alpha|\leq 5} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s) B^2.
\nonumber
\end{align}
Let us estimate the left-hand side of the inequality \eqref{eq:dfpr16} from below.
By \eqref{eq:dfpr02} at $t=t_0$, we have
\begin{align}
\label{eq:dfpr17}
&a_1(x) \nabla r(x,t_0)\cdot \nabla \triangle a(x) \\
&=
\widetilde{F}(x,t_0)
-2a_1(x) \sum_{i,j=1}^n (\partial_i \partial_j r(x,t_0)) (\partial_i\partial_j a(x))
\nonumber \\
&\quad
-a_1(x) \triangle r(x,t_0) \triangle a(x)
-(\nabla a_1(x)-\mathbf{b}(x))\cdot (\nabla r(x,t_0)\cdot \nabla)\nabla a(x)
\nonumber \\
&\quad
-\Biggl[
(\rho_2 \partial_t^{\frac12} -\rho_1 \partial_t)\nabla r(x,t_0)
+3 a_1(x)\nabla \triangle r(x,t_0)
+\left(\triangle r(x,t_0)\right) \nabla a_1(x)
\nonumber \\
&\qquad
-\left(\triangle r(x,t_0)\right) \mathbf{b}(x)
-c(x) \nabla r(x,t_0)
+\frac{\rho_2\nabla r(x,0)}{\sqrt{\pi t_0}}
\Biggr]\cdot \nabla a(x)
\nonumber \\
&\quad
-(\nabla a_1(x)-\mathbf{b}(x))\cdot (\nabla a(x)\cdot \nabla) \nabla r(x,t_0)
\nonumber \\
&\quad
-\Biggl[
(\rho_2 \partial_t^{\frac12} -\rho_1 \partial_t)\triangle r(x,t_0)
+\left(\nabla a_1(x)\cdot\nabla \triangle r(x,t_0)\right)
+a_1(x)\triangle^2 r(x,t_0)
\nonumber \\
&\qquad
-\left(\mathbf{b}(x) \cdot \nabla\triangle r(x,t_0)\right)
-c(x) \triangle r(x,t_0)
+\frac{\rho_2 \triangle r(x,0)}{\sqrt{\pi t_0}}
\Biggr]a(x),\quad
x \in \Omega
\nonumber .
\end{align}
Note that
\begin{equation*}
|\nabla r(x,t_0)\cdot \nabla d_1(x)| \geq m_1>0, \quad x\in \overline{\Omega},
\end{equation*}
and $a\in H^4(\Omega)$ satisfies $a(x)=0$, $x\in D$.
Let us apply Lemma \ref{lem:ce3rd1} to \eqref{eq:dfpr17} in
$\Omega$. Then we obtain
\begin{align}
\nonumber
&
\int_\Omega
\Biggl[
s\varphi_{\delta,1}(x,t_0)
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k a(x)|^2
+
(s\varphi_{\delta,1}(x,t_0))^{2}
|\nabla \triangle a(x)|^2
\\
&\qquad
+
(s\varphi_{\delta,1}(x,t_0))^{3}
\sum_{i,j=1}^n|\partial_i \partial_j a(x)|^2
\nonumber \\
&\qquad
+(s\varphi_{\delta,1}(x,t_0))^{5}
\left(|\nabla a(x)|^2+ |a(x)|^2 \right)
\Biggr]
e^{2s \psi_{\delta,1}(x,t_0)}\,dx
\nonumber \\
&\leq
C\int_{\Omega} \left(|\widetilde{F}(x,t_0)|^2+|\nabla\widetilde{F}(x,t_0)|^2\right)e^{2s\psi_{\delta,1}(x,t_0)}\,dx
+
C\int_{\Omega} \sum_{|\alpha|\leq 3} |\partial_x^\alpha a(x) |^2 e^{2s\psi_{\delta,1}(x,t_0)}\,dx.
\nonumber
\end{align}
Taking sufficiently large $s>0$, we may absorb the second term on the right-hand side of the above inequality and we get
\begin{align}
\nonumber
&
\int_\Omega
\Biggl[
s\varphi_{\delta,1}(x,t_0)
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k a(x)|^2
+
(s\varphi_{\delta,1}(x,t_0))^{2}
|\nabla \triangle a(x)|^2
\\
&\qquad
+
(s\varphi_{\delta,1}(x,t_0))^{3}
\sum_{i,j=1}^n|\partial_i \partial_j a(x)|^2
\nonumber \\
&\qquad
+(s\varphi_{\delta,1}(x,t_0))^{5}
\left(|\nabla a(x)|^2+ |a(x)|^2 \right)
\Biggr]
e^{2s \psi_{\delta,1}(x,t_0)}\,dx
\nonumber \\
&\leq
C\int_{\Omega} \left(|\widetilde{F}(x,t_0)|^2+|\nabla\widetilde{F}(x,t_0)|^2\right)e^{2s\psi_{\delta,1}(x,t_0)}\,dx.
\nonumber
\end{align}
Combining this with \eqref{eq:dfpr16}, we obtain
\begin{align}
\label{eq:dfpr19}
&
\int_\Omega
\Biggl[
s\varphi_{\delta,1}(x,t_0)
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k a(x)|^2
+
(s\varphi_{\delta,1}(x,t_0))^{2}
|\nabla \triangle a(x)|^2
\\
&\qquad
+
(s\varphi_{\delta,1}(x,t_0))^{3}
\sum_{i,j=1}^n|\partial_i \partial_j a(x)|^2
\nonumber \\
&\qquad
+(s\varphi_{\delta,1}(x,t_0))^{5}
\left(|\nabla a(x)|^2+ |a(x)|^2 \right)
\Biggr]
e^{2s \psi_{\delta,1}(x,t_0)}\,dx
\nonumber \\
&\leq
C
\int_{Q_\delta}
s\varphi_{\delta,1}^2
\left(
|\nabla \triangle a|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a|^2
\right)e^{2s\psi_{\delta,1}}\,dxdt
\nonumber \\
&\quad
+
C
\int_{\Omega} \sum_{|\alpha|\leq 5} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s) B^2.
\nonumber\\
&\leq
C
\int_{\Omega}
s\varphi_{\delta,1}^2(x,t_0)
\left(
|\nabla \triangle a(x)|^2
+
\sum_{|\alpha| \leq 2} |\partial_x^\alpha a(x)|^2
\right)e^{2s\psi_{\delta,1}(x,t_0)}\,dx
\nonumber \\
&\quad
+
C
\int_{\Omega} \sum_{|\alpha|\leq 5} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s)B^2.
\nonumber
\end{align}
In the last inequality, we used the fact that
\begin{equation*}
\varphi_{\delta,1}^2(x,t)e^{2s\psi_{\delta,1}(x,t)}\leq
\varphi_{\delta,1}^2(x,t_0)e^{2s\psi_{\delta,1}(x,t_0)}, \quad (x,t)\in Q_{\delta}
\end{equation*}
for large $s>0$.
Choose sufficiently large $s>0$ and absorb the first term on the right-hand side of \eqref{eq:dfpr19}
into the left-hand side. Then we obtain
\begin{align}
\label{eq:dfpr20}
&
\int_\Omega
\Biggl[
s\varphi_{\delta,1}(x,t_0)
\sum_{i,j,k=1}^n|\partial_i \partial_j \partial_k a(x)|^2
+
(s\varphi_{\delta,1}(x,t_0))^{2}
|\nabla \triangle a(x)|^2
\\
&\qquad
+
(s\varphi_{\delta,1}(x,t_0))^{3}
\sum_{i,j=1}^n|\partial_i \partial_j a(x)|^2
\nonumber \\
&\qquad
+(s\varphi_{\delta,1}(x,t_0))^{5}
\left(|\nabla a(x)|^2+ |a(x)|^2 \right)
\Biggr]
e^{2s \psi_{\delta,1}(x,t_0)}\,dx \nonumber \\
&\leq
C
\int_{\Omega} \sum_{|\alpha|\leq 5} |\partial_x^\alpha u(x,t_0)|^2e^{2s\psi_{\delta,1}(x,t_0)}\,dx +C(s)B^2.
\nonumber
\end{align}
Fix $s>0$. Noting that $ \varphi_{\delta,1}(\cdot,t_0)e^{2s\psi_{\delta,1}(\cdot,t_0)}$ is bounded above and below by positive constants on $\overline{\Omega}$, we see that
\begin{equation*}
\|a \|_{H^3(\Omega)}
\leq C \| u (\cdot, t_0) \|_{H^5(\Omega)} +CB.
\end{equation*}
Thus we obtain the stability estimate \eqref{df:seb}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:df2}]
We may prove Theorem \ref{thm:df2} in the same way as Theorem \ref{thm:df1}.
\end{proof}
\section*{Acknowledgments}
The authors acknowledge support from the Japan Society for the Promotion of Science (JSPS) A3 foresight program: Modeling and Computation of Applied Inverse Problems. MM also acknowledges support from
Grant-in-Aid for Scientific Research (17K05572 and 17H02081) of JSPS.
\section{Introduction}
For statistical hypothesis testing, one of the widely-used conventional methods
is using the Student $t$-test statistic,
\[
T= \frac{\bar{X}-\mu}{S/\sqrt{n}},
\]
where $\bar{X}$ is the sample mean, $S$ is the sample standard deviation, and $\mu$ is the population mean under the null hypothesis.
However, a statistical inference using this Student $t$-test statistic is extremely
sensitive to data contamination.
In this article, we briefly review recently developed alternative methods proposed by
\cite{Park:2018a} and \cite{Jeong/Son/Lee/Kim:2018} which {are shown to be robust to} data contamination.
Their statistics are
{developed based on the median and the median absolute deviation estimators,
and the} Hodges-Lehmann estimator \citep{Hodges/Lehmann:1963}
and the Shamos estimator \citep{Shamos:1976}.
{They have shown that these statistics} are pivotal and converge
to the standard normal distribution.
However, when the sample size is small, it is not
appropriate to use {the asymptotic property of these statistics
(i.e., the standard normal distribution) for making a} statistical inference.
This motivates us to implement extensive Monte Carlo simulations to obtain the empirical distributions of the robustified
$t$-test statistics and calculate their related quantile values,
which can then be used for making a statistical inference.
\section{Robustified $t$-test statistics}
{For the sake of completeness, in} this section, we briefly review the test statistics
proposed by \cite{Park:2018a} and \cite{Jeong/Son/Lee/Kim:2018}.
By replacing the mean and the standard deviation
with the median and the median absolute deviation (MAD), respectively,
\cite{Park:2018a} proposed the following robustified $t$-test statistic
\[
T = \frac{ \hat{\mu}_m - \mu }{ \hat{\sigma}_M/\sqrt{n} },
\]
where $\hat{\mu}_m = \mathop{\mathrm{median}}_{1\le i\le n} X_i$ and
$\hat{\sigma}_M = \mathop{\mathrm{median}}_{1\le i\le n}
\big| X_i - \mathop{\mathrm{median}}_{1\le i\le n} X_i \big|$.
He also showed that the above statistic is {a pivotal quantity}.
However, it does not converge to the standard normal distribution.
He suggested the following statistic which converges to the standard normal distribution.
\begin{equation} \label{EQ:TA}
T_A =
\sqrt{\frac{2n}{\pi}} {\Phi^{-1}\Big(\frac{3}{4}\Big)} \cdot
\frac{\displaystyle\mathop{\mathrm{median}}_{1\le i\le n}X_i-\mu}
{\displaystyle\mathop{\mathrm{median}}_{1\le i\le n}
\big| X_i - \mathop{\mathrm{median}}_{1\le i\le n} X_i \big|}
\stackrel{d}{\longrightarrow} N(0,1),
\end{equation}
{where $\Phi^{-1}(\cdot)$ is the inverse of the standard normal cumulative distribution function and $\stackrel{d}{\longrightarrow}$ denotes convergence in distribution.}
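As a concrete illustration, $T_A$ is straightforward to evaluate from a sample. The following minimal Python sketch (the function name is ours and NumPy is assumed) computes it:

```python
import numpy as np
from statistics import NormalDist

PHI_INV_34 = NormalDist().inv_cdf(0.75)   # Phi^{-1}(3/4), about 0.6745

def t_a(x, mu):
    """T_A: robustified t-statistic based on the sample median and the
    (unscaled) median absolute deviation."""
    x = np.asarray(x, dtype=float)
    n = x.size
    med = np.median(x)
    mad = np.median(np.abs(x - med))      # unscaled MAD
    return np.sqrt(2.0 * n / np.pi) * PHI_INV_34 * (med - mu) / mad

print(t_a([1, 2, 3, 4, 5], mu=3))         # 0.0, since the median equals mu
```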
Analogous to the idea of \cite{Park:2018a}, \cite{Jeong/Son/Lee/Kim:2018} also proposed
another robustified $t$-test statistic in which
the Hodges-Lehmann estimator \citep{Hodges/Lehmann:1963} and
the Shamos estimator \citep{Shamos:1976} are considered.
It is given by
\[
T = \frac{\hat{\mu}_H - \mu}{\hat{\sigma}_S/\sqrt{n}},
\]
where {$\hat{\mu}_H$ and $\hat{\sigma}_S$ represent the Hodges-Lehmann and the Shamos estimators, respectively}.
Note that the Hodges-Lehmann estimator is defined as
\[
\hat{\mu}_H
= \mathop{\mathrm{median}}_{i \le j} \Big( \frac{X_i+X_j}{2} \Big)
\]
and the Shamos estimator is defined as
\[
\hat{\sigma}_S
= \displaystyle\mathop{\mathrm{median}}_{i \le j} \big( |X_i-X_j| \big).
\]
It is easy to show that the above test statistic {by \cite{Jeong/Son/Lee/Kim:2018} is also a pivotal quantity}.
However, it does not converge to the standard normal distribution.
In Section 2.2 of \cite{Jeong/Son/Lee/Kim:2018}, they suggested the following
\begin{equation} \label{EQ:TB}
T_{B} = \sqrt{\frac{6n}{\pi}} {\Phi^{-1}\Big( \frac{3}{4} \Big)}
\frac{\displaystyle\mathop{\mathrm{median}}_{i \le j} \Big(\frac{X_i+X_j}{2}\Big) - \mu}
{\displaystyle\mathop{\mathrm{median}}_{i \le j}\big(|X_i-X_j|\big)},
\end{equation}
which converges to the standard normal distribution and is also pivotal.
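The two estimators and $T_B$ can likewise be computed directly. In the Python sketch below (function names are ours), the Shamos estimator is taken over pairs $i<j$, since pairs with $i=j$ contribute only zeros:

```python
import numpy as np
from statistics import NormalDist

PHI_INV_34 = NormalDist().inv_cdf(0.75)

def hodges_lehmann(x):
    """Median of the Walsh averages (X_i + X_j)/2 over pairs i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(x.size)        # pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)

def shamos(x):
    """Median of |X_i - X_j| over distinct pairs i < j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(x.size, k=1)   # pairs with i < j
    return np.median(np.abs(x[i] - x[j]))

def t_b(x, mu):
    """T_B: robustified t-statistic from the Hodges-Lehmann and Shamos estimators."""
    n = len(x)
    return np.sqrt(6.0 * n / np.pi) * PHI_INV_34 * (hodges_lehmann(x) - mu) / shamos(x)

print(hodges_lehmann([1, 2, 3]), shamos([1, 2, 3]))  # 2.0 1.0
```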
\section{Empirical distributions}
As mentioned above, the robustified statistics, $T_A$ in (\ref{EQ:TA})
and $T_B$ in (\ref{EQ:TB}), {converge} to the standard normal distribution.
However, when a sample size is small,
it is not appropriate to use the standard normal distribution.
It may be impossible to find the theoretical distributions of $T_A$
and $T_B$. Thus, we will obtain the empirical distributions of $T_A$ and $T_B$ using {extensive Monte Carlo simulations}
and calculate {their} empirical quantiles, which are useful for {estimating} critical values, confidence intervals, p-values,
etc. {We used the R language~\citep{R:2018} to conduct the simulations, which are summarized as follows.}
We generated one hundred million ($N=10^8$) samples of size $n$
from the standard normal distribution to obtain the empirical distributions of
$T_A$ and $T_B$, where $n=4,5,\ldots,50$.
Using these samples, we can obtain the empirical distribution of $T_A$ or $T_B$
for each sample size $n$.
Then, by inverting the empirical distribution, we obtained the empirical quantile
for each probability $p$.
We provide these empirical quantile values {in Table~\ref{TBL:quantilesTA} for $T_A$ and
Table~\ref{TBL:quantilesTB} for $T_B$}.
In {these two} tables, we provide the lower quantiles of
0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.975, 0.98, 0.99, and 0.995
for sample sizes $n$ ranging from $4$ to $50$ with an increment by one.
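The simulation itself amounts to the following scheme, shown here as a scaled-down Python sketch for $T_A$ (the paper's computation uses $N=10^8$ replications in R; a much smaller $N$ is used here purely for illustration):

```python
import numpy as np
from statistics import NormalDist

PHI_INV_34 = NormalDist().inv_cdf(0.75)

def t_a_replications(n, n_rep, rng):
    """n_rep independent realizations of T_A (with mu = 0) for samples of
    size n drawn from N(0,1); vectorized over replications."""
    x = rng.standard_normal((n_rep, n))
    med = np.median(x, axis=1)
    mad = np.median(np.abs(x - med[:, None]), axis=1)
    return np.sqrt(2.0 * n / np.pi) * PHI_INV_34 * med / mad

rng = np.random.default_rng(2018)
t = t_a_replications(n=10, n_rep=200_000, rng=rng)
for p in (0.9, 0.95, 0.975, 0.995):
    print(f"p = {p}: empirical quantile = {np.quantile(t, p):.3f}")
```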
It is worthwhile to discuss the accuracy of the empirical quantiles obtained above.
Let $F^{-1}_N(p)$ be the
empirical quantile of $p$ obtained from the $N$ replications and
$F^{-1}(p)$ be the true quantile of $p$. Then it is easily seen from
Corollary~21.5 of \cite{Vaart:1998} that the sequence $\sqrt{N}\big(
F_N^{-1}(p) - F^{-1}(p) \big)$ is asymptotically normal with mean zero and
variance $p(1-p)/f^2\big(F^{-1}(p) \big)$, where $f$ denotes the density. Thus, the standard deviation
of the empirical quantile of $p$ is approximately proportional to
$\sqrt{p(1-p)/N}$, which has its maximum value at $p=0.5$. {Consequently},
the empirical quantile values are computed with an approximate accuracy of
$0.5/\sqrt{N}$. With $N=10^8$, we have $0.5/\sqrt{N}=0.00005$ which
roughly indicates that the empirical quantile values are accurate
up to the fourth decimal point.
Given that the probability density functions of $T_A$ and $T_B$ are symmetric at zero,
we have $F(-x) = 1 - F(x)$ and $F^{-1}(1/2)=0$.
{Letting $q_p$ be the $p$th lower quantile so that $F(q_p)=p$,
we} have $q_{1-p} = -q_{p}$.
Thus, it is enough to find the $p$th quantile only when $p>1/2$.
Let $G(\cdot)$ be the cumulative distribution function of $|X|$. Then we have
\[
G(x) = P[ |X| \le x ] = P[ -x \le X \le x ] = F(x) - F(-x)
= 2F(x) -1.
\]
Substituting $x=q_p$ into the above, we have $G(q_p) = 2p-1$.
Thus, we have
\[
q_p = G^{-1}(2p-1),
\]
which is more effective than using $q_p = F^{-1}(p)$ in obtaining empirical quantile values.
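This folding identity is easy to verify numerically on a symmetric sample; since both tails contribute to the quantile of $|X|$, the folded estimator also reduces the variance of extreme quantile estimates. A quick Python check (not the paper's full simulation):

```python
import numpy as np

# For X symmetric about 0, the p-th quantile of X equals the
# (2p-1)-th quantile of |X|, i.e. q_p = G^{-1}(2p - 1).
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
for p in (0.9, 0.975):
    q_direct = np.quantile(x, p)
    q_folded = np.quantile(np.abs(x), 2 * p - 1)
    print(p, q_direct, q_folded)   # the two estimates agree closely
```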
In what follows, we illustrate the use of the empirical quantiles.
\section{Illustrative examples}
\subsection{Confidence intervals}
{It deserves mentioning that the above robustified statistics $T_A$ of \cite{Park:2018a} and $T_B$ of \cite{Jeong/Son/Lee/Kim:2018} are simple and easy to implement in practical applications. More importantly, they are pivotal quantities and converge
to the standard normal distribution.}
Let $\alpha_1$ and $\alpha_2$ be such that $\alpha=\alpha_1+\alpha_2$.
Let $q_{\alpha_1}$ and $q_{\alpha_2}$ be the $1-\alpha_1$ and $\alpha_2$ upper quantiles of
the distribution of {the statistic} $T_A$, respectively. Then we have
\[
P( q_{\alpha_1} \le T_A \le q_{\alpha_2} ) = 1 - \alpha.
\]
Thus, solving the following for $\mu$
\[
q_{\alpha_1} \le T_A \le q_{\alpha_2},
\]
we can obtain a $100(1-\alpha)$\% confidence interval for $\mu$ as follows:
\[
\Bigg[
\mathop{\mathrm{median}}_{1\le i\le n}X_i
- \frac{ q_{\alpha_2} \sqrt{{\pi}/{2}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{1\le i\le n}\big|X_i-\mathop{\mathrm{median}}_{1\le i\le n}X_i\big|, ~
\mathop{\mathrm{median}}_{1\le i\le n}X_i
- \frac{ q_{\alpha_1} \sqrt{{\pi}/{2}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{1\le i\le n}\big|X_i-\mathop{\mathrm{median}}_{1\le i\le n}X_i\big|
\Bigg].
\]
If we consider {the equi-tailed} confidence interval ($\alpha_1=\alpha_2=\alpha/2$),
then we have $q_{\alpha_1}=-q_{\alpha/2}$ and $q_{\alpha_2}=q_{\alpha/2}$
since the distribution of $T_A$ is symmetric.
The end points of the confidence interval are given by
\[
\mathop{\mathrm{median}}_{1\le i\le n}X_i
\pm q_{\alpha/2} \frac{\sqrt{{\pi}/{2}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{1\le i\le n}\big|X_i-\mathop{\mathrm{median}}_{1\le i\le n}X_i\big|.
\]
{In a similar way as done above, we can also} obtain a $100(1-\alpha)$\% confidence interval for $\mu$ using the statistic $T_B$.
This is given by
\begin{align*}
&\Bigg[
\mathop{\mathrm{median}}_{i \le j} \biggl(\frac{X_i+X_j}{2} \biggr)
- \frac{ q_{\alpha_2} \sqrt{{\pi}/{6}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{i \le j}\big(|X_i-X_j|\big), ~ \\
&\qquad\qquad\qquad\qquad\qquad
\mathop{\mathrm{median}}_{i \le j} \biggl(\frac{X_i+X_j}{2} \biggr)
- \frac{ q_{\alpha_1} \sqrt{{\pi}/{6}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{i \le j}\big(|X_i-X_j|\big)
\Bigg],
\end{align*}
where $q_{\alpha_1}$ and $q_{\alpha_2}$ denote the $1-\alpha_1$ and $\alpha_2$ upper quantiles of
the distribution of $T_B$, respectively.
The end points of {the equi-tailed} confidence interval are also easily obtained as
\[
\mathop{\mathrm{median}}_{i \le j} \biggl(\frac{X_i+X_j}{2} \biggr)
\pm \frac{ q_{\alpha/2} \sqrt{{\pi}/{6}}}{\Phi^{-1}\Big(\frac{3}{4}\Big)\sqrt{n}}
\mathop{\mathrm{median}}_{i \le j}\big(|X_i-X_j|\big).
\]
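Given a quantile value $q_{\alpha/2}$ read off the tables, the equi-tailed interval based on $T_B$ is a one-liner. In the Python sketch below the data and the value $q=2.6$ are illustrative placeholders, not entries from the actual tables:

```python
import numpy as np
from statistics import NormalDist

PHI_INV_34 = NormalDist().inv_cdf(0.75)

def t_b_equi_tailed_ci(x, q):
    """Equi-tailed CI for mu based on T_B; q is the upper alpha/2 quantile
    of T_B for this sample size (in practice taken from the tables)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    i, j = np.triu_indices(n)                     # Walsh averages, i <= j
    hl = np.median((x[i] + x[j]) / 2.0)
    i, j = np.triu_indices(n, k=1)                # Shamos pairs, i < j
    sh = np.median(np.abs(x[i] - x[j]))
    half = q * np.sqrt(np.pi / 6.0) / (PHI_INV_34 * np.sqrt(n)) * sh
    return hl - half, hl + half

# illustrative data and quantile value only
lo, hi = t_b_equi_tailed_ci([4.1, 5.2, 4.8, 5.9, 4.4, 5.0], q=2.6)
print(lo, hi)
```

By construction the interval is centered at the Hodges-Lehmann estimate.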
\begin{figure}[t]
\includegraphics{CI}
\caption{Confidence interval and its corresponding interval length of
the three statistics.
(a) Confidence intervals. (b) Interval lengths.
}
\label{FIG:CI}
\end{figure}
As an illustration,
we consider the data set provided by Example 7.1-5 of \cite{Hogg/Tanis/Zimmerman:2015}.
In the example, the data on the amount of butterfat in pounds produced by a typical cow
are provided.
These data are 481, 537, 513, 583, 453, 510, 570, 500, 457, 555,
618, 327, 350, 643, 499, 421, 505, 637, 599, 392.
Assuming normality, they obtained the confidence interval based on the
Student $t$-test statistic, which is given by
\[
[472.80, ~542.20].
\]
To investigate the effect of {data} contamination,
we replaced the last observation (392)
with the value of $\delta$ {ranging} from $0$ to $2000$ in a grid-like fashion.
In Figure~\ref{FIG:CI} (a), we plotted the lower and upper ends of the confidence intervals
based on the Student, $T_A$ and $T_B$ versus the value of $\delta$.
In Figure~\ref{FIG:CI} (b), we plotted the interval lengths of the confidence intervals
under consideration. As shown in Figure~\ref{FIG:CI}, the confidence interval based on the
conventional Student $t$-test statistic changes dramatically while
the confidence intervals based on $T_A$ and $T_B$ do not change much.
\subsection{Empirical powers}
Using the confidence interval, we can easily
employ the robustified $t$-test statistics $T_A$ and $T_B$ to
perform the hypothesis test of
$H_0: \mu=0$ versus $H_1: \mu \neq 0$.
{In this subsection, we compare the empirical statistical powers of
these two statistics with the power using the conventional Student $t$-test statistic.
Here, the statistical power of a hypothesis test is the probability
that the test correctly rejects the null hypothesis
when the alternative hypothesis is true.}
\begin{figure}[t]
\includegraphics{power1}
\caption{The empirical powers for $H_0:\mu=0$
versus $H_1:\mu \neq 0$ with
sample size $n=10$.
(a) No contamination
and (b) Contamination ($x_1=10$).
}
\label{FIG:power11}
\end{figure}
To obtain the power curve of a hypothesis test,
we generated the first sample of size $n=10$ from
$N(\mu,1)$. The second sample of size $n=10$ was also generated from $N(\mu,1)$,
but one observation in the sample was contaminated by assigning {the value of} $10$.
For a given value of $\mu$, we generated a sample and performed the hypothesis test.
We repeated this hypothesis test 10,000 times.
Dividing the number of {rejections of $H_0$} by
10,000, we obtain the empirical power at a given value of $\mu$.
The value of $\mu$ is changed from $-2$ to $2$ in a grid-like fashion.
These results are plotted in Figure~\ref{FIG:power11}.
As shown in Figure~\ref{FIG:power11} (a),
the empirical power using the conventional Student $t$-test statistic is
the highest when there is no contamination.
Note that the power using $T_B$ is very close to that using the Student $t$-test statistic,
while the test based on $T_A$ loses power noticeably. However, when there is contamination, the tests based on $T_A$ and $T_B$ clearly outperform
the conventional method as shown in Figure~\ref{FIG:power11} (b).
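A scaled-down version of this power experiment for $T_B$ can be sketched in Python as follows (the asymptotic critical value $1.96$ is used here instead of the small-sample table value, and far fewer repetitions are run, so the numbers are only indicative):

```python
import numpy as np
from statistics import NormalDist

PHI_INV_34 = NormalDist().inv_cdf(0.75)

def power_t_b(mu, n=10, reps=3000, crit=1.96, contam=None, seed=0):
    """Empirical rejection rate of H0: mu = 0 using |T_B| > crit.
    crit = 1.96 is the asymptotic 5% critical value; in practice the
    small-sample quantile from the tables should be used instead."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(n)        # i <= j: Walsh averages
    il = np.triu_indices(n, k=1)   # i <  j: Shamos pairs
    c = np.sqrt(6.0 * n / np.pi) * PHI_INV_34
    rejections = 0
    for _ in range(reps):
        x = rng.normal(mu, 1.0, n)
        if contam is not None:
            x[0] = contam          # contaminate one observation
        hl = np.median((x[iu[0]] + x[iu[1]]) / 2.0)
        sh = np.median(np.abs(x[il[0]] - x[il[1]]))
        rejections += abs(c * hl / sh) > crit
    return rejections / reps

print(power_t_b(0.0), power_t_b(1.0))                 # size vs. power
print(power_t_b(0.0, contam=10), power_t_b(1.0, contam=10))
```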
\section{{Concluding remarks}}
For brevity, we only provide the empirical quantiles of
0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.975, 0.98, 0.99, and 0.995
for each of the sample sizes $n=4,5,\ldots,50$.
These empirical quantiles should be sufficient for most practical problems.
However, to obtain an accurate p-value for hypothesis testing,
we need more accurate empirical
quantile values at a finer grid of probabilities.
We are currently developing an R package which provides
all the detailed empirical quantiles
needed for calculating the p-value.
We are planning to upload the developed R package to
CRAN:
\begin{center}
\url{https://cran.r-project.org/}
\end{center}
\section*{Acknowledgment}
This work was supported by the National Research Foundation of Korea (NRF) grant
funded by the Korea government (No. NRF-2017R1A2B4004169).
\bibliographystyle{chicago}
\section{Force-Extension Relation}
As a first step towards an understanding of the macroscopic viscoelastic
properties of an entangled network of wormlike chains one has to understand the
elastic properties of a single wormlike chain. The linear force-extension
relation of a wormlike chain is obtained by the following argument. Consider a
wormlike chain with one end clamped at fixed orientation at the origin. Apply a
weak force $f${\boldmath $n$} (directed along the unit vector {\boldmath $n$})
to the other end \cite{comple}. The configurational distribution
function is then modified by a Boltzmann factor $\exp(f\mbox{\boldmath
$nR$}_L/k_BT)$. The extension $\delta\! R_L:=\mbox{\boldmath $n$}(\langle
\mbox{\boldmath $R$}_L\rangle_f-\langle\mbox{\boldmath $R$}_L\rangle)$ in the
direction of the applied force to first order in $f$ is given by the linear
extension coefficient $\tilde f^{-1}_{\theta_0}:=\partial\delta\!R_L/\partial
f|_{f=0}$,
\begin{equation}
\label{ext}
\tilde f^{-1}_{\theta_0} = \int\!\! ds\!\! \int\!\! ds' \, \langle
\cos\theta_s\cos\theta_{s'} \rangle -
\left(\int\!\!ds\, \langle \cos\theta_s \rangle \right)^2 .
\end{equation}
By $\theta_s$ we denote the tilt angles of the tangents of the polymer contour
with respect to {\boldmath $n$}. The thermal average is to be taken under the
constraint that the angle $\theta_0$ at the clamped end is kept fixed. Standard
methods \cite{sai67} yield for $\tilde f^{-1}_{\theta_0}$ the dashed curves in
Fig.~\ref{cumplot}. In general, Eq.~(\ref{ext}) predicts a polymer of contour
length $L$ to appear more floppy if $L_p\simeq L$ than in the high temperature
limit ($L_p\to0$) and the low temperature limit ($L_p\to\infty$), when it
contracts to a little ball or becomes a rigid rod respectively. In the flexible
limit, where the chain becomes an isotropic random coil, all curves fall
together and reproduce entropy elasticity. But for stiff chains, as a
consequence of the chain anisotropy, the force-extension relation depends
strongly on the value of $\theta_0$. Obviously, $\theta_0=0$ is an exceptional
case. Whereas for all other angles $\theta_0$ the ultimate asymptotic form of
$\tilde f_{\theta_0}$ in the stiff limit is $\kappa/\!L^3$, at $\theta_0=0$ the
force coefficient becomes $\tilde f_0 \simeq \kappa^2\!/k_BTL^4$; i.e.\ it is
second order in the bending modulus and diverges at low temperatures $T$. The
latter result was previously obtained in Ref.\ \cite{mac95}. Note that
$\theta_0$ is the angle between the applied force and the average orientation
of $\mbox{\boldmath $R$}_L$, i.e.\ for $\theta_0=0$ the force is parallel to
$\mbox{\boldmath $R$}_L$ on average. Especially for $T=0$ the force is pulling
or pressing on a rigid rod along its axis. In this limit the above expansion of
the Boltzmann factor breaks down and we encounter the so called Euler buckling
instability, i.e., the force-extension relation becomes highly nonlinear and
the force coefficient in linear response does not exist. This situation is well
known for foams and other cellular materials \cite{gib88}. If we require
$fR_L\ll k_BT$, the buckling instability is evaded by thermal undulations, and
we find a linear contribution to the force-extension relation (`thermodynamic
buckling'). But with decreasing temperature the volume fraction
$k_BTL/\!\kappa$ (stored thermal energy over bending energy) occupied by the
thermal undulations vanishes, and ultimately there remain no more undulations
to be bent or pulled out, hence the divergence of $\tilde f_0$ with $T^{-1}$.
We take as the force coefficient $\tilde f$ of a `general strand' of length $L$
in a random network the average of $\tilde f^{-1}_{\theta_0}$ over all
orientations $\theta_0$ \cite{kre95}:
\begin{equation}
\label{deltaR}
\tilde f^{-1}=L^2f_{\rm ext}(L/\!L_p)/k_BT\, ,
\end{equation}
with $f_{\rm ext}(x):=(2x-3+4e^{-x}-e^{-2x})/3x^2$ (the extension function).
This result is also shown in Fig.~\ref{cumplot}. In the stiff limit ($L\ll
L_p$) it reduces to $\tilde f \simeq \kappa/\!L^3$.
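Both limits of the extension function can be checked numerically. In the stiff limit $x=L/L_p\to 0$ one has $f_{\rm ext}(x)\simeq 2x/9$, so that $\tilde f^{-1}\simeq 2L^3/9\kappa$, while for $x\to\infty$ one has $f_{\rm ext}(x)\simeq 2/3x$, recovering entropy elasticity. A short Python check:

```python
import numpy as np

def f_ext(x):
    """Extension function f_ext(x) = (2x - 3 + 4e^{-x} - e^{-2x}) / (3x^2)."""
    x = np.asarray(x, dtype=float)
    return (2.0 * x - 3.0 + 4.0 * np.exp(-x) - np.exp(-2.0 * x)) / (3.0 * x**2)

# stiff limit (x -> 0): f_ext(x) ~ 2x/9
print(f_ext(0.01) / (2 * 0.01 / 9))        # close to 1
# flexible limit (x -> infinity): f_ext(x) ~ 2/(3x)
print(f_ext(1000.0) / (2 / (3 * 1000.0)))  # close to 1
```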
\begin{figure}
\narrowtext \hspace{0.005\columnwidth} \epsfxsize=0.9\columnwidth
\epsfbox{ext_lp.eps}
\caption{The deformation $\delta\!R_L$ of a wormlike chain of given length
$L$ to leading order in the applied force $f$ as a function of the
persistence length $L_p$. See explanation in the text.}
\label{cumplot}
\end{figure}
Now, to relate the force-extension relation of the general strand to the
observed elastic modulus of a polymer network in the rubber plateau regime we
proceed in close analogy to solid state physics. In the harmonic approximation
the elasticity tensor {\bf E} of a monatomic Bravais lattice is written as
$E_{ijkl}=-\sum_{\{R\}}R_iD_{jl}(\mbox{\boldmath $R$})R_k/2V$, with $V$ being
the volume of the primitive cell, $\mbox{\boldmath $R$}$ the lattice vectors
and {\bf D} the matrix of second derivatives of the interaction potential with
respect to lattice displacements. For an isotropic entangled polymer solution,
we take the analogue of the primitive cell to be an entanglement volume $V_e$
and the analogue of the primitive vectors to be the average distance $\xi_e$
between adjacent entanglements (in the embedding space). This scaling argument
suggests that the storage modulus in the plateau regime should (up to a
constant factor depending on the strain geometry) be given by $G^0\simeq
c_e\tilde f_e \xi_e^2$. Here $\tilde f_e$ is the force coefficient of polymer
sections of length $L_e$ between adjacent entanglements and $c_e\simeq
V_e^{-1}$ their concentration. (As far as self-avoidance effects can be
neglected, $L_e$ and $\xi_e$ are related by the Debye function.) The situation
may be visualized by external forces acting on contour elements distributed
with an average spacing $L_e$ along the polymer. Inserting $\tilde f$ from
Eq.\ (\ref{deltaR}) with $L_e$ substituted for $L$ into the formula for $G^0$
we finally arrive at the following explicit expression for the plateau modulus
of an entangled solution of wormlike chains,
\begin{equation}
\label{G0}
G^0 \simeq c_ek_BT
\frac{f_D(L_e/\!L_p)}{f_{\rm ext}(L_e/\!L_p)}\simeq
\left\{
\begin{array}{ll}
c_ek_BT & (L_e\gg L_p) \\
c_e\frac\kappa{L_e} & (L_e\ll L_p)\, .
\end{array}
\right.
\end{equation}
The entanglement length $L_e$ is obviously the crucial quantity in Eq.\
(\ref{G0}). In the literature several scaling ideas \cite{col90} have been
reported on how $L_e$ and $\xi_e$ may be derived from the known static
properties of a flexible polymer network. Note however that Eq.\ (\ref{G0})
holds independently of such considerations. For a homogeneously crosslinked
gel of semiflexible or rod-like polymers $\xi_e$ can essentially be identified
with the mesh size $\xi_m$ of the network. In this case Eq.~(\ref{G0}) predicts
$G^0\propto \kappa c^2$ in the stiff limit. We conjecture that for a solution
of wormlike chains of arbitrary stiffness one has to distinguish three
different regimes. We will treat the limiting cases of scale invariant chain
structure -- i.e.\ a virtually Gaussian or straight conformation respectively
-- in a very similar manner. The breaking of scale invariance due to Eq.\
(\ref{worham}) gives rise to an intermediate regime for chains with $L\approx
L_p$, which will be discussed subsequently.
For a weakly bending contour we have from Eq.\ (\ref{worham}) the scaling
relation $R_L^{\perp2}=2L^3/3L_p$ for the transverse amplitudes $R_L^\perp$ of
the largest bending undulations. If these amplitudes are smaller than the mesh
size $\xi_m$ of the surrounding polymer network, i.e.\ $\xi_m^2>2L^3/3L_p$,
then the bending undulations are not substantially perturbed and are supposed
to be rather irrelevant to the question of entanglement. In this case we should
thus be allowed to represent the polymers as straight (but not rigid) ``rods''
in our derivation of $L_e$. In the opposite extreme ($L,\xi_m\gg L_p$) of a
strongly coiled polymer conformation the polymer may be represented by a
fractal curve (or a freely jointed chain of ``blobs'', if screened
self-avoidance is to be included). We feel that the flexible case has been
described successfully earlier \cite{kav87} and will adapt this approach to
straight rods now. It is based on the crucial observation that polymer ends are
not contributing efficiently to long-lived entanglements. Namely, if
entanglements would depend on dangling ends, they could not be long-lived as
compared to unperturbed, free fluctuations of the polymer and hence could not
give rise to a rubber plateau. To be specific, we assume that an entanglement
requires a sufficient number of non-end neighboring polymer segments that on
average restrict the lateral degrees of freedom of a test chain. Consider a
sphere of radius $\xi_e$ around such a mean entanglement point. Then for a
given monomer concentration $c$ and volume fraction $cv$ the excluded volume in
this ``primitive cell of entanglement'' of volume $V_e = 4 \pi (\xi_e/2)^3 / 3$
is given by $c v V_e$. In order to achieve an entanglement one requires that a
certain amount of polymer material, ${\cal C}\pi (a/2)^2 L_e$, is contained in
the test volume. Here $a$ denotes the lateral diameter of the polymer. The
quantity ${\cal C}$ is a geometry factor which measures the amount of polymer
material in the test volume, if $K$ polymers cross the sphere around the test
chain \cite{footn}. We determine $L_e$ by equating the excluded volume (reduced
by the contribution coming from free ends) with the volume of polymer material
needed for an entanglement
\begin{equation}
\label{ansatz}
cv\left( 1-\frac{L_e}L\right)=\frac{3a^2L_e{\cal C}}{2\xi_e^3}.
\end{equation}
The implicit equation Eq.\ (\ref{ansatz}) can be solved analytically in the
random coil limit \cite{kav87} and for a straight conformation. For the latter
we find
\begin{equation}
\label{Le}
L_e=\frac L3\left\{ 1-2\sin\left[\frac13 \arcsin\left(1-\frac{27 L_e^{\infty
2}}{2L^2}\right)\right]\right\} \, ,
\end{equation}
where $L_e^\infty := a\sqrt{3{\cal C}/2cv}\simeq\sqrt{\cal C} \xi_m$ is the
entanglement length in the limit of infinitely long molecules. Eq.\ (\ref{Le})
describes an entanglement transition of the polymer solution characterized by a
cusp singularity of the entanglement length $L_e$ as a function of the polymer
length $L$ or the volume fraction $c v$. The phase boundary between the
entangled and the disentangled regime is given by either of the two equations
$c^\star(L) = 27 {\cal C} {\bar c} (L) / 4$, $L^\star (c) = 3 \sqrt{3}
L_e^{\infty} (c) / 2$, where $\bar c$ denotes the geometrical overlap
concentration ${\bar c} v = 3 a^2/ 2 L^2$. The value of the entanglement
length and the contour length at the cusp singularity are related by
$L_e^\star= 2 L^\star / 3$. The above results have some important
consequences for the rheological properties of semiflexible polymer solutions.
Upon taking the experimental value for the geometric constant ${\cal
C} = 9.1$ \cite{kav87} of flexible polymers to be a universal quantity (also
valid in the rod-limit) one estimates that the critical polymer length
for the solution to show entanglement has to be about eight times the mesh
size. Accordingly, the critical concentration $c^\star$ is predicted to be
almost two orders of magnitude larger than the overlap concentration ${\bar
c}$, $c^\star / {\bar c} = 27 {\cal C} / 4$. In the intermediate
concentration regime, $c^\star > c > {\bar c}$, there is already a significant
overlap of the semiflexible polymers but no long-lived entanglements leading to
a rubber plateau regime. In this disentangled phase the magnitude of the
storage modulus is supposed to show a linear concentration dependence.
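For a straight conformation the spatial distance between entanglements coincides with the contour length, so setting $\xi_e\simeq L_e$ in Eq.\ (\ref{ansatz}) reduces it to the cubic $u^2(1-u)=(L_e^\infty/L)^2$ with $u=L_e/L$, whose physical root is Eq.\ (\ref{Le}). This can be verified numerically; in the Python sketch below the clamping of the $\arcsin$ argument only guards against rounding at the cusp:

```python
import math

def le_closed_form(L, le_inf):
    """Trigonometric root for L_e as a function of contour length L and
    the infinite-chain entanglement length le_inf."""
    arg = 1.0 - 27.0 * le_inf**2 / (2.0 * L**2)
    arg = max(-1.0, min(1.0, arg))       # guard against rounding at the cusp
    return (L / 3.0) * (1.0 - 2.0 * math.sin(math.asin(arg) / 3.0))

def le_bisection(L, le_inf):
    """Direct numerical root of u^2 (1 - u) = (le_inf/L)^2 on the physical
    (increasing) branch 0 <= u <= 2/3, with u = L_e / L."""
    lam2 = (le_inf / L) ** 2
    lo, hi = 0.0, 2.0 / 3.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * mid * (1.0 - mid) < lam2:
            lo = mid
        else:
            hi = mid
    return L * 0.5 * (lo + hi)

L, le_inf = 10.0, 1.0                    # entangled phase: L > 3*sqrt(3)/2 * le_inf
print(le_closed_form(L, le_inf), le_bisection(L, le_inf))
# at the cusp L* = 3*sqrt(3)*le_inf/2, L_e* = 2L*/3 = sqrt(3)*le_inf
print(le_closed_form(1.5 * math.sqrt(3), 1.0))
```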
Finally we comment shortly on the intermediate case of a network of wormlike
chains with a mesh size $\xi_m$ smaller than the persistence length $L_p$ and
the amplitudes of the largest bending undulations $R_L^\perp$. To distinguish
it from the case discussed above, which could be called the ``rod-like''
regime, we will address it as the ``snake-like'' regime. It is characterized by
the property that all bending undulations with wavelength longer than a
critical wavelength, the ``deflection length'' \cite{odi83} $\lambda\simeq
(3L_p\xi_m^2/2)^{1/3}$, are perturbed by the network. For the snake-like regime
we thus identify the entanglement length $L_e^\infty$ for an infinitely long
polymer with $\lambda$. We expect the qualitative features of the entanglement
transition derived above for the rod-like regime to hold also in the snake-like
regime. The isotropic entanglement volume $V_e\simeq\xi_e^3$ in Eq.\
(\ref{ansatz}) has now to be replaced by $V_e\simeq\xi_eR^{\perp2}_{L_e}$. The
implicit dependence of $V_e$ on $L_p$ again reflects the broken scale
invariance in the snake-like regime. We leave the problem of the crossover
between the snake-like and the scale invariant cases for further investigation.
Now we turn to the comparison of our results with available experimental data.
We suppose that the existence of a disentangled phase above the overlap
concentration $\bar c$, as predicted by Eq.\ (\ref{ansatz}), is likely to
explain some discrepancies
between the Doi-Edwards theory \cite{doi92,doi75} for the rotational diffusion of
rigid rods in a semidilute solution and experimental data \cite{pec85}. The
Doi-Edwards theory is based on the assumption that for concentrations larger
than the overlap concentration ${\bar c}$ there is a separation of time scales.
One anticipates that each step of the rotational diffusion process is
determined by the constraint that each rod is confined to remain within an
angular range $\xi_T/\!L$ during the time it takes a rod to diffuse a distance
equal to its length. The tube of radius $\xi_T \propto 1/cL$ is assumed to
impose a long-lived topological restriction on the motion of the rods.
However, as we have argued above, long-lived entanglements emerge only at a
much higher concentration $c^\star = 27 {\cal C}\bar c/4$. This would explain
why the onset of entanglement as defined by a marked decrease in the rotational
diffusion coefficient occurs at a concentration $c_{\rm exp}$ fully two orders
of magnitude above the overlap concentration \cite{pec85}.
As an important practical application of the above ideas we already mentioned
actin, a semiflexible macromolecule, which is of major biological interest but
is also an almost ideal model system for physicists \cite{kas95}. It is well
suited to test our ideas, because it is characterized by a large ratio $L_p/a$
($\simeq 10^3$) and thus by a broad semidilute regime, so that the wormlike
chain model applies without modification over several orders of magnitude in
concentration. The average length of the molecules can be adjusted by adding so
called actin binding proteins such as gelsolin or severin. Existing data on
actin \cite{mul91,jan94,jan88} give only an incomplete picture of the rather
complex situation sketched above but seem to support our results. For short
(rod-like) filaments a length dependence of the plateau modulus near the
entanglement transition has been observed \cite{jan94}, which is qualitatively
well described by Eq.\ (\ref{G0}) and (\ref{Le}), but is somewhat smeared out
(probably as an effect of sample polydispersity). In the entangled phase Eq.\
(\ref{G0}) together with Eq.\ (\ref{Le}) predicts a plateau modulus $G^0\propto
\kappa c^2$ for the rod-like case far from the entanglement transition. Near
the transition the concentration dependence is enhanced according to Eq.\
(\ref{Le}). In the snake-like case as defined above we have $L_e\simeq
(L_p\xi_m^2)^{1/3}$ with $\xi_m\propto c^{-1/2}$ and hence $G^0\propto
c^{5/3}\kappa^{1/3}(k_BT)^{2/3}$. In experiments with actin the exponent of the
observed power law for $G^0(c)$ ranges from $1.7$ to $2.3$ in the entangled
regime \cite{mul91,mac95,jan88}. As a critical test of our ideas we suggest a
comparison of the plateau modulus for actin networks with and without
tropomyosin, which is known to cause a considerable stiffening of actin
filaments. The rod-like regime and the snake-like regime should be readily
discernible due to their markedly different dependence of $G^0$ on $\kappa$.
\newline In summary, we have derived the force-extension relation for a
wormlike chain and discussed some of its consequences for the viscoelastic
properties of entangled solutions and gels of semiflexible polymers.
Especially, we analyzed the entanglement transition and predicted various
exponents for the dependence of the plateau modulus $G^0$ on concentration and
bending rigidity.
\newline It is a pleasure to acknowledge helpful discussions
with Jan Wilhelm, Markus Tempel and Erich Sackmann. We are also thankful to the
authors of Ref.\ \cite{mac95} for making a preprint of their work available to
us. Our work has been supported by the Deutsche Forschungsgemeinschaft (DFG)
under Contract No.\ Fr.\ 850/2 and No.\ SFB 266.
\section{Introduction} \label{sec:intro}
In many real world problems, optimization decisions have to be made with limited information. Whether it is a static optimization or dynamic control problem, obtaining detailed and accurate information about the problem or system can often be a costly and time consuming process. In some cases, acquiring extensive information on system characteristics may be simply infeasible. In others, the observed system may be so nonstationary that by the time the information is obtained, it is already outdated due to system's fast-changing nature. Therefore, the only option left to the decision-maker is to develop a strategy for collecting information efficiently and choose a model to estimate the ``missing portions'' of the problem in order to solve it satisfactorily and according to a given objective.
To make the discussion more concrete, consider the problem of maximizing a (Lipschitz) continuous \textit{nonconvex} objective function, which is unknown except from its value at only a small number of data points. The decision maker may have no a priori information about the function and start with zero data points. Furthermore, only a limited number of --possibly noisy-- observations may be available before making a decision on the maximum value and its location. The function itself, however, remains unknown even after the decision is made. \textit{What is the best strategy to address this problem}?
The decision making framework presented in this paper captures the posed problem by taking into account the information collection (observation), estimation (regression), and (multi-objective) optimization aspects in a holistic and structured manner. Hence, the framework enables the decision maker to solve the problem by expressing preferences for each aspect quantitatively and concurrently. It explicitly incorporates many concepts that have been implicitly considered by heuristic schemes, and builds upon many results from seemingly disjoint but relevant fields such as information theory, machine learning, and optimization and control theories. Specifically, it combines concepts from these fields by
\begin{itemize}
\item explicitly quantifying the information acquired using the entropy measure from information theory,
\item modeling and estimating the (nonconvex) function or (nonlinear) system adopting a Bayesian approach and using Gaussian processes as a state-of-the-art regression method,
\item using an iterative scheme for observation, learning, and optimization,
\item capturing all of these aspects under the umbrella of a multi-objective ``meta'' optimization formulation.
\end{itemize}
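To make the second and third items concrete, the following minimal Python sketch performs one iteration of the observe-learn-optimize loop: a Gaussian-process posterior is fit to a handful of observations of a toy nonconvex function, and the next observation point is chosen where the entropy of the Gaussian predictive distribution is largest. The kernel, length scale, and test function are our illustrative choices, not part of the framework's specification:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential covariance between point sets a and b (unit variance)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at query points xq."""
    K = rbf(xs, xs) + noise * np.eye(xs.size)
    Ks = rbf(xs, xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ys
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)
    return mu, var

# one observe-learn-optimize iteration on a toy nonconvex objective
f = lambda x: np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)
xs = np.array([0.1, 0.5, 0.9])        # three costly observations so far
ys = f(xs)
xq = np.linspace(0.0, 1.0, 201)
mu, var = gp_posterior(xs, ys, xq)
H = 0.5 * np.log(2.0 * np.pi * np.e * (var + 1e-12))  # Gaussian predictive entropy
x_next = xq[np.argmax(H)]             # observe next where entropy is largest
print(x_next)
```

Since the entropy of a Gaussian is monotone in its variance, this particular acquisition rule samples where the posterior is most uncertain; richer objectives trade such exploration against exploitation of the current maximum.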
Although methods and approaches from machine (statistical) learning are heavily utilized in this framework, the problem at hand is very different from many classical machine learning ones, even in its learning aspect. In most classical application domains of
machine learning, such as data mining, computer vision, or image and voice recognition, the difficulty is often in handling a significant amount of data rather than a lack of it. Many methods such as Expectation-Maximization (EM) inherently make this assumption, with the exception of ``active learning'' schemes \cite{Bishopbook}. Information
theory plays an important role in evaluating scarce (and expensive) data and developing strategies for obtaining it. Interestingly, data scarcity at the same time converts the disadvantages of some methods into advantages, e.g., the scalability problem of Gaussian processes.
It is worth noting that the class of problems described here is encountered in practice much more frequently than it may first seem. For example, the class of black-box methods known as ``kriging'' \cite{kriging1} has been applied to such problems in geology, mining, and hydrology since the mid-1960s. In addition, the proposed solution framework is applicable to a wide variety of fields due to its fundamental nature. One example is decentralized resource allocation in networked and complex systems, e.g. wired and wireless networks, where parameters change quickly and global information on network characteristics is not available at the local decision-making nodes. Another example is security-related decisions, where opponents spend a conscious effort to hide their actions. A related area is security and information technology risk management in large-scale organizations, where acquiring information on individual subsystems and processes can be very costly. Yet another example application is in biological systems, where individual organisms or subsystems operate autonomously (even if they are part of a larger system) under limited local information.
\section{Problem Definition and Approach} \label{sec:problem}
A concrete definition of the motivating problem mentioned in the introduction is helpful for describing the multiple aspects of the limited-information decision making framework. Without loss of generality, let
$$\mathcal X \subseteq \Psi \subset \mathbb R^{d}$$
be a nonempty, convex, and compact (closed and bounded) subset of the original problem domain $\Psi$ of $d$ dimensions. The original domain $\Psi$ does not have to be convex, compact, or even fully known. However, adopting a ``divide and conquer'' approach, the subset $\mathcal X$ provides a reasonable starting point.
Define next the objective function to be maximized
$$f: \mathcal X \rightarrow \mathbb R, $$
which is unknown except on a finite number of (possibly imperfectly) observed points. As a simplifying
assumption, let $f$ be Lipschitz continuous on $\mathcal X$. One of the main distinguishing characteristics of this problem is the limitation on the set of observations
$$\Omega_n:=\{x_1,\ldots,x_n \, : x_i \in \mathcal X\; \forall i,\; n \geq 1 \},$$
due to cost of obtaining information or non-stationarity of the underlying system. Assume for now that the cost of observing the value of the objective function $f(x)$ is the same for any $x \in \mathcal X$. Then, a basic search problem is defined as follows:
\begin{prob}[\textit{Basic Search Problem}] \label{prob:search1}
Consider a Lipschitz-continuous objective function $f: \mathcal X \rightarrow \mathbb R$ on the $d$-dimensional nonempty, convex, and compact set $\mathcal X \subset \mathbb R^{d}$. The function is unknown except on a finite number of observed data points. What is the best search strategy
$$\Omega_N:=\{x_1,\ldots,x_N \, : x_i \in \mathcal X\; \forall i,\; N \geq 1 \}$$
that solves
$$ \max_{ \Omega_N} f(x) ,$$
for a given $N$?
\end{prob}
The number of observations, $N$, in Problem~\ref{prob:search1} may be imposed by the nature of the specific application domain. In many problems where there is no time constraint, adopting an iterative (one-by-one) approach, and hence choosing $N=1$, is clearly beneficial,
as it allows the incoming new information to be used at each step. Alternatively, the assumption of equal observation cost can be relaxed and formulated as a constraint
$$ \sum_{x \in \Omega_n} c_o(x) \leq C, $$
where $c_o(x): \mathcal X \rightarrow \mathbb R $ is the observation cost function, and the scalar $C$ is the total ``exploration budget''. It is also possible to define this cost iteratively based on the (distance from) previous observation, e.g. $c_o(x_n,x_{n-1})$. In such cases, a location-based iterative search scheme can be considered.
The simplest (both conceptually and computationally) strategy to solve Problem~\ref{prob:search1} is random search on the domain $\mathcal X$. As such, no attempt is made to ``learn'' the properties of the function $f$. Unless $f$ is
``algorithmically random'' \cite{algobook}, which is rarely the case, this strategy wastes the information collected on $f$. A slightly more complicated and very popular set of strategies combines random search with simple modeling of the function through gradient methods. In this case, the collected information is used to model $f$ rudimentarily, using derived gradients to ``define slopes'' in a heuristic manner. These slopes of $f$ are then climbed step by step to find a local maximum, after which the search algorithm randomly jumps to another location. It is also possible to randomize the gradient-climbing scheme for additional flexibility \cite{simannealing}.
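As a simple point of reference, the pure random-search strategy described above can be sketched as follows. This is only an illustration; the toy objective function and all names are made up for the example.

```python
import random

def random_search(f, lower, upper, budget, seed=0):
    """Pure random search: draw `budget` points uniformly on [lower, upper]
    and return the best observed point and value. No information about f
    is carried over between samples."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(budget):
        x = rng.uniform(lower, upper)
        y = f(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Illustrative nonconvex toy objective (not from the paper).
f = lambda x: -(x - 2.0) ** 2 + (0.5 if x > 4.0 else 0.0)
x_hat, y_hat = random_search(f, 0.0, 6.0, budget=50)
```

Every observation here is spent independently of the others, which is exactly the inefficiency the model-based framework below is designed to avoid.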
The framework presented in this paper takes one further step and \textbf{explicitly} models the (entire) objective function $f$ (on the set $\mathcal X$) using the information collected, instead of heuristically describing only the slopes. The function $\hat f$, which models, approximates, and estimates $f$, belongs to a certain class of functions such that $\hat f \in \mathcal F$. The selection and properties of this class are based on the ``a priori'' information available and can be interpreted as the ``world view'' of the decision maker. These properties can often be expressed using meta-parameters, which are then updated based on the observations through a separate optimization process. Likewise, a slower time-scale process can be used for model selection if processing capabilities permit a multi-model approach.
This model-based search process, which lies at the center of the framework, is fundamentally a manifestation of the Bayesian approach \cite{MacKaybook}. It first imposes explicit and a priori modeling assumptions by choosing $\hat f$ from a certain class of functions, $\mathcal F$, and then infers (learns, updates) $\hat f$ in a structured manner as more information becomes available through observations.
From a computational point of view, the decision making framework with limited information lies at one end of the computation-versus-observation spectrum, while random search is at the opposite end. The framework tries to utilize each piece of information to the maximum possible extent, almost regardless of the computational cost. The underlying assumption here is: \textbf{observation is very costly whereas computation is rather cheap}. This assumption is not only valid for a wide variety of problems from different fields, ranging from networking and security to economics and risk management, but is also inspired by biological systems. For many biological organisms, from single cells to human beings, operating close to this end of the computation-observation spectrum is more advantageous than doing random search.
When doing random search on the domain $\mathcal X$, at each stage, i.e. given the previous observations, each remaining candidate data point provides an equivalent amount of information. However, this is not the case when doing model-based search. Depending on the model adopted and the information previously collected, different unexplored points provide different amounts of information. This information can be exactly quantified using the definitions of entropy and information from the field of (Shannon) information theory. Accordingly, the scalar quantity $\mathcal I(\hat f, \Omega_n)$ denotes the aggregate information obtained from the set of observations $\Omega_n$ within the model represented by $\hat f$. A related issue is the reliability and possibly noisy nature of observations, which will be discussed in further detail in the next section.
An extension of Problem~\ref{prob:search1} that captures the aspects discussed above is defined next.
\begin{prob}[\textit{Model-based Search Problem}] \label{prob:search2}
Let $f: \mathcal X \rightarrow \mathbb R$ be an objective function on the $d$-dimensional nonempty, convex, and compact set $\mathcal X \subset \mathbb R^{d}$, which is unknown except on a finite number of observed data points. Further, let $\hat f(x)$ be an estimate of the objective function obtained using an a priori model and observed data. What is the best search strategy $\Omega_N:=\{x_1,\ldots,x_N \, : x_i \in \mathcal X\; \forall i,\; N \geq 1 \}$ that solves the multi-objective problem with the following components?
\begin{itemize}
\item \textit{Objective 1:} $ \max_{\Omega_N} f(x) \text{ given } \hat f(x)$
\item \textit{Objective 2:} $ \arg \min_{\Omega_N} R\left( f(x), \hat f(x) \right) , \; \hat f \in \mathcal F$
\item \textit{Objective 3:} $ \max_{\Omega_N} \mathcal I(\hat f, \Omega_N)$
\end{itemize}
Here, $R(\cdot,\cdot)$ is a risk or expected loss function quantifying the mismatch between the actual and estimated functions on the observation data \cite{GPbook}. The scalar quantity $\mathcal I$ is the aggregate information obtained from the set of observations $\Omega_N$ within the model represented by $\hat f$. The cardinality $N$ of $\Omega_N$ can be either given, e.g. $N=1$, or defined through an additional constraint $ \sum_{x \in \Omega_N} c_o(x) \leq C$, where $c_o(x): \mathcal X \rightarrow \mathbb R $ is the observation cost function and the scalar $C$ is the total ``exploration budget''.
\end{prob}
\begin{figure}[htp]
\centering
\includegraphics[width=0.6\columnwidth]{abstractgoals1.eps}
\caption{The three fundamental aspects of decision making with limited information.}
\label{fig:goals1}
\end{figure}
It is important to observe here that the three objectives defined in Problem~\ref{prob:search2} are (almost) independent of and orthogonal to each other, despite being closely related. \textit{Objective 1} purely aims to maximize the unknown objective function $f$ using the best estimate (model) $\hat f$. \textit{Objective 2} focuses on minimizing the error between the estimate $\hat f$ and the real unknown function $f$ based on the observations made. \textit{Objective 3} tries to maximize the amount of information provided by each (costly) observation or experiment. It is worth noting that
\textit{Objective 3} is formulated independently of \textit{Objective 2}; in other words, exploration is done independently of estimation. In contrast, ensuring a balance between \textit{Objectives 1} and \textit{2} is necessary for the solution to be robust. These objectives and the fundamental aspects of decision making with limited information are visually depicted in Figure~\ref{fig:goals1}.
\begin{table}[htp]
\caption{Fundamental Trade-offs}
\begin{center}
\begin{tabular}{l|c|l}
\hline \hline
Exploration & & Exploitation \\[1 ex]
Observation & versus & Computation \\[1 ex]
Robustness & & Optimization \\ [1 ex]
\hline \hline
\end{tabular}
\label{tbl:tradeoff}
\end{center}
\end{table}
There are multiple trade-offs inherent to this problem, as listed in Table~\ref{tbl:tradeoff}. The first one, exploration versus exploitation, puts exploration, i.e. obtaining more observations, against exploitation, i.e. trying to achieve the given objective. Observation versus computation captures the trade-off between building sophisticated models that use the available information to the fullest extent and making more observations. Robustness versus optimization puts risk avoidance against optimization with respect to the original objective, as in exploitation.
\section{Methodology} \label{sec:methods}
This section presents the methods utilized within the framework to address the problem defined in the previous section. First, the regression model and Gaussian Processes (GP) are presented. Subsequently, the modeling and measurement of information is discussed based on (Shannon) information theory.
\subsection{Regression and Gaussian Processes (GP)} \label{sec:gp}
Problem~\ref{prob:search2} presented in the previous section involves inferring or learning the function $f$ using the set of observed data points. This is known as the \textit{regression} problem in machine learning and is a supervised learning method, since the observed data also constitutes the learning data set. This learning process involves the selection of a ``model'', where the learned function $\hat f$ is, for example, expressed in terms of a set of parameters and specific basis functions, and at the same time the minimization of an error measure between the functions $f$ and $\hat f$ on the learning data set. Gaussian processes (GP) provide a nonparametric alternative to this but follow the same idea in spirit.
The main goal of regression involves a trade-off. On the one hand, it tries to minimize the \textit{observed} error between $f$ and $\hat f$. On the other, it tries to infer the ``real'' shape of $f$ and make good estimates using
$\hat f$ even at unobserved points. If the former is overly emphasized, then one ends up with ``overfitting'', which means $\hat f$ follows $f$ closely at observed points but has weak predictive value at unobserved ones. This delicate balance is usually achieved by weighing the prior ``beliefs'' on the nature of the function, captured by the model (basis functions), against fitting the model to the observed data.
This paper focuses on Gaussian Processes \cite{GPbook} as the regression method of choice within the developed framework, without loss of generality. There are multiple reasons behind this preference. Firstly, GP provides an elegant mathematical method for easily combining many aspects of the framework. Secondly, being a nonparametric method, GP eliminates any discussion of model degree. Thirdly, it is easy to implement and understand, as it is based on well-known Gaussian probability concepts. Fourthly, noise in observations is immediately taken into account if it is modeled as Gaussian. Finally, one of the main drawbacks of GP, namely being computationally heavy, does not really apply to the problem at hand, since the amount of data available is already very limited.
It is not possible to present here a comprehensive treatment of GP. Therefore, a very rudimentary overview is provided next within the context of the decision making problem. Consider a set of $M$ data points
$$\mathcal D=\{x_1, \ldots, x_M\},$$
where each $x_i \in \mathcal X$ is a $d$-dimensional vector, and the corresponding vector of scalar values is $f(x_i), \; i=1,\ldots,M$. Assume that the observations are distorted by zero-mean Gaussian noise $n \sim \mathcal N(0,\sigma)$ with variance $\sigma$. Then, each resulting observation is Gaussian, $y=f(x)+n \sim \mathcal N(f(x),\sigma)$.
A GP is formally defined as a collection of random variables, any finite number of which have a joint Gaussian distribution. It is completely specified by its mean function $m(x)$ and covariance function $C(x,\tilde x)$, where
$$ m(x)=E[\hat f(x)] \text{ and } C(x,\tilde x)=E[(\hat f(x)-m(x))(\hat f(\tilde x)-m(\tilde x))],
\; \forall x, \tilde x \in \mathcal D. $$
Let us for simplicity choose $m(x)=0$. Then, the GP is characterized entirely by its covariance function $C(x,\tilde x)$. Since the noise in observation vector $y$ is also Gaussian, the covariance function can be defined as the sum of a \textit{kernel function} $Q (x,\tilde x)$ and the diagonal noise variance
\begin{equation} \label{e:gcov}
C(x,\tilde x) = Q (x,\tilde x) + \sigma I, \; \forall \, x, \tilde x \in \mathcal D ,
\end{equation}
where $I$ is the identity matrix. While it is possible to choose here any (positive definite) kernel $Q(\cdot,\cdot)$, one classical choice is
\begin{equation} \label{e:gaussiankernel}
Q(x,\tilde x)=\exp \left[-\frac{1}{2}\norm{x -\tilde x}^2 \right].
\end{equation}
Note that GP makes use of the well-known \textit{kernel trick} here by representing an infinite dimensional continuous function
using a (finite) set of continuous basis functions and associated vector of real parameters in
accordance with the \textit{representer theorem} \cite{schoelkopfbook}.
The (noisy)\footnote{The special case of perfect observation without noise is handled the same way as long as the kernel function $Q(\cdot,\cdot)$ is positive definite.} training set $(\mathcal D, y)$ is used to define the corresponding GP, $\mathcal{GP} (0,C(\mathcal D))$, through the $M \times M$ covariance function $C(\mathcal D)=Q+\sigma I$. The conditional Gaussian distribution of the value $\bar y$ at any point $\bar x \in \mathcal X, \bar x \notin \mathcal D$, outside the training set, given the training data $(\mathcal D, y)$, can be computed as follows. Define the vector
\begin{equation} \label{e:k}
k(\bar x)=[Q(x_1,\bar x), \ldots Q(x_M,\bar x)]
\end{equation}
and scalar
\begin{equation} \label{e:kappa}
\kappa=Q(\bar x,\bar x)+\sigma.
\end{equation}
Then, the conditional distribution $p(\bar y | y)$ that characterizes the $\mathcal{GP} (0,C)$ is a Gaussian $\mathcal N(\hat f,v)$ with mean $\hat f$ and variance $v$,
\begin{equation} \label{e:gp1}
\hat f(\bar x)=k^T C^{-1} y \text{ and } v(\bar x)=\kappa - k^T C^{-1} k .
\end{equation}
This is a key result that defines GP regression as the mean function $\hat f(x)$ of the Gaussian distribution and provides a prediction of the objective function $f(x)$. At the same time, it belongs to the well-defined class $\hat f \in\mathcal F$, which is the set of all possible sample functions of the GP
$$\mathcal F := \{\hat f(x): \mathcal X \rightarrow \mathbb R \text{ such that } \hat f \in \mathcal{GP} (0,C(\mathcal D)),\; \forall \mathcal D, \, C \} ,$$
where $ C(\mathcal D)$ is defined in (\ref{e:gcov}) and $\mathcal{GP}$ through (\ref{e:k}), (\ref{e:kappa}), and (\ref{e:gp1}), above.
Furthermore, the variance function $v(x)$ can be used to measure the uncertainty level of the predictions provided by $\hat f$, which will be discussed in the next subsection.
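The prediction equations above can be illustrated with a minimal sketch, assuming the Gaussian kernel (\ref{e:gaussiankernel}) and a toy one-dimensional data set; all numerical values and names below are illustrative, not from the paper.

```python
import numpy as np

def sq_exp_kernel(a, b):
    # Squared-exponential kernel Q(x, x~) = exp(-||x - x~||^2 / 2).
    return np.exp(-0.5 * np.sum((a - b) ** 2))

def gp_predict(X, y, x_new, sigma=0.01):
    # Posterior mean f_hat = k^T C^{-1} y and variance v = kappa - k^T C^{-1} k,
    # with C = Q + sigma * I built from the training inputs X.
    M = len(X)
    Q = np.array([[sq_exp_kernel(X[i], X[j]) for j in range(M)]
                  for i in range(M)])
    C = Q + sigma * np.eye(M)
    k = np.array([sq_exp_kernel(xi, x_new) for xi in X])
    kappa = sq_exp_kernel(x_new, x_new) + sigma
    C_inv = np.linalg.inv(C)
    f_hat = k @ C_inv @ y
    v = kappa - k @ C_inv @ k
    return f_hat, v

# Toy one-dimensional data set (illustrative values only).
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 0.0])
mean, var = gp_predict(X, y, np.array([1.0]))
```

At the observed point $x=1$ the posterior mean stays close to the observed value and the predictive variance drops to roughly the noise level, as expected.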
\subsection{Quantifying Information in Observations} \label{sec:obsinfo}
In the framework presented, each observation provides a data point to the regression problem (estimating $f$ by constructing $\hat f$), as discussed in the previous subsection. Many works in the learning literature consider the ``training'' data used in regression to be available (all at once or sequentially) and do not discuss the possibility of the decision maker influencing or even optimizing the data collection process. The \textit{active learning} problem defined in Section~\ref{sec:problem}, however, requires exactly that: addressing the question of how to quantify the information obtained and optimize the observation process. Following
the approach discussed in \cite{MacKaydataselect,MacKaybook}, the framework here provides a precise answer to this question.
Making any decision on the next (set of) observations in a principled manner necessitates first \textit{measuring the information obtained from each observation within the adopted model}. It is important to note that the information measure here is dependent on the chosen model. For example, the same observation provides a different amount of information to a random search model than a GP one.
Shannon information theory readily provides the necessary mathematical framework for measuring the information content of a variable. Let $p$ be a probability distribution over the set of possible values of a discrete random variable $A$. The \textbf{entropy} of the random variable is given by
$H(A)=\sum_i p_i \log_2 (1/p_i)$, which quantifies the amount of uncertainty. Then, the information obtained from an observation on the variable, i.e. reduction in uncertainty, can be quantified simply by taking the difference of its initial and final entropy,
$$\mathcal I=H_0 - H_1. $$
It is important here to avoid the common conceptual pitfall of equating entropy to information itself as it is sometimes done in communication theory literature.\footnote{Since this issue is not of great importance for the class of problems considered in communication theory, it is often ignored. However, the difference is of conceptual importance in this problem. See
\url{http://www.ccrnp.ncifcrf.gov/~toms/information.is.not.uncertainty.html} for a detailed discussion.}
Within this framework, (Shannon) \textit{information is defined as a measure of the decrease of uncertainty after (each) observation (within a given model)}. This can be best explained with the following simple example.
\subsubsection{Example: Bisection} \label{exp:bisection}
Choose a number between $1$ and $64$ randomly with uniform probability (prior). What is the best search strategy for finding this number? Let the random variable $A$ represent this number. In the beginning, the entropy of $A$ is
$$H_0(A)=\sum_{i=1}^{64} \dfrac{1}{64} \log_2 \left( 64 \right)=6 \text{ (bits)}.$$
The information maximization problem is defined as
$$ \max \mathcal I= \max \, (H_0 - H_1) = \min H_1 ,$$
since $H_0$, the entropy before the action (obtaining information), is constant. The entropy $H_1$ is the one after information is obtained, and hence is directly affected by the specific action chosen. Now, define the action as setting a threshold $1 < t <64$ to check whether the chosen number is smaller or larger than this threshold $t$. To simplify the analysis, consider a continuous version of the problem by defining $p$ as the probability of the chosen number being less than the threshold. The expected entropy after the comparison is then
$$H_1 = p \log_2 (64p) + (1-p)\log_2 \left(64(1-p)\right) = 6 + p \log_2 (p) + (1-p) \log_2 (1-p).$$
Thus, in this uniform prior case, the problem simplifies (up to the additive constant) to
$$ \min_p H_1= \min_p \; p \log(p) + (1-p) \log (1-p),$$
which has the derivative
$$ \dfrac{d H_1}{d p}= \log(p) - \log(1-p) .$$
Clearly, $p^*=0.5$ is the global minimizer, which roughly corresponds to the threshold $t=32$ (ignoring quantization and boundary effects). Thus, bisection from the middle is the optimal search strategy for the uniform prior. In this example, the number can be found in the worst case in $6$ steps, each providing one bit of information. Nonuniform probabilities (priors) can be handled in a similar way.
If this search process (bisection) is repeatedly applied without any feedback, then it results in the optimal quantization of the search space, both in the uniform case above and for nonuniform probabilities. If feedback is available, i.e. one learns after each bisection whether the number is larger or smaller than the boundary, then bisection is, as shown above, the best search strategy.
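The bisection strategy of this example can be sketched as follows; the function name and interface are illustrative.

```python
def bisection_search(target, lo=1, hi=64):
    """Find `target` in {lo, ..., hi} by repeatedly splitting the remaining
    interval in half; each comparison yields exactly one bit of information."""
    steps = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if target <= mid:   # feedback: is the number in the lower half?
            hi = mid
        else:
            lo = mid + 1
        steps += 1
    return lo, steps
```

Since $64 = 2^6$, every target is located in exactly $6$ comparisons, matching the $6$ bits of initial entropy computed above.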
\section{Model} \label{sec:model}
The model adopted in the framework for decision making with limited information builds on the methods presented in the previous section and addresses the problem introduced in Section~\ref{sec:problem}. The model consists of three main parts: observation, update of the GP for regression, and optimization to determine the next action. These three steps, shown in Figure~\ref{fig:model1}, are taken iteratively to achieve the objectives in Problem~\ref{prob:search2}. As a result of its iterative nature, this approach can be considered similar in a sense to the well-known Expectation-Maximization algorithm \cite{Bishopbook}.
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\columnwidth]{liminfopt1.eps}
\caption{The main parts of the underlying model of the decision making framework.}
\label{fig:model1}
\end{figure}
Observations, given that they are a scarce resource in the class of problems considered, play an important role in the model. Uncertainties in the observed quantities can be modeled as additive noise. Likewise, the properties (variance or bias) of the additive noise can be used to model the reliability of (and bias in) the data points observed. GPs provide a straightforward mathematical structure for incorporating these aspects into the model under some simplifying assumptions.
The set of observations collected provides the (supervised) training data for GP regression, which is used to estimate the characteristics of the function or system at hand. This process relies on the GP methods described in Subsection~\ref{sec:gp}. Thus, at each iteration an up-to-date description of the function or system is obtained
based on the latest observations. Specifically, $\hat f$ provides an estimate of the original function $f$.\footnote{See \cite[Chap 7.2]{GPbook} for a discussion of the asymptotic analysis of GP regression. It should be noted, however, that asymptotic properties are of little relevance to the problem at hand.}
Assuming an additive Gaussian noise model, the noise variance $\sigma$ can be used to model uncertainties, e.g. older and noisy data resulting in higher $\sigma$ values.
The final and most important part of the model provides a basis for determining the next action after an optimization process that takes into account all three objectives in Problem~\ref{prob:search2}. The information aspect of these objectives is already discussed in Subsection~\ref{sec:obsinfo}. An important issue here is the fact that there are infinitely many candidate points in this optimization process, but in practice only a finite collection of them can be evaluated.
\subsection{Sampling Solution Candidates}
When making a decision on the next action through multi-objective optimization, there are (infinitely) many candidate points. A pragmatic solution to the problem of finding solution candidates is to (adaptively) sample the problem domain $\mathcal X$ to obtain the set
$$\Theta:=\{x_1, \ldots, x_T : x_i \in \mathcal X, \, x_i \notin \mathcal D, \; \forall i \}$$
that does not overlap with known points. In low (one or two) dimensions, this can be easily achieved through grid
sampling methods. In higher dimensions, (Quasi) Monte Carlo schemes can be utilized. For large problem domains, the current domain of interest $\mathcal X$ can be defined around the last or most promising observation in such a way that such a sampling is computationally feasible.
Likewise, multi-resolution schemes can also be deployed to increase computational efficiency.
Although such a solution may seem restrictive at first glance, it is in spirit not very different from other schemes, such as simulated annealing, that are widely used to address nonconvex optimization problems. A major difference
between this and other schemes, however, is that candidate sampling and evaluation are done here ``a priori'' because experimentation is costly, whereas other methods rely on an abundance of information.
A natural question that arises is whether, and under what conditions, such a sampling method gives satisfactory results.
The following result from \cite{tempo-sampling,tempobook} provides an answer to this question in terms of the number of samples required.
\begin{thm} \label{thm:sampling}
Define a multivariate function $f(x)$ on the convex, compact set $\mathcal X$, which admits the maximum $x^*=\arg \max_{x \in \mathcal X} f(x)$. Based on a set of $N$ random samples $\Theta=\{x_1, \ldots, x_N: x_i \in \mathcal X \; \forall i \}$ from the entire set $\mathcal X$, let $\hat x:= \arg \max_{x \in \Theta} f(x)$ be an estimate of the maximum $x^*$.
Given an $\varepsilon>0$ and $\delta>0$, the minimum number of random samples $N$ which guarantees that
$$ Pr\left( Pr[f(x^*)>f(\hat x)] \leq \varepsilon \right) \geq 1-\delta,$$
i.e. the probability that ``the probability of the real maximum surpassing the estimated one is less than $\varepsilon$'' is larger than $1-\delta$, is
$$ N \geq \dfrac{\ln (1/ \delta)}{ \ln \left( 1/ (1-\varepsilon) \right)} .$$
Furthermore, this bound is tight if the function $f$ is continuous on $\mathcal X$.
\end{thm}
It is interesting and important to note that this bound is independent of the sampling distribution used (as long as it covers the whole set $\mathcal X$ with nonzero probability), the function $f$ itself, as well as the properties and dimension of the set $\mathcal X$.
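For illustration, the sample-complexity bound of Theorem~\ref{thm:sampling} can be evaluated numerically; the function name below is ours.

```python
import math

def min_samples(eps, delta):
    """Minimum N guaranteeing Pr( Pr[f(x*) > f(x_hat)] <= eps ) >= 1 - delta,
    i.e. N >= ln(1/delta) / ln(1/(1 - eps))."""
    return math.ceil(math.log(1.0 / delta) / math.log(1.0 / (1.0 - eps)))

# The bound depends only on eps and delta, not on f or the dimension of X.
N = min_samples(0.01, 0.01)   # eps = delta = 0.01
```

For instance, $\varepsilon=\delta=0.05$ requires only $59$ samples, and $\varepsilon=\delta=0.01$ requires $459$, regardless of the dimension $d$.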
\subsection{Quantifying Information in GP}
The information measurement and GP approaches in Section~\ref{sec:methods} can be directly combined. Let the zero-mean multivariate Gaussian (normal) probability distribution be denoted as
\begin{equation} \label{e:multivargauss}
p(x)=\dfrac{1}{\sqrt{(2\pi)^{d} |C_p(x)|}} \exp \left( -\frac{1}{2}[x-m]^T C_p(x)^{-1} [x-m]\right) ,\; x \in \mathcal X,
\end{equation}
where $|\cdot|$ is the determinant, $m$ is the mean (vector) as defined in (\ref{e:gp1}), and $C_p(x)$ is the covariance matrix as a function of the newly observed point $x \in \mathcal X$ given by
\begin{equation} \label{e:covx}
C_p(x)=\left[
\begin{array}{cccc}
& & & \\
& C(\mathcal D) & & k(x) \\
& & & \\
& k^T(x) & & \kappa
\end{array}
\end{equation}
Here, the vector $k(x)$ is defined in (\ref{e:k}) and $\kappa$ in (\ref{e:kappa}), respectively. The matrix $C(\mathcal D)$ is the covariance matrix based on the training data $\mathcal D$ as defined in (\ref{e:gcov}).
The entropy of the multivariate Gaussian distribution (\ref{e:multivargauss}) is \cite{entropygaussian}
$$ H(x)=\dfrac{d}{2}+\dfrac{d}{2}\ln(2\pi)+\dfrac{1}{2} \ln |C_p(x)| ,$$
where $d$ is the dimension. Note that, this is the entropy of the GP estimate at the point $x$ based on the available data $\mathcal D$. The aggregate entropy of the function on the region $\mathcal X$ is given by
\begin{equation} \label{e:aggentropy}
H^{agg}:=\int_{x \in \mathcal X} \dfrac{1}{2} \ln |C_p(x)| dx.
\end{equation}
The problem of choosing a new data point $\hat x$ such that the information obtained from it within the
GP regression model is maximized can be formulated in a way similar to the one in the bisection example:
\begin{equation} \label{e:infocollect1}
\hat x=\arg \max_{\tilde x} \mathcal I= \arg \max_{\tilde x} \int_{x \in \mathcal X} \left[ H_0 - H_1 \right] \, dx = \arg \min_{\tilde x} \int_{x \in \mathcal X} \dfrac{1}{2} \ln |C_q(x,\tilde x)| dx,
\end{equation}
where the integral is computed over all $x \in \mathcal X$, and the covariance matrix $C_q(x, \tilde x)$ is defined as
\begin{equation} \label{e:covxbar}
C_q(x, \tilde x)=\left[
\begin{array}{ccccc}
& & & & \\
& C(\mathcal D)& & k(\tilde x) & k(x) \\
& & & & \\
& k^T(\tilde x) & & \tilde \kappa & Q(x,\tilde x) \\
& k^T(x) & & Q(x,\tilde x) & \kappa
\end{array}
\end{equation}
and $\tilde \kappa=Q(\tilde x,\tilde x)+\sigma$. Here, $C(\mathcal D)$ is an $M \times M$ matrix and $C_q$ is an $(M+2) \times (M+2)$ one, whereas $\kappa$ and $Q(x,\tilde x)$ are scalars and $k$ is an $M \times 1$ vector.
This result is summarized in the following proposition.
\begin{prop} \label{thm:GPsearch}
As a maximum information data collection strategy for a Gaussian Process with a covariance matrix $C(\mathcal D)$, the next observation $\hat x$ should be chosen in such a way that
$$ \hat x= \arg \max_{\tilde x} \mathcal I= \arg \min_{\tilde x} \int_{x \in \mathcal X} \ln |C_q(x,\tilde x)| dx,$$
where $C_q(x, \tilde x)$ is defined in (\ref{e:covxbar}).
\end{prop}
\subsubsection*{An Approximate Solution to Information Maximization}
Given a set of (candidate) points $\Theta$ sampled from $\mathcal X$, the result in Proposition~\ref{thm:GPsearch} can be revisited. The problem in (\ref{e:infocollect1}) is then approximated \cite{tempobook} by
\begin{eqnarray} \label{e:infocollect2}
\max_{\tilde x} \mathcal I \approx \min_{\tilde x} \sum_{x \in \Theta} \ln |C_q(x,\tilde x)| \\
\nonumber \Rightarrow \hat x= \arg \min_{\tilde x \in \Theta} \prod_{x \in \Theta} |C_q(x, \tilde x)|,
\end{eqnarray}
using the monotonicity property of the natural logarithm and the fact that the determinant of a covariance matrix is non-negative.
Thus, the following counterpart of Proposition~\ref{thm:GPsearch}
is obtained:
\begin{prop} \label{thm:GPsearch2}
As an approximately maximum information data collection strategy for a Gaussian Process with a covariance matrix $C(\mathcal D)$ and given a collection of candidate points $\Theta$, the next observation $\hat x \in \Theta$ should be chosen in such a way that
$$ \hat x= \arg \min_{\tilde x \in \Theta} \prod_{x \in \Theta} |C_q(x, \tilde x)| \approx \arg \max_{\tilde x \in \Theta} \mathcal I,$$
where $C_q(x, \tilde x)$ is given in (\ref{e:covxbar}).
\end{prop}
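The monotonicity step behind this proposition can be checked numerically. The sketch below is an illustration only (NumPy and the random SPD construction are assumed choices, not code from this work): for positive determinants, the candidate minimizing the sum of log-determinants also minimizes the product of determinants.

```python
import numpy as np

# Illustrative check: since ln is monotone, argmin of the sum of
# log-determinants coincides with argmin of the product of determinants.
# Random SPD matrices stand in for the C_q matrices.
rng = np.random.default_rng(1)

def rand_spd(n=3):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)   # SPD, so the determinant is positive

# dets[c, j] plays the role of |C_q(x_j, x_c)| over x_j in Theta, candidate c
dets = np.array([[np.linalg.det(rand_spd()) for _ in range(5)]
                 for _ in range(4)])
by_logsum = int(np.argmin(np.log(dets).sum(axis=1)))
by_product = int(np.argmin(dets.prod(axis=1)))
```

Both selection rules return the same candidate index, which is exactly the equivalence used in (\ref{e:infocollect2}).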
Although it is an approximation, finding a solution to the optimization problem in Proposition~\ref{thm:GPsearch2} can still be computationally costly. Therefore, a greedy algorithm is proposed as a computationally simpler alternative.
Let $x^* \in \Theta$ be defined as
$$ x^* :=\arg \max_{x \in \Theta} |C_p(x)|, \quad |C_p(x)|=|C(\mathcal D)|\, \left( \kappa(x) - k(x) C^{-1}(\mathcal D) k^T(x) \right),$$
where the matrix $C_p$ is given by (\ref{e:covx}) \cite{matrixcookbook}. The first factor above, $|C(\mathcal D)|$, is fixed and the second one,
$$\kappa(x) - k(x) C^{-1}(\mathcal D) k^T(x) , $$
is the same as the GP variance $v(x)$ in (\ref{e:gp1}). Hence, the sample $x^*$ is one of those with the maximum variance in the set $\Theta$, given current data $\mathcal D$.
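A small numerical check confirms the determinant factorization of the bordered covariance matrix into $|C(\mathcal D)|$ times the Schur complement, i.e. the GP variance term. The SPD matrix, $\kappa$, and $k$ below are arbitrary stand-ins for illustration only.

```python
import numpy as np

# Check |C_p| = |C| * (kappa - k C^{-1} k^T) for a bordered covariance matrix.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
C = A @ A.T + 4 * np.eye(4)          # SPD stand-in for C(D)
k = rng.normal(size=4)               # stand-in cross-covariances k(x)
kappa = 5.0                          # stand-in prior variance kappa(x)

Cp = np.block([[C, k[:, None]],
               [k[None, :], np.array([[kappa]])]])
schur = kappa - k @ np.linalg.solve(C, k)        # the GP variance v(x)
lhs, rhs = np.linalg.det(Cp), np.linalg.det(C) * schur
```

The two quantities `lhs` and `rhs` agree up to floating-point error, which is the identity used to reduce the determinant maximization to a variance maximization.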
It follows from (\ref{e:covxbar}) and basic matrix theory that if $\tilde x=x$ for a given $x$ then
$ |C_q(x, \tilde x)|$ is minimized. As a simplification, ignore the dependencies between $C_q(x, \tilde x)$ matrices for different $x \in \Theta$. Then, choosing the maximum variance $\hat x$ as
$$ \hat x = \arg \max_{\tilde x \in \Theta} v(\tilde x) \approx \arg \min_{\tilde x \in \Theta} \prod_{x \in \Theta} |C_q(x, \tilde x)|,$$
leads to a large (possibly the largest) reduction in $\prod_{x \in \Theta} |C_q(x, \hat x)|$, and hence
provides a rough approximate solution to (\ref{e:infocollect2}) and to the result in Proposition~\ref{thm:GPsearch}.
This result is consistent with widely-known heuristics such as ``maximum entropy'' or ``minimum variance'' methods \cite{activelearning} and a variant has been discussed in \cite{MacKaydataselect}.
\begin{prop} \label{thm:GPsearch3}
Given a Gaussian Process with a covariance matrix $C(\mathcal D)$ and a collection of candidate points $\Theta$, an approximate solution to the maximum information data collection problem defined in Proposition~\ref{thm:GPsearch} is to choose
the sample point(s) $\tilde x$ that has (have) the maximum variance within the set $\Theta$.
\end{prop}
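The maximum-variance rule of this proposition takes only a few lines to sketch. The following is a hedged illustration, not the implementation used in this work; the squared-exponential kernel, noise level, and one-dimensional candidate grid are assumed choices.

```python
import numpy as np

# Pick the next observation as the maximum-variance candidate in Theta,
# which approximates the maximum-information choice per the proposition.

def gauss_kernel(a, b, ell=0.5):
    # Squared-exponential covariances between two sets of 1-D points.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def gp_variance(D, theta, sigma=1e-3, ell=0.5):
    # Posterior variance v(x) = kappa(x) - k(x) C(D)^{-1} k(x)^T on the grid,
    # with kappa(x) = 1 for this kernel.
    C = gauss_kernel(D, D, ell) + sigma * np.eye(len(D))
    K = gauss_kernel(D, theta, ell)              # M x N cross-covariances
    return 1.0 - np.einsum('ij,ij->j', K, np.linalg.solve(C, K))

D = np.array([0.1, 1.0, 2.5])                    # already observed inputs
theta = np.linspace(0.1, 3.9, 50)                # candidate set Theta
v = gp_variance(D, theta)
x_next = theta[np.argmax(v)]                     # max-variance candidate
```

The variance is nearly zero at the observed inputs and largest far from them, so `x_next` lands in the least-explored region of the grid.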
\section{Optimization with Limited Information} \label{sec:staticopt}
Let $f: \mathcal X \rightarrow \mathbb R$ be the unknown Lipschitz-continuous function of interest on the $d$-dimensional nonempty, convex, and compact set $\mathcal X \subset \mathbb R^{d}$. The amount of information about this function available to the decision maker is limited to a finite number of possibly noisy observations. Since the observations are costly, the goal of the decision maker is to find the maximum of $f$, estimate $f$ as accurately as possible using available observations, and select the most informative data points, at the same time. This naturally calls for an iterative and myopic optimization procedure since each new observation provides a new data point that concurrently affects the maximization, function estimation (regression), and
information quantity.
The first and basic objective is the maximization of the function $f(x)$ on $x \in \mathcal X$. As a simplification, observations are assumed to be sequential, one at a time. Since $f$ is basically unknown, this problem has to be formulated as
$$ \max_{\tilde x \in \mathcal X} F_1(\tilde x)= \hat f(\tilde x),$$
where $\hat f$ is the best estimate obtained through GP regression (\ref{e:gp1}) using the current data set $\mathcal D$. Data uncertainty (observation errors) is modeled through additive Gaussian noise with variance $\sigma$ as a first approximation.
The second objective is to minimize the difference (estimation error) between $\hat f$ and $f$. Define $e(x)=\hat f(x) - f(x), \; \forall x \in \mathcal X$. Given the set of noisy observations
$$\mathcal O=\{f(x_i)+n(x_i): x_i \in \mathcal D, \, \forall i \} ,$$
where $n \sim \mathcal N(0,\sigma)$ denotes zero mean Gaussian noise, it is possible to use another GP regression (\ref{e:gp1}) to estimate this error function, $\hat e(\mathcal D, x)$, on the entire set $\mathcal X$. Thus, the second objective is to ensure that the next observation $\tilde x$ solves
$$ \min_{\tilde x \in \mathcal X} F_2(\tilde x)= \int_{\tau \in \mathcal X} \abs{\hat e(\tilde x,\mathcal D, \tau)} d \tau.$$
Note that $F_2$ here corresponds to a risk or loss estimate function.
The third objective is to maximize the amount of information obtained with each observation $\tilde x$, or
$$ \max_{\tilde x \in \mathcal X} F_3(\tilde x)=\mathcal I (\tilde x, \hat f)= \int_{x \in \mathcal X} \ln |C_q(x,\tilde x)| dx, $$
given the best estimate of the original function, $\hat f$. This objective has already been discussed in Section~\ref{sec:obsinfo} in detail.
The values of the three objectives, $F_1,\, F_2,\, F_3$, cannot be evaluated numerically on the entire set $\mathcal X$. Therefore, a sampling method is used as described in Section~\ref{sec:model} to obtain a set of solution candidates $\Theta$, which replaces $\mathcal X$ in the maximization and minimization problems above. Next, specific problem formulations are presented based on such a sampling of solution candidates. The overall structure of the framework is visualized in Figure~\ref{fig:model2}.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{liminfopt2.eps}
\caption{The decision making framework for static optimization with limited information.}
\label{fig:model2}
\end{figure}
\subsection{Solution Approaches}
The most common approach to multi-objective optimization is the \textbf{weighted sum method} \cite{moosurvey1,moosurvey2}. The three objectives discussed above can be combined to obtain a single objective using the respective weights $[w_1,\, w_2, \, w_3]$, $\sum_{i=1}^3 w_i=1$, $0 \leq w_i \leq 1 \; \forall i$. Assuming a single data point is chosen and observed among the candidates $\Theta$ at each step, i.e. $\tilde x= \Omega_1$, a specific weighted sum formulation to address Problem~\ref{prob:search2} is obtained.
\begin{prop} \label{prob:weight1}
The solution, $\tilde x \in \Theta$, to the optimization problem
\begin{equation} \label{e:pw1}
\max_{\tilde x \in \Theta} F(\tilde x)= w_1 F_1(\tilde x) - w_2 F_2(\tilde x) + w_3 F_3(\tilde x) = w_1 \hat f(\tilde x)
- w_2 \dfrac{1}{N} \sum_{\tau \in \Theta} \abs{\hat e(\tilde x,\mathcal D, \tau)}
+ w_3 \mathcal I (\tilde x, \hat f),
\end{equation}
constitutes the best search strategy for this weighted sum formulation of Problem~\ref{prob:search2}.
\end{prop}
As discussed in Subsection~\ref{sec:obsinfo} and stated in Proposition~\ref{thm:GPsearch3}, the information objective, $F_3$, in (\ref{e:pw1}) can be approximated by substituting the GP variance $v(x)$ in (\ref{e:gp1}) for it, which decreases the computational load. Thus, an approximation to the solution in Proposition~\ref{prob:weight1} is:
\begin{prop} \label{prob:weight2}
The solution, $\tilde x \in \Theta$, to the optimization problem
\begin{equation} \label{e:pw2}
\max_{\tilde x \in \Theta} F(\tilde x)= w_1 F_1(\tilde x) - w_2 F_2(\tilde x) + w_3 F_3(\tilde x) = w_1 \hat f(\tilde x)
- w_2 \dfrac{1}{N} \sum_{\tau \in \Theta} \abs{\hat e(\tilde x,\mathcal D, \tau)}
+ w_3 v(\tilde x),
\end{equation}
where $v(\tilde x)$ is defined in (\ref{e:gp1}), approximates the search strategy in Proposition~\ref{prob:weight1}.
\end{prop}
The weighting scheme described is only meaningful if the three objectives are of the same order of magnitude. Therefore, the original objective functions, $F_i$, $i=1,2,3$, have to be transformed or ``normalized''. There are many different approaches to perform such a transformation \cite{moosurvey1,moosurvey2}. The most common one, which coincidentally is known as normalization, aims to map each objective function to a predefined interval, e.g. $[0,\, 1]$. To do this, first estimate an upper bound $F_i^U$ and a lower bound $F_i^L$ on each individual objective $F_i(x)$. Then, the $i^{th}$ normalized objective is
$$ F_i^N(x)= \dfrac{F_i(x) - F_i^L}{F_i^U -F_i^L }.$$
The main issue in normalization is to determine the appropriate upper and lower bounds, which is a very problem-dependent task. In the case of Proposition~\ref{prob:weight2}, the estimated functions $\hat f$ and $\hat e$ on the set $\Theta$, as well as the existing observations $\mathcal D$, can be utilized to obtain these values. The specific bounds for the respective objectives, $F_1^U=\max_{x \in \Theta} \hat f(x)$, $F_1^L=\min_{x \in \Theta} \hat f(x)$, $F_2^U=\max_{x \in \Theta} \abs{\hat e(x,\mathcal D)}$, $F_2^L=0$, $F_3^U=\max_{x \in \Theta} \kappa(x)$, and $F_3^L=0$, provide a suitable starting estimate and can be combined with prior domain knowledge if necessary. Thus, a normalized version of the formulation in Proposition~\ref{prob:weight2} is obtained.
\begin{prop} \label{prob:weight3}
The solution, $\tilde x \in \Theta$, to the optimization problem
\begin{equation} \label{e:pw3}
\max_{\tilde x \in \Theta} F(\tilde x)= w_1 F_1^N(\tilde x) - w_2 F_2^N(\tilde x) + w_3 F_3^N(\tilde x)=
\dfrac{w_1}{\Delta_1} \left( \hat f(\tilde x) - F_1^L\right)
- \dfrac{w_2}{\Delta_2} \dfrac{1}{N} \sum_{\tau \in \Theta} \abs{\hat e(\tilde x,\mathcal D, \tau)}
+ \dfrac{w_3}{\Delta_3} v(\tilde x),
\end{equation}
where $\Delta_i=F_i^U- F_i^L, \; i=1,2,3$, provides an approximation to the best search strategy for solving the normalized weighted-sum formulation of Problem~\ref{prob:search2}.
\end{prop}
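A minimal sketch of this normalized weighted-sum selection follows. It is illustrative only: the arrays `fhat`, `ehat_mean`, and `v` are synthetic stand-ins for the GP outputs on $\Theta$, and equal weights are an assumed choice.

```python
import numpy as np

def normalize(F, lo, hi):
    # Map objective values to [0, 1] using the estimated bounds.
    return (F - lo) / (hi - lo) if hi > lo else np.zeros_like(F)

def weighted_sum_select(fhat, ehat_mean, v, w=(1.0, 1.0, 1.0)):
    # Score each candidate: reward the estimate and variance terms,
    # penalize the estimated error term, as in the proposition.
    F1 = normalize(fhat, fhat.min(), fhat.max())
    F2 = normalize(ehat_mean, 0.0, ehat_mean.max())
    F3 = normalize(v, 0.0, v.max())
    score = w[0] * F1 - w[1] * F2 + w[2] * F3
    return int(np.argmax(score)), score

rng = np.random.default_rng(0)
fhat = rng.normal(size=20)           # stand-in GP mean on Theta
ehat = np.abs(rng.normal(size=20))   # stand-in mean |error| estimates
v = rng.uniform(size=20)             # stand-in GP variances
idx, score = weighted_sum_select(fhat, ehat, v)
```

The returned index is the next observation point; changing the weight vector `w` shifts the balance between exploitation ($F_1$), estimation accuracy ($F_2$), and exploration ($F_3$).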
The \textbf{bounded objective function} method provides a suitable alternative to the weighted sum formulation above in addressing the multi-objective problem defined. The bounded objective function method minimizes the single most important objective, in this case $F_1(x)$, while the other two objective functions $F_2(x)$ and $F_3(x)$ are converted to form additional constraints. Such constraints are in a sense similar to QoS ones that naturally exist in many real life problems \cite{tcomm,alpcan-winet2,srikantbook}.
As an advantage, in the bounded objective formulation there is no need for normalization.
The bounded objective counterpart of the result in Proposition~\ref{prob:weight2} is as follows.
\begin{prop} \label{prob:bound1}
The solution, $\tilde x \in \Theta$, to the constrained optimization problem
\begin{eqnarray} \label{e:pw4}
\max_{\tilde x \in \Theta} \hat f(\tilde x) \\
\nonumber \text{such that } 0 \leq F_2(\tilde x)= \dfrac{1}{N} \sum_{\tau \in \Theta} \abs{\hat e(\tilde x,\mathcal D, \tau)} \leq b_1, \\
\nonumber \text{and } 0 \leq F_3(\tilde x)= v(\tilde x) \leq b_2,
\end{eqnarray}
where $b_1$ and $b_2$ are given (predetermined) scalar bounds on $F_2$ and $F_3$, respectively, provides an approximate best search strategy for a bounded-objective formulation of Problem~\ref{prob:search2}.
\end{prop}
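This constrained selection admits an equally short sketch. It is illustrative; in particular, the fallback to the most informative point when no candidate is feasible is an assumed design choice, not part of the proposition.

```python
import numpy as np

def bounded_select(fhat, ehat_mean, v, b1=0.5, b2=0.2):
    # Maximize fhat over the candidates whose estimated error and variance
    # satisfy the bounds b1 and b2, as in the constrained problem above.
    feasible = (ehat_mean <= b1) & (v <= b2)
    if not feasible.any():
        return int(np.argmax(v))     # assumed fallback: gather information
    idx = np.flatnonzero(feasible)
    return int(idx[np.argmax(fhat[idx])])

fhat = np.array([0.1, 0.9, 0.5, 0.7])
ehat = np.array([0.6, 0.4, 0.1, 0.3])
v    = np.array([0.05, 0.30, 0.10, 0.15])
choice = bounded_select(fhat, ehat, v)   # candidates 2 and 3 are feasible
```

With the synthetic numbers above, candidates $2$ and $3$ satisfy both bounds and the one with the larger estimate is selected.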
The advantage of the bounded objective function method is that it provides a bound on the information collection and estimation objectives while maximizing the estimated function. In practice, this leads to an initial emphasis on information collection and correct estimation of the objective function. In that sense, the method is more ``classical'', i.e. it follows the common approach of learning first and maximizing later. Furthermore, it does not require normalization, i.e. it is easier to deploy. The method has, however, a significant disadvantage which can make its usage
prohibitive. In large-scale or high-dimensional problems, the space that must be explored to satisfy any bound on information
is simply immense. Therefore, one does not have the luxury of identifying the function first to maximize it later, as doing so would take too many samples. In such cases, it makes more sense to deploy the weighted sum method, possibly along with a cooling scheme that modifies the weights to balance depth-first vs.
breadth-first search.
Until now, it has been (implicitly) assumed that the static optimization problem at hand is stationary. However, in a variety of problems this is not the case and the function $f(x,t)$ changes with time. The decision making framework allows for modeling such systems in the following way. Let
$$\mathcal O(t)=\{f(x_i,t_i)+n(x_i,t_i): x_i \in \mathcal D, t_i \leq t, \, \forall i\} ,$$
be the set of noisy or unreliable past observations until time $t$, where $n(x,t) \sim \mathcal N(0,\sigma(t))$ is the zero mean Gaussian ``noise'' term at time $t$. Now, the deterioration in the past information due to change in $f(x,t)$ can be captured by increasing the variance of the noise term, $\sigma(t)$, with time. For example, a simple linear dynamic can be defined as
$$ \dfrac{d\sigma(t)}{dt}= \eta,$$
where $\eta>0$ captures the level of stationarity, e.g. a large $\eta$ indicates a rapidly changing system and function $f(x,t)$.
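This linear dynamic amounts to inflating the noise variance attached to each past observation in proportion to its age. A one-line sketch follows; the numbers are assumed values for illustration only.

```python
# sigma(t) = sigma0 + eta * (t_now - t_obs): older observations are assigned
# a larger noise variance and are therefore trusted less by the GP.

def inflated_variance(sigma0, t_obs, t_now, eta=0.01):
    return sigma0 + eta * (t_now - t_obs)

sig_old = inflated_variance(0.1, t_obs=0.0, t_now=50.0)   # aged observation
sig_new = inflated_variance(0.1, t_obs=50.0, t_now=50.0)  # fresh observation
```

The aged observation ends up with a much larger variance than the fresh one, so the GP regression naturally discounts stale data in a nonstationary problem.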
\subsection{Algorithm}
An algorithmic summary of the solution approaches discussed above for a specific set of choices is provided by Algorithm~\ref{alg:algopt1}, which describes both weighted-sum and bounded objective variants.
\begin{algorithm}[htbp]
\caption{Optimization with Limited Information}
\label{alg:algopt1}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Function domain, $\mathcal X$, GP meta-parameters, objective weights $[w_1, w_2, w_3]$ or bounds $b_1, b_2$, initial data set $(\mathcal D, y)$.
\STATE Use GP with a Gaussian kernel and specific expected error variances for function $\hat f$ and error function $\hat e$ estimation.
\WHILE{Search budget available, $1 \leq n \leq N_{max}$.}
\STATE Sample domain $\mathcal X$ to obtain $\Theta(n)$. In some cases, $\Theta(n)=\Theta \; \forall n$.
\STATE Estimate $\hat f$ and $\hat e$ based on observed data $(\mathcal D, y)$ on $\Theta(n)$ using GPs.
\STATE Compute variance, $v(x)$, of $\hat f$ (\ref{e:gp1}) on $\Theta(n)$ as an estimate of $\mathcal I(\hat f)$.
\IF{Weighted-sum method}
\STATE Next action maximizes a normalized and weighted sum of objectives $\sum_{i=1}^3 F_i^N$ as stated in Proposition \ref{prob:weight3}.
\ELSIF{Bounded objective method}
\STATE Next action is solution to the constrained problem in Proposition \ref{prob:bound1}.
\ENDIF
\STATE Update the observed data $(\mathcal D, y)$.
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsection{Numerical Analysis} \label{sec:numeric}
Algorithm~\ref{alg:algopt1} is illustrated next with multiple numerical examples. It is worth recalling that
the main issue here is to solve the optimization problems with minimum data using active learning. In all examples,
a uniform grid is used to sample the solution space, rather than resorting to a more sophisticated method, since
the examples are chosen to be only one- or two-dimensional for visualization purposes.
\subsubsection*{Example 1}
The first numerical example aims to visualize the presented framework and algorithm. Hence, the chosen function is only one dimensional, $f(x)=\sin(5x)/x$ on the interval $\mathcal X =[0.1, 3.9]$. The interval is linearly sampled to obtain a grid with a distance of $0.01$ between points, i.e. $\Theta=\{x_i \in \mathcal X \, \forall i: x_1=0.1, x_2=0.11,\ldots, x_N=3.9 \}$. A Gaussian kernel with variance $0.1$ is chosen for estimating both $\hat f$ and $\hat e$. The weights are equal to one, $w=[1,\, 1,\, 1]$, in the weighted-sum method. The bounds are $b_1=0.5$ for the error bound and $b_2=0.2$ for the bound on the maximum variance estimate in the bounded objective method. The initial data consists of a single point, $x=0.1$.
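An end-to-end sketch of this example can be written in a few lines. The version below is a deliberate simplification: it omits the error-function GP, so candidates are scored by the normalized estimate and variance terms only, and the small noise jitter is an assumed value. Its selected points therefore need not match the reported sequences exactly.

```python
import numpy as np

def kern(a, b, var=0.1):
    # Gaussian kernel with the variance parameter of the example.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * var))

f = lambda x: np.sin(5 * x) / x
theta = np.arange(0.1, 3.9 + 1e-9, 0.01)     # the sampled grid Theta
D = np.array([0.1]); y = f(D)                # single initial data point
sigma = 1e-4                                 # assumed noise jitter

for _ in range(11):                          # 11 iterations, 12 points total
    C = kern(D, D) + sigma * np.eye(len(D))
    K = kern(D, theta)
    fhat = K.T @ np.linalg.solve(C, y)       # GP posterior mean on Theta
    v = 1.0 - np.einsum('ij,ij->j', K, np.linalg.solve(C, K))
    F1 = (fhat - fhat.min()) / (fhat.max() - fhat.min())
    F3 = v / v.max()                         # normalized variance objective
    x_new = theta[np.argmax(F1 + F3)]        # equal weights, no error term
    D = np.append(D, x_new); y = np.append(y, f(x_new))

x_peak = theta[np.argmax(fhat)]              # estimated peak location
```

Even this stripped-down loop alternates between exploring high-variance regions and refining the estimate near the peak at the left end of the interval.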
Figure~\ref{fig:weighted6} shows the results based on the normalized weighted-sum method in Proposition~\ref{prob:weight3} after $5$ iterations ($6$ samples in total, together with the initial data point). The variance here is $v(x)$ of the estimated function $\hat f$ using data points $\mathcal D$. Clearly, the estimated peak is not the one of the real function $f$.
Next, Figure~\ref{fig:weighted12} shows that after $11$ iterations ($12$ data points in $\mathcal D$), the function and the location of its peak are estimated correctly.
The sequence of points selected during the iteration process is:
$$ \mathcal D=\{0.47, 3.22, 1.17, 1.66, 2.43, 2.06, 3.9, 2.83, 3.6, 0.82, 1.42 \}.$$
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{opt-weigted6.eps}
\caption{Optimization result using the weighted-sum method with $6$ data points.}
\label{fig:weighted6}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{opt-weigted-12.eps}
\caption{Optimization result using the weighted-sum method with $12$ data points.}
\label{fig:weighted12}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{opt-weigted-12-entropy.eps}
\caption{Mean variance $v$ and entropy $\mathcal I$ on $\Theta$ at each iteration step.}
\label{fig:info1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{opt-bounded1.eps}
\caption{Optimization result using the bounded objective method with $12$ data points.}
\label{fig:bounded1}
\end{figure}
The amount of information obtained during the iterative optimization is of particular interest. Figure~\ref{fig:info1} depicts the mean variance $v$ and entropy $\mathcal I$ of the estimated function $\hat f$ on $\Theta$ at each iteration step. In this specific example, the two quantities are very well correlated. Note, however, that this correlation is a function of the relative weights between information collection and other objectives.
Finally, Figure~\ref{fig:bounded1} depicts the results of the bounded objective method with the given bounds. The number of iterations is $11$ as before, which allows for a direct comparison with the weighted-sum method.
The sequence of points selected during the iteration process is:
$$ \mathcal D=\{0.47, 3.22, 1.17, 1.66, 2.43, 2.06, 3.9, 2.83, 3.6, 0.82, 1.42 \}.$$
\subsubsection*{Example 2}
The objective function in the second numerical example is the Goldstein\&Price function~\cite{gpfunc}, which
is shown in Figure~\ref{fig:gpfunc} in its inverted form to ensure consistency with the maximization formulation in this paper.
The problem domain consists of the two dimensional rectangular region $\mathcal X=[-2, 2] \times [-2, 2]$, which is linearly sampled to obtain a uniform grid with a $0.05$ interval between sample points. Gaussian kernels with variances $0.5$ and $0.1$ are chosen for estimating $\hat f$ and $\hat e$, respectively. The weighted-sum method is utilized in Algorithm~\ref{alg:algopt1} with the weights $w=[4,\, 2,\, 3]$. The search budget is chosen as $50$ before stopping the algorithm (for a search space of approx. $6400$ samples in the grid). The real global minimum (peak) of the (inverted) Goldstein\&Price function is at $(0, -1)$, and the location found by the algorithm using the $50$ data points is $(-0.15, -1.05)$. Figure~\ref{fig:gpopt1} depicts the estimated function,
the data points, as well as the optimum found. Although the real optimum value is $-3$ (in the inverted version) while the obtained one is $-9.75$, the result is still very satisfactory considering that a simple sampling scheme is used and that the Goldstein\&Price function takes values over a range of about $1$ million, i.e. the error is less than $0.001$ percent of the range. Finally, Figure~\ref{fig:infogp} depicts the mean variance $v$ and entropy $\mathcal I$ of the estimated function $\hat f$ on $\Theta$ at each iteration step.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{gpfunc.eps}
\caption{The inverted Goldstein\&Price function~\cite{gpfunc}.}
\label{fig:gpfunc}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{gpopt2.eps}
\caption{Optimization of the inverted Goldstein\&Price function~\cite{gpfunc} using the weighted-sum method with $50$ data points.}
\label{fig:gpopt1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{gp-entropy2.eps}
\caption{Mean variance $v$ and entropy $\mathcal I$ on $\Theta$ at each iteration step.}
\label{fig:infogp}
\end{figure}
\subsubsection*{Example 3}
The third example uses the same setup as the second one, but this time with the (inverted) Branin function~\cite{braninfunc}
shown in Figure~\ref{fig:braninfunc}. The rectangular problem domain $\mathcal X=[-5, 10] \times [0, 15]$ is sampled uniformly to obtain a grid of points with a $0.2$ interval. The real global minima (peaks) of the (inverted) Branin function are at
$(9.4,2.47)$, $(-\pi,12.28)$, and $(\pi,2.28)$, whereas the locations found by the algorithm are $(9,2.6)$, $(-3.2,12)$, and $(3,2.2)$. The values found at these locations vary between $-4.3$ and $-0.5$, compared to the real global value of
$-0.4$ (of the inverted function). Thus, the algorithm again performs satisfactorily. Figure~\ref{fig:braninopt1} shows the
computed location of one optimum, the data points, as well as the estimated function based on the data points.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{braninfunc.eps}
\caption{The inverted Branin function~\cite{braninfunc}.}
\label{fig:braninfunc}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{braninopt1.eps}
\caption{Optimization of the inverted Branin function~\cite{braninfunc} using the weighted-sum method with $50$ data points.}
\label{fig:braninopt1}
\end{figure}
\subsubsection*{Example 4}
The fourth example is based on the six-hump camel function~\cite{globaloptold} (see Figure~\ref{fig:camfunc}) on the domain $\mathcal X=[-2, 2] \times [-2,2]$, which is sampled uniformly with a $0.05$ interval. All of the parameters are chosen to be the same as before. Figure~\ref{fig:camopt1} shows the
computed locations of two optima, the $50$ data points, as well as the estimated function based on the data points.
The optimum locations found are $(0, \, 0.65)$ and $(0.05,\, -0.6)$ with respective values of $0.98$ and $1.06$,
whereas the real locations are $(-0.09,0.71)$ and $(0.09,-0.71)$ with the value $1.03$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{camfunc.eps}
\caption{The inverted six-hump camel function.}
\label{fig:camfunc}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{camopt1.eps}
\caption{Optimization of the inverted six-hump camel function~\cite{globaloptold} using the weighted-sum method with $50$ data points.}
\label{fig:camopt1}
\end{figure}
\section{Literature Review} \label{sec:literature}
Decision making with limited information is related to search theory. The idea of using information (theory) in this context is hardly new, as evidenced by the article ``A New Look at the Relation Between Information Theory and Search Theory'' from 1979 \cite{pierce}. The subject is further studied in \cite{jaynes}. The topic of optimal search has more recently been revisited in \cite{Zhu-search}, which contains substantial historical notes and studies problems where the search target distribution is itself unobservable.
The book \cite{MacKaybook} provides important and valuable insights into the relationship between information theory, inference, and learning. Measuring information content of experiments using Shannon information is explicitly mentioned and a slightly informal version of the bisection example in Subsection~\ref{sec:obsinfo} is discussed. However, focusing mainly on more traditional coding, communication, and machine learning topics, the book does not discuss the type of decision making problems presented in this paper.
Learning plays an important role in the presented framework, especially \textit{regression}, which is a classical machine (or statistical) learning method. A very good introduction to the subject can be found in \cite{Bishopbook}. A complementary and detailed discussion on kernel methods is in \cite{schoelkopfbook}. Another relevant topic is Bayesian inference \cite{Tipping,MacKaybook}, which is in the foundation of the presented framework. In machine learning literature, Gaussian processes (GPs) are getting increasingly popular due to their various favorable characteristics. The book \cite{GPbook} presents a comprehensive treatment of GPs. Additional relevant works on the subject include \cite{MacKaybook,schoelkopfbook,MacKayGP}, which also discuss GP regression.
Convex optimization \cite{boydbook} is a well-understood topic that is often easy to handle even if available information is limited. Optimizing nonconvex functions, however, is still a research subject \cite{globaloptsurvey}. It is interesting to note that the method known as \textit{kriging} in global optimization is almost the same as GP regression in machine learning. The field of \textit{stochastic programming} focuses on optimization under uncertainty but assumes a certain amount of prior knowledge about the problem at hand and models the uncertainty probabilistically \cite{stochasticsurvery}. The popular heuristic method \textit{simulated annealing} \cite{simannealing} is essentially based on iterative random search. Another popular heuristic scheme, particle swarm optimization \cite{swarmorig}, is also based on random search but, as a distinguishing characteristic, is parallel in nature rather than iterative.
Gaussian processes have been recently applied to the area of optimization and regression \cite{BoylePhD} as well as system identification \cite{ThompsonPhD}. While the latter mentions active learning, neither work discusses explicit information quantification or builds a connection with Shannon information theory. The recent articles \cite{GPopt,turner1}, which utilize GP regression for optimization in a setting similar to the one in this paper and for state-space inference and learning, respectively, do not consider information-theoretic aspects of the problem, either. Likewise,
the article \cite{kriging1} on stochastic black box optimization, which considers a problem similar to the one here, does not take into account explicit measurement of information.
The area of active learning or experiment design focuses on data scarcity in machine learning and makes use of Shannon information theory among other criteria \cite{activelearning}. The paper \cite{MacKaydataselect} discusses objective functions which measure the expected informativeness of candidate measurements within a Bayesian learning framework.
The subsequent study \cite{gpactive1} investigates active learning for GP regression using variance as a (heuristic) confidence measure for test point rejection.
\section{Discussion} \label{sec:discuss}
The foundation of the approach adopted in this paper is Bayesian inference, where the main idea is to choose an a priori model and update it with actual experimental data observed (see \cite[Chap. 2]{MacKaybook} for a beautiful introductory discussion on the subject). As long as the a priori model is close to the reality (of the problem at hand), this inference methodology works very efficiently as indicated by the numerical examples in Section~\ref{sec:numeric}. In many cases this background information, which is sometimes referred to as ``domain knowledge'', is already available. However, in others one has to explore the model domain and learn model meta-parameters in a time scale naturally longer than the one of actual optimization \cite{MacKayGP}.
The GP regression adopted in the presented framework is only one method for function estimation and other, e.g. parametric, methods can easily replace GP for the regression part. In any case, the regression methodology here is consistent with
the principle of ``Occam's razor'', more specifically its interpretation using Kolmogorov complexity \cite{algobook}. A priori,
the optimization problems at hand
are more likely to be simple
than complex to describe, in accordance with the \textit{universal distribution} \cite{algobook}. Hence, given a data set it is reasonable to start describing it with the simplest explanation. GP regression already incorporates this line of thinking by relying on a kernel-based approach and making use of the representer theorem \cite[Chap. 6.2]{GPbook}. As a visual example, we refer to Figures \ref{fig:weighted6} and \ref{fig:weighted12} for a comparison of function estimates with different sets of available data.
This paper considers a class of problems where data is scarce and obtaining it is costly. Information theory plays an especially important role in devising optimal schemes for obtaining new data points (active learning). The entropy measure from Shannon information theory provides the necessary metric for this purpose, which quantifies the ``exploration'' aspect of the problem. Using a multi-objective optimization formulation, the presented framework allows explicit weighting of \textit{exploration} vs. \textit{exploitation} aspects. This trade-off is also very similar to one between the well-known depth-first vs. breadth-first search algorithms in search theory.
The amount of information obtained from each data point differs here only because a specific a priori general model (GP regression) is utilized to explain the observed data. Because of this, the amount of information obtained is specific to the model. Otherwise, without this Bayesian approach, each data point would give the same amount of information (inversely proportional to the total number of candidate points).
The illustrative examples discussed are low-dimensional, which makes it possible to use grids for sampling. However, in higher dimensions (i.e. when the problem is much more ``difficult'') this ``luxury'' is not affordable and one necessarily has to resort to Monte Carlo methods. In such cases, the trade-off between exploration and exploitation
is even more pronounced. Possible methods to address this issue include ``cooling'' approaches similar
to those used in simulated annealing, multi-resolution sampling based on regions of interest, or using topological properties of Gaussian mixtures to intelligently estimate candidate points based on the current state.
The optimization approach presented here can also be interpreted from a biological perspective. If an analogy between
the decision-maker and a biological organism is established, then the a priori Bayesian model (the meta-parameters of the GP) that is refined over a long time scale corresponds to the evolution of a species in an environment
(problem domain). Each individual organism belonging to the species obtains new information to achieve its objectives while preserving resources as much as possible. The existing evolutionary basis (GP model) gives such individuals an advantage in finding a solution much faster than random search. From the perspective of the species, it also makes sense for some of its members to explore the model (meta-parameter) domain and further refine it through adaptation. Those with better meta-parameters then achieve their objectives even more efficiently and obtain an evolutionary edge in natural selection (assuming competition).
\section{Conclusion} \label{sec:conclusion}
The decision making framework presented in this paper addresses the problem of decision making under limited information by taking into account the information collection (observation), estimation (regression), and (multi-objective) optimization aspects in a holistic and structured manner. The methodology is based on Gaussian processes
and active learning. Various issues such as quantifying information content of new data points using information theory, the relationship between information and GP variance as well as related approximation and multi-objective optimization schemes are discussed. The framework is demonstrated with multiple numerical examples.
The presented framework should be considered mainly as an initial step. Future research directions are abundant
and include further investigation of the exploration-exploitation trade-off, adaptive weighting parameters, and random sampling methods for problems in higher dimensional spaces. Additional research topics are the relationship of the framework with genetic/evolutionary methods, dynamic control problems, and multi-person decision making, i.e.
game theory.
\section*{Acknowledgements}
This work is supported by Deutsche Telekom Laboratories. The author wishes to thank Lacra Pavel, Slawomir Stanczak, Holger Boche, and Kivanc Mihcak for stimulating discussions on the subject.
\section{Introduction}
Expressions involving products of Dirac spinors are among the most common objects appearing in the problems of high energy physics. For example, any Feynman diagram involving fermions includes Dirac bilinears (e.g. as in FIG. I). Various conventions for spinors are present in the literature, and those mostly rely on the two-spinor formalism which generally involves an explicit choice of Dirac matrices and defining the four-component Dirac spinors in terms of the well known two-component Pauli spinors (see e.g. \cite{peskin, zuber, greinerrqm}). However, calculating covariant expressions for them in terms of the relevant Lorentz vectors has remained an unfinished task \cite{lorce}. Although existing conventions appear to be sufficient for standard perturbative calculations, the use of Lorentz covariant expressions in the study of bound states, for example in hadronic physics \cite{lorce}, is expected to be more enlightening. Another possible use of Lorentz covariant expressions is expected to be in strong background physics, for example in strong background QED, where, just like in hadronic physics, fermions ``dressed" with gauge bosons (and also with virtual pairs) are involved \cite{seipt}.
\begin{figure}
\includegraphics[scale=0.4]{diagram.png}
\caption{An example diagram expressing the process $e^{+}e^{-}\rightarrow \mu^{+}\mu^{-}$ at the lowest order in the corresponding perturbative expansion \cite{peskin}. The matrix element $A$ for this diagram is: $ A=\bar{v}^{s'}(p')\left(-i e \gamma^{\mu} \right)u^{s}(p)\frac{-ig_{\mu \nu}}{q^{2}}\bar{u}^{r}(k)\left(-i e \gamma^{\nu} \right)v^{r'}(k') $.}
\end{figure}
What is actually expected from the use of Lorentz covariant expressions of Dirac bilinears can be easily exemplified within the context of hadronic physics. As is well known, hadrons are bound states of quarks and gluons. For a specified hadron, all multi-particle Fock states having the same quantum numbers as that hadron contribute to the quantum state of the hadron. For example, for a meson, one can write in light-cone quantization \cite{lepage, brodsky_e, brodsky_q, brodsky_kitap, hwang, dapaper}:
\begin{align}
& |M(P;^{2S+1}L_{J_{z}},J_{z})>= \nonumber \\
& \sum_{Fock\, states}\int\left[\prod_{i}\frac{dk_{i}^{+}d^{2}k_{\perp,i}}{2(2\pi)^{3}}\right]2(2\pi)^{3}\delta^{(3)}\left(\tilde{P}-\sum_{i}\tilde{k}_{i}\right) \nonumber \\
& \times \sum_{\lambda_{i}}\Psi_{LS}^{JJ_{z}}(\tilde{k}_{i},\lambda_{i})|relevant\; Fock\; state>.
\end{align}
where $\tilde{k}=(k^{+},\vec{k}_{\perp})$ and $\Psi_{LS}^{JJ_{z}}(\tilde{k}_{i})$ are the light cone wave functions corresponding to the Fock states having the same quantum numbers as the hadron. The light-cone wave function involves outer products of spinors with different momentum arguments \cite{brodsky_e}. For example, for parapositronium \cite{brodsky_e}, one can write:
\begin{align}
\Psi_{0,0}^{0,0}(\tilde{k}_{1},\tilde{k}_{2})=& N(\tilde{k}_{1},\tilde{k}_{2})\times \nonumber \\
& \lbrace u(\tilde{k}_{1},\uparrow ) \bar{v}(\tilde{k}_{2},\downarrow ) - u(\tilde{k}_{1},\downarrow ) \bar{v}(\tilde{k}_{2},\uparrow ) \rbrace,
\end{align}
where $N(\tilde{k}_{1},\tilde{k}_{2})$ is the momentum-dependent normalization factor for the wave function, and $u\, (v)$ are the free positive (negative) energy spinors, respectively. When writing down amplitudes, traces are taken and products of spinors with different momentum arguments appear.
Previously, C. Lorcé calculated Lorentz covariant expressions for Dirac bilinears and presented a list of bilinears involving all linearly independent combinations of Dirac matrices \cite{lorce}. The approach used by Lorcé relied on the fact that the timelike direction in spacetime distinguished between positive and negative energy spinors, so the vector field corresponding to the timelike direction can explicitly appear in the expressions for bilinears (both as certain zero components, and as an explicit vector $n$ in that work) \cite{lorce}.
Although the final results in \cite{lorce} are Lorentz covariant, this is not explicit. In this work, explicitly Lorentz covariant expressions are sought.
Our approach examines the foliation of spacetime in terms of a set of basis vectors in more detail. Using our approach, we have as well calculated all linearly independent structures of Dirac bilinears involving spinors with different momentum arguments, but without the need to explicitly choose a timelike direction of spacetime.
Our paper is organized as follows. In the first part, we present the well known relations between Dirac spinors and the four vectors which are in a sense ``arguments'' of these spinors. In the second part, we concretize our approach in calculating one set of Dirac spinors in terms of another set, making use of the relations between spinors and vectors discussed in the first part. This calculation is equivalent to calculating the scalar bilinear structures, and once the scalar structures are calculated, all tensorial structures can be calculated in terms of them. This is the content of the third part. And finally, in the fourth part, we present comments on our results and on the relation of our results with some of the conventions in the literature. Some calculations are also presented explicitly in the Appendix.
\section{I. Dirac spinors and Lorentz vectors}
Dirac spinors are solutions to the celebrated Dirac equation. In momentum space, Dirac equation can be expressed as (see e.g. \cite{peskin, zuber, greinerrqm}):
\begin{align}
\left(\gamma _{\mu}p^{\mu}-\epsilon m\right)w_{\epsilon }(p)\equiv \left(\slashed{p}-\epsilon m\right)w_{\epsilon }(p)=0,
\end{align}
where $\gamma_{\mu}$ are the Dirac matrices satisfying:
\begin{align}
\lbrace \gamma_{\mu},\gamma_{\nu}\rbrace =2g_{\mu \nu}
\end{align}
and $g_{\mu \nu}$ are the components of the metric tensor. Here, $p$ and $m$ are respectively the momentum four-vector (with $p^{0}>0$ assumed \cite{zuber}) and mass of the relevant fermion and $w_{\epsilon }(p)$ is the corresponding Dirac spinor. $\epsilon =+1(-1)$ corresponds to positive (negative) energy solutions. In $3+1$ dimensions, there are two linearly independent solutions for each value of $\epsilon $ \cite{peskin, zuber, greinerrqm}.
Information about the spin of the particle is carried by the Pauli-Lubansky vector \cite{zuber}:
\begin{align}
W_{\mu}=\frac{i}{4}\varepsilon_{\mu \nu \alpha \beta}p^{\nu}\sigma^{\alpha \beta}, \quad \sigma^{\alpha \beta}=\frac{i}{2}\left[ \gamma_{\alpha},\gamma_{\beta}\right].
\end{align}
Pauli-Lubansky vector satisfies \cite{zuber}:
\begin{align}
W\cdot W = -m^{2}\lambda (\lambda +1),
\end{align}
where $\lambda $ is the spin of the relevant particle, which is equal to $1/2$ for quarks and leptons. The projection of this vector on any four-vector $s$ orthogonal to $p$ (that is, satisfying $s\cdot p=0$) is related to the rest-frame spin projections of the fermion along a four-vector which is obtained by Lorentz transforming $s$ to the rest frame \cite{zuber}:
\begin{align}
-\frac{W\cdot s}{m}w_{\epsilon ,\sigma}=\epsilon\times\frac{1 }{2}\gamma _{5}\slashed{s}w_{\epsilon ,\sigma}=\epsilon \times\sigma \times \frac{1}{2}w_{\epsilon ,\sigma},
\end{align}
where $\sigma =\pm 1$ and $s^{2}=-1$ (dependence on $p$ and $s$ is suppressed for simplicity of notation). Thus, the four linearly independent Dirac spinors can be identified with the following eigenvalue equations:
\begin{align}
\slashed{p}w_{\epsilon ,\sigma}=\epsilon mw_{\epsilon ,\sigma}, \quad \gamma _{5}\slashed{s}w_{\epsilon ,\sigma}=\sigma w_{\epsilon ,\sigma}.
\label{definingeq}
\end{align}
Our approach for calculating Dirac bilinears in terms of Lorentz scalars is based on using the four-vector $s$ covariantly, alongside the momentum four-vector $p$, instead of calculating rest frame spinors using a specific coordinate system and boosting them to a generic frame where the fermion has momentum $p$, as is usually preferred in the literature. Once this goal is achieved, one can make an explicit choice for the four-vector $s$ so as to relate the results with the conventional expressions in the literature.
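The defining relations in Eq. (\ref{definingeq}) are easy to verify numerically in an explicit representation. The following Python/NumPy sketch (an illustrative check only; the Dirac representation and the helicity-spinor construction below are standard textbook choices, not conventions of this work) builds $u_{\pm}(p)$ for momentum along $z$ and verifies both eigenvalue equations together with the normalization $\bar{u}_{\sigma}u_{\sigma}=2m$:

```python
import numpy as np

# Explicit Dirac representation -- a standard textbook choice made only for
# this check; the covariant statements are representation independent.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]   # gamma^mu
g5 = np.block([[Z2, I2], [I2, Z2]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])                      # metric g_{mu nu}
slash = lambda v: sum(eta[i, i] * v[i] * gam[i] for i in range(4))
ubar = lambda w: w.conj() @ g0

# Momentum along z; s is the rest-frame z-spin vector boosted along z,
# so that s.p = 0 and s.s = -1.
m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
p = np.array([E, 0.0, 0.0, pz])
s4 = np.array([pz, 0.0, 0.0, E]) / m

def u(chi):
    """Positive-energy helicity spinor, normalized so that ubar(u) u = 2m."""
    chi = np.asarray(chi, dtype=complex)
    return np.sqrt(E + m) * np.concatenate([chi, (pz / (E + m)) * (sig[2] @ chi)])

u_p, u_m = u([1, 0]), u([0, 1])                             # sigma = +1, -1

dirac_ok = (np.allclose(slash(p) @ u_p, m * u_p)
            and np.allclose(slash(p) @ u_m, m * u_m))
spin_ok = (np.allclose(g5 @ slash(s4) @ u_p, u_p)
           and np.allclose(g5 @ slash(s4) @ u_m, -u_m))
norm_ok = (np.isclose(ubar(u_p) @ u_p, 2 * m)
           and np.isclose(ubar(u_m) @ u_m, 2 * m))
```

The same check goes through for a generic direction of $\vec{p}$, with $\vec{\sigma}\cdot\hat{p}$ eigenspinors in place of the $\sigma_{z}$ ones.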
One can derive various identities involving Dirac spinors and combinations of Dirac matrices; these have been studied in detail in \cite{lorce}. Here, we concentrate on a number of identities which will be of practical use. Using the normalization:
\begin{align}
\bar{w}_{\epsilon ,\sigma}w_{\epsilon ',\sigma'}=2m\epsilon \delta _{\epsilon \epsilon '}\delta _{\sigma \sigma '},
\end{align}
the eigenvalue equations for Dirac spinors and the anti-commutation relations for the Dirac matrices, one obtains \cite{lorce}:
\begin{align}
\bar{u}_{\sigma}\gamma_{\mu}u_{\sigma'} & =2p_{\mu}\delta _{\sigma \sigma '}, \\
\bar{u}_{\sigma}\gamma_{\mu}\gamma_{5}u_{\sigma} & =2m\sigma s_{\mu} ,
\end{align}
where $u_{\sigma}\equiv w_{+,\sigma}$ are the positive energy solutions. One can derive similar identities for the negative energy solutions as well, using $\gamma _{5}w_{\epsilon ,\sigma}=-\epsilon \sigma w_{-\epsilon ,-\sigma}$ as in \cite{lorce}. It is interesting to observe that the simple trick using the eigenvalue equations cannot provide information on the combination $\bar{u}_{\sigma}\gamma_{\mu}\gamma_{5}u_{-\sigma}$, and in fact one observes that this expression is actually non-zero (which can be verified using any specific explicit representation). This observation motivates defining the $\bar{u}_{\sigma}\gamma_{\mu}\gamma_{5}u_{-\sigma}$ bilinears as two other Lorentz vectors related to the particle under study, and examining their relation to the $p$ and $s$ vectors:
\begin{align}
-\frac{1}{4m}\bar{u}_{+}\gamma_{\mu}\gamma_{5}u_{-}\equiv & d_{\mu} \\
\Rightarrow -\frac{1}{4m}\bar{u}_{-}\gamma_{\mu}\gamma_{5}u_{+} = & d^{*}_{\mu}.
\end{align}
One observes that:
\begin{align}
\slashed{d}\gamma _{5}u_{+} & =\left( -\frac{1}{4m}\bar{u}_{+}\gamma _{\mu}\gamma_{5} u_{-}\right)\times \gamma ^{\mu}\gamma _{5}u_{+}\nonumber \\
& = \frac{1}{4m}\gamma _{5}\gamma ^{\mu}\left( u_{+}\otimes \bar{u}_{+}\right) \gamma _{\mu}\gamma _{5}u_{-}\nonumber \\
& = \frac{1}{4m}\gamma _{5}\gamma ^{\mu}\left( \slashed{p}+m\right)\frac{\left( 1+\gamma _{5}\slashed{s}\right)}{2} \gamma _{\mu}\gamma _{5}u_{-}\nonumber \\
& = u_{-}.
\end{align}
Here, the projection operators have been used \cite{zuber}:
\begin{align}
\left( u_{\sigma}\otimes \bar{u}_{\sigma}\right)=\left( \slashed{p}+m\right)\frac{\left( 1+\sigma \gamma _{5}\slashed{s}\right)}{2}.
\end{align}
By a similar reasoning, one also observes that:
\begin{align}
\slashed{d}^{*}\gamma _{5}u_{-}=u_{+}, \quad \slashed{d}\gamma _{5}u_{-}=\slashed{d}^{*}\gamma _{5}u_{+}=0.
\end{align}
The last equalities follow from the fact that $d\cdot d=d^{*}\cdot d^{*}=0$. So, one concludes that $\slashed{d}^{*}\gamma _{5}$ and $\slashed{d}\gamma _{5}$ are simply the spin raising and lowering matrices for Dirac spinors. Expanding them in the complete basis of positive and negative energy solutions (with $v_{\sigma}\equiv w_{-,\sigma}$), one obtains the following ``spin-flip" matrices:
\begin{align}
\frac{u_{-}\otimes \bar{u}_{+}-v_{+}\otimes \bar{v}_{-}}{2m}=\slashed{d}\gamma _{5}, \quad \frac{u_{+}\otimes \bar{u}_{-}-v_{-}\otimes \bar{v}_{+}}{2m}=\slashed{d}^{*}\gamma _{5},
\end{align}
where the negative energy pieces are required for these to hold as full matrix identities.
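The raising/lowering action can be checked numerically. The sketch below (same explicit Dirac-basis construction as before, used only as an illustration) computes $d_{\mu}$ directly from its defining bilinear and confirms that $\slashed{d}\gamma_{5}$ maps $u_{+}\to u_{-}$ and annihilates $u_{-}$; it also confirms that, as a full $4\times 4$ matrix, $\slashed{d}\gamma_{5}$ carries a negative energy piece dictated by the resolution of identity:

```python
import numpy as np

# Explicit Dirac-basis setup, used only for this numerical check.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])
slash = lambda v: sum(eta[i, i] * v[i] * gam[i] for i in range(4))
ubar = lambda w: w.conj() @ g0

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
u = lambda chi: np.sqrt(E + m) * np.concatenate(
    [np.asarray(chi, complex), (pz / (E + m)) * (sig[2] @ np.asarray(chi, complex))])
u_p, u_m = u([1, 0]), u([0, 1])
v_p, v_m = g5 @ u_m, -g5 @ u_p    # from gamma_5 w_{eps,sig} = -eps sig w_{-eps,-sig}

# d_mu from its defining bilinear, index raised with the metric.
d_lo = np.array([-(ubar(u_p) @ (eta[i, i] * gam[i]) @ g5 @ u_m) / (4 * m)
                 for i in range(4)])
d = eta @ d_lo

flip_ok = (np.allclose(slash(d) @ g5 @ u_p, u_m)
           and np.allclose(slash(d) @ g5 @ u_m, np.zeros(4)))
# As a full matrix, dslash gamma_5 also acts on the negative energy
# solutions; both outer products are needed for exact equality.
lower_op = (np.outer(u_m, ubar(u_p)) - np.outer(v_p, ubar(v_m))) / (2 * m)
matrix_ok = np.allclose(slash(d) @ g5, lower_op)
```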
Using the eigenvalue equations and the normalization discussed above, one can easily verify that the following equalities hold:
\begin{align}
& d\cdot d^{*}=-\frac{1}{2}, \quad d\cdot d=d^{*}\cdot d^{*}=0; \\
& d\cdot p=d^{*}\cdot p=0, \quad d\cdot s=d^{*}\cdot s =0.
\end{align}
As is seen from the above equations, $d$ and $d^{*}$ are null vectors and they span a subspace of the $3+1$ dimensional Minkowski space that is orthogonal to the subspace spanned by $p$ and $s$.
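These nullity and orthogonality properties of $d$ and $d^{*}$ can be confirmed numerically with the same explicit Dirac-basis construction (an illustrative check only):

```python
import numpy as np

# Explicit Dirac-basis setup, used only for this numerical check.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b                # Minkowski dot product
ubar = lambda w: w.conj() @ g0

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
p = np.array([E, 0.0, 0.0, pz])
s4 = np.array([pz, 0.0, 0.0, E]) / m
u = lambda chi: np.sqrt(E + m) * np.concatenate(
    [np.asarray(chi, complex), (pz / (E + m)) * (sig[2] @ np.asarray(chi, complex))])
u_p, u_m = u([1, 0]), u([0, 1])

d_lo = np.array([-(ubar(u_p) @ (eta[i, i] * gam[i]) @ g5 @ u_m) / (4 * m)
                 for i in range(4)])
d = eta @ d_lo
d_st = d.conj()

null_ok = np.isclose(dot(d, d), 0) and np.isclose(dot(d_st, d_st), 0)
cross_ok = np.isclose(dot(d, d_st), -0.5)
orth_ok = np.isclose(dot(d, p), 0) and np.isclose(dot(d, s4), 0)
```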
This also implies that the set of vectors $\lbrace p,\, s,\, d,\, d^{*}\rbrace $ (which we will call the $p-set$ from now on)
can be used as a basis for spanning the whole $3+1$ dimensional Minkowski space. This observation has the following interesting consequences:
\begin{itemize}
\item Any Lorentz vector, say $q$, can be decomposed into its components along each of the $p-set$ vectors:
\begin{align}
q^{\mu} & =\frac{q\cdot p}{p^{2}}\, p^{\mu}+\frac{q\cdot s}{s^{2}}\, s^{\mu}+\frac{1}{d\cdot d^{*}}\left( q\cdot d\, d^{*\mu}+q\cdot d^{*}\, d^{\mu}\right) \nonumber \\
& =\frac{q\cdot p}{m^{2}}\, p^{\mu}-q\cdot s\, s^{\mu}-2\left( q\cdot d\, d^{*\mu}+q\cdot d^{*}\, d^{\mu}\right)
\end{align}
which can easily be verified by taking dot products with each of the $p-set$ vectors.
\item The independence of $q^{2}$ from the basis set used for computing it implies:
\begin{align}
q^{2} & = \left( \frac{q\cdot p}{m}\right) ^{2}-\left( q\cdot s\right) ^{2}-4q\cdot d\, q\cdot d^{*}\\
\Rightarrow g_{\mu \nu} & =\frac{p_{\mu}p_{\nu}}{m^{2}}-s_{\mu}s_{\nu}-2\left( d^{*}_{\mu}d_{\nu}+d_{\mu}d^{*}_{\nu}\right). \label{metric}
\end{align}
This decomposition of the metric tensor in terms of the $p-set$ vectors implies that the $p-set$ vectors are nothing but a set of vierbeins$^{1}$\footnotetext[1]{Vierbeins (or vielbeins in general) $E^{A}\, _{\mu}$ are defined in the following way:
\begin{align}
g_{\mu \nu}=E^{A}\, _{\mu}E^{B}\, _{\nu}G_{AB},\nonumber
\end{align}
where $g_{\mu \nu}$ and $G_{AB}$ are metric tensor components referring to two different sets of basis vectors where one set is orthonormal. Vielbeins are generally used in the treatment of fermion fields in curved backgrounds, where local orthonormal frames are needed to handle spinors \cite{bertlmann, olpak}. So, this observation may be of practical value when calculating Dirac bilinears in curved backgrounds or using basis vectors of curvilinear systems.} defined locally at the spacetime position of the particle under study.
\item Using the definitions for $d$ and $d^{*}$ vectors, one observes that the following equality holds:
\begin{align}
& d^{*}_{\mu}d_{\nu}=\frac{1}{16m^{2}}\bar{u}_{-}(p)\gamma_{\mu}\gamma_{5}u_{+}(p)\bar{u}_{+}(p)\gamma_{\nu}\gamma_{5}u_{-}(p) \nonumber \\
& = \frac{Tr\bigg (\gamma_{\mu}\gamma_{5}\left(\slashed{p}+m\right)\left(1+\gamma_{5}\slashed{s}\right)\gamma_{\nu}\gamma_{5}\left(\slashed{p}+m\right)\left(1-\gamma_{5}\slashed{s}\right)\bigg )}{64m^{2}} \nonumber \\
& =-\frac{1}{4}\left( g_{\mu \nu}+s_{\mu}s_{\nu}-\frac{p_{\mu}p_{\nu}+im\varepsilon_{\mu \nu \alpha \beta}p^{\alpha}s^{\beta}}{m^{2}}\right) \label{dds} \\
& \Rightarrow \varepsilon_{\mu \nu \alpha \beta}d^{*\mu}d^{\nu}p^{\alpha}s^{\beta}=\frac{im}{2},
\end{align}
which is related to the ``handedness" of the $p-set$.
Note that Eq. (\ref{dds}) is equivalent to Eq. (\ref{metric}).
Note that this relation does not violate the linear independence of the $p-set$, since it involves linear combinations of the tensor products of the related vectors rather than linear combinations of the vectors themselves.
\item It can be shown that the vectors $d$ and $d^{*}$ can always be written in terms of two real spacelike unit vectors, say $n_{1}$ and $n_{2}$, which are also orthogonal to $p$ and $s$, such that $d=\left(n_{1}-i n_{2}\right)/2$ and $d^{*}=\left(n_{1}+i n_{2}\right)/2$ (the factor of $1/2$ is fixed by $d\cdot d^{*}=-1/2$). Any Lorentz transformation $\Lambda $ which leaves $p$ and $s$ unchanged (that is, any rotation in the plane spanned by $d$ and $d^{*}$) rotates the spinors in the spinor space but does not alter Eq. (\ref{definingeq}). That is, the rotated spinors will still be solutions to Eq. (\ref{definingeq}) with the same eigenvalues:
\begin{align}
& \Lambda ^{\mu}\, _{\nu}p^{\nu}=p^{\mu}, \; \Lambda ^{\mu}\, _{\nu}s^{\nu}=s^{\mu}, \nonumber \\
\Rightarrow & S(\Lambda)\slashed{p}w_{\epsilon, \sigma}=S(\Lambda)\slashed{p}S^{-1}(\Lambda)S(\Lambda)w_{\epsilon, \sigma},\nonumber \\
\Rightarrow & \epsilon m S(\Lambda)w_{\epsilon, \sigma}=\slashed{p}S(\Lambda)w_{\epsilon, \sigma},\nonumber \\
\nonumber \\
\Rightarrow & S(\Lambda)\gamma_{5}\slashed{s}w_{\epsilon, \sigma}=S(\Lambda)\gamma_{5}\slashed{s}S^{-1}(\Lambda)S(\Lambda)w_{\epsilon, \sigma},\nonumber \\
\Rightarrow & \sigma S(\Lambda)w_{\epsilon, \sigma}=\gamma_{5}\slashed{s}S(\Lambda)w_{\epsilon, \sigma},
\end{align}
due to $S(\Lambda)\gamma_{\nu}S^{-1}(\Lambda)=\gamma_{\mu}\Lambda ^{\mu}\, _{\nu}$. This obviously corresponds to a freedom in defining the spinors, which can be fixed (up to an overall phase related to the normalization of the spinors) by fixing $d$ and $d^{*}$. From now on, we will continue our discussion by assuming that all of the $p-set$ vectors are fixed, so that our $p-set$ defines our spinor basis.
\end{itemize}
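The decomposition statements above (the expansion of an arbitrary vector in the $p-set$, the metric decomposition, and the handedness relation) can be bundled into one numerical check. In the sketch below (illustrative only), the $\varepsilon$-contraction is evaluated with the convention $\varepsilon^{0123}=+1$, i.e. $\varepsilon_{0123}=-1$, which is an assumption made here; with the opposite convention the sign of the result flips:

```python
import numpy as np
from itertools import permutations

# Explicit Dirac-basis setup, used only for this numerical check.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b
ubar = lambda w: w.conj() @ g0

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
p = np.array([E, 0.0, 0.0, pz])
s4 = np.array([pz, 0.0, 0.0, E]) / m
u = lambda chi: np.sqrt(E + m) * np.concatenate(
    [np.asarray(chi, complex), (pz / (E + m)) * (sig[2] @ np.asarray(chi, complex))])
u_p, u_m = u([1, 0]), u([0, 1])
d_lo = np.array([-(ubar(u_p) @ (eta[i, i] * gam[i]) @ g5 @ u_m) / (4 * m)
                 for i in range(4)])
d = eta @ d_lo
d_st = d.conj()

# Metric decomposition (all indices raised): g = pp/m^2 - ss - 2(d* d + d d*).
g_rebuilt = (np.outer(p, p) / m**2 - np.outer(s4, s4)
             - 2 * (np.outer(d_st, d) + np.outer(d, d_st)))
metric_ok = np.allclose(g_rebuilt, eta)

# Expansion of an arbitrary vector q in the p-set basis.
qv = np.array([0.3, -1.1, 0.7, 0.2])
q_rebuilt = (dot(qv, p) / m**2 * p - dot(qv, s4) * s4
             - 2 * (dot(qv, d) * d_st + dot(qv, d_st) * d))
decomp_ok = np.allclose(q_rebuilt, qv)

# Handedness: eps_{mu nu alpha beta} d*^mu d^nu p^alpha s^beta = i m / 2.
eps = np.zeros((4, 4, 4, 4))
for pm in permutations(range(4)):
    inv = sum(pm[i] > pm[j] for i in range(4) for j in range(i + 1, 4))
    eps[pm] = -1.0 if inv % 2 == 0 else 1.0     # convention: eps_{0123} = -1
hand = np.einsum('abcd,a,b,c,d->', eps, d_st, d, p, s4)
hand_ok = np.isclose(hand, 0.5j * m)
```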
\section{II. Dirac equation in terms of Lorentz scalars and its solutions}
At this point, we can return to the calculation of Dirac bilinears. If one considers another Dirac spinor, for example some $U_{+}$ satisfying:
\begin{align}
\slashed{q}U_{+}=MU_{+}, \quad \gamma _{5}\slashed{r}U_{+}=U_{+},
\label{Ueqs}
\end{align}
using the decompositions of $q$ and $r$ in terms of the $p-set$ and the resolution of identity in terms of the $u,v$ spinors:
\begin{align}
1=\sum_{\sigma}\left( \frac{u_{\sigma}(p)\otimes \bar{u}_{\sigma}(p)}{2m}-\frac{v_{\sigma}(p)\otimes \bar{v}_{\sigma}(p)}{2m} \right)
\end{align}
one can construct two eigenvalue equations which involve the projections of $U_{+}$ on the $u,\, v$ spinors:
\begin{flushleft}
\begin{align}
& \begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix} = \begin{bmatrix}
\frac{q\cdot p}{m\, M} & 0 & \frac{q\cdot s}{M} & \frac{2q\cdot d}{M} \\
0 & \frac{q\cdot p}{m\, M} & \frac{-2q\cdot d^{*}}{M} & \frac{q\cdot s}{M} \\
\frac{-q\cdot s}{M} & \frac{2q\cdot d}{M} & \frac{-q\cdot p}{m\, M} & 0 \\
\frac{-2q\cdot d^{*}}{M} & \frac{-q\cdot s}{M} & 0 & \frac{-q\cdot p}{m\, M}
\end{bmatrix}
\begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix},
\end{align}
\begin{align}
& \begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix} = \begin{bmatrix}
-r\cdot s & 2r\cdot d & \frac{-r\cdot p}{m} & 0 \\
2r\cdot d^{*} & r\cdot s & 0 & \frac{r\cdot p}{m} \\
\frac{r\cdot p}{m} & 0 & r\cdot s & 2r\cdot d \\
0 & \frac{-r\cdot p}{m} & 2r\cdot d^{*} & -r\cdot s
\end{bmatrix}
\begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix}.
\end{align}
\end{flushleft}
The matrices appearing in the equations are nothing but $\slashed{q}$ and $\gamma_{5}\slashed{r}$ written in the basis of $u,v$ spinors. The solutions of these equations are then the eigenvectors of $\slashed{q}$ and $\gamma_{5}\slashed{r}$ written in this basis. The corresponding components are the projections of these spinors on the $u,v$ spinor basis, which are proportional to the scalar bilinears that we have been looking for.
In the $u,v$ spinor basis, one also has the following intuitive representation for $\gamma_{5}$ and $\beta $, where $\bar{w}\equiv w^{\dagger}\beta $:
\begin{equation}
\beta =\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1
\end{bmatrix}
,\,
\gamma_{5}= \begin{bmatrix}
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1 \\
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}.
\end{equation}
Here, one does not necessarily choose a coordinate system to define $\beta $. If one chooses a timelike unit vector $n$ as done in \cite{lorce}, one can simply define $\beta \equiv\slashed{n}$ and so $\bar{w}= w^{\dagger}\slashed{n}$, and $(\gamma_{\mu})^{\dagger}=\slashed{n}\gamma_{\mu}\slashed{n}$.
The eigenvectors of $\slashed{q}$ and $\gamma_{5}\slashed{r}$ written in the basis of $u,v$ spinors read:
\begin{align*}
& U_{+}=
\frac{1}{\sqrt{2m}}
\begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix}\nonumber \\
& = N' \begin{bmatrix}
\frac{1}{2}\left(1+\frac{q\cdot s\, r\cdot p}{m\,M + q\cdot p}-r\cdot s\right)\\
r\cdot d^{*}-\frac{q\cdot d^{*}\, r\cdot p}{m\,M + q\cdot p}\\
\frac{M\, r\cdot p + m\left(2q\cdot d\, r\cdot d^{*}-2q\cdot d^{*}\, r\cdot d-q\cdot s\right)}{2\left(m\,M + q\cdot p\right)}\\
-\frac{m\left(q\cdot d^{*}+q\cdot s\, r\cdot d^{*}-q\cdot d^{*}\, r\cdot s\right)}{m\,M + q\cdot p}
\end{bmatrix},
\end{align*}
\begin{align*}
& U_{-}=
\frac{1}{\sqrt{2m}}
\begin{bmatrix}
\bar{u}_{+}U_{-} \\
\bar{u}_{-}U_{-} \\
\bar{v}_{-}U_{-} \\
\bar{v}_{+}U_{-}
\end{bmatrix}\nonumber \\
& = N' \begin{bmatrix}
-r\cdot d+\frac{q\cdot d\, r\cdot p}{m\,M + q\cdot p}\\
\frac{1}{2}\left(1+\frac{q\cdot s\, r\cdot p}{m\,M + q\cdot p}-r\cdot s\right)\\
\frac{m\left(q\cdot d+q\cdot s\, r\cdot d-q\cdot d\, r\cdot s\right)}{m\,M + q\cdot p}\\
\frac{M\, r\cdot p + m\left(2q\cdot d\, r\cdot d^{*}-2q\cdot d^{*}\, r\cdot d-q\cdot s\right)}{2\left(m\,M + q\cdot p\right)}
\end{bmatrix},
\end{align*}
\begin{align*}
& V_{-}=
\frac{1}{\sqrt{2m}}
\begin{bmatrix}
\bar{u}_{+}V_{-} \\
\bar{u}_{-}V_{-} \\
\bar{v}_{-}V_{-} \\
\bar{v}_{+}V_{-}
\end{bmatrix}\nonumber \\
& = -N' \begin{bmatrix}
\frac{M\, r\cdot p + m\left(2q\cdot d\, r\cdot d^{*}-2q\cdot d^{*}\, r\cdot d-q\cdot s\right)}{2\left(m\,M + q\cdot p\right)}\\
\frac{m\left(q\cdot d^{*}+q\cdot s\, r\cdot d^{*}-q\cdot d^{*}\, r\cdot s\right)}{m\,M + q\cdot p}\\
\frac{1}{2}\left(1+\frac{q\cdot s\, r\cdot p}{m\,M + q\cdot p}-r\cdot s\right)\\
- r\cdot d^{*}+\frac{q\cdot d^{*}\, r\cdot p}{m\,M + q\cdot p}
\end{bmatrix},
\end{align*}
\begin{align}
& V_{+}=
\frac{1}{\sqrt{2m}}
\begin{bmatrix}
\bar{u}_{+}V_{+} \\
\bar{u}_{-}V_{+} \\
\bar{v}_{-}V_{+} \\
\bar{v}_{+}V_{+}
\end{bmatrix}\nonumber \\
& = -N' \begin{bmatrix}
-\frac{m\left(q\cdot d+q\cdot s\, r\cdot d-q\cdot d\, r\cdot s\right)}{m\,M + q\cdot p}\\
\frac{M\, r\cdot p + m\left(2q\cdot d\, r\cdot d^{*}-2q\cdot d^{*}\, r\cdot d-q\cdot s\right)}{2\left(m\,M + q\cdot p\right)}\\
r\cdot d-\frac{q\cdot d\, r\cdot p}{m\,M + q\cdot p}\\
\frac{1}{2}\left(1+\frac{q\cdot s\, r\cdot p}{m\,M + q\cdot p}-r\cdot s\right)
\end{bmatrix},
\end{align}
where
\begin{align}
& N'\left(q,r,p,s\right)\equiv \sqrt{2M} N\left(q,r,p,s\right) ,\nonumber \\
& N\left(q,r,p,s\right)\equiv \nonumber \\
& \frac{\left(m\,M + q\cdot p\right)}{\sqrt{m\, M\lbrace q\cdot s\, r\cdot p+\left(m\,M + q\cdot p\right)\left(1-r\cdot s\right)\rbrace} }.
\end{align}
Note that the dependence of the spinors on the Lorentz vectors is mostly suppressed for brevity.
In a sense, the above expressions for the $U,V$ spinors can be understood as being written in a covariantly defined spinor basis, and there still remains an algebraic freedom in defining the $U,V$ spinors.
This is a consequence of the fact that a Dirac spinor in the irreducible representation in $3+1$ dimensions involves $4$ complex (and equivalently $8$ real) functions (apart from normalization) to be calculated,
however Eq. (\ref{definingeq}) provides $4$ independent equations in total.
If we were to construct a $q-set$, which would also involve two other vectors, say $\Delta$ and $\Delta^{*}$, orthogonal to $q$ and $r$ and which would relate the spinors having the same energy eigenvalues but opposite spin eigenvalues,
we would be able to provide additional equations for the functions in the $U,V$ spinors. So, the still remaining algebraic freedom in the above expressions comes from the fact that we have not fixed $\Delta$ and $\Delta^{*}$. One can perform such fixing,
for example, by defining a Lorentz transformation relating the $p-set$ to the $q-set$.
The expressions above involve $p,s,d,d^{*}$ vectors; however, in practice one usually deals with $p,s,q,r$ vectors. One observes that $d$ and $d^{*}$ vectors can be written in terms of $p,s,q,r$ vectors; relevant details have been given in the Appendix. As a result, we can treat the above expressions as functions of $p,s,q,r$ vectors.
At this point, one can further simplify the notation. Regarding $d$ and $d^{*}$ vectors as functions of $p,s,q,r$ vectors, one defines:
\begin{align}
& A\left(q,r,p,s\right) \equiv \frac{1}{2}\left(1+\frac{q\cdot s\, r\cdot p}{m\,M + q\cdot p}-r\cdot s\right),\nonumber \\
& B\left(q,r,p,s\right) \equiv r\cdot d^{*}-\frac{q\cdot d^{*}\, r\cdot p}{m\,M + q\cdot p},\nonumber \\
& C\left(q,r,p,s\right) \nonumber \\
& \equiv \frac{M\, r\cdot p + m\left(2q\cdot d\, r\cdot d^{*}-2q\cdot d^{*}\, r\cdot d-q\cdot s\right)}{2\left(m\,M + q\cdot p\right)},\nonumber \\
& D\left(q,r,p,s\right) \equiv -\frac{m\left(q\cdot d^{*}+q\cdot s\, r\cdot d^{*}-q\cdot d^{*}\, r\cdot s\right)}{m\,M + q\cdot p},
\end{align}
and one obtains:
\begin{equation*}
\begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix}=
\sqrt{4mM}N\left(q,r,p,s\right) \begin{bmatrix}
A\left(q,r,p,s\right)\\
B\left(q,r,p,s\right)\\
C\left(q,r,p,s\right)\\
D\left(q,r,p,s\right)
\end{bmatrix},
\end{equation*}
\begin{equation*}
\begin{bmatrix}
\bar{u}_{+}U_{-} \\
\bar{u}_{-}U_{-} \\
\bar{v}_{-}U_{-} \\
\bar{v}_{+}U_{-}
\end{bmatrix}=
\sqrt{4mM}N\left(q,r,p,s\right) \begin{bmatrix}
-B^{*}\left(q,r,p,s\right)\\
A\left(q,r,p,s\right)\\
-D^{*}\left(q,r,p,s\right)\\
C\left(q,r,p,s\right)
\end{bmatrix},
\end{equation*}
\begin{equation*}
\begin{bmatrix}
\bar{u}_{+}V_{-} \\
\bar{u}_{-}V_{-} \\
\bar{v}_{-}V_{-} \\
\bar{v}_{+}V_{-}
\end{bmatrix}=
\sqrt{4mM}N\left(q,r,p,s\right) \begin{bmatrix}
-C\left(q,r,p,s\right)\\
D\left(q,r,p,s\right)\\
-A\left(q,r,p,s\right)\\
B\left(q,r,p,s\right)
\end{bmatrix},
\end{equation*}
\begin{equation}
\begin{bmatrix}
\bar{u}_{+}V_{+} \\
\bar{u}_{-}V_{+} \\
\bar{v}_{-}V_{+} \\
\bar{v}_{+}V_{+}
\end{bmatrix}
=\sqrt{4mM}N\left(q,r,p,s\right) \begin{bmatrix}
-D^{*}\left(q,r,p,s\right)\\
-C\left(q,r,p,s\right)\\
-B^{*}\left(q,r,p,s\right)\\
-A\left(q,r,p,s\right)
\end{bmatrix}.
\end{equation}
So, we have arrived at the covariant expressions for scalar bilinears in terms of $q$, $r$, $p$ and $s$ vectors. Using $w_{-\epsilon ,-\sigma} =-\epsilon \sigma \gamma_{5}w_{\epsilon ,\sigma}$, pseudoscalar structures can directly be obtained from the scalar ones:
\begin{align}
& \begin{bmatrix}
\bar{u}_{+}U_{+} \\
\bar{u}_{-}U_{+} \\
\bar{v}_{-}U_{+} \\
\bar{v}_{+}U_{+}
\end{bmatrix}=\begin{bmatrix}
-\bar{u}_{+}\gamma_{5}V_{-} \\
-\bar{u}_{-}\gamma_{5}V_{-} \\
-\bar{v}_{-}\gamma_{5}V_{-} \\
-\bar{v}_{+}\gamma_{5}V_{-}
\end{bmatrix};\;
\begin{bmatrix}
\bar{u}_{+}U_{-} \\
\bar{u}_{-}U_{-} \\
\bar{v}_{-}U_{-} \\
\bar{v}_{+}U_{-}
\end{bmatrix}=\begin{bmatrix}
\bar{u}_{+}\gamma_{5}V_{+} \\
\bar{u}_{-}\gamma_{5}V_{+} \\
\bar{v}_{-}\gamma_{5}V_{+} \\
\bar{v}_{+}\gamma_{5}V_{+}
\end{bmatrix}\nonumber \\
& \begin{bmatrix}
\bar{u}_{+}V_{-} \\
\bar{u}_{-}V_{-} \\
\bar{v}_{-}V_{-} \\
\bar{v}_{+}V_{-}
\end{bmatrix}=\begin{bmatrix}
-\bar{u}_{+}\gamma_{5}U_{+} \\
-\bar{u}_{-}\gamma_{5}U_{+} \\
-\bar{v}_{-}\gamma_{5}U_{+} \\
-\bar{v}_{+}\gamma_{5}U_{+}
\end{bmatrix};\;
\begin{bmatrix}
\bar{u}_{+}V_{+} \\
\bar{u}_{-}V_{+} \\
\bar{v}_{-}V_{+} \\
\bar{v}_{+}V_{+}
\end{bmatrix}=\begin{bmatrix}
\bar{u}_{+}\gamma_{5}U_{-} \\
\bar{u}_{-}\gamma_{5}U_{-} \\
\bar{v}_{-}\gamma_{5}U_{-} \\
\bar{v}_{+}\gamma_{5}U_{-}
\end{bmatrix}
\end{align}
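As a sanity check of the scalar bilinears obtained above, both sides can be evaluated numerically in a simple configuration. In the sketch below (illustrative only), the second fermion is taken at rest with spin along $z$, so that $q\cdot d=r\cdot d=0$, the $d$-dependent pieces of $A$ and $C$ drop out, and $B=D=0$; the projections $\bar{u}_{\pm}U_{+}$, $\bar{v}_{\mp}U_{+}$ are then compared entry by entry with $\sqrt{4mM}\,N\left[A,B,C,D\right]$:

```python
import numpy as np

# Explicit Dirac-basis setup, used only for this numerical check.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b
slash = lambda v: sum(eta[i, i] * v[i] * gam[i] for i in range(4))
ubar = lambda w: w.conj() @ g0

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
p = np.array([E, 0.0, 0.0, pz])
s4 = np.array([pz, 0.0, 0.0, E]) / m
u = lambda chi: np.sqrt(E + m) * np.concatenate(
    [np.asarray(chi, complex), (pz / (E + m)) * (sig[2] @ np.asarray(chi, complex))])
u_p, u_m = u([1, 0]), u([0, 1])
v_p, v_m = g5 @ u_m, -g5 @ u_p

# Second fermion: at rest with spin along z (an explicit choice for the check).
M = 1.0
q = np.array([M, 0.0, 0.0, 0.0])
r = np.array([0.0, 0.0, 0.0, 1.0])
U_p = np.sqrt(2 * M) * np.array([1, 0, 0, 0], dtype=complex)
eigen_ok = (np.allclose(slash(q) @ U_p, M * U_p)
            and np.allclose(g5 @ slash(r) @ U_p, U_p))

qp, qs, rp, rs = dot(q, p), dot(q, s4), dot(r, p), dot(r, s4)
A = 0.5 * (1 + qs * rp / (m * M + qp) - rs)
C = (M * rp - m * qs) / (2 * (m * M + qp))       # q.d = r.d = 0 here
N = (m * M + qp) / np.sqrt(m * M * (qs * rp + (m * M + qp) * (1 - rs)))
pref = np.sqrt(4 * m * M) * N

scalars_ok = (np.isclose(ubar(u_p) @ U_p, pref * A)
              and np.isclose(ubar(u_m) @ U_p, 0)          # B = 0 here
              and np.isclose(ubar(v_m) @ U_p, pref * C)
              and np.isclose(ubar(v_p) @ U_p, 0))         # D = 0 here
```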
\section{III. Calculating tensorial structures}
We can now proceed to the calculation of vector, axial-vector and anti-symmetric tensor structures. But before delving into the calculations, it is useful to note certain equalities involving vector and axial-vector structures. Using these equalities, one only needs to calculate 8 combinations out of the 32 possible ones.
\begin{align}
& \bar{u}_{+}\gamma_{\mu}U_{+}=-\bar{u}_{+}\gamma_{\mu}\gamma_{5}V_{-}=\bar{v}_{-}\gamma_{\mu}V_{-}=-\bar{v}_{-}\gamma_{\mu}\gamma_{5}U_{+},\nonumber \\
& \bar{u}_{+}\gamma_{\mu}U_{-}=\bar{u}_{+}\gamma_{\mu}\gamma_{5}V_{+}=-\bar{v}_{-}\gamma_{\mu}V_{+}=-\bar{v}_{-}\gamma_{\mu}\gamma_{5}U_{-},\nonumber \\
& \bar{u}_{+}\gamma_{\mu}V_{-}=-\bar{u}_{+}\gamma_{\mu}\gamma_{5}U_{+}=\bar{v}_{-}\gamma_{\mu}U_{+}=-\bar{v}_{-}\gamma_{\mu}\gamma_{5}V_{-},\nonumber \\
& \bar{u}_{+}\gamma_{\mu}V_{+}=\bar{u}_{+}\gamma_{\mu}\gamma_{5}U_{-}=-\bar{v}_{-}\gamma_{\mu}U_{-}=-\bar{v}_{-}\gamma_{\mu}\gamma_{5}V_{+},\nonumber \\
& \bar{u}_{-}\gamma_{\mu}U_{+}=-\bar{u}_{-}\gamma_{\mu}\gamma_{5}V_{-}=-\bar{v}_{+}\gamma_{\mu}V_{-}=\bar{v}_{+}\gamma_{\mu}\gamma_{5}U_{+},\nonumber \\
& \bar{u}_{-}\gamma_{\mu}U_{-}=\bar{u}_{-}\gamma_{\mu}\gamma_{5}V_{+}=\bar{v}_{+}\gamma_{\mu}V_{+}=\bar{v}_{+}\gamma_{\mu}\gamma_{5}U_{-},\nonumber \\
& \bar{u}_{-}\gamma_{\mu}V_{-}=-\bar{u}_{-}\gamma_{\mu}\gamma_{5}U_{+}=-\bar{v}_{+}\gamma_{\mu}U_{+}=\bar{v}_{+}\gamma_{\mu}\gamma_{5}V_{-},\nonumber \\
& \bar{u}_{-}\gamma_{\mu}V_{+}=\bar{u}_{-}\gamma_{\mu}\gamma_{5}U_{-}=\bar{v}_{+}\gamma_{\mu}U_{-}=\bar{v}_{+}\gamma_{\mu}\gamma_{5}V_{+}.
\end{align}
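The first chain of equalities can be spot-checked numerically for all $\mu$. The sketch below (illustrative only, with both momenta along $z$) uses $V_{-}=-\gamma_{5}U_{+}$ and $v_{-}=-\gamma_{5}u_{+}$, which follow from the $\gamma_{5}$ map between the energy branches:

```python
import numpy as np

# Explicit Dirac-basis setup, used only for this numerical check.
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array(a, dtype=complex) for a in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, a], [-a, Z2]]) for a in sig]
g5 = np.block([[Z2, I2], [I2, Z2]])
ubar = lambda w: w.conj() @ g0

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
u = lambda chi: np.sqrt(E + m) * np.concatenate(
    [np.asarray(chi, complex), (pz / (E + m)) * (sig[2] @ np.asarray(chi, complex))])
u_p = u([1, 0])
v_m = -g5 @ u_p

# Second fermion with mass M and momentum qz along z.
M, qz = 1.2, 0.5
Eq = np.sqrt(M**2 + qz**2)
U_p = np.sqrt(Eq + M) * np.array([1, 0, qz / (Eq + M), 0], dtype=complex)
V_m = -g5 @ U_p                       # gamma_5 map between energy branches

# Check ubar(u+) gamma^mu U+ = -ubar(u+) gamma^mu g5 V- = ubar(v-) gamma^mu V-
#       = -ubar(v-) gamma^mu g5 U+ for every mu.
chain_ok = all(
    np.allclose([-ubar(u_p) @ gam[mu] @ g5 @ V_m,
                 ubar(v_m) @ gam[mu] @ V_m,
                 -ubar(v_m) @ gam[mu] @ g5 @ U_p],
                ubar(u_p) @ gam[mu] @ U_p)
    for mu in range(4))
```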
A similar reasoning holds for higher rank tensor structures as well. Noting that $\sigma_{\mu \nu}\gamma_{5}=-\frac{1}{2}\epsilon_{\mu \nu \alpha \beta}\sigma^{\alpha \beta}$ \cite{zuber, lorce}, one notices that there are 4 independent structures out of 16:
\begin{align}
& \bar{u}_{+}\sigma_{\mu \nu}U_{+}=-\bar{u}_{+}\sigma_{\mu \nu}\gamma_{5}V_{-}=-\bar{v}_{-}\sigma_{\mu \nu}V_{-}=\bar{v}_{-}\sigma_{\mu \nu}\gamma_{5}U_{+},\nonumber \\
& \bar{u}_{+}\sigma_{\mu \nu}U_{-}=\bar{u}_{+}\sigma_{\mu \nu}\gamma_{5}V_{+}=\bar{v}_{-}\sigma_{\mu \nu}V_{+}=\bar{v}_{-}\sigma_{\mu \nu}\gamma_{5}U_{-},\nonumber \\
& \bar{u}_{-}\sigma_{\mu \nu}U_{+}=-\bar{u}_{-}\sigma_{\mu \nu}\gamma_{5}V_{-}=\bar{v}_{+}\sigma_{\mu \nu}V_{-}=-\bar{v}_{+}\sigma_{\mu \nu}\gamma_{5}U_{+},\nonumber \\
& \bar{u}_{-}\sigma_{\mu \nu}U_{-}=\bar{u}_{-}\sigma_{\mu \nu}\gamma_{5}V_{+}=-\bar{v}_{+}\sigma_{\mu \nu}V_{+}=-\bar{v}_{+}\sigma_{\mu \nu}\gamma_{5}U_{-}.
\end{align}
Now we can write down the independent vector, axial-vector and anti-symmetric tensor structures and calculate them. In the appendix we show that $d$ and $d^{*}$ can be written in terms of $p$, $s$, $q$ and $r$. So, we can expand the tensorial structures in terms of $p$, $s$, $d$ and $d^{*}$ and eliminate $d$ and $d^{*}$ from the expressions later. This approach is easier because $p$, $s$, $d$ and $d^{*}$ are orthogonal and so no matrix inversion will be necessary to calculate the coefficients in the expansions of the tensorial structures. For vector and axial-vector structures, one writes:
\begin{align}
\bar{u}_{\pm}(p)\gamma_{\mu}W_{\epsilon, \sigma}(q) & \equiv \sqrt{4mM}N\left(q,r,p,s\right)\times \nonumber \\
& \left( \frac{\alpha_{p}}{m} p_{\mu}-\alpha_{s}s_{\mu}-2\alpha_{d}d_{\mu}-2\alpha_{d^{*}}d^{*}_{\mu}\right),
\label{vectorcoef}
\end{align}
where $W_{\epsilon, \sigma}(q)$ is a spinor which satisfies $\slashed{q}W_{\epsilon, \sigma}(q)=\epsilon M W_{\epsilon, \sigma}(q)$ and $\gamma_{5}\slashed{r}W_{\epsilon, \sigma}(q)=\sigma W_{\epsilon, \sigma}(q)$. Then, one contracts this expression with $p$, $s$, $d$ and $d^{*}$ to get the unknown coefficients $\alpha_{p}$, $\alpha_{s}$, $\alpha_{d}$ and $\alpha_{d^{*}}$. The results$^{2}$\footnotetext[2]{Notice that some of the bilinear expressions appear to be already independent of $d$ and $d^{*}$.} of this procedure are presented in TABLE \ref{table1}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& $\alpha_{p}$ & $\alpha_{s}$ & $\alpha_{d}$ & $\alpha_{d^{*}}$ \\
\hline
$\bar{u}_{+}\gamma_{\mu}U_{+}$ & $A\left(q,r,p,s\right)$ & $-C\left(q,r,p,s\right)$ & $-D\left(q,r,p,s\right)$ & $0$ \\
\hline
$\bar{u}_{+}\gamma_{\mu}U_{-}$ & $-B^{*}\left(q,r,p,s\right)$ & $D^{*}\left(q,r,p,s\right)$ & $-C\left(q,r,p,s\right)$ & $0$ \\
\hline
$\bar{u}_{+}\gamma_{\mu}V_{-}$ & $-C\left(q,r,p,s\right)$ & $A\left(q,r,p,s\right)$ & $-B\left(q,r,p,s\right)$ & $0$ \\
\hline
$\bar{u}_{+}\gamma_{\mu}V_{+}$ & $-D^{*}\left(q,r,p,s\right)$ & $B^{*}\left(q,r,p,s\right)$ & $A\left(q,r,p,s\right)$ & $0$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
& $\alpha_{p}$ & $\alpha_{s}$ & $\alpha_{d}$ & $\alpha_{d^{*}}$ \\
\hline
$\bar{u}_{-}\gamma_{\mu}U_{+}$ & $B\left(q,r,p,s\right)$ & $D\left(q,r,p,s\right)$ & $0$ & $C\left(q,r,p,s\right)$ \\
\hline
$\bar{u}_{-}\gamma_{\mu}U_{-}$ & $A\left(q,r,p,s\right)$ & $-C\left(q,r,p,s\right)$ & $0$ & $-D^{*}\left(q,r,p,s\right)$ \\
\hline
$\bar{u}_{-}\gamma_{\mu}V_{-}$ & $D\left(q,r,p,s\right)$ & $B\left(q,r,p,s\right)$ & $0$ & $-A\left(q,r,p,s\right)$ \\
\hline
$\bar{u}_{-}\gamma_{\mu}V_{+}$ & $-C\left(q,r,p,s\right)$ & $A\left(q,r,p,s\right)$ & $0$ & $-B^{*}\left(q,r,p,s\right)$ \\
\hline
\end{tabular}
\caption{Expansion coefficients according to Eq. (\ref{vectorcoef}).}
\label{table1}
\end{table}
The same approach can be used for calculating the anti-symmetric tensor structures. In $3+1$ dimensions, an anti-symmetric tensor has $6$ independent components, and hence can be expanded as follows:
\begin{align}
& \frac{\bar{u}_{\bar{\sigma}}\sigma_{\mu \nu}U_{\sigma}}{\sqrt{4mM}N\left(q,r,p,s\right)}\equiv \nonumber \\
& \beta_{ps}\left(p_{\mu}s_{\nu}-p_{\nu}s_{\mu}\right)+ \beta_{dd^{*}}\left(d_{\mu}d^{*}_{\nu}-d_{\nu}d^{*}_{\mu}\right) \nonumber \\
& +\beta_{pd}\left(p_{\mu}d_{\nu}-p_{\nu}d_{\mu}\right)+\beta_{pd^{*}}\left(p_{\mu}d^{*}_{\nu}-p_{\nu}d^{*}_{\mu}\right) \nonumber \\
& +\beta_{sd}\left(s_{\mu}d_{\nu}-s_{\nu}d_{\mu}\right)+\beta_{sd^{*}}\left(s_{\mu}d^{*}_{\nu}-s_{\nu}d^{*}_{\mu}\right).
\label{astensor}
\end{align}
Contracting with each of the terms present in the expansion, one calculates the coefficients for the $4$ independent anti-symmetric tensor structures, which are presented in TABLE \ref{table2}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
& $\bar{u}_{+}\sigma_{\mu \nu}U_{+}$ & $\bar{u}_{+}\sigma_{\mu \nu}U_{-}$ \\
\hline
$\beta_{ps}$ & $\frac{iC\left(q,r,p,s\right)}{m}$ & $\frac{-iD^{*}\left(q,r,p,s\right)}{m}$ \\
\hline
$\beta_{dd^{*}}$ & $-2iA\left(q,r,p,s\right)$ & $2iB^{*}\left(q,r,p,s\right)$ \\
\hline
$\beta_{pd}$ & $\frac{2iD\left(q,r,p,s\right)}{m}$ & $\frac{2iC\left(q,r,p,s\right)}{m}$ \\
\hline
$\beta_{pd^{*}}$ & $0$ & $0$ \\
\hline
$\beta_{sd}$ & $2iB\left(q,r,p,s\right)$ & $2iA\left(q,r,p,s\right)$ \\
\hline
$\beta_{sd^{*}}$ & $0$ & $0$ \\
\hline \hline
& $\bar{u}_{-}\sigma_{\mu \nu}U_{+}$ & $\bar{u}_{-}\sigma_{\mu \nu}U_{-}$ \\
\hline
$\beta_{ps}$ & $\frac{iD\left(q,r,p,s\right)}{m}$ & $\frac{iC\left(q,r,p,s\right)}{m}$ \\
\hline
$\beta_{dd^{*}}$ & $2iB\left(q,r,p,s\right)$ & $2iA\left(q,r,p,s\right)$ \\
\hline
$\beta_{pd}$ & $0$ & $0$ \\
\hline
$\beta_{pd^{*}}$ & $\frac{-2iC\left(q,r,p,s\right)}{m}$ & $\frac{2iD^{*}\left(q,r,p,s\right)}{m}$ \\
\hline
$\beta_{sd}$ & $0$ & $0$ \\
\hline
$\beta_{sd^{*}}$ & $-2iA\left(q,r,p,s\right)$ & $2iB^{*}\left(q,r,p,s\right)$ \\
\hline
\end{tabular}
\caption{Expansion coefficients according to Eq. (\ref{astensor}).}
\label{table2}
\end{table}
\section{Relation with the literature}
The last issue to be addressed is how to relate these results to the conventional expressions found in the literature. To do this, we first recall a procedure that is widely used in the literature.
In the rest frame of the fermion, its momentum four-vector reduces to a vector which can be chosen as the time-like vector in a basis set. In the usual Minkowskian coordinates $t,x,y,z$, in the usual representation, one writes:
\begin{align}
p^{rest}=(m,0,0,0).
\end{align}
Similarly, one chooses a spatial direction, say, the $z$-axis, represented by the vector $(0,0,0,1)$, along which the spin projections in the rest frame are to be calculated. One then calculates the simultaneous eigenspinors of $m\gamma^{0}$ and $\gamma^{3}$, and boosts those spinors to a frame where the fermion has momentum four-vector $p$ using the projectors $\frac{\slashed{p}\pm m}{2m}$. The procedure can be followed in e.g. \cite{zuber}. A suitable choice for the four-vectors $s$ and $r$ in this procedure would be the following:
\begin{align}
& s\equiv \frac{1}{\sqrt{E_{p}^{2}-p_{z}^{2}}}(p_{z},0,0,E_{p}),\nonumber \\
& r\equiv \frac{1}{\sqrt{E_{q}^{2}-q_{z}^{2}}}(q_{z},0,0,E_{q}),
\end{align}
where $E_{p}^{2}\equiv m^{2}+\vec{p}^{2}$ and $E_{q}^{2}\equiv M^{2}+\vec{q}^{2}$. Clearly, each of these vectors reduces to $(0,0,0,1)$ in the rest frame of the corresponding fermion. Since the bilinear expressions calculated in this work are all expressed in terms of Lorentz scalars, these choices for the $s$ and $r$ vectors directly lead to expressions which can be calculated by the above-mentioned procedure using an explicit representation for the spinors. Of course, this choice for the $s$ and $r$ vectors is not unique; there are alternative choices in the literature which all reduce to $(0,0,0,1)$ in the rest frames of the corresponding fermions. For example, in \cite{zuber}, the helicity basis has also been introduced:
\begin{align}
s_{p}\equiv \left(\frac{|\vec{p}|}{m},\frac{E_{p}}{m}\frac{\vec{p}}{|\vec{p}|}\right).
\end{align}
Such a choice will directly lead one to the expressions which can be obtained by beginning the calculations with the corresponding explicit representations in the helicity basis. So, if one needs to relate our results to a specific coordinate system and a specific explicit representation, one's specification of $s$ and $r$ like the above will be sufficient.
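As a quick numerical sanity check (added here for illustration, with an arbitrary momentum), one can verify that the helicity-basis vector $s_{p}$ above is unit spacelike and orthogonal to $p$, as required of a spin vector.

```python
import math

def mdot(a, b):
    # Minkowski inner product, signature (+, -, -, -)
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

m = 1.0
p3 = (0.3, -0.4, 1.2)                       # arbitrary spatial momentum
E = math.sqrt(m**2 + sum(c*c for c in p3))
mag = math.sqrt(sum(c*c for c in p3))
p = (E, *p3)
sp = (mag/m, *[(E/m)*(c/mag) for c in p3])  # helicity-basis spin vector

assert abs(mdot(sp, sp) + 1.0) < 1e-12      # unit spacelike
assert abs(mdot(sp, p)) < 1e-12             # orthogonal to p
print("helicity-basis s_p checks passed")
```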
As an example, we can present the conventions adopted in \cite{zuber} and explicitly calculate one of the bilinears. In \cite{zuber}, the following representation of the Dirac matrices is used:
\begin{align}
\gamma ^{0}=\begin{bmatrix}
I & 0 \\
0 & -I
\end{bmatrix},\; \gamma ^{i}=\begin{bmatrix}
0 & \sigma^{i} \\
-\sigma^{i} & 0
\end{bmatrix},\; \gamma _{5}=\begin{bmatrix}
0 & I \\
I & 0
\end{bmatrix},
\end{align}
where $\sigma^{i}\, (i=1,2,3)$ are the Pauli matrices and $I$ is the $2\times 2$ identity matrix. One calculates the Dirac spinors by first calculating the rest frame spinors and then boosting them to a frame in which the fermion has 4-momentum $p$:
\begin{align}
u^{rest}_{+}=\begin{bmatrix}
1 \\
0 \\
0 \\
0 \\
\end{bmatrix},\; u^{rest}_{-}=\begin{bmatrix}
0 \\
1 \\
0 \\
0 \\
\end{bmatrix},\; v^{rest}_{-}=\begin{bmatrix}
0 \\
0 \\
1 \\
0 \\
\end{bmatrix},\; v^{rest}_{+}=\begin{bmatrix}
0 \\
0 \\
0 \\
1 \\
\end{bmatrix},
\end{align}
then:
\begin{align}
u_{\pm}(p)=\frac{(\slashed{p}+m)}{\sqrt{E_{p}+m}}u^{rest}_{\pm},\; v_{\pm}(p)=\frac{(-\slashed{p}+m)}{\sqrt{E_{p}+m}}v^{rest}_{\pm}.
\end{align}
Here, one calculates the $s$ vector to be equal to:
\begin{align}
s & =\frac{1}{m(m+E_{p})}
\nonumber \\
& \times \left( p_{z}(m+E_{p}),\, p_{z}p_{x}\, ,p_{z}p_{y}\, ,p_{z}^{2}+m(m+E_{p}) \right).
\end{align}
Similarly, one can construct the spinors $U_{\pm}(q),V_{\pm}(q)$ and calculate:
\begin{align}
r & =\frac{1}{M(M+E_{q})}
\nonumber \\
& \times \left( q_{z}(M+E_{q}),\, q_{z}q_{x}\, ,q_{z}q_{y}\, ,q_{z}^{2}+M(M+E_{q}) \right).
\end{align}
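These explicit vectors can be checked numerically. The sketch below (an added illustration with an arbitrary momentum) verifies that $s$ is unit spacelike, orthogonal to $p$, and reduces to $(0,0,0,1)$ in the rest frame.

```python
import math

def mdot(a, b):
    # Minkowski inner product, signature (+, -, -, -)
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

def spin_vector(m, p_vec):
    # s = (p_z(m+E_p), p_z p_x, p_z p_y, p_z^2 + m(m+E_p)) / (m(m+E_p))
    px, py, pz = p_vec
    E = math.sqrt(m**2 + px**2 + py**2 + pz**2)
    norm = 1.0 / (m * (m + E))
    return (norm*pz*(m + E), norm*pz*px, norm*pz*py, norm*(pz**2 + m*(m + E)))

m = 1.0
p_vec = (0.3, -0.4, 1.2)                       # arbitrary spatial momentum
p = (math.sqrt(m**2 + sum(c*c for c in p_vec)), *p_vec)
s = spin_vector(m, p_vec)

assert abs(mdot(s, s) + 1.0) < 1e-12           # unit spacelike: s.s = -1
assert abs(mdot(s, p)) < 1e-12                 # orthogonal to the momentum
assert spin_vector(m, (0.0, 0.0, 0.0)) == (0.0, 0.0, 0.0, 1.0)  # rest frame
print("s vector checks passed")
```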
In order to compare our approach with the existing conventions in the literature, we can calculate $\bar{u}_{+}(p)U_{+}(q)$ both by using the above calculated vectors and our covariant expression $A(q,r,p,s)$. Indeed, one observes that:
\begin{align}
& \sqrt{4m M}N(q,r,p,s)A(q,r,p,s) =\nonumber \\
& \sqrt{\left(m M+q\cdot p\right)\left(1-r\cdot s\right)+q\cdot s\, r\cdot p};\nonumber \\
\nonumber \\
& \bar{u}_{+}(p)U_{+}(q) = \nonumber \\
& \frac{\left(E_{p}+m\right)\left(E_{q}+M\right)-\vec{p}\cdot \vec{q}+i\left(p_{x}q_{y}-p_{y}q_{x}\right)}{\sqrt{\left(E_{p}+m\right)\left(E_{q}+M\right)}},
\end{align}
where the second equality arises from the above-mentioned conventions of \cite{zuber}. When one calculates the absolute square of the second relation, one arrives at the absolute square of the first relation. This implies that our results are in agreement with the representation given in \cite{zuber} up to a phase. Notice that the phase of the square root is not specified in our expression, which allows us to conclude that our result reproduces the explicit expression obtained using the representation of \cite{zuber}.
\section{Conclusion}
In conclusion, we have achieved our goal of writing down the Dirac bilinears purely in terms of Lorentz scalars composed of the vectors $p$, $s$, $q$ and $r$. The lengthy appearance of the expressions is a consequence of the fact that they have been calculated for the most general case. Whenever it is possible to impose further constraints on the vectors $p$, $s$, $q$ and $r$ (for example, in cases where one may choose $r=s$), the expressions can obviously be simplified. In addition, the need for specifying a timelike vector indicating the timelike direction in the foliation of spacetime (as needed in \cite{lorce}) no longer arises in our approach; this is an advantage of our approach over that presented in \cite{lorce}. Furthermore, we observe that all bilinears can be expressed in terms of the four complex functions $A(q,r,p,s), \, B(q,r,p,s), \, C(q,r,p,s), \, D(q,r,p,s)$. This is another advantage of our approach, which should prove useful in more specific problems.
\section{Appendix}
The only conditions on the vectors $d$ and $d^{*}$ are:
\begin{itemize}
\item that they span a subspace orthogonal to that spanned by $p$ and $s$;
\item that they are null vectors; and
\item $d\cdot d^{*}=-1/2$.
\end{itemize}
Apart from these conditions, they are arbitrary. Above, we have argued that they can be written in terms of two arbitrary spacelike vectors $n_{1}$ and $n_{2}$ as $\left(n_{1}\pm i n_{2}\right)/2$. In this appendix, we propose an approach for choosing the vectors $n_{1}$ and $n_{2}$.
When one wishes to calculate the above mentioned bilinears, one obtains two more vectors, $q$ and $r$. As long as the set $p,s,q,r$ is linearly independent, one can make use of $q$ and $r$ to choose $n_{1}$ and $n_{2}$. One can begin with defining the following vector:
\begin{align}
q_{\perp \mu}\equiv q_{\mu}-\frac{q\cdot p}{m^{2}}p_{\mu}+\left(q\cdot s\right) s_{\mu}.\nonumber
\end{align}
This vector is orthogonal to both $p$ and $s$, and is spacelike. One defines, say,
\begin{align*}
n_{1\mu}\equiv \frac{q_{\perp \mu}}{\sqrt{-q_{\perp}^{2}}}.
\end{align*}
The other one can be defined as
\begin{align*}
n_{2\mu }\equiv C \epsilon_{\mu \nu \alpha \beta}n_{1}^{\nu}p^{\alpha}s^{\beta},
\end{align*}
where $C$ is a normalization factor such that $n_{2}^{2}=-1$. Notice that $n_{2}$ is also spacelike. Such a choice is further convenient since $n_{1}$ and $n_{2}$ are also orthogonal to each other. Then, one can define
\begin{align*}
d\equiv \frac{n_{1}-in_{2}}{2}\Rightarrow d^{*}\equiv \frac{n_{1}+in_{2}}{2}.
\end{align*}
This approach works even when only $3$ of the vectors of the set $p,s,q,r$ are independent. For example, when $r_{\perp}=0$ but $q_{\perp}\neq 0$, the above approach works. When $q_{\perp}=0$ but $r_{\perp}\neq 0$, one can define
\begin{align}
r_{\perp \mu}\equiv r_{\mu}-\frac{r\cdot p}{m^{2}}p_{\mu}+\left(r\cdot s\right) s_{\mu}, \nonumber
\end{align}
and perform the same procedure using $r_{\perp}$. Only when both $q_{\perp}=0$ and $r_{\perp}=0$ do the vectors $d$ and $d^{*}$ remain arbitrary. In this case, $q$ and $r$ can only be vectors lying in the subspace spanned by $p$ and $s$, and this fact can be used for calculating the bilinear structures, since, obviously, $q\cdot d=q\cdot d^{*}=0$ and $r\cdot d=r\cdot d^{*}=0$ in this case.
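The whole construction can be verified numerically. The following sketch (added for illustration; the masses, momenta, and spin-vector choice are arbitrary) builds $q_{\perp}$, $n_{1}$, $n_{2}$ and then $d$, $d^{*}$, and checks the defining conditions.

```python
import math

def mdot(a, b):
    # Minkowski inner product, signature (+, -, -, -)
    return a[0]*b[0] - sum(x*y for x, y in zip(a[1:], b[1:]))

def levi_civita(mu, nu, al, be):
    # totally antisymmetric symbol with eps(0,1,2,3) = +1
    seq = [mu, nu, al, be]
    if sorted(seq) != [0, 1, 2, 3]:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

# Illustrative kinematics: masses, momenta and the spin vector are arbitrary.
m, M = 1.0, 2.0
p3, q3 = (0.2, -0.1, 0.5), (-0.3, 0.7, 0.1)
p = [math.sqrt(m**2 + sum(c*c for c in p3)), *p3]
q = [math.sqrt(M**2 + sum(c*c for c in q3)), *q3]
norm = math.sqrt(p[0]**2 - p[3]**2)
s = [p[3]/norm, 0.0, 0.0, p[0]/norm]          # s.s = -1, s.p = 0

# q_perp: component of q orthogonal to the subspace spanned by p and s
q_perp = [q[i] - mdot(q, p)/m**2*p[i] + mdot(q, s)*s[i] for i in range(4)]
n1 = [c/math.sqrt(-mdot(q_perp, q_perp)) for c in q_perp]

# n2_mu = eps_{mu nu a b} n1^nu p^a s^b, index raised, then normalized
low = [sum(levi_civita(mu, nu, a, b)*n1[nu]*p[a]*s[b]
           for nu in range(4) for a in range(4) for b in range(4))
       for mu in range(4)]
n2 = [low[0], -low[1], -low[2], -low[3]]
n2 = [c/math.sqrt(-mdot(n2, n2)) for c in n2]

d = [(x - 1j*y)/2 for x, y in zip(n1, n2)]
ds = [(x + 1j*y)/2 for x, y in zip(n1, n2)]

assert abs(mdot(d, d)) < 1e-12                # d is a null vector
assert abs(mdot(d, ds) + 0.5) < 1e-12         # d . d* = -1/2
assert abs(mdot(d, p)) < 1e-12 and abs(mdot(d, s)) < 1e-12  # orthogonality
print("d and d* satisfy the stated conditions")
```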
So, we conclude that one can eliminate $d$ and $d^{*}$ from the bilinear expressions using the above approach, or using any other approach which is consistent with the conditions on $d$ and $d^{*}$.
\section{Deep Impact Framework}
\boldparagraph{Document Expansion.}
In our approach, we leverage \textsf{DocT5Query}\ document expansion to enrich the original document collection with expansion terms. As noted by \citet{doc2query}, document expansion can be seen as a two-fold approach. By adding terms that are already part of the document, it rewrites their frequencies, similar to \textsf{DeepCT}\xspace. Furthermore, it injects into the passage new terms, originally not part of the document, in order to address the term mismatch problem. We refer to the two as \textit{Rewrite} and \textit{Inject}, respectively.
Table~\ref{tab:dtfq} summarizes
the effect of \textsf{DocT5Query}\ when applied to the MSMARCO passage ranking collection, and isolates the two contributions.
While \textit{Rewrite} alone
achieves stronger MRR@10 than \textit{Inject}, the latter achieves higher recall. Using both significantly outperforms either one on both measures. Indeed, \textit{Inject} is important for capturing additional results, but \textit{Rewrite} is needed to then properly weight the injected terms. However, the comparison of \textit{Rewrite} vs. \textsf{DeepCT}\xspace indicates that \textsf{DocT5Query}\ is still sub-optimal in determining the right frequencies, and resulting impact scores, for the terms.
This motivates our approach, \textsf{DeepImpact}, where we first use the \textit{Inject} step of \textsf{DocT5Query}\ to add new terms, and then directly learn the right impact scores for both old and newly injected terms.
\begin{table}[h]
\renewcommand{\arraystretch}{0.7}
\caption{Different contributions to effectiveness metrics of \textsf{DocT5Query}\ on the MSMARCO passage ranking collection.}
\centering
\begin{tabular}{l ccccc}
\toprule
& \multirow{2}{*}{\textsf{BM25}\xspace} & \multirow{2}{*}{\textsf{DeepCT}\xspace} & \multicolumn{3}{c}{\textsf{DocT5Query}} \\
\cmidrule(lr){4-6}
& & &\multicolumn{1}{c}{Cumulative} & \multicolumn{1}{c}{\textit{Rewrite}} & \multicolumn{1}{c}{\textit{Inject}} \\
\midrule
MRR@10 & 0.188 & 0.244 & 0.278 & 0.215 & 0.194 \\
Recall & 0.858 & 0.910 & 0.947 & 0.878 & 0.912 \\
\bottomrule
\end{tabular}
\label{tab:dtfq}
\end{table}
\boldparagraph{Neural Network Architecture.}
\looseness -1 The overall architecture of the \textsf{DeepImpact}\ neural network is depicted in Figure~\ref{fig:architecture}. \textsf{DeepImpact}{} feeds a contextual LM encoder the original document terms (in white) and the injected expansion terms (in gray), separating both by a \texttt{[SEP]} separator token to distinguish both contexts. The LM encoder produces an embedding for each input term. The first occurrence of each \textit{unique} term is provided as input to the impact score encoder, which is a two-layer MLP with ReLU activations. This produces a single-value score for each unique term in the document, representing its impact. Given a query $q$, we model the score of document $d$ as simply the sum of impacts for the intersection of terms in $q$ and $d$.
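As a rough illustration of this scoring scheme (not the actual implementation: the contextual LM encoder is replaced by fixed stand-in embeddings, and all sizes, names, and weights below are invented for the sketch), per-term impacts come from a small two-layer MLP with ReLU, and the query-document score is the sum of impacts over matching terms.

```python
import random

random.seed(0)
DIM, HID = 4, 3   # toy sizes; the actual model uses BERT-base embeddings

# Hypothetical stand-in for the contextual LM: one embedding per unique term
def fake_lm_embeddings(terms):
    return {t: [random.uniform(-1, 1) for _ in range(DIM)] for t in terms}

# Impact score encoder: two-layer MLP with ReLU, embedding -> single score
W1 = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(HID)]
W2 = [random.uniform(-1, 1) for _ in range(HID)]

def impact(embedding):
    hidden = [max(0.0, sum(w*x for w, x in zip(row, embedding))) for row in W1]
    return sum(w*h for w, h in zip(W2, hidden))

def score(query_terms, doc_impacts):
    # query-document score: sum of impacts over the intersection of terms
    return sum(doc_impacts.get(t, 0.0) for t in set(query_terms))

doc = "cheap flights to rome airline tickets".split()  # original + injected
doc_impacts = {t: impact(e) for t, e in fake_lm_embeddings(set(doc)).items()}
print(score(["flights", "rome"], doc_impacts))
```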
\begin{figure}
\includegraphics[width=0.7\linewidth]{arch3.pdf}
\caption{Neural network architecture of \textsf{DeepImpact}.}\label{fig:architecture}
\vspace{-3mm}
\end{figure}
\boldparagraph{Network Training.}
We train our model using triples sampled from the official MS-MARCO training dataset, consisting of a query, a relevant passage, and a presumed non-relevant passage per sample. We expand each passage using the \textsf{DocT5Query}\ as discussed.
The model converts each document into a list of scores, corresponding to the document terms matching the query. These scores are then summed up, obtaining an accumulated query-document score. For each triple, two scores for the corresponding two documents are computed. The model is optimized via pairwise softmax cross-entropy loss over the computed scores of the documents. We use BERT-base as the contextualized language model. Max input text length was set to 160 tokens. Losses are back-propagated through the whole \textsf{DeepImpact}\ neural model with a learning rate of $3 \times 10^{-6}$ with the Adam optimizer. We used batches of 32 triples and train for 100,000 iterations.
\boldparagraph{Impact Scores Computation.}
Following the training phase, \textsf{DeepImpact}\ can leverage the learned term-weighting scheme to predict the semantic importance of each token of the documents without the need
for queries. Each document is represented as a list of term-score pairs, which are converted into an inverted index. The index can then be deployed and searched as usual for efficient query processing. We infer the scores using three digits of precision, and we do not perform any scaling.
\boldparagraph{Quantization and Query Processing.}
In our approach we predict real-valued document-term scores, also called impact scores, that we store in the inverted index. Since storing a floating point value per posting would blow up the space requirements of the inverted index, we decided to store impacts in a quantized form. The quantized impact scores belong to the range of $[1,2^b-1]$, where $b$ is the number of bits used to store each value. We experimented with $b = 8$ using linear quantization, and did not notice any loss in precision w.r.t. the original scores. Since we quantized all the scores in the index in the same way, to compute a query-document score at query processing we can just sum up all the quantized scores of the document terms matching the query. %
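A minimal sketch of such a linear quantizer (scaling by the global maximum score is one simple choice for illustration; the exact scaling is not spelled out above):

```python
def quantize(scores, bits=8):
    # Linearly map positive impact scores into the integer range [1, 2^bits - 1]
    top = max(scores)
    levels = 2**bits - 1
    return [max(1, round(s/top*levels)) for s in scores]

impacts = [0.013, 1.97, 3.142, 0.5]
print(quantize(impacts))  # [1, 160, 255, 41]
```

Because every posting is scaled the same way, summing the quantized values preserves the ranking induced by the real-valued scores up to rounding.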
\section{Experimental Results}
In this section, we analyze the performance of the proposed method with an extensive experimental evaluation in a realistic and reproducible setting, using state-of-the-art baselines and a standard test collection and query logs.
\boldparagraph{Hardware.}
To evaluate the latency, we use a single core of a machine
with four Intel Xeon Platinum 8268 CPUs and 369 GB of RAM, running Linux 4.18.
To run \textsf{ColBERT}\xspace\ a GPU is required, and we used an NVIDIA RTX8000 with 48GB of memory.
\boldparagraph{Dataset and query logs.}
We conduct our experiments on the MSMARCO passage ranking \cite{msmarco} dataset.
To evaluate query processing effectiveness and efficiency, we compare with existing methods using the \textsf{MSMARCO Dev Queries}\xspace{},\footnote{We have made a submission to the official leaderboard and obtained an MRR@10 of 0.318 on the ``eval'' queries.} and we test all methods on the \textsf{TREC 2019}\xspace{}~\cite{trec2019} and \textsf{TREC 2020}\xspace{}~\cite{trec2020} queries from the TREC Deep Learning passage ranking track.
\boldparagraph{Baselines.}
We perform two different sets of experiments. Our initial experiment aims at comparing the performance of \textsf{DeepImpact}\ as a first-stage ranker, processing queries on inverted indexes but without complex reranking. In this experiment we compare our proposed \textsf{DeepImpact}\ with the classical \textsf{BM25}\xspace\ relevance model over the unmodified collection, and state-of-the-art solutions dealing with inverted indexes, namely \textsf{DeepCT}\xspace, and \textsf{BM25}\xspace\ over a collection expanded with \textsf{DocT5Query}.
We do not compare with \textsf{DeepCT}\xspace\ over the collection expanded with \textsf{DocT5Query}, since that would involve training a new \textsf{DeepCT}\xspace{} model from scratch to learn how to weigh expanded documents.
Our second set of experiments compares \textsf{DeepImpact}\ in a re-ranking setting. First, the top 1000 documents retrieved by \textsf{DeepImpact}\ are re-ranked by \textsf{EPIC}\ and \textsf{ColBERT}\xspace and compared to \textsf{ColBERT}\xspace end-to-end (E2E) where the candidates are generated using ANN search. Finally, we look at first-stage recall and re-ranking-stage MRR@10 when applying \textsf{ColBERT}\xspace\ at several first-stage cutoffs to different candidate generation methods.
\boldparagraph{Implementations.}
We use Anserini~\cite{anserini} to generate the inverted indexes of the collections. We then export the Anserini indexes using the CIFF common index file format \cite{ciff}, and process them with PISA~\cite{pisa} using the MaxScore query processing algorithm~\cite{maxscore}.
We use the \textsf{BM25}\xspace{} scoring method provided by Anserini.
For \textsf{DeepCT}\xspace{}, we used the source code and data\footnote{\url{https://github.com/jmmackenzie/term-weighting-efficiency}} provided by \citet{deepct-efficiency}.
For \textsf{DocT5Query}{} we use the predicted queries available online\footnote{https://github.com/castorini/docTTTTTquery}, using 40 concatenated predictions for each passage in the corpus, as recommended by \citet{docTTTTTquery}.
We use the \textsf{EPIC}{} implementation in OpenNIR~\cite{onir} and the official pretrained model\footnote{\url{https://github.com/Georgetown-IR-Lab/epic-neural-ir}}.
We use the \textsf{ColBERT}\xspace{} implementation\footnote{\url{https://github.com/stanford-futuredata/ColBERT}} provided by \citet{colbert}, trained for 200k iterations.
Both training and indexing tasks of \textsf{DeepImpact}\ are implemented in Python. After the quantization step, the documents are indexed directly by PISA.
Query processing efficiency is measured using PISA for all baselines. Query processing is performed using MaxScore to retrieve the top $1000$ documents. %
Our source code is publicly available\footnote{\url{https://github.com/DI4IR/SIGIR2021}}.
\boldparagraph{Metrics.}
To measure effectiveness, we use the official metrics for each query set, mean reciprocal rank (MRR@10) for MSMARCO queries, and normalised discounted cumulative gain (NDCG@10) as well as mean average precision (MAP) for TREC queries, following~\cite{hofstatter2020local}. We also report recall on the first stage and MRR@10 on the re-ranking stage at different cutoff values. Finally, we compute the mean response time (MRT) for every query processing strategy, in ms.
We conduct Bonferroni corrected pairwise t-tests, and report significance with $p < 0.05$.
\boldparagraph{Overall comparison.}
Our first experiment aims to show the early-stage effectiveness improvements that \textsf{DeepImpact}\ achieves when compared to prior work.
The results are presented in Table~\ref{tab:overall}, which shows effectiveness and efficiency for the three query logs on MS MARCO. We retrieve the top $1000$ documents for each query, without re-ranking, and report the values of NDCG@10, MRR@10, and MAP, as well as MRT.
\textsf{DeepImpact}\ significantly outperforms all methods and is statistically significantly better than other strategies for all effectiveness metrics on the \textsf{MSMARCO Dev Queries}\xspace. For the \textsf{TREC 2019}\xspace\ and \textsf{TREC 2020}\xspace\ queries, \textsf{DeepImpact}\ is always better than the competitors, with statistically significant improvements on NDCG@10 and MAP in some cases. Statistical significance on the latter two query traces is limited by their relatively small number of queries.
We also see that the mean response time of \textsf{DeepImpact}\ exceeds that of the other methods. We trace this to the query processing strategy: the distribution of scores induced by BM25, used in \textsf{BM25}\xspace, \textsf{DeepCT}\xspace, and \textsf{DocT5Query}\ is exploited more efficiently by the MaxScore algorithm. In contrast, \textsf{DeepImpact}\ learns new scores, whose distribution is not efficiently exploited by MaxScore. We performed additional experiments using disjunctive query processing without optimizations, omitted due to space limitations. These experiments show \textsf{DeepImpact}\ to be in line with the speed of the other approaches. Optimizing the query processing speed of \textsf{DeepImpact}{} is an interesting open problem for future research.
\input{figures/effectiveness}
\input{figures/reranking}
\boldparagraph{Re-ranking evaluation.}
Table~\ref{tab:reranking} shows the effect of re-ranking the top $1000$ candidates produced by \textsf{DeepImpact}\ using two complex re-rankers, \textsf{EPIC}\ and \textsf{ColBERT}\xspace. The table also shows, as a comparison, the performance obtained by \textsf{ColBERT}\xspace\ when used end-to-end by employing ANN search as the first-stage retrieval mechanism. \textsf{DeepImpact}\ followed by a \textsf{ColBERT}\xspace\ re-ranker obtains higher effectiveness values than \textsf{ColBERT}\xspace\ {\sf E2E} on all query sets. Moreover, \textsf{DeepImpact}\ + \textsf{ColBERT}\xspace\ exhibits a $4.4\times - 5.1\times$ speedup w.r.t. \textsf{ColBERT}\xspace\ {\sf E2E}.
\boldparagraph{First-stage cutoff evaluation.}
\textsf{DeepImpact}\ is able to achieve statistically significant higher recall than all the compared methods (with one single exception at cutoff 1000). In particular, Table~\ref{table:recall} shows that the gap with the other methods is greater with smaller cutoff values, which reduces the re-ranking cost and thus could enable the use of more complex pairwise ranking models, such as DuoBERT \cite{duobert}. In re-ranking, \textsf{DeepImpact}\ outperforms all other methods at cutoff 10. Moreover, it outperforms \textsf{DeepCT}\xspace on all cutoff values except 1000, and it is comparable with \textsf{DocT5Query}.
\input{figures/recall}
\section{Introduction}
Modern search engines employ complex, machine-learned ranking functions to retrieve the most relevant documents for a query. Recently, the development of pre-trained contextualized language models such as BERT~\cite{devlin2018bert} has resulted in impressive benefits in search effectiveness, at the cost of expensive query processing times, which can make their deployment in production scenarios challenging. \citet{nogueira2019bertranker} and \citet{cedr} showed the superior performance of BERT in term of effectiveness for passage and document re-ranking tasks, respectively, by fine-tuning the pre-trained transformer network to distinguish between relevant and non-relevant query--document pairs.
However, several recent studies~\cite{cedr,Hofsttter2019LetsMR} have shown that this can have very high computational cost, even if re-ranking just the top 1000 results.
Other studies~\cite{epic,prettr,colbert} proposed methods with lower computational cost but typically some loss in retrieval quality. BERT's Transformer encoder is composed of many neural layers performing expensive processing to compute the query-document relevance signals. Different solutions have been proposed to address this performance bottleneck, based on the pre-computation of query-document representations produced by BERT. \textsf{EPIC}~\cite{epic} proposes to build on top of BERT a new ranking model trained to generate query and document representations in a given fixed-length vector space, equal to the size of the lexicon. Document representations are pre-computed, while query representations are computed at retrieval time, and then used to obtain a ranking score by computing a similarity between the two representations. %
\textsf{PreTTR}~\cite{prettr} and \textsf{ColBERT}\xspace~\cite{colbert} experimentally show that the query-document interactions in most layers of BERT have little impact on the final effectiveness. This leads them to pre-compute document representations at indexing time, which are used at query processing time to compute the query-document interaction only in a final layer. While \textsf{PreTTR}\ still relies upon a first-stage candidate generation based on BM25, \textsf{ColBERT}\xspace investigates the ability of the pre-computed document representations to identify relevant documents among \textit{all} documents in the index. Due to space/time requirements of the document representation, \textsf{ColBERT}\xspace leverages approximate nearest neighbor (ANN) search applied to dense representations as a first-stage retrieval system, followed by an exact re-ranking stage, while similar approaches using exact nearest neighbor search~\cite{xiong2020approximate} can perform processing in a single stage.
Following a different paradigm, \citet{deepct} investigated the use of the contextual word representations from BERT to generate more effective document term weights for bag-of-words retrieval. \textsf{DeepCT}\xspace~\cite{deepct}, for passages, and \textsf{HDCT}\xspace~\cite{deepct2}, for documents, estimate a term's context-specific importance in each passage/document, by projecting each word’s BERT representation into a single term weight. These term weights are then transformed into term frequency-like integer values that can be stored in an inverted index to be used with classical retrieval models. A main limitation of \textsf{DeepCT}\xspace\ that we address in this work is that it is trained as a \textit{per-token regression task}, in which a ground truth term weight for every word is needed, and which does not permit the individual impact scores to co-adapt for the downstream objective of identifying relevant documents. %
By storing new integer values as term frequencies in the inverted index, \textsf{DeepCT}\xspace and \textsf{HDCT}\xspace enrich a document's bag-of-words representation with additional document-level context information, to match queries more accurately. Using a different approach, \citet{docTTTTTquery} propose \textsf{DocT5Query}{}, a document expansion strategy to enrich each document with additional terms able to improve the retrieval effectiveness of documents w.r.t. queries for which they are relevant. \textsf{DocT5Query}{} trains a sequence-to-sequence model to predict queries potentially relevant to a given document, and appends these queries to the documents before indexing. As another way of expanding documents, the very recent {\sf SparTerm}~\cite{bai2020sparterm} method predicts an importance score for every term in the vocabulary and uses a gating mechanism to only keep a sparse subset of those, using them to learn an \textit{end-to-end} score for relevant and non-relevant documents. However, this only increases the MRR@10 of \textsf{DocT5Query}{} from 0.277 to 0.279.
\looseness -1 We propose \textsf{DeepImpact}, a more effective approach for learning a relevance score contribution for term-document pairs that can also be stored in a classical inverted index. \textsf{DeepImpact}{} improves impact-score modeling and tackles the vocabulary-mismatch problem~\cite{zhao2012modeling} between queries and documents. Instead of learning \textit{independent} term-level scores without taking into account the term co-occurrences in the document, as in \textsf{DeepCT}\xspace, or relying on unchanged BM25 scoring, as in \textsf{DocT5Query}, \textsf{DeepImpact}\ directly optimizes the \textit{sum} of query term impacts to maximize the score difference between relevant and non-relevant passages for the query.
In other words, while \textsf{DeepCT}\xspace learns the term frequency component of existing IR models, e.g., BM25, \textit{in this work we aim at learning the final term impact jointly across all query terms occurring in a passage}.
In this way, our proposed model learns richer interaction patterns among the impacts, when compared to training each impact in isolation. To address vocabulary mismatch, \textsf{DeepImpact}\ leverages \textsf{DocT5Query}\ to enrich every document with new terms likely to occur in queries for which the document is relevant. Using a contextualized language model, it directly estimates the semantic importance of tokens in a document, producing a single-value representation for each token in each document that can be stored in an inverted index for efficient retrieval.
Our experiments show that \textsf{DeepImpact}\ significantly outperforms prior first-stage retrieval approaches by up to $17\%$ on effectiveness metrics w.r.t. \textsf{DocT5Query}. When deployed in a re-ranking scenario, it reaches the same effectiveness as state-of-the-art approaches up to $5.1\times$ faster.
In summary, this paper makes the following contributions:
\vspace{-0.4em}
\begin{itemize}
\item We propose \textsf{DeepImpact}{}, a more effective scheme for \textit{jointly learning} term impacts over \textit{expanded documents}.
\item We evaluate \textsf{DeepImpact}{} on the MS MARCO passage ranking task.
We find that \textsf{DeepImpact}{} can improve ranking effectiveness for passage ranking versus prior first-stage retrieval approaches and is competitive when compared to complex systems based on ANN search, while exhibiting much lower computational costs.
\item We evaluate \textsf{DeepImpact}\ as a first-stage model in a re-ranking pipeline, and show that this pipeline matches or outperforms strong baseline approaches, while being highly efficient.
\end{itemize}
\section{Conclusions and future work}
In this paper, we introduced \textsf{DeepImpact}, a new first-stage retrieval method that combines traditional inverted indexes with contextualized language models for efficient retrieval. By estimating semantic importance, \textsf{DeepImpact}\ produces a single-value impact score for each token of a document collection. Our results show that \textsf{DeepImpact}\ outperforms every inverted-index based baseline, in some cases even matching the effectiveness of more complex neural retrieval approaches such as \textsf{ColBERT}\xspace. Furthermore, when \textsf{ColBERT}\xspace\ is used to re-rank candidates retrieved by \textsf{DeepImpact}\ instead of by approximate nearest neighbor search, we find a dramatic reduction of query processing latency, and a more modest improvement in the effectiveness of the whole pipeline.
Future work will focus on further enhancing the underlying model. First, we would like to experiment with more relaxed matching conditions between query and document terms, instead of exact match. Second, we believe that we could further improve term expansion with more sophisticated techniques. Finally, we plan to investigate how changing the distribution of impact scores affects query processing algorithms such as MaxScore, and how we can address this issue.
\section{Introduction}
In many challenging real-world supervised machine learning tasks in model serving, such as sentiment analysis of Twitter users and weather forecasting, data evolve and change over time, causing machine learning models built on historical data to become increasingly unreliable. One common reason is that classification or regression models assume stationarity, i.e. that the training and test data are independent and identically distributed \cite{iid1,iid2}. This assumption is often violated, leading to limited generalization ability. Changes in the relationship between features and labels are referred to as concept drift, while changes in the features alone are referred to as covariate shift. When either occurs, model performance deteriorates. The easiest way to handle concept drift or covariate shift is to retrain the model as soon as a batch of new labeled data is available. This is impractical because 1) continuous retraining is extremely computationally expensive and 2) it wastes effort when the data distribution has not changed. A better strategy is for the model serving system to continuously diagnose signals such as the classification error and the feature reconstruction error, and to automatically adapt to changes in data over time. For example, when the classification error rate increases significantly, the system should detect this signal and retrain the model.
We consider the model serving process where we continuously receive new batches of data for inference (a batch can also be a single sample, corresponding to streaming). The data arrive without labels, which may become available immediately after the batch or at any time in the future. Before a new batch arrives, the system needs to decide whether to retrain the model and, if retraining is triggered, to select a subset of samples to use for retraining.
There exist several works on drift detection \cite{ddm, eddm, ph, adwin, ewma, cdstudy1} that mainly focus on concept drift with classification performance as the signal for detection. However, this may be problematic. On the one hand, having only one signal may greatly increase the likelihood of false detection or missed detection. On the other hand, drift detection relying on classification performance requires in-time online labeling and is impractical in real-world applications, as in-time online labeling is time-consuming, costly and requires a large amount of human intervention \cite{hi1,hi2, iid2}.
We address the first problem by including six different signals, capturing different characteristics of data changes such as a lagged classification error rate and model uncertainty. They function as an ensemble and use a tailored majority voting strategy for drift detection, and thus reduce the reliance on one specific signal; the model is also less sensitive to anomaly data.
Although labeling cannot be avoided entirely, to reduce the reliance on in-time online labeling we address the second problem by introducing the \textit{lag of labels} setting. In this setting, the system receives the labels of input data after a certain time period to allow for labeling, instead of receiving labels immediately after a batch as in previous works. To detect drift effectively without waiting until labels are available, our proposed method utilizes a lagged classification error rate for concept drift and other signals for covariate shift, which monitor feature distribution changes as an early indicator of drift. This \textit{lag of labels} scenario is prevalent in real-world applications due to the effort required for domain expert labeling.
Moreover, most of the existing drift detection algorithms do not have mechanisms to determine what data to use for model retraining when drift is reported, which poses a challenge when applied to real-world applications, because not only do we care about effective and timely drift detection, but the model performance in serving is important as well. Our proposed method automatically determines the data that are used for retraining by collecting samples that are in the warning zone, which can be easily deployed into model serving without human efforts to determine retraining data.
In this work, we propose Concept Drift and Covariate Shift Detection Ensemble (CDCSDE), a drift detection ensemble algorithm in the \textit{lag of labels} setting, where the system receives the labels of input features after a time period due to labeling costs. The ensemble system is composed of six drift detection modules, capturing different characteristics of incoming data such as the misclassification rate and the feature reconstruction error. The proposed system is also able to decide when to retrain the model and automatically selects the data to be used for retraining. We evaluate CDCSDE on both structured and unstructured datasets, and on simulated and real-world datasets. The results show that the proposed method consistently outperforms all the benchmark drift detection models by a large margin.
Our contributions are summarized as follows.
\begin{itemize}
\item We propose a novel and effective method for drift detection in the \textit{lag of labels} setting.
\item The proposed method can detect both concept drift and covariate shift; it can determine when to retrain and what data to use to retrain automatically, by utilizing an ensemble of six different drift detectors.
\item CDCSDE is suitable for both structured and unstructured data.
\item We conduct extensive experiments on popular drift detection benchmark datasets. The results show the proposed method consistently outperforms all other methods by a large margin.
\end{itemize}
The rest of the paper is organized as follows. In the next section, we provide a review of the relevant literature. In Section 3, we provide the problem description while in Section 4 we discuss the proposed approach in detail. In Section 5, we show the experimental study results. We conclude the paper by reiterating the main contributions in Section 6.
\section{Related Work}
There are several works utilizing statistical control to monitor and detect drift in data streams. A Cumulative Sum control chart \cite{ph} (CUSUM) is a sequential analysis technique typically used for monitoring change detection. The system reports an alarm as soon as the cumulative sum of incoming data exceeds a user-specified threshold value. The Page Hinkley \cite{ph} (PH) test, a variant of CUSUM, is also a sequential analysis technique often used for change detection in the average of a Gaussian signal. Similar to CUSUM, the PH test alerts the user to a change in the distribution when the test statistic of incoming data is greater than a user-specified threshold. The exponentially weighted moving average \cite{ewma} (EWMA) method monitors the misclassification rate of a classifier for change detection. It calculates the recent error rate by progressively down-weighting the previous data and reports a drift when the EWMA estimator exceeds an adaptive threshold value. Slightly different from the previous approaches, Adaptive Windowing \cite{adwin} (ADWIN) is an adaptive sliding window algorithm for drift detection, which keeps updated statistics from a window of variable size. The algorithm takes the binary prediction results for incoming data as input, and decides the size of the window by cutting the statistics window at different points and analyzing the averages of the statistics over these sub-windows. Whenever the absolute value of the difference between the two averages from two sub-windows exceeds a threshold, the algorithm concludes that the corresponding expected values are different and reports a drift. The Drift Detection Method \cite{ddm} (DDM) is the most widely used concept drift detection method, based on the assumption that the model error rate would decrease as the number of analyzed samples increases, provided that the data distribution is stationary. Like ADWIN, DDM uses a binomial distribution to describe the model performance. 
DDM then calculates the sum of the overall classification error and its standard deviation. The most significant difference between DDM and the previous works is that when the sum of the two statistics exceeds a threshold, either drift is detected or the algorithm warns that drift may occur soon. The data that DDM flags as a warning can be potentially used for future retraining.
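To make the flavor of these sequential tests concrete, the following is a minimal sketch of the Page-Hinkley test as described above; `delta` and `threshold` are the user-specified parameters the text refers to, and the alarm fires when the cumulative deviation exceeds its running minimum by more than the threshold.

```python
class PageHinkley:
    """Minimal Page-Hinkley test for detecting an increase in a stream's mean."""

    def __init__(self, delta=0.005, threshold=1.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.n = 0.0, 0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        """Feed one observation; return True if a drift alarm is raised."""
        self.n += 1
        self.mean += (x - self.mean) / self.n          # running mean
        self.cum += x - self.mean - self.delta         # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.threshold
```

As long as the mean is stable, the cumulative deviation keeps drifting down with its own minimum and no alarm is raised; a sustained increase in the mean separates the two and triggers the alarm.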
There are four major differences between our proposed algorithm and the existing works. First, most of the existing works rely on a user-specified threshold for drift detection, so the performance of the drift detector largely depends on the choice of the threshold, with the model performance being extremely sensitive to it. Our proposed algorithm does not require such a threshold. Second, most of the existing works focus on an ideal supervised setting, i.e. they assume the labels are always available as soon as the input features are received, which does not capture many real-world applications due to labeling costs. Our proposed algorithm assumes the \textit{lag of labels} setting and can report a drift as an early indicator even before the labels become available. Third, these works utilize only a single statistic, e.g. the classification error rate, to monitor changes in data, which makes them sensitive to outliers and other issues, while our work utilizes an ensemble of six drift detectors that capture different characteristics of the incoming data. Fourth, the previous works only alert the user about changes in data and omit deciding which data to use for retraining. In our work, we progressively select the retraining data based on the calculated warning zone.
\section{Problem Formulation}
A data stream is a data set where observations have time stamps, which induces either a total or a partial order between observations \cite{Webb2018AnalyzingCD}. In our work, we assume classification as the only task, although the proposed method can easily generalize to the unsupervised setting where no labels are available.
Suppose the joint distribution $P(X,Y)$ generates random variables $X$ and $Y$, where $X$ denotes the features for classification and $Y$ denotes the corresponding labels. We further denote $P_t(X,Y)$ as the joint distribution at time $t$. Following the conventional definition, concept drift occurs when
$$P_n(X,Y) \neq P_m(X,Y)$$ for times $n$ and $m$ ($n<m$), i.e., the joint distribution changes from time $n$ to time $m$, which often results in model performance degradation, as a model trained to fit one distribution no longer delivers on another.
Similarly, covariate shift occurs when
$$P_n(X) \neq P_m(X)$$ for times $n$ and $m$, i.e. the feature distribution has changed. Detection of covariate shift is also of great importance, as on the one hand, it can be used as an early indicator of concept drift, especially when labels are not available or cannot be obtained in-time. On the other hand, most parametric models output probability distributions, which are useful information to learn the model confidence about such predictions. If the model is no longer confident about the predictions, it is desirable to alarm and retrain the model.
In real-world applications, a data stream is usually generated by different joint distributions, as characteristics of incoming data may change over time. If a concept drift or covariate shift occurs, the classification performance is often affected, thus there arises the need of drift detection and subsequent model retraining.
In a data stream in our work, we assume that labels $Y_n$ can only be obtained after a time period $l$, i.e. at time $n$ features $X_n$ are available, while the corresponding labels $Y_n$ are available at time $n+l$. This is a more difficult but practical scenario as opposed to the setting where labels $Y_n$ are available as soon as $X_n$ arrives. In real-world scenarios, the labels may arrive at any time, and thus $l$ may be a random variable following a statistical distribution.
Due to the aforementioned reasons, one cannot have a fixed model during the entire process of model serving, as incoming data distribution might have changed and causes performance degradation. The main tasks are: 1) how to monitor model performance, 2) how to decide if data distribution has changed and 3) what data to use to retrain if change is detected. The ultimate goal is to maximize the progressive accuracy of the provided classifier across all incoming data by addressing the three tasks.
\section{Concept Drift Detection Ensemble and Model Retraining}
In this section, we exhibit our approach to solve the drift detection and model retraining problems.
\begin{figure}[h]
\includegraphics[width=0.99\textwidth]{approach.png}
\centering
\caption{Overall approach. For each batch of incoming data, we calculate six descriptive statistics of time series, then apply a drift detection module to each of them to monitor drift and decide which data to use for retraining.}
\centering
\label{approach_fig}
\end{figure}
We assume the label lag is $l$. Data $(X_{tr},Y_{tr})$ are used to train the current classification model $clf$. We denote an incoming batch as $B_n=(X_n,Y_n)$, where $Y_n$ are labels for feature set $X_{n-l}$. Sets $Y_1, Y_2, ..., Y_l$ are assumed to be empty.
Figure \ref{approach_fig} shows the general approach of our methodology, which contains six different drift detection modules capturing different characteristics of data: EWMA for delayed KPI, model uncertainty, Hellinger distance, auto-encoder reconstruction error, SPN fitting loss and gradient changes. They are further explained in the following subsections. The combination of the six modules is able to detect both concept drifts and covariate shifts. Based on the tailored majority voting strategy among the six modules, when the system decides to retrain the model, it utilizes proper batches to retrain the model, after which the entire monitoring process is repeated.
\subsection{Descriptive statistics calculation}
We construct six time series which are input to the drift detection modules. At each time step six different scores are calculated, which are either based on only the current batch or based on the current batch as well as the previous batches. The six scores are then added to the six time series for drift detection. We provide details on how we calculate the scores in the following subsections.
\textbf{EWMA of a delayed classification indicator:}
Let $kpi(Y,X)$ be the most important key performance indicator of $clf$, e.g. error rate or $1-F1$.
We calculate the exponentially weighted moving average of the delayed KPI as follows. Assume we trace back $k$ batches to calculate the moving average, i.e. at time $n$, we use the KPI of batches $n-k+1$, $n-k+2$,..., $n$, as the labels are delayed by $l$ based on our assumption. For weight decay $w, 0<w<1$, the score is calculated as
\begin{equation}
q^1_n = \mathlarger{\sum}_{i=n-k+1}^{n}kpi(Y_{i-l},clf(X_{i-l})) \cdot w^{n-i+1}.
\end{equation}
\noindent The main difference between (1) and the prior works is that the score defined in (1) avoids using the labels of the current batch due to label delay, and instead uses the delayed KPI for drift detection, while the prior works utilizing EWMA assume $l=0$, i.e. they are not designed to handle the \textit{lag of labels} and they use a different statistical control method. Despite this, our score generalizes in a straightforward manner to the setting where features and labels arrive simultaneously ($l=0$). Moreover, utilizing EWMA instead of focusing on a single batch benefits the stability of the system, as it is more robust against potential outliers. Intuitively, when the EWMA increases significantly, the model suffers from performance degradation and a drift has likely occurred.
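Score (1) can be sketched in a few lines; `kpi_history[i]` is assumed to hold $kpi(Y_{i-l},clf(X_{i-l}))$ for batch $i$, i.e. the delayed KPI values already computed as labels arrive (indices are 0-based here for convenience).

```python
def ewma_delayed_kpi(kpi_history, n, k, w):
    """Score (1): exponentially weighted sum of the last k delayed KPI values.

    kpi_history: list where entry i is the delayed KPI of batch i.
    n: current batch index; k: lookback window; w: weight decay in (0, 1).
    """
    return sum(kpi_history[i] * w ** (n - i + 1)
               for i in range(max(0, n - k + 1), n + 1))
```

The most recent delayed KPI gets weight $w$, the one before it $w^2$, and so on, matching equation (1).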
\textbf{Model uncertainty:}
It is also helpful to understand how confident the model is in its predictions. For the second score, we first obtain the predicted probability distribution for every sample in the current batch $X_n$. Across the batch, we construct the histogram of the largest predicted probabilities and the histogram of the second largest predicted probabilities, and fit each histogram with a Gaussian to obtain probability density functions $N_1, N_2$. The model uncertainty score is obtained by
\begin{equation}
q^2_n = \int_{-\infty}^{\infty} \min\left(N_1(x), N_2(x)\right)\,dx.
\end{equation}
\noindent When the overlapping area increases, the mean and variance of the two Gaussians are close to each other, indicating that the largest probability and the second largest probability from predictions are very similar and thus the model is more uncertain regarding the predictions. A significant increase of $q^2$ is a signal for drift, since when the model predictions are uncertain, the likelihood of a change in distribution is high. On the other hand, when the model is uncertain regarding a distribution, even if the predictions are correct (i.e. $q^1$ does not vary much), it is still detrimental to the overall process of model serving, as a small fluctuation of the distribution may move the model decision boundary and makes the model vulnerable to future outliers.
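A minimal numerical sketch of score (2): fit Gaussians to the top-1 and top-2 probabilities across the batch and integrate the pointwise minimum of the two densities on a grid (the paper's integral runs over the whole real line; restricting it to $[0,1]$ is a simplifying assumption since probabilities live there).

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def uncertainty_score(probs):
    """Score (2): overlap of Gaussians fit to top-1 and top-2 class probabilities.

    probs: array of shape (batch, n_classes) of predicted distributions.
    Returns the numerically integrated overlap area (near 0 when confident).
    """
    top2 = np.sort(probs, axis=1)[:, -2:]           # second-largest, largest
    p2, p1 = top2[:, 0], top2[:, 1]
    mu1, s1 = p1.mean(), p1.std() + 1e-8            # fit Gaussian N_1
    mu2, s2 = p2.mean(), p2.std() + 1e-8            # fit Gaussian N_2
    xs = np.linspace(0.0, 1.0, 2001)
    overlap = np.minimum(gaussian_pdf(xs, mu1, s1), gaussian_pdf(xs, mu2, s2))
    return float(np.sum(overlap) * (xs[1] - xs[0]))  # rectangle-rule integral
```

When the two fitted Gaussians are well separated (confident model), the overlap is near zero; when the top two probabilities are close, the overlap grows.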
\textbf{Hellinger distance:}
In statistics and measure theory, the Hellinger distance is often used to quantify similarity between two distributions, and is also widely used in tasks such as anomaly detection and classification. We utilize the Hellinger distance to construct the distance between two datasets. Having discretized the features, we define the Hellinger distance between the training set and the current batch as
\begin{equation}
q_n^3=\frac{1}{|F|}\mathlarger{\sum}_{f\in F} \sqrt{\mathlarger{\sum}_{z \in f}( \sqrt{\frac{|X_{tr,f=z}|}{|X_{tr}|}} - \sqrt{\frac{|X_{n,f=z}|}{|X_{n}|}})^2},
\end{equation}
\noindent where $F$ denotes the set of all features. By averaging over all features, we calculate the distance between two datasets or batches. If the distance between the training dataset (i.e. the dataset that the model is fitted on) and the current batch is large, the existing model no longer fits the current distribution and needs to be retrained. Note that if the input data is unstructured such as image data, we calculate the Hellinger distance based on encoded feature vectors from an auto-encoder, instead of the raw features themselves.
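Equation (3) translates directly into code; the sketch below assumes the features have already been discretized into integer-coded values, as the text requires.

```python
import numpy as np

def hellinger_score(X_tr, X_n):
    """Score (3): mean per-feature Hellinger distance between two batches.

    X_tr, X_n: 2-D integer arrays of discretized features (rows = samples).
    """
    n_features = X_tr.shape[1]
    total = 0.0
    for f in range(n_features):
        values = np.union1d(X_tr[:, f], X_n[:, f])   # all observed values z
        p = np.array([(X_tr[:, f] == z).mean() for z in values])
        q = np.array([(X_n[:, f] == z).mean() for z in values])
        total += np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return total / n_features
```

Identical batches give a distance of 0, while batches with completely disjoint feature values give the maximum of $\sqrt{2}$ per feature (the formula is unnormalized, as in (3)).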
\textbf{Auto-encoder reconstruction error:}
Auto-encoders are widely used in tasks such as dimension reduction and embedding learning. In our work, we employ an auto-encoder to measure how different the two datasets or batches are. We first train an auto-encoder using training features $X_{tr}$ and obtain the training reconstruction MSE loss $L_{tr}=MSE(X_{tr},AE(X_{tr}))$. For the current batch $X_n$, we calculate the test reconstruction loss $L_n=MSE(X_n,AE(X_n))$. The auto-encoder reconstruction score is defined as
\begin{equation}
q_n^4=\tanh\left(\frac{L_{n}}{L_{tr}}\right).
\end{equation}
\noindent This score allows us to measure divergence by how large the test reconstruction error is compared to the training error. The increase of $q^4$ indicates that the auto-encoder is unable to fully reconstruct the incoming data and thus the covariate shift has occurred.
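Given the reconstructions produced by any auto-encoder, score (4) is a one-liner; the sketch below takes the reconstructed arrays as inputs so that the auto-encoder itself stays abstract.

```python
import numpy as np

def reconstruction_score(X_tr, X_tr_rec, X_n, X_n_rec):
    """Score (4): tanh of the ratio of current-batch to training reconstruction MSE.

    The *_rec arrays are the auto-encoder outputs for the corresponding inputs.
    """
    L_tr = np.mean((X_tr - X_tr_rec) ** 2)   # training reconstruction loss
    L_n = np.mean((X_n - X_n_rec) ** 2)      # current-batch reconstruction loss
    return np.tanh(L_n / L_tr)
```

The $\tanh$ squashes the ratio into $(0,1)$, so the score saturates once the current batch reconstructs much worse than the training data.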
\textbf{Sum-Product Networks: }
The Sum-Product Network (SPN) is a deep probabilistic model widely used as a black-box density estimator, e.g. by comparing likelihoods on tasks such as image completion and image classification \cite{Vergari2018VisualizingAU}. Similarly to $q^4$, we use an SPN to monitor whether the incoming distribution has changed compared to the training data. After training an SPN on the training data $X_{tr}$, for batch $X_n$ we obtain the log of the negative log-likelihood, $\log(-ll_n)$. We take the logarithm since the likelihoods produced by the SPN are very small. The SPN module score is defined as
\begin{equation}
q_n^5=\log(-ll_n).
\end{equation}
\noindent As vanilla SPNs do not take unstructured data as input, in such cases, we feed the embedded features from the encoder of the auto-encoder to SPN. The higher the score is, the less likely $X_n$ is generated by the same distribution as $X_{tr}$.
\textbf{Gradient changes:}
Proposed in \cite{kungangwork, kungangworkarxiv}, the changes of gradients can also be used to tackle the drift detection task, i.e. the larger the gradient changes, the more likely the data has changed. Instead of using the conventional gradients, we utilize natural gradients instead, which show promising performance in areas such as robotics and control. As natural gradients are rescaled by the Fisher information matrix, they are more stable and often used to solve issues such as catastrophic forgetting.
Let the optimal parameters from the training data $(X_{tr},Y_{tr})$ be $\theta ^*$. We evaluate the natural gradients at time $n$ using batch $(X_{n-l},Y_{n})$ as $\nabla _N Loss = \nabla Loss(X_{n-l},Y_{n},\theta^*) \cdot F^{-1}$, where $Loss$ is the conventional loss function (cross-entropy in classification) and $F$ is the Fisher information matrix, approximated by using Kronecker factorization \cite{kfac}. The gradients change score is obtained by
\begin{equation}
q_n^6=\frac{(\nabla_N Loss)^T (\nabla_N Loss)}{u}
\end{equation}
where $u$ denotes the dimension of $\theta^*$.
\noindent As the gradients change, the score increases, which signals a potential drift.
It is worth pointing out that \cite{kungangwork, kungangworkarxiv} also transformed the gradient by the inverse Fisher information matrix in a manner that is quite similar, but not identical, to our method. Namely, \cite{kungangwork, kungangworkarxiv} monitored the components of the `decoupled' score vector (i.e. the gradient transformed by the inverse Fisher information matrix). Their statistic also implicitly transformed the score vector by the inverse of the covariance matrix, which is closely related to the Fisher information matrix.
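As an illustration of score (6), the sketch below uses a \emph{diagonal} Fisher approximation instead of the Kronecker-factored one used in the paper; this is a simplifying assumption that keeps the inverse trivial while preserving the structure of the score.

```python
import numpy as np

def gradient_change_score(grad, fisher_diag):
    """Score (6) under a diagonal Fisher approximation (simplifying assumption;
    the paper uses a Kronecker-factored approximation of F).

    grad: loss gradient at the training optimum, evaluated on the new batch.
    fisher_diag: diagonal of the (approximate) Fisher information matrix.
    """
    nat_grad = grad / fisher_diag            # F^{-1} g for diagonal F
    return float(nat_grad @ nat_grad) / grad.size
```

A vanishing gradient at the optimum yields a score of 0, while large gradients on the new batch (rescaled by the inverse Fisher) drive the score up.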
\subsection{Drift detection}
Each descriptive statistic defined in the previous subsection generates a time series over time. The remaining problem is how to detect drift based on these time series. We describe next for the current batch $X_n$, the resulting algorithm to report drift, warning or safe signals, which utilizes an ensemble of six independent drift detection modules to monitor drift.
Similarly to \cite{ddm}, to detect drift of a time series $\{q_n\}_{n=1,2,...}$, we denote the progressive average by $p_i=\mathlarger{\sum}^i_{n=1} q_n/i$ and standard deviation by $s_i=std(p_1,p_2,...,p_i)$ . We also denote the minimum values at time $i$ by $p_{min}^i$ and $s_{min}^i$ such that $p_{min}^i + s_{min}^i = \displaystyle\min_{1\leq n\leq i}(p_n+s_n)$. Note that different from \cite{ddm}, at each time step we receive a batch instead of a single observation, thus our time series is not generated by Bernoulli distributions. To monitor drift, we apply the following rules:
\begin{itemize}
\item If $p_i+s_i>p^i_{min}+2\cdot s^i_{min}$, the system is in the warning zone.
\item If $p_i+s_i>p^i_{min}+3\cdot s^i_{min}$, we report drift and retrain the model using the batches in the warning zone.
\item If $p_i+s_i<p^i_{min}+2\cdot s^i_{min}$, the system exits the warning zone and is safe from drift.
\end{itemize}
There are two assumptions for this statistical control module. First, the values in the time series decrease when the data distribution is stationary, and thus the system should either exit the warning zone or stay in the safe zone. Second, a significant increase in the time series indicates the existence of a drift.
With a simple drift detection module, we would be able to detect drift and decide what data to use to retrain based on a single time series. Having six different time series, our method CDCSDE works as an ensemble. To detect a drift event, we employ a tailored majority voting rule as follows: if the drift detection module on $q^1$ reports drift, then the system reports drift and retrains all models (classifier, auto-encoder and SPN), as the KPI is the ultimate goal; otherwise, if most of the remaining modules (i.e. not less than 3 modules) report a drift, CDCSDE reports drift and retrains all models.
If drift is reported, the data used to retrain is determined by CDCSDE as well. Suppose the latest warning zone (if a module exits the warning zone, then the previous warning zone is not included in future retraining) for each module is $W_i,i=1,2,...,6$. The union of these warning batches $\bigcup\limits_{i=1}^6 W_i$ is used to retrain all models.
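The tailored voting rule can be stated compactly; the six module statuses are assumed to be ordered with the delayed-KPI module ($q^1$) first.

```python
def ensemble_decision(statuses):
    """Tailored majority vote over six module statuses ('drift'/'warning'/'safe').

    statuses[0] is the delayed-KPI module (q^1): its drift verdict alone
    triggers retraining; otherwise at least 3 of the remaining 5 must agree.
    """
    if statuses[0] == 'drift':
        return True
    return sum(s == 'drift' for s in statuses[1:]) >= 3
```

This encodes the asymmetry described above: the KPI signal is decisive on its own, while the covariate-shift signals must reach a majority.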
\section{Experimental Results}
In this section, we report experiments on both simulated and real-world datasets to evaluate the proposed method, with accuracy as the KPI. The simulated datasets have ground truth about drift. For simulated datasets, 5 different metrics are used: the mean accuracy over all batches (MA), mean time between false alarms (MTFA), mean time to detection (MTD), missed detection rate (MDR), and total number of drifts detected (TD). A good drift detection method should have high MA and MTFA (or no MTFA due to no false alarms) as well as low MTD and MDR. Low MTD is desired because a good algorithm should detect drift soon after it occurs instead of taking a long time to react.
Low TD is also beneficial since it reduces retraining time and computational costs. Among these metrics, MA is the most important as in real-world model serving the overall model performance is what we care most about. In real-world datasets only MA and TD are available as we do not have the knowledge when drift `indeed' occurs. We conduct experiments on both structured and unstructured data to validate the effectiveness of CDCSDE.
We compare the proposed method with the four best-performing benchmarks: PH, ADWIN, EWMA and DDM, which are discussed in Section 2. For a fair comparison, the benchmarks use the delayed KPI as the signal in the \textit{lag of labels} setting and the same training data (i.e. the first 50 batches) are used across all methods. We experimented with other values for the number of initial training batches and found that the performance is insensitive to this number. Using 50 batches yields 3,200 samples, which is an appropriate number for both the structured and unstructured data sets under consideration. In our experiments we also observe that with our retraining strategy, the number of retraining batches is less than 50 most of the time. Moreover, the number of retraining batches does not show an increasing or decreasing trend as additional batches arrive, since our algorithm selects only the useful batches for retraining instead of including as many batches as possible.
In the following experiments, the \textit{lag of labels} is assumed to follow an exponential distribution, so that label arrivals form a Poisson process, i.e., a process where events occur continuously and independently at a constant rate. We employ scale = 4 and each sampled value is rounded down to an integer. We also study the sensitivity with respect to the \textit{lag of labels} and conduct an ablation study.
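A minimal sketch of how the per-batch label lags are sampled under this setup (exponential with scale 4, floored to integers):

```python
import numpy as np

def sample_label_lags(n_batches, scale=4, seed=0):
    """Sample per-batch label lags from an exponential distribution and
    round down to integers, matching the experimental setup (scale = 4)."""
    rng = np.random.default_rng(seed)
    return np.floor(rng.exponential(scale, size=n_batches)).astype(int)
```

The resulting lags are non-negative integers with mean around 3.5, so most labels arrive within a handful of batches while some are delayed substantially.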
\subsection{Structured data stream}
We first perform experiments on structured data which consist of the following widely used datasets.
\textbf{1. Sea} \cite{sea}: simulated data with abrupt drift, 50,000 samples, 3 features and 2 classes. There are 3 drifts in total, each occurring after 12,500 samples. \textbf{2. Sine2} \cite{ddm}: simulated data with abrupt drift, 100,000 samples, 2 features and 2 classes. There are 9 drifts in total, each occurring after 10,000 samples. \textbf{3. Elec} \cite{elec}: real-world data, 45,312 samples, 8 features and 2 classes, recorded every half an hour for two years from the Australian New South Wales Electricity Market. \textbf{4. Weather} \cite{weather}: real-world data, 18,159 samples, 8 features and 2 classes, recording weather measurements from over 7,000 weather stations worldwide to provide a wide scope of weather trends. \textbf{5. Temp} \cite{temp}: real-world data, 4,137 samples, 24 features and 2 classes, collected from a monitoring system mounted in houses.
Across all datasets, we utilize a 2-layer auto-encoder with shape `Input-6 neurons-Reconstructed Input' and a one-layer feed-forward network with shape `6-Output' as the classifier which takes the embedded features from the auto-encoder as input. Also taking the embedded features as input, the SPN consists of 4 layers, i.e. normal, product, sum, product. We optimize the models with the Adam optimizer with the 0.001 initial learning rate. The batch size is set as 64.
\begin{table}[h!]
\centering
\caption{Results on simulated structured datasets. }
\begin{tabular}{l| c | c| c | c | c | c| c | c| c|c |c}
\hline
& \multicolumn{5}{c|}{\textbf{Sea}} & \multicolumn{5}{c|}{\textbf{Sine2}} \\ \hline
\textbf{Method}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{AVG}\\ \hline
PH &69.50& \textbf{-}& 43.5&33.3 & 3 & 69.78& 38.0 & 22.3 & 33.3 & 8&69.64 \\
ADWIN & 77.59& 9.5 &18.0 & 0 &14 & 78.39& 15.9 & 40.0 & 33.3 &16& 77.99 \\
EWMA & 75.73 & 15.5 & 22.0 & 0& 7&47.92& 31.5 & 25.0 & 44.4&11&61.83 \\
DDM & 85.32 & \textbf{-}& 42.5 & 33.3 & 2&77.38& \textbf{-} & \textbf{9.3} & 55.6& 5&81.35 \\ \hline
CDCSDE & \textbf{88.08} &\textbf{-} &\textbf{17.7} &\textbf{0} &3 & \textbf{83.69} & \textbf{-}& 10.4 &\textbf{22.2} & 8&\textbf{85.89} \\ \hline
\end{tabular}
\label{tab:simulated_structured}
\end{table}
\begin{table}[h!]
\centering
\caption{Results on real-world structured datasets.}
\begin{tabular}{l| c | c| c|c | c| c|c }
\hline
& \multicolumn{2}{c|}{\textbf{Elec}} & \multicolumn{2}{c|}{\textbf{Weather}} & \multicolumn{2}{c|}{\textbf{Temp}} \\ \hline
\textbf{Method}&\textbf{MA}&\textbf{TD}&\textbf{MA}&\textbf{TD}&\textbf{MA}&\textbf{TD}&\textbf{AVG}\\ \hline
PH & 57.51& 6 & 56.18& 2& 67.00&6 &60.23\\
ADWIN & 58.38 &2 & 67.70& 1&78.45 &2&68.18 \\
EWMA & 59.61 &8 &32.29& 1&74.39 &3&55.46\\
DDM & 58.63 &1 & 67.70& 1& 66.25&1&64.19\\ \hline
CDCSDE & \textbf{67.71} &5 &\textbf{74.97} &2 &\textbf{82.80} &2&\textbf{75.16}\\ \hline
\end{tabular}
\label{tab:real_structured}
\end{table}
From Table \ref{tab:simulated_structured} where `-' denotes not available, i.e. no false alarms or only one false alarm, `AVG' denotes the average MA across all datasets, and the numbers in bold denote the best across all methods if applicable, we observe that among all evaluation metrics, CDCSDE consistently outperforms the benchmarks, especially in the most important metric MA, as it effectively detects drift and decides what data to use to retrain. Our relative improvements on average accuracy are 23.33\%, 10.13\%, 38.91\%, 5.58\% for PH, ADWIN, EWMA and DDM, respectively. Regarding MTFA, CDCSDE predicts no false detection in Sea and only one false detection in Sine2, while other benchmarks do not exhibit a stable performance, resulting in more false alarms and high MTFA. Our method also achieves the best performance in MTD due to the timely alarm of drift, with 17.7 on Sea and 10.4 on Sine2, while other benchmarks generally being less sensitive to the drifts. Lastly, among all methods, CDCSDE achieves the smallest MDR and a reasonable TD.
We also conduct experiments on the Elec, Weather and Temp datasets to provide better insight into how our method performs on real data. The results are provided in Table \ref{tab:real_structured}, where `AVG' denotes the average MA across all datasets and the numbers in bold denote the best across all methods. Note that we do not have measures such as MTFA and MTD, as these are real-world datasets without ground truth information about drifts. From Table \ref{tab:real_structured}, CDCSDE consistently achieves the best MA against all benchmarks, which again validates the effectiveness of CDCSDE. Our relative improvements on average accuracy are 24.79\%, 10.24\%, 35.52\%, 17.09\% for PH, ADWIN, EWMA and DDM, respectively. The TD of our method is also reasonable, while some other benchmarks either report too many drifts or do not report a drift at all.
\subsection{Unstructured Data Stream}
In order to evaluate our method on unstructured data which is more ubiquitous in real-world applications, we conduct experiments utilizing conventional image classification datasets MNIST and USPS \cite{USPS}. They contain the same 10 digits as labels (i.e. 0-9), but their distributions differ. The two datasets are widely used in domain adaptation tasks due to a moderate domain gap. In our work, we employ them to create 5 different data streams to validate CDCSDE.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{exp_plot.png}
\caption{Unstructured datasets with various types of drifts.}
\label{uns_case}
\end{figure}
Across all scenarios, we use MNIST as the `base' dataset, i.e. every incoming batch is supposed to contain MNIST digits 0-9. USPS serves as the `drift' dataset, i.e. USPS samples are added to the data stream in a specific way to introduce drifts; there are 9,298 USPS samples in total. The batch size is set to 64 and the total number of samples is 60,000 (the size of the MNIST training set), so the number of batches is 60,000/64 = 938. The details of each scenario are provided below (see also Figure \ref{uns_case}).
\textbf{1. Sudden drift:}
Samples of one digit from USPS are introduced after every 100 batches, giving 9 sudden drifts in total. \textbf{2. Sudden and gradual drift:}
Samples of one digit from USPS are gradually introduced after every 100 batches; there are 9 drifts in total. \textbf{3. Gradual drift - increase:}
USPS digits 0-9 are introduced gradually and at an increasing rate. \textbf{4. Gradual drift - plateau:}
USPS digits 0-9 are gradually introduced; after the $500^{th}$ batch, the rate of adding USPS digits is kept unchanged. \textbf{5. Gradual drift - decrease:}
USPS digits 0-9 are gradually introduced; after the $500^{th}$ batch, they are gradually removed at the same rate at which they were introduced.
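As an illustration, the `gradual drift - increase' scenario can be emulated with a generator that grows the fraction of drift-dataset samples per batch. The linear schedule reaching a fully drifted final batch is our assumption; the text only states that USPS digits are introduced gradually and increasingly.

```python
import numpy as np

def gradual_increase_stream(base_x, base_y, drift_x, drift_y,
                            n_batches=938, batch_size=64, seed=0):
    """Yield batches mixing a 'base' and a 'drift' dataset, with the
    fraction of drift samples growing linearly from 0 in the first
    batch to 1 in the last ('gradual drift - increase')."""
    rng = np.random.default_rng(seed)
    for b in range(n_batches):
        frac = b / (n_batches - 1)              # linear drift schedule (assumed)
        n_drift = int(round(frac * batch_size))
        n_base = batch_size - n_drift
        bi = rng.choice(len(base_x), size=n_base, replace=False)
        di = rng.choice(len(drift_x), size=n_drift, replace=True)
        x = np.concatenate([base_x[bi], drift_x[di]])
        y = np.concatenate([base_y[bi], drift_y[di]])
        yield x, y
```

The other gradual scenarios follow by changing the schedule: clamping \texttt{frac} after batch 500 gives the plateau variant, and mirroring it gives the decrease variant.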
\begin{table}[h!]
\centering
\caption{Results on unstructured datasets with sudden drifts.}
\begin{tabular}{l| c | c| c | c | c | c| c | c| c|c |c}
\hline
& \multicolumn{5}{c|}{\textbf{Sudden}} & \multicolumn{5}{c|}{\textbf{Sudden and gradual}} \\ \hline
\textbf{Method}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{AVG}\\ \hline
PH &71.59& \textbf{-}& 52.0&77.8 & 2 & 73.48& \textbf{-} & 43.0 & 88.9 & 2 &72.53\\
ADWIN & 85.49& 29.3 &\textbf{23.4} & 22.2 &19 & 84.17&29.4 &31.9 & 44.4 &24&84.83 \\
EWMA & 90.92 & 53.9 & 29.4 & 33.3& 11 & 92.01& 49.2 & 26.1 & 33.3&19&90.47 \\
DDM & 87.00 & \textbf{-}& 25.5 & 77.8 & 2&85.51& \textbf{-} & 35.5 & 77.8& 2 &86.26\\ \hline
CDCSDE & \textbf{97.41} &252.5 &26.3 &\textbf{11.1} &10 & \textbf{97.89} & 178.3&\textbf{18.3} &\textbf{11.1} & 11 &\textbf{97.65}\\ \hline
\end{tabular}
\label{tab:sudden_unstructured}
\end{table}
Across all datasets, we use a 2-layer CNN with output dimension 500 as the encoder and a one-layer feed-forward network of shape `500-10' as the classifier, which takes the embedded features from the encoder as input. Also taking the embedded features as input, the SPN consists of 4 layers (normal, product, sum, product). We optimize our models using the Adam optimizer with an initial learning rate of 0.001. The batch size is 64.
We first experiment on the sudden, and sudden and gradual, drift scenarios. From Table \ref{tab:sudden_unstructured}, we observe that when sudden drift occurs, the conventional benchmarks generally suffer from high MDR (the MDR of PH and DDM is as high as 88.9\% and 77.8\%) because they take only the classification error rate as input and thus cannot detect changes in the feature distribution. CDCSDE achieves 11.1\% MDR, i.e. only 1 drift missed, which shows the ability of the proposed method to detect both covariate shifts and concept drifts. CDCSDE's TDs are 10 and 11 for the two scenarios, which is reasonable for a total of 9 drifts, while the other benchmarks either report too many drifts (24 for ADWIN) or too few (2 for PH and DDM). Similarly, the MTFA of CDCSDE is much higher than that of the benchmarks whenever their TD is reasonable (otherwise there would be no false alarms at all), indicating that the proposed method is much less likely to raise false alarms; ADWIN, by contrast, reports a false alarm every 29.3 time steps on average. The MTD of the proposed method, 26.3 on the sudden dataset and 18.3 on the sudden and gradual dataset, is the lowest among all methods; a low MTD indicates that the method responds to changes in the data distribution in time. Most importantly, the MA of CDCSDE consistently outperforms the other benchmarks by a large margin due to the accurate and timely detection of drifts. Our relative improvements in average accuracy are 34.63\%, 15.11\%, 7.94\% and 13.20\% over PH, ADWIN, EWMA and DDM, respectively.
\begin{table}[h!]
\centering
\caption{Results on unstructured datasets with gradual drifts.}
\begin{tabular}{l| c | c| c|c | c| c|c }
\hline
& \multicolumn{2}{c|}{\textbf{Gradual increase}} & \multicolumn{2}{c|}{\textbf{Gradual plateau}} & \multicolumn{2}{c|}{\textbf{Gradual decrease}}\\ \hline
\textbf{Method}&\textbf{MA}&\textbf{TD}&\textbf{MA}&\textbf{TD}&\textbf{MA}&\textbf{TD} & \textbf{AVG}\\ \hline
PH & 85.63& 3 & 81.28& 2& 81.12&4&82.68 \\
ADWIN & 89.45 &13 & 91.65& 9&87.17 &7 &89.42\\
EWMA & 91.11 &11 &88.32& 9&92.68 &10&90.70\\
DDM & 92.31 &2 & 94.15& 3& 93.68&3&93.38\\ \hline
CDCSDE & \textbf{97.99} &3 &\textbf{97.76} &4 &\textbf{97.50} &3&\textbf{97.75}\\ \hline
\end{tabular}
\label{tab:gradual_unstructured}
\end{table}
We then conduct experiments with only gradual drifts to observe model performance when there are no sudden drifts in the stream; the results are shown in Table \ref{tab:gradual_unstructured}. We observe a relatively low TD for CDCSDE, and its MA greatly exceeds the benchmarks across all three datasets, which shows the robustness of the proposed method to gradual drift as well. Our relative improvements in average accuracy are 18.23\%, 9.31\%, 7.72\% and 4.68\% over PH, ADWIN, EWMA and DDM, respectively.
\textbf{Ablation study}
To establish how important each component of CDCSDE is, we conduct an ablation study on three unstructured datasets: sudden; sudden and gradual; and gradual decrease. The mean accuracies are shown in Table \ref{tab:ablations_add} and Table \ref{tab:ablations_wo}.
\begin{table}[h!]
\centering
\caption{Ablation study: increasingly adding components. For example, + Hellinger denotes the model with components EWMA error rate, uncertainty and Hellinger. }
\begin{tabular}{r | r | r | r |r}\hline
& \textbf{Sudden} & \textbf{Sudden and gradual} & \textbf{Gradual decrease}&\textbf{AVG}\\ \hline
EWMA error rate& 94.38 &95.68 & 92.18&94.08 \\\hline
+ uncertainty & 95.19 & 95.88 &93.91& 94.79\\\hline
+ Hellinger & 95.59 &95.70& 94.24& 95.18\\\hline
+ AE error& 96.10 & 96.30 &95.31& 95.90\\\hline
+ SPN likelihood& 97.25& 97.10 &96.24& 96.86\\\hline
+ gradient norm& 97.41 & 97.89 &97.50& 97.60\\\hline
\end{tabular}
\label{tab:ablations_add}
\end{table}
We first add the components cumulatively, one at a time, and report the resulting performance in Table \ref{tab:ablations_add}. The largest gains come from the SPN and gradient-norm modules, which empirically shows that monitoring changes in the feature distribution is beneficial to model performance.
\begin{table}[h!]
\centering
\caption{Ablation study: MA decrease without each component. For example, w/o uncertainty denotes the entire model without the uncertainty component.}
\begin{tabular}{r | r | r | r |r}\hline
& \textbf{Sudden} & \textbf{Sudden and gradual} & \textbf{Gradual decrease}&\textbf{AVG}\\ \hline
w/o EWMA error rate& \textbf{2.61} &\textbf{3.59} & \textbf{2.33}&\textbf{2.84} \\\hline
w/o uncertainty & 0.11 & 0.37 &0.30& 0.26\\\hline
w/o Hellinger & 0.21 &0.28& 0.17& 0.22\\\hline
w/o AE error& 0.60 & 0.88 &0.91& 0.80\\\hline
w/o SPN likelihood& 1.38& 1.60 &1.10& 1.36\\\hline
w/o gradient norm& 1.01 & 1.25 &1.07& 1.11\\\hline
\end{tabular}
\label{tab:ablations_wo}
\end{table}
We then remove each component in turn and report the resulting accuracy decrease in Table \ref{tab:ablations_wo}, which shows patterns similar to Table \ref{tab:ablations_add}. The model without the SPN suffers the second-largest performance drop, which again validates that modeling the feature distribution as an early indicator of drift is important. It is not surprising that CDCSDE without the EWMA error rate suffers the largest degradation, as this component uses both labels and features to compute the prediction accuracy, and good accuracy during model serving is what we are ultimately interested in.
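To make one of the ablated components concrete, the sketch below shows a possible form of the Hellinger signal: the per-dimension histogram Hellinger distance between a reference window of embedded features and the current batch, averaged over dimensions. The binning scheme and the averaging are our assumptions; the paper only names the component.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (lies in [0, 1])."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def hellinger_drift_signal(ref_feats, new_feats, bins=20):
    """Average per-dimension histogram Hellinger distance between a
    reference feature window and a new batch of embedded features."""
    dists = []
    for d in range(ref_feats.shape[1]):
        lo = min(ref_feats[:, d].min(), new_feats[:, d].min())
        hi = max(ref_feats[:, d].max(), new_feats[:, d].max())
        edges = np.linspace(lo, hi, bins + 1)      # shared bins for both windows
        p, _ = np.histogram(ref_feats[:, d], edges)
        q, _ = np.histogram(new_feats[:, d], edges)
        dists.append(hellinger(p / p.sum(), q / q.sum()))
    return float(np.mean(dists))
```

A large value of this signal suggests the feature distribution has moved away from the reference window, i.e. a covariate shift, before any labels are needed.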
\textbf{Varying the level of \textit{lag of labels}}
We also examine how the model performs under different levels of \textit{lag of labels}. In addition to $l\sim exp(scale=4)$ as in the previous experiments, we conduct experiments with a small lag $l\sim exp(2)$ and a large lag $l\sim exp(10)$. The results are shown in Table \ref{tab:differentlag}.
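The \textit{lag of labels} mechanism can be sketched as follows: each batch's labels arrive a random number of batches later, with the lag drawn from an exponential distribution. Rounding the lag up to a whole batch and measuring it in batch-time units are our assumptions.

```python
import numpy as np

def delayed_label_stream(n_batches, scale=4.0, seed=0):
    """For each batch t, sample a lag l ~ Exp(scale), rounded up to a
    whole batch. Returns the lags and a schedule mapping each time step
    to the list of earlier batches whose labels arrive at that step."""
    rng = np.random.default_rng(seed)
    lags = np.ceil(rng.exponential(scale, size=n_batches)).astype(int)
    arrivals = {t: [] for t in range(n_batches + int(lags.max()) + 1)}
    for t, l in enumerate(lags):
        arrivals[t + l].append(t)   # labels of batch t become available at t + l
    return lags, arrivals
```

At serving time, label-based detectors (e.g. the EWMA error rate) can only consume the batches listed in \texttt{arrivals} up to the current step, while feature-based signals see every batch immediately.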
\begin{table}[h!]
\centering
\caption{Results on unstructured datasets with different levels of \textit{lag of labels}.}
\begin{tabular}{l| c | c| c | c | c | c| c | c| c|c |c}
\hline
& \multicolumn{5}{c|}{\textbf{Sudden}} & \multicolumn{5}{c|}{\textbf{Sudden and gradual}} \\ \hline
\textbf{Method}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{AVG}\\ \hline \hline
& \multicolumn{10}{c}{$l\sim exp(2)$ }\\ \hline
PH &73.94& \textbf{-}& 45.5&77.8 & 3 & 73.01& \textbf{-} & 39.0 & 77.8 & 3 &73.48\\
ADWIN & 87.74& 25.5 &22.0 & 33.3 &17 & 90.01& 29.2 &\textbf{21.2}& 33.3 &16&88.38 \\
EWMA & 92.01 & 59.0 & \textbf{19.1} & 33.3& 12 & 90.19& 29.9 & 26.8 & 44.4&14&90.60 \\
DDM & 87.64 & 39.0& 32.0 & 88.9 & 3&87.10& \textbf{-} & 30.0 & 77.8& 2 &86.26\\ \hline
CDCSDE & \textbf{97.57} &287.0 &24.0 &\textbf{11.1} &10 & \textbf{97.71} &151.0&21.5 &\textbf{11.1} & 10 &\textbf{97.65}\\ \hline\hline
& \multicolumn{10}{c}{$l\sim exp(4)$ } \\ \hline
PH &71.59& \textbf{-}& 52.0&77.8 & 2 & 73.48& \textbf{-} & 43.0 & 88.9 & 2 &72.53\\
ADWIN & 88.10& 31.6 &\textbf{25.1} & 33.3 &19 & 84.41& 28.4 &29.5 & 55.6 &14&86.26 \\
EWMA & 93.11 & 65.5 & 28.5 & 33.3& 8 & 90.82& 45.0 & 20.2 & 33.3&11&91.97 \\
DDM & 87.00 & \textbf{-}& 25.5 & 77.8 & 2&85.51& \textbf{-} & 35.0 & 77.8& 2 &86.26\\ \hline
CDCSDE & \textbf{97.21} &252.5 &26.3 &\textbf{11.1} &10 & \textbf{97.49} & 178.3&\textbf{18.3} &\textbf{11.1} & 11 &\textbf{97.35}\\ \hline \hline
& \multicolumn{10}{c}{$l\sim exp(10)$ } \\ \hline
PH &69.70& 34.0& 41.0&88.9 & 3 & 71.40& 56.0 & 51.0 & 77.8 & 4 &70.55\\
ADWIN &87.45& 29.8 &32.0 & 33.3 &22 & 86.30& 27.0 &33.5 & 44.4 &17&86.88 \\
EWMA & 86.98 & 55.3 & 39.5 & 44.4& 10 & 89.33& 58.2 & 28.3 & 44.4&13&88.16 \\
DDM & 84.57 & 33.5& 45.5 & 77.8 & 4&82.05& 40.0 & 36.0 & 77.8& 4 &83.31\\ \hline
CDCSDE & \textbf{96.01} &\textbf{191.0} &\textbf{28.6} &\textbf{22.2} &11 & \textbf{95.41} &\textbf{117.0}&\textbf{24.9} &\textbf{11.1} & 11 &\textbf{95.71}\\ \hline
\end{tabular}
\label{tab:differentlag}
\end{table}
From Table \ref{tab:differentlag}, we observe that models generally perform better under a smaller \textit{lag of labels}, as expected: when labels arrive sooner, this up-to-date information can be taken into account immediately. When labeling costs are high, i.e. it takes more time for labels to arrive, the system can only use delayed labels to detect drifts. Among all methods, CDCSDE still shows stable performance across all metrics, consistently outperforming the benchmarks by a considerable margin at every level of \textit{lag of labels}.
\textbf{Fixed \textit{lag of labels}}
Instead of the stochastic \textit{lag of labels} used in the previous experiments, we further experiment with a fixed \textit{lag of labels} of $l=2$, $4$ and $10$. The results are shown in Table \ref{tab:randomlag}.
\begin{table}[h!]
\centering
\caption{Results on unstructured datasets with fixed \textit{lag of labels}. }
\begin{tabular}{l| c | c| c | c | c | c| c | c| c|c |c}
\hline
& \multicolumn{5}{c|}{\textbf{Sudden}} & \multicolumn{5}{c|}{\textbf{Sudden and gradual}} \\ \hline
\textbf{Method}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{MA}&\textbf{MTFA}&\textbf{MTD}&\textbf{MDR}&\textbf{TD}&\textbf{AVG}\\ \hline \hline
& \multicolumn{10}{c}{$l=2$ }\\ \hline
PH &72.40& 18.5 & 41.5 &77.8 & 4 & 75.41& \textbf{-} & 31.0 & 77.8 & 3 &73.91\\
ADWIN & 91.21& 53.5 &\textbf{23.5} & 33.3 &17 & 88.10& 28.9 &30.1 & 33.3 &14&89.66 \\
EWMA & 90.01 & 41.0 & 32.3 &44.4& 12 & 90.83 & 35.2 & 23.6 & 44.4&10&90.42 \\
DDM & 89.02 & 54.5& 28.5 & 77.8 & 4&87.45&38.5& 29.0 & 88.9& 3 &88.24\\ \hline
CDCSDE & \textbf{97.73} &\textbf{-} &25.1 &\textbf{22.2} &8 & \textbf{97.49} &\textbf{-}&\textbf{20.9} &\textbf{11.1} & 9 &\textbf{97.61}\\ \hline\hline
& \multicolumn{10}{c}{$l=4$ } \\ \hline
PH &71.04& \textbf{-}& 55.5&77.8 & 3 & 73.01& \textbf{-} & 50.0 & 77.8 & 3 &72.03\\
ADWIN & 87.39& 36.0 &33.8 & 33.3 &14 & 88.50& 31.5 &44.6 & 44.4 &13&87.95 \\
EWMA & 89.59& 35.5 & 39.3 & 44.4& 12 & 88.19& 40.0 & 27.5 & 44.4&13&88.89 \\
DDM & 86.24 & 39.0& 32.0 & 88.9 & 3&84.20& \textbf{-} & 30.0 & 77.8& 2 &86.26\\ \hline
CDCSDE & \textbf{97.27} &225.0 &\textbf{30.1} &\textbf{11.1} &10 & \textbf{97.51} &151.0&\textbf{22.5} &\textbf{11.1} & 10 &\textbf{97.39}\\ \hline\hline
& \multicolumn{10}{c}{$l=10$ } \\ \hline
PH &70.01& 21.5& 51.0&77.8 & 4 & 71.21& 41.0 & 71.5 & 77.8 & 5 &70.61\\
ADWIN & 84.10& 25.8 &\textbf{33.0} & 55.6 &18& 83.51& 35.8 &55.5 & 44.4 &19&83.81 \\
EWMA & 85.19 & 49.0 & 41.0 & 55.6& 8 & 83.91& 48.8 & \textbf{28.3} & 44.4&23&84.55 \\
DDM & 83.87 & 41.0& 50.3 & 66.7 & 5&81.29& 51.3 & 36.5 & 77.8& 5 &82.58\\ \hline
CDCSDE & \textbf{96.51} &\textbf{170.0} &34.1 &\textbf{22.2} &9 & \textbf{95.80} &\textbf{108.5}&29.7 &\textbf{33.3} & 8 &\textbf{96.16}\\ \hline
\end{tabular}
\label{tab:randomlag}
\end{table}
Compared with Table \ref{tab:sudden_unstructured}, we observe similar patterns in Table \ref{tab:randomlag}. The benchmarks are generally either too sensitive to drifts or unable to detect them. CDCSDE still shows robust performance across all metrics, consistently outperforming the benchmarks by a considerable margin even with a fixed \textit{lag of labels}. As expected, Table \ref{tab:randomlag} reflects the intuition that a constant lag yields higher accuracy; the differences are not substantial, but they are consistent. Observations are similar for the other metrics: for example, TD in the fixed \textit{lag of labels} scenario is smaller than in the stochastic scenario, with a reasonable MDR.
\section{Conclusions}
In this paper, we present a novel and effective drift detection method for the practical \textit{lag of labels} setting, which detects both concept drift and covariate shift and automatically decides what data to use for retraining, with the help of an ensemble of different drift detectors. Extensive experiments on structured and unstructured data with different types of drift show that our method consistently outperforms state-of-the-art drift detection methods by a large margin.
{
\bibliographystyle{style/IEEEtran}
\section{Introduction}
\setlength{\epigraphwidth}{0.9\columnwidth}
\renewcommand{\epigraphflush}{center}
\epigraph{All simulations are wrong, but some are useful.}{\textit{A variant of a popular quote by George Box}
}
\vspace{-10pt}
\IEEEPARstart{T}{he} vision, language, and learning communities have recently witnessed a resurgence of interest
in studying integrative robot-inspired agents that perceive, navigate, and interact with their environment.
For a variety of reasons,
such work has commonly been carried out in simulation rather than in real-world environments.
Simulators can run orders of magnitude faster than real-time~\cite{habitat19iccv}, can be highly parallelized, and
enable \emph{decades} of agent experience to be collected in days~\cite{ddppo}.
Moreover, evaluating agents in simulation is safer, cheaper, and enables easier
benchmarking of scientific progress than running robots in the real-world.
Consequentially, these communities have rallied around simulation as a testbed -- developing several increasingly realistic indoor/outdoor navigation simulators~\cite{gupta2017cognitive,dosovitskiy2017carla,xia2018gibson,beattie2016deepmind,ai2thor,DBLP:SadeghiL17,habitat19iccv,savva2017minos},
designing a variety of tasks set in them~\cite{gupta2017cognitive,anderson2018vision,embodiedqa},
holding workshops about such platforms~\cite{simulation_workshop_eccv2018},
and even running challenges in these simulated worlds~\cite{habitat_challenge,carla_challenge,robothor_challenge}.
As a result, significant progress has been made in these settings. For example,
agents can reach point goals in novel home environments with near-perfect efficiency~\cite{ddppo},
control vehicles in complex, dynamic city environments~\cite{dosovitskiy2017carla},
follow natural-language instructions~\cite{anderson2018vision}, and answer questions~\cite{embodiedqa}.
However, no simulation is a perfect replica of reality, and AI systems are known to exploit imperfections and biases
to achieve strong performance in simulation which may be unrepresentative of performance in reality.
Notable examples include evolving tall creatures for locomotion that fall and somersault instead of learning
active locomotion strategies \cite{lehman2018digitalcreativity} and OpenAI's hide-and-seek agents abusing their physics engine to `surf' on
top of obstacles \cite{baker2019emergent}.
This raises several fundamental questions of deep interest to the scientific and engineering communities:
\textbf{Do comparisons drawn from simulation translate to reality for robotic systems?}
Concretely, if one method outperforms another in simulation, will it continue to do so when deployed on a robot?
Should we trust the outcomes of embodied AI challenges (\eg the AI Habitat Challenge at CVPR 2019)
that are performed entirely in simulation? These are questions not only of simulator \emph{fidelity}, but rather of \emph{predictivity}.
In this work, we examine the above questions in the context of visual navigation -- focusing on measuring and optimizing the predictivity of a simulator. High predictivity enables researchers to use simulation for evaluation with confidence that the performance of different models will generalize to real robots. Given this focus, our efforts are orthogonal to techniques for sim2real transfer, including those based on adjusting simulator parameters. To answer these questions, we introduce engineering tools and a research paradigm for performing simulation-to-reality (sim2real) indoor navigation studies, revealing surprising findings about prior work.
First, we develop the \hpbfull (\hpb), a software library that enables seamless sim2robot transfer. \hpb is an interface between (1) Habitat~\cite{habitat19iccv}, a high-performance photorealistic 3D simulator,
and (2) PyRobot~\cite{pyrobot2019}, a high-level python library for robotics research.
Crucially, \hpb
makes it trivial to \emph{execute identical code in simulation and reality}.
Sim2robot transfer with \hpb involves only a single line edit to the code (changing the \texttt{config.simulator} variable from \texttt{Habitat-Sim-v0} to \texttt{PyRobot-Locobot-v0}),
essentially treating reality as just another simulator!
This reduces code duplication, provides an intuitive high-level abstraction, and allows for rapid prototyping with modularity
(training a large number of models in simulation and `tossing them over' for testing on the robot).
In fact, all experiments in this paper were conducted by a team of researchers physically separated by thousands of miles --
one set training and testing models in simulation, another conducting on-site tests with the robot, made trivial due to \hpb.
We will open-source \hpb so that everyone has this ability.
Second, we propose a general experimental paradigm for performing sim2real studies, which we call
\emph{sim2real predictivity}.
Our thesis is that simulators need not be a perfect replica of reality to be useful.
Specifically, we should primarily judge simulators not by their visual or physical realism, but by their
\emph{sim2real predictivity} -- if method A outperforms B in simulation, how likely is the trend to hold in reality?
To answer this question, we propose the use of a quantity we call
Sim2Real Correlation Coefficient (SRCC).
We prepare a real lab space within which the robot must navigate while avoiding obstacles.
We then \emph{virtualize} this lab space (under different obstacle configurations) by 3D scanning the space
and importing it in Habitat.
Armed with the power to perform parallel trials in reality and simulation, we test a suite of navigation models both in simulation and in the lab with a real robot.
We then produce a scatter plot where every point is a navigation model, the x-axis is the performance in simulation, and the y-axis is performance in reality.
SRCC is shown in a box at the top.
If SRCC is high (close to 1), this is a `good' simulator setting in the sense that we can conduct scientific development and testing purely in simulation, with confidence that we are making `real' progress because the improvements in simulation will generalize to real robotic testbeds.
If SRCC is low (close to 0), this is a `poor' simulator, and we should have no confidence in results reported solely
in simulation.
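A minimal computation of SRCC and the rank-reversal count discussed later reads as follows. We take SRCC to be the Pearson correlation between paired sim and real scores, consistent with the scatter-plot description above; if a rank correlation were intended instead, the \texttt{np.corrcoef} call would be replaced accordingly.

```python
import numpy as np

def srcc(sim_scores, real_scores):
    """Sim2Real Correlation Coefficient: correlation between the
    performance of the same set of models in simulation and reality."""
    sim = np.asarray(sim_scores, float)
    real = np.asarray(real_scores, float)
    return float(np.corrcoef(sim, real)[0, 1])

def rank_reversals(sim_scores, real_scores):
    """Count model pairs whose relative ordering flips from sim to real."""
    n, flips = len(sim_scores), 0
    for i in range(n):
        for j in range(i + 1, n):
            if (sim_scores[i] - sim_scores[j]) * (real_scores[i] - real_scores[j]) < 0:
                flips += 1
    return flips
```

With these definitions, a simulator whose model ranking is preserved in reality yields SRCC near 1 and zero reversals, regardless of any constant offset between sim and real absolute performance.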
We apply this methodology in the context of PointGoal Navigation (PointNav)~\cite{anderson2018evaluation} with Habitat and the \locobot robot~\cite{locobot} as our simulation and reality platforms -- our experiments made easy with \hpb.
These experiments reveal a number of surprising findings:
\begin{compactitem}[\hspace{2pt}--]
\item We find that SRCC for Habitat as used for the CVPR19 challenge is $0.603$ for the Success
weighted by (normalized inverse) Path Length (SPL) metric and $0.18$ for agent success. When ranked by SPL, we observe $9$ relative ordering reversals from simulation to reality, suggesting that
the results/winners may not be the same if the challenge were run on \locobot.
\item We find that large-scale RL trained models can learn to `cheat' by exploiting the way Habitat allows for `sliding' along walls on collision.
Essentially, the virtual robot is capable of cutting corners by sliding around obstacles, leading to unrealistic shortcuts through parts of non-navigable space and `better than optimal' paths.
Naturally, such exploits do not work in the real world where the robot stops on contact with walls.
\item We \emph{optimize} SRCC over Habitat design parameters
and find that a few simple changes improve \srccSPL from $0.603$ to $0.875$ and \srccSucc from $0.18$ to $0.844$. The number of rank reversals nearly halves to $5$ ($13.8$\%).
Furthermore, we identify highly-performant agents in \emph{both} this new simulation and on \locobot in real environments.
\end{compactitem}
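For reference, the SPL metric used above~\cite{anderson2018evaluation} averages, over episodes, the binary success weighted by the ratio of the shortest-path length to the larger of the agent's path length and the shortest path; a direct transcription:

```python
def spl(successes, shortest_lengths, path_lengths):
    """SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where S_i is the
    binary success indicator, l_i the shortest-path length from start
    to goal, and p_i the agent's actual path length in episode i."""
    n = len(successes)
    return sum(s * l / max(p, l)
               for s, l, p in zip(successes, shortest_lengths, path_lengths)) / n
```

Note that an agent `sliding' through non-navigable space can make $p_i$ shorter than an honest path, inflating SPL in simulation; this is exactly the exploit described in the second finding.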
\noindent While our experiments are conducted on the PointNav task, we believe our software (\hpb), experimental paradigm (sim2real predictivity and SRCC), and take-away messages are
useful to the broader community.
While we believe our controlled environment is complex enough to robustly estimate sim2real predictivity, we do not believe it should be used to measure navigation performance.
Our work is complementary to ongoing efforts to improve sim2real performance and our metric is independent of the simulator implementation.
\section{Related Work}
\begin{comment}
\xhdr{Simulation platforms.}
There have been numerous recent efforts proposing 3D simulators, in particular for developing embodied agents that navigate within indoor environments.
Prominent examples include AI2 THOR~\cite{ai2thor}, Deepmind Lab~\cite{beattie2016deepmind}, Gibson~\cite{xia2018gibson}, MINOS~\cite{savva2017minos}, and Habitat~\cite{habitat19iccv}. It is under-explored how well techniques developed in these simulators transfer to reality. We extend the Habitat simulator to enable sim2real experiments and address this question by measuring the correlation between in-simulator and in-reality performance. We recommend similar studies be done for other simulators to ensure the community is making transferable progress by pursuing simulated performance.
\end{comment}
\xhdr{Embodied AI tasks.}
Given the emergence of several 3D simulation platforms, it is not surprising that there has been a surge of research activity focusing on investigation of embodied AI tasks.
One early example leveraging simulation is the work of Zhu ~\etal~\cite{zhu2017target} on target-driven navigation using deep reinforcement learning in synthetic environments within AI2 THOR~\cite{ai2thor}.
Follow up work by Gupta \etal \cite{gupta2017cmp} demonstrated an end-to-end learned joint mapping and planning method evaluated in simulation using reconstructed interior spaces.
More recently, Gordon \etal \cite{gordon2019splitnet} showed that decoupling perception and policy learning modules can aid in generalization to unseen environments, as well as between different environment datasets.
Beyond these few examples, a breadth of recent work on embodied AI tasks demonstrates the acceleration that 3D simulators have brought to this research area.
In contrast, deployment on real robotic platforms for similar AI tasks still incurs significant resource overheads and is typically only feasible with large, well-equipped teams of researchers.
One of the most prominent examples is the DARPA Robotics Challenge (DRC)~\cite{krotkov2017darpa}.
Another example of real-world deployment is the work of Gandhi~\etal~\cite{gandhi2017learning}, who trained a drone to fly in reality by collecting a large dataset of real-world crashes.
Our goal is to characterize how well a model trained in simulation can generalize when deployed on a real robot.
\xhdr{Simulation-to-reality transfer.}
Due to the logistical limitations of physical experimentation, transfer of agents trained in simulation to real platforms is a topic of much interest.
There have been successful demonstrations of sim2real transfer in several domains.
The CAD2RL~\cite{DBLP:SadeghiL17} system of Sadeghi and Levine trained a collision avoidance policy entirely in simulation and deployed it on real aerial drones.
Similarly, Muller~\etal~\cite{muller2018driving} show that driving policies can be transferred from simulated cars to real remote-controlled cars by leveraging modularity and abstraction in the control policy.
Tan~\etal~\cite{tan2018sim} train quadruped locomotion policies in simulation by leveraging domain randomization and demonstrate robustness when deployed to real robots.
Chebotar~\etal~\cite{simopt} improve the simulation using the difference between simulation and reality observations.
Lastly, Hwangbo~\etal~\cite{hwangbo2019learning} train legged robotic systems in simulation and transfer the learned policies to reality.
The goal of this work is to enable researchers to use simulation for evaluation with confidence that their results will generalize to real robots.
This brings to the forefront the key question: can we establish a correlation between performance in simulation and in reality?
We focus on this question in the domain of indoor visual navigation.
\section{Habitat-PyRobot Bridge: Simple Sim2Real}
\label{sec:hapy}
\noindent Deploying AI systems developed in simulation to physical robots presents significant financial, engineering, and logistical challenges -- especially for non-robotics researchers.
Approaching this directly requires researchers to maintain two parallel software stacks, one typically based on ROS~\cite{quigley2009ros} for the physical robot and another for simulation.
In addition to requiring significant duplication of effort, this model can also introduce inconsistencies between agent details and task specifications in simulation and reality.
To reduce this burden and enable our experiments, we introduce the Habitat-PyRobot Bridge (HaPy).
As its name suggests, HaPy integrates the Habitat~\cite{habitat19iccv} platform with PyRobot APIs~\cite{pyrobot2019} -- enabling identical agent and evaluation code to be executed in simulation with Habitat and on a PyRobot-enabled physical robot.
Habitat is a platform for embodied AI research that aims to standardize the different layers of the embodied agent software stack, covering 1) datasets, 2) simulators, and 3) tasks.
This enables researchers to cleanly define, study, and share embodied tasks, metrics, and agents.
For deploying simulation-trained agents to reality, we replace the simulator layer in this stack with `reality' while maintaining task specifications and agent interfaces.
Towards this end, we integrate Habitat with PyRobot \cite{pyrobot2019}, a recently released high-level API that implements simple interfaces to abstract lower-level control and perception for mobile platforms (LoCoBot) and manipulators (Sawyer), and offers a seamless interface for adding new robots.
HaPy is able to benefit from the scalability and generalizability of both Habitat and PyRobot.
Running an agent developed in Habitat on the LoCoBot platform requires changing a single argument, \texttt{Habitat-Sim-v0}$\rightarrow$\texttt{PyRobot-Locobot-v0}.
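As a minimal illustration of this single-argument swap (the config keys, function names, and policy below are illustrative placeholders, not the exact HaPy API):

```python
# Illustrative sketch of the HaPy backend swap: the agent loop is unchanged,
# and only the simulator type string selects simulation vs. the real robot.
# All names here are placeholders, not the exact HaPy API.

def make_env(backend: str) -> dict:
    """Return an environment spec; only the backend string differs."""
    assert backend in ("Habitat-Sim-v0", "PyRobot-Locobot-v0")
    return {"SIMULATOR": backend, "SENSORS": ["RGB", "DEPTH"], "RESOLUTION": 256}

def run_episode(env_spec: dict, policy) -> list:
    """The same agent code runs regardless of the backend."""
    obs = {"rgb": None, "depth": None}       # placeholder observation
    return [policy(obs) for _ in range(3)]   # truncated rollout for illustration

sim_actions = run_episode(make_env("Habitat-Sim-v0"), lambda obs: "forward")
real_actions = run_episode(make_env("PyRobot-Locobot-v0"), lambda obs: "forward")
assert sim_actions == real_actions  # identical agent behavior across backends
```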
\noindent At a high level, HaPy enables the following:\\[-15pt]
\begin{compactenum}[\hspace{3pt}--]
\item A uniform observation space API across simulation and reality. Having a shared implementation ensures that observations from simulation and reality sensors go through the same transformations (e.g. resizing, normalization).
\item A uniform action space API for agents across simulation and reality. Habitat and PyRobot differ in their agent action spaces. Our integration unifies the two action spaces and allows an agent model to remain agnostic between simulation and reality.
\item Integration at the simulator layer in the embodied agent stack allows reuse of functionalities offered by task layers of the stack -- task definition, metrics, \etc stay the same across simulation and reality.
This also opens the potential for \emph{jointly} training in simulation and reality.
\item Containerized deployment for running challenges where participants upload their code for seamless deployment to mobile robot platforms such as \locobot.
\end{compactenum}
\noindent We hope this contribution will allow the community to easily build agents in simulation and deploy them in the real world.
\section{Visual Navigation in Simulation \& Reality}
\noindent Recall that our goal is to answer an ostensibly straightforward scientific question -- is performance in simulated environments predictive of real-world performance for visual navigation? Let us make this more concrete.
First, note this question is about \emph{testing} in simulation vs reality.
It does not require us to take a stand on \emph{training} in simulation vs reality (or the need for training at all).
For a comparison between simulation-trained and non-learning-based navigation methods, we refer the reader to previous
studies~\cite{mishkin2019benchmarking, kojima2019learn, habitat19iccv}.
We focus on test-time discrepancies
between simulation and reality for learning-based methods.
Second, even at test-time, many variables contribute to the sim2real gap. The real-world test environment may include objects or rooms which visually differ from simulation,
or may present a differing task difficulty distribution
(due to unmodeled physics or rendering in simulation).
To isolate these factors as much as possible, we propose a direct comparison -- evaluating agents in physical environments and in corresponding simulated replicas.
We construct a set of physical lab environment configurations for a robot to traverse and virtualize each by 3D scanning the space, thus controlling for semantic domain gap.
We then evaluate agents in matching simulated and real configurations to characterize the sim-vs-real gap in visual navigation.
In this section, we first recap the PointNav task \cite{anderson2018evaluation} from the recent AI Habitat Challenge \cite{habitat_challenge}.
Then, we compare agents in simulation and robots in reality.
\subsection{Task: PointGoal Navigation (PointNav)}
In this task, an agent is spawned in an unseen environment and asked to navigate to a goal location specified in relative coordinates.
We start from the agent specification and observation settings from \cite{habitat_challenge}.
Specifically, agents have access to
an egocentric RGB (or RGBD) sensor and
accurate localization and heading via a GPS+Compass sensor.
The goal is specified using polar coordinates $(r,\theta)$,
where $r$ is the Euclidean distance to the goal
and $\theta$ is the azimuth to the goal.
The action space for the agent is: \texttt{turn-left 30$^{\circ}$}\footnote{\label{angle} Originally 10$^{\circ}$.},
\texttt{turn-right 30$^{\circ}$}\footref{angle}, \texttt{forward 0.25m}, and \texttt{STOP}.
An episode is considered successful if the agent issues the \texttt{STOP} command within $0.2$ meters of the goal.
Episodes lasting longer than $200$ steps, or calling the \texttt{STOP} command more than $0.2\text{m}$ from the goal, are declared unsuccessful.
We set the step threshold to $200$ (compared to $500$ in the challenge) because our testing lab is small and we found that episodes longer than $200$ actions are likely to fail.
We also limit collisions to $40$ to prevent damage to the robot.
We find that ${>}40$ collisions in an episode typically occur when the robot is stuck and likely to fail.
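Under these definitions, per-episode success and SPL~\cite{anderson2018evaluation} can be computed as in the sketch below (thresholds taken from the text; the geodesic shortest-path length is assumed to be given):

```python
import math

def episode_outcome(agent_path, goal, called_stop, shortest_len,
                    max_steps=200, success_thresh=0.2):
    """Success and SPL for one PointNav episode (sketch).

    agent_path   : list of (x, y) positions visited, in order
    shortest_len : geodesic shortest-path length from start to goal (given)
    """
    # Length of the path actually traveled.
    p = sum(math.dist(a, b) for a, b in zip(agent_path, agent_path[1:]))
    steps = len(agent_path) - 1
    success = (called_stop
               and steps <= max_steps
               and math.dist(agent_path[-1], goal) < success_thresh)
    # SPL: success weighted by path efficiency (Anderson et al. 2018).
    spl = float(success) * shortest_len / max(p, shortest_len)
    return success, spl
```

A path that goes straight to the goal attains SPL $1.0$; any detour or failure lowers it.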
\vspace{-.1cm}
\subsection{Agent in Simulation}
\xhdr{Body.}
The experiments by Savva~\etal~\cite{habitat19iccv}
and the Habitat Challenge 2019 \cite{habitat_challenge} model the agent as an idealized cylinder
of radius $0.1\text{m}$ and height $1.5\text{m}$.
As shown in \figref{fig:simvreal},
we configure the agent to match the robot used in our experiments (\locobot)
as closely as possible.
Specifically, we configure the simulated agent's base-radius and height to be $0.175\text{m}$ and $0.61\text{m}$ respectively to match \locobot dimensions.
\xhdr{Sensors.}
We set the agent camera field of view to $45$ degrees to match images from the Intel D435 camera on \locobot.
We match the aspect ratio and resolution of the simulated sensor frames to real sensor frames from \locobot using square center cropping followed by image resizing to a height and width of 256 pixels.
To mimic the depth camera's limitations,
we clip simulated depth sensing to $10\text{m}$.
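This preprocessing amounts to a few lines; the sketch below uses nearest-neighbor resizing for simplicity (the interpolation used in our pipeline may differ):

```python
import numpy as np

def match_sensor_frame(frame, out_size=256, depth_clip_m=10.0, is_depth=False):
    """Square center-crop, resize, and (for depth) clip to the sensor range."""
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = frame[y0:y0 + s, x0:x0 + s]
    idx = np.arange(out_size) * s // out_size  # nearest-neighbor sampling grid
    resized = crop[idx][:, idx]
    if is_depth:
        resized = np.clip(resized, 0.0, depth_clip_m)
    return resized
```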
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{figures/pyrobot-noise-rollout.png}
\caption{Effect of actuation noise. The black line is a trajectory from an action sequence with perfect actuation. In red are trajectories from this sequence with actuation noise.}
\label{fig:noisy-actions}
\end{figure}
\xhdr{Actions.}
In \cite{habitat19iccv, habitat_challenge},
agent actions are deterministic -- \ie when the agent executes \texttt{turn-left 30$^{\circ}$}, it turns \emph{exactly} $30^\circ$, and \texttt{forward 0.25m} moves the agent \emph{exactly} $0.25\text{m}$ forward (modulo collisions).
However, no robot moves deterministically due to real-world actuation noise.
To model the actions on \locobot, we leverage an actuation noise model derived from mocap-based benchmarking
by the PyRobot authors~\cite{pyrobot2019}.
Specifically, when the agent calls (say) \texttt{forward},
we sample from
an action-specific 2D Gaussian distribution over relative displacements. \figref{fig:noisy-actions} shows
trajectory rollouts sampled from this noise model.
As shown, identical action sequences can lead to vastly different final locations.
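The effect in \figref{fig:noisy-actions} can be illustrated with a small rollout sketch; the Gaussian parameters below are illustrative placeholders, not the mocap-benchmarked PyRobot values:

```python
import numpy as np

rng = np.random.default_rng(0)
# (mean, std) of (forward, lateral, rotation) displacement per action; the
# values below are illustrative placeholders, not the benchmarked noise model.
NOISE = {
    "forward":    (np.array([0.25, 0.0, 0.0]),            np.array([0.02, 0.01, 0.02])),
    "turn-left":  (np.array([0.0, 0.0,  np.deg2rad(30)]), np.array([0.005, 0.005, 0.03])),
    "turn-right": (np.array([0.0, 0.0, -np.deg2rad(30)]), np.array([0.005, 0.005, 0.03])),
}

def rollout(actions, noisy=True):
    """Integrate a 2D pose (x, y, heading) through a sequence of actions."""
    x = y = theta = 0.0
    for a in actions:
        mean, std = NOISE[a]
        d_fwd, d_lat, d_rot = rng.normal(mean, std) if noisy else mean
        x += d_fwd * np.cos(theta) - d_lat * np.sin(theta)
        y += d_fwd * np.sin(theta) + d_lat * np.cos(theta)
        theta += d_rot
    return x, y, theta

seq = ["forward"] * 4 + ["turn-left"] + ["forward"] * 4
ideal = rollout(seq, noisy=False)            # perfect actuation
samples = [rollout(seq) for _ in range(5)]   # same actions, scattered endpoints
```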
Finally, in contrast to \cite{habitat19iccv, habitat_challenge},
we increase the angles associated with the \texttt{turn-left} and \texttt{turn-right} actions from $10^{\circ}$ to $30^{\circ}$.
The reason is a fundamental discrepancy between
simulation and reality -- there is no `ideal' GPS+compass sensor in reality. Perfect localization in indoor environments is an open research problem.
In our preliminary experiments, we found that
localization noise was exacerbated by
the `move, stop, turn, move' behavior of the robot,
which is a result of a discrete action space (as opposed to continuous control via velocity or acceleration actions). We strike a balance between staying
comparable to prior work (that uses discrete actions)
and reducing localization noise by increasing the turn angles (which decreases
the number of robot restarts).
In the longer term, we believe the community should move towards continuous control to overcome this issue.
All the modeling parameters are easily adaptable to different robots.
\label{sec:sim_agent_config}
\begin{figure}
\centering%
\resizebox{\columnwidth}{!}{
\renewcommand{\tableTitle}[1]{\large{#1}}%
\setlength{\figwidth}{0.3\columnwidth}%
\setlength{\tabcolsep}{1.5pt}%
\renewcommand{\arraystretch}{0.8}%
\renewcommand\cellset{\renewcommand\arraystretch{0.8}%
\setlength\extrarowheight{0pt}}%
\hspace{-0.25cm}\begin{tabular}{c c c c c}
&\tableTitle{Robot}&\tableTitle{RGB}&\tableTitle{Depth}&\tableTitle{Trajectories}\\
\rotatebox[origin=c]{90}{\tableTitle{Reality}}&%
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/robot_real_square.jpg}}&%
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/real_rgb.png}} &
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/real_depth.png}} &
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/real-traj-topdown.pdf}} \\
\rotatebox[origin=c]{90}{\tableTitle{Simulation}}&%
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/sim_robot_really_close_up.png}}&
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/sim_rgb.png}}&%
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/sim_depth.png}} &
\makecell{\includegraphics[width=\figwidth]{figures/sim_vs_real/sim-traj-topdown.pdf}}\\
\end{tabular}}
\caption{Simulation vs.~reality. Shading on the trajectory in reality represents uncertainty in the robot's location ($\pm$7cm).}
\label{fig:simvreal}
\end{figure}
\vspace{-.1cm}
\subsection{\locobot in Reality}
\label{sec:locobot}
\vspace{-.1cm}
\xhdr{Body.}
\locobot is designed to provide easy access to a robot with basic grasping, locomotion, and perception capabilities.
It is a modular robot based on a Kobuki YMR-K01-W1 mobile base with an extensible body.
\vspace{-.1cm}
\xhdr{Sensors.}
\locobot is equipped with an Intel D435 RGB+depth camera.
While \locobot possesses on-board IMUs and motor encoders
(which can provide the GPS+Compass sensor observations required by this task), the frequent stopping and starting from our discrete actions resulted in significant error accumulation.
To provide precise localization, we mounted a Hokuyo UTM-30LX LIDAR sensor in place of the robot's grasping arm (seen in \Cref{fig:simvreal}).
We run the LIDAR-based Hector SLAM~\cite{hector_slam} algorithm to provide the location+heading
for the GPS+Compass sensor and for computing success and SPL
of tests in the lab.
At this point, it is worth asking how accurate the
LIDAR based localization is.
To quantify localization error, we ran a total of $45$ tests across $3$ different room configurations in the lab,
and manually measured the error with measuring tape.
On average, we find errors of approximately $7\text{cm}$ with Hector SLAM, compared to $40\text{cm}$
obtained from wheel odometry and onboard IMU (combined
with an Extended Kalman Filter implementation in ROS).
Note that $7\text{cm}$ is significantly lower than the $0.2\text{m} = 20\text{cm}$ criterion used to define success in PointNav,
providing us confidence that we can use LIDAR-based Hector SLAM to judge success in our real-world experiments.
More importantly, we notice that the LIDAR approach allows the robot to reliably relocalize using its surroundings, and thus error does not accumulate over long trajectories, or with consecutive runs, which is important for running hundreds of real-world experiments.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/topdown-map-1.pdf}
\caption{Top-down view of one of our testing environments. White boxes are obstacles. The robot navigates sequentially through the waypoints $A \rightarrow B \rightarrow C \rightarrow D \rightarrow E \rightarrow A$.}
\label{fig:envconfigs}
\vspace{-.1cm}
\end{figure}
\vspace{-.1cm}
\subsection{Evaluation Environment}
\label{sec:evalenv}
Our evaluation environment is a controlled $6.5\text{m}$ by $10\text{m}$ interior room called CODA. Note that the agent was trained entirely in simulation and has never seen our evaluation environment room during training. The grid-like texture of the floor is purely coincidental, and not relied upon by the agent for navigation.
We create 3 room configurations (easy, medium, hard difficulty) with increasing number of tall, cardboard obstacles spread throughout the room.
These `boxy' obstacles can be sensed easily by both the LIDAR and camera despite the two being vertically separated by \textasciitilde$0.5$m.
Objects like tables or chairs have narrow support at the LIDAR's height.
For each room configuration, we define a set of $5$ waypoints to serve as the start and end locations for navigation episodes.
\Cref{fig:envconfigs} shows top-down views of these room configurations.
\xhdr{Virtualization.}
We digitize each environment configuration using a Matterport Pro2 3D camera to collect $360^\circ$ scans at multiple points in the room, ensuring full coverage.
These scans are used to reconstruct 3D meshes of the environment which can be directly imported into Habitat.
This streamlined process is easily scalable and enables quick virtualization of new physical spaces.
On average, each configuration was reconstructed from $7$ panoramic captures and took approximately $10$ minutes.
We also evaluated how reconstruction quality in simulation affects transfer: dropping 5\% of mesh triangles in simulation
led to a 23\% drop in SRCC (defined in \Cref{sec:srcc}).
\xhdr{Test protocol.}
We run parallel episodes
in both simulation and reality.
The agent navigates sequentially through the waypoints shown in \figref{fig:envconfigs}
$(A \rightarrow B \rightarrow C \rightarrow D
\rightarrow E \rightarrow A)$
for a total of 5 navigation episodes per room configuration.
The starting points, starting rotations, and goal locations are identical across simulation and reality.
In total, we test 9 navigation models (described in the next section),
in $3$ different room configurations, each with $5$ spawn-to-goal waypoints, and $3$ independent trials,
for a total of $810$ runs in simulation and reality combined. Each spawn-to-goal navigation with \locobot
takes approximately $6$ minutes, corresponding
to 40.5 hours of real-world testing. Safety guidelines
require that a human monitor the experiments; at 8 hours a day, these experiments take 5 days.
With such
long turn-around times, it is essential that we use
a robust pipeline to automate (or semi-automate)
our experiments and
reduce the cognitive load on the human supervisor.
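These totals can be checked by enumerating the evaluation grid directly (names are illustrative):

```python
from itertools import product

models    = [f"model-{i}" for i in range(9)]       # 9 navigation models
configs   = ["easy", "medium", "hard"]             # 3 room configurations
episodes  = ["A-B", "B-C", "C-D", "D-E", "E-A"]    # 5 spawn-to-goal waypoints
trials    = range(3)                               # 3 independent trials
platforms = ["simulation", "reality"]

runs = list(product(platforms, models, configs, episodes, trials))
assert len(runs) == 810             # simulation and reality combined
real_hours = len(runs) // 2 * 6 / 60
assert real_hours == 40.5           # 405 real runs x ~6 minutes each
```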
After each episode, the robot is automatically reset and has no knowledge from its previous run.
For unsuccessful episodes, the robot uses a prebuilt environment map to navigate to the next episode start position. The room is equipped with a wireless camera
to remotely track the experiments.
In the future, we plan to connect this automated
evaluation setup to a docker2robot challenge, where participants can push
code to a repository, which is then automatically evaluated on a real robot in this lab environment.
\subsection{Sim2Real Correlation Coefficient}
\label{sec:srcc}
To quantify the degree to which performance in simulation translates to performance in reality, we use a measure we call Sim2Real Correlation Coefficient (SRCC).
Let $(s_i, r_i)$ denote
accuracy (episode success rate, SPL \cite{anderson2018vision}, \etc) of navigation
method $i$ in simulation and reality respectively.
Given a paired dataset of accuracies for $n$ navigation methods $\{(s_1, r_1), \ldots, (s_n, r_n)\}$, SRCC is the sample
Pearson correlation coefficient (bivariate correlation).\footnote{Other metrics such as rank correlation can also be used.}
SRCC values close to $+1$ indicate high linear correlation and are desirable, insofar as changes in simulation performance metrics correlate highly with changes in reality performance metrics. Values close to $0$ indicate low correlation and are undesirable, as they indicate that changes in simulation performance are not predictive of changes in real-world performance. Note that this definition of SRCC also suggests an intuitive visualization: plotting performance in reality against performance in simulation as a scatterplot reveals the degree of correlation and exposes outlier simulation settings or evaluation scenarios.
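Concretely, SRCC is computed as follows (a minimal NumPy sketch):

```python
import numpy as np

def srcc(sim_scores, real_scores):
    """Sim2Real Correlation Coefficient: sample Pearson correlation between
    paired per-method accuracies in simulation and in reality."""
    s = np.asarray(sim_scores, dtype=float)
    r = np.asarray(real_scores, dtype=float)
    s_c, r_c = s - s.mean(), r - r.mean()
    return float((s_c * r_c).sum() / np.sqrt((s_c ** 2).sum() * (r_c ** 2).sum()))
```

Perfectly linearly related scores give $+1$; anti-correlated scores give $-1$.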
Beyond the utility of SRCC as a simulation predictivity metric, we can also view it as an optimization objective for simulation parameters.
Concretely, let $\theta$ denote parameters controlling the simulator (amount of actuation noise, lighting, \etc).
We can view simulator design as an optimization problem:
$\max_\theta \text{SRCC}(S_n(\theta), R_n)$ where $S_n(\theta) = \{s_1(\theta), \ldots, s_n(\theta)\}$ is the set of accuracies in simulation with parameters $\theta$ and $R_n$ is the same performance metric computed on equivalent episodes in reality.
Note that $\theta$ affects performance in simulation $S_n(\theta)$ but not $R_n$ since we are only changing test-time parameters.
The specific navigation models themselves are held fixed. Overall, this gives us a formal approach to simulator design instead of operating on intuitions and qualitative assessments.
In contrast, if a simulator has low SRCC but high mean real-world performance, researchers will not be able to use it to make decisions (e.g. model selection), because they cannot know whether a change in simulation performance will have a positive or negative effect on real-world performance; every change would have to be tested on the physical robot.
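This optimization can be sketched as a simple grid search; here \texttt{evaluate\_in\_sim} is a hypothetical stand-in for re-testing all $n$ fixed models in simulation under parameters $\theta$:

```python
from itertools import product
import numpy as np

def pearson(a, b):
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def optimize_srcc(evaluate_in_sim, real_scores, param_grid):
    """Grid search for max_theta SRCC(S_n(theta), R_n); real_scores stay fixed."""
    best_theta, best_srcc = None, -2.0
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        theta = dict(zip(keys, values))
        s = pearson(evaluate_in_sim(theta), real_scores)
        if s > best_srcc:
            best_theta, best_srcc = theta, s
    return best_theta, best_srcc

# Toy example: a 'sliding' exploit destroys the correlation with reality.
real = [0.2, 0.4, 0.6]
def evaluate_in_sim(theta):                    # hypothetical stand-in
    return [0.9, 0.95, 0.92] if theta["sliding"] else [0.2, 0.4, 0.6]

best_theta, best = optimize_srcc(evaluate_in_sim, real, {"sliding": [True, False]})
```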
\section{Measuring the Sim2Real Gap}
\xhdr{Navigation Models.}
We experiment with learning-based navigation models.
Specifically, consistent with \cite{habitat19iccv}, we train for PointGoal in Habitat on the $72$ environments from the Gibson dataset~\cite{xia2018gibson} that were rated $4{+}$ for quality in \cite{habitat19iccv}.
Agents are trained from scratch with reinforcement learning using DD-PPO~\cite{ddppo} -- a decentralized, distributed proximal policy optimization~\cite{schulman2017ppo} algorithm that is well-suited for GPU-intensive simulator-based training.
Each model is trained for $500$ million steps on $64$ Tesla V100s. For evaluation, we select the model with best Gibson-val performance.
We use the agent architecture from \cite{ddppo}
composed of a visual encoder (ResNet50~\cite{he2016resnet}) and
policy network (2-layer LSTM~\cite{hochreiter97lstm} with $512$-dim state).
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/hab_chal_srcc.pdf}
\caption{\srccSPL (left) and \srccSucc (right) plots for AI Habitat Challenge 2019 test-sim setting in the CODA environment. We note a relatively low correlation between real and simulated performance.}
\label{fig:hab_scatter}
\end{figure}
Consistent with prior work, we train agents with RGB and depth sensors.
Real-world depth sensors exhibit significant noise and are limited in range.
Thus, inspired by the winning entry in the RGB track of the Habitat Challenge 2019~\cite{habitat_challenge}, we also test an agent that uses a monocular depth estimator~\cite{icra_2019_fastdepth} to predict depth from RGB, which is then fed to the navigation model.
In total, we train 9 different agents by varying sensor modalities (RGB, Depth, RGB$\rightarrow$Predicted Depth) and training simulator configurations (\eg actuation noise levels). The exact settings
are listed in Table \ref{tab:simvreal_results}.
Note that the simulator parameters used for training these models may differ from the simulator parameters used for testing ($\theta$).
Our goal is to span the spectrum of performance at test-time.
\subsection{Revisiting the AI Habitat Challenge 2019}
\Cref{fig:hab_scatter} plots sim-vs-real performance of these 9 navigation models
\wrt success rate (right) and SPL~\cite{anderson2018evaluation} (left).
Horizontal and vertical bars on a symbol indicate
the standard error in simulation and reality respectively.
\srccSPL is $0.60$, which is reasonably high but far from a level where we can be confident about evaluations in simulation alone. Problematically, there are 9 relative ordering reversals from simulation to reality.
The success scatterplot (right) shows an even more disturbing trend -- nearly all methods (except one) appear to be working \emph{exceedingly} well in simulation with success rates close to 1.
However, there is a large dynamic range in success rates in reality.
This is summarized by a low \srccSucc of $0.18$, suggesting that improvements in performance in simulation are not predictive of performance improvements on a real robot.
Note that other than largely cosmetic adjustments to the robot size, sensor and action space specification, this simulation setting is not fundamentally different from the Habitat Challenge 2019.
Upon deeper investigation, we discovered that one factor behind this low sim2real predictivity is a `sliding' behavior in Habitat.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figures/agent-cheating-by-sliding.pdf}
\caption{Sliding behavior leading to `cheating' agents. At time $t$, the agent at $s_t$ executes a \texttt{forward} action, and slides along the wall to state $s_{t+1}$. The resulting straight-line path (used to calculate SPL) goes outside the environment. Gray denotes navigable space while white is non-navigable.}
\label{fig:corner-clipping}
\end{figure}
\xhdr{Cheating by sliding.}
In Habitat-Sim~\cite{habitat19iccv}, when the agent takes an action that results in a collision, the agent \emph{slides} along the obstacle as opposed to stopping. This behavior is also seen in many other simulators -- it is enabled by default in MINOS~\cite{savva2017minos} and DeepMind Lab~\cite{beattie2016deepmind}, and is also prevalent in simulators and video game engines that employ a physics backend, such as Gibson~\cite{xia2018gibson} and AI2 THOR~\cite{ai2thor}. There is no perfect physics simulator, only approximations; this type of physics allows for smooth human control, but it accurately reflects neither real-world physics nor the safety precautions a robot platform may employ (\ie stopping on collision).
We find that this enables `cheating' by learned agents.
As illustrated by an example in \figref{fig:corner-clipping},
the agent exploits this sliding mechanism to take an effective path that appears to travel \emph{through non-navigable regions} of the environment (like walls). Let $s_t$ denote agent position at time $t$, where the agent is already in contact with an obstacle.
The agent executes a \texttt{forward} action, collides, and slides along the obstacle to state $s_{t+1}$.
The path taken during this maneuver is far from a straight line, however for the purposes of computing
SPL (the metric the agent optimizes),
Habitat calculates the Euclidean distance travelled
$||s_t - s_{t+1}||_2$.
This is equivalent to taking a straight line path between $s_t$ and $s_{t+1}$ that goes outside the navigable regions of the environment
(appearing to go through obstacles).
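The inflation can be made concrete with the SPL definition of \cite{anderson2018evaluation}: path length is accumulated as the Euclidean displacement between logged positions, so a slide around a corner is credited as the shorter straight chord. A sketch with hypothetical coordinates:

```python
import math

def path_length(positions):
    """Path length as the sum of per-step Euclidean displacements --
    exactly what gets logged, even when a step slides along a wall."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def spl(success, shortest, positions):
    """Success weighted by Path Length for one episode
    (Anderson et al. 2018): S * l / max(p, l)."""
    p = path_length(positions)
    return float(success) * shortest / max(p, shortest)

# Hypothetical slide around a corner: the agent physically travels two
# 1-unit wall segments (2 units), but the logged step is the sqrt(2)
# chord through the corner, so the episode's SPL is overestimated.
logged = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
print(round(spl(True, 2.0, logged), 3))  # -> 0.828
```

With the physical 3-unit path the same episode would score $2/3 \approx 0.667$, so the logged chord overstates efficiency.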
On the one hand, the emergence of such exploits is a
sign of success of large-scale reinforcement learning -- clearly, we are maximizing reward. On the other hand, this is a problem for sim2real transfer.
Such policies fail disastrously in the real world where the robot bump sensors force a stop on contact with obstacles.
To rectify this issue, we modify Habitat-Sim to
disable sliding on collisions.
The discovery of this issue motivated our investigation into optimizing simulation parameters.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/updated_srcc.pdf}
\caption{Optimized \srccSPL (left) and \srccSucc (right) scatterplots in the CODA environment. Comparing with \Cref{fig:hab_scatter} we see improvements, indicating better predictivity of real-world performance.}
\label{fig:scatteroptimized}
\end{figure}
\subsection{Optimizing simulation with SRCC}
We perform grid-search over simulation parameters --
sliding (off vs on) and a scalar multiplier on actuation noise (varying from 0 to 1, in increments of 0.1).
We find that sliding off and $0.0$ actuation noise lead to the highest SRCC.
\figref{fig:scatteroptimized} shows
a remarkable alignment in the SPL scatter-plot (left) -- nearly all models lie close to the diagonal,
suggesting that we can fairly accurately predict how a model will perform on the robot by testing it in simulation.
Recall that real-world evaluation takes $40.5$ hours and significant human effort, while evaluation in simulation completes in under 1 minute.
\srccSPL improves from $0.603$ (\figref{fig:hab_scatter}) to $0.875$ ($0.272$ gain).
\srccSucc shows an even stronger improvement --
from $0.18$ to $0.844$ ($0.664$ gain)!
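The sweep itself is straightforward to sketch; `evaluate_in_sim` and the toy scores below are stand-ins for the real evaluation runs, not the paper's code:

```python
import math
from itertools import product

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def optimize_sim_params(real_spl, evaluate_in_sim):
    """Grid search over (sliding, noise multiplier), keeping the setting
    whose simulated SPLs correlate best with the fixed real-world SPLs.
    `evaluate_in_sim(sliding, noise)` -> list of per-model sim SPLs."""
    best = None
    for sliding, noise in product([False, True], [i / 10 for i in range(11)]):
        srcc = pearson(evaluate_in_sim(sliding, noise), real_spl)
        if best is None or srcc > best[0]:
            best = (srcc, sliding, noise)
    return best

# Toy stand-in: sliding-off / zero-noise reproduces reality exactly,
# every other setting yields saturated, uninformative scores.
real = [0.33, 0.59, 0.74]
def fake_eval(sliding, noise):
    if not sliding and noise == 0.0:
        return real
    return [0.9, 0.95, 0.92]
print(optimize_sim_params(real, fake_eval))
```

Only the relative ordering of models matters to SRCC, which is what makes a cheap correlation target usable for tuning simulator knobs.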
\begin{table}
\vspace{5pt}
\renewcommand\theadalign{bc}%
\renewcommand\theadfont{\bfseries}%
\renewcommand\theadgape{\Gape[2pt]}%
\renewcommand\cellgape{\Gape[2pt]}%
\renewcommand{\tableTitle}[1]{\cellcolor{white}{\parbox[t]{2.5mm}{\multirow{-9}{*}{\rotatebox[origin=c]{90}{\cellcolor{white}\textbf{#1}}}}}}%
\setlength{\tabcolsep}{3pt}%
\caption{Average performance in reality (col.~4), CODA and Gibson scenes under Habitat Challenge 2019 settings [sliding=on, noise=0.0] (col.~5, col.~7), and test-sim [sliding=off, noise=0.0] (col.~6, col.~8) across different train-sim configurations (col.~1-3).}
\label{tab:simvreal_results}
\scriptsize
\centering%
\rowcolors{2}{white}{gray!25}%
\begin{tabularx}{\linewidth}{cYYYYYYY}
\toprule
\rowcolor{white} \textbf{Sensor} & \textbf{Train-Sim Noise} & \textbf{Train-Sim Sliding} & \textbf{Reality SPL} & \textbf{CODA Chall-Sim SPL} & \textbf{CODA Test-Sim SPL} &\textbf{Gibson Chall-Sim SPL} & \textbf{Gibson Test-Sim SPL}\\
\midrule
Depth & 0.5 & off & 0.59 & 0.64 & 0.58 & 0.68 & 0.59\\
Depth & 1.0 & off & 0.74 & 0.81 & 0.70 & 0.78 & 0.53\\
Pred. Depth & 0.5 & off & 0.53 & 0.37 & 0.37 & 0.54 & 0.40\\
Pred. Depth & 1.0 & off & 0.66 & 0.75 & 0.58 & 0.64 & 0.43\\
RGB & 0.5 & off & 0.33 & 0.50 & 0.33 & 0.56 & 0.43\\
RGB & 1.0 & off & 0.44 & 0.69 & 0.42 & 0.58 & 0.36\\
Depth & 0.0 & on & 0.64 & 0.70 & 0.63 & 0.66 & 0.35\\
Pred. Depth & 0.0 & on & 0.58 & 0.80 & 0.44 & 0.56 & 0.32\\
RGB & 0.0 & on & 0.61 & 0.80 & 0.64 & 0.62 & 0.36\\\bottomrule
\end{tabularx}
\end{table}
From \Cref{tab:simvreal_results}, we can see that the best performance in reality (Reality SPL) is achieved by row 2, and CODA Test-Sim SPL (col 6) correctly predicts this by a clear margin.
The fact that no actuation noise is the optimal setting suggests that our chosen actuation noise model (from PyRobot~\cite{pyrobot2019}) may not reflect conditions in reality.
To demonstrate the capabilities of our models in navigating cluttered environments,
we also evaluate performance
on the Gibson dataset
which contains
$572$ scanned spaces (apartments, multi-level homes, offices, houses, hospitals, gyms),
containing furniture (chairs, desks, sofas, tables),
for a total of $1447$ floors.
We use the `Gibson-val' navigation episodes from Savva \emph{et al.}~\cite{habitat19iccv}.
We observe a similar drop in performance
between sliding on (Chall-Sim, cols 5 and 7 in \Cref{tab:simvreal_results}) and sliding off (Test-Sim, cols 6 and 8).
This further suggests that the participants in the Habitat Challenge 2019 would not achieve similar performance on a real robot.
We also implement a wall-following oracle,
and compare its performance to our learned model (Depth, actuation noise=0.5, sliding=off).
The oracle receives full visibility of the environment via
a top-down map of the environment. Note that this is significantly stronger input
than our learned models (which operate on egocentric depth frames).
The oracle navigates to the goal by following a
straight path towards the goal and follows walls upon coming in contact with obstacles.
We endow the oracle with perfect wall-following (\emph{i.e.}
it follows the wall along the shortest direction to the goal),
and it never actually collides or gets stuck on obstacles.
This oracle achieves 0.46 SPL on Gibson-val, compared to 0.59 SPL
achieved by our model under the Gibson Test-Sim setting.
For episodes longer
than 10m, the oracle performance drops to 0.05 SPL, compared to our model,
which achieves a 0.23 SPL.
These experiments show the need for a learning based approach in complex and
cluttered environments.
\section{Conclusion}
We introduce the Habitat-PyRobot Bridge (HaPy) library that allows for seamless deployment of visual navigation models.
Using Matterport scanning, the Habitat stack, the HaPy library, and \locobot we benchmark the correlation between reality and simulation performance with the SRCC metric.
We find that naive simulation parameters lead to low correlation between performance in simulation and performance in reality.
We then optimize the simulation parameters for SRCC and obtain high predictivity of real world performance without running new evaluations in the real world.
We hope that the infrastructure we introduce and the conceptual framework of optimizing simulation for predictivity will enable sim2real transfer in a variety of navigation tasks.
\xhdr{Acknowledgements.} We thank the reviewers for their helpful suggestions. We are grateful to Kalyan Alwala, Dhiraj Gandhi and Wojciech Galuba for their help and support. The Georgia Tech effort was supported in part by NSF, AFRL, DARPA, ONR YIPs, ARO PECASE, Amazon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
\\
\xhdr{Licenses for referenced datasets.} \\
{\footnotesize
Gibson: \url{https://storage.googleapis.com/gibson_material/Agreement\%20GDS\%2006-04-18.pdf}
\\
Matterport3D: \url{http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf}
}
\section{Introduction}
Black hole binaries (BHB) transit through different ``spectral states'' during their outbursts. The two canonical ones are the soft state (also known as ``high'' state, hereafter HSS) and the hard state (also known as ``low'' state, hereafter LHS).
In the HSS, the emission
is dominated by a bright and warm ($\sim1$\,keV) accretion disk, the level of variability is low, and the power density spectrum
is power-law like. Little or no radio emission is detected in this state, which is interpreted as an evidence for an absence of jets.
In the LHS, the disk is colder ($\leq0.5$\,keV) and might be truncated at a large distance from the accretor
\citep[see however][for examples of results contradicting this picture]{reynolds13}. The X-ray spectrum shows a strong power-law like component extending up to hundreds
of keV, usually with a roll-over typically at 50--200\,keV. The level of rapid variability is higher than in the HSS and the power density spectrum
shows a band-limited noise component, sometimes with quasi-periodic oscillations (QPOs) with frequencies in the range $\sim0.1$--10\,Hz. A so-called
``compact-jet'' is detected mainly through its emission in the radio to infrared domain \citep[e.g.,][]{corbel13}, where the spectrum can be
modeled by $F_\nu \propto \nu^{-\alpha}$ where $\alpha\leq0$ up to a break frequency $\nu_{\mathrm{break}}$ that usually lies in the infrared
domain \citep{Corbel02a,rahoui11}. Above this break, the spectrum is a typical synchrotron spectrum
with $\alpha\ge0.5$.
Intermediate states (IS) exist between the HSS and LHS \citep[see, e.g.,][for reviews and more detailed state classifications]{remillard06,homan05b,fender09}. Transitions between states indicate drastic changes in the properties of the flows and are thus crucial for the understanding of the accretion-ejection connection. During a transition from the LHS to the HSS, the compact jet is quenched \citep[e.g.][]{fender99,mickael_1743}; discrete, sometimes superluminal ejections occur, the level of X-ray variability drops abruptly, and QPOs, if present in the given source, change types before disappearing \citep[see, e.g.,][and references therein for both a description of the different types of QPOs and a possible
theoretical interpretation of their origin and link to the spectral states]{varniere11,varniere12}.
The BHB Cygnus X-1 (Cyg X-1) was discovered in 1964 \citep{Bowyer65}. It is the first Galactic source known to host a black hole \citep{Bolton75} as the primary, and recent estimates led to a black hole mass of $\mathrm{M}_{\mathrm{BH}}=14.8\pm1.0\,{\mathrm{M}}_\odot$ \citep{Orosz11}. The donor star is the O9.7 Iab star HDE 226868 \citep{Bolton72b,Walborn73} with $\mathrm{M}_{\mathrm{HDE}}=19.2\pm1.9\,M_\odot$ \citep{Orosz11}\footnote{See also \citet{ziolkowski14} who obtain somewhat different values.}. The system's orbital period is 5.6\,d \citep{Webster72, Brocksopp99, Gies03}; it has an inclination of $i = 27\fdg1 \pm 0\fdg8$ with respect to the plane of the sky \citep{Orosz11}, and is located at a distance of $d = 1.86 \pm 0.12$\,kpc from Earth \citep{Reid11, Xiang_2011}.
Cyg X-1 can be found in both the LHS and the HSS; up to 2010, it was predominantly in the LHS. Since then, the behavior has changed and it spends most of its time in the HSS \citep{grinberg2013}. It sometimes undergoes partial (``failed'') transitions from the LHS to the HSS, and can be found in a transitional or intermediate state \citep[e.g.,][]{Pottschmidt03b}. The detection of compact relativistic jets in the LHS \citep{Stirling01} places
Cyg X-1 in the family of microquasars. It is one of the few microquasars known to have a hard tail extending to (and beyond) the MeV range \citep[e.g.,][]{mcconnell00}.
The presence of the MeV tail has recently been confirmed with the two main instruments onboard ESA's INTErnational Gamma-Ray Astrophysics Laboratory (\textit{INTEGRAL}): the Imager onBoard the \textit{INTEGRAL} Satellite \citep[IBIS;][]{ubertini03} and the SPectrometer on \textit{INTEGRAL}\ \citep[SPI;][]{vedrenne03}. \citet[hereafter LRW11]{marion04,laurent11} detect it with IBIS, and \citet[hereafter JRC12]{jourdain12} with SPI. Utilizing photons that Compton scatter in the upper detector plane of IBIS, ISGRI \citep{lebrun03}, and are absorbed in its lower plane, PICsIT \citep[sensitive in the 200\,keV--10\,MeV energy band]{labanti03}, we have shown that the $\geq400$\,keV emission of Cyg X-1 is polarized at a level of about 70\%. We obtained only a 20\% upper limit on the degree of polarization at lower energies (LRW11). This result was independently confirmed by JRC12 using SPI data. Both studies obtained compatible results for the properties of the polarized emission, reinforcing the genuineness of this discovery. Both teams also suggested that the polarized emission is due to synchrotron emission from a compact jet.
The presence of polarized emission and the energy dependence of the polarization have profound implications for our understanding of accretion and ejection processes. They can help distinguish between the different proposed emitting media (Comptonizing corona vs.\ synchrotron-self-Compton jets) in microquasars and provide important clues to the composition, energetics, and magnetic field of the jet.
A limitation of both studies, however, is that they accumulated all \textit{INTEGRAL}\ data available to them, regardless of the spectral state and radio (and thus jet) properties of the source. In this paper, we separate the whole \textit{INTEGRAL}\ data set (accumulated up to December 2012) into the different spectral states and study the properties of the state-resolved broad band 10--2000\,keV emission of Cyg X-1. We also utilize Ryle/AMI radio data to determine the state-dependent level of the 15\,GHz jet emission. In \S2, we describe the data reduction and in particular the so-called ``Compton mode'' that allowed us to discover and measure the polarization in Cyg X-1 (LRW11). We carefully separate the data into spectral states following a procedure based on the classification of \citet{grinberg2013}, described in \S\ref{sec:asmstate}. The results of both the long-term 15\,GHz radio monitoring and the \textit{INTEGRAL}\ state-resolved analysis are presented in \S4. We discuss the implications of our analysis in \S5 and summarize our results in \S6.
\section{Observations and data reduction}
\subsection{Standard data reduction of the JEM-X and IBIS/ISGRI data}
Cyg~X-1\ has been extensively observed by \textit{INTEGRAL}\ since the launch of the satellite; preliminary results of the very first observations are described by \citet{katja03}. In LRW11, we first considered all uninterrupted \textit{INTEGRAL}\ pointings, also called ``science windows'' (ScWs), where Cyg~X-1\ had an offset angle of less than $10^\circ$ from the center of the field of view. We then removed all ScWs from the performance verification phase so that our analysis covered the period from 2003 March 24 (MJD 52722, satellite revolution [rev.] 54) to 2010 June 26 (MJD 55369, rev.\ 938). We further excluded ScWs with less than 1000\,s of good ISGRI and PICsIT time. This resulted in a total of 2098 ScWs. Polarization was detected when stacking this sample (LRW11).
In the present work, we added to the LRW11 sample all observations belonging to our Cyg X-1 \textit{INTEGRAL}\ monitoring program (PI J. Wilms) made until 2012 December 28 (MJD 56289.8, rev.\ 1246). We applied the same
filtering criteria to select good ScWs. We consider both the data of the low-energy detector of IBIS, ISGRI \citep{lebrun03}, and of the X-ray monitors JEM-X \citep{lund03}. ISGRI is sensitive in the $\sim$18--1000\,keV energy range, although its response falls off rapidly above a few hundred keV. The two JEM-X units cover the soft X-ray (3--30\,keV) band.
All data were reduced with the \texttt{Off Line Scientific Analysis (OSA)} v.10.0 software suite and the associated updated calibration \citep{Caballero2012}, following the methods outlined by \citet{rodrigue08_1915b}.
The brightest and most active sources of the field, Cyg X-1, Cyg X-2, Cyg X-3, 3A 1954+319, and EXO 2032+375, were subsequently taken into account in the extraction of spectra and light curves. The Cyg X-1 spectra of each ScW were extracted with 67 spectral channels. All ScWs belonging to the same state were then averaged to produce one ISGRI spectrum for each spectral state. The spectral classification is described in \S\ref{sec:asmstate} \citep[see][for the details of the method]{grinberg2013}. A systematic error of 1\% was applied for all spectral channels. Low significance channels were rebinned when necessary.
The JEM-X telescopes have a much smaller field of view than IBIS. The number of ScWs during which Cyg X-1 can be seen by these instruments is therefore smaller. Since rev.\ 983 (MJD 55501) both units are on for all observations. Before rev.\ 983, the two units were used alternately, so that the number of ScWs and the time coverage differ between them. We derived images and spectra in the standard manner following the cookbook, separating the data into states with the same criteria as for the IBIS data. We applied systematic errors of 2\% to all spectral channels of both units.
\subsection{The Compton Mode}
\label{sec:comptonmode}
Thanks to its two position-sensitive detectors ISGRI and PICsIT,
IBIS can be used as a Compton polarimeter \citep{lei97}. The concept behind a Compton polarimeter utilizes the polarization dependence of the differential cross section for Compton scattering,
\begin{equation}
\frac{d\sigma}{d\Omega} = \frac{r_0^2}{2}\left(\frac{E'}{E_0}\right)^2\left(\frac{E'}{E_0}+\frac{E_0}{E'}-2 \sin^2\theta \cos^2\phi \right)
\end{equation}
where $r_0$ is the classical electron radius, $E_0$ the energy of the incident photon, $E'$ the energy of the scattered photon,
$\theta$ the scattering angle, and $\phi$ the azimuthal angle relative to the polarization direction. Linearly polarized photons
scatter preferentially perpendicularly to the incident polarization vector. Hence by examining the scattering angle distribution
of the detected photons (also referred to as polarigrams)
\begin{equation}
N(\phi)=S[1+a_0\cos(2(\phi-\phi_0))]
\label{eq:azimuth}
\end{equation}
where $S$ is the mean count rate, one can derive the polarization angle $PA$ and polarization fraction $\Pi$.
With $PA = \phi_0 - \pi /2 + n \pi$, equation~\ref{eq:azimuth}
becomes $N(\phi)=S[1-a_0\cos(2(\phi-PA))]$. The polarization angle therefore corresponds to the minimum of $N(\phi)$.
The polarization fraction is $\Pi= a_0/a_{100}$, where $a_{100}$
is the amplitude expected for a 100\% polarized source, obtained from Monte-Carlo simulations of the instrument.
\citet{forot08} obtained $a_{100}=0.30 \pm 0.02$, hence a $\sim6.7\%$ systematics uncertainty that should be
taken into account in the derivation of $\Pi$. Recent simulations allowed us to reduce
the uncertainty on $a_{100}$ to $\sim3\%$ which is small compared to the statistical errors on $N(\phi)$ (see below), and
is therefore neglected.
To measure $N(\phi)$, we followed the procedure described by \citet{forot08} that led to the successful detection of the polarized signal from the Crab nebula. We consider events that interacted once in ISGRI and once in PICsIT. These events are automatically selected on board through a time coincidence algorithm. The maximum allowed time window was set to 3.8\,$\mu$s during our observations. To derive the source flux as a function of $\phi$, the Compton photons were accumulated in 6 angular bins, each with a width of $30^\circ$ in azimuthal scattering angle. To improve the signal-to-noise ratio in each bin, we took advantage of the $\pi$-symmetry of the differential cross section (Eq.~\ref{eq:azimuth}), i.e., the first bin contains the photons with $0^\circ\leq\phi<30^\circ$ and $180^\circ\leq\phi<210^\circ$, etc. Chance coincidences, i.e., photons interacting in both detectors, but not related to a Compton event, were subtracted from each detector image following the procedure described by \citet{forot08}. The derived detector images were then deconvolved to obtain sky images. The flux of the source in each $\phi$-bin was then measured by fitting the instrumental PSF to the source peak in the sky image.
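The extraction of $\Pi$ and $PA$ from a six-bin polarigram can be sketched as follows, assuming ideal, equally spaced bin centers after the $\pi$-symmetry folding and neglecting the uncertainty on $a_{100}$ (the synthetic rates are purely illustrative):

```python
import math

A100 = 0.30  # amplitude expected for a 100% polarized source

def fit_polarigram(rates, a100=A100):
    """Recover (S, polarization fraction, PA) from count rates in n
    equally spaced azimuthal bins over [0, pi), via the Fourier
    coefficients of N(phi) = S * [1 + a0 * cos(2*(phi - phi0))]."""
    n = len(rates)
    centers = [(k + 0.5) * math.pi / n for k in range(n)]
    S = sum(rates) / n
    A = 2.0 / n * sum(r * math.cos(2 * c) for r, c in zip(rates, centers))
    B = 2.0 / n * sum(r * math.sin(2 * c) for r, c in zip(rates, centers))
    a0 = math.hypot(A, B) / S
    phi0_est = 0.5 * math.atan2(B, A)
    return S, a0 / a100, phi0_est - math.pi / 2  # PA at the minimum of N

# Synthetic 6-bin polarigram: S = 10, a0 = 0.21 (70% polarized), phi0 = 40 deg
phi0 = math.radians(40.0)
rates = [10.0 * (1 + 0.21 * math.cos(2 * ((k + 0.5) * math.pi / 6 - phi0)))
         for k in range(6)]
S, frac, pa = fit_polarigram(rates)
print(round(S, 3), round(frac, 3))  # -> 10.0 0.7
```

Because the bin centers are equally spaced over $[0,\pi)$, the discrete Fourier coefficients recover $S$, $a_0$, and $\phi_0$ exactly for noiseless input; real data of course carry the statistical uncertainties discussed next.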
The uncertainty of $N(\phi)$ is dominated by statistical fluctuations, as our observations are background dominated. Confidence intervals on $a_0$
and $\phi_0$ are not derived by a $N(\phi)$ fit to the data, but through a Bayesian approach following \citet{forot08}, based on the work
presented in \citet{vaillancourt06}. The applicability of this method was recently thoroughly discussed in \citet{maier14}.
In this computation, we suppose that all real polarization angles and fractions have a uniform probability distribution \citep[non-informative prior
densities; ][]{quinn12, maier14} and that the real polarization angle and fraction are $\phi_0$ and $a_0$. We then need the probability density
distribution of measuring $a$ and $\phi$ from $N_\mathrm{pt}$ independent data points in $N(\phi)$ over a period $\pi$, based on Gaussian
distributions for the orthogonal Stokes components \citep{vaillancourt06, forot08,maier14}:
\begin{multline}
dP(a,\phi) = \frac{N_\mathrm{pt}~S^2}{\pi\sigma_S^2} \times \\ \exp\left[-\frac{N_\mathrm{pt} S^2}{2\sigma_S^2}\left[a^2+a_0^2-2aa_0\cos(2\phi-2\phi_0)\right]\right]a\,da\,d\phi
\end{multline}
where $\sigma_S$ is the uncertainty of $S$. Credibility intervals of $a$ and $\phi$ correspond to the intervals comprised between the
minimum and maximum values of the parameter considered in the two dimensional 1 $\sigma$ contour plot (not shown).
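A schematic numerical version of this construction tabulates $dP(a,\phi)$ on a grid and keeps the highest-density cells until they enclose 68.27\% of the posterior mass; the grid resolution and test values below are ours, not the values used in the analysis:

```python
import math

def posterior_grid(a0, phi0, s_over_sigma, npt, na=200, nphi=180):
    """Tabulate the unnormalized posterior density dP(a, phi) for true
    amplitude a0 and angle phi0 (Gaussian Stokes components; the
    leading `a` is the Jacobian of the polar transformation)."""
    k = npt * s_over_sigma ** 2
    cells = []
    for i in range(na):
        a = (i + 0.5) / na
        for j in range(nphi):
            phi = (j + 0.5) * math.pi / nphi
            arg = a * a + a0 * a0 - 2 * a * a0 * math.cos(2 * (phi - phi0))
            cells.append((a, phi, a * math.exp(-0.5 * k * arg)))
    return cells

def credibility_interval(cells, level=0.6827):
    """Highest-density region holding `level` of the posterior mass;
    returns (a_min, a_max, phi_min, phi_max) over that region."""
    total = sum(p for _, _, p in cells)
    acc, region = 0.0, []
    for a, phi, p in sorted(cells, key=lambda t: -t[2]):
        region.append((a, phi))
        acc += p
        if acc >= level * total:
            break
    a_vals = [a for a, _ in region]
    phi_vals = [phi for _, phi in region]
    return min(a_vals), max(a_vals), min(phi_vals), max(phi_vals)

amin, amax, pmin, pmax = credibility_interval(
    posterior_grid(a0=0.2, phi0=math.radians(40.0), s_over_sigma=8.0, npt=6))
print(amin < 0.2 < amax, pmin < math.radians(40.0) < pmax)
```

For a well-measured signal the region straddles the true $(a_0,\phi_0)$ and shrinks as $N_\mathrm{pt}S^2/\sigma_S^2$ grows.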
\section{The spectral states of Cyg X-1}
\label{sec:asmstate}
We previously developed a method to classify the states of Cyg X-1 at any given time based on hardness and intensity measurements with the \emph{RXTE}/ASM and MAXI and on the hard X-ray flux obtained with the \emph{Swift}/BAT \citep{grinberg2013}. The ASM and MAXI data can be used to separate all three states (LHS, IS, HSS), while using BAT we can only separate the HSS, but not the LHS and IS from each other.
In microquasars in general and Cyg X-1 in particular, spectral transitions can occur on short time scales \citep[hours; e.g.,][]{boeck11}. We thus performed the spectral classification for each individual ScW; a ScW is a sufficiently short exposure that the source can be considered spectrally stable over it for the majority of the data. The IBIS 18--25\,keV light curve is shown in Fig.~\ref{fig:Ryle}, where the different symbols and colors represent the three different spectral states.
Where available, we used simultaneous ASM data for the state classification. ScWs with two simultaneous ASM measurements that are classified into different states are presumed to have occurred during a state transition and are excluded from the analysis. Most ScWs are, however, not strictly simultaneous with any ASM measurement. To classify ScWs without simultaneous ASM measurements, we therefore used the closest ASM measurement within 6\,h before or after a given ScW. For the remaining pointings where no such ASM measurement exists, we used the same approach, first based on MAXI and then, if necessary, on BAT. As shown by \citet{grinberg2013}, the probability that a state is wrongly assigned using this method within 6\,h of an ASM pointing is 5\% for both the LHS and HSS, and $\sim$25\% for the IS.
Out of the 3302 IBIS ScWs, 1739 ($\sim$52.7\%) were taken during the LHS, 868 ($\sim$26.3\%) during the HSS and 316 ($\sim$9.6\%) during the IS. The remaining 379 (11.5\%) have an uncertain classification and are thus not considered in our analysis: 351 lack simultaneous or quasi-simultaneous all-sky monitor measurements that would allow a classification, and 28 were caught during a state transition.
From MJD 55700 until MJD 56000, Cyg X-1 was highly variable and underwent several transitions on short timescales \citep[particularly striking in Fig.~1 of][]{grinberg2014}. Inspection of some of the ScWs during this period shows that the source flux was very low in the ISGRI range. Since these ScWs were taken close to or in between state transitions, they were also removed from our analysis in order to limit the potential uncertainties they could introduce into the final data products. This filtering removed 32 additional ScWs.
The resulting IBIS exposure times are 2.05\,Ms, 1.21\,Ms, and 0.22\,Ms for the LHS, HSS, and IS, respectively. The IBIS 18--25\,keV and Ryle/AMI 15\,GHz light curves are shown in Fig.~\ref{fig:Ryle}.
\section{Results}
\subsection{Radio monitoring with the Ryle-AMI telescope}
The 2002--2012 15\,GHz light curve, classified with the same method as for the \textit{INTEGRAL}\ data, is shown in Fig.~\ref{fig:Ryle}. As known from previous studies of microquasars, including Cyg X-1, the LHS and the IS show a high level of radio activity, while the radio flux is at a level compatible with or very close to zero during the soft state \citep[e.g.,][]{Corbel03,Gallo03}.
Cyg X-1 shows a high level of radio variability with a mean flux $\langle F_{\mathrm{15\,GHz}}\rangle\sim12$ mJy and flares. The most prominent flare occurred on MJD 53055 (2004 February 20) and reached a flux of 114\,mJy\footnote{This flare was studied in detail by \citet{fender06}, although the time axis of their Fig.~4 is wrongly associated with MJD 55049.2--55049.6}. Flares occur frequently (Fig.~\ref{fig:Ryle}), and are connected with the X-ray behavior of the source \citep{fender06,wilms07}. The main flares seem to mostly coincide with the IS. The radio behavior in the LHS is usually steadier (although there is, e.g., a flare at MJD~53700, Fig.~\ref{fig:Ryle}).
We obtained state resolved mean radio fluxes of $\langle F_{\mathrm{15\,GHz, LHS}} \rangle=13.5$\,mJy, $\langle F_{\mathrm{15\,GHz,IS}} \rangle=15.4$\,mJy, and $\langle F_{\mathrm{15\,GHz, HSS}} \rangle=4.6$\,mJy. Given the typical rms uncertainty of $\mathrm{rms(5min)}=3.0$\,mJy \citep[e.g.][]{pooley97,rodrigue08_1915a}, the mean radio flux in the HSS is not detected at the $3\sigma$ level. The few clear radio detections in this state (Fig.~\ref{fig:Ryle}) do not correspond to a steady level of emission that would indicate a compact radio core \citep[see also][]{fender99}. Such detections preferably occur after radio flares and thus likely represent the relic emission of previously ejected material \citep[e.g.][]{Corbel04b}.
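The detection statement can be checked directly against the quoted numbers, using the simple criterion that a mean flux must exceed $3\times$ the single-measurement rms (a worked check of the values in the text, not new analysis):

```python
def detected(mean_flux_mjy, rms_mjy=3.0, nsigma=3.0):
    """Detection criterion used here: the state-averaged flux must
    exceed nsigma times the single-measurement rms uncertainty."""
    return mean_flux_mjy > nsigma * rms_mjy

state_means = {"LHS": 13.5, "IS": 15.4, "HSS": 4.6}  # mJy, from the text
print({s: detected(f) for s, f in state_means.items()})
# -> {'LHS': True, 'IS': True, 'HSS': False}
```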
\subsection{State resolved spectral analysis}
We analyzed the state-resolved energy spectra of each instrument using \texttt{XSPEC v.12.8.0} \citep{Arnaud1996}. For JEM-X we considered the 10--20\,keV range\footnote{Although the JEM-X units are well calibrated above $\sim$5\,keV, we chose to ignore the lower energy bins. Due to the possible influence of a disk and the presence of the iron fluorescence line at 6.4\,keV, they would add uncertainties and degeneracies to the models that the low energy resolution does not allow us to constrain.} and for the ISGRI data the 20--400\,keV range. We used Compton data between 300\,keV and 1 or 2\,MeV, depending on their statistical quality. We started by fitting the spectra in the 10--400\,keV range, since the contribution of the 1\,MeV power-law tail is expected to be negligible there (LRW11). When an appropriate model was found for this restricted range, we added the 0.3--2\,MeV Compton spectra and repeated the spectral analysis.
In order to find the most appropriate models describing the 10--400\,keV spectra for all three spectral states, we proceeded in an incremental way. We started with a simple power-law model and added spectral model components until an acceptable fit according to $\chi^2$-statistics was found. The phenomenological models were then replaced by more physical Comptonization models. Since the $\geq 10$\,keV data are not affected by the absorption column density of $\mathrm{N}_\mathrm{H} \sim$ a few $10^{22}\,\mathrm{cm}^{-2}$ \citep[e.g.,][]{boeck11,Grinberg_2015a}, no modeling of the foreground absorption is necessary. Cross-calibration uncertainties and source variability during different exposure times are taken into account by a normalization constant. The ISGRI constant was frozen to one and the others left free to vary independently. We required that the values of the free constants were in the range $[0.85,1.15]$. The best-fit parameters obtained with the best phenomenological and with the Comptonization models are reported in Table~\ref{tab:fits1} for all three spectral states.
\subsubsection{Hard state}
\label{sec:hardstate}
A power-law (Fig.~\ref{fig:reshard}a) gives a very poor representation of the 10--400\,keV data, with a reduced $\chi^2$ of $\ensuremath{\chi^2_\nu} \sim 70$ for 87 degrees of freedom (dof). The broad band spectrum clearly departs from a straight line, which indicates a curved spectrum. A power-law modified by a high energy cut-off (\texttt{highecut*power} in \texttt{XSPEC}) provides a highly significant improvement with $\ensuremath{\chi^2_\nu}=0.89$ for 85 dof (Fig.~\ref{fig:reshard}b). We note, however, that the value of the high energy cut-off is unconstrained (Table~\ref{tab:fits1}), and deviations are seen at high energies.
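The \texttt{highecut*power} shape can be written out explicitly; the sketch below gives only the model form and a bare reduced-$\chi^2$, not the full \texttt{XSPEC} fit (parameter values are illustrative, not our best-fit values):

```python
import math

def highecut_powerlaw(e_kev, norm, gamma, e_cut, e_fold):
    """Power law with a high-energy cutoff (the `highecut*power`
    shape): unchanged below E_cut, exponential roll-over above it."""
    pl = norm * e_kev ** (-gamma)
    if e_kev <= e_cut:
        return pl
    return pl * math.exp((e_cut - e_kev) / e_fold)

def reduced_chi2(observed, errors, model_vals, n_free=0):
    chi2 = sum(((o - m) / s) ** 2
               for o, s, m in zip(observed, errors, model_vals))
    return chi2 / (len(observed) - n_free)

energies = [20.0, 50.0, 100.0, 200.0, 400.0]  # keV, illustrative grid
model = [highecut_powerlaw(e, 1.0, 1.6, 30.0, 200.0) for e in energies]
print(model == sorted(model, reverse=True))  # monotonically falling spectrum
```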
As a cut-off power-law in this energy range is usually assumed to mimic the spectral shape produced by an inverse Compton process, we replaced the phenomenological model by a more physical model, \texttt{comptt} \citep{titarchuk94, Titarchuk_Lyubarskij_1995a,
Titarchuk_Hua_1995a}. Due to our comparably high lower energy bound, the temperature of the seed photons could not be constrained. We fixed it to a typical LHS value of 0.2\,keV \citep{Wilms_2006a}. Although the high energy part is better represented, the fit is statistically unacceptable ($\ensuremath{\chi^2_\nu}=2.9$ for 88 dof, Fig.~\ref{fig:reshard}c) and significant residuals are visible in the 10--80\,keV range. In this range, hard X-ray photons can undergo reflection on the accretion disk. This effect is usually seen as an extra curvature, or bump, in the 10--100\,keV range in the spectra of microquasars and AGNs \citep[][and references therein]{Garcia_2014b}. We thus included a simple reflection model (\texttt{reflect}) convolved with the Comptonization spectrum and fixed the inclination angle to $30^\circ$. The residuals are much smaller, yielding an acceptable fit ($\ensuremath{\chi^2_\nu}=1.67$ for 87 dof)\footnote{We have followed the recommendations of the IBIS user manual regarding the level of systematics added to the IBIS spectral points (http://isdc.unige.ch/integral/download/osa/doc/10.0/osa\_um\_ibis/node74.html). Systematic errors of 1\% seem to underestimate the real uncertainties when dealing with (large) data sets that encompass several calibration periods. We have tested this notion by refitting the spectra with 1.5\% and 2\% systematics and, as expected, obtain \ensuremath{\chi^2_\nu}\ much closer to 1. Since the values of the spectral parameters, and thus the conclusions presented here, do not change significantly, we decided to present the spectral fits using the recommended 1\% for systematics.}. The normalization constants are 0.99, 1.06, and 0.98 for JEM X-1, JEM X-2, and the Compton mode, respectively. The best-fit results of this model are listed in Table~\ref{tab:fits1}, while the $\nu F_{\nu}$ broad band 10--400\,keV spectrum is shown in the rightmost panel of Fig.~\ref{fig:reshard}.
\begin{table*}
\caption{Spectral parameters of the fits to the 10--400\,keV spectra. Errors and limits on the spectral parameters are given at the $90\%$ confidence
level, while the errors on the fluxes are at the $68\%$ level.}
\begin{tabular}{ccccccc}
\hline
\multicolumn{7}{c}{{\tt{reflect*highecut(powerlaw)}}}\\
\hline
State & $\Gamma$ & E$_{\mathrm{cut}}$ & E$_{\mathrm{fold}}$ & \multicolumn{3}{c}{Fluxes$^\ddagger$} \\
& & (keV) & (keV) &10--20\,keV & 20--200\,keV& 200--400\,keV\\
\hline
LHS & $1.43\pm0.01$ & $\le 12$ & $155\pm4$ & $4.48 \pm0.02$ & $22.30 \pm0.03$ & $3.56\pm0.03$\\
IS & $1.87_{-0.03}^{+0.02}$ & $56_{-6}^{+4}$ & $198\pm8$ & $5.10\pm0.03$ & $18.47\pm0.04$ & $2.52\pm0.05$\\
HSS$^\dagger$ & $2.447\pm0.007$ &$130_{-16}^{+11}$ & $198_{-59}^{+135}$ &$1.83\pm0.01$ & $3.33\pm0.01$ & $0.18\pm0.03$\\
\hline
\multicolumn{7}{c}{{\tt{reflect*comptt}}}\\
\hline
State & $\Omega/2\pi$ & kT$_{e}$ & $\tau$ & \multicolumn{3}{c}{Fluxes$^\ddagger$} \\
& & (keV) & & 10--20\,keV & 20--200\,keV& 200--400\,keV\\
\hline
LHS & $0.13\pm0.02$ & $59.4_{-1.2}^{+1.3}$ & $1.06\pm0.03$ & $4.37 \pm0.03$ & $22.60\pm0.03$ & $3.70\pm0.03$\\
IS & $0.04\pm0.03$ & $54.4_{-2.8}^{+3.6}$ & $0.82\pm0.06$ & $5.30 \pm0.03$ & $18.42\pm0.06$ & $1.93\pm0.07$\\
HSS & $0.36\pm0.03$ & $279\pm15$ & $<0.013$ & $1.84 \pm0.09$& $3.7\pm0.6$ & $0.36\pm0.01$\\
\end{tabular}
\begin{list}{}{}
\item[$^\dagger$]A reflection component with $\Omega/2\pi=0.46\pm0.04$ was included to obtain a good spectral fit
\item[$^\ddagger$]In units of $10^{-9}$~\ensuremath{\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}}
\end{list}
\label{tab:fits1}
\end{table*}
Next, we added the $\geq$400\,keV Compton mode spectrum (Fig.~\ref{fig:Allspec}) to our data. These points are significant at the $8.2\sigma$, $7.8\sigma$, $6.9\sigma$, and $6.5\sigma$ levels in the 451.4--551.9\,keV, 551.9--706\,keV, 706--1000.8\,keV, and $\sim$1--2\,MeV energy bands, respectively. A significant deviation from the previous best-fit model is clearly visible above 400\,keV. An additional power-law improves the fit significantly ($\ensuremath{\chi^2_\nu}=1.4$ for 90 dof). The photon index of this additional power-law is hard ($\Gamma=1.1_{-0.4}^{+0.3}$) and marginally compatible with the values reported earlier ($1.6\pm0.2$, LRW11).
We modeled the reflection component using several distinct approaches, assuming 1) that the power-law component undergoes the same reflection as the Comptonization component (\texttt{reflect*(power+comptt)}), 2) that both components undergo reflection but with different solid angles $\Omega/2\pi$ (\texttt{reflect$_{1}$*(power)+reflect$_{2}$*(comptt)}), and 3) that the power-law is not reflected (\texttt{power+reflect*(comptt)}). In model~2, the value for the reflection of the power-law component is unconstrained. Model~3 results in a better \ensuremath{\chi^2_\nu}, but the values of the power-law parameters are physically not acceptable. The normalization is too high and $\Gamma$ too soft to reproduce the Compton data well. We therefore discard model~3, and consider model~1 as the most valid one. The value of the fit statistics is, however, still high with $\ensuremath{\chi^2_\nu}=1.8$ for 89 dof. The residuals show deviations in the $\sim50$--200\,keV range that may be due to the accumulation of a large set of ScWs taken at different calibration epochs and at different off-axis and roll angles of Cyg X-1.
In the final fit to the 10\,keV--2\,MeV broad band spectrum the parameters of the Comptonized component are $kT_e=53\pm2$\,keV and $\tau=1.15\pm0.04$, the reflection fraction is unchanged, and the photon index of the hard tail is $\Gamma=1.4_{-0.3}^{+0.2}$, i.e., essentially compatible with the value reported in LRW11. The 0.4--1\,MeV flux of the hard tail ($F_{\mathrm{pow,0.4-1\,MeV}}=1.9\times10^{-9}\,\ensuremath{\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}}$) accounts for 86\% of the total 0.4--1\,MeV flux.
\subsubsection{Intermediate state}
A simple power-law gives a poor representation of the IS spectrum ($\ensuremath{\chi^2_\nu}\sim28$ for 82 dof). The inclusion of a high energy cut-off again greatly improves the fit ($\ensuremath{\chi^2_\nu}=1.35$, 80 dof), but the normalization constant of the Compton spectrum ($C_{\mathrm{Compton}}$) is outside the range that is consistent with the flux calibration of the instrument. Replacing the phenomenological model by a \texttt{comptt} model, with the seed photon temperature fixed at 0.8\,keV\footnote{The choice of a higher seed photon temperature in the IS does not influence the $>10$~keV spectrum and was only motivated by the desire to be consistent with the expectation of a hotter disk in the IS.} leads to a good fit ($\ensuremath{\chi^2_\nu}=1.1$ for 81 dof) and a physical value of $C_{\mathrm{Compton}}$. Although the statistics are not as good as in the LHS due to the much shorter total exposure, and although the residuals in the 10--100\,keV region are already rather acceptable, we included a reflection component to be consistent with the LHS modeling. This approach improved the fit to $\ensuremath{\chi^2_\nu}=0.97$ for 79 dof, which indicates a chance improvement of $\sim1.2\%$ according to an F-test (the reflection component has a significance lower than $3\sigma$). The reflection fraction is very low and poorly constrained ($\Omega/2\pi=0.04\pm0.03$). As the other parameters are not significantly affected, we still report the results obtained with the latter model in Table~\ref{tab:fits1} despite this weak evidence for reflection.
We then added the $\geq 400$\,keV spectral points of the Compton mode spectrum (Fig.~\ref{fig:Allspec}). The previous model (without reflection) leads to a good representation ($\ensuremath{\chi^2_\nu}=1.2$ for 83 dof), even if a deviation at high energies is present. Above 400\,keV, however, only the 551.9--706\,keV Compton mode spectral point is significant at $\geq 3\sigma$. Adding a power-law to the data leads to $\ensuremath{\chi^2_\nu}=1.04$ (81 dof), which corresponds to a chance improvement of 0.1\%. Not surprisingly, the power-law photon index $\Gamma$ is poorly constrained ($-0.4\leq\Gamma\leq1.8$) and the normalization constant for the Compton mode spectral points $C_{\mathrm{Compton}}$ tends to a very low value if not forced to be above 0.85. Fixing $\Gamma$ at 1.6 (LRW11, JRC12)
and setting $C_{\mathrm{Compton}}$ to 0.90, the best value found in the LHS fits, allows us to estimate a $3\sigma$ upper limit
for the 0.4--1\,MeV flux of $1.5\times 10^{-9}\,\ensuremath{\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}}$.
\subsubsection{Soft state}
In the soft state the soft X-ray spectrum below 10\,keV is dominated by a $\sim$1\,keV accretion disk. This disk, however, does not influence the $>$10\,keV spectrum studied here. A single power-law gives a poor representation of the broad band spectrum ($\ensuremath{\chi^2_\nu}=6.0$ for 77 dof). The JEM-X and ISGRI spectra are particularly discrepant and a possible hint of a high energy roll-over is seen in the residuals. A cut-off power-law greatly improves the fit statistics ($\ensuremath{\chi^2_\nu}=1.68$ for 75 dof), but the modeling is still not satisfactory and the JEM-X normalization constants are inconsistent with the detector calibration uncertainty. Residuals in the 15--50\,keV range suggest that reflection on the accretion disk may occur. Adding the \texttt{reflect} model improves the fit to $\ensuremath{\chi^2_\nu}=1.24$ (74 dof). The normalization constant for the JEM-X1 detector has to be forced to be above 0.85 as it would naturally converge to about 0.8. Alternatively, describing the data with a simple reflected power-law without a cut-off results in $\ensuremath{\chi^2_\nu}=1.88$ (76 dof) and shows that the cut-off is genuinely present. The chance improvement from a reflected power-law spectrum to a reflected cut-off power-law spectrum is about $4\times10^{-8}$ according to an F-test. Replacing the phenomenological model with a Comptonization continuum that is modified by reflection yields $\ensuremath{\chi^2_\nu}=1.69$ (75 dof, Table~\ref{tab:fits1}).
The $>$400\,keV data were then added. Since the Compton mode spectrum has a very low statistical quality and in order not to increase the number of degeneracies, we fixed its constant to a value of one. The \texttt{reflect*comptt} model is still statistically acceptable, and only the highest spectral bin (550--2000\,keV) indicates an excess of source photons compared to the model ($\ensuremath{\chi^2_\nu}=1.76$, 77 dof). Adding a power-law component to the overall spectrum improves the modeling of the spectrum to $\ensuremath{\chi^2_\nu}=1.69$ (75 dof), but $\Gamma$ is completely unconstrained if left free to vary. We fixed it at 1.6 ($\ensuremath{\chi^2_\nu}=1.70$, 76 dof, a chance probability of 0.11 according to the F-test) and derive a $3\sigma$ upper limit of $0.93\times10^{-9}\,\ensuremath{\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}}$ for the 0.4--1\,MeV flux.
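Each of the nested-model comparisons above adds two free parameters, so the quoted F-test chance probabilities can be checked directly from the reduced $\chi^2$ values. A minimal sketch in Python (stdlib only; the closed-form survival function is specific to $\Delta\mathrm{dof}=2$, and since the $\chi^2_\nu$ values quoted in the text are rounded, the results only approximate the quoted probabilities):

```python
def f_sf_two_params(f_stat, d2):
    """Survival function of the F(2, d2) distribution.

    Closed form, valid only for d1 = 2 added free parameters:
    P(F > f) = (d2 / (2 f + d2)) ** (d2 / 2).
    """
    return (d2 / (2.0 * f_stat + d2)) ** (d2 / 2.0)


def chance_improvement(chi2nu_old, dof_old, chi2nu_new, dof_new):
    """F-test probability that adding two free parameters improves the
    fit this much by chance alone (smaller = more significant)."""
    assert dof_old - dof_new == 2, "closed form holds for two added parameters"
    chi2_old = chi2nu_old * dof_old
    chi2_new = chi2nu_new * dof_new
    f_stat = ((chi2_old - chi2_new) / 2.0) / (chi2_new / dof_new)
    return f_sf_two_params(f_stat, dof_new)


# IS: comptt (chi2_nu = 1.2, 83 dof) vs. comptt + power-law (1.04, 81 dof)
print(chance_improvement(1.2, 83, 1.04, 81))   # ~1e-3, i.e. the ~0.1% quoted above
# HSS: reflected power-law (1.88, 76 dof) vs. reflected cut-off power-law (1.24, 74 dof)
print(chance_improvement(1.88, 76, 1.24, 74))  # ~1e-7, cf. the quoted ~4e-8
```

For a general number of added parameters one would use the regularized incomplete beta function (e.g., \texttt{scipy.stats.f.sf}) instead of the two-parameter closed form.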
\subsection{State resolved polarization analysis}
\label{sec:res-polar}
Polarigrams were obtained for each of the three spectral states. The low statistics of the IS (the source significance in the Compton mode is $5.7 \sigma$) does not permit us to constrain the 250--3000\,keV polarization fraction.
In the HSS, the detection significances of Cyg X-1 with the Compton mode are $4.2\sigma$ and $7.7\sigma$ in the 300--400\,keV and 400--2000\,keV bands, respectively. A weighted
least-squares fit procedure was used to fit the polarigrams. A constant represents the 400--2000\,keV polarigram rather well ($\ensuremath{\chi^2_\nu}=1.6$, 5 dof, not shown).
The $1\sigma$ upper limit of the 400--2000\,keV polarization fraction in the HSS is $\sim$70\%.
Figure~\ref{fig:pola} shows the LHS polarigrams in two energy ranges: 300--400\,keV and 400--2000\,keV. Note that the data selection
described in Sec.~\ref{sec:asmstate} is at the origin of the different count rates between the polarigrams shown here and those in Fig.~2 of LRW11.
This effect is more obvious at the highest energies where the different spectral slopes have the largest impact (Fig.~\ref{fig:Allspec}). In the lower energy range, the Compton
mode detection significance of Cyg X-1 is $12.3\sigma$. The polarigram can be described by a constant ($\chi^2_\nu=0.87$, 5 dof). We estimate an
upper limit for the polarization fraction of $\Pi_{300-400~{\mathrm{keV, LHS}}}=22\%$ in this energy range. The 400--2000\,keV
polarigram shows a clear deviation from a constant, which poorly represents the data ($\chi^2_\nu=3.5$, 5 dof). The pattern and fit indicate
that the photons scattered in ISGRI are not evenly distributed in angle, but that the scattering has a preferential angle as is expected from a polarized
signal (\S2.2). We describe the distribution of the polarigram with the expression of Eq.~\ref{eq:azimuth} and obtain the values of the parameters $a_{0}$
and $\phi_{0}$ with a least-squares technique. The fit is better than that to a constant, although not formally good ($\chi^2_\nu=2.0$, 3 dof) due to the low
value of the polarigram at azimuthal angle $100^\circ$ (Fig.~\ref{fig:pola}). Assuming that the non-constancy of the polarigram is evidence for a real
polarized signal \citep[an assumption also consistent with the independent detections of polarized emission in Cygnus X-1 with SPI; ][]{jourdain12},
we estimate that the polarization fraction is $\gtrsim 10\%$ at the 95\% level, and greater than a few \% at the 99.7\% level. In other words we obtain a
significance higher than $3\sigma$ for a polarized 400--2000\,keV LHS emission. We determine a polarization fraction and a credibility interval
of $\Pi_{\mathrm{400-2000\,keV, LHS}}=75\pm 32 \%$ at position angle $\mathrm{PA}_{\mathrm{Cyg X-1, LHS}} = 40.0^\circ \pm 14.3^\circ$. The values
of the polarization fraction and angle are consistent with those found by JRC12. We note that an error of 180$^\circ$ in the angle convention in
LRW11 led to $\mathrm{PA} = 140^\circ$, which after correction becomes $40^\circ $, in agreement with our state resolved result (see also JRC12).
Note that the good agreement of our results with those obtained with SPI for Cyg X-1, together with the fact that IBIS also detected polarization in the high energy emission of
the Crab nebula and pulsar \citep{forot08} at values compatible with those obtained with SPI \citep{dean08}, increases our confidence in the
reality and validity of our results.
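The polarigram fit described above can be sketched numerically. The exact form of Eq.~(1) is not reproduced in this section, so the sketch assumes the standard Compton-polarimetry modulation curve $N(\phi)=S\,[1+a_0\cos 2(\phi-\phi_0)]$; the six bin rates below are illustrative numbers, not the measured values, and converting $a_0$ into a polarization fraction would additionally require the instrument's modulation amplitude $a_{100}$ for a fully polarized source:

```python
import math

# Illustrative 400-2000 keV polarigram: count rates in six 30-degree
# azimuthal bins (bin centers in degrees); NOT the actual measured values.
phi = [15.0, 45.0, 75.0, 105.0, 135.0, 165.0]
rate = [1.30, 1.05, 0.75, 0.70, 0.95, 1.25]

n = len(phi)
mean = sum(rate) / n
# For evenly spaced bins, cos(2*phi) and sin(2*phi) form an orthogonal basis,
# so these Fourier projections ARE the least-squares solution of
# N(phi) = S * [1 + a0 * cos(2*(phi - phi0))].
c = 2.0 / n * sum(r * math.cos(2.0 * math.radians(p)) for r, p in zip(rate, phi))
s = 2.0 / n * sum(r * math.sin(2.0 * math.radians(p)) for r, p in zip(rate, phi))
a0 = math.hypot(c, s) / mean                          # modulation amplitude
phi0 = 0.5 * math.degrees(math.atan2(s, c)) % 180.0   # preferred scattering angle

print(f"a0 = {a0:.2f}, phi0 = {phi0:.1f} deg")
```

A flat polarigram (no polarization) gives $a_0\approx 0$; a modulated one, as sketched here, returns the amplitude and preferred angle that a weighted least-squares fit would recover.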
\section{Discussion and conclusions}
\subsection{The $\geq 400$~keV hard tail}
Our analysis shows that the $>400$\,keV spectral properties of Cyg X-1 depend on the spectral state, as already suggested previously \citep[e.g.,][]{mcconnell02}. We detect a strong hard tail in the LHS, but obtain only upper limits in the IS and HSS. In Fig.~\ref{fig:comptel} (left), we show a comparison of our LHS 0.75--1\,MeV spectrum with those obtained with \textit{CGRO}/COMPTEL during the June 1996 HSS and during an extended LHS observed with this instrument \citep{mcconnell02,mcconnell00}. The IBIS hard tail is compatible in terms of fluxes to within $2\sigma$ with those seen with COMPTEL and also with the \textit{INTEGRAL}/SPI hard tail reported in JRC12 (see their Fig.~5). It is worth noting that Cyg X-1 is a notoriously highly variable source and flux evolution between different epochs is expected. To illustrate this behavior, we study data taken during two main periods, covering March 2003 to December 2007 (MJD 52722--54460) and March 2008 to December 2009 (MJD 54530--55196), respectively. These intervals correspond to the analysis of LRW11. The two Compton spectra are shown in Fig.~\ref{fig:comptel} (right). It is obvious that the flux of the hard tail varied between the two epochs and an evolution of the slope may also be visible. We note that these data are not separated by states, but as they cover the period before December 2009, they are dominated by the LHS (e.g., Fig.~\ref{fig:Ryle}). We therefore conclude that, even in the same state, the hard tail shows luminosity variations.
A major difference from \citet{mcconnell02} is the non-detection of a hard tail in the HSS. \citet{mcconnell02} remark that between two epochs of HSS the hard tail either has a higher flux than in their averaged LHS or is not detected. \citet{malyshev2013} also note that the 1996 HSS hard tail comes from a single occurrence of this state and may not reflect a typical behavior. While the detection of a hard tail during the LHS, even if it shows variations, indicates an underlying process that is rather stable over long periods of time, the apparent transience of the HSS hard tail may indicate a different origin for this feature. For example, in Cyg~X-3 the very high energy (Fermi) flares are associated with transitions to the ultra soft state \citep{abdo2009}. While this is similar to what is seen in Cyg~X-1, all Fermi $\gamma$-ray flares of Cyg X-1 were reported during the LHS \citep{bodaghee13,malyshev2013}. Since the lack of simultaneous very high energy GeV-TeV data and/or polarimetric studies of the COMPTEL HSS hard tail prevents any further conclusion on the origin of this component, we conclude that the HSS hard tail and the LHS hard tail could be of different origin.
\subsection{Polarization of the hard tail and its possible origin}
A clear polarized signal is detected from Cyg~X-1 with both IBIS and SPI while accumulating all data regardless of the spectral state of the
source (LRW11, JRC12). Here we separated the data into different spectral states and studied the state dependent high energy polarization
properties. The HSS polarigram and the 300--400\,keV polarigram can each be described by a constant, indicating little or no polarization. The
400--2000\,keV LHS polarigram shows a large deviation from a constant, which indicates that the Compton scattering from ISGRI is not
evenly distributed in azimuth on PICsIT. We interpret this as evidence for the presence of polarization in this energy range, even if the polarigram
shows some deviations from the theoretically expected curve (Fig.~\ref{fig:pola}). In the 300--400\,keV range, no evidence for polarization is
found. This is consistent with the results obtained with SPI (JRC12) and shows that the polarization fraction is strongly energy dependent.
Assuming a non-null level of polarization, a reasonable assumption given the above arguments, we estimate that the detection of polarized
emission is significant at higher than $3\sigma$. Given the larger amount of data and the separation into different spectral states,
one could have expected to obtain more robust results and more constrained polarization parameters compared to LRW11 and JRC12,
while the uncertainties we obtained are, at best, of the same order. The reason for this lies in the behavior of Cyg~X-1. While the earlier
studies did not perform a state dependent analysis, our state classification reveals that both studies were dominated by data taken during
the LHS and the IS (see also Fig.~\ref{fig:Ryle}). In fact, compared to LRW11 the number of ScWs measured during the LHS increased
only by 13\%, and 75\% of the IS data comes from the period considered in these earlier studies, i.e., before $\sim$MJD~55200. On the
other hand, about 94\% of the HSS data are new. It is therefore not surprising that our hard state result is consistent with the earlier results,
as these were fully dominated by the (polarized) hard state data. The exposure of the less polarized IS data was not large enough to
significantly ``dilute'' the polarization. Our result clearly shows that larger amounts of data in each of the states are needed to refine the
constraints on the polarization properties and their relation to the spectral states.
Our LHS spectral analysis shows that the 10--1000\,keV spectrum can be decomposed into several spectral components: a (reflected) thermal Comptonization component in the range $\sim10$--400\,keV and a power-law tail dominating above $\sim$400\,keV (\S\ref{sec:hardstate}). The origin of the seed photons for the Compton component may either be an accretion disk or synchrotron photons from a compact jet undergoing synchrotron self-Comptonization (SSC). As discussed elsewhere \citep[e.g.,][]{laurent11}, due to the large number of scatterings the Comptonization spectrum from a medium with an optical depth $\tau\ge 1$ is not expected to show intrinsic polarization, especially with such a high fraction as the one we detect here. Multiple Compton scattering, as expected in such a medium, will ``wash out'' the polarization of the incident photons even if the seed photons are polarized (e.g., jet photons). It is therefore not surprising that the LHS $\lesssim$400\,keV component has a very low (or zero) polarization fraction \citep[see also][]{russell13}.
The origin of the hard MeV tail is much less clear and is highly model-dependent. Two main families of models can explain the (co-)existence of cut-off power-law and/or pure power-law like emission in the energy spectra of BHBs. The first is based on the presence of a hybrid thermal/non-thermal population of electrons in a ``corona''; hybrid Comptonization models have been successfully applied to the spectra of, e.g., GRO J1655$-$40 or GX 339$-$4 \citep[e.g.,][]{caballerog09, joinet07}, and even to $\sim1$--10000~keV spectra of Cyg X-1, in both the HSS and LHS \citep{mcconnell02}. A double thermal component has been used to represent the 20--1000\,keV SPI data of 1E 1740.7$-$2942 well \citep{bouchet09}.
The second family of models essentially contains the same radiation processes but is based on the presence of a jet emitting through direct synchrotron emission, thermal Comptonization of the soft disk photons at the base of the jet, and/or SSC of the synchrotron photons by the jet's electrons \citep[e.g.,][]{markoff05}. Such models have been used with success to fit the data of some BHBs, particularly GX 339$-$4 and XTE J1118+480
\citep{markoff05,maitra09,Nowak11}. It should be noted, however, that they obviously rely on the presence of a compact jet and therefore can only be valid when/if a compact jet is present.
The capacities of the current high energy missions, albeit excellent, do not yet permit an easy discrimination between the different families of models. This has been nicely illustrated in the specific case of Cyg X-1 through the 0.8--300\,keV spectral analysis of the Suzaku/RXTE observations \citep{Nowak11}. Different and complementary approaches and arguments can, however, be used to try to identify the most likely origin for the hard power-law tail. The fundamental plane between radio luminosity and X-ray luminosity \citep[e.g.,][]{Corbel03} was originally used as an argument for a (small) contribution of the jet to the X-ray domain. Our approach here was to first separate the radio data sets into different spectral states, based on a model-independent classification. It was only because of this separation that we could show that a hard polarized tail is present in the LHS, with a high polarization fraction, together with a radio jet. The very high level of polarization can only originate from optically thin synchrotron emission (see below), and implies a highly ordered magnetic field similar to that expected in jets. The synchrotron tail coincides with thermal Comptonization at lower energies and with a radio jet. The presence of all these features is qualitatively compatible with the multi-component emission processes predicted by compact jet models such as those of \citet{markoff05}. While polarization of the (compact) jet emission has already been widely reported in the radio domain \citep[][and references therein]{brocksopp13}, a thorough study of the multi-wavelength polarization properties of Cyg X-1 has only recently been undertaken by \citet{russell13}. These authors, in particular, showed that the multi-wavelength spectrum and polarization properties of Cyg X-1 are quantitatively compatible with the presence of a compact jet that dominates the radio to IR domain and is also responsible for the MeV tail.
A similar conclusion was drawn by \citet{malyshev2013} under the assumption that ``the polarization measurements were robust'' and that therefore the MeV tail could only originate from synchrotron emission. This prerequisite is reinforced by our refined and spectral-state dependent study of the polarization properties of Cyg X-1 at 0.3--1\,MeV. A polarized signal is indeed suggested by the 400--2000\,keV LHS data and the large degree of polarization measured in \S\ref{sec:res-polar} implies a very ordered magnetic field. Assuming the magnetic field lines are anchored in the disk implies that the $\gamma$-ray polarized emission comes from close to the inner ridge of the disk where the magnetic energy density is highest. This constraint is difficult to reconcile with a spherically shaped corona, where the magnetic field lines are more likely to be tangled and where the polarized component necessarily comes from the outer shells of the corona where the optical depth is the lowest. We, therefore, conclude that \textbf{in the LHS} the 0.4--2\,MeV tail detected with \textit{INTEGRAL}/IBIS is very likely due to optically thin synchrotron emission, and that this emission comes from the detected compact jet.
\subsection{Compatibility of the Cyg X-1 hard state parameters with synchrotron emission}
The synchrotron spectrum of a population of electrons following a power-law distribution, $dN(E)\propto E^{-p}\,dE$, where $p$ is the particle distribution index, can be approximated by a power-law function over a limited range of energy, i.e., $F_{E_1,E_2}(E)\propto E^{-\alpha}$ over $[E_1,E_2]$ \citep{rybickilightman}. In the case of compact jets, two main domains are usually considered: the optically thick regime with $\alpha\lesssim0$ and the optically thin regime with $\alpha>0$. The break between the two regimes is typically in the infrared band \citep[e.g.,][]{Corbel02a,rahoui11,russell13,corbel13}. The jet synchrotron spectrum is optically thin towards higher energies, and here $\alpha=(p-1)/2$. The jet emission becomes negligible in the range from the optical to hard X-rays when compared to the contribution from other components such as the companion star, the accretion disk, or the corona. Our results indicate that it again dominates above a few hundred\,keV \citep[see also][for a multi-component modeling of the multi-wavelength Cyg X-1 spectrum]{russell13}.
The degree of polarized emission expected in the optically thin regime is $\Pi=\frac{p+1}{p+7/3}$ \citep{rybickilightman}. Since the flux spectral index $\alpha=\Gamma-1$, where $\Gamma$ is the photon spectral index, it is possible to deduce $p$ from the measured spectral shape. With $\Gamma=1.4_{-0.3}^{+0.2}$ (\S\ref{sec:hardstate}) we obtain $p=1.8_{-0.7}^{+0.4}$ and thus expect $\Pi_{\mathrm{expected}}=67.8_{-5.5}^{+2.7} \% $. This value is consistent with the value measured in the LHS ($\Pi= 75 \pm 32\% $).
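The chain from the measured photon index to the expected polarization fraction is short enough to check numerically. A sketch using the relations above, $\alpha=\Gamma-1$, $p=2\alpha+1$, and $\Pi=(p+1)/(p+7/3)$ (small differences from the quoted $67.8\%$ come from rounding):

```python
def expected_polarization(gamma):
    """Map the photon index Gamma of optically thin synchrotron emission to
    the particle index p and the maximum polarization fraction Pi."""
    alpha = gamma - 1.0           # flux spectral index
    p = 2.0 * alpha + 1.0         # particle distribution index, alpha = (p-1)/2
    pi = (p + 1.0) / (p + 7.0 / 3.0)
    return p, pi


# Best fit and the 90% bounds of the LHS hard-tail photon index
for gamma in (1.1, 1.4, 1.6):
    p, pi = expected_polarization(gamma)
    print(f"Gamma = {gamma:.1f}: p = {p:.1f}, Pi = {100.0 * pi:.1f}%")
# Gamma = 1.4 gives p = 1.8 and Pi ~ 68%, consistent with the measured 75 +/- 32%.
```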
We determine a polarization angle of $\sim$$40^\circ$ compatible with the SPI results (JRC12). This angle, however, differs by about $60^\circ$ from the polarization angles measured in the radio and optical \citep[][and references therein]{russell13}. Assuming that the $\gamma$-ray polarized component comes from the region of the jet that is closest to the launch site (i.e., the inner ridge of the disk and/or the BH), the change in polarization angle implies that the field lines in the $\gamma$-ray emitting region have a different orientation than further out in the jet. This could be indicative of a relatively large opening angle at the base of the jet \citep[as sketched in Fig.~3 of][]{russell13}. Alternatively, the large offset could simply be due to strongly twisted magnetic field lines close to the accretion disk.
\section{Summary}
We have presented a broad band 10\,keV--2\,MeV spectral analysis of the microquasar Cyg X-1 based on about 10 years of data collected with the JEM-X and IBIS
telescopes onboard the \textit{INTEGRAL}\ observatory. We have used the classification criteria of \citet{grinberg2013} to separate the data into hard (LHS), intermediate (IS),
and soft (HSS) states. We have studied the radio behavior of the source associated with these states as observed with the Ryle/AMI radio telescope at 15~GHz.
\begin{itemize}
\item The $\leq$400\,keV emission is well represented by Comptonization spectra with further reflection off the accretion disk. In the LHS and IS, the Comptonization
process is thermal and optically thick, i.e., the corona has an optical depth $\tau>1$ and the electrons have a Maxwellian distribution. As a result, a clear high-energy
cut-off is seen in the spectra. In the HSS the situation is not that clear, although a cut-off is also necessary to give a good representation of the broad band spectrum.
\item A clear hard tail is detected in the LHS when also considering the 0.4--2\,MeV data. This high energy component is well represented by a hard power-law with no
obvious cut-off. The detection of the hard tail is compatible with earlier claims of the presence of such a component in spectra of Cyg X-1. We show that this component
is variable within a given state, as seen when comparing \textit{INTEGRAL}\ observations from two arbitrary epochs in the same state.
\item In the radio domain, the 15\,GHz data show a definite detection with averaged flux densities of $\sim$13 and $\sim$15\,mJy in the LHS and IS, respectively,
compatible with the presence of a compact jet in those states. No persistent radio emission is detected in the HSS, implying the absence of a compact radio core.
\item In the LHS, we measure a polarized signal above 400\,keV with a large polarization fraction ($75\pm32\%$). This high degree of polarization and the polarization
angle ($40\fdg0\pm14\fdg3$) are both compatible with previous studies by us and others. We obtain non-constraining upper limits on the polarization fraction in the
IS and HSS, which have significantly lower exposure.
\item The high degree of polarization of the hard tail can only originate from synchrotron emission in a highly ordered magnetic field. The demonstrated
presence of radio emission in the LHS points towards the compact jet as the origin for the 0.4--2\,MeV emission, corroborating earlier theoretical and
multi-wavelength studies \citep{malyshev2013,russell13}. Our spectral state-resolved and multi-wavelength approach therefore further confirms the
conclusion presented in the earlier, non state-resolved studies based on \textit{INTEGRAL}\ data only \citep{laurent11, jourdain12}.
\item We increased the total \textit{INTEGRAL}\ exposure time and, in particular, nearly doubled the amount of data taken in the HSS. We, however, still do not
reach strong constraints on the polarization fraction in this state, even if we showed that the hard tail is much fainter. Provided the source does not
change state, we have obtained approved \textit{INTEGRAL}\ time to further increase the exposure in the HSS, which will allow us to obtain tighter constraints
on the polarization signal in this state.
\end{itemize}
\begin{acknowledgements}
This paper is based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain) and with the participation of Russia and the USA.
We acknowledge S. Corbel, R. Belmont, J. Chenevez, and J.A.\ Tomsick for very fruitful discussions about several aspects presented in this paper. J.R. acknowledges funding support from the French Research National Agency: CHAOS project ANR-12-BS05-0009 (\texttt{http://www.chaos-project.fr}), and from the UnivEarthS Labex program of Sorbonne Paris Cit\'e (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02). This work has been partially funded by the Bundesministerium f\"ur Wirtschaft und Technologie under Deutsches Zentrum f\"ur Luft- und Raumfahrt Grants 50\,OR\,1007 and 50\,OR\,1411. V.G.\ acknowledges support provided by NASA through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for Support of the Chandra X-Ray Center (CXC) and Science Instruments; CXC is operated by SAO for and on behalf of NASA under contract NAS8-03060. We acknowledge the support by the DFG Cluster of Excellence ``Origin and Structure of the Universe''. We are grateful for the support of M. Cadolle Bel through the Computational Center for Particle and Astrophysics (C2PAP).
\end{acknowledgements}
\section{Introduction} \label{sec:introduction}
Observations of local Milky Way clouds show that the bulk of molecular
gas is inert with star formation concentrated in the small fraction of
the cloud at high surface density
\citep[][]{2010ApJ...723.1019H,2010ApJ...724..687L,2012ApJ...745..190L}. This
observation leads to the idea that the ``dense'' molecular gas ($A_V
\gtrsim 10$~mag, $n(H_2)\gtrsim 10^4 \, \ensuremath{\rm{cm^{-3}} \ }$), rather than the whole
molecular interstellar medium ($A_V \gtrsim 1$~mag, $n(H_2)\gtrsim
10^2 \, \ensuremath{\rm{cm^{-3}} \ }$), represents the star-forming phase.
Extragalactic scaling relations also support the idea that the amount
of high density molecular gas helps to set the star formation rate.
Low angular resolution spectroscopy of nearby galaxies reveals a
constant ratio (within a factor of a few) of HCN intensity, which
traces the mass of dense gas, to infrared (IR) emission, which traces
the rate of recent star formation, across a wide range of systems,
including the central parts of nearby disks, starbursts, and major
mergers
\citep{2004ApJ...606..271G,2009ApJ...707.1217J,2012A&A...539A...8G}. Surprisingly,
the HCN-to-IR ratios for whole galaxies are comparable to those in
local cloud cores \citep{2010ApJS..188..313W}, implying a constant
ratio of dense gas to star formation rate. The ratios of CO intensity,
which trace the overall molecular gas mass, to the IR emission,
however, vary by more than a factor of ten between disk and starburst
galaxies, suggesting that in extreme regions the relationship between
the total molecular gas mass and the amount of star formation may be
non-linear
\citep{2004ApJ...606..271G,2013AJ....146...19L,2013ARA&A..51..105C}.
If the amount of dense gas is linked to the amount of star formation,
then the formation of dense gas from more diffuse molecular gas
represents an important regulating process. While there may be many
regulating steps (e.g., the formation of giant molecular clouds out of
diffuse {\sc Hi}, accretion onto the galaxy), differences among
density probability distribution functions (PDFs) of local clouds
\citep{2009A&A...508L..35K} and the contrast between ULIRGs, LIRGs,
and normal spirals \citep[][Usero et al., in
prep.]{2012A&A...539A...8G} support the idea that the ratio of dense
to total molecular gas varies in different environments.
This hypothesis can be tested in nearby galaxies by comparing dense
gas, as traced by \hcn\ and \ensuremath{{\rm HCO^+}}, to the total molecular gas, as
traced by CO. While observations of nearby galaxies have lower
spatial resolution and sensitivity than observations in the Milky Way,
nearby galaxies probe a wider range of physical conditions than found
in the Solar Neighborhood and more directly relate positions with
environmental conditions. The main obstacle to pursuing systematic
studies of the \hcn-to-CO ratio, or analogous measures of the dense
gas fraction like the \ensuremath{{\rm HCO^+}}-to-CO ratio, has been the faintness of the
line emission: averaged over a large part of a galaxy, the \hcn\ and
\ensuremath{{\rm HCO^+}}\ lines are 10--30 times fainter than $\rm{CO}$
\citep{2004ApJ...606..271G}. As a result, most studies of
extragalactic dense gas with small-aperture millimeter-wave telescopes
have focused on individual deep pointings rather than wide-field
maps. The new $\lambda \sim 4$~mm (``W band'') heterodyne receiver on
the Robert C. Byrd Green Bank Telescope (GBT) has the potential to
extend these single-pointing observations and map \hcn\ and \ensuremath{{\rm HCO^+}}\
distributions in nearby galaxies because of the GBT's large collecting
area, high surface accuracy, and good resolution.
In this Letter, we demonstrate the power of the GBT as an \hcn\ and
\ensuremath{{\rm HCO^+}}\ mapping machine for nearby galaxies using new GBT maps of
$\hcn(J = 1-0)$ and $\ensuremath{{\rm HCO^+}}(J=1-0)$ in the nearby
\citep[D=3.530~Mpc;][]{2009ApJS..183...67D} starburst galaxy M82
\citep{1980ApJ...238...24R}. These observations -- from less than 15
hours of telescope time -- have the best surface brightness
sensitivity of any published \hcn\ or \ensuremath{{\rm HCO^+}}\ map of M82 and resolution
$< 200$~pc. We compare the \hcn\ and \ensuremath{{\rm HCO^+}}\ emission to that of the
bulk molecular gas traced by \co{12}{2}{1} and \co{12}{1}{0} and to
recent star formation traced by 3~GHz radio continuum emission.
\section{GBT Observations}
\label{sec:data}
\begin{deluxetable*}{lccc}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Summary of Observations \label{tab:obs_summary}}
\tablehead{
\colhead{} &
\colhead{9 March 2013} &
\colhead{10 March 2013} &
\colhead{21 April 2013}
}
\startdata
UT Times & 3:15 -- 11:15 & 3:45 -- 6:45 & 2:00 -- 5:45 \\
Number of Spectral Windows & 2 & 2 & 2 \\
Bandwidth per Window (MHz) & 200 & 200 & 200 \\
Spectral Resolution (kHz) & 24.414 & 12.207 & 12.207 \\
Flux Calibrator & 0359+5057 & 1229+0203 & 1229+0203 \\
Flux Calibrator Flux (Jy)\tablenotemark{a} & 4.0\tablenotemark{b} & 8.5\tablenotemark{b} & 9.9\tablenotemark{c} \\
Pointing Source & 0841+7053 & 0841+7053 & 0841+7053 \\
Average Opacity\tablenotemark{d} & 0.086 & 0.109 & 0.082 \\
Average System Temperature (\hcn, \ensuremath{{\rm HCO^+}}) (K) & 109, 117 & 112, 117 & 89, 88 \\
Main Beam Efficiency (\hcn, \ensuremath{{\rm HCO^+}}) & 0.26, 0.29 & 0.26, 0.30 & 0.23,0.22\tablenotemark{e} \\
\enddata
\tablenotetext{a}{Values from the CARMA Calfind database
(\url{http://carma.astro.umd.edu/cgi-bin/calfind.cgi}). }
\tablenotetext{b}{At 103.5 GHz.} \tablenotetext{c}{At 92.6 GHz.}
\tablenotetext{d}{From GBT High Frequency Weather Forecasts
(\url{http://www.gb.nrao.edu/~rmaddale/WeatherNAM/}).}
\tablenotetext{e}{The actuators for a section of panels were
inoperative for this session, leading to lower efficiencies.}
\end{deluxetable*}
We used the newly commissioned 4mm (``W band'') receiver and the GBT
Spectrometer to simultaneously map the $\hcn(J=1-0)$ ($\nu_{rest}$ =
88.63160~GHz) and $\ensuremath{{\rm HCO^+}}(J=1-0)$ ($\nu_{rest}$ = 89.18853~GHz)
intensity across the central 1.75\arcmin\ by 1.5\arcmin\ ($1.8 \times
1.5$~kpc) of M82. Table~\ref{tab:obs_summary} gives the details of the
observations (GBT project 13A-253).
We used a single beam of the W-band receiver to make rapid maps,
sampling five times each beam FWHM in the scanning direction and
sampling 2.4 times each beam FWHM in the orthogonal direction
\citep{2007A&A...474..679M}.
The observations were made over $\sim15$h in excellent
weather. Out-of-focus holography scans were made every two hours to
correct the dish surface, pointing and focus checks were made every
hour, and flux calibration observations were done every observing
session. The receiver gain was calibrated using hot and cold
loads.\footnote{See http://www.gb.nrao.edu/4mm/.} The ``OFF''
spectrum was created using the average of the first and last
integrations of each row. We subtracted a linear baseline from each
calibrated spectrum and then corrected for the atmospheric opacity
using GBT weather
models.\footnote{http://www.gb.nrao.edu/~rmaddale/Weather/}
The aperture efficiencies ($\eta_a$) were derived using observations
of sources from the CARMA CALfind database. CARMA typically measured
fluxes at 103.5 GHz, but the difference between 103.5~GHz and 88~GHz
flux densities will be only $\sim 20\%$ even for a steep spectral
index ($F_{\nu} \propto \nu^{-1}$). Applying the derived aperture
efficiency to pointing source observations for each observing block
yielded consistent flux densities. The structures we observe are
comparable to the beam size, so we convert our final maps from
$T_A^\prime$ to $T_{mb}$ using the main beam efficiency ($\eta_{mb}$),
which is $\approx 1.3 \eta_a$ for the GBT \citep{gbtmemo276}. The
value of $\eta_{mb}/\eta_{a}$ changes by less than 15\% for source
sizes up to 60\arcsec\ because of the clean nature of the GBT beam
(R.\ Maddalena, priv.~comm., 2013).
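For concreteness, the intensity-scale conversion and the spectral-index estimate quoted above can be written out explicitly (standard single-dish relations; the $\approx 1.3$ factor is the GBT value cited in the text):

```latex
% Intensity scale: antenna temperature to main-beam temperature.
\begin{equation}
  T_{mb} \;=\; \frac{T_A^{\prime}}{\eta_{mb}},
  \qquad \eta_{mb} \;\approx\; 1.3\,\eta_{a} .
\end{equation}
% Spectral-index check: for F_nu \propto nu^{-1},
%   F(88.6\,GHz)/F(103.5\,GHz) = 103.5/88.6 \approx 1.17,
% i.e. a < 20% difference, as stated above.
```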
The calibrated spectra were gridded using a Bessel function tapered by
a Gaussian \citep[for details see][]{2007A&A...474..679M}. The final
cubes have a beam size $\approx 9.2$\arcsec, slightly larger than the
native 8.3\arcsec\ GBT resolution, and were smoothed to a spectral
resolution of 3.9~MHz (13.2~\ensuremath{\rm{km \, s}^{-1}}\ for HCN and 13.14~\ensuremath{\rm{km \, s}^{-1}}\ for
\ensuremath{{\rm HCO^+}}). The typical noise per channel ($T_{mb}$) is 42~mK (\hcn) and
31~mK (\ensuremath{{\rm HCO^+}}), and each map has $\sim$100 independent resolution
elements.
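As a sanity check (our arithmetic, assuming a Gaussian beam and the quoted $1.75\arcmin \times 1.5\arcmin$ map extent), the number of independent resolution elements follows from the map and beam solid angles:

```latex
% Solid angle of a Gaussian beam: Omega = pi/(4 ln 2) * theta_FWHM^2.
\begin{equation}
  N_{\rm beams} \approx \frac{A_{\rm map}}{\Omega_{\rm beam}}
  = \frac{(105\arcsec)\,(90\arcsec)}{\tfrac{\pi}{4\ln 2}\,(9.2\arcsec)^{2}}
  \approx \frac{9450}{96} \approx 99 ,
\end{equation}
% consistent with the ~100 quoted above. The native resolution likewise
% matches the diffraction limit: 1.2 lambda/D = 1.2 (3.38 mm / 100 m)
% ~ 8.4 arcsec, close to the 8.3 arcsec GBT beam at 88.6 GHz.
```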
We used IRAM 30\,m \hcn\ observations of M82 from the literature to
assess the accuracy of our flux density scale. After smoothing our
cube to the resolution of the 30\,m at 89~GHz (28\arcsec), we derive
an integrated intensity of $28.75\pm1.24 \, \ensuremath{\rm{K \, km \, s^{-1}}}$ for the center of
M82. This value agrees with 30\,m \hcn\ integrated intensities from the
literature: 29~\ensuremath{\rm{K \, km \, s^{-1}}}\ \citep{2008ApJ...677..262K}, 29.9~\ensuremath{\rm{K \, km \, s^{-1}}}\
\citep{2008A&A...492..675F}, 27.4~\ensuremath{\rm{K \, km \, s^{-1}}}\ \citep{2004ApJS..152...63G},
and 25.4~\ensuremath{\rm{K \, km \, s^{-1}}}\ \citep{1989A&A...220...57N}.
\section{Results} \label{sec:results}
\begin{figure*} \centering
\includegraphics[width=\textwidth]{M82_overview_figure_newco_newrcimage_9panel.png}
\caption{ {\em Top and Middle Rows:} The \hcn\ (top) and \ensuremath{{\rm HCO^+}}\
(middle) integrated line flux (moment zero) contours overlaid on a
3~GHz radio continuum image (Marvil et al.\ in prep; left), an X-ray
image with the point sources highlighted in pink
(NASA/CXC/SAO/PSU/CMU; middle), and a \co{12}{1}{0} integrated flux
image (\citealp{2002ApJ...580L..21W}; right). The \hcn\ and \ensuremath{{\rm HCO^+}}\
emission are both correlated with the total molecular gas and star
formation in the center of M82. The diffuse \ensuremath{{\rm HCO^+}}\ emission on the
northeastern edge of the disk correlates with the outflow seen in
\co{12}{1}{0} by \citet{2002ApJ...580L..21W} and outlines the
eastern edge of the hot gas associated with the outflow seen in diffuse
X-rays \citep{2003MNRAS.343L..47S}. The contours start at 6$\sigma$
and go up by factors of 2 (1$\sigma_{\hcn}$=3.09 $\ensuremath{\rm{K \, km \, s^{-1}}}$ and
1$\sigma_{\ensuremath{{\rm HCO^+}}}$=2.26 $\ensuremath{\rm{K \, km \, s^{-1}}}$). The magenta ellipse in the top
left panel indicates the region from which the spectra in
Figure~\ref{fig:M82_all_spectra_mean} were extracted. {\em Bottom
Row:} The first moment map (mean velocity) for \hcn, \ensuremath{{\rm HCO^+}}, and
\co{12}{1}{0}. Both velocity fields are consistent with a rotating
torus of molecular material
\citep{1987PASJ...39..685N,1995ApJ...445L..99S}. The possibly
outflowing \ensuremath{{\rm HCO^+}}\ material north and south of the disk has
velocities similar to the outflowing material in the center of M82
instead of velocities associated with the rotating molecular
torus. The GBT, OVRO, and VLA beams are shown in the lower left
corner of relevant panels and a 10\arcsec\ scale-bar in the lower
right hand corner of all panels. }
\label{fig:M82_overview_figure}
\end{figure*}
The \hcn\ and \ensuremath{{\rm HCO^+}}\ maps demonstrate the GBT's excellent capabilities
for mapping large areas at high surface brightness sensitivity with
good resolution (Figure~\ref{fig:M82_overview_figure}). Higher
resolution maps have been published for both \hcn\ $J=1-0$
\citep{1993A&A...277..381B} and \ensuremath{{\rm HCO^+}}\ $J=1-0$
\citep{1998ApJ...507..745S}, but our maps have better surface
brightness sensitivity by a factor of four and cover a larger area.
We compare our maps to lower resolution \co{12}{2}{1} mapping from the
HERACLES survey (Leroy et al., in prep) and to higher resolution,
zero-spacing corrected \co{12}{1}{0} interferometeric maps
\citep{2002ApJ...580L..21W}. The \co{12}{2}{1} map has 20.1\arcsec\
resolution and covers a wide field. The \co{12}{1}{0} map has higher
resolution (3.6\arcsec), but covers a smaller field of view. We also
compare our data to a new 3.0~GHz Jansky Very Large Array (VLA)
continuum map (Marvil et al., 2013, in prep), which has $\approx 0.7
\arcsec$ resolution and includes a mixture of free-free and
synchrotron emission, both of which trace the distribution of recent star
formation. The total flux in the map agrees with the total flux
measured from single dish observations in the literature.
\subsection{\hcn, \ensuremath{{\rm HCO^+}}, and CO comparison}
The morphology of \hcn\ and \ensuremath{{\rm HCO^+}}\ emission follows the galaxy disk
and coincides with the main ridge of \co{12}{1}{0} emission and the
star formation traced by the radio continuum emission (top and middle
panels of Figure~\ref{fig:M82_overview_figure}). The velocity
distributions of the disk \hcn\ and \ensuremath{{\rm HCO^+}}\ emission are similar to the
velocity distribution of the CO disk (bottom panels of
Figure~\ref{fig:M82_overview_figure}), suggesting that the \hcn,
\ensuremath{{\rm HCO^+}}, and CO emission in the disk originates from the same rotating
molecular torus \citep{1987PASJ...39..685N,1995ApJ...445L..99S}.
For the first time, we also measure \ensuremath{{\rm HCO^+}}\ emission at low surface
brightness associated with the \co{12}{1}{0} emission extending north
and south of the main disk \citep{2002ApJ...580L..21W}. Simulations of
the GBT beam shape at 4\,mm indicate that this emission does not
originate from sidelobes. The \ensuremath{{\rm HCO^+}}\ emission outlines the eastern
edge of the X-ray emission associated with the central outflow,
suggesting that the dense molecular gas is entrained in the outflow of
lower density gas. The \ensuremath{{\rm HCO^+}}\ in this region is also kinematically
inconsistent with a disk, similar to what is seen in CO (cf.\ Figure 6
in \citealp{2002ApJ...580L..21W} and
Figure~\ref{fig:M82_overview_figure} here).
CO emission has been associated with the outflow in M82
\citep{2002ApJ...580L..21W} and has been seen in the outflow from the
starburst nucleus of NGC 253 \citep{2013Natur.499..450B}. Emission
from \hcn\ and \ensuremath{{\rm HCO^+}}\ has also been associated with the AGN-driven
outflow in the ULIRG Mrk 231 \citep{2012A&A...537A..44A}. To our
knowledge, however, the current observations are the first detection of dense
molecular gas, as traced by \ensuremath{{\rm HCO^+}}, associated with a
starburst-driven outflow in a nearby galaxy. The
outflow of dense molecular gas seen in \ensuremath{{\rm HCO^+}}\ may regulate star
formation in galaxies like M82 by removing the fuel for star
formation.
The disk-averaged line profiles of the \hcn\ and \ensuremath{{\rm HCO^+}}\ emission agree
with \co{12}{2}{1} emission from the HERACLES image (Leroy et al., in
prep) in Figure~\ref{fig:M82_all_spectra_mean}. Both \hcn\ and \ensuremath{{\rm HCO^+}}\
have structure near 220~\ensuremath{\rm{km \, s}^{-1}}, which is not seen in \co{12}{2}{1} but
is seen in the higher order CO transitions
\citep{2010A&A...521L...2L}. The models of the $\rm{CO}$ emission
from \citet{2010A&A...521L...2L} show that the different CO
transitions reflect different molecular gas densities. Transitions
like $\co{12}{2}{1}$ are emitted by relatively diffuse ($10^{3.5} \,
\ensuremath{\rm{cm^{-3}} \ }$) molecular gas associated with the disk, while the higher order
$\rm{CO}$ transitions ($J > 4$) are emitted by two denser components
($10^5\, \ensuremath{\rm{cm^{-3}} \ }$ and $10^6\, \ensuremath{\rm{cm^{-3}} \ }$) associated with the star-forming gas
\citep{2010A&A...521L...2L}. Inspection of the channel maps near
220~\ensuremath{\rm{km \, s}^{-1}}\ confirms that the lack of structure in the \co{12}{2}{1} is
due to the presence of a significant amount of CO emission associated
with the warm and diffuse molecular gas found throughout the disk,
while the \hcn\ and \ensuremath{{\rm HCO^+}}\ trace the denser molecular gas component
found in the torus.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{M82_whole_galaxy_gbt_spectra_mean.png}
\caption{Mean intensity averaged over the disk of M82
for \co{12}{2}{1} (Leroy et al., in prep), \hcn, and \ensuremath{{\rm HCO^+}}. The
    region averaged is shown as a magenta
    ellipse in the top left panel of
Figure~\ref{fig:M82_overview_figure}. The CO intensity has been
divided by 10.
}
\label{fig:M82_all_spectra_mean}
\end{figure}
\subsection{The Relationship Between \hcn, \ensuremath{{\rm HCO^+}}, CO, and Star
Formation}
To explore the relationships between \hcn, \ensuremath{{\rm HCO^+}}, CO, and star
formation within M82, we smoothed the images to the same resolution
(9.2\arcsec), regridded them to the same coordinate system, and
rebinned the pixels so that each pixel represents an independent
sample. The line intensities were derived from
moment zero maps and the luminosities were calculated by multiplying
the intensities by the area of a pixel. Regions detected at less than
3$\sigma$ were blanked.
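Spelled out (our notation, not the paper's; the conversion assumes small angles), the per-pixel line luminosity is the moment-zero intensity times the physical pixel area:

```latex
\begin{equation}
  L_{\rm line} \;=\; I_{\rm line}\, A_{\rm pix}
  \;=\; \left(\int T_{mb}\, dv\right)\,\bigl(D\,\Delta\theta_{\rm pix}\bigr)^{2},
\end{equation}
```

with $D = 3.53$~Mpc the distance and $\Delta\theta_{\rm pix}$ the pixel angular size, yielding luminosities in units of \ensuremath{\rm{K \, km \, s^{-1}}}~pc$^{2}$.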
Figure~\ref{fig:L_HCN_HCOP_ratio} compares the distribution of
\ensuremath{L_{CO}/L_{HCN}}, \ensuremath{L_{CO}/L_{HCO+}}, and \ensuremath{L_{HCN}/L_{HCO+}}\ for regions within M82 and for
entire galaxies (or centers of galaxies) from the literature
\citep{2004ApJ...606..271G,2008A&A...479..703G,2009ApJ...707.1217J,2012A&A...539A...8G}.
In M82, the distributions of all three ratios have the same range as
the points from the literature, although \ensuremath{L_{CO}/L_{HCN}}\ and \ensuremath{L_{CO}/L_{HCO+}}\ do
peak at lower values. However, we must be cautious because we are
comparing measurements of entire galaxies with spatially resolved
measurements within a galaxy. Because the CO emission has a larger
filling factor than the HCN emission, the unresolved measurements may
have systematically larger \ensuremath{L_{CO}/L_{HCN}}\ and \ensuremath{L_{CO}/L_{HCO+}}\ ratios. The \ensuremath{L_{CO}/L_{HCN}}\
and \ensuremath{L_{HCN}/L_{HCO+}}\ values for M82 as a whole are similar to the mode of
the values found for other galaxies, while the \ensuremath{L_{CO}/L_{HCO+}}\ value is
slightly lower. The \ensuremath{L_{CO}/L_{HCN}}\ and \ensuremath{L_{CO}/L_{HCO+}}\ values increase with
distance from the center of the galaxy (bottom panels of
Figure~\ref{fig:L_HCN_HCOP_ratio}), suggesting the fraction of dense
gas decreases with distance from the center. The \ensuremath{L_{HCN}/L_{HCO+}}\ ratio is
roughly constant across the center of the galaxy with higher values at
the southern edge.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{L_CO_HCN_HCOP_ratio_v3.png}
\caption{{\em Top:} The distributions of \ensuremath{L_{CO}/L_{HCN}}\ (left), \ensuremath{L_{CO}/L_{HCO+}}\
(middle), and \ensuremath{L_{HCN}/L_{HCO+}}\ (right) for regions in our M82 data
compared to values for entire galaxies or galaxy centers from the
literature
\citep{2004ApJ...606..271G,2008A&A...479..703G,2009ApJ...707.1217J,2012A&A...539A...8G}. For
M82, only ratios with S/N greater than 6 are shown. The ratios
between the total values for each quantity for M82 are shown as
dashed gray lines. The ratios for M82 are similar to those seen in
the literature, although the M82 distributions peak at lower ratios
than the bulk of the literature values. {\em Bottom:} Spatial
distribution of the \ensuremath{L_{CO}/L_{HCN}}\ (left), \ensuremath{L_{CO}/L_{HCO+}}\ (middle), and
\ensuremath{L_{HCN}/L_{HCO+}}\ (right) ratios with the \hcn\ (left and right) and \ensuremath{{\rm HCO^+}}\
(middle) contours from Figure~\ref{fig:M82_overview_figure}. The
\ensuremath{L_{CO}/L_{HCN}}\ and \ensuremath{L_{CO}/L_{HCO+}}\ values increase away from the center of the
galaxy reflecting a decrease in dense gas away from the center of
the starburst. The \ensuremath{L_{HCN}/L_{HCO+}}\ ratio is constant across the face of
the galaxy with slightly higher values found in the southern half of
the galaxy. }
\label{fig:L_HCN_HCOP_ratio}
\end{figure*}
Figure~\ref{fig:gao_and_solomon} compares the relationship between the
total molecular gas mass (\ensuremath{{\rm L_{CO}}}), the dense gas mass (\ensuremath{{\rm L_{HCN}}}\ and
\ensuremath{{\rm L_{HCO+}}}), and the star formation rate (\ensuremath{{\rm L_{IR}}}) for the entire M82 disk,
points within M82, the sample of galaxies from the literature used in
Figure~\ref{fig:L_HCN_HCOP_ratio}, and star-forming regions in the
Milky Way \citep{2010ApJS..188..313W,2012arXiv1211.6492M}. We have
estimated \ensuremath{{\rm L_{IR}}}\ for the M82 points by multiplying the 3.0~GHz
continuum flux density per pixel by the ratio of the infrared
luminosity to the 3~GHz flux density for the entire galaxy. In effect,
this procedure relies on the radio-infrared correlation, one of the
tightest astronomical correlations, but avoids additional systematic
errors by using the empirical ratio rather than a fit to the
correlation seen in a large sample of galaxies. The use of the 3~GHz
radio continuum ameliorates the significant optical depth effects
found in edge-on galaxies like M82.
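Written out (our notation), the per-pixel IR luminosity estimate described above is simply a rescaling of the radio continuum by the global radio-to-IR ratio:

```latex
\begin{equation}
  L_{\rm IR}^{\rm pix} \;=\; S_{3\,{\rm GHz}}^{\rm pix}
  \times \frac{L_{\rm IR}^{\rm tot}}{S_{3\,{\rm GHz}}^{\rm tot}} ,
\end{equation}
```

which assumes that the radio-infrared ratio measured for the galaxy as a whole holds pixel by pixel.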
For the entire M82 disk, the relationship between the \ensuremath{{\rm L_{IR}}}\ and \ensuremath{{\rm L_{HCN}}}\
values matches the trend between \ensuremath{{\rm L_{IR}}}\ and \ensuremath{{\rm L_{HCN}}}\ found for a sample
of LIRGs and ULIRGs, which have high star formation rates. However,
our data show that the relationship between \ensuremath{{\rm L_{IR}}}\ and \ensuremath{{\rm L_{HCN}}}\ varies
{\em within M82}. Regions away from the central starburst tend to have
lower \ensuremath{{\rm L_{IR}}}\ (star formation rate) for a given amount of \ensuremath{{\rm L_{HCN}}}\ (dense
molecular gas). These points match the trend seen in normal galaxies
\citep{2004ApJ...606..271G} and individual star-forming regions in the
Milky Way \citep{2010ApJS..188..313W}. For points near the central
starburst, the \ensuremath{{\rm L_{IR}}}\ (star formation rate) is higher for a given
amount of \ensuremath{{\rm L_{HCN}}}\ (dense molecular gas), matching the trend seen for
LIRG/ULIRGs. We see a similar trend for the \ensuremath{{\rm HCO^+}}\
measurements. Compared to normal galaxies, individual regions in M82
tend to have higher dense gas fractions (\ensuremath{L_{HCN}/L_{CO}}\ or \ensuremath{L_{HCO+}/L_{CO}}) and
higher ratios of star formation to total molecular gas mass
($\ensuremath{{\rm L_{IR}}}/\ensuremath{{\rm L_{CO}}}$), but the dense gas fractions vary by a factor of 10.
These resolved observations of M82 show that the relationship between
the amount of dense gas and star formation rate (as traced by the
radio continuum) varies within a single galaxy. The key variable
appears to be the central starburst, which could be using up or
expelling the dense gas, affecting the gas tracer chemistry, and/or
changing how star formation proceeds. Future resolved observations of
\hcn\ and \ensuremath{{\rm HCO^+}}\ in star-forming regions in a variety of galactic
environments will allow us to disentangle these possibilities and
understand how the central starburst in M82 influences star formation.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{lir_lhcn_lhcop_lco_all_newrcimage_ratio_4panel_v2.png}
\caption{ {\em Top:} The star formation rate traced by infrared
luminosity as a function of the amount of dense gas traced by \hcn\
(left) and \ensuremath{{\rm HCO^+}}\ (right). The \ensuremath{{\rm L_{IR}}}-\ensuremath{{\rm L_{HCN}}}\ fit for a sample
including both normal galaxies and LIRGs/ULIRGs from
\citet{2004ApJS..152...63G} is shown as a solid line; star-forming
regions within the Milky Way also follow this fit
\citep{2010ApJS..188..313W,2012arXiv1211.6492M}. The dotted line
shows the \ensuremath{{\rm L_{IR}}}-\ensuremath{{\rm L_{HCN}}}\ fit derived from the sample of LIRGs/ULIRGs
\citep{2008A&A...479..703G,2012A&A...539A...8G}. The integrated
\ensuremath{{\rm L_{IR}}}\ and \ensuremath{{\rm L_{HCN}}}\ points for M82 follow the LIRG/ULIRG relationship
between \ensuremath{{\rm L_{IR}}}\ and \ensuremath{{\rm L_{HCN}}}. Regions within M82, however, span a range
of values with the high luminosity \hcn\ and \ensuremath{{\rm HCO^+}}\ points following
the LIRG/ULIRG relationship and the low luminosity points following
instead the relationship seen in normal galaxies and in the Milky
Way. We see a similar trend for \ensuremath{{\rm L_{IR}}}\ vs. \ensuremath{{\rm L_{HCO+}}}. {\em Bottom:} In
M82, the amount of dense gas per total molecular gas mass (\ensuremath{L_{HCN}/L_{CO}}\
and \ensuremath{L_{HCO+}/L_{CO}}) and the amount of star formation per total molecular
gas mass (\ensuremath{{\rm L_{IR}}}/\ensuremath{{\rm L_{CO}}}) are both high. The errors on the M82 data in
all plots are smaller than the symbol size.}
\label{fig:gao_and_solomon}
\end{figure*}
\section{Summary} \label{sec:conclusions}
We have made the most sensitive map to date of the dense molecular gas
in the starburst galaxy M82 using the largest single-dish
millimeter-wave telescope in the world: the GBT. The \hcn\ and \ensuremath{{\rm HCO^+}}\
emission correlates with lower density molecular gas, traced by CO,
and star formation, traced by radio continuum, and its kinematics are
consistent with the previously proposed torus of molecular gas. We
also detect low surface brightness \ensuremath{{\rm HCO^+}}\ emission coincident with the
base of the molecular gas outflow first detected in $\rm{CO}$ and
tracing the edge of the hot outflowing gas seen in the X-ray.
The \ensuremath{L_{CO}/L_{HCN}}, \ensuremath{L_{CO}/L_{HCO+}}, and \ensuremath{L_{HCN}/L_{HCO+}}\ ratios are similar to those in
other starburst galaxies. The first two ratios increase with distance
from the central starburst, implying that the fraction of dense gas
decreases with distance from the starburst.
The relationship between the dense molecular gas and star formation
varies with distance from the central starburst. Near its center,
there is a higher ratio of star formation to dense molecular gas,
similar to the relationship seen for LIRGs and ULIRGs, but outside of
the central starburst, the ratio of the star formation to dense
molecular gas decreases, agreeing with the correlation seen in normal
galaxies and the Milky Way.
These observations demonstrate the effectiveness of the GBT for
mapping dense molecular gas in external galaxies. The already-exciting
capabilities of the GBT will be increased further with the advent of
the 16-element 4~mm feed array (ARGUS) being built for the GBT and will
complement on-going efforts with ALMA.
\acknowledgments The National Radio Astronomy Observatory is a
facility of the National Science Foundation operated under cooperative
agreement by Associated Universities, Inc. This research used APLpy
and Astropy \citep{2013arXiv1307.6212T}.
{\it Facilities:} \facility{GBT, VLA}
\section{Introduction and motivation}
\label{introduction}
Generative models have interesting properties. As shown in \cite{ng2002discriminative}, where the conditional distribution $P(Y|X)$ ($X$ represents the samples and $Y$ the classes) learned by a generative model is compared to the same distribution learned by a discriminative model, a generative model can learn this conditional distribution better when the amount of data is low, because it regularizes the model. Secondly, once a generative model is trained, it can be sampled as much as needed. Methods such as GAN \citep{Goodfellow14}, WGAN \citep{Arjovsky17}, CGAN \citep{Mirza14}, CVAE \citep{Sohn15} and VAE \citep{Kingma13} have produced convincing samples on various image datasets such as MNIST, bedrooms \citep{Radford15} or ImageNet \citep{Nguyen16}.
Other works use generative models for data augmentation \citep{Ratner17}, for robustness against adversarial examples \citep{Wang17}, or to produce labeled data \citep{Sixt16} in order to improve the training of discriminative models.
One commonly accepted tool to evaluate a generative model trained on images is visual assessment of the realism of samples. One such method is the 'visual Turing test', in which humans inspect samples and try to guess whether the images are generated or real. It has been used to assess generative models of images from ImageNet \citep{Denton15} and of digit images \citep{Lake15}. Others propose to replace the human analysis by an analysis of the first and second moments of the activations produced by the generated data in a pretrained Inception model. This has been done at the output of the Inception model for the Inception Score (IS) \citep{Salimans16}, and with activations from an intermediate layer for the Fr\'echet Inception Distance (FID) \citep{Heusel17}. Log-likelihood based evaluation metrics were also widely used to evaluate generative models, but as shown in \cite{Theis15}, these evaluations can be misleading in high-dimensional settings such as images.
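For reference, the FID is the Fr\'echet distance between two Gaussians fitted to the Inception activations of real and generated data \citep{Heusel17}:

```latex
\begin{equation}
  d^{2}\bigl((\mu_r,\Sigma_r),(\mu_g,\Sigma_g)\bigr)
  = \lVert \mu_r - \mu_g \rVert_2^{2}
  + \mathrm{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right),
\end{equation}
```

where $(\mu_r,\Sigma_r)$ and $(\mu_g,\Sigma_g)$ are the sample means and covariances of the real and generated activations; lower values indicate a better match.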
The solution we propose to estimate both sample quality and global fit of the data distribution is to use the test set of a given dataset: generated data are incorporated into the training of a classifier, which is then evaluated on the test set. As a baseline, we train the same classifier only on the original training data. The classifier's performance on the test set is thus used as a criterion to evaluate the quality of the generator.
If replacing some training data by generated data does not degrade the score on the test set with respect to the baseline, the generated samples are close to the training data. If the test score is significantly lower, the generated samples make training harder by shifting the data distribution away from the original one.
This method therefore implicitly assesses, on held-out data, how close the distribution of generated data is to the distribution of the training data. Because it mirrors the standard practice of evaluating classifiers on a test set, it introduces little additional evaluation bias.
Furthermore, this criterion is directly relevant to practical applications of generative models, such as data augmentation when the amount of training data is low.
As discussed above, generative models can regularize learning when data are scarce \citep{ng2002discriminative}, and once trained they can be sampled as many times as needed and can produce interpolations between images. Prior works use generated samples for data augmentation \citep{Ratner17}, for robustness against adversarial examples \citep{Wang17}, or to produce labeled data \citep{Sixt16} in order to improve the training of discriminative models, but their intention is not to evaluate or compare generative neural networks.
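As a concrete illustration, the evaluation protocol can be sketched in a few lines. This is a toy example, not the experimental pipeline of this paper: the synthetic dataset, the class-conditional Gaussian used as a stand-in "generator", and the linear classifier are all placeholders chosen only to make the procedure runnable.

```python
# Toy sketch of the proposed evaluation: train a classifier on generated
# data and compare its test-set score against a classifier trained on the
# real training data. All components here are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in "generator": fit one Gaussian per class on the training split,
# then sample a synthetic labeled dataset of the same size.
X_gen, y_gen = [], []
for c in np.unique(y_train):
    Xc = X_train[y_train == c]
    mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
    X_gen.append(rng.multivariate_normal(mu, cov, size=len(Xc)))
    y_gen.append(np.full(len(Xc), c))
X_gen, y_gen = np.vstack(X_gen), np.concatenate(y_gen)

# Baseline classifier on real data; candidate classifier on generated data.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
candidate = LogisticRegression(max_iter=1000).fit(X_gen, y_gen)
score_real = baseline.score(X_test, y_test)
score_gen = candidate.score(X_test, y_test)
print(score_real, score_gen)
```

A `score_gen` close to `score_real` indicates that training on generated data is nearly as good as training on real data; a large gap indicates a distribution shift introduced by the generator.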
Our contribution is twofold: first, we propose a method to evaluate generative models on the test set; secondly, we propose a score to quantify the performance of a model on a given dataset.
We additionally describe how to adapt the Inception Score and the Fr\'echet Inception Distance to any dataset.
The next section will present the related work on generative models, the exploitation of the generated samples and their evaluation. We then present our generative model evaluation framework before presenting experimental results on several generative models with different datasets.
\section{Related work}
\label{Related work}
\subsection{Generative models}
\begin{figure}
\begin{center}
\resizebox{100mm}{50mm}{
\begin{tikzpicture}[scale=0.50,
roundnode/.style={circle, draw=black!60, fill=green!0, very thick, minimum size=30mm},
squarednode/.style={rectangle, draw=black!60, fill=red!0, very thick, minimum size=20mm},
squarednode2/.style={rectangle, draw=black!60, fill=red!0, very thick, minimum size=5mm},
]
\node[squarednode] (generator) {Generator};
\node[squarednode2] (trainOn) [above=of generator] {1. Train};
\node[roundnode] (train) [above=of trainOn] {Train Dataset};
\node[squarednode2] (generate) [right=of generator] {2. Generate};
\node[roundnode] (generated) [right=of generate] {Generated Dataset};
\node[squarednode2] (trainOn2) [right=of generated] {3. Train};
\node[squarednode] (classifier) [right=of trainOn2] {Classifier};
\node[squarednode2] (testOn) [above=of classifier] {4. Test};
\node[roundnode] (test) [above=of testOn] {Test Dataset};
\draw[-] (train.south) -- (trainOn.north);
\draw[->] (trainOn.south) -- (generator.north);
\draw[-] (generator.east) -- (generate.west);
\draw[->] (generate.east) -- (generated.west);
\draw[-] (generated.east) -- (trainOn2.west);
\draw[->] (trainOn2.east) -- (classifier.west);
\draw[-] (test.south) -- (testOn.north);
\draw[->] (testOn.south) -- (classifier.north);
\end{tikzpicture}
}
\caption{Proposed method: 1. train a generator on the training data, 2. generate a labeled dataset, 3. train a classifier on the generated data, 4. evaluate the generator by testing the classifier on the test set}
\end{center}
\label{fig:shema_methode}
\end{figure}
The variational auto-encoder (VAE) framework \citep{Kingma13, Rezende14} is a particular kind of auto-encoder with control over its latent space, in which each variable is a sample from a prior distribution, often chosen as a standard normal distribution $\mathbf{N}(0,I)$ (where $I$ is the identity matrix). The VAE learns to map this low-dimensional latent space to the observation space. This characteristic makes the VAE an interesting option for generating new data after training. The structure of the latent space comes from the minimization of the KL divergence between the distribution of the latent space and the prior $\mathbf{N}(0,I)$. For the sake of simplicity, in this paper we refer to the decoder of the VAE as a generator.
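For illustration, the KL regularization term between a diagonal-Gaussian encoder posterior and the prior $\mathbf{N}(0,I)$ has a well-known closed form; the following minimal NumPy sketch (illustrative only, not the implementation used in our experiments) computes it:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I)),
    summed over latent dimensions (the VAE regularization term)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# When the encoder outputs the prior itself, the KL term vanishes.
mu = np.zeros((4, 8))       # batch of 4 samples, latent dimension 8
log_var = np.zeros((4, 8))  # log-variance 0 means unit variance
print(kl_to_standard_normal(mu, log_var))  # -> [0. 0. 0. 0.]
```

Minimizing this term pulls the aggregate posterior toward $\mathbf{N}(0,I)$, which is what makes sampling from the prior at generation time meaningful.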
Generative adversarial networks \citep{Goodfellow14} are a framework of models that learn through a game between two networks: a generator that learns to produce images from a distribution $P$, and a discriminator that learns to distinguish generated images from true ones. The generator tries to fool the discriminator, while the discriminator tries to beat the generator. This class of generative models can produce visually realistic samples on diverse datasets, but it suffers from training instabilities. Recent approaches such as the Wasserstein GAN (WGAN) \citep{Arjovsky17} address these issues by enforcing a Lipschitz constraint on the discriminator. We also evaluate BEGAN \citep{Berthelot17}, another GAN variant which uses an auto-encoder as its discriminator.
Conditional neural networks \citep{Sohn15}, and in particular Conditional Variational Autoencoders (CVAE) and Conditional Generative Adversarial Networks (CGAN) \citep{Mirza14}, are a class of generative models that have control over the class of their samples. By imposing a label during training, a conditional generative network can generate from any class and thus produce labeled data. The conditional approach has been used to improve the quality of generative models and make them more discriminative \citep{Odena16}. These models are particularly adapted to our setup, because we need to generate labeled data to train our classifiers.
\subsection{Evaluation of generated samples}
\cite{Theis15} show that different metrics (such as Parzen windows, nearest neighbors or log-likelihood) applied to generative models can lead to different results: good results in one application of a generative model cannot be used as evidence of good performance in another application. Their conclusion is that evaluation based on sample visual quality is a bad indicator of the entropy of the samples. Conversely, the log-likelihood can be used to produce samples with high entropy but does not ensure good visual quality. The method we propose can estimate both the quality and the entropy of the samples, as we will show in Section \ref{methods}.
The quality of the internal representation of a generator can also be estimated with a discriminator. \cite{Radford15} use the discriminator of a DCGAN as a feature extractor for evaluating the quality of unsupervised representation learning algorithms: they apply the feature extractor to supervised datasets and evaluate the performance of linear models fitted on top of these features, reporting good accuracy on CIFAR-10 with this method. This approach gives insight into how the discriminator estimates whether an image is true or false. If the discriminator has feature extractors good enough for classification, it means that the generator's samples are hard to discriminate from samples of the true distribution, which indirectly assesses the quality of the generator. This method is, however, applicable only if a deep convolutional neural network producing feature maps is used as discriminator.
The main difference between a discriminator and our classifier is that the latter is not involved in the training process. In our approach, the generator is completely independent from the classifier, so the classifier introduces no bias into the generator.
\subsection{Parzen window estimates and likelihood estimators}
Parzen windows estimate
the unknown probability density function $f$ of a probability distribution $P$. This method uses a mixture of simpler probability density functions, called kernels, to approximate $f$. A popular kernel is an isotropic Gaussian centered on a given data point with a small variance (the variance is a hyperparameter). The idea, as in other methods based on kernel density estimation, is to place a small window on each data point so as to smooth the function we try to approximate. However, even if the number of samples is high, the Parzen window estimate can still be very far from the true likelihood, as shown in \cite{Theis15}, and thus is not a good approach to evaluate whether the data distribution learned by a model is close to the original one.
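For concreteness, such an estimator with isotropic Gaussian kernels can be sketched as follows (a minimal NumPy illustration of the general technique, not the specific estimator used in the cited work):

```python
import numpy as np

def parzen_log_likelihood(x, data, sigma):
    """Log-density of x under a Parzen window estimate built from `data`:
    a mixture of isotropic Gaussians of bandwidth `sigma`, one per point."""
    d = data.shape[1]
    sq = np.sum((data - x) ** 2, axis=1)  # squared distance to each kernel center
    log_k = -sq / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log-mean-exp over the n kernels, computed stably
    m = log_k.max()
    return m + np.log(np.mean(np.exp(log_k - m)))

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2))  # 500 "training" points in 2-D
print(parzen_log_likelihood(np.zeros(2), data, sigma=0.2))
```

The bandwidth `sigma` dominates the estimate: too small and the density collapses onto the training points, too large and all structure is smoothed away, which is one reason the estimate can be far from the true likelihood.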
\subsection{Multi-scale structural similarity}
Multi-scale structural similarity (MS-SSIM, \cite{Wang03}) is a measurement that incorporates image details at different resolutions in order to compare two images. This similarity is generally used in the context of image compression to compare images before and after compression. \\
\cite{Odena16} use this similarity to estimate the variability inside a class. They randomly sample two images of a given class and measure their MS-SSIM; if the result is low, the images are considered different. By repeating this process several times, the similarity gives an insight into the entropy of $P(X|Y)$ ($X$ a data point and $Y$ its class): if the MS-SSIM results are low, the entropy is high; otherwise, the entropy is low. However, it cannot estimate whether the samples come from one or several modes of the distribution $P(X|Y)$. For example, if we want to generate images of cats, the MS-SSIM similarity cannot differentiate a generator that produces different kinds of black cats from a network that produces different cats of different colors. In our method, if the generator is able to generate from only one mode of $P(X|Y)$, the score will be low in the testing phase.
\subsection{Inception score}
One of the most widely used approaches to evaluate a generative model is the Inception Score (IS) \citep{Salimans16,Odena16}. The authors use a pretrained Inception classifier to get the conditional label distribution $P(Y|X)$ over the generated samples and a marginal distribution over the classes $P(Y)$.
They proposed the following score in order to evaluate a generative model:
\begin{equation}
\exp\left(\mathbb{E}_X \, KL\left(P(Y | X) \parallel P(Y)\right)\right) \mbox{ ,}
\label{inception_score}
\end{equation}
When the score is high, the generator produces samples from varied classes (the entropy of $P(Y)$ is high) and each sample looks like a real image from the original dataset (the entropy of $P(Y|X)$ is low). The Inception Score can be seen as a measurement of the variability of the generated data that penalizes the uncertainty of $P(Y|X)$. Unfortunately, it does not estimate whether the samples are varied inside a given class (the entropy of $P(X|Y)$).
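As an illustration, the score in Equation \ref{inception_score} can be computed directly from a matrix of classifier probabilities; the following minimal NumPy sketch (not the implementation of the cited works) makes its two extremes explicit:

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """IS from an (n_samples, n_classes) matrix of classifier
    probabilities p(y|x): exp of the mean KL(p(y|x) || p(y))."""
    p_y = p_yx.mean(axis=0)  # marginal class distribution p(y)
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return np.exp(kl.mean())

# Confident, evenly spread predictions -> maximal score (= number of classes).
print(inception_score(np.eye(10)))              # close to 10
# Uniform predictions -> minimal score of 1.
print(inception_score(np.full((10, 10), 0.1)))  # exactly 1
```

Note that a generator emitting one perfect image per class already reaches the maximum, which is precisely the intra-class variability blind spot discussed above.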
\subsection{Frechet Inception Distance}
The Fréchet Inception Distance (FID) was introduced in \cite{Heusel17} to evaluate generative adversarial networks. The FID, like the Inception Score, is based on low-order moment analysis: it compares the mean and covariance of activations between real and generated data. The activations are taken from an inner layer of an Inception model, and the comparison uses the Fréchet distance between the two Gaussians defined by these means and covariances (Equation \ref{eq:frechet_inception_distance}). The Inception model is trained on ImageNet, so the metric is directly applicable only to datasets with the same classes.
\begin{equation}
d^2((\mu,C),(\mu_{gen},C_{gen})) = \parallel \mu - \mu_{gen} \parallel_2^2 + \mathrm{Tr}\left(C + C_{gen} - 2(C \, C_{gen})^{\frac{1}{2}}\right) \mbox{ ,}
\label{eq:frechet_inception_distance}
\end{equation}
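To make Equation \ref{eq:frechet_inception_distance} concrete, the following NumPy sketch computes it under the simplifying assumption of diagonal covariances (so the matrix square root becomes elementwise; this is an illustrative simplification, not the full FID computation, which requires a general matrix square root):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Squared Frechet distance between two Gaussians with diagonal
    covariances (var1, var2 are per-feature variances):
    d^2 = ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))."""
    return (np.sum((mu1 - mu2) ** 2)
            + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

rng = np.random.default_rng(0)
act_real = rng.standard_normal((1000, 64))       # activations on real data
act_gen = 0.5 + rng.standard_normal((1000, 64))  # shifted "generated" activations
d2 = frechet_distance_diag(act_real.mean(0), act_real.var(0),
                           act_gen.mean(0), act_gen.var(0))
print(d2)  # grows with the gap between the two activation distributions
```

Identical activation statistics give a distance of zero; any shift in mean or spread increases it, which is the sense in which FID measures distributional closeness.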
The advantage of FID and IS is that they are fully unsupervised given the pretrained Inception model. However, these methods must always be used with the very same pretrained model: scores computed with different feature extractors are not comparable. Furthermore, in their original form they cannot be applied to datasets whose classes are not covered by ImageNet.
More evaluation methods for generative models are described in 'Pros and Cons of GAN Evaluation Measures' \cite{Borji18}. In particular, the authors review approaches that use the performance of a classifier to evaluate generative models, such as \cite{Radford15}, \cite{Zhang16} or \cite{Isola16}; each of them, in slightly different ways, uses a classifier trained on real data to evaluate the generated data.
The specificity of our method is that it evaluates real data with a classifier trained on generated data, rather than evaluating generated data with a classifier trained on real data. It measures whether a classifier is able to understand real data based on knowledge learned from generated data. Furthermore, the approach can be applied with any architecture of generator or classifier.
\section{Methods}
\label{methods}
We evaluate generators in a supervised training setup. We have a dataset $D$ composed of pairs of examples $(x, y)$, where $x$ is a data point and $y$ the label associated with it. The dataset is split into three parts $D_{train}$, $D_{valid}$ and $D_{test}$. Our method needs a generative model that can sample conditionally on any given label $y$. This conditional generative model is trained on $D_{train}$. Once the training of this model is done, we sample random labels and use the generative model to get a new dataset $D_{gen}$. Then, we mix the true training set $D_{train}$ with the generated data $D_{gen}$. The ratio of generated data in the whole mixture $D_{mix}$ is called $\tau$; it is used as the probability of sampling a generated batch rather than a batch of true data during training. $D_{mix}$ is used to train a classifier on the classes of the dataset. This classifier is evaluated after each epoch on the portion $D_{valid}$ of the dataset. Once we have the best model over $D_{valid}$, we compute the score of this classifier on the test set $D_{test}$. We compare the results from training on $D_{mix}$ with a baseline: the score of the same classifier model trained only on the training data $D_{train}$. We can summarize our method as follows:
\begin{enumerate}
\item Train a conditional generative model on $n$ samples of $D_{train}$.
\item Mix the samples $D_{gen}$ generated by this model with $D_{train}$ under a probability $\tau$ into $D_{mix}$.
\item Train a discriminative model on $D_{mix}$ (to avoid over-fitting on $D_{gen}$, the data are generated such that the discriminative model sees each generated sample only once).
\item Train a discriminative model on $D_{train}$, using the same $n$ samples as for the generator training.
\item Select the best classifier on the validation set $D_{valid}$.
\item Compare the scores of these classifiers on the test set $D_{test}$.
\end{enumerate}
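The steps above can be sketched end to end with toy stand-ins: a per-class Gaussian plays the role of the conditional generative model and a nearest-centroid rule plays the role of the classifier. These stand-ins are illustrative assumptions only, not the models used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset standing in for D_train / D_test.
def make_data(n):
    y = rng.integers(0, 2, n)
    x = rng.standard_normal((n, 2)) + 3.0 * y[:, None]
    return x, y

x_train, y_train = make_data(500)
x_test, y_test = make_data(500)

# 1. "Conditional generative model": per-class Gaussian fitted on D_train.
means = np.stack([x_train[y_train == c].mean(0) for c in (0, 1)])
stds = np.stack([x_train[y_train == c].std(0) for c in (0, 1)])

# 2. Sample a labeled generated dataset D_gen.
y_gen = rng.integers(0, 2, 500)
x_gen = means[y_gen] + stds[y_gen] * rng.standard_normal((500, 2))

# 3./4. Nearest-centroid "classifier" trained on D_gen (tau = 1) vs. D_train.
def fit_predict(x_fit, y_fit, x_eval):
    c = np.stack([x_fit[y_fit == k].mean(0) for k in (0, 1)])
    d = ((x_eval[:, None, :] - c[None]) ** 2).sum(-1)
    return d.argmin(1)

# 6. Compare the two classifiers on D_test.
acc_gen = (fit_predict(x_gen, y_gen, x_test) == y_test).mean()
acc_base = (fit_predict(x_train, y_train, x_test) == y_test).mean()
print(acc_gen, acc_base)  # both high: the toy generator captures the classes
```

Here the toy generator fits the true class-conditional distributions well, so the classifier trained on $D_{gen}$ nearly matches the baseline; the gap between the two accuracies is exactly what our method measures.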
By iterating this method on diverse values of $\tau$ we can evaluate the quality of a generator given a dataset.
We can evaluate performance with the accuracy on the whole test set or class by class. We evaluate the best accuracy $acc_G$ of each generator as well as the expected performance $\mathbb{E}[acc_G]$ (Equation \ref{eq:fitting_capacity}).
We compare our evaluation method with the Inception Score and the Fréchet Inception Distance. We also experiment with a KNN classifier for our evaluation, to compare with our neural network classifiers.
\begin{equation}
\mathbb{E}[acc_G] = \mathbb{E}[acc_n(G,\tau=1) - acc_n(G,\tau=0) \mid D_{train}] \mbox{ ,}
\label{eq:fitting_capacity}
\end{equation}
\subsection{Fair Comparisons}
For a fair comparison between generative models, we normalize the GAN architectures to the one proposed in the InfoGAN paper \citep{Chen16}. For models that need special adaptation, like CGAN, we added the architecture modifications required to make the model compatible. For the VAEs we used a two-layer fully connected model.
Concerning the classifiers, all classifiers used as proxies to evaluate generative models are the same for a given dataset. They are all trained in the same way with the same data.
Each generative model and classifier is trained on eight different seeds chosen a priori.
The choice of normalizing trainings and models is motivated by the need for an impartial comparison between generative models.
The comparison itself, however, does not depend on specific models or training methods: each part of the pipeline can be tuned to maximize the final results on the test set, as for traditional classification methods.
\subsection{Adaptation of Inception Score and Frechet Inception Distance}
IS, as originally defined in \citep{Salimans16}, cannot be used on datasets whose classes differ from those of ImageNet. Furthermore, IS and FID are tied to the specific model proposed by \cite{Salimans16}, pretrained on ImageNet. We therefore adapt these methods to our setting, so that we can compare generative models on any labeled dataset. We train a classifier on the dataset, and then use it to produce a probability vector to compute IS and an activation vector to compute FID; to obtain a vector similar to \cite{Heusel17}, the activations are taken from a layer in the middle of the model.
The very same formulas as IS and FID can then be used to compare models. The scores produced this way are obviously not comparable with results from other papers, since they come from a different model, but they can be used to compare the generative models presented in this paper with each other.
\section{Experiments}
\subsection{Experimental protocol}
In order to evaluate different generative models, we present our evaluation results on popular datasets: MNIST and Fashion-MNIST \citep{Xiao2017}.
We used two different methods to obtain a generated labeled dataset. The first uses traditional generative neural networks, which cannot produce labeled data. In order to associate each generated sample with a label, we train one generator for each class $y$ on $D_{train}$, which allows us to label any generated sample. Once the training of these generators is done, we mix the samples obtained from each generator to produce $D_{gen}$. In this setting, we compare a standard VAE, WGAN, GAN and BEGAN. The second method uses conditional generative models, which can generate samples from all classes while controlling the class of each sample. Conditional models can thus generate varied labeled samples and produce a whole dataset $D_{gen}$ directly; in this case, we ran our experiments on CVAE and CGAN. Once the dataset $D_{gen}$ is generated, we mix it with the real dataset $D_{train}$. As we can generate as much data as we want, we experimented with different ratios between real and generated data. We call $\tau$ the probability of sampling from $D_{gen}$, and ran experiments with $\tau \in \{0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000\}$. $\tau = 0$ implies that we use only real data from $D_{train}$, while $\tau=1$ implies only generated data; $\tau = 0$ is used as the baseline.
We use a standard CNN with a softmax output as classifier to predict the labels on this mixture of samples. At each epoch we evaluate this classifier on a validation set $D_{valid}$, and we then choose the classifier that performs best on this validation set to compute the test error on $D_{test}$. We use early stopping to end training if the accuracy on the validation set has not improved for $50$ epochs. We train a classifier for each value of $\tau$. We assess the quality of the generative model by comparing the test score of the classifier when $\tau = 0$ with the best test score of the classifier when $\tau > 0$. The result indicates how well the distribution learned from $D_{train}$ fits and generalizes the true data distribution. In order to compare results from different generators on a given dataset, we always use the same classifier architecture.
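The $\tau$-mixing scheme can be sketched as follows (an illustrative NumPy sketch of the sampling rule, not our training implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_batch(d_train, d_gen, tau, batch_size=32):
    """Draw a training batch: with probability tau take a generated batch,
    otherwise a real one (the mixing scheme used to build D_mix)."""
    source = d_gen if rng.random() < tau else d_train
    idx = rng.integers(0, len(source), batch_size)
    return source[idx]

d_train = np.zeros((1000, 4))  # stand-ins for real samples ...
d_gen = np.ones((1000, 4))     # ... and generated samples
frac_gen = np.mean([sample_batch(d_train, d_gen, tau=0.25).mean()
                    for _ in range(2000)])
print(frac_gen)  # close to 0.25: about a quarter of the batches are generated
```

Sampling whole batches with probability $\tau$, rather than mixing individual samples, keeps each batch homogeneous while still giving the desired expected ratio of generated data over training.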
The experiment with various $\tau$ makes it possible to visualize how generated data influence the classifier results.
However, the criterion we are really interested in is the accuracy when $\tau=1$: it describes the ability to fit the real distribution and evaluates whether generated data are good enough for a classifier to "understand" real data.
\subsection{Results}
In Figure \ref{fig:accuracy}, we present the test accuracy as $\tau$ increases. When $\tau=0$ there is no generated data: this is the baseline result without data augmentation. Our interpretation of the figure is that if the accuracy is better than the baseline with a low $\tau$ ($< 0.5$), the generator is able to generalize by learning meaningful information about the dataset. When $\tau > 0.5$, if the accuracy is maintained, the generated data can replace the dataset in most parts of the distribution. When $\tau=1$ there is no more original data, so the classifier is trained only on generated samples. If the accuracy is still better than the baseline, the generator has fit the training distribution (and has possibly learned to generalize, if this score is high on the test set). Unfortunately, our results show that none of our experiments succeeds in outperforming the baseline. This is probably because the training sets of the experimented datasets are quite large and do not need data augmentation to perform well, or because the generators are not good enough.
Following this interpretation, Figure \ref{fig:accuracy} allows us to compare different generative neural networks. For example, we can see that all models are almost equivalent when $\tau$ is low (Figures \ref{fig:accuracy-fashion} and \ref{fig:accuracy-MNIST}). However, when $\tau$ is high, we can clearly differentiate the types of generative models. Furthermore, Figures \ref{fig:accuracy-MNIST_var} (MNIST) and \ref{fig:accuracy-fashion_var} (fashion-MNIST) show the stability of each model on the trained dataset; the variances have been computed by running each model on eight different random seeds. Figures \ref{fig:accuracy-MNIST_classes} and \ref{fig:accuracy-fashion_classes} show the accuracy of the classifier class by class for $\tau=1$. For non-conditional generative models trained class by class, this shows whether the generated images make it possible to discriminate between the different classes. For conditional generative models, it evaluates whether the model is able to learn each mode of the distribution with the same accuracy.
Some of the curves in Figures \ref{fig:accuracy-MNIST_var} and \ref{fig:accuracy-fashion_var} have a high standard deviation. To show that this standard deviation is not due to classifier instability, we plot the classification results for various $\tau$ with a KNN classifier ($k=1$) in Figures \ref{fig:1nn_accuracy-MNIST} and \ref{fig:1nn_accuracy-fashion}. KNN algorithms add no standard deviation of their own, since they are deterministic. The standard deviation is similar for neural network and KNN classifiers, so we can deduce that the neural network classifier does not add instability to the evaluation process of generative models.
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/mnist_accuracy.png}
\caption{Mean classifier accuracy on MNIST against $\tau$}
\label{fig:accuracy-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/fashion-mnist_accuracy.png}
\caption{Mean classifier accuracy on fashion-MNIST against $\tau$}
\label{fig:accuracy-fashion}
\end{subfigure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/mnist_accuracy_var.png}
\caption{Standard deviation of classifier accuracy on MNIST against $\tau$}
\label{fig:accuracy-MNIST_var}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/fashion-mnist_accuracy_var.png}
\caption{Standard deviation of classifier accuracy on fashion-MNIST against $\tau$}
\label{fig:accuracy-fashion_var}
\end{subfigure}
\caption{Representation of accuracy of each model to train a classifier. Classifier trained with a mixture of real and generated data and tested on the test set of each dataset.}
\label{fig:accuracy}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/mnist_num_test_accuracy.png}
\caption{Relative accuracy w.r.t. the baseline on MNIST, class by class, for $\tau=1$}
\label{fig:accuracy-MNIST_classes}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_NN/fashion-mnist_num_test_accuracy.png}
\caption{Relative accuracy w.r.t. the baseline on fashion-MNIST, class by class, for $\tau=1$}
\label{fig:accuracy-fashion_classes}
\end{subfigure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Diagrammes/mnist_diagram.png}
\caption{MNIST boxplot}
\label{fig:diagram-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Diagrammes/fashion-mnist_diagram.png}
\caption{fashion-MNIST boxplot}
\label{fig:diagram-fashion}
\end{subfigure}
\caption{Comparison of models for $\tau=1$}
\label{fig:boxplot}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_KNN/mnist_knn_accuracy.png}
\caption{Mean Accuracy for 1-NN on MNIST}
\label{fig:1nn_accuracy-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Accuracy_KNN/fashion-mnist_knn_accuracy.png}
\caption{Mean Accuracy for 1-NN on fashion-mnist}
\label{fig:1nn_accuracy-fashion}
\end{subfigure}
\end{figure}
Results from Figure \ref{fig:boxplot} show that GAN is among the best models on both MNIST and Fashion-MNIST; however, on Fashion-MNIST one run failed completely, which explains the high variance in Figure \ref{fig:accuracy}. Otherwise, our results show that WGAN offers the best trade-off between stability and accuracy. The same comparison between models can be made from the best performances in Table \ref{table:Best_Table}, where GAN has the best results, while in expected performance WGAN is better (Table \ref{table:Result_Table}). Our recommendation is therefore to run models at least five times with different random seeds and report an expected result, rather than cherry-picking the best seed.
\begin{table}[tbp]
\centering
\caption{Best accuracies for $acc_G(\tau=1)$}
\label{table:Best_Table}
\begin{tabular}{|l |l | l |l | l | l | l | l | l | l}
\hline
Datasets & Baseline &VAE & CVAE & GAN & CGAN & WGAN & BEGAN \\
\hline
MNIST & 97.99\% & 97.24\% & 95.96\% & \textbf{97.47}\% & 96.46\% & 97.36\% & 93.71\%\\
\hline
Fashion MNIST & 86.81\% & 82.74\% & 80.58\% & \textbf{84.35}\% & 80.21\% & 83.76\% & 74.92\% \\
\hline
\end{tabular}
\end{table}
\begin{table}[tbp]
\centering
\caption{Results Expected Accuracy : $\mathbb{E}[acc_G(\tau=1)]$ values}
\label{table:Result_Table}
\begin{tabular}{|l |l | l |l | l | l | l | l | l | l}
\hline
Datasets & Baseline &VAE & CVAE & GAN & CGAN & WGAN & BEGAN \\
\hline
MNIST & 97.80\% & 96.65\% & 95.34\% & \textbf{97.28}\% & 95.79\% & 96.98\% & 92.31\%\\
\hline
Fashion MNIST & 86.39\% & 82.02\% & 78.84\% & 80.66\% & 78.66\% & \textbf{83.26}\% & 71.91\% \\
\hline
\end{tabular}
\end{table}
\subsection{Comparison with IS and FID}
We compare our results to those found with the IS and FID methods adapted to our setting (Figure \ref{fig:inception}). For these two evaluation methods we compute the results for each model experimented on. We add an experiment comparing the train set and the test set: for FID, we compute the Fréchet distance on the activations of the classifier on the test and train sets; for the Inception Score, we only compute it on the test set after training on the train set.
For the Inception Score (Figures \ref{fig:IS-MNIST} and \ref{fig:IS-fashion}), the results (larger is better) do not seem coherent between MNIST and Fashion-MNIST. As the results were difficult to distinguish, we plot the difference between the IS computed on a generative model and the IS computed on the test set (Equation \ref{eq:diff_IS}).
\begin{equation}
\text{diff-IS}(G \mid D_{train}) = IS(G \mid D_{train}) - IS(D_{test} \mid D_{train}) \mbox{ ,}
\label{eq:diff_IS}
\end{equation}
We also added the result obtained when using the training data to compute the Inception Score.
For both MNIST and Fashion-MNIST, we find models that obtain better results than the train set, especially on Fashion-MNIST.
The Fréchet Inception Distance (Figures \ref{fig:FID-MNIST} and \ref{fig:FID-fashion}), computed between the test set and the generated data, gives coherent results between MNIST and Fashion-MNIST (smaller is better), but these are not always coherent with ours. For example, the VAE does not perform well under FID, whereas our results (Figure \ref{fig:accuracy}) show that it is able to train a classifier with high accuracy and good stability on every class.
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Inception/mnist_Inception_Score.png}
\caption{MNIST Inception Score}
\label{fig:IS-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Inception/fashion-mnist_Inception_Score.png}
\caption{fashion-MNIST Inception Score}
\label{fig:IS-fashion}
\end{subfigure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Inception/mnist_Frechet_Inception_Distance.png}
\caption{MNIST Frechet Inception Distance}
\label{fig:FID-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Inception/fashion-mnist_Frechet_Inception_Distance.png}
\caption{fashion-MNIST Frechet Inception Distance}
\label{fig:FID-fashion}
\end{subfigure}
\caption{Modified versions of the Inception Score and Fréchet Inception Distance applied to the evaluated models}
\label{fig:inception}
\end{figure}
\section{Discussion}
We presented a method to evaluate a generative model on the test set of a dataset. It assesses how well the generative model learned to generalize and fit a distribution in a conditional setting.
The use of a discriminative model for evaluation has already been explored to compare generative models (IS, FID). However, the model used in those methods is pretrained on labeled data and, in practice, unique: all comparisons are made with the very same pretrained model. This is problematic because the choice of that model is empirical and not representative. Using the test set of the dataset is more representative of the quality of the generated data than analyzing the first and second moments of activations in an empirically chosen model. Furthermore, an evaluation based on a single fixed model is easier to fake: IS or FID can be used directly as a loss function, and a generator can be trained to maximize them.
As we have shown, this evaluation is meaningful to measure how well the generative model can sample data points that are probable under the true distribution we want to approximate. In this paper we applied the method to image samples; since the generator outputs live in pixel space, our measure can be correlated with a visual assessment of sample quality. The method can also be used in other spaces, as long as labeled data are available to train and evaluate the generator and the baseline.
Our evaluation was performed on several seeds, datasets and generative models. With the current results, WGAN seems to be the most efficient solution. However, this result should be confirmed by experiments on other datasets, generative models and types of discriminative models to obtain a more general comparison; this will be explored in further experiments. Furthermore, the results presented here cover only image datasets, but the method can be applied to other kinds of datasets or generative models, with any classifier, as long as labeled data are available.
As presented in \cite{White16}, the sampling method of a generator can be adapted, which can have an impact on the data produced. A way to boost the performance of a generator can be to focus on improving the sampling phase instead of the model design or the training. An extension of this work could be to look into the impact of several sampling techniques on the performance of a generative model.
Note that while the evaluation takes time, it can be combined with FID: FID can be used for model selection and our method for model validation.
\section{Conclusion}
This paper introduces a new method to assess and compare the performance of generative models on the test set of a dataset. By training a classifier on generated samples, we can estimate the ability of a generative model to fit and generalize the training dataset. New models can be evaluated by applying our method and sharing the best classifier they are able to train with their generated data.
This method makes it possible to take into account complex characteristics of the generated samples, and not only the distribution of their features or the discrimination between true and false data.
The relevance of the evaluation depends on the test set, not on the classifier used. Any classifier can be used to improve or modify the results. For instance, a generator that simply learned to reproduce the training set would beat our current results, since no generator is yet able to beat the baseline.
We nevertheless hope that new generators, or better classifiers, will be able to beat the baseline. Our method does not directly assess how realistic the generated data are, but rather whether their content and variability contain enough information to understand real data.
A generator that reproduces the content of real data is very pertinent for embedded platforms, for example: to save memory, an autonomous agent can store only a generative model to reproduce data (e.g. Fashion-MNIST takes 55 MB, while our CVAE model takes only 26.7 MB).
\newpage
\section{Introduction}
\label{introduction}
Generative models are a particular kind of model that learns to reproduce training data and to generalize from it. Such models have several advantages: for example, as shown in \cite{ng2002discriminative}, the generalization capacity of generative models has interesting properties and can help a discriminative model learn by regularizing it.
Moreover, once trained, they can be sampled as much as needed to produce new datasets. Generative models such as GAN \cite{Goodfellow14}, WGAN \cite{Arjovsky17}, CGAN \cite{Mirza14}, CVAE \cite{Sohn15} and VAE \cite{Kingma13} have produced samples with good visual quality on various image datasets such as MNIST, bedrooms \cite{Radford15} or ImageNet \cite{Nguyen16}.
They can also be used for data augmentation \cite{Ratner17}, for safety against adversarial example \cite{Wang17}, or to produce labeled data \cite{Sixt16} in order to improve the training of discriminative models.
One commonly accepted tool to evaluate a generative model trained on images is visual assessment, which aims at validating the realism of samples. However, such methods are very subjective and dependent on the evaluation process. One instance is the 'visual Turing test', in which samples are visualized by humans who try to guess whether the images are generated or not. It has been used to assess generative models of images from ImageNet \cite{Denton15} and of digit images \cite{Lake15}. Others propose to replace the human analysis by an analysis of the first and second moments of activations of a neural network. This has been done with the output of the Inception model for the "Inception Score" (IS) \cite{Salimans16}, and with activations from intermediate layers for the "Frechet Inception Distance" (FID) \cite{Heusel17}. Log-likelihood based evaluation metrics were also widely used to evaluate generative models but, as shown in \cite{Theis15}, those evaluations can be misleading in high-dimensional cases such as images.
\begin{figure}
\begin{center}
\input{scheme/method.tex}
\caption{Proposed method: 1. Train a generator on real training data; 2. Generate labeled data; 3. Train a classifier on the generated data; 4. Evaluate the generator by testing the classifier on the test set composed of real data.}
\label{fig:shema_methode}
\end{center}
\end{figure}
The solution we propose uses the test set of a given dataset to estimate the quality of generated samples and their global fit to the original data. The test accuracy of a classifier trained on generated data estimates whether a generative model is good at producing a dataset. The full process of the method is illustrated in Figure \ref{fig:shema_methode}.
Our contribution is twofold: first, we propose a method to evaluate generative models on the test set;
second, we introduce a quantitative score, \textit{the fitting capacity}, to evaluate and compare the performance of generative models.
\section{Related work}
\label{Related work}
\subsection{Generative models}
Generative models can be implemented in various frameworks and settings. In this paper we explore two of those frameworks: variational auto-encoders and generative adversarial networks.
The variational auto-encoder (VAE) framework \cite{Kingma13}, \cite{Rezende14} is a particular kind of auto-encoder which learns to map data into a Gaussian latent space, generally chosen as a standard multivariate normal distribution $\mathbf{N}(0,I)$ (where $I$ is the identity matrix). The VAE also learns the inverse mapping from this latent space to the observation space. This characteristic makes the VAE an interesting option for generating new data after training: thanks to the inverse mapping, new data can be generated by sampling from the Gaussian distribution and decoding these samples. The particularity of the latent space comes from the minimization of the KL divergence between the distribution of the latent space and the prior $\mathbf{N}(0,I)$. For the sake of simplicity, in this paper we will refer to the decoder of the VAE as a generator.
Generative adversarial networks \cite{Goodfellow14} are another framework of models that learn to generate data. The learning process is a game between two networks: a generator that learns to produce images from a distribution $P$, and a discriminator that learns to discriminate between generated and true images. The generator learns to fool the discriminator, and the discriminator learns not to be fooled. This class of generative models can produce visually realistic samples from diverse datasets, but they suffer from instabilities in their training. One of the models we evaluate, the Wasserstein GAN (WGAN) \cite{Arjovsky17}, tries to address those issues by enforcing a Lipschitz constraint on the discriminator. We also evaluate BEGAN \cite{Berthelot17}, another variant of the GAN, which uses an auto-encoder as discriminator.
Both GANs and VAEs can also be implemented into a conditional setting.
Conditional neural networks \cite{Sohn15}, and in particular Conditional Variational Auto-encoders (CVAE) and Conditional Generative Adversarial Networks (CGAN) \cite{Mirza14}, are a class of generative models that have control over the class of their samples. By imposing a label on the generator during training, a conditional generative network can generate from any class and thus produce labeled data automatically. The conditional approach has been used to improve the quality of generative models and make them more discriminative \cite{Odena16}. These models are particularly adapted to our setup because we need to generate labeled datasets to train our classifiers.
\subsection{Evaluation of generated samples}
The evaluation of generative models has been addressed in various settings.
\cite{Theis15} show that different metrics (such as Parzen windows, nearest neighbors or log-likelihood) applied to generative models can lead to different results: good results in one application of a generative model cannot be used as evidence of good performance in another application. Their conclusion is that evaluation based on sample visual quality is a bad indicator of the entropy of the samples. Conversely, the log-likelihood can be used to produce and evaluate samples with high entropy, but it does not ensure good visual quality. In this setting, high entropy means high variability in the samples.
More methods to evaluate generative networks are described in \cite{Borji18}, in particular approaches that use a classifier trained on real data to evaluate generative models \cite{Zhang16,Isola16}.
Different quantitative evaluations have also been experimented with in \cite{Jiwoong18}, which compares GAN models in various settings.
These quantitative evaluations are based on divergences or distances between real data, or real features, and generated ones.
Our approach is similar to the one developed in parallel by \cite{Santurkar18}. However, our experiments evaluate generative models that share the same generator architecture and are all trained by the same method, rather than models with their original architectures. We thus compare the results of different learning criteria with the same model architecture and training process.
\subsection{Multi-scale structural similarity}
Multi-scale structural similarity (MS-SIM, \cite{Wang03}) is a measurement that incorporates image details at different resolutions in order to compare two images. This similarity is generally used in the context of image compression, to compare images before and after compression. \\
\cite{Odena16} use this similarity to estimate the variability inside a class. They randomly sample two images of a class and measure their MS-SIM. By repeating this process on multiple data points $X$ of a class $Y$, the similarity gives an insight into the entropy of $P(X|Y)$: if the MS-SIM values are high, the images are similar and the entropy is low; otherwise, the entropy is high. However, it cannot estimate whether the samples come from one or several modes of the distribution $P(X|Y)$. For example, if we want to generate images of cats, the MS-SIM similarity cannot differentiate a generator that produces different kinds of black cats from one that produces different cats of different colors. In our method, if the generator is only able to generate in one mode of the distribution $P(X|Y)$, the score will be low in the testing phase.
\subsection{Inception score}
One of the most used approaches to evaluate a generative model is the Inception Score (IS) \cite{Salimans16,Odena16}. The authors use an Inception classifier pretrained on ImageNet to evaluate the sample distribution.
They compute the conditional class distribution at each generated sample $x$, $P(Y | X=x)$, and the marginal class distribution $P(Y)$ over the generated dataset.
They propose the following score:
\begin{equation}
IS(X)=\exp(\mathbb{E}_X [KL (P(Y | X) \parallel P(Y))]) \mbox{ ,}
\label{inception_score}
\end{equation}
where $KL$ is the Kullback-Leibler divergence. The KL term can be rewritten as:
\begin{equation}
KL (P(Y | X) \parallel P(Y)) = H(P(Y | X), P(Y)) - H(P(Y|X)) \mbox{ ,}
\label{eq:inception_score_kl}
\end{equation}
where $H(P(Y|X))$ is the entropy of $P(Y|X)$ and $H(P(Y | X), P(Y))$ is the cross-entropy between $P(Y | X)$ and $P(Y)$. The entropy term is low when the predictions of the Inception model are confident in one class, and high otherwise. The cross-entropy term is low when the predictions over the whole generated dataset yield unbalanced classes, and high if the classes are balanced.
Hence, the Inception Score rewards models whose predictions are confident over varied classes. The underlying hypothesis is that if the Inception model is confident in its prediction, the image should look real.
Unfortunately, it does not estimate intra-class variability (it does not take the entropy of $P(X|Y)$ into account). Hence, a generator that produces only one high-quality sample per class would maximize the score.
One important restriction of IS is that the generative models under evaluation should produce images from ImageNet classes.
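As a minimal sketch, the IS formula above can be computed directly from the $N \times K$ matrix of class probabilities predicted by the classifier on $N$ generated samples. The function name and the toy one-hot input below are ours, purely for illustration:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS from an (N, K) matrix of class probabilities P(Y|X=x_i)."""
    p_y = probs.mean(axis=0)  # marginal P(Y) over the generated set
    # KL(P(Y|x) || P(Y)) per sample, then average and exponentiate
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# A perfectly confident, perfectly balanced "generator" over K classes
# reaches the maximum score K:
probs = np.eye(4)  # 4 samples, one per class, fully confident
score = inception_score(probs)
print(score)  # close to 4.0
```

Note that a single perfect sample per class already maximizes this score, which is exactly the intra-class variability blind spot discussed above.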
\subsection{Frechet Inception Distance}
Another recent approach to evaluate generative adversarial networks is the Frechet Inception Distance (FID)~\cite{Heusel17}. The FID, like the Inception Score, is based on low-moment analysis. It compares the means and covariances of activations between real data ($\mu$ and $C$) and generated data ($\mu_{gen}$ and $C_{gen}$). The activations are taken from an inner layer of an Inception model trained on ImageNet. The comparison uses the Frechet distance, as if the means and covariances were those of Gaussian distributions (see Eq. \ref{eq:frechet_inception_distance}).
\begin{equation}
d^2((\mu,C),(\mu_{gen},C_{gen}))=\parallel \mu - \mu_{gen} \parallel_2^2 + Tr(C+C_{gen} - 2(C*C_{gen})^{\frac{1}{2}}) \mbox{ ,}
\label{eq:frechet_inception_distance}
\end{equation}
FID has in particular been used in a large-scale study comparing generative models \cite{Lucic17}.
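As a minimal sketch, Eq. \ref{eq:frechet_inception_distance} can be computed from the activation statistics with a matrix square root. The function name and the toy statistics below are ours:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, c1, mu2, c2):
    """Squared Frechet distance between N(mu1, c1) and N(mu2, c2)."""
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):  # numerical noise may add tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

# Identical statistics give distance 0; shifting the mean by a vector d
# adds ||d||^2 to the distance:
mu, c = np.zeros(3), np.eye(3)
print(frechet_distance(mu, c, mu, c))        # ~0
print(frechet_distance(mu, c, mu + 1.0, c))  # ~3.0
```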
\section{Methods}
\label{sec:methods}
We evaluate generators in a supervised training setup. The real dataset $D$, the \textit{original dataset}, is composed of pairs $(x, y)$ where $x$ is a data point and $y$ the associated label. The dataset is split into three subsets $D_{train}$, $D_{valid}$ and $D_{test}$ for cross-validation. Our method needs a generative model that can sample conditionally on any given label $y$. This conditional generative model is trained on $D_{train}$. Then, we sample random labels $\hat{y}$ and use the generative model to produce a new dataset $D_{gen}$ of samples $\hat{x}$, which is used afterwards to train a classifier implemented as a deep neural network. Cross-validation is performed with $D_{valid}$. We compare the $D_{test}$ accuracy obtained after training on $D_{gen}$ with the score of the same classifier model trained only on $D_{train}$ (the baseline). We can summarize our method as follows:
\begin{enumerate}
\item Train a conditional generative model over $D_{train}$
\item Sample data to produce $D_{gen}$
\item Train a discriminative model (the classifier) over $D_{gen}$
\item Select a classifier over a validation set $D_{valid}$.
\item Iterate the process for several generative models and compare the accuracy of the classifiers on $D_{test}$.
\end{enumerate}
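The five steps can be sketched end to end on a toy problem. In this illustrative example (entirely ours, not the actual experimental code), a per-class Gaussian fit stands in for the conditional generator and a nearest-centroid rule stands in for the deep classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: two well-separated 2-D Gaussian classes
def make_split(n):
    x0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
    x1 = rng.normal([4.0, 4.0], 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.repeat([0, 1], n)

x_train, y_train = make_split(200)  # D_train
x_test, y_test = make_split(200)    # D_test

# 1. "Train" a conditional generator: here a per-class Gaussian fit
stats = {c: (x_train[y_train == c].mean(0), x_train[y_train == c].std(0))
         for c in (0, 1)}

# 2. Sample random labels and generate the labeled dataset D_gen
y_gen = rng.integers(0, 2, size=400)
x_gen = np.stack([rng.normal(*stats[c]) for c in y_gen])

# 3-4. Train a (nearest-centroid) classifier on D_gen only
centroids = np.stack([x_gen[y_gen == c].mean(0) for c in (0, 1)])
def classify(x):
    return np.argmin(((x[:, None, :] - centroids) ** 2).sum(-1), axis=1)

# 5. Fitting capacity = accuracy of that classifier on the real test set
fitting_capacity = float((classify(x_test) == y_test).mean())
print(fitting_capacity)  # close to 1.0 on this easy toy problem
```

Because the toy generator fits the class-conditional distributions well, the classifier trained only on generated data recovers near-baseline test accuracy, which is what the fitting capacity is designed to measure.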
The protocol presented allows analyzing the performance of a model on the whole test set or class by class.
As we will see in the results, we estimate the stability of the generators by training them with different random seeds.
Simply changing this seed can have a great impact on generative model training and thus induce variability in the results. To show that the variability of the results comes mainly from the instability of the generator and not from the classifier, we compare our results with results computed with KNN classifiers instead of neural networks. As KNN classifiers are deterministic, if the random seeds produce variability with this kind of classifier, the instability necessarily comes from the generators. The KNN classifier is, however, not a good general option for our evaluation method, because it is not adapted to complex image classification.
The classifier was chosen to be a deep neural network because such models are known to be difficult to train when the test distribution is biased with respect to the training distribution. This characteristic is put to good use to compare generated datasets, and hence generative models. If $D_{gen}$ contains unrealistic samples, or samples that are not varied enough, the classifier will not reach a high accuracy. Moreover, to investigate the impact of generated data on the training of a classifier, we also run the method on mixtures of real and generated data. The ratio of generated data over the complete dataset is called $\tau$: $\tau=0$ means no generated data, and $\tau=1$ means only generated data.
We call our final score for a generator $G$ the \textit{fitting capacity} (noted $\Psi_G$) because it evaluates the ability of a generator to fit the distribution of a test set. It is the test accuracy over $D_{test}$ of the classifier $C$ trained on data from the generator with $\tau=1$.
We evaluate models with the generator and discriminator architectures proposed in \cite{Chen16}.
\section{Experiments}
\subsection{Implementation details}
Generative models are often evaluated on the MNIST dataset. Fashion-MNIST \cite{Xiao2017} is a direct drop-in replacement for the original MNIST dataset, with more complex classes and higher variability. Thus, we use this dataset, in addition to MNIST, to evaluate the different generative models.
As presented in the previous section, our method requires generating labeled datasets, for which we used two different approaches. In the first one, we train one generator per class $y$ on $D_{train}$. This enables us to generate samples from a specific class and hence to build a labeled dataset. In this setting, we compare VAE, WGAN, GAN and BEGAN. The second approach uses conditional generative models, which can generate the whole labeled dataset $D_{gen}$ directly. In this case, we ran our experiments on CVAE and CGAN. The generators are trained with the Adam optimizer on the whole original training dataset for $25$ epochs, with eight different random seeds.
The classifier trained on $D_{gen}$ is a standard deep CNN with a softmax output (see the Appendix for details).
The classifier is trained with the Adam optimizer for at most $200$ epochs. We use early stopping: training stops if the accuracy on the validation set has not improved for $50$ epochs. Then, we apply model selection based on the validation error and compute the test error on $D_{test}$. The architecture of the classifier is kept fixed for all experiments.
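The classifier training loop with early stopping and model selection can be sketched as follows. This is a generic sketch; the `step`/`evaluate` callbacks and the toy validation curve are ours:

```python
def train_with_early_stopping(step, evaluate, max_epochs=200, patience=50):
    """Stop when validation accuracy has not improved for `patience` epochs;
    return the best epoch and its validation accuracy (model selection)."""
    best_acc, best_epoch = -1.0, 0
    for epoch in range(1, max_epochs + 1):
        step()             # one training epoch (placeholder callback)
        acc = evaluate()   # accuracy on D_valid
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_acc

# Toy check: a validation curve that peaks at epoch 3 then plateaus
curve = iter([0.5, 0.7, 0.9, 0.85, 0.85] + [0.85] * 60)
best_epoch, best_acc = train_with_early_stopping(
    lambda: None, lambda: next(curve), patience=5)
print(best_epoch, best_acc)  # best epoch 3 with accuracy 0.9
```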
The experiments with various values of $\tau$ evaluate the results for $\tau \in \{0, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1\}$. $\tau = 0$ is used as a baseline against which the other results are compared. However, most of the analysis is made at $\tau=1$, because these results are representative of the full quality of the generator, i.e. its generalization and its fit to the distribution of the test set.
The experiments with various $\tau$ make it possible to visualize whether a generator is able to generalize and fit a given dataset. If the results improve when $\tau > 0$, the generator is able to perform data augmentation, i.e. to generalize the training data; and if the results are as good when $\tau$ is near 1 as when $\tau$ is near 0, the generator fits the $D_{test}$ distribution as well as the original training set does.
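Building a mixed training set for a given $\tau$ can be sketched as follows (the helper name and the toy arrays are ours):

```python
import numpy as np

def mix_datasets(x_real, y_real, x_gen, y_gen, tau, rng):
    """Training set of len(x_real) samples, a fraction tau of them generated
    (tau=0: only real data, tau=1: only generated data)."""
    n = len(x_real)
    n_gen = int(round(tau * n))
    idx_r = rng.choice(len(x_real), size=n - n_gen, replace=False)
    idx_g = rng.choice(len(x_gen), size=n_gen, replace=False)
    x = np.concatenate([x_real[idx_r], x_gen[idx_g]])
    y = np.concatenate([y_real[idx_r], y_gen[idx_g]])
    return x, y

rng = np.random.default_rng(0)
x_real, y_real = np.zeros((100, 2)), np.zeros(100, dtype=int)
x_gen, y_gen = np.ones((100, 2)), np.ones(100, dtype=int)
x, y = mix_datasets(x_real, y_real, x_gen, y_gen, tau=0.25, rng=rng)
print(len(x), int(y.sum()))  # 100 samples, 25 of them generated
```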
\subsection{Adaptation of Inception Score and Frechet Inception Distance}
We compare our evaluation method with the two most used methods: IS and FID.
IS and FID, as originally defined, use a model pretrained on ImageNet. To apply those methods, it is mandatory to use the exact same model proposed in \cite{Salimans16} with the exact same parameters, otherwise the results are not comparable with those of other papers. However, as proposed in \cite{li17}, we can adapt those methods to another setting (for us, MNIST and Fashion-MNIST) to compare several generative models with each other.
We therefore train a classification model on $D_{train}$. We then use it to produce the probability vector needed to compute IS and the activation vector needed to compute FID. The activations are taken from a layer in the middle of the model (details in the appendix).
The very same formulas as for IS and FID can then be used to compare models. As described previously, the results produced here are not comparable with results presented in other papers, since they come from a different model, but they can be used to compare the generative models presented in this paper with each other.
\subsection{Results}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_diagram.png}
\caption{MNIST boxplot}
\label{fig:diagram-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_diagram.png}
\caption{fashion-MNIST boxplot}
\label{fig:diagram-fashion}
\end{subfigure}
\caption{Analysis and comparison of models when $\tau=1$}
\end{figure}
The relevant results presented below are, on the one hand, the maximum \textit{fitting capacity} of each model over all seeds, to evaluate what the models can achieve, and on the other hand statistics over those 8 seeds, to give insight into the stability of each model with respect to the random seed.
First, we present boxplots of the \textit{fitting capacity} of each model in Figures \ref{fig:diagram-MNIST} and \ref{fig:diagram-fashion}. They show the median value along with the first and last quartiles, and display the outliers of each training (values outside 1.5 times the interquartile range (IQR) over the different seeds). This representation is less sensitive to outliers than the mean value and standard deviation, without hiding those outliers\footnote{Figures \ref{fig:diagram-MNIST} and \ref{fig:diagram-fashion} are zoomed in so that the models can be visually discriminated, leaving some outliers out of the plot. Full figures are in the appendix.}.
Those results show an advantage for the GAN model, which has the best median value (even though it does not beat the baseline). Unfortunately, some of the generator trainings failed (in particular on Fashion-MNIST), producing outliers in the results. WGAN produces results comparable to GAN on MNIST but seems less stable in our setting.
The figures are complemented by the mean \textit{fitting capacity} $\Psi_G$ in Table~\ref{table:Mean_Table} and the best \textit{fitting capacity} in Table~\ref{table:Best_Table}.
We can note that for both MNIST and Fashion-MNIST, the models with unstable results, GAN and WGAN, reach the best $\Psi_G$; however, since some of their trainings failed, the more stable models, VAE and CGAN, have the best \textit{mean fitting capacity}.
\begin{table}[tbp]
\centering
\caption{Mean $\Psi_G$}
\label{table:Mean_Table}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Datasets & \textbf{Baseline} &VAE & CVAE & GAN & CGAN & WGAN & BEGAN \\
\hline
MNIST & \textbf{97.81\%} & 96.39\% & 95.86\% & 86.03\% & \textbf{96.45\%} & 94.25\% & 95.49\%\\
\hline
Fashion & \textbf{86.59\%} & \textbf{80.75\%} & 73.43\% & 67.75\% & 77.68\% & 73.43\% & 77.64\% \\
\hline
\end{tabular}
\end{table}
\begin{table}[tbp]
\centering
\caption{Best $\Psi_G$}
\label{table:Best_Table}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Datasets & \textbf{Baseline} &VAE & CVAE & GAN & CGAN & WGAN & BEGAN \\
\hline
MNIST & \textbf{97.94\%} & 96.82\% & 96.21\% & 97.18\% & 96.45\% & \textbf{97.37\%} & 95.86\%\\
\hline
Fashion & \textbf{87.08\%} & 81.85\% & 81.93\% & \textbf{84.43\%} & 78.63\% & 81.32\% & 81.32\% \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.80\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_classes_test_accuracy.png}
\caption{Relative accuracy w.r.t. the baseline on MNIST, class by class, for $\tau=1$}
\label{fig:accuracy-MNIST_classes}
\end{subfigure}
\begin{subfigure}[b]{0.80\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_classes_test_accuracy.png}
\caption{Relative accuracy w.r.t. the baseline on Fashion-MNIST, class by class, for $\tau=1$}
\label{fig:accuracy-fashion_classes}
\end{subfigure}
\caption{Difference between model performance and the baseline, class by class, when $\tau=1$: mean and standard deviation over random seeds}
\end{figure}
In addition, we present in Figures \ref{fig:accuracy-MNIST_classes} and \ref{fig:accuracy-fashion_classes} the \textit{per-class fitting capacity}. The figures show the difference between the per-class baseline results and those of the classifier trained on the generator under evaluation.
For generative models that are not conditional and are trained class by class, those figures show how successful the generator is at generating each class of the dataset. For conditional generative models, they evaluate whether the model learns each mode of the distribution with the same accuracy.
We can also estimate the stability of each model class by class, or from one class to another. For example, WGAN on MNIST is very stable except on class $1$, while on Fashion-MNIST it seems to struggle on the first three classes. On the other hand, BEGAN has trouble on Fashion-MNIST classes $0$ and $2$ (T-shirt and pullover), suggesting that the generator is not good enough to discriminate between those two classes.
\begin{figure} [t]
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_max_accuracy.png}
\caption{Maximum classifier accuracy on MNIST against $\tau$}
\label{fig:accuracy-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_max_accuracy.png}
\caption{Maximum classifier accuracy on Fashion-MNIST against $\tau$}
\label{fig:accuracy-fashion}
\end{subfigure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_accuracy_var.png}
\caption{Standard deviation of classifier accuracy on MNIST against $\tau$}
\label{fig:accuracy-MNIST_var}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_accuracy_var.png}
\caption{Standard deviation of classifier accuracy on Fashion-MNIST against $\tau$}
\label{fig:accuracy-fashion_var}
\end{subfigure}
\caption{Test accuracy of the classifiers trained by each $G$ for various $\tau$.}
\label{fig:accuracy}
\end{figure}
In Figure {\ref{fig:accuracy}}, we present the test accuracy for various $\tau$, to show the impact of generated data on the training process. When $\tau=0$ there is no generated data: this is the result of the baseline. Figures \ref{fig:accuracy-fashion} and \ref{fig:accuracy-MNIST} show the result of cherry-picking the best run for each $\tau$ and seed, while Figures \ref{fig:accuracy-MNIST_var} and \ref{fig:accuracy-fashion_var} show statistics over the different seeds, illustrating the stability of each model on each dataset.
Our interpretation is that if the accuracy is better than the baseline at low $\tau$ ($0 < \tau < 0.5$), the generator is able to generalize, having learned meaningful information about the dataset. When $\tau > 0.5$, if the accuracy is still near the baseline, the generated data can replace the dataset in most parts of the distribution. When $\tau=1$, the classifier is trained only on generated samples; if the accuracy is still better than the baseline, the generator has fitted the training distribution (and has even learned to generalize if this score is high over the test set).
Our results show that some models are able to perform data augmentation and outperform the baseline when $\tau$ is low, but unfortunately none of them does so when $\tau$ is high. This shows that they cannot completely replace the true data in this setting. Following this interpretation, Figure \ref{fig:accuracy} allows us to compare the different generative networks on both datasets. For example, all models behave similarly in expectation when $\tau$ is low (Figures \ref{fig:accuracy-fashion_var} and \ref{fig:accuracy-MNIST_var}), but when $\tau$ is high we can clearly differentiate the types of generative models.
Some of the curves in Figures \ref{fig:accuracy-MNIST_var} and \ref{fig:accuracy-fashion_var} have a high standard deviation (e.g. GAN when $\tau=1$). To show that this is not due to classifier instability, we plot the classification results for various $\tau$ with a KNN classifier (Figures \ref{fig:1nn_accuracy-MNIST} and \ref{fig:1nn_accuracy-fashion}, $k=1$). KNN algorithms are deterministic and therefore stable. The standard deviations found with KNN classifiers are similar to those found with neural network classifiers, which proves that the instability comes not from the classifier but from the generative models. This is coherent with the fact that, in Figures \ref{fig:diagram-MNIST} and \ref{fig:diagram-fashion}, the reference classifiers trained on true data with eight different seeds show a high stability of the classifier model.
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_knn_accuracy.png}
\caption{Standard deviation of classifier accuracy for 1-NN on MNIST}
\label{fig:1nn_accuracy-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_knn_accuracy.png}
\caption{Standard deviation of classifier accuracy for 1-NN on Fashion-MNIST}
\label{fig:1nn_accuracy-fashion}
\end{subfigure}
\caption{Comparison of models using a nearest neighbor classifier}
\end{figure}
\subsection{Comparison with IS and FID}
We compare our results to the IS and FID methods, both slightly adapted to fit our setting as described previously in Section \ref{sec:methods}.
To compare the different methods easily, we normalize the values so that they have a mean of $0$ and a standard deviation of $1$ across the different models.
For FID, a low value originally means a better model; for easier comparison, we multiply the FID score by $-1$ so that, as for the other evaluation methods, a higher value indicates a better model.
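The normalization and the sign flip for FID can be sketched as follows (the function name and the per-model values are illustrative only):

```python
import numpy as np

def normalize_scores(scores, higher_is_better=True):
    """Z-score a vector of per-model scores (mean 0, std 1 across models);
    flip the sign first for metrics like FID where lower is better."""
    s = np.asarray(scores, dtype=float)
    if not higher_is_better:
        s = -s
    return (s - s.mean()) / s.std()

fid = [12.0, 30.0, 21.0]  # hypothetical per-model FID values
z = normalize_scores(fid, higher_is_better=False)
print(z.round(3))  # the lowest FID now gets the highest normalized score
```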
The results are shown in Figures \ref{fig:Comparison-MNIST} and \ref{fig:Comparison-fashion}.
We add a baseline for each method: for the \textit{fitting capacity}, the baseline is the test accuracy of the classifier trained on true data; the Inception Score baseline is computed on the test data; and the Frechet Inception Distance baseline is computed between train data and test data.
The Inception Score results are completely different between MNIST and Fashion-MNIST, and some models (such as WGAN) radically beat the baseline, whereas with the other methods none of the results outperforms the baseline.
The Frechet Inception Distance computed between the test set and generated data, however, gives coherent results between MNIST and Fashion-MNIST, even if they are not always coherent with ours. For example, VAE does not perform well on FID, while our fitting capacity shows that it can train a classifier quite well, with high stability. The FID baseline outperforms the other models, but the margin between the best model and the baseline is small, whereas it is large for the \textit{fitting capacity}.
The performance of each model is unfortunately specific to each dataset, and the experiments made here are not sufficient to generalize the results to other datasets. This can be seen with models like CGAN or WGAN, whose results differ strongly between MNIST and Fashion-MNIST.
\begin{figure}
\centering
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/mnist_Comparison_Scores.png}
\caption{MNIST Comparison}
\label{fig:Comparison-MNIST}
\end{subfigure}
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{Figures/fashion-mnist_Comparison_Scores.png}
\caption{fashion-MNIST Comparison}
\label{fig:Comparison-fashion}
\end{subfigure}
\caption{Comparison between the highest results from the different approaches. Each result has been normalized to have mean $0$ and standard deviation $1$ across all models.}
\end{figure}
Those results show a high instability for certain models and should therefore be interpreted carefully: different random seeds would give different results. However, they show that it is possible to evaluate and compare generative models on their top performance and their stability with regard to random seeds, as well as to detect failed trainings. We hope that the use of our method will be a step towards clear evaluation and comparison of generative models.
\section{Discussion}
We presented a method to evaluate a generative model on the testing set of a labeled dataset. It assesses how well the generative model learned to generalize and fit a distribution in a conditional setting.
The use of a discriminative model to evaluate generative models, and GANs in particular, has already been experimented with to compare generative models (IS, FID). However, the model used in those methods is pretrained on labeled data, and the evaluation depends on the classifier chosen. This is problematic because the choice of the model biases the comparison.
As a testing set is specifically designed to measure the ability to understand data, it is more representative for assessing the quality of a generated dataset than
the analysis of first and second moments of activations in an empirically chosen model.
Furthermore, our approach relies only on the testing set, and not on a particular generative model or classifier architecture, for result comparison and classification.
In this paper, we applied this method to image samples, which means that we could compare our measure with a visual assessment. Our assessment method can also be used with other kinds of data, as long as labeled data are available to train and evaluate generators.
Unfortunately our evaluation method takes time with regards to other methods because a classifier needs to be trained. However, we can take advantage from both our method and other at the same time for example by applying model selection with FID during training and applying our method for deeper analysis.
Our evaluation was performed on several seeds, datasets and generative models. With the current results, GAN and WGAN seem to be the most efficient solutions. However, this result should be confirmed by experiments on other datasets, generative models and different types of discriminative models to get a more general comparison.
As presented in \cite{White16}, the sampling method of a generator can be optimized, which can have an impact on the data produced. A way to boost the performance of a generator can be to focus on improving the sampling phase instead of the model design or the training. An extension of this work could be to look into the impact of several sampling techniques on the performance of a generative model.
\section{Conclusion}
This paper introduces a method to assess and compare the performances of generative models. By training a classifier on generated samples, we can estimate the ability of a generative model to fit and generalize a testing set. It does not directly assess the realistic character of the generated data, but rather whether their content and variability contain enough information to classify real data.
This method makes it possible to take into account complex characteristics of the generated samples and not only the distribution of their features. Moreover, it does not evaluate generative models by testing whether true data can be discriminated from generated data.
Our results suggest that, to get the best results, a GAN or WGAN approach should be favoured; however, to maximize the chance of obtaining a decent result, CGAN or VAE are preferable.
In order to have a fair comparison, we use the same classifier to evaluate all generative models.
However, any classifier could be used in its place to improve and complete the results; our focus is on keeping the same test set to compare models. For instance, a generator that simply learned to reproduce the training set would beat our results, since no generator is currently able to beat the baseline. We hope, however, that new generators or better classifiers will eventually beat it.
This work is not a comprehensive evaluation of generative models, but we believe the method is simple enough to be easily reproduced to evaluate any generative model.
An example of application where a generative model is used as a replacement for true data is embedded platforms. An autonomous agent can store a generative model to reproduce training data in order to save memory: for example, the fashion-MNIST dataset weighs 55\,MB while our CVAE model weighs only 26.7\,MB. As future work, our approach could be used to optimize the generator size while keeping the ability to reproduce sufficient data to train a classifier.
\section*{Acknowledgement}
We want to thank Florian Bordes for the experiment settings and interesting discussions, as well as Pascal Vincent for his helpful advice. We would also like to thank Natalia D\'iaz Rodriguez and Antonin Raffin for their help in proofreading this article.
\bibliographystyle{abbrv}
\section{Splitting of a cotangent bundle}
\medskip \par
For dynamical systems with a high degree of symmetry it is natural to
try to parametrize as much of the phase space as possible by the global
constants of motion. \par
In the case of geodesic motion on a semisimple group manifold (with the
Hamiltonian being given by the quadratic Casimir invariant) all global
constants of motion are described by the momentum mappings corresponding
to the natural left and right actions of the group on its cotangent
bundle. The equivariance of the momentum mappings allows one to split
the phase space $T^*G$ into the sectors corresponding to the types of
coadjoint orbits. In the case of compact groups this decomposition is
quite simple as one has the unique type of the orbits of maximal dimension.
It has been shown \cite{my} that in such a case one can represent $T^*G$
by the Cartesian square of $G \times { \cal W}$ divided by suitable relations.
(${ \cal W}$ stands for some chosen Weyl chamber, used to parametrize the
space of coadjoint orbits). \par
We call $G \times { \cal W}$ a chiral sector.
The canonical symplectic form of the cotangent bundle pulls back
onto the product of the two sectors as a difference $ \Omega_L - \Omega_R$
of two components, each component living on one sector. \par
Each component is an exact two--form, giving each sector a
structure of a symplectic manifold. \par
Thus the classical model can now be quantized in two different ways:
\begin{enumerate}
\item One can quantize the cotangent bundle. This is straightforward
and yields known results. The operators corresponding to the matrix elements
of any representation of $G$ commute.
\item One can quantize each of the sectors separately. It is much more
complicated, but results in a very interesting class of non--commutative
($ \cal C^*$ for compact groups) algebras describing a new class of quantum
group
manifolds. In each sector there is a non--commutative spectrum generating
algebra (SGA). \\
\end{enumerate}
The following diagram summarizes these ideas:
\begin{picture}(130,95)(0,10)
\put(100,90){ \makebox(35,10)[b]{$ (T^*G, \Omega) $}}
\put(35,45){ \makebox(40,10){$ { \cal H} $}}
\put(105,0){ \makebox(40,10){$ { \cal H}_L \otimes { \cal H}_R $}}
\put(165,40){ \makebox(40,10){$ \matrix{
(G \times { \cal W}) \times (G \times { \cal W}) \cr
\Omega_L - \Omega_R } $}}
\put(20,70){ \makebox(40,10){ \small \sl quantization }}
\put(185,70){ \makebox(40,10){ \small \sl sympl. reduction }}
\put(60,20){ \makebox(20,10){ \small \sl fusion}}
\put(185,15){ \makebox(40,10){ \small \sl quantization}}
\put(100,80){ \vector(-2,-1){40}}
\put(165,60){ \vector(-2,1){35}}
\put(105,20){ \vector(-2,1){45}}
\put(165,30){ \vector(-3,-1){50}}
\put(195,30){ \vector(-3,-1){50}}
\end{picture}
\\[0.5cm]
The detailed description of the above procedure called
chiral splitting and fusion can be found in \cite{my}. \\
\section{Symplectic structure of a sector}
The symplectic structure of the chiral sector can be most transparently
described in terms of the Chevalley basis of ${ \cal G}$.
In case of compact groups the space of coadjoint orbits can be conveniently
parametrized by choosing some Weyl chamber $W$ in ${ \cal G}$ (i.e. the dual
of a Cartan subalgebra divided by the Weyl group of its discrete
symmetries). $W$ intersects each regular orbit exactly once.
We shall use the following notation \cite{Hum}. The set of simple roots
dual to the chosen Weyl chamber is $ \Delta$, the
set of roots of the Lie algebra is $ \Phi$, and the set of positive roots
is
$ \Phi_+$. The element of the Cartan subalgebra $K$--dual to the root
$ \beta$
is $t_{ \beta}=i [e_ \beta,e_{- \beta}]$. In addition we introduce
$ \theta^{ \alpha_i}$, the one--form dual to the simple root
$t_{ \alpha_i}$
and $ \omega^\beta$, the left invariant one--form dual to the root
vector
$e_ \beta$. Finally, $w_i$ is the coordinate in the Weyl chamber in the
basis dual to the one formed by $t_{ \alpha_i}$.
\par
The left component
(the symplectic form of the left sector) is given by
\beq
\Omega_L= \sum_{ \alpha_i \in \Delta} dw_i \wedge \theta^{ \alpha_i}
+i \sum_{ \beta \in \Phi_+} \lan w,t_ \beta \ran \omega^\beta \wedge
\omega^{- \beta}.
\label{sectsym}
\enq
It has a global symplectic potential:
\beq
\Omega_L = d \sum_i w_i \theta^{ \alpha_i}.
\enq
For the 'right' component the expressions are analogous.
\par
The symplectic structure gives Poisson brackets of matrix elements
in arbitrary representations of $G$ as:
\beq
\{T_1 \otimes T_2 \}_M(g) = (T_1 \otimes T_2)(g) r_{12}(w)
\enq
where
\beq
r_{12}(w) = \sum_{ \beta \in \Phi_+} \frac{i}{ \lan w,t_ \beta \ran}
[ \tau_1(e_{- \beta}) \otimes \tau_2(e_ \beta)-
\tau_1(e_{ \beta}) \otimes \tau_2(e_{- \beta})].
\label{r}
\enq
together with
\beq
\{w_i,T \}(g)=T(g) \tau(t_{ \alpha_i}). \label{PB wi M}
\enq
($T$ and $\tau$ label representations of $G$, and their differentials,
respectively.)
\section{Example: G=SU(2)}
In the fundamental representation $T_f$:
\beq
T_f(g) = \pmatrix{ a & -b^* \cr b & a^* } \; ; \; w \in { \bf R}_+
\enq
\beq
a^{*}a+b^{*}b = 1 .
\enq
\begin{eqnarray}
\{ a^{*}, a \}_M = {i \over w} b b^{*} &,& \quad \{ \, a , b \, \}_M = 0
\qquad ,
\nonumber \\
\{ a , b^{*} \}_M = {i \over w} a b^{*} &,& \quad
\{ b , b^{*} \}_M = - {i \over w}a a^{*} ,
\label{Poisson a b} \\
\{ a , w \, \}_M = - i a \ &,& \quad \{ a^{*}, w \}_M = i a^{*} ,
\nonumber \\
\{ b , w \, \}_M = - i b \ &,& \quad \{ b^{*}, w \}_M = i b^{*} ,
\label{Poisson w}
\end{eqnarray}
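As a consistency check, the quadratic brackets above can be recovered numerically from the general $r$-matrix formula $\{T_1 \otimes T_2\}_M(g) = (T_1 \otimes T_2)(g)\, r_{12}(w)$ of the previous section. The sketch below is an illustration only: it assumes the standard raising/lowering matrices for $e_{\pm\beta}$ in the fundamental representation and $\lan w, t_\beta \ran = w$ for the single positive root of $su(2)$, and it plugs in an arbitrary numerical group element.

```python
import math

# Fundamental representation of SU(2): T(g) = [[a, -b*], [b, a*]]
a0, b0 = 0.6 + 0.2j, 0.7 - 0.1j
norm = math.sqrt(abs(a0) ** 2 + abs(b0) ** 2)
a, b = a0 / norm, b0 / norm    # enforce a a* + b b* = 1
w = 1.7                        # Weyl-chamber coordinate (any w > 0)

T = [[a, -b.conjugate()], [b, a.conjugate()]]
ep = [[0, 1], [0, 0]]          # e_{+beta} (assumed convention)
em = [[0, 0], [1, 0]]          # e_{-beta}

def kron(A, B):
    """Kronecker product: row (i,k) -> 2i+k, column (j,l) -> 2j+l."""
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul(A, B):
    return [[sum(A[p][m] * B[m][q] for m in range(4)) for q in range(4)]
            for p in range(4)]

# r_12(w) = (i/w) (e_- (x) e_+  -  e_+ (x) e_-)
K1, K2 = kron(em, ep), kron(ep, em)
r = [[(1j / w) * (K1[p][q] - K2[p][q]) for q in range(4)] for p in range(4)]

M = matmul(kron(T, T), r)      # {T_1 (x) T_2}_M(g)

def bracket(i, j, k, l):
    """{T_ij, T_kl}, read off from the 4x4 matrix M (1-based matrix indices)."""
    return M[2 * (i - 1) + (k - 1)][2 * (j - 1) + (l - 1)]
```

Reading off entries (and remembering $T_{12} = -b^*$), one recovers $\{a,b\}=0$, $\{a^*,a\}=(i/w)\,b b^*$, $\{a,b^*\}=(i/w)\,a b^*$ and $\{b,b^*\}=-(i/w)\,a a^*$, in agreement with (\ref{Poisson a b}).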
Geometric Quantization \cite{Wood} \cite{Snia} of this structure gives:
\beq
{ \hat a}^{ \dagger}{ \hat a}+{ \hat b}^{ \dagger}{ \hat b} = 1 .
\enq
\beq
\Delta = 1 - { \hat a}{ \hat a}^\dagger - { \hat b}{ \hat b}^\dagger \;(=
\hbar \hat w ^{-1} )
\enq
\beq
\label{niekom}
\matrix{
(1+ \Delta) { \hat a}^\dagger{ \hat a} = { \hat a}{ \hat a}^\dagger + \Delta
\cr
\cr
(1+ \Delta) { \hat b}^\dagger{ \hat b} = { \hat b}{ \hat b}^\dagger + \Delta
\cr
\cr
(1+ \Delta) { \hat a}^\dagger{ \hat b} = { \hat b}{ \hat a}^\dagger \cr
\cr
{ \hat a}{ \hat b} = { \hat b}{ \hat a} }
\enq
We can say that the above relations define the structure of a non--commutative
group manifold, namely a {\it quantum} $S^3$.
In the representation Hilbert space there is a unique 'vacuum' state
$ \varphi_o$ satisfying
\beq
{ \hat a}^\dagger \varphi_o = 0 = { \hat b}^\dagger \varphi_o
\enq
We can expect that the procedure of geometric quantization will enable us
to describe the corresponding non--commutative manifolds for all compact
groups.
\section{Trace of the Right Action Symmetry}
\par
By performing the chiral splitting we have broken the right symmetry of
the dynamical system. This happened because we have chosen some fixed Weyl
chamber in order to parametrize the coadjoint orbits. \\
The symmetry is restored after fusion of both sectors. One may ask however
whether there is a trace of the broken symmetry in the chiral sector? \\
The natural right action of $G$ on the sector is given by:
\beq
(G \times W) \times G \ni (g,w,h) \buildrel{R} \over \mapsto (gh^{-1},w)
\in G \times W.
\enq
We assume that the acting group $G$ is equipped with the bracket $ \{.,. \}_G$,
such that the above action preserves the quadratic Poisson brackets { \it
for the group elements} in the chiral sector. We should stress that we do
not demand the preservation of the brackets of functions on $ \cal W$
with the group elements. \par
The equation for the bracket $ \{.,. \}_G$ reads:
\beq
\label{eqq}
\matrix{ \{T_1 \otimes T_2 \}_M(gh^{-1}) = \cr \cr
= \{T_1 \otimes T_2 \}_M(g) (T_1 \otimes T_2)(h^{-1}) +
(T_1 \otimes T_2)(g) \{T_1 \otimes T_2 \}_G(h^{-1}) }
\enq
In terms of the Poisson tensor ${ \cal P}_G$ corresponding to $\{.,.\}_G$,
the unique solution to (\ref{eqq}) is:
\beq
{ \cal P}_G = L^*_g r - R_g^{*-1} r,
\enq
where $r$ is that of (\ref{r}). \par
Question: is ${ \cal P}_G$ Lie--Poisson? It has the familiar form of
a 'Sklyanin bracket', but does the bivector $r$ satisfy the YBE? \\
For $SU(2)$ the answer is affirmative.
In this case we obtain a family of Lie--Poisson structures labeled by the
Weyl chamber parameter $w$:
\begin{eqnarray}
\{ \, \alpha , \beta \, \}_G &=&- {i \over w} \alpha \beta \, \quad ;
\quad
\{ \alpha ^{*}, \beta ^{*} \}_G = {i \over w} \alpha ^{*} \beta ^{*} ;
\nonumber \\
\{ \alpha , \beta ^{*} \}_G &=& - {i \over w} \alpha \beta ^{*} \quad ;
\quad
\{ \alpha , \alpha ^{*} \}_G = {2i \over w} \beta ^{*} \beta ;
\nonumber \\
\{ \beta ^{*}, \beta \}_G &=& 0 \quad .\phantom{{2i \over w} \beta ^{*}
\beta}
\label{Woron}
\end{eqnarray}
For the compact groups of higher rank the tensor ${ \cal P}_G$ is {\it not}
a Poisson one as it breaks the Jacobi identity for the corresponding bracket
of functions.
It is not clear to us at this moment how to realize the quantum version
of the above symmetry as we don't know which of the relations
(\ref{niekom}) are independent. We hope to get back to this problem
in a forthcoming paper. \\
\section*{Some comments and Open Problems}
In contrast to Theorem~\ref{main}, Theorem~\ref{main2} is true for the Euclidean plane $E^2$ even in a stronger form: for any subset $C\subset E^2$ not lying on a line and any partition $E^2=A_1\cup A_2$ one of the cells of the partition contains an unbounded subset symmetric with respect to some center $c\in C$, see \cite{B2}.
Having in mind this result let us call a subset $C$ of a Lobachevsky or Euclidean space $X$ {\em central for (Borel) $k$-partitions} if for any partition $X=A_1\cup\dots\cup A_k$ of $X$ into $k$ (Borel) pieces one of the pieces contains an unbounded monochromatic subset $S\subset X$, symmetric with respect to some point $c\in C$. By $c_k(X)$ (resp. $c^B_k(X)$) we shall denote the smallest size of a subset $C\subset X$, central for (Borel) $k$-partitions of $X$. If no such set $C$ exists, then we put $c_k(X)=\infty$ (resp. $c_k^B(X)=\infty$), where $\infty$ is assumed to be greater than any cardinal number. It follows from the definition that $c_k^B(X)\le c_k(X)$.
We have a lot of information on the numbers $c_k^B(E^n)$ and $c_k(E^n)$ for Euclidean spaces $E^n$, see \cite{B2}. In particular, we know that
\begin{enumerate}
\item $c_2(E^n)=c_2^B(E^n)=3$ for all $n\ge 2$;
\item $c_3(E^3)=c_3^B(E^3)=6$;
\item $12\le c_4^B(E^4)\le c_4(E^4)\le 14$;
\item $n(n+1)/2\le c_n^B(E^n)\le c_n(E^n)\le 2^n-2$ for every $n\ge 3$.
\end{enumerate}
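These bounds admit a quick mechanical sanity check. The snippet below is only an illustration: it confirms that the general lower bound $n(n+1)/2$ never exceeds the upper bound $2^n-2$ for small $n\ge 3$, and that the quoted values for $E^3$ and $E^4$ fit between them.

```python
def lower(n):
    return n * (n + 1) // 2   # n(n+1)/2, lower bound for c_n^B(E^n)

def upper(n):
    return 2 ** n - 2         # 2^n - 2, upper bound for c_n(E^n)

# bounds for n = 3..12; note n = 3 saturates both: 6 <= c_3(E^3) <= 6
checks = {n: (lower(n), upper(n)) for n in range(3, 13)}
```

For $n=4$ the interval is $[10,14]$, consistent with the sharper quoted bounds $12\le c_4^B(E^4)\le c_4(E^4)\le 14$.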
Much less is known about the numbers $c_k(H^n)$ and $c_k^B(H^n)$ in the hyperbolic case.
Theorem~\ref{main2} yields the upper bound $c_2(H^2)\le 3$. In fact, 3 is the exact value of $c_2(H^n)$ for all $n\ge 2$.
\begin{proposition} $c_2^B(H^n)=c_2(H^n)=3$ for all $n\ge 2$.
\end{proposition}
\begin{proof} The upper bound $c_2(H^n)\le c_2(H^2)\le 3$ follows from Theorem~\ref{main2}. The lower bound $3\le c_2^B(H^n)$ will follow as soon as for any two points $c_1,c_2\in H^n$ we construct a partition $H^n=A_1\cup A_2$ in two Borel pieces containing no unbounded set, symmetric with respect to a point $c_i$. To construct such a partition, consider the line $l$ containing the points $c_1,c_2$ and decompose $l$ into two half-lines $l=l_1\sqcup l_2$. Next, let $H$ be an $(n-1)$-hyperplane in $H^n$, orthogonal to the line $l$. Let $S$ be the unit sphere in $H$ centered at the intersection point of $l$ and $H$. Let $S=B_1\cup B_2$ be a partition of $S$ into two Borel pieces such that no antipodal points of $S$ lie in the same cell of the partition.
For each point $x\in H^n\setminus l$ consider the hyperbolic plane
$P_x$ containing the points $x,c_1,c_2$. The complement
$P_x\setminus l$ decomposes into two half-planes $P^+_x\cup P^-_x$
where $P_x^+$ is the half-plane containing the point $x$. The plane
$P_x$ intersects the hyperplane $H$ by a hyperbolic line containing
two points of the sphere $S$. Finally put
$$A_i=l_i\cup\{x\in H^2\setminus l: P^+_x\cap B_i\ne\emptyset\}$$for $i\in\{1,2\}$.
It is easy to check that $A_1\sqcup A_2=H^n$ is the desired partition of the hyperbolic space into two Borel pieces none of which contains an unbounded subset symmetric with respect to one of the points $c_1$, $c_2$.
\end{proof}
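The auxiliary antipodal-free partition of the sphere $S$ used in this proof is easy to realize explicitly. For the circle $S^1$ the two half-open semicircles already work; the snippet below (a pure illustration, not part of the proof) checks the defining property, that no antipodal pair falls into one cell, on a sample of points.

```python
import math

def cell(theta):
    """Half-open semicircle partition of S^1 into two Borel cells."""
    return 1 if (theta % (2 * math.pi)) < math.pi else 2

# antipode of angle theta is theta + pi; check no pair shares a cell
samples = [k * 2 * math.pi / 1000 for k in range(1000)]
antipodal_free = all(cell(t) != cell(t + math.pi) for t in samples)
```

The same idea, applied fibre-wise, gives Borel antipodal-free partitions of higher-dimensional spheres.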
The preceding proposition implies that the cardinal numbers $c_2(H^n)$ are finite.
\begin{problem} For which numbers $k,n$ are the cardinal numbers $c_k(H^n)$ and $c_k^B(H^n)$ finite? Is it true for all $k\le n$?
\end{problem}
Except for the equality $c_2(E^n)=3$, we have no information on the numbers $c_k(E^n)$ with $k<n$.
\begin{problem} Calculate (or at least evaluate) the numbers $c_k(E^n)$ and $c_k(H^n)$ for $2<k<n$.
\end{problem}
In all the cases where we know the exact values of the numbers $c_k(E^n)$ and $c_k^B(E^n)$ we see that those numbers are equal.
\begin{problem} Are the numbers $c_k(E^n)$ and $c_k^B(E^n)$ (resp. $c_k(H^n)$ and $c_k^B(H^n)$) equal for all $k,n$?
\end{problem}
Having in mind that each subset not lying on a line is central for 2-partitions of the Euclidean plane, we may ask about the same property of the Lobachevsky plane.
\begin{problem} Is any subset $C\subset H^2$ not lying on a line central for (Borel) 2-partitions of the Lobachevsky plane $H^2$?
\end{problem}
Finally, let us ask about the numbers $c_k^B(H^2)$ and $c_k(H^2)$.
Observe that Theorem~\ref{main} guarantees that $c^B_k(H^2)\le \mathfrak c$ for all $k\in\mathbb N$. Inspecting the proof we can see that this upper bound can be improved to $c^B_k(H^2)\le\mathrm{non}(\mathcal M)$ where $\mathrm{non}(\mathcal M)$ is the smallest cardinality of a non-meager subset of the real line. It is clear that $\aleph_1\le\mathrm{non}(\mathcal M)\le \mathfrak c$. The exact location of the cardinal $\mathrm{non}(\mathcal M)$ on the interval $[\aleph_1,\mathfrak c]$ depends on the axioms of Set Theory, see \cite{Bl}. In particular, the inequality $\aleph_1=\mathrm{non}(\mathcal M)<\mathfrak c$ is consistent with ZFC.
\begin{problem} Is the inequality $c_k^B(H^2)\le \aleph_1$ provable in ZFC? Are the cardinals $c_k^B(H^2)$ countable? finite?
\end{problem}
The latter problem asks if $H^2$ contains a countable (or finite) central set for Borel $k$-partitions of the Lobachevsky plane. Inspecting the proof of Theorem~\ref{main} we can see that it gives an ``approximate'' answer to this problem:
\begin{proposition}\label{main3} For any $k\in\mathbb N$ there is a finite subset $C\subset H^2$ of cardinality $|C|\le k(k+1)/2$ such that for any partition $H^2=B_1\cup\dots\cup B_k$ of $H^2$ into $k$ Borel pieces and for any open neighborhood $O(C)\subset H^2$ of $C$ one of the pieces $B_i$ contains an unbounded subset $S\subset B_i$ symmetric with respect to some point $c\in O(C)$.
\end{proposition}
\begin{remark} For further results and open problems related to symmetry and colorings see the surveys \cite{BP2}, \cite{BVV} and the list of problems \cite[\S4]{BBGRZ}.
\end{remark}
\section{Detection Method}\seclab{method}
Cosmic $\nu_{\tau}$ can produce taus\footnote{Other neutrino flavors can be neglected, as the electron range in matter at these energies is too short and the muon decay length too large.} under the Earth surface through charged-current interactions. Taus may then exit and decay in the atmosphere, generating Earth-skimming extensive air showers
(EAS)~\cite{Fargion00,Bertou04}. EAS emit coherent electromagnetic radiation at frequencies of a few to hundreds of MHz, detectable by radio antennas for shower energies $E \gtrsim 3\cdot10^{16}$ eV~\cite{CODALEMA2005,LOPES2005}.
The strong beaming of the electromagnetic emission, combined with the transparency of the atmosphere to radio waves, will allow the radiodetection of EAS initiated by tau decays at distances up to several tens of kilometers (see \figref{fig}), making radio antennas ideal instruments for the search of cosmic neutrinos. Furthermore, antennas offer practical advantages (e.g. limited unit cost, easiness of deployment) that allow the deployment of an array over very large areas, as required by the expected low neutrino event rate.
Remote sites, with low electromagnetic background, should obviously be considered for the array location. In addition, mountain ranges are preferred, first because they offer an additional target for the neutrinos, and also because mountain slopes are better suited to the detection of Earth-skimming showers compared to flat areas which are parallel to the neutrino-induced EAS trajectories.
GRAND antennas are foreseen to operate in the $30-100$\,MHz band. Below this range, short-wave background prevents detection, while coherence of radio emission fades above it.
However, an extension of the antenna response up to 200 or 300\,MHz would enable us to better observe the Cherenkov ring associated with the air shower \cite{Alvarez2012}, which represents a sizable fraction of the total electromagnetic signal at these frequencies. This could provide an unambiguous signature for background rejection.
\section{GRAND layout and neutrino sensitivity}\seclab{sensitivity}
We present here a preliminary evaluation of the potential of GRAND for the detection of cosmic neutrinos, based on the simulated response of a 90\,000-antenna setup deployed on a square layout of 60\,000\,km$^2$ in a remote mountainous area, the Tianshan mountains in the XinJiang province, China.
{\bf Simulation method.}
We perform a 1D tracking of a primary $\nu_{\tau}$, simulated down to the converted tau decay.
We assume standard rock with a density of 2.65~g/cm$^3$ at sea level and above, while the Earth core is modeled following the Preliminary Reference Earth Model \cite{Dziewonski81}.
The simulation of the deep inelastic scattering of the neutrinos is performed with Pythia6.4, using the CTEQ5d parton distribution functions combined with \cite{Gandhi98} for cross-section calculations. The propagation of the outgoing tau is simulated using randomized values drawn from parameterisations of GEANT4.9 probability distributions for
tau path length and proper time. Photonuclear interactions in GEANT4.9 have been extended above PeV energies following \cite{Dutta00}. The tau decay is simulated using the TAUOLA package. The radiodetection of neutrino-initiated EAS is simulated in the following way:
\begin{itemize}
\item for a limited set of $\nu_{\tau}$ showers simulated with ZHaireS \cite{Zhaires} at various energies (see \figref{fig}), we determine a conical volume inside which the electric field is above the expected detection threshold of the GRAND antennas (30~$\mu$V/m in an aggressive scenario, 100~$\mu$V/m in a conservative one);
\item from this set of simulations, we parametrize the shape (apex angle and height) of this detection cone as a function of energy;
\item for each neutrino-initiated EAS in our simulation, we compute the expected cone shape and position, and select the antennas located inside the corresponding volume, taking into account signal shadowing by mountains;
\item if a cluster of 8 neighbouring units can be found among these selected antennas, we consider that the primary $\nu_{\tau}$ is detected.
\end{itemize}
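A minimal, purely schematic implementation of the last two steps might look as follows. Everything here is a stand-in, not the parametrization used in the actual simulation: the flat antenna grid, the $\sim$800\,m spacing (implied by 90\,000 units over 60\,000\,km$^2$), the 2-D projection of the cone onto the ground, and the reading of "a cluster of 8 neighbouring units" as eight selected antennas within roughly two grid steps of one another are all assumptions for illustration.

```python
import math

SPACING = 0.8  # km; ~sqrt(60000 km^2 / 90000 antennas) (assumed)

# toy flat array: x in (0, 20] km downstream of the tau decay, y in [-8, 8] km
antennas = [(SPACING * i, SPACING * j)
            for i in range(1, 26) for j in range(-10, 11)]

def in_footprint(p, apex, half_angle, length):
    """Schematic 2-D projection of the detection cone onto the ground."""
    dx, dy = p[0] - apex[0], p[1] - apex[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d > length:
        return False
    return abs(math.atan2(dy, dx)) <= half_angle  # cone axis along +x

def cluster_detected(selected, radius, n_min=8):
    """Is there a selected antenna with >= n_min selected units nearby?"""
    for p in selected:
        near = sum(1 for q in selected
                   if math.hypot(p[0] - q[0], p[1] - q[1]) <= radius)
        if near >= n_min:
            return True
    return False

def detected(half_angle, length):
    sel = [p for p in antennas
           if in_footprint(p, (0.0, 0.0), half_angle, length)]
    return cluster_detected(sel, radius=1.7 * SPACING)
```

A wide, long footprint (energetic shower) then triggers the array, while a narrow, short one does not, which is the qualitative behaviour the energy-dependent cone parametrization encodes.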
{\bf Results and implications.}
Assuming a 3-year observation with no neutrino candidate on this 60\,000 km$^2$ simulated array, a 90\%\,C.L. integral limit of $6.6\times10^{-10}$~GeV$^{-1}$~cm$^{-2}$~s$^{-1}$ can be derived for an $E^{-2}$ neutrino flux in our aggressive scenario ($1.3\times10^{-9}$ in our conservative scenario). This is a factor $\ge 5$ better than other projected giant neutrino telescopes at EeV energies \cite{ARA2016}.
This preliminary analysis also shows that mountains constitute a sizable target for neutrinos, with $\sim$50\% of down-going events coming from neutrinos interacting inside the mountains.
It also appears that specific parts of the array (large mountain slopes facing another mountain range at distances of $30-80$\,km) are associated with a detection rate well above the average. By splitting the detector into smaller sub-arrays of a few 10\,000\,km$^2$ each, deployed solely on favorable sites, an order-of-magnitude improvement in sensitivity could be reached with only a factor-of-3 increase in detector size, compared to the 60\,000\,km$^2$ simulation area. This is the envisioned GRAND setup.
This neutrino sensitivity corresponds to a detection rate of 1 to 60 cosmogenic events per year. Besides, the angular resolution on the arrival directions, computed following \cite{Ardouin11}, could be as low as 0.05$^\circ$ for a 3~ns precision on the antenna trigger timing, opening the door for neutrino astronomy.
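The quoted figure is consistent with a simple back-of-the-envelope estimate: a timing precision $\delta t$ over a baseline $d$ limits the reconstruction of the wavefront direction to $\delta\theta \sim c\,\delta t / d$. The numbers below, in particular the $\sim$900\,m effective baseline, are assumptions for illustration; the actual resolution also depends on the number of triggered antennas and on the reconstruction method of \cite{Ardouin11}.

```python
import math

C = 299_792_458.0   # m/s, speed of light
dt = 3e-9           # s, trigger-timing precision quoted in the text
baseline = 900.0    # m, hypothetical effective antenna baseline (~grid spacing)

# angular uncertainty ~ (timing jitter x c) / baseline
dtheta_deg = math.degrees(C * dt / baseline)
```

This gives $\delta\theta \approx 0.06^\circ$, the same order of magnitude as the 0.05$^\circ$ quoted for GRAND.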
\begin{figure*}
\centering
\includegraphics[width=5cm,clip]{taudecay.png}
\includegraphics[width=5cm,clip]{sens.png}
\caption{ {\it Left:} Expected radio footprint for a $5\cdot10^{17}$~eV horizontal shower induced by a tau decay at the origin. The color coding corresponds to the maximum amplitude of the electric field integrated over the 30--80\,MHz range (in $\mu$V/m). The sky background level is $\sim$15\,$\mu$V/m in this frequency range. Note the different x and y scales. {\it Right:} Differential sensitivity of the 60\,000 km$^2$ simulated setup (brown region; top limit: conservative, bottom: aggressive) and of the projected GRAND array (brown thick curve). The integral sensitivity limit for GRAND is shown as a thick line. We also show the expected limit for the projected final configuration of ARA~\cite{ARA2016} and theoretical estimates for cosmogenic neutrino fluxes \cite{KAO10}: the blue line stands for the most pessimistic fluxes, the gray-shaded region for the ``reasonable'' parameter range. All curves are for single-flavor neutrino fluxes.}
\figlab{fig}
\end{figure*}
\section{Background rejection}\seclab{bg}
A few tens of cosmogenic neutrinos per year are expected in GRAND. The rejection of events initiated by high-energy particles other than cosmic neutrinos should be manageable \cite{ICRC2015}. The event rates associated with terrestrial sources (human activities, thunderstorms, etc.) are difficult to evaluate, but an estimate can be derived from the results of the Tianshan Radio Experiment for Neutrino
Detection (TREND). TREND~\cite{Ardouin11} is an array of 50 self-triggered antennas deployed over a surface $\gtrsim1$\,km$^2$ in a populated valley of the Tianshan mountains, with antenna design and sensitivity similar to what is foreseen for GRAND. The observed rate of events triggering six selected TREND antennas separated by $\sim$800~m over a sample period of 120 live days was found to be around 1~day$^{-1}$, with two-thirds of them coming in bursts of events, mostly due to planes. Direct extrapolation from TREND results thus leads to an expected event rate of $\sim1$~Hz for GRAND for a trigger algorithm based on coincident triggers on neighbouring antennas and a rejection of events bursts.
Amplitude patterns on the ground (emission beamed along the shower axis and signal enhancement on the Cherenkov ring \cite{Alvarez2012}), as well as wave polarization \cite{Aab14} are strong signatures of neutrino-initiated EAS that could provide efficient discrimination tools for the remaining background events.
These options are being investigated within GRAND, through simulations and experimental work. In 2017 the GRANDproto project \cite{Gou15} will deploy a hybrid detector composed of 35 3-arm antennas (allowing for a complete measurement of the wave polarization) and 24 scintillators, that will cross-check the EAS nature of radio-events selected from a polarization signature compatible with EAS.
\section{GRAND development plan}\seclab{engineering}
Before considering the complete GRAND layout, several validation steps are needed. The first one will consist of establishing the autonomous radiodetection of very inclined EAS with high efficiency and excellent background rejection, with a dedicated setup of size $\sim 300$\,km$^2$. This array will be too small to perform a neutrino search, but cosmic rays should be detected above $10^{18}\,$eV. Their reconstructed properties (energy spectrum, composition) will enable us to validate this stage. The absence of events below the horizon will confirm our EAS identification strategy. A second array, 10 times larger, will allow us to test the technological choices for the DAQ chain, trigger algorithm and data transfer. This will mark the start of GRAND data taking, foreseen in the mid-2020s.
\section{Conclusion}
The GRAND project aims at building the ultimate next-generation neutrino telescope. Preliminary simulations indicate that
a sensitivity guaranteeing the detection of cosmogenic neutrinos is achievable. Work is ongoing to assess GRAND's achievable science goals and the corresponding technical constraints. Background rejection strategies and technological options are being investigated.\\
\noindent\footnotesize{{\it Acknowledgements.} The GRAND and GRANDproto projects are supported by the Institut Lagrange de Paris, the France China Particle Physics Laboratory, the Natural Science Foundation of China (Nos.11135010, 11375209), the Chinese Ministry of Science and Technology and the S\~ao Paulo Research Foundation FAPESP (grant 2015/15735-1).}
\section{Introduction} \label{sec:intro}
Over the duration of its operation, from 2009 to 2013, the Herschel Space Observatory (HSO) enabled observations of the fundamental rotational transitions of a variety of molecular hydrides and hydride ions, several of them newly discovered. Many of these transitions cannot be observed from the ground at all because these high-frequency lines lie in parts of the sub-millimetre (sub-mm) and far-infrared (FIR) wavelength range that are blocked by absorption in the Earth's atmosphere. One of the many highlights of the Herschel mission, and a real surprise, has been the fortuitous detection of the ${J=1-0}$ and ${J=2-1}$ transitions of argonium, ArH$^{+}$, in emission towards the Crab Nebula by \citet{barlow2013detection}. Following it, \citet{schilke2014ubiquitous} were able to successfully assign previously unidentified absorption features near 617\,GHz to ArH$^+$, along the lines-of-sight (LOS) towards five star-forming regions. It turned out that, fortunately, the ${J = 1-0}$ transition of ArH$^{+}$ lies at a wavelength that is accessible with ground-based telescopes at high mountain sites under exceptional weather conditions, and \citet{Jacob2020Arhp} were recently able to detect ArH$^{+}$ towards seven more sight lines in the inner Galaxy using the Atacama Pathfinder Experiment (APEX) 12\,m sub-mm telescope. All of these observations confirm, as first discussed by \citet{schilke2014ubiquitous}, that the ArH$^{+}$ molecular ion exclusively probes diffuse atomic material and that it is ubiquitously present in the Milky Way. This raises questions about the existence and nature of ArH$^{+}$ in extragalactic sources.
Towards external galaxies, both the ArH$^+$ ${J=1-0}$ and ${J=2-1}$ lines have remained undetected in single-sideband observations covering their corresponding line frequencies, which were carried out using the Herschel Spectral and Photometric Imaging REceiver \citep[SPIRE,][]{griffin2010herschel} on board the HSO. The apparent non-detection of these lines is likely caused by a combination of effects, including smearing by the spectrometer, which results in unresolved spectral line profiles, alongside blending from nearby lines (such as HCN-v2 (7-6) at 623.3635\,GHz and H$_2$O $2_{2,0}$--$2_{1,1}$ at 1228.79\,GHz). In addition, the ringing noise introduced by uncertainties in the fitting and subtraction of strong lines in the SPIRE FTS spectra using sinc functions can affect the line profiles of the underlying weak absorption. It is for these reasons that ArH$^+$ has remained undetected in observations of extragalactic sources other than PKS 1830$-$211.
Therefore, very little is known about the nature and abundance of ArH$^+$ outside of the Milky Way. To date, there exists only a single detection of ArH$^{+}$ in extragalactic space, which was carried out by \citet{Mueller2015}. Using the Atacama Large Millimetre/sub-millimetre Array \citep[ALMA,][]{wootten2009atacama}, these authors were able to detect the $J=1-0$ transitions of $^{36}$ArH$^+$ and $^{38}$ArH$^+$ through the intermediate redshift ${z = 0.8858}$ foreground galaxy absorbing the continuum of the gravitational lens-magnified blazar, PKS 1830$-$211 along two different sight lines.
Primarily residing in atomic gas with molecular hydrogen fractions, $f_{\text{H}_{2}}$, between $10^{-4}$ and $10^{-2}$ \citep{schilke2014ubiquitous, neufeld2016chemistry, Jacob2020Arhp}, ArH$^{+}$ has an abundance that is sensitive to the X-ray and cosmic-ray fluxes permeating the surrounding medium, as its formation is initiated by the ionisation of atomic argon by X-rays and/or cosmic-ray particles, followed by the reaction of Ar$^{+}$ with H$_{2}$. Therefore, observations of the ground state transitions of ArH$^{+}$ provide a unique tool for probing atomic gas and estimating ionisation rates. Regions permeated by a high flux of cosmic rays can be heated to high gas temperatures, which in turn can strongly influence the initial conditions of star formation and the initial mass function \citep[IMF,][]{papadopoulos2011extreme}.
In this paper we present our search for ArH$^{+}$ towards three luminous galaxies, Arp~220, NGC~253 and NGC~4945, using the SEPIA660 receiver on the APEX 12\,m telescope. All three systems have been extensively studied over a wide range of wavelengths. In particular, a plethora of molecules have been observed towards these sources, including common hydrides and their cations, for example, OH, OH$^+$, H$_{2}$O, H$_{2}$O$^+$ \citep{Gonzalez2013, van2016ionization, Gonzales2018}. Arp~220 is the archetypical ultra-luminous infrared galaxy (ULIRG). A merging system, it hosts two compact nuclei \citep{baan1995nuclear, rodriguez2005vla} that are surrounded by an immense amount of gas and dust \citep{scoville1997arcsecond, engel2011arp}, with dust temperatures between 90 and 160\,K \citep{sakamoto2008submillimeter} and a luminosity of 0.2--$1\times10^{12}\,L_{\odot}$. Notably, the intense starburst activity within the dense interstellar medium (ISM) of its nuclear regions causes stars to form at a rate of up to 50--100\,$M_{\odot}$\,yr$^{-1}$ \citep{Smith1998}, which is ${>\!50}$ times that in the disk of the Milky Way galaxy today \citep{Robitaille2010} and ${\sim\!1000}$ times that in its central molecular zone \citep{Immer2012}.
NGC~253 is a prototypical barred starburst galaxy in the Sculptor group, with an infrared luminosity of $1.7\times10^{10}\,L_{\odot}$ \citep{radovich2001far}. Its strong nuclear starburst drives a ${\sim\!100}\,$pc-scale molecular gas outflow/wind, as seen, for example, in observations of CO \citep{Bolato2013}. It has been suggested that at the centre of this barred spiral a weak AGN coexists with the strong starburst, an issue that is still under debate \citep[see, for example,][]{muller2010stellar, Gutierrez2020}. NGC~4945 is an infrared-bright galaxy in the Centaurus group, with a luminosity of $2.4\times10^{10}\,L_{\odot}$ \citep{brock1988far} and a Seyfert nucleus, signifying an accreting supermassive black hole. It is the brightest Seyfert 2 galaxy and hosts a deeply enshrouded AGN at its centre, which is revealed by its emission in the 100-keV sky \citep{Iwasawa1993}. The AGN is surrounded by a strongly absorbing, inclined circumnuclear starburst ring with a radius of ${\sim\! 50}\,$pc \citep{Chou2007}. With comparable star-formation rates of a few $M_{\odot}$\,yr$^{-1}$ \citep{Bolato2013, Bendo2016}, the similarities (or lack thereof) in the abundances and gas properties traced by ArH$^+$ in these sources will shed light on their nuclear environments.
The observations are described in Sect.~\ref{sec:observations}, followed by a qualitative and quantitative analysis of the data and a discussion of the results in Sects.~\ref{sec:results} and \ref{sec:discussion}. Finally, in Sect.~\ref{sec:conclusions} we discuss our main findings and summarise our results.
\section{Observations} \label{sec:observations}
Using the SEPIA660 receiver \citep{belitsky2018sepia, hesper2018deployable} of the Swedish-ESO PI Instrument for APEX (SEPIA) on the APEX 12\,m sub-mm telescope, we carried out observations of the ${J = 1-0}$ transition of $^{36}$ArH$^{+}$ (hereafter ArH$^{+}$) between 2019 July and August (Project Id: M9519C$\_$103). SEPIA660 is a two-sideband (2SB), dual-polarisation receiver that covers a bandwidth of 8\,GHz per sideband, with a sideband rejection level of $>$15\,dB. The observations were carried out in wobbler-switching mode, using a wobbler throw of 180$^{\prime\prime}$ in azimuth at a rate of 1.5\,Hz. Combined with the atmospheric stability at the high APEX site, this observing method allows a reliable recovery of the sources' continuum levels.
Properties of our source sample are summarised in Table~\ref{tab:source_properties}.
\begin{table*}
\begin{center}
\caption{Properties of studied sources.}
\begin{tabular}{lrr ccc}
\hline\hline
Source & \multicolumn{2}{c}{Coordinates (J2000)} & \multicolumn{1}{c}{D} & \multicolumn{1}{c}{$\upsilon_{\rm helio}$} & \multicolumn{1}{c}{$T_\text{c}$\tablefootmark{a} }\\
& $\alpha$~[hh:mm:ss] & $\delta$~[dd:mm:ss]
& [Mpc] & [km~s$^{-1}$] & [K] \\
\hline
Arp~220 & 15:34:57.20 & $+$23:30:11.00
& 72.0 & 5434.0 & 0.05 \\
NGC~253 & 00:47:32.98 & $-$25:17:15.90 &
\phantom{0}3.0& \phantom{0}243.0 &0.21\\
NGC~4945 & 13:05:27.48 & $-$49:28:05.60 &
\phantom{0}3.8 & \phantom{0}585.0 & 0.54 \\
\hline
\end{tabular}
\label{tab:source_properties}
\end{center}
\tablefoot{\tablefoottext{a}{Main-beam brightness temperature of the continuum at 617\,GHz as measured using the SEPIA660 receiver with a HPBW of $10\rlap{.}^{\prime\prime}$3 and a Jy-to-K conversion factor of 70$\pm$6.} }
\tablebib{Distances are taken from the NASA Extragalactic Database (NED) at \url{https://ned.ipac.caltech.edu}.}
\end{table*}
We tuned the upper sideband (USB) to a frequency of 618.5\,GHz to cover the ArH$^{+}$ ${J=1-0}$ transition at a rest frequency of 617.525\,GHz. This allowed us to simultaneously observe the $N_{K_{a}K_{c}} = 1_{10}-1_{01}, J = 3/2 - 1/2$, and ${J = 3/2 - 3/2}$ transitions of \phtop at 604.678 and 607.227\,GHz in the lower sideband (LSB), centred at a frequency of 606.5\,GHz. The USB also covers an atmospheric absorption feature close to 620\,GHz, but falls just short of covering the corresponding ${J=1-0}$ transition of $^{38}$ArH$^+$ at 616.648\,GHz. Our observations were carried out under excellent weather conditions, with precipitable water vapour (PWV) levels between 0.25 and 0.41\,mm, corresponding to an atmospheric transmission better than or comparable to 0.5 in both sidebands and a mean system temperature of 1874\,K at 617\,GHz. On average, we spent a total (on+off) observing time of 4.6\,hours on each source. The typical half power beam-width (HPBW) is $10\rlap{.}^{\prime\prime}$3 at 617\,GHz, which corresponds to 0.15\,kpc and 0.19\,kpc at the distances of NGC~253 and NGC~4945, respectively. Therefore, the beam probes only the nucleus and foreground disk gas of these two galaxies. We converted the spectra to the main-beam brightness temperature scale using an antenna forward efficiency, $F_{\text{eff}}$, of 0.95 and a main-beam efficiency, $B_{\text{eff}}$, of 0.41 (determined from observations of Mars). As backend we used an evolved version of the MPIfR-built Fast Fourier Transform Spectrometer \citep[XFFTS,][]{Klein2012}, which provided, over the entire 2$\times$8\,GHz bandwidth, a native spectral resolution of 61\,kHz (corresponding to 30\,m\,s$^{-1}$ at 617\,GHz).
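As a quick numerical cross-check of this tuning, the sky frequency of the ArH$^+$ $J=1-0$ line can be computed from the heliocentric velocities in Table 1. The sketch below (function names are illustrative) uses the radio velocity convention:

```python
# Sky frequency of a redshifted line in the radio convention:
#   nu_sky = nu_rest * (1 - v/c)
# Adequate for the small velocities of NGC 253 and NGC 4945; for larger
# redshifts (e.g. Arp 220) the optical convention nu_rest/(1+z) should be used.
C_KMS = 299_792.458  # speed of light [km/s]

def sky_frequency(nu_rest_ghz, v_helio_kms):
    """Observed frequency [GHz] of a line with rest frequency nu_rest_ghz."""
    return nu_rest_ghz * (1.0 - v_helio_kms / C_KMS)

nu_rest = 617.5252  # ArH+ J=1-0 rest frequency [GHz]
for name, v_helio in [("NGC 253", 243.0), ("NGC 4945", 585.0)]:
    print(f"{name}: {sky_frequency(nu_rest, v_helio):.3f} GHz")
```

Both lines are shifted by at most ${\sim}1.2$\,GHz from the rest frequency and therefore fall comfortably inside the 8\,GHz USB centred at 618.5\,GHz.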
The calibrated spectra, smoothed to a velocity resolution appropriate for our sources (${\Delta\upsilon \sim\!4.5}\,$km~s$^{-1}$), were subsequently processed using the GILDAS/CLASS software\footnote{Software package developed by IRAM, see \url{https://www.iram.fr/IRAMFR/GILDAS/} for more information regarding GILDAS packages.} and first order polynomial baselines were removed.
Our search for ArH$^{+}$ and \phtop towards Arp~220 was unsuccessful: we do not detect any lines in the sideband down to a noise level of 5\,mK at a spectral resolution of 4.5\,km~s$^{-1}$. Therefore, we do not include Arp~220 in our analysis but quote a 3$\sigma$ upper limit of 15\,mK for the ArH$^{+}$ ${J=1-0}$ and both \phtop transitions in this source. The non-detection of ArH$^+$ towards Arp~220 might indicate that this system's X-ray emission is not capable of ionising Ar, or it may be a combined effect of limited sensitivity and the low continuum level.
In addition to the results of the ArH$^{+}$ and \phtop observations newly presented in this work, we use complementary archival HIFI/Herschel data of other well-known tracers of atomic gas, namely OH$^{+}$ and o-H$_{2}$O$^+$, which were acquired as part of the Herschel EXtraGALactic (HEXGAL) guaranteed time key project (PI: R. G\"{u}sten) and published by \citet{van2016ionization}, as well as of the diffuse molecular gas tracer HF observed by \citet{monje2014hydrogen}.
The HIFI HPBWs at the frequencies of the OH$^+$, o-H$_{2}$O$^+$, and HF transitions studied in the above works are comparable to one another at 22$^{\prime\prime}$, 20$^{\prime\prime}$, and 17$^{\prime\prime}$, respectively.
The spectrum of the $N,J\!=\!1,2\!-\!0,1$ OH$^+$ transitions near 971~GHz towards NGC~253, observed under the HEXGAL project (presented by \citealp{van2016ionization}), is saturated over almost the entire range of heliocentric velocities associated with the central parts of the galaxy, between 185 and 235\,km~s$^{-1}$. This makes the subsequent determination of OH$^+$ column densities from the observed spectrum extremely difficult. Therefore, in this work, towards NGC~253 we instead use observations of the $N, J\!=\!1,1-0,1$ transitions of OH$^+$ near 1033~GHz, observed in 2010 July using the now decommissioned dual-channel 1.05\,THz receiver on the APEX 12~m telescope \citep{Leinz2010}. These observations were carried out under excellent weather conditions with PWV between 0.15 and 0.25\,mm. The fast Fourier Transform spectrometer \citep[FFTS,][]{Klein2006} provided a spectral resolution of ${\sim\!0.053}$\,km~s$^{-1}$ (183\,kHz) over a 2.4\,GHz bandwidth for the 1.05\,THz channel, which was later smoothed to a velocity resolution of 4.5\,km~s$^{-1}$. The spectra were calibrated on the main-beam temperature scale using a main-beam efficiency of ${(\sim\!21\pm7)\%}$, as determined from observations of Uranus. The observed spectrum shows a narrow absorption feature at 320~km~s$^{-1}$, which likely arises from blending with a contaminating species along the sight line. In order to confirm the association of this feature with a species other than OH$^+$, we cross-checked the 1033~GHz OH$^+$ spectrum along the LOS towards the extensively studied Galactic source Sgr~B2~(M), observed under the HEXOS Herschel guaranteed time key project \citep{bergin2010herschel}. From this comparison, we tentatively assign the absorption feature near 1032.783~GHz to the $^{13}$CH$_2$CHCN (21$_{10,11}$--20$_{9,12}$) transition.
In addition, we observe a weaker absorption feature near 12~km~s$^{-1}$, which potentially arises from the high-lying ($37_{9,29}$--$37_{6,32}$) transition of C$_2$H$_5$CN at 1033.868~GHz. Furthermore, the high-excitation lines of similar complex organic molecules (COMs) found in ALMA Band 6 and 7 observations towards NGC~253 by \citet{Mangum2019} make it likely that the OH$^+$ spectrum at 1033~GHz is contaminated by high-lying transitions of these species.
We also compare the line profile of the ArH$^{+}$ line with that of the H{\small I} 21\,cm line and subsequently determine the ArH$^{+}$ abundances. For this we use archival interferometric data of H{\small I} absorption and emission for NGC~253 and NGC~4945, observed using the Australia Telescope Compact Array (ATCA) with beam sizes of $4\rlap{.}^{\prime\prime}9\times10\rlap{.}^{\prime\prime}3$ and $7\rlap{.}^{\prime\prime}9\times9\rlap{.}^{\prime\prime}4$, respectively. The H{\small I} column densities were determined as described in \citet{winkel2017hydrogen}, by combining the absorption profiles with emission line data. The results of this H{\small I} analysis (the derived optical depths, spin temperatures, and H{\small I} column densities), along with the corresponding H{\small I} emission and absorption spectra, are given in Appendix~\ref{appendix:hi_analysis}. In addition, we present CO $J=1-0$ emission spectra towards NGC~253 and NGC~4945, previously published by \citet{Houghton1997} and \citet{Curran2001}, respectively, for comparison\footnote{The spectra were extracted using the WebPlotDigitizer tool available at \url{http://arohatgi.info/}.}. In both cases the spectra were obtained using the 15~m Swedish--ESO Submillimetre Telescope (SEST) at La Silla, Chile, with a beam size of 43$^{\prime\prime}$ at 115~GHz. The spectroscopic parameters of all the lines described above are summarised in Table~\ref{tab:spectroscopic_properties}.
We analysed the stability of the quoted continuum levels at 617\,GHz across scans (or time) and found the fluctuations to lie within 14\%. This is illustrated in Fig.~\ref{fig:continuum_scans_scatter} and is not surprising, as the observations were carried out using a wobbling secondary with a fast switching rate, which guarantees the removal of drifts that may arise from atmospheric instabilities. Furthermore, we find the 617\,GHz continuum flux to be well correlated with the 870\,$\mu$m continuum flux observed with the Large APEX Bolometer Camera (LABOCA) at the APEX telescope, presented in \citet{Wiess2008}, which leads us to conclude that the continuum levels used are fairly reliable.
\begin{table*}
\centering
\caption{Spectroscopic properties of the studied species and transitions.}
\begin{tabular}{lcclcrlr}
\hline \hline
Species & \multicolumn{2}{c}{Transition} & Frequency & $A_{\text{E}}$ & \multicolumn{1}{c}{$E_{\text{u}}$} & Receiver/Telescope & $\theta_{\rm FWHM}$\\
& $J^{\prime} - J^{\prime\prime}$ & $F^{\prime} - F^{\prime\prime}$ & [GHz] & [s$^{-1}$] & \multicolumn{1}{c}{[K]} & & \multicolumn{1}{c}{[$^{\prime\prime}$]}\\
\hline
ArH$^{+}$ & $1 - 0$ & --- & \phantom{0}617.5252(2) & 0.0045 & 29.63 & SEPIA660/APEX & 10.3\\
\phtop & $3/2 - 1/2$& --- & \phantom{0}604.6841(8) & 0.0013 & 29.20 & SEPIA660/APEX & 10.3\\
$N_{K_{\rm a}K_{\rm c}} = 1_{1,0} - 1_{0,1}$ & $3/2 - 3/2$& --- & \phantom{0}607.2258(2) & 0.0062 & 29.20 & SEPIA660/APEX & 10.3\\
OH$^{+}$ & $2-1$ & $5/2 - 3/2$ & \phantom{0}971.8038(1)\tablefootmark{*} & 0.0182 & 46.64 & HIFI/Herschel & 22.0 \\
$N = 1-0$& & $3/2 - 1/2$ & \phantom{0}971.8053(4) & 0.0152 & \\
& & $3/2 - 3/2$ & \phantom{0}971.9192(11) & 0.0030 & \\
& $1 - 1$ & $1/2 - 1/2$ & 1032.9985(7) & 0.0141 & 49.58 & 1.05THz Rx./APEX & 6.4\\
& & $3/2 - 1/2$ & 1033.0040(10) & 0.0035 & \\
& & $1/2 - 3/2$ & 1033.1129(7) & 0.0070 & \\
& & $3/2 - 3/2$ & 1033.1186(10)\tablefootmark{*} & 0.0176 & \\
o-H$_{2}$O$^{+}$ & $3/2 - 1/2$ & $3/2 - 1/2$ & 1115.1560(8) & 0.0171 & 53.52 & HIFI/Herschel & 20.0\\
$N_{K_{\rm a}K_{\rm c}} = 1_{1,1} - 0_{0,0}$ & & $1/2 - 1/2$ & 1115.1914(7) & 0.0274& \\
& & $5/2 - 3/2$ & 1115.2093(7)\tablefootmark{*} & 0.0309& \\
& & $3/2 - 3/2$ & 1115.2681(7) & 0.0138& \\
& & $1/2 - 3/2$ & 1115.3035(8) & 0.0034& \\% \hline
HF & $1-0$ & --- & 1232.4762(1) & 0.0242 & 59.14 & HIFI/Herschel & 17.0\\
CO & $1-0$ & --- & \phantom{0}115.2712(0) & 7.203$\times10^{-8}$& 5.53 & SEST & 43.0\\
\hline
\end{tabular}
\tablefoot{ The spectroscopic data are taken from the Cologne Database for Molecular Spectroscopy \citep[CDMS,][]{muller2005cologne}. The H$_2$O$^+$ frequencies were refined using astronomical observations \citep[see Appendix A of][]{Mueller2016}, for which the upper level energies are given with respect to the ground state of p-H$_2$O$^+$ ($N_{K_{\rm a}K_{\rm c}} = 1_{0,1}$). For the rest frequencies, the numbers in parentheses give the uncertainty in the last listed digit. \tablefoottext{*}{Indicates the strongest hyperfine-structure component, which was used to set the velocity scale in the analysis.}}
\label{tab:spectroscopic_properties}
\end{table*}
\section{Results} \label{sec:results}
\subsection{Spectral line profiles} \label{subsec:spectral_profiles}
The calibrated and baseline-subtracted spectra towards NGC~253 and NGC~4945 are presented in Fig.~\ref{fig:NGC253_allspectra_panel}. In the following paragraphs we qualitatively discuss the observed ArH$^+$ and p-H$_{2}$O$^+$ features and compare them to spectra of OH$^+$, o-H$_{2}$O$^+$, HF, CO, and H{\small I}. As mentioned above, our tuning setup simultaneously covers both the $J=3/2-1/2$ and the $J=3/2-3/2$ fine structure transitions from the ($1_{1,0}-1_{0,1}$) level of p-H$_{2}$O$^+$ at 604.684 and 607.225\,GHz. However, towards NGC~253 and NGC~4945, we do not detect the $J=3/2-1/2$ fine structure transition near 604\,GHz above noise levels of ${\sim}$3 and 7\,mK at a spectral resolution of 4.5\,km~s$^{-1}$, respectively. Therefore, this transition is not discussed any further in the text.
For both NGC~253 and NGC~4945, we assume molecular source sizes of ${\sim}20^{\prime\prime}$, which were previously determined by \citet{Wang2004,Aladro2015,JP2018} and references therein, using molecular emission maps of abundant species like CO. The beam sizes of all the species studied here with the exception of CO (see Table~\ref{tab:spectroscopic_properties}) are either smaller than or comparable to this source size.
\subsubsection{NGC~253}
The ArH$^{+}$ line profile towards NGC~253 displays blueshifted absorption with respect to the systemic velocity of the source at 243\,km\,s$^{-1}$, covering a velocity range from ${\sim\!90}$ to 270\,km~s$^{-1}$ and centred at a velocity of 210\,($\pm 3.7$)\,km~s$^{-1}$. For comparison, other species like OH$^{+}$, o-H$_{2}$O$^{+}$ and HF all show spectra with P-Cygni profiles, with the absorption seen at comparable blueshifted velocities. The corresponding H{\small I} profile shows an asymmetric absorption component, centred at a blueshifted velocity of ${\sim\!200}$\,km~s$^{-1}$. The velocity shift observed in the H{\small I} spectrum has been interpreted as evidence for a rotating nuclear disk of cold gas in this galaxy \citep{koribalski1995peculiar}. The P-Cygni profiles observed in the molecular ions studied here and in HF characterise the radial motion of the gas, which is indicative of outflows from the central region. The gas associated with the outflow component near 360~km~s$^{-1}$ is unsurprisingly not traced by ArH$^+$ and shows stronger emission in CO (particularly in its higher J-transitions) relative to the gas near 180~km~s$^{-1}$. Previous studies have determined the outflow component to be kinematically distinct from the surrounding gas and to be associated with the so-called western superbubble located north-west of the central starburst region \citep{Sakamoto2006}, while the gas at 180~km~s$^{-1}$ is co-located with the central disk \citep{Krieger2019}. This asymmetry is also seen in H$\alpha$ observations, whose emission is dominated by the approaching side of the outflow, as reported by \citet{Bolato2013}.
\subsubsection{NGC~4945}
Towards NGC~4945, the ArH$^{+}$ line displays an asymmetric absorption profile between 455--715\,km~s$^{-1}$, similar to what is observed in the spectra of the OH$^{+}$, o-H$_{2}$O$^{+}$ and HF lines. As discussed in \citet{monje2014hydrogen}, \citet{van2016ionization} and references therein, there are at least two velocity components; however, unlike the other molecules and molecular ions, the broader ArH$^{+}$ component is centred at $510\,$km~s$^{-1}$ and the narrower one at $605\,$km~s$^{-1}$, whereas for the other species the broader component is the redshifted absorption feature.
In the p-H$_{2}$O$^+$, o-H$_{2}$O$^+$, OH$^{+}$\footnote{The OH$^{+}$ spectrum contains a second absorption feature, which is the result of image band contamination from the CH$_{3}$OH ${J_k = 9_4-8_3~\text{E}2}$ line near 959.8\,GHz \citep{Xu1997}.}, and HF spectra, the two absorption components are observed to be almost symmetric about the galaxy's systemic velocity at 585\,km~s$^{-1}$ \citep{Chou2007}. This may indicate that the observed absorption arises from non-circular motions associated with the galaxy's bar. A similar absorption dip is seen in the emission line profiles of lines from higher density gas tracers like HCN, HCO$^+$, CN \citep{Henkel1990}, H$_{2}$CO \citep{Gardner1974} and also from CO \citep{Whiteoak1990} near 640\,km~s$^{-1}$. It likely traces foreground molecular gas that is moving towards the nucleus.
The H{\small I} absorption spectrum against the nuclear continuum of the source displays a similar profile, with two absorption dips at ${\sim}$540 and ${\sim}$620\,km~s$^{-1}$. The shift in the centres of both ArH$^{+}$ absorption components relative to those of the other molecules suggests that ArH$^{+}$ does not trace the same layers of infalling molecular gas as the other molecules and molecular ions, but rather traces mostly or exclusively atomic gas layers, as expected.
\subsubsection{PKS~1830$-$211}
As discussed in Sect.~\ref{sec:intro}, the $J=1-0$ transition of ArH$^+$ was first detected in an extragalactic environment by \citet{Mueller2015}, towards the intermediate-redshift ($z=0.8858$) lensing galaxy located in front of the blazar PKS~1830$-$211. These authors study absorption spectra extracted from two separate lines of sight towards two magnified images located on the south-west (SW) and north-east (NE) sides of its nucleus. Owing to differences in the nature of the continuum between PKS~1830$-$211's absorber and the sources studied here, a direct comparison of the spectral line profiles with those discussed in our study is not straightforward. While the background continuum for NGC~253 and NGC~4945 arises from the galaxies themselves, that for the sight lines through the foreground lensing galaxy of PKS~1830$-$211 arises from the distant blazar. Therefore, unlike the case of the nearby galaxies, the spatial resolution for the latter is set by the small size of the background continuum emission (a few parsec in the plane of the $z = 0.8858$ galaxy). Nonetheless, we briefly describe the properties of the ArH$^+$ spectra observed towards both sight lines studied by \citet{Mueller2015}. The spectrum along the SW magnified image comprises a single component with a line width of ${\sim}57$~km~s$^{-1}$ and a weaker but broader blueshifted component, a feature that is also seen in the 607~GHz and 634~GHz p-H$_{2}$O$^+$ spectra. In contrast, the spectral line profile of ArH$^+$ towards the NE image shows multiple narrow absorption features spanning 200~km~s$^{-1}$, comparable to the spread over which absorption from other diffuse gas tracers, namely CH$^+$, HF, OH$^+$ and H$_2$O$^+$, is seen in this direction \citep{Mueller2016,Muller2017}.
\begin{figure*}
\centering
\includegraphics[width=0.404\textwidth]{Fig/NGC253_alles_spectra.pdf}\quad
\includegraphics[width=0.404\textwidth]{Fig/NGC4945_alles_spectra.pdf}
\caption{From top to bottom: Normalised absorption spectra of ArH$^+$, p-H$_{2}$O$^+$, o-H$_{2}$O$^+$, OH$^+$ (at 971 and 1033~GHz), H{\small I} and HF as well as the CO (1-0) emission line spectrum for comparison, towards NGC~253 (left) and NGC~4945 (right), respectively. In the spectra for NGC~253, the dotted light blue curves display the Gaussian fit to the emission component. The dashed black line represents the individual absorption profiles after subtracting the Gaussian fit, with their Wiener filter fits overlaid in red for all species except H{\small I} and CO. The relative intensities of the hyperfine structure (HFS) components of the o-H$_{2}$O$^+$ and OH$^+$ transitions are shown in black above their respective spectra and the grey shaded regions display their HFS deconvolved spectra. The vertical dashed black lines mark the systemic velocities of NGC~253 and NGC~4945 at $240\,$km~s$^{-1}$ and $563\,$km~s$^{-1}$, respectively. The 971~GHz OH$^+$ line profile towards NGC~253 is saturated at blueshifted velocities, while the 1033~GHz OH$^+$ spectrum is potentially contaminated by the $^{13}$CH$_{2}$CHCN (21$_{10,11}$--20$_{9,12}$) line at 1032.783~GHz (fit shown by the dark blue curve). Also marked here in blue is contamination from the C$_{2}$H$_{5}$CN ($37_{9,29}$--$37_{6,32}$) line at 1033.868~GHz. The 971~GHz OH$^+$ spectrum towards NGC~4945 is contaminated by the CH$_{3}$OH ${J_k = 9_{4}-8_{3}~\text{E}}$ line near 959.9~GHz originating from the image sideband (marked in blue). }
\label{fig:NGC253_allspectra_panel}
\end{figure*}
\subsection{Column densities}\label{subsec:column_densities}
The line profiles can be expressed in terms of the optical depth, $\tau$, using the radiative transfer equation, which for the particular case of absorption spectroscopy is given by $T_\text{b} = T_\text{c}e^{-\tau}$, where $T_{\text{b}}$ and $T_{\text{c}}$ are the line and the background continuum brightness temperatures, respectively. For NGC~253, we compute optical depths for the absorption profile obtained after subtracting the emission component, which is modelled by fitting a Gaussian profile centred at the systemic velocity of the source. We fitted the optical depth profile on heliocentric velocity scales, that is, $\tau$ versus $\upsilon_{\rm helio}$, using the Wiener filter fitting technique described in \citet{jacob2019fingerprinting}. This procedure first fits the spectrum with the Wiener filter kernel by minimising the mean square error between the model ($T_\text{c}e^{-\tau}$) and the observations. For species like OH$^+$ and o-H$_{2}$O$^{+}$, whose rotational transitions further undergo hyperfine structure (HFS) splitting, the algorithm additionally deconvolves the HFS from the observed spectrum using the relative spectroscopic weights of the different HFS components. When fitting lines that do not exhibit HFS splitting, the procedure simply assumes a single component whose frequency corresponds to that of the fine-structure transition itself. Other than the observed spectrum and the spectroscopic parameters of the line to be fit, the only additional input parameter required by the Wiener filter technique is the spectral noise, which is assumed to be independent of the observed signal. However, the Wiener filter faces singularities in portions of the spectrum where the observed line profiles saturate and the line-to-continuum ratio tends to zero.
This is the case for the OH$^+$ spectrum towards NGC~253, which shows saturated absorption at blueshifted velocities between 185 and 235\,km~s$^{-1}$ (see Fig.~\ref{fig:NGC253_allspectra_panel}). Therefore, as discussed in Sect.~\ref{sec:observations} we instead model the 1033~GHz transition of OH$^+$. Prior to performing HFS deconvolution we remove contributions from the $^{13}$CH$_{2}$CHCN contamination using the WEEDS package in the GILDAS software.
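The conversion from the observed spectrum to a per-channel optical depth can be sketched as follows. This is a minimal illustration of the relation $T_\text{b} = T_\text{c}e^{-\tau}$ only, not of the Wiener filter procedure itself, and the saturation cap `tau_max` is an assumed value:

```python
import numpy as np

def optical_depth(T_b, T_c, tau_max=5.0):
    """Optical depth per channel from T_b = T_c * exp(-tau).

    The line-to-continuum ratio is clipped: values near zero (saturated
    absorption) would make tau diverge, and values above unity (emission
    or noise) would yield negative opacities.
    """
    ratio = np.clip(np.asarray(T_b, dtype=float) / T_c, np.exp(-tau_max), 1.0)
    return -np.log(ratio)

# Toy absorption dip against the 0.21 K continuum of NGC 253 (Table 1);
# the deepest channel reaches 40% of the continuum level.
T_c = 0.21
T_b = np.array([0.21, 0.15, 0.084, 0.15, 0.21])
tau = optical_depth(T_b, T_c)  # peak tau = -ln(0.4) ~ 0.92
```

The clipping mirrors the singularities described above: where the line-to-continuum ratio tends to zero, the derived $\tau$ saturates at the cap rather than diverging.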
The resulting optical depth profiles ($\tau_{\rm decon}$ versus $\upsilon_{\rm helio}$) are used to derive column densities as follows, assuming that the foreground absorption entirely covers the background continuum source:
\begin{equation}
N_{\text{tot}} = \frac{8\pi\nu^3 }{b_{\text{ff}}c^3 } \frac{Q(T_{\text{ex}})}{g_{\text{u}} A_{\text{E}}} \text{e}^{E_{\text{u}}/T_{\text{ex}}} \left[ \text{exp} \left(\frac{h\nu}{k_{\text{B}}T_{\text{ex}}}\right) - 1 \right]^{-1} \int \tau_{\rm decon}\text{d}\upsilon \, .
\label{eqn:column_density}
\end{equation}
For a given species, the spectroscopic terms in Eq.~\ref{eqn:column_density}, namely the upper level energy, $E_{\text{u}}$, the upper level degeneracy, $g_{\text{u}}$, and the Einstein A coefficient, $A_{\text{E}}$, remain constant, except for the partition function, $Q$, which is itself a function of the rotation temperature, $T_{\text{rot}}$. Under conditions of local thermodynamic equilibrium (LTE), $T_{\text{rot}}$ is equal to the excitation temperature, $T_{\text{ex}}$. The excitation of these molecules is straightforward, as most of the population is expected to occupy the ground state, owing to the large Einstein A coefficients of all the lines involved, which result in high critical densities of the order of a few 10$^{7}$\,cm$^{-3}$ (see Table~\ref{tab:spectroscopic_properties} for the Einstein A coefficients). Therefore, assuming complete ground state occupation, we can approximate the excitation temperature to be well below the energy of the upper level above the ground state and equal to the radiation temperature of the cosmic microwave background ($T_{\rm CMB} = 2.73\,$K). Similar assumptions for the excitation temperatures were made by \citet{van2016ionization} in their analysis. Since ${T_{\rm CMB} < T_{\rm ex} < E_{\rm u}/k_{\rm B}}$, the column densities derived for the different species studied here strictly represent only lower limits.
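The quoted critical densities can be checked to order of magnitude via $n_{\rm crit} \sim A_{\rm E}/k_{\rm coll}$. The collisional rate coefficient used below is an assumed, generic value; real coefficients depend on the species and the collision partner:

```python
# Order-of-magnitude check: n_crit ~ A_E / k_coll, with A_E from Table 2.
k_coll = 1e-10  # cm^3 s^-1, an assumed typical rate coefficient

for species, A_E in [("ArH+ J=1-0", 4.5e-3),
                     ("OH+ 971 GHz", 1.82e-2),
                     ("HF J=1-0", 2.42e-2)]:
    n_crit = A_E / k_coll
    print(f"{species}: n_crit ~ {n_crit:.1e} cm^-3")
```

For ArH$^+$ this gives $n_{\rm crit} \sim 4.5\times10^{7}$\,cm$^{-3}$, consistent with the values of a few $10^{7}$\,cm$^{-3}$ quoted above; the other lines land within an order of magnitude of this.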
Unlike those of the other species, the CO column densities are determined from the integrated intensities of the emission line profiles, assuming an excitation temperature of 20~K, as in \citet{Houghton1997} and \citet{Curran2001}. Furthermore, the derived column densities are corrected to first order for beam dilution effects using the beam filling factor, $b_{\text{ff}}$, following $b_{\text{ff}} = \left[ \left( \theta_{\text{s}}^2 + \theta_{\text{b}}^2 \right)/\theta_{\text{s}}^2 \right]$, where $\theta_{\text{s}}$ and $\theta_{\text{b}}$ represent the molecular source size and the beam size, respectively. The column density profiles thus derived per velocity channel are displayed in Fig.~\ref{fig:coldens_distributions}, and the total column densities, obtained by integrating between 55 and 295~km~s$^{-1}$ for NGC~253 and between 445 and 725~km~s$^{-1}$ for NGC~4945, are summarised in Table~\ref{tab:col_dens}. Using the column densities we determine from our spectra of the 607\,GHz p-H$_{2}$O$^+$ line, and assuming conditions of LTE at 2.73\,K, we predict integrated intensities of 1.3 and 2.9\,K\,km\,s$^{-1}$ for the 604\,GHz transition of p-H$_{2}$O$^+$, values that are lower than our 3$\sigma$ upper limits of 2.1 and 5.2\,K\,km\,s$^{-1}$ towards NGC~253 and NGC~4945, respectively.
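Equation~(1) can be evaluated numerically as follows. The sketch below uses the ArH$^+$ $J=1-0$ spectroscopic parameters from Table 2 and assumes $T_{\rm ex}=2.73$\,K and unit beam filling; the function name is illustrative:

```python
import numpy as np

# Physical constants (CGS)
h = 6.62607015e-27    # Planck constant [erg s]
k_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
c = 2.99792458e10     # speed of light [cm s^-1]

def column_density(nu_ghz, A_E, E_u, g_u, Q, int_tau_kms, T_ex=2.73, b_ff=1.0):
    """Total column density [cm^-2] following Eq. (1), for an integrated
    optical depth int_tau_kms in km s^-1 and beam filling factor b_ff."""
    nu = nu_ghz * 1e9             # GHz -> Hz
    int_tau = int_tau_kms * 1e5   # km/s -> cm/s
    return (8.0 * np.pi * nu**3 / (b_ff * c**3)
            * Q / (g_u * A_E) * np.exp(E_u / T_ex)
            / (np.exp(h * nu / (k_B * T_ex)) - 1.0) * int_tau)

# ArH+ J=1-0 (Table 2); at T_ex = 2.73 K the partition function is
# essentially unity, since the J=1 level is barely populated.
Q_arhp = 1.0 + 3.0 * np.exp(-29.63 / 2.73)
N = column_density(617.5252, 4.5e-3, 29.63, 3, Q_arhp, int_tau_kms=1.0)
# N ~ 1.6e12 cm^-2 per km/s of integrated optical depth
```

With this conversion, the total ArH$^+$ columns of a few $10^{12}$\,cm$^{-2}$ in Table 3 correspond to integrated optical depths of order a few km\,s$^{-1}$.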
\begin{figure*}
\centering
\includegraphics[width=0.44\textwidth]{NGC253_coldens.pdf}\quad\includegraphics[width=0.455\textwidth]{NGC4945_coldens.pdf}
\caption{From top to bottom: Column density distributions (black) of ArH$^+$, p-H$_{2}$O$^+$, o-H$_{2}$O$^+$, OH$^+$, H{\small I}, HF alongside the corresponding scaled H$_{2}$ profile ([HF]/[H$_{2}$] = 5$\times 10^{-10}$; \citet{Emprechtinger2012}) and CO towards NGC~253 (left) and NGC~4945 (right), respectively. The corresponding uncertainties are displayed by the blue shaded region.}
\label{fig:coldens_distributions}
\end{figure*}
\begin{table*}
\caption{Synopsis of the derived column densities. }
\centering
\begin{tabular}{lr rrrrrr}
\hline \hline
\multicolumn{1}{c}{Source} & $\upsilon_{\text{min}}$--$\upsilon_{\text{max}}$ & \multicolumn{1}{c}{$N(\text{ArH}^{+})$} & \multicolumn{1}{c}{$N(\text{p-H}_{2}\text{O}^{+})$} & \multicolumn{1}{c}{$N(\text{o-H}_{2}\text{O}^{+})$} & \multicolumn{1}{c}{$N(\text{OH}^{+})$} & \multicolumn{1}{c}{$N(\text{HF})$} & \multicolumn{1}{c}{$N(\text{H{\small I}})$} \\
& $~$[km~s$^{-1}$] & \multicolumn{1}{c}{10$^{12}$[cm$^{-2}$]} & \multicolumn{1}{c}{10$^{13}$[cm$^{-2}$]} & \multicolumn{1}{c}{10$^{13}$[cm$^{-2}$]} &
\multicolumn{1}{c}{10$^{14}$[cm$^{-2}$]} & \multicolumn{1}{c}{10$^{14}$[cm$^{-2}$]} & \multicolumn{1}{c}{10$^{20}$[cm$^{-2}$]}\\
\hline
NGC~253 & 55--295 & $3.35 \pm 0.31$ & $1.22 \pm 0.22$ & $ 8.47 \pm 0.36$ & \multicolumn{1}{c}{>1.57} & $1.40 \pm 0.60$ & $4.57 \pm 3.80 $\\
NGC~4945 & 445--725 & $4.31 \pm 0.20$ & $2.46 \pm 0.24$ & $9.28 \pm 0.32$ & \multicolumn{1}{c}{>3.20} & $1.61 \pm 0.68$ & $4.27 \pm 3.93$\\
\hline
\end{tabular}
\label{tab:col_dens}
\end{table*}
\subsection{Cosmic-ray ionisation rates}\label{subsec:CRIR}
Cosmic rays represent the dominant source of heating and ionisation in the inner parts of molecular clouds that are not penetrated by UV photons, making them an important driver of ion-molecule reactions, including those responsible for the formation of ArH$^+$ \citep{roach1970potential}:
\begin{equation*}
\text{Ar} \xrightarrow{\text{CR}} \text{Ar}^{+} + e^{-}\xrightarrow{\text{H}_{2}}
\text{ArH}^{+} + \text{H} \hspace{2.25cm} (\Delta E = 1.436~\text{eV})\,. \\
\end{equation*}
Hence, as a key ingredient in the ensuing chemistry, the abundance of ArH$^{+}$ is sensitive to the cosmic-ray ionisation rate, $\zeta_{\text p}(\text{H})$, and to the molecular fraction, $f_{\text{H}_{2}}$, of the gas probed. The cosmic-ray ionisation rates in both NGC~253 and NGC~4945 have previously been determined by analysing the steady-state chemistry of oxygen-bearing ions like OH$^+$ and H$_{2}$O$^+$ by \citet{van2016ionization}.
Following the steady-state analysis presented by these authors and \citet{indriolo2015herschel}, we derive revised cosmic-ray ionisation rates by including contributions from p-H$_{2}$O$^+$. The value of $N_{\text H}$ used in our calculations is given by $N$(H{\small I}) + 2$\times N$(H$_{2}$), where $N$(H$_{2}$) is obtained by using $N$(HF) as a surrogate for molecular hydrogen, with an assumed HF abundance in dense gas
of 5$\times10^{-10}$ \citep{Emprechtinger2012}. From the column density profiles presented in Fig.~\ref{fig:coldens_distributions}, it is clear that the total hydrogen content along the studied sight lines is dominated by dense molecular gas with pockets or bubbles of diffuse clouds, as suggested by \citet{van2016ionization}. Including contributions from p-H$_{2}$O$^+$, we derive mean cosmic-ray ionisation rates of $2.2\times10^{-16}$ and $7.5\times10^{-17}$\,s$^{-1}$ across NGC~253 and NGC~4945, respectively, values that are not far from those derived by \citet{van2016ionization}. The derived cosmic-ray ionisation rates represent only lower limits because of uncertainties in the derived column densities, particularly in the H{\small I} and OH$^+$ absorption profiles. In addition, other assumptions made in this calculation, such as the values of the gas density ($n_{\rm H} = 35\,$cm$^{-3}$), the electron fraction ($x_{\rm e}=1.5\times10^{-4}$) and the OH$^+$ formation efficiency parameter ($\epsilon=7\%$), are poorly constrained in these sources. The general impact of the uncertainties associated with the assumptions made in such an analysis on the derived cosmic-ray ionisation rates of Galactic sources is discussed in more detail in \citet{schilke2010} and \citet{indriolo2015herschel}.
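The hydrogen-column bookkeeping above can be illustrated numerically with the derived column densities (a minimal Python sketch, assuming only the quoted HF abundance; the variable names are ours):

```python
# Hydrogen-column bookkeeping for the cosmic-ray ionisation-rate
# analysis; column densities (cm^-2) are the NGC 253 values from the
# table of derived column densities, and the HF abundance is the
# assumed dense-gas value [HF]/[H2] = 5e-10 (Emprechtinger et al. 2012).
HF_ABUNDANCE = 5e-10

def total_hydrogen_column(N_HI, N_HF):
    """Return (N_H, N_H2) with N_H = N(HI) + 2 N(H2), N(H2) = N(HF)/x(HF)."""
    N_H2 = N_HF / HF_ABUNDANCE
    return N_HI + 2.0 * N_H2, N_H2

N_H, N_H2 = total_hydrogen_column(N_HI=4.57e20, N_HF=1.40e14)
print(f"N(H2) = {N_H2:.2e} cm^-2, N_H = {N_H:.2e} cm^-2")
# The molecular component clearly dominates the total hydrogen column,
# as the column density profiles in the figure illustrate.
```

The same bookkeeping, with the NGC~4945 values, yields a similarly molecule-dominated total hydrogen column.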
\subsection{Gas properties}\label{subsec:gas_properties}
The cosmic-ray ionisation rates discussed in the previous section, $\zeta_{\rm p}({\rm H})$, describe only the total number of primary ionisations per hydrogen atom, per second. In contrast, the total cosmic-ray ionisation rate, $\zeta_{\rm t}$, includes contributions from both primary ionisation by cosmic-rays and ionisation by the resulting secondary electrons. As discussed by \citet{Neufeld2017}, the exact relation between the two rates depends on the molecular fraction and fractional ionisation, but for the typical conditions of diffuse clouds the two are roughly related as $\zeta_{\rm p}(\rm H)=\zeta_{\rm t}/1.5$. Therefore, not only do cosmic-rays play an important role in heating the gas, but they also increase the electron abundance.
In the following sections we evaluate the impact of cosmic-ray ionisation and heating through the physical properties traced by ArH$^+$. We estimate the molecular fraction characteristic of the gas traced by ArH$^+$ and explore the impact of collisional excitation by electrons on the observed ArH$^+$ abundance.
\subsubsection{Molecular fraction}\label{subsubsec:molec_frac}
Under the assumption that the cloud volumes containing ArH$^+$ are exposed to the same cosmic-ray ionisation flux as those traced by both OH$^+$ and H$_{2}$O$^+$, we can derive the molecular fraction, $f_{\text{H}_{2}}$, of the gas probed by ArH$^{+}$, using the ArH$^{+}$ abundances (with respect to H{\small I}) derived from observations as a constraint. We first present the relation between $\zeta_{\text{p}}$(H), $X({\text{ArH}}^+)$ and $f_{\text{H}_{2}}$ by analysing the steady-state ion-molecular chemistry of ArH$^{+}$ as discussed in \citet{schilke2014ubiquitous}. ArH$^{+}$ is destroyed primarily via proton transfer reactions with H$_{2}$ or atomic oxygen, and photodissociation:
\begin{align*}
\text{ArH}^{+} &+ \text{H}_{2} \rightarrow \text{Ar} + \text{H}_{3}^{+} \, ; \quad \quad \quad k_{5} = 8\times 10^{-10}~\text{cm}^{3}\text{s}^{-1} \\
\text{ArH}^{+} &+ \text{O}
\rightarrow \text{Ar} + \text{OH}^{+} \, ; \quad \quad \,\,\, \, k_{6} = 8\times 10^{-10}~\text{cm}^{3}\text{s}^{-1} \\
\text{ArH}^{+} &+ \text{h}\nu \rightarrow \text{Ar}^+ + \text{H} \, ; \, \quad \quad \quad k_{7} = 1.0\times 10^{-11}\chi_\text{UV}f_{\text{A}}~\text{s}^{-1} \, .
\end{align*}
The photodissociation rate of ArH$^{+}$ was estimated by \citet{alekseyev2007theoretical} to be $\sim\!1.0\times10^{-11}f_{\text{A}}\,\text{s}^{-1}$ for an unshielded cloud model that is uniformly surrounded by the standard Draine UV interstellar radiation field.
The attenuation factor, $f_{\text{A}}$, is given by an exponential integral and is a function of the visual extinction, $A_\text{v}$. For a cloud model with $A_{\text{v}} = 0.3$, \citet{schilke2014ubiquitous} derived values for $f_{\text{A}}$ between 0.30 and 0.56 that increase outwards from the centre of the cloud. For our analysis we adopt $f_{\text{A}} = 0.43$, which lies midway within the computed range of values.
We further assume an atomic oxygen abundance (relative to H nuclei) of ${3.9 \times 10^{-4}}$ \citep{Cartledge2004} and an argon abundance close to this element's solar abundance of ${3.2\times 10^{-6}}$ \citep{lodders2008solar}. Observations by \citet{van2016ionization} suggest that the dense and diffuse gas phases are well mixed in the galaxies under consideration here, a notion based on the similarities between the observed profiles of OH$^+$ and H$_{2}$O$^+$ lines to those of H$_{2}$O and H{\small I} lines. Therefore, akin to these authors, who derived the cosmic-ray ionisation rates for NGC~253 and NGC~4945 by analysing the steady-state chemistry of OH$^+$ and H$_{2}$O$^+$, we assume a gas density, $n(\text{H})$, of $35\,\text{cm}^{-3}$ \citep{indriolo2015herschel} for the OH$^+$--H$_{2}$O$^+$ absorbing clouds in our analysis.
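With these adopted parameters, the relative importance of the three ArH$^+$ destruction channels can be compared numerically (an illustrative sketch; $\chi_{\rm UV} = 1$ and the mid-range $f_{\text{A}} = 0.43$ are the values adopted above):

```python
# Rough comparison of the three ArH+ destruction channels for the
# assumed cloud conditions (n_H = 35 cm^-3, x_O = 3.9e-4, chi_UV = 1,
# f_A = 0.43); all rate coefficients as quoted in the text.
k5 = 8e-10                  # ArH+ + H2, cm^3 s^-1
k6 = 8e-10                  # ArH+ + O,  cm^3 s^-1
k7 = 1.0e-11 * 1.0 * 0.43   # photodissociation, s^-1 (chi_UV * f_A)

n_H = 35.0                  # total gas density, cm^-3
x_O = 3.9e-4                # atomic oxygen abundance relative to H

def destruction_rates(f_H2):
    """Per-ion destruction rates (s^-1); f_H2 = 2 n(H2) / n_H."""
    n_H2 = 0.5 * f_H2 * n_H
    return k5 * n_H2, k6 * x_O * n_H, k7

r_H2, r_O, r_pd = destruction_rates(f_H2=1e-3)
# Even at f_H2 ~ 1e-3, proton transfer to H2 is already comparable to
# the O and photodissociation channels, which is why X(ArH+) is such a
# steep function of the molecular fraction.
```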
\noindent
Using these values, the cosmic-ray ionisation rate can be approximated as
\begin{align}
\zeta_\text{p}(\text{H}) &= \frac{N(\text{ArH}^{+})}{N_{\text{H}}}\left(\frac{k_{5}n(\text{H}_{2}) + k_{6}n(\text{O}) + k_{7}}{11.42}\right) \nonumber \\ &=\frac{N(\text{ArH}^{+})}{N_{\text{H}}}\left( \frac{0.5005 + 448f_{\text{H}_{2}}}{1.2 \times 10^{6} }\right)\,{\rm s}^{-1} \, .
\label{eqn:CRIR_arhp}
\end{align}
Eq.~\ref{eqn:CRIR_arhp} is re-arranged and expressed in terms of $f_{\text{H}_{2}}$, as
\begin{equation}
f_{\text{H}_{2}} = 2.68\times10^{3}\,{\rm s}\,\left[ \frac{\zeta_\text{p}(\text{H})}{X(\text{ArH}^{+})} - 4.17\times10^{-7}\,{\rm s}^{-1} \right] \, .
\label{eqn:hfrac_arhp}
\end{equation}
Using Eq.~\ref{eqn:hfrac_arhp}, the molecular fractions of the gas probed by ArH$^{+}$ are found to be a few times ${10^{-3}}$ towards both sources, values that are comparable to what is derived along the NE line of sight towards PKS~1830$-$211 (which, amongst the two sight lines studied by \citet{Mueller2015}, has been shown to probe more diffuse and atomic environments; \citealp{Koopmans2005}) with $X$(ArH$^+$) = $2.8\times10^{-9}$ and $\zeta_\text{p}(\text{H}) = 3\times10^{-15}$~s$^{-1}$. The molecular gas fraction hence derived, per velocity channel over the velocity intervals most relevant to the sources studied (see Table~\ref{tab:col_dens}), is displayed in Fig.~\ref{fig:gas_prop}. Towards both sources we find the ArH$^+$-bearing clouds to host abundances between 10$^{-11}$ and 10$^{-10}$ and to trace molecular gas fractions between 3$\times10^{-4}$ and a few times {$10^{-2}$}. In comparison, OH$^+$, whose abundance is almost two orders of magnitude higher than that of ArH$^+$, mainly traces gas with molecular fractions of the order of a few times 10$^{-1}$, whereas \citet{Mueller2016} derived $f_{{\rm H}_{2}}$ values for OH$^+$-bearing gas that vary between 0.01 and 0.07 towards both sight lines in their study. Therefore, the different atomic gas tracers may not be spatially co-existent. Furthermore, the transition from atomic to molecular gas is illustrated in the top panel of Fig.~\ref{fig:gas_prop} through the distribution of the molecular gas fractions traced by ArH$^+$-, OH$^+$-, and CO-bearing gas volumes. The molecular fractions in this analysis are computed using Eq.~\ref{eqn:hfrac_arhp}, Eq.~12 of \citet{indriolo2015herschel} and by assuming a CO-to-H$_{2}$ conversion factor of 2$\times10^{20}\,$cm$^{-2}$\,(K\,km\,s$^{-1}$)$^{-1}$, as recommended by \citet{Bolatto2013}, albeit with a factor of 2 uncertainty.
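As an illustration of Eq.~\ref{eqn:hfrac_arhp}, the molecular fraction can be evaluated for representative values (the adopted $\zeta_{\rm p}({\rm H})$ is the NGC~253 mean from Sect.~\ref{subsec:CRIR}; the ArH$^+$ abundance of $10^{-10}$ is an illustrative mid-range value, not a fitted one):

```python
# Sketch of the rearranged steady-state relation: molecular fraction of
# the ArH+-bearing gas for an assumed cosmic-ray ionisation rate and
# ArH+ abundance. The inputs below are illustrative values.
def f_H2_from_arhp(zeta_p, X_arhp):
    """zeta_p in s^-1, X_arhp = N(ArH+)/N_H (dimensionless)."""
    return 2.68e3 * (zeta_p / X_arhp - 4.17e-7)

f = f_H2_from_arhp(zeta_p=2.2e-16, X_arhp=1.0e-10)
print(f"f_H2 ~ {f:.1e}")  # a few times 1e-3, i.e. almost purely atomic gas
```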
By combining previous estimates of the cosmic-ray ionisation rate derived for Arp~220, of $\zeta_{\rm p}(\text{H}) > 10^{-13}~$s$^{-1}$ \citep{Gonzalez2013,van2016ionization}, with our upper limit for the ArH$^{+}$ abundance of 2.5$\times10^{-11}$ (determined by integrating the $3\sigma$ detection limit quoted in Sect.~\ref{sec:observations} over a line width of 743~km~s$^{-1}$ \citep{Mirabel1982}, adopting $N(\text{H}_{2}) = 1\times10^{24}~$cm$^{-2}$ \citep{Downes2007}), we are unable to derive reasonable values for the molecular fraction traced by ArH$^+$.
\begin{figure*}
\includegraphics[width=0.48\textwidth]{abundance_vs_fh2_NGC253.pdf}\quad
\includegraphics[width=0.48\textwidth]{abundance_vs_fh2_NGC4945.pdf}
\caption{Top: Distribution of molecular gas fraction ($f_{\text{H}_{2}}$) traced by ArH$^+$ (blue), OH$^+$ (red) and CO (orange). Bottom: Contours of $X(\text{ArH}^+)$ (blue) and $X(\text{OH}^+)$ (red) abundances with respect to $N_{\text{H}}$ in the $f_{\text{H}_{2}}$--$\zeta_{\rm p}{\text{(H)}}$ plane. Blue triangles and red diamonds represent the corresponding values derived from the LOS observations presented in this work computed per channel in the velocity interval (${\Delta\upsilon \sim\!4.5}\,$km~s$^{-1}$) between 55 and 295\,km\,s$^{-1}$, and 445 and 725\,km\,s$^{-1}$ for NGC~253 and NGC~4945, respectively.}
\label{fig:gas_prop}
\end{figure*}
\subsubsection{Excitation by electrons}\label{subsubsec:electron_frac}
In this section we explore the effects of collisional excitation of ArH$^+$ by electrons. We expect ArH$^+$ to reside in cloud layers with electron fractions, $x_{\rm e}$, of $\geq 10^{-5}$--$10^{-4}$, making electrons competitive collision partners to atomic hydrogen in such cloud environments. Moreover, the destruction of ArH$^+$ via dissociative recombination with electrons has, alongside photodissociation pathways, been shown to have a small to negligible impact on ArH$^+$ abundances in astrophysical environments \citep{alekseyev2007theoretical, Mitchell2005, Abdoulanziz2018}, making electron-impact excitation particularly important for ArH$^+$. In order to explore this and to evaluate the validity of our assumptions for the electron density and excitation temperature, we performed non-LTE radiative transfer calculations using the statistical equilibrium radiative transfer code RADEX \citep{vanderTak2007}. The models were run for a uniform sphere geometry (with the offline version of RADEX, which allows a choice of input geometries)
under the large velocity gradient (LVG) approximation. Based on the escape probability formalism, the code computes level populations, line intensities, excitation temperatures, and optical depths as a function of the physical conditions (kinetic temperature and density) and radiative transfer parameters (column density and line width) specified as inputs. The models were run using rate coefficients recently computed by \citet{Hamilton2016} for collisions between ArH$^{+}$ and electrons as well as those between ArH$^{+}$ and atomic hydrogen computed by \citet{Dagdigian2018}.
By constraining the models using the ArH$^{+}$ column densities as determined in Sect.~\ref{subsec:column_densities}, we model the excitation temperature of the observed 617\,GHz ArH$^+$ line as a function of the electron density and gas temperature, $T_{\rm kin}$. The models are run across a 100$\times$100 grid, with $x_{\rm e}$ values between 10$^{-6}$ and 10$^{-1}$ and gas temperatures between 10 and 3000\,K. Furthermore, the total gas density, $n_{\rm H}$, approximated by the sum of the electron and atomic gas densities in the models, is fixed to values of 35, 100 and 1000\,cm$^{-3}$.
The modelled results are visualised in Fig.~\ref{fig:arhp_tex}. Similar to \citet{Dagdigian2018}, we find the excitation temperature for the models with total gas densities fixed at 100 and 1000\,cm$^{-3}$ to be greater than our assumed value of $T_{\rm ex} = 2.73$\,K across the entire $x_{\rm e}$ parameter space, with values of $T_{\rm ex}$ only slightly higher than 2.73\,K for $x_{\rm e} \lessapprox 10^{-3}$, beyond which the excitation temperature increases quite rapidly. While the general trend for the models with a total gas density of 35\,cm$^{-3}$ (equal to the gas density assumed in Sect.~\ref{subsec:CRIR} to compute the cosmic-ray ionisation rates) is similar to those with higher gas densities, that is, 100 and 1000\,cm$^{-3}$, the ArH$^+$ excitation temperature at the assumed electron fraction ($\approx\!1.5\times10^{-4}$; see Sect.~\ref{subsec:CRIR}) remains very close to 2.73\,K. This implies electron densities of $\sim\!5.3\times 10^{-3}$\,cm$^{-3}$, consistent with the low electron abundances expected in gas volumes bearing OH$^+$, a species formed in cloud environments similar to those that contain ArH$^+$. Low electron densities are prerequisites for the formation of detectable amounts of OH$^+$, whose formation pathway via H$_{3}^+$ and atomic oxygen competes with the dissociative recombination of H$_{3}^+$ with electrons. Observationally, however, the range of electron densities is poorly constrained, with a possible upper limit set by assuming the electron densities to be consistent with the photoionisation of neutral carbon.
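The electron density quoted above follows directly from the adopted parameters; a rough comparison with a critical density for electron excitation (both the Einstein $A$ coefficient and the electron collision rate below are assumed order-of-magnitude values, not taken from the model inputs) shows why the line remains thermalised to the CMB:

```python
# Electron density implied by the adopted electron fraction and gas
# density, compared to an illustrative critical density for electron
# excitation of the ArH+ (1-0) line at 617 GHz.
x_e = 1.5e-4           # adopted electron fraction
n_H = 35.0             # total gas density, cm^-3
n_e = x_e * n_H        # implied electron density, cm^-3

A_10 = 4.5e-3          # s^-1, ASSUMED order of magnitude for ArH+ (1-0)
q_e = 1e-6             # cm^3 s^-1, ASSUMED electron de-excitation rate
n_crit_e = A_10 / q_e  # illustrative critical density, cm^-3

# n_e lies orders of magnitude below n_crit_e, so collisions cannot
# compete with radiative decay and T_ex stays close to 2.73 K.
```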
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{ArHp_electron_frac_tex.pdf}
\caption{RADEX modelled excitation temperature as a function of electron density, for fixed values of the total gas density at 35 (red), 100 (orange) and 1000\,cm$^{-3}$ (dark blue) for the ArH$^+$ (1-0) transition. Furthermore, each model is run for fixed gas temperatures at 30 (solid), 300 (dashed) and 3000\,K (dotted), respectively. The horizontal dashed grey line marks an excitation temperature of 2.73\,K. The inset zooms in on 10$^{-4} < x_{\rm e}<10^{-2}$ with the vertical dashed purple line indicating $x_{\rm e} = 1.5\times10^{-4}$ which corresponds to the value of $x_{\rm e}$ used in our calculations. }
\label{fig:arhp_tex}
\end{figure}
Using the same physical conditions as used to model the ArH$^+$ $1-0$ transition, we model the $2-1$ transition of ArH$^+$ near 1234.602\,GHz. The
results are displayed in Fig.~\ref{fig:arhp_21_tau}. For all the models the excitation temperature for the $2-1$ line
is higher,
between ${\sim 8}$ and 18\,K, while the optical depths are extremely low, of the order of a few 10$^{-6}$ for
total gas densities of 35\,cm$^{-3}$ and only an order of magnitude higher for models with total gas densities of 1000\,cm$^{-3}$. Therefore, as discussed by \citet{Dagdigian2018}, given the low optical depths reproduced by the models, it is highly unlikely that the $2-1$ transition of ArH$^+$ can be observed in interstellar gas; so far it has only been detected (in emission) in the extreme circumnebular environment of the Crab \citep{barlow2013detection, Priestley2017}.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{ArHp_21_tex.pdf} \quad
\includegraphics[width=0.48\textwidth]{ArHp_21_tau.pdf}
\caption{RADEX modelled excitation temperatures (left) and optical depths (right) as a function of electron density, for fixed values of the total gas density at 35 (red), 100 (orange) and 1000\,cm$^{-3}$ (dark blue) for the ArH$^+$ (2-1) transition. Furthermore, each model is run for fixed gas temperatures at 30 (solid), 300 (dashed) and 3000\,K (dotted), respectively. The horizontal dashed grey line marks an excitation temperature of 2.73\,K (in the left-hand panel) while the vertical dashed purple line indicates, $x_{\rm e} = 1.5 \times 10^{-4}$.}
\label{fig:arhp_21_tau}
\end{figure*}
\subsection{\texorpdfstring{$\text{H}_{2}\text{O}^{+}$}{HtOp} ortho-to-para ratio}\label{subsec:h2o_analysis}
The H$_{2}$O$^+$ molecular ion exists in two spin-symmetry states, o- and p-H$_{2}$O$^+$, that have opposing parities due to the interaction between the magnetic moment of the unpaired electron and the protons. Studying the ratio of molecules in these two states, which is reflected by the spin temperature, can provide insight into the formation pathway and thermodynamic properties of the gas. In the following paragraphs we determine the ortho-to-para ratio (OPR) of H$_{2}$O$^+$ and derive the nuclear spin temperature.
Unlike for H$_{2}$O, the lowest energy state of H$_{2}$O$^+$ corresponds to the lowest rotational energy level of its ortho state. This is a result of the molecule's $C_{2\varv}$ symmetry and $^{2}B_{1}$ ground state configuration. The fine structure levels of the o-H$_{2}$O$^+$ spin state, with a nuclear spin, $I$, of 1, further undergo HFS splitting, while p-H$_{2}$O$^+$ (with $I = 0$) does not. The lowest rotational energy level of the ortho spin state, ${N_{K_{a}K_{c}} = 1_{1,1}-0_{0,0}}$, lies 30.1\,K below that of the lowest p-H$_{2}$O$^+$ level,
$1_{1,0}-1_{0,1}$.
Under the assumption that the rotational temperature is close to $T_{\text{CMB}} = 2.73\,$K, one would expect that most of the ions occupy either one of the H$_{2}$O$^+$ ground state levels. Similar to H$_{2}$ and H$_{2}$O, H$_{2}$O$^+$ is expected to exhibit an OPR of at least 3:1 \citep{townes1975microwave}. Typically, the conversion between the ortho and para states of H$_{2}$O$^+$ occurs via gas phase reactions with atomic hydrogen, as follows:
\begin{equation}
\text{p-H}_{2}\text{O}^+ + \text{H} \rightleftharpoons \text{o-H}_{2}\text{O}^+ + \text{H}
\label{eqn:formation}
\end{equation}
while H$_{2}$O$^+$ reacts exothermically with molecular hydrogen to yield H$_{3}$O$^+$.
\noindent
With the two spin states moderately coupled via collisions, the OPR of H$_{2}$O$^+$ is given by
\begin{equation}
\text{OPR} \equiv \frac{Q_{\text{ortho}}}{Q_{\text{para}}}\,\exp\left( -\Delta E / T_{\text{ns}}\right) \, ,
\label{eqn:OPR}
\end{equation}
where $Q_{\text{ortho}}$ and $Q_{\text{para}}$ represent the partition functions of the respective spin isomers, $\Delta E = -30.1~$K is the energy difference between them, and $T_{\text{ns}}$ is the nuclear spin temperature. The value of $\Delta E$ is expressed as a negative quantity because, as discussed above, the lowest ortho state has a lower energy than the lowest para state.
As discussed in Appendix A of \citet{schilke2010}, at low temperatures, the partition functions of the two states are governed by the degeneracy of their lowest fine-structure and HFS levels. Having the same quantum numbers and upper level degeneracies, the ratio of the partition functions approaches unity as the rotational temperature tends to 0~K.
Figures~\ref{fig:OPR_NGC253} and \ref{fig:OPR_NGC4945} display the distribution of the derived column densities per velocity intervals (corresponding to the spacing of the velocity channel bins) for o- and p-H$_{2}$O$^{+}$, the OPR and the nuclear spin temperature.
The OPRs determined for NGC~253 are quite large, with a mean value of roughly 7.0$\pm 3.1$,
whereas the mean value derived towards NGC~4945 is 3.5$\pm 1.2$. The derived OPRs are consistent with the equilibrium value of three within the quoted error bars and correspond to nuclear spin temperatures between 15 and 24~K.
The very large values found for NGC~253 for LSR velocities ${>\!150}$~km~s$^{-1}$ can be partly attributed to the uncertainties resulting from modelling the emission component observed in the P-Cygni profiles of its o- and p-H$_{2}$O$^+$ spectra.
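The nuclear spin temperatures quoted above follow from the OPR relation in the low-temperature limit, where the ortho-to-para partition-function ratio tends to unity (a small sketch under that approximation, with $\Delta E = -30.1$\,K):

```python
import math

# Nuclear spin temperature implied by the OPR relation in the
# low-temperature limit, where Q_ortho/Q_para -> 1 (Appendix A of
# Schilke et al. 2010); OPR = (Q_o/Q_p) exp(30.1 K / T_ns).
def t_ns(opr, q_ratio=1.0):
    """Invert the OPR relation for the nuclear spin temperature (K)."""
    return 30.1 / math.log(opr / q_ratio)

print(f"OPR = 7.0 -> T_ns ~ {t_ns(7.0):.1f} K")   # mean OPR, NGC 253
print(f"OPR = 3.5 -> T_ns ~ {t_ns(3.5):.1f} K")   # mean OPR, NGC 4945
```

The two mean OPRs bracket the 15--24\,K range of nuclear spin temperatures quoted above.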
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{Fig/opratio_NGC253.pdf}
\caption{Top panel: Column density per velocity interval distribution of o-H$_{2}$O$^{+}$ (solid purple) and \phtop (dashed blue) over the entire velocity interval covering absorption towards NGC~253. Middle panel: OPR H$_{2}$O$^+$ distribution (left) and corresponding histogram (right). Bottom panel: Nuclear spin temperature distribution (left) and corresponding histogram (right). The median and $1~\sigma$ levels of the OPR and nuclear spin temperature are marked by solid and dotted black lines, respectively.}
\label{fig:OPR_NGC253}
\end{figure*}
Theoretical studies carried out by \citet{Tanaka2013} on the ortho-to-para exchange reactions of H$_{2}$O$^+$ reveal quite low reaction rates, implying that the OPR of H$_{2}$O$^+$ should not deviate significantly from the equilibrium value of three, particularly at low temperatures, in contrast to the large values we find for NGC~253.
In diffuse regions affected by X-rays and/or cosmic-rays, the chemistry leading to H$_{2}$O$^+$ is initiated by the charge transfer between ionised hydrogen and oxygen atoms to form O$^+$, which is subsequently followed by reactions with H$_{2}$ to first form OH$^+$ and then H$_{2}$O$^+$. Alternatively, in regions with higher molecular fractions, both OH$^+$ and H$_{2}$O$^+$ can be formed via reactions between H$_{3}^+$ and oxygen atoms \citep[][and references therein]{Hollenbach2012}. Lastly, H$_{3}$O$^+$ can also be formed from hydrogen abstraction reactions of H$_{2}$O$^+$ in denser environments. If the exchange reaction is faster than other competing reactions, then the observed upper limits of the OPR point to gas kinetic temperatures greater than $\sim$20~K.
Since the diffuse and dense gas volumes along the sight lines towards the galaxies under study here are well mixed, the OPR of parent species such as H$_{3}^+$, H$_{2}$, and H$_{2}$O, from which H$_{2}$O$^+$ originates, may impact the observed OPR of H$_{2}$O$^+$ \citep[see, for example, the studies on the OPR of H$_{2}$ presented by][]{flower2006importance}. Destruction reactions of both the ortho- and para- forms of H$_{3}^+$ via dissociative recombination have been experimentally investigated by \citet{glosik2010binary}, who observe a preferential recombination of this molecule's para-state compared to that of its ortho-state, which can also lead to OPRs \textgreater 3. Recently, \citet{Novotny2019} found a similar dependence on the rotational state for the dissociative recombination of HeH$^+$ ions. However, a better understanding of the OPR of H$_{2}$O$^+$ requires a detailed understanding not only of the fraction of the observed H$_{2}$O$^+$ formed along each of the different chemical pathways, and of the efficiency with which either H$_{2}$O$^+$ state (ortho or para) is formed in both quiescent and turbulent and/or shocked regions, but also of the destruction of this ionic species, which occurs primarily via dissociative recombination and hydrogen abstraction reactions with H$_{2}$. In Appendix~\ref{appendix:h2op_steadystate_chem} we briefly evaluate deviations of the observed OPR of H$_{2}$O$^+$ from the original OPR at formation by carrying out a steady-state analysis of H$_{2}$O$^+$ chemistry.
Furthermore, investigating the influence of cosmic-ray ionisation on the OPR of H$_{2}$O$^+$ for star-forming regions within the Milky Way, \citet{Jacob2020Arhp} find that the OPR of H$_{2}$O$^+$ thermalises to the equilibrium value of three with increasing rates of cosmic-ray ionisation up to a value of ${\sim\!2 \times 10^{-16}\,}$s$^{-1}$, beyond which there exists no credible correlation. The correlation between cosmic-ray ionisation rates and OPRs suggests that the higher abundances of atomic hydrogen in regions exposed to higher cosmic-ray fluxes are able to efficiently drive the proton-exchange reaction, causing a change in the OPR. Putting the current results into context, we find the data points for both studied galaxies to lie in the thermalised region of Fig.~\ref{fig:CRIR_OPR}, with OPRs near three within the uncertainties. Given the uncertainties in the column densities derived from both the H{\small I} and the OH$^+$ line profiles, which are used to compute the cosmic-ray ionisation rates, we represent the cosmic-ray ionisation rates as lower limits. In analogy to the Milky Way, we would expect the cosmic-ray ionisation rates to be higher towards the centres of these galaxies; in contrast, however, we derive values closer to the canonical values found towards the disk of the Milky Way \citep{indriolo2015herschel, Jacob2020Arhp}, outside of the central molecular zone.
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{Fig/opratio_NGC4945.pdf}
\caption{Same as Fig.~\ref{fig:OPR_NGC253}, but towards NGC~4945. The median value for $T_{\rm ns}$ derived from the distribution is the same as that obtained from deriving $T_{\rm ns}$ using the median OPR for H$_{2}$O$^+$.}
\label{fig:OPR_NGC4945}
\end{figure*}
\begin{figure}[ht!]
\centering
\includegraphics[width= 0.45\textwidth]{Fig/CRIR_OPR.pdf}
\caption{Observed OPR of H$_{2}$O$^+$ versus the cosmic-ray ionisation rates derived using the steady-state analysis of the OH$^+$ and H$_{2}$O$^+$. The results from this work are marked by teal pentagons while the blue diamonds and black triangles display the results for sight lines studied in the Milky Way by \citet{indriolo2015herschel} and \citet{Jacob2020Arhp}. For comparison we display the OPR of H$_{2}$O$^+$ also derived by \citet{indriolo2015herschel} towards Galactic centre (GC) sight lines using purple stars. The dashed red line marks the equilibrium OPR value of three.
}
\label{fig:CRIR_OPR}
\end{figure}
\section{Discussion} \label{sec:discussion}
The gas properties of the ArH$^+$ bearing cloud volumes that we have derived in Sect.~\ref{subsec:gas_properties} towards the nearby galaxies studied in this work are comparable to those found by \citet{schilke2014ubiquitous} and \citet{Jacob2020Arhp} for clouds along the LOS towards star forming regions in the disk of the Milky Way. Furthermore, the cosmic-ray ionisation rates derived towards both NGC~253 and NGC~4945 in Sect.~\ref{subsec:CRIR} are comparable to the average cosmic-ray ionisation rate derived towards the disk of the Milky Way, of ($2.3\pm0.3$)$\times10^{-16}$~s$^{-1}$ \citep{indriolo2015herschel, Jacob2020Arhp}. The molecular gas traced by HF has an almost two orders of magnitude higher column density than H{\small I} (as illustrated in Fig.~\ref{fig:coldens_distributions}), suggesting that most of the gas, by volume, along the LOS towards these galaxies is dense. Ostensibly, this brings into question whether ArH$^+$ is a probe of diffuse atomic gas with a very small molecular fraction (see Sect.~\ref{subsec:gas_properties}). However, as previously discussed by \citet{van2016ionization}, who observed OH$^+$ and H$_{2}$O$^+$, other well-known tracers of (predominantly) atomic gas, towards both NGC~253 and NGC~4945, the presence of gas with
low $f_{{\rm H}_{2}}$ may suggest an inhomogeneous ISM with local pockets of diffuse clouds mixed with denser
material. Perhaps the marginally lower cosmic-ray ionisation rates that we find for the galaxies studied here, in comparison to the Milky Way, are a result of the larger volumes of dense gas present along the LOS. We note, however, that values for the cosmic-ray ionisation rate estimated for the nuclear regions of ULIRGs like NGC~4418 and Mrk~231, which harbour much more extreme star bursts than NGC~253 and NGC~4945, are almost three orders of magnitude higher, with a lower limit of $\zeta_{\rm p}(\text{H})\!>\!10^{-13}\,$s$^{-1}$ \citep{Gonzalez2013,Gonzales2018}, similar to values derived towards the Milky Way's Galactic centre region, indicative of higher levels of energetic phenomena at the centres of these galaxies in comparison to their disks. This is also the case for the diffuse, atomic-gas-rich lines of sight studied towards PKS~1830$-$211, for which the column densities derived for atomic gas tracers such as ArH$^+$ \citep{Mueller2015}, p-H$_{2}$O$^+$, and OH$^+$ \citep{Mueller2016} are roughly an order of magnitude greater than those we derive towards NGC~253 and NGC~4945, while the column densities of molecular gas tracers such as HF are comparable between both studies \citep[see Table~2 of ][]{Mueller2016}. This results in cosmic-ray ionisation rates of $\sim 2\times10^{-14}~{\rm s}^{-1}$ and $\sim 3\times 10^{-15}~{\rm s}^{-1}$ for the SW and NE lines of sight, respectively.
In addition, while we quote LOS-averaged values for the cosmic-ray ionisation rates towards both NGC~253 and NGC~4945 using the steady-state analysis of OH$^+$ and H$_{2}$O$^+$, their interpretation is not trivial as the cosmic-ray ionisation rate is likely to vary spatially across these galaxies. This variation is best demonstrated by observations towards different molecular cloud sight lines within the Milky Way by \citet{indriolo2015herschel} and \citet{Jacob2020Arhp} and reflects the proximity of the studied molecular clouds to the nearest supernova remnants, which are likely sites for particle acceleration, amongst other propagation effects. Therefore we expect the cosmic-ray ionisation rates determined from compact regions of star formation, like the central molecular zones of the galaxies studied here, to be orders of magnitude greater. In general, with star-formation rates higher than that of the Milky Way by almost two orders of magnitude, one would anticipate higher cosmic-ray ionisation rates towards both NGC~253 and NGC~4945, since the injection of cosmic-rays is governed by star formation, with acceleration provided by associated shocks in supernova remnants and stellar wind bubbles. Observationally, higher rates of cosmic-ray ionisation have been estimated towards the centre of NGC~253 through the detection of gamma-rays at TeV energies using ground-based Cherenkov telescopes \citep{Acero2009}. Exhibiting both starburst and AGN activity, the nuclear environment of NGC~4945 is more complex, as a significant portion of the injected cosmic-ray energy is used in the production of pions and $\gamma$-rays. \citet{Wojaczynski2017A} have shown that Seyfert galaxies like NGC~4945 have $\gamma$-ray luminosities larger than the calorimetric limit, which sets the total cosmic-ray energy available for pion production assuming that each supernova explosion injects 10$^{50}$~erg of cosmic-ray energy, while that of starbursts like NGC~253 is 50\,\% lower.
Moreover, while ionisation by cosmic-rays plays an important role in heating the gas in the nuclear environments of galaxies \citep{Bradford2003}, heating by X-rays \citep{Usero2004}, by stellar UV radiation \citep[in widespread photodissociation regions, PDRs;][]{Hollenbach1997}, and by dynamical shocks \citep{Burillo2001}, to name a few, can all be prominent in these regions, and it is not possible to discriminate between their contributions merely by an analysis such as ours.
\section{Conclusions} \label{sec:conclusions}
Well established as a tracer of purely atomic gas, the noble gas species ArH$^+$ had, outside of the Milky Way, only been detected towards the intermediate-redshift (${z = 0.89}$) gravitational lens system PKS~1830$-$211. In this work we extend the notion that ArH$^+$ is a ubiquitous tracer of diffuse atomic gas to external galaxies by presenting observations of its ${J\!=\!1-0}$ transition.
We present the detection of ArH$^+$ in absorption towards two nearby galaxies, NGC~253 and NGC~4945, and its non-detection towards Arp~220. In addition to ArH$^+$, we also report the detection of the ${J=3/2-1/2}$ transition of p-H$_{2}$O$^+$ at 607\,GHz towards the former. We compare the observed profiles of the ArH$^+$ and p-H$_{2}$O$^+$ lines with those of the H{\small I} 21\,cm line, lines from the well-known atomic gas tracers OH$^+$ and o-H$_{2}$O$^+$, and the molecular gas tracer HF. Using the cosmic-ray ionisation rates derived by analysing the steady-state chemistry of OH$^+$ and H$_{2}$O$^+$, and by assuming that the cloud populations bearing ArH$^+$ are exposed to the same cosmic-ray ionisation rates, we derive molecular fractions of the gas traced by ArH$^+$ towards both galaxies between ${\sim\!10^{-3}}$ and ${\sim\!10^{-2}}$. This is consistent with estimates made within the Milky Way by \citet{schilke2014ubiquitous, neufeld2016chemistry, Jacob2020Arhp}. From the detection of p-H$_{2}$O$^+$ we estimate H$_{2}$O$^+$ OPRs of 7.0 and 3.5 towards NGC~253 and NGC~4945, and subsequently derive values for the nuclear-spin temperature that are greater than 15 and 24\,K, respectively. The variation in the H$_{2}$O$^+$ OPR with cosmic-ray ionisation rate follows the trend displayed by sight line components towards star-forming regions within the Milky Way, other than those towards the Galactic centre. This is likely because our observations are not sensitive to spatial variations along the sight lines studied and only probe the average properties.
In order to comment on the ubiquity and chemistry of ArH$^+$ and its possible role in probing the energetics of extragalactic sources, searches for ArH$^+$ need to be extended to a wider source sample. Since its chemistry and properties are widely studied, alongside those of OH$^+$ and o-H$_{2}$O$^+$, it may be promising to search for ArH$^+$ towards a range of extragalactic sources hosting various levels of nuclear activity; nuclear activity level was, in fact, a selection criterion for the Herschel EXtraGALactic (HEXGAL) survey, which aimed to study the physical and chemical composition of the ISM in galactic nuclei using HIFI spectroscopy.
\begin{acknowledgements}
This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under the project id M9519C\_103. APEX is a collaboration between the Max-Planck-Institut f\"{u}r Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. We would like to express our gratitude to the APEX staff and science team for their continued assistance in carrying out the observations presented in this work. We would like to thank Dr.~Paul Dagdigian for providing us with the ArH$^+$ collisional rate coefficients and the anonymous referee for their careful review of the article and valuable input. We are thankful to the developers of the C++ and Python libraries for making them available as open-source software. In particular, this research has made use of the NumPy \citep{numpy}, SciPy \citep{scipy} and matplotlib \citep{matplotlib} packages.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{introduction}
Count data are random variables that assume non-negative integer values
and represent the number of times an event occurs in the observation
period. This kind of data is common in crop sciences: the number
of grains produced by a plant, the number of fruits produced by a tree,
or the number of insects captured by a trap, to cite but a few. Since the
seminal paper of \citet{Nelder1972} where the class of the generalized
linear models (GLMs) was introduced, the analysis of count data often
employs the Poisson regression model. This model provides a suitable
strategy for the analysis of count data and an efficient Newton scoring
algorithm can be used for fitting the model.
In spite of the advantages of the Poisson regression model, the Poisson
distribution has only one parameter, which represents both the
expectation and variance of the count random variable. This restriction
on the relationship between the expectation and variance induced by the
Poisson distribution is referred to as equidispersion. However, in
practical data analysis such a restriction can be unsuitable, since the
observed data can present variance either smaller or larger than the
expectation, leading to the cases of under- and overdispersion,
respectively. The main problem of the application of the Poisson
regression model to non-equidispersed count data is that the standard
errors associated with the regression coefficients are inconsistently
estimated, which in turn can lead to misleading
inferences~\citep{Winkelmann1995, Bonat2017}.
In practice, overdispersion is largely reported in the literature and
may occur due to the absence of relevant covariates, heterogeneity of
sampling units, different observational periods/regions not considered
in the analysis, and excess of zeros~\citep{Hinde1998}. The case of
underdispersion is less reported in the literature; however, it has been
of increasing interest in the statistical community. The processes that
reduce the variability are not as well-known as those leading to extra
variability. For this reason, there are few approaches to deal with
underdispersed count data. The explanatory mechanisms leading to
underdispersion may be related to the underlying stochastic process
generating the count data. When the time between events is not
exponentially distributed, the number of events can be over or
underdispersed, this process motivated the class of duration dependence
models~\citep{Winkelmann1995}. Another possible explanation of
underdispersion is when the responses correspond to order statistics of
component observations, such as maxima of Poisson distributed
counts~\citep{Steutel1989}.
The strategies for constructing alternative count distributions are
related to the causes of the non-equidispersion. Specifically, for the
overdispersion case, Poisson mixture models are widely applied. One
popular example of this approach is the negative-binomial model, where
the expectation of the Poisson distribution is assumed to be gamma
distributed. However, other distributions can be used to represent the
random variation. For example, the Poisson-Tweedie
model~\citep{Bonat2017} and its special cases, such as the Poisson
inverse-Gaussian and Neyman Type A, assume that the random effects are
Tweedie, inverse-Gaussian and Poisson distributed, respectively. The
Gamma-Count distribution assumes a gamma distribution for the time
between events, thus it can deal with underdispersed as well as
overdispersed count data~\citep{Zeviani2014}. The COM-Poisson
distribution is obtained by a generalization of the Poisson distribution
allowing for a non-linear decrease in the ratios of successive
probabilities~\citep{Shmueli2005}.
The COM-Poisson distribution is a member of the exponential family and
it has the Poisson and geometric distributions as special cases, as well
as the Bernoulli distribution as a limiting case. It can deal with both
under- and overdispersed count data. Some recent applications of the
COM-Poisson distribution include \citet{Lord2010} for the analysis of
traffic crash data, \citet{Sellers2010} for the modelling of airfreight
breakage and book purchases, and \citet{Huang2017} for the analysis of
attendance data, takeover bids and cotton boll counts. The main
disadvantage of the COM-Poisson regression model as presented
in~\citet{Sellers2010} is that its location parameter does not
correspond to the expectation of the distribution, which complicates the
interpretation of regression models and means that they are not
comparable with standard approaches such as the Poisson and negative
binomial regression models. \citet{Huang2017} proposed a
mean-parametrization of the COM-Poisson distribution in order to avoid
such an issue. In this approach the mean parameter is obtained by
solving a non-linear equation defined by an infinite sum.
Consequently, it is computationally demanding and liable to numerical
problems.
The main goal of this article is to propose a novel COM-Poisson
parametrization based on the mean approximation presented by
\citet{Shmueli2005}. In this parametrization, the probability mass
function is written in terms of $\mu$ and $\phi$, where $\mu$ is the
expectation and $\phi$ is a dispersion parameter. In contrast to the
original parametrization, the proposed parametrization leads to
regression coefficients directly associated with the expectation of the
response variable, as usual in the context of generalized linear models.
Consequently, the obtained regression coefficients are comparable with
the ones obtained by standard approaches, such as the Poisson and
negative binomial regression models. Furthermore, our novel COM-Poisson
parametrization is simpler than the strategy proposed
by~\cite{Huang2017}, since it does not require any numerical method for
solving non-linear equations, and we show that attractive properties,
such as the orthogonality between dispersion and regression parameters
and the consistency and asymptotic normality of the maximum likelihood
estimators, are retained.
This paper is organized as follows. In \autoref{background} we present
the COM-Poisson distribution and the strategy proposed
by~\cite{Huang2017}. The proposed reparametrization, assessment of
moment approximations, and study of distribution properties are
considered in \autoref{reparametrization}. In
\autoref{estimation-and-inference} we present estimation and
inference for the novel COM-Poisson regression model based on the
likelihood paradigm. The properties of the maximum likelihood estimators
and the orthogonality property are assessed
in~\autoref{simulation-study} through simulation studies. We illustrate
the application of the new COM-Poisson regression model in
\autoref{case-studies} through the analysis of three data sets. We
provide an \texttt{R} implementation of
the COM-Poisson and reparameterized COM-Poisson regression models as
well as the analyzed data sets in the supplementary
material.\footnote{Available on
\texttt{\url{http://www.leg.ufpr.br/~eduardojr/papercompanions}}
\label{papercompanion}.}.
\section{Background}
\label{background}
The COM-Poisson distribution generalizes the Poisson distribution in
terms of the ratio between the probabilities of two consecutive events
by adding an extra dispersion parameter~\citep{Sellers2010}. Let $Y$ be
a COM-Poisson random variable, then
$$\frac{\Pr(Y=y-1)}{\Pr(Y=y)} = \frac{y^\nu}{\lambda}$$ while for the
Poisson distribution this ratio is $\frac{y}{\lambda}$ corresponding to
$\nu=1$. This extra flexibility allows the COM-Poisson distribution to
deal with non-equidispersed count data. The probability mass function of the
COM-Poisson distribution is given by
\begin{equation}
\label{eqn:pmf-cmp}
\Pr(Y=y \mid \lambda, \nu) = \frac{\lambda^y}{(y!)^\nu Z(\lambda, \nu)},
\qquad y = 0, 1, 2, \ldots,
\end{equation}
where $\lambda > 0$, $\nu \geq 0$ and $Z(\lambda, \nu) =
\sum_{j=0}^\infty \frac{\lambda^j}{(j!)^\nu}$ is a normalizing
constant that depends on both parameters.
The $Z(\lambda, \nu)$ series diverges mathematically only when $\nu=0$
and $\lambda \geq 1$; however, for small values of $\nu$ combined with
large values of $\lambda$, the terms of the sum grow so fast that it
overflows numerically. \autoref{tab:convergenceZ} shows the values of
the normalizing constant truncated at $j = 1000$, that is,
$\sum_{j=0}^{1000}\lambda^j/(j!)^\nu$, for different values of
$\lambda$ and $\nu$.
\begin{table}[ht]
\centering
\caption{Values of the normalizing constant $Z(\lambda, \nu)$ (numerically computed) for values of $\lambda$ (0.5 to 50) and $\nu$ (0 to 1)}
\label{tab:convergenceZ}
\begingroup\small
\begin{tabularx}{\textwidth}{C|CCCCCC}
\toprule
& \multicolumn{6}{c}{$\bm{\lambda}$} \\
$\bm{\nu}$ & 0.5 & 1 & 5 & 10 & 30 & 50 \\
\midrule
0 & 2.00 & divergent$^{*\,\,\,}$ & divergent$^{*\,\,\,}$ & divergent$^{*\,\,\,}$ & divergent$^{*\,\,\,}$ & divergent$^{*\,\,\,}$ \\
0.1 & 1.92 & 7.64 & divergent$^{**}$ & divergent$^{**}$ & divergent$^{**}$ & divergent$^{**}$ \\
0.2 & 1.86 & 5.25 & 3.17e+273 & divergent$^{**}$ & divergent$^{**}$ & divergent$^{**}$ \\
0.3 & 1.81 & 4.32 & 1.60e+29 & 2.54e+282 & divergent$^{**}$ & divergent$^{**}$ \\
0.4 & 1.77 & 3.80 & 4.71e+10 & 1.33e+56 & divergent$^{**}$ & divergent$^{**}$ \\
0.5 & 1.74 & 3.47 & 1.34e+06 & 3.67e+22 & 3.32e+196 & divergent$^{**}$ \\
0.6 & 1.72 & 3.23 & 2.05e+04 & 4.99e+12 & 1.73e+76 & 4.63e+177 \\
0.7 & 1.70 & 3.06 & 2.37e+03 & 3.69e+08 & 4.93e+39 & 6.93e+81 \\
0.8 & 1.68 & 2.92 & 6.49e+02 & 2.70e+06 & 5.09e+24 & 3.43e+46 \\
0.9 & 1.66 & 2.81 & 2.74e+02 & 1.47e+05 & 1.80e+17 & 2.19e+30 \\
1 & 1.65 & 2.72 & 1.48e+02 & 2.20e+04 & 1.07e+13 & 5.18e+21 \\
\bottomrule
\end{tabularx}
\endgroup
\footnotesize \raggedright
divergent$^{*}$ is a mathematically divergent series; and
divergent$^{**}$ a numerically divergent series.
\end{table}
In the first line of \autoref{tab:convergenceZ} we have mathematically
divergent series, because $\sum_{j=0}^\infty\lambda^j$ is divergent when
$\lambda \geq 1$. In the other cases the series diverges numerically,
because its value exceeds the largest representable floating-point
number. In both situations it is impossible to compute probabilities;
this therefore acts as a restriction on the parameter space.
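As an illustrative sketch of this numerical behaviour (the paper's own implementation is in \texttt{R}; the helper name below is hypothetical), the truncated constant used in \autoref{tab:convergenceZ} can be computed on the log scale, which delays, but cannot avoid, overflow:

```python
import math

def com_poisson_z(lam, nu, max_terms=1000):
    """Truncated normalizing constant Z(lambda, nu) = sum_j lambda^j / (j!)^nu.

    Terms are accumulated on the log scale; once the largest log-term
    exceeds ~709, exp() would overflow an IEEE double, so inf is returned.
    """
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms + 1)]
    m = max(log_terms)
    if m > 709:  # exp(m) overflows double precision
        return float("inf")
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

# nu = 1 recovers the Poisson case, Z(lambda, 1) = exp(lambda):
print(round(com_poisson_z(1.0, 1.0), 2))  # 2.72, as in the table
# nu = 0 with lambda < 1 is a convergent geometric series:
print(round(com_poisson_z(0.5, 0.0), 2))  # 2.00
```

Note that for $\nu = 0$ and $\lambda \geq 1$ the truncated sum is still finite but meaningless, since the underlying series diverges mathematically.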
An undesirable feature of the COM-Poisson distribution is that the
moments cannot be obtained in closed form. \citet{Shmueli2005} and
\citet{Sellers2010} using an asymptotic approximation for
$Z(\lambda,\nu)$, showed that the expectation and variance of the
COM-Poisson distribution can be approximated by
\begin{equation}
\label{eqn:mean-approx}
\text{E}(Y) \approx \lambda^{1/\nu} - \frac{\nu - 1}{2\nu} \qquad
\textrm{and} \qquad
\textrm{Var}(Y) \approx \frac{\lambda^{1/\nu}}{\nu}\,,
\end{equation} which is particularly accurate for $\nu \leq 1$ or
$\lambda > 10^{\nu}$. The authors also argue that the variance can be
approximated by $\frac{1}{\nu}\text{E}(Y)$. In
\autoref{reparametrization} we assess
the accuracy of these approximations.
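As a quick numerical check of \autoref{eqn:mean-approx} (an illustrative sketch with hypothetical helper names, not the paper's \texttt{R} implementation), the closed-form approximation can be compared against the mean computed from a truncated probability mass function:

```python
import math

def numeric_mean(lam, nu, max_terms=500):
    """E(Y) from a truncated COM-Poisson pmf (moderate lam and nu only,
    since the unnormalized weights are accumulated without log-scale care)."""
    w = [math.exp(j * math.log(lam) - nu * math.lgamma(j + 1))
         for j in range(max_terms)]
    return sum(j * wj for j, wj in enumerate(w)) / sum(w)

def approx_mean(lam, nu):
    """Closed-form approximation: lambda^(1/nu) - (nu - 1)/(2 nu)."""
    return lam ** (1.0 / nu) - (nu - 1.0) / (2.0 * nu)

# Exact in the Poisson case (nu = 1) and accurate when lambda > 10^nu:
print(abs(numeric_mean(4.0, 1.0) - approx_mean(4.0, 1.0)) < 1e-8)   # True
print(abs(numeric_mean(30.0, 0.8) - approx_mean(30.0, 0.8)) < 0.5)  # True
```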
The COM-Poisson regression model was proposed by~\citet{Sellers2010},
using the original parametrization. In this case, the COM-Poisson
regression model is $\log(\lambda_i) = \bm{x}_i^\top \bm{\beta}$ and the
relationship between E$(Y_i)$ and $\bm{x}_i$ is modelled indirectly.
\citet{Huang2017} shows how to model directly the expectation of the
COM-Poisson distribution in a suitable parametrization. In
\autoref{eqn:pmf-cmp}, \Citeauthor{Huang2017} writes the parameter
$\lambda$ as a function of $\mu$ and $\nu$, given by the solution to
\begin{equation*}
  \sum_{j=0}^{\infty} (j - \mu) \frac{\lambda^j}{(j!)^\nu} = 0\,.
\end{equation*}
Thus the mean-parametrized COM-Poisson regression model is $\log(\mu_i)
= \bm{x}_i^\top \bm{\beta}$. In this article, we propose an alternative
mean-parametrization of the COM-Poisson distribution in order to avoid the
limitations of the original parametrization and the numerical complexity
of the \Citeauthor{Huang2017}'s approach.
\section{Reparametrized COM-Poisson regression model}
\label{reparametrization}
The proposed reparametrization of COM-Poisson models is based on
the mean approximation (\autoref{eqn:mean-approx}). Using this
approximation, we introduce a new parameter $\mu$,
\begin{equation}
\label{eqn:repar-cmp}
\mu = h(\lambda, \nu) = \lambda^{1/\nu} - \frac{\nu - 1}{2\nu}
\quad \Rightarrow \quad
\lambda = h^{-1}(\mu, \nu) = \left (\mu +
\frac{(\nu - 1)}{2\nu} \right )^\nu.
\end{equation}
The dispersion parameter is taken on the log scale for computational
convenience, thus $\phi = \log(\nu)$, $\phi \in \mathbb{R}$. The
interpretation of $\phi$ is the same as that of $\nu$, but on the log
scale. For $\phi < 0$ and $\phi > 0$ we have the overdispersed and
underdispersed cases, respectively. When $\phi=0$ we have the Poisson
distribution as a special case.
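The mapping in \autoref{eqn:repar-cmp} and its inverse can be sketched as follows (hypothetical helper names; note that $\mu + (\nu - 1)/(2\nu)$ must remain positive, which restricts small $\mu$ combined with strong overdispersion):

```python
import math

def lambda_from_mu(mu, phi):
    """lambda = (mu + (nu - 1)/(2 nu))^nu, with nu = exp(phi)."""
    nu = math.exp(phi)
    base = mu + (nu - 1.0) / (2.0 * nu)
    if base <= 0.0:
        raise ValueError("mu too small for this phi: base must be positive")
    return base ** nu

def mu_from_lambda(lam, phi):
    """Approximate mean: mu = lambda^(1/nu) - (nu - 1)/(2 nu)."""
    nu = math.exp(phi)
    return lam ** (1.0 / nu) - (nu - 1.0) / (2.0 * nu)

# The two maps are exact inverses, and phi = 0 gives lambda = mu (Poisson):
assert abs(mu_from_lambda(lambda_from_mu(10.0, 0.7), 0.7) - 10.0) < 1e-10
assert abs(lambda_from_mu(10.0, 0.0) - 10.0) < 1e-12
```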
In order to assess the accuracy of the moment approximations
(\autoref{eqn:mean-approx}), \autoref{fig:approx-plot} presents the
quadratic errors for (a) expectation and (b) variance. The
quadratic errors were obtained by $[\mu - \text{E}(Y)]^2$ for the
expectation and by $[ \mu \exp(-\phi) - \text{Var}(Y)]^2$ for the
variance. In both cases $\text{E}(Y)$ and Var$(Y)$ were computed
numerically. The dotted lines represent the borders of the regions
$\nu \leq 1$ and $\lambda > 10^\nu$, expressed on the $\mu$ and $\phi$
scales.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=.8\textwidth]{figure/approx-plot-1}
}
\caption[Quadratic errors for the approximation of the (a) expectation and (b) variance]{Quadratic errors for the approximation of the (a) expectation and (b) variance. Dotted lines represent the restriction for suitable approximations given by \cite{Shmueli2005}.}\label{fig:approx-plot}
\end{figure}
\end{knitrout}
The results in \autoref{fig:approx-plot} show that the mean
approximation is accurate, the largest quadratic error is
0.038 for the parameter values evaluated. For the
variance approximation, the largest quadratic error was
63.903 and it occurs for negative values of
$\phi$. Interestingly, the errors are larger for negative values of
$\phi$ and present no clear relation with $\mu$, as opposed to the
regions given by \citet{Shmueli2005} ($\phi \leq 0$ and
$\mu > 10 - \frac{\exp(\phi) - 1}{2\exp(\phi)}$).
The results presented in \autoref{fig:approx-plot}(a) support the
proposed reparametrization. Replacing $\lambda$ and $\nu$ as function of
$\mu$ and $\phi$ in \autoref{eqn:pmf-cmp}, the reparametrized
distribution takes the form
\begin{equation}
\label{eqn:pmf-cmpmu}
\Pr(Y=y \mid \mu, \phi) =
\left ( \mu +\frac{ e^\phi-1}{2e^\phi} \right )^{ye^\phi}
\frac{(y!)^{-e^\phi}}{Z(\mu, \phi)},
\qquad y = 0, 1, 2, \ldots\,,
\end{equation} where $\mu > 0$. We denote this distribution as
COM-Poisson$_\mu$. In \autoref{fig:pmf-cmp}, we show the shapes of
COM-Poisson$_\mu$ distribution.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/pmf-cmp-1}
}
\caption[Shapes of the COM-Poisson distribution for different parameter values]{Shapes of the COM-Poisson distribution for different parameter values.}\label{fig:pmf-cmp}
\end{figure}
\end{knitrout}
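As a short check that $\phi = 0$ indeed recovers the Poisson case, substituting $e^{\phi} = 1$ into \autoref{eqn:pmf-cmpmu} gives

```latex
\begin{equation*}
  \Pr(Y = y \mid \mu, \phi = 0)
    = \left(\mu + \frac{1 - 1}{2}\right)^{y} \frac{(y!)^{-1}}{Z(\mu, 0)}
    = \frac{\mu^{y}}{y!\, e^{\mu}},
  \qquad \text{since} \quad
  Z(\mu, 0) = \sum_{j = 0}^{\infty} \frac{\mu^{j}}{j!} = e^{\mu},
\end{equation*}
```

that is, the Poisson mass function with mean $\mu$.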
In order to explore the flexibility of the COM-Poisson model to deal
with real count data, we compute indexes for dispersion (DI),
zero-inflation (ZI) and heavy-tail (HT), which are respectively given by
\begin{equation*}
\text{DI} = \frac{\text{Var}(Y)}{\text{E}(Y)}, \quad
\text{ZI} = 1 + \frac{\log \Pr(Y = 0)}{\text{E}(Y)}
\quad \text{and} \quad
\text{HT} = \frac{\Pr(Y=y+1)}{\Pr(Y=y)}\quad \text{for} \quad y \to
\infty.
\end{equation*}
These indexes are defined in relation to the Poisson distribution. Thus,
the dispersion index indicates overdispersion for $\text{DI} > 1$,
underdispersion for $\text{DI} < 1$ and equidispersion for
$\text{DI} = 1$. The zero-inflation index indicates zero-inflation for
$\text{ZI} > 0$, zero-deflation for $\text{ZI} < 0$ and no excess of
zeros for $\text{ZI} = 0$. Finally, the heavy-tail index indicates a
heavy-tailed distribution when $\text{HT} \to 1$ as $y \to \infty$.
These indexes are discussed by \citet{Bonat2017} to study the
flexibility of Poisson-Tweedie distribution, and \citet{Puig2006} to
describe count distributions. Regarding the COM-Poisson$_\mu$
distribution, in \autoref{fig:indexes-plot} we present the relationship
between (a) mean and variance, (b--c) the dispersion and zero-inflation
indexes for different values of $\mu$ and $\phi$, and (d) heavy-tail
index for $\mu=25$ and different values of $y$ and $\phi$.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/indexes-plot-1}
}
\caption[Indexes for COM-Poisson distribution]{Indexes for COM-Poisson distribution. (a) Mean and variance relationship, (b--d) dispersion, zero-inflation and heavy-tail indexes for different parameter values. Dotted lines represents the Poisson special case.}\label{fig:indexes-plot}
\end{figure}
\end{knitrout}
\autoref{fig:indexes-plot} shows that the indexes are slightly dependent
on the expected values and tend to stabilize for large values of
$\mu$. Consequently, the mean and variance relationship in
\autoref{fig:indexes-plot}(a) is approximately linear, with slope
determined by the dispersion parameter $\phi$. In terms of moments,
this leads to a specification indistinguishable from the quasi-Poisson
regression model. The dispersion indexes in
\autoref{fig:indexes-plot}(b) show that the distribution is suitable
for dealing with both under- and overdispersed counts. For the
parameter values evaluated, the largest DI was 4.213 and the
smallest was 0.168. \autoref{fig:indexes-plot}(c) shows that the
COM-Poisson distribution can handle a limited amount of zero-inflation
in cases of overdispersion ($\phi < 0$). On the other hand, for
$\phi > 0$ (underdispersion) the distribution is suitable for dealing
with zero-deflated counts. The heavy-tail indexes in
\autoref{fig:indexes-plot}(d) indicate that the distribution is in
general light-tailed, i.e. $\text{HT} \to 0$ for $y \to \infty$.
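These properties can be verified numerically; the sketch below (hypothetical helper name, truncated series, moderate parameter values only) computes DI from the reparametrized mass function and recovers equi-, over- and underdispersion for $\phi = 0$, $\phi < 0$ and $\phi > 0$:

```python
import math

def dispersion_index(mu, phi, max_terms=400):
    """DI = Var(Y) / E(Y) for the COM-Poisson_mu distribution,
    computed from a truncated probability mass function."""
    nu = math.exp(phi)
    log_base = math.log(mu + (nu - 1.0) / (2.0 * nu))
    log_w = [nu * (j * log_base - math.lgamma(j + 1))
             for j in range(max_terms)]
    m = max(log_w)
    w = [math.exp(t - m) for t in log_w]  # unnormalized, overflow-safe
    z = sum(w)
    mean = sum(j * wj for j, wj in enumerate(w)) / z
    second = sum(j * j * wj for j, wj in enumerate(w)) / z
    return (second - mean * mean) / mean

print(round(dispersion_index(5.0, 0.0), 3))  # 1.0 (Poisson case)
print(dispersion_index(5.0, -1.0) > 1.0)     # True: overdispersion
print(dispersion_index(5.0, 1.0) < 1.0)      # True: underdispersion
```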
\section{Estimation and Inference}
\label{estimation-and-inference}
In this section we describe the estimation and inference for the
two forms of the COM-Poisson regression model based on the maximum
likelihood method. The log-likelihood function for a set of independent
observations $y_i$, $i=1,2,\ldots,n$ from the COM-Poisson$_\mu$
distribution has the following form,
\begin{equation}
\label{eqn:ll-rcmp}
\ell = \ell(\bm{\beta}, \phi \mid \bm{y}) =
e^\phi \left [
\sum_{i=1}^n y_i
\log \left( \mu_i + \frac{e^\phi-1}{2e^\phi} \right ) -
\sum_{i=1}^n \log(y_i!) \right ] -
\sum_{i=1}^n \log(Z(\mu_i, \phi)),
\end{equation}
where $\mu_i = \exp(\bm{x}_i^\top\bm{\beta})$,
$\bm{x}_i^\top = (x_{i1},\, x_{i2},\, \ldots,\, x_{ip})$ is a vector of
known covariates for the $i$-th observation, and $(\bm{\beta},\, \phi)
\in \mathbb{R}^{p+1}$. The normalizing constant $Z(\mu_i, \phi)$ is
given by
\begin{equation}
\label{eqn:infseries}
Z(\mu_i, \phi) = \sum_{j=0}^\infty \left [ \left (
\mu_i + \frac{e^\phi - 1}{2e^\phi} \right )^{je^\phi}
\frac{1}{(j!)^{e^\phi}} \right ].
\end{equation}
The evaluation of the log-likelihood function for each observation
involves the computation of the infinite series
(\autoref{eqn:infseries}). Thus, the fitting procedure is computationally
expensive for regions of the parameter space where the convergence of
the infinite sum is slow.
Parameter estimation requires the numerical maximization of
\autoref{eqn:ll-rcmp}. Since the derivatives of $\ell$ cannot be
obtained in closed forms, we adopted the \texttt{BFGS}
algorithm~\citep{Nocedal1995} as implemented in the function
\texttt{optim()} for the statistical software \texttt{R}
\citep{Rcore2017}. Standard errors for the regression coefficients are
obtained based on the observed information matrix
$\mathcal{I}(\bm{\theta})$, where
$\mathcal{I}(\bm{\theta}) = -\mathcal{H}(\bm{\theta})$ and the Hessian
matrix $\mathcal{H}(\bm{\theta})$ is computed numerically. Confidence
intervals for $\hat{\mu}_i$ are
obtained by using the delta method~\citep[p. 89]{Pawitan2001}.
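The structure of \autoref{eqn:ll-rcmp} can be sketched as follows (an illustrative Python version with a truncated normalizing constant and a hypothetical helper name; the paper's implementation is in \texttt{R}). At $\phi = 0$ it must coincide with the Poisson log-likelihood, which provides a convenient check:

```python
import math

def loglik_cmp_mu(y, mu, phi, max_terms=400):
    """Log-likelihood of the COM-Poisson_mu model for observations y
    with fitted means mu, using a truncated Z(mu_i, phi)."""
    nu = math.exp(phi)
    ll = 0.0
    for yi, mui in zip(y, mu):
        log_base = math.log(mui + (nu - 1.0) / (2.0 * nu))
        log_w = [nu * (j * log_base - math.lgamma(j + 1))
                 for j in range(max_terms)]
        m = max(log_w)
        log_z = m + math.log(sum(math.exp(t - m) for t in log_w))
        ll += nu * (yi * log_base - math.lgamma(yi + 1)) - log_z
    return ll

# phi = 0 collapses to the Poisson log-likelihood:
y, mu = [3, 7, 2], [4.0, 6.5, 1.8]
poisson_ll = sum(yi * math.log(mi) - mi - math.lgamma(yi + 1)
                 for yi, mi in zip(y, mu))
assert abs(loglik_cmp_mu(y, mu, 0.0) - poisson_ll) < 1e-8
```

Passing this function (negated) to a quasi-Newton optimizer such as \texttt{BFGS} mirrors the fitting strategy described above.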
The parameter estimation for the COM-Poisson regression model in the
original parametrization is analogous to the one presented for the
COM-Poisson$_\mu$ distribution, however, it considers
\autoref{eqn:ll-rcmp} in terms of $\lambda$. Even for the standard
COM-Poisson distribution, the dispersion parameter is taken on the log
scale to avoid numerical issues.
In the applications we fitted the quasi-Poisson model
\citep{Wedderburn1974} as a baseline model. This approach is based on a
second-moment assumption that allows more flexibility to the model. In
this case the variance of the response variable is specified as
$\textrm{Var}(Y_i)=\sigma \mu_i$, where $\sigma$ is an additional
dispersion parameter. These
models are fitted in the \texttt{R} software using the function
\texttt{glm(..., family = quasipoisson)}.
\section{Simulation study}
\label{simulation-study}
In this section we present a simulation study to assess the properties
of the maximum likelihood estimators and the orthogonality of the
reparametrized model, as well as the flexibility of the COM-Poisson
regression model for dealing with non-equidispersed count data.
We considered average counts varying from $3$ to $27$ according to a
regression model with a continuous and a categorical covariate. The
continuous covariate~($\bm{x}_1$) was generated as a sequence from $0$
to $1$ and of length equal to the sample size. Similarly, the categorical
covariate~($\bm{x}_2$) was generated as a sequence of three values each
one repeated $n/3$ times (rounding up when required), where $n$ denotes
the sample size. Thus, the expectation of the COM-Poisson random
variable is given by
$\bm{\mu} = \exp(\beta_0 + \beta_1 \bm{x}_1 + \beta_{21} \bm{x}_{21} +
\beta_{22} \bm{x}_{22})$,
where $\bm{x}_{21}$ and $\bm{x}_{22}$ are dummy variables representing the levels
of $\bm{x}_2$. The regression coefficients were fixed at the values,
$\beta_0 = 2$, $\beta_1 = 0.5$, $\beta_{21} = 0.8$ and
$\beta_{22} = -0.8$.
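With these fixed coefficients, the stated range of average counts (roughly 3 to 27) follows directly from the extremes of the linear predictor over the design region, as the short check below illustrates:

```python
import math

# Fixed regression coefficients of the simulation study
b0, b1, b21, b22 = 2.0, 0.5, 0.8, -0.8

# Extremes of mu = exp(b0 + b1*x1 + b21*x21 + b22*x22) over x1 in [0, 1]
# and the three factor levels (reference level, level 2, level 3)
mus = [math.exp(b0 + b1 * x1 + eff)
       for x1 in (0.0, 1.0)
       for eff in (0.0, b21, b22)]
print(round(min(mus), 1), round(max(mus), 1))  # 3.3 27.1
```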
We designed four simulation scenarios by considering different values of
the dispersion parameter $\phi = -1.6, -1.0, 0.0$ and $1.8$. Thus, we
have strong and moderate overdispersion, equidispersion, and
underdispersion, respectively. \autoref{fig:justpars} shows the
variation of the average counts (left) and dispersion index (right) for
each value of the dispersion parameter considered in the simulation
study. These configurations allow us to assess the properties of the
maximum likelihood estimators in extreme situations, such as high counts
and low dispersion, and low counts and high dispersion, but also in the
standard case of equidispersion.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/justpars-1}
}
\caption[Average counts (left) and dispersion indexes (right) for each scenario considered in the simulation study]{Average counts (left) and dispersion indexes (right) for each scenario considered in the simulation study.}\label{fig:justpars}
\end{figure}
\end{knitrout}
In order to check the consistency of the estimators we considered four
different sample sizes: $50$, $100$, $300$ and $1000$; generating $1000$
data sets in each case. In \autoref{fig:bias-plot}, we show the bias of
the estimators for each simulation scenario (combination between values
of the dispersion parameter and sample sizes) along with confidence
intervals calculated as the average bias plus and minus $\Phi^{-1}(0.975)$ times
the average standard error. The scales are standardized for each
parameter by dividing the average bias by the average standard error
obtained for the sample of size $50$.
The results in \autoref{fig:bias-plot} show that for all dispersion
levels, both the average bias and standard errors tend to $0$ as the
sample size increases. The estimators for the regression parameters are
unbiased, consistent and their empirical distributions are
symmetric. For the dispersion parameter, the estimator is asymptotically
unbiased; in small samples the parameter is overestimated and the
empirical distribution is slightly right-skewed.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/bias-plot-1}
}
\caption[Distributions of standardized bias (gray box-plots) and average with confidence intervals (black segments) by different sample sizes and dispersion levels]{Distributions of standardized bias (gray box-plots) and average with confidence intervals (black segments) by different sample sizes and dispersion levels.}\label{fig:bias-plot}
\end{figure}
\end{knitrout}
\autoref{fig:coverage-plot} presents the empirical coverage rate of the
asymptotic confidence intervals. The results show that for the
regression parameters the empirical coverage rates are close to the
nominal level of 95\% for sample sizes greater than $100$ and all
simulation scenarios. For the dispersion parameter the empirical
coverage rates are slightly lower than the nominal level; however, they
become closer to the nominal level for large samples. The worst
scenario is when we have a small sample size and strongly overdispersed
counts.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/coverage-plot-1}
}
\caption[Coverage rate based on confidence intervals obtained by quadratic approximation for different sample sizes and dispersion levels]{Coverage rate based on confidence intervals obtained by quadratic approximation for different sample sizes and dispersion levels.}\label{fig:coverage-plot}
\end{figure}
\end{knitrout}
To check the orthogonality property we compute the covariance matrix
between maximum likelihood estimators
$\hat{\bm{\theta}} = (\hat{\bm{\beta}}, \hat{\phi})$, obtained from the observed
information matrix,
Cov$(\hat{\bm{\theta}}) = \mathcal{I}^{-1}(\hat{\bm{\theta}})$.
\autoref{fig:ortho-plot} shows the covariance between regression and
dispersion parameter estimators for each simulation scenario, on
the correlation scale. The correlations are close to zero in all cases
suggesting the orthogonality property for the reparametrized
model. Interestingly, results in the first panel show that
the correlation between $\hat{\beta}_{22}$ and $\hat{\phi}$ is not very close to zero (values
between $-0.4$ and $0.2$) for strong overdispersion ($\phi = -1.6$).
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/ortho-plot-1}
}
\caption[Empirical correlations between regression and dispersion parameters by different sample sizes and dispersion levels]{Empirical correlations between regression and dispersion parameters by different sample sizes and dispersion levels.}\label{fig:ortho-plot}
\end{figure}
\end{knitrout}
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/ortho-surf-1}
}
\caption[Deviance surface contour plots under original and proposed parametrization for four simulated data sets of size 1000 with different dispersion parameters]{Deviance surface contour plots under original and proposed parametrization for four simulated data sets of size 1000 with different dispersion parameters. The ellipses are confidence regions (90, 95 and 99\%), dotted lines are the maximum likelihood estimates, and points are the real parameters used in the simulation.}\label{fig:ortho-surf}
\end{figure}
\end{knitrout}
To illustrate the orthogonality, \autoref{fig:ortho-surf} displays contour plots of the deviance
surfaces for four simulated data sets of size 1000, with $\mu=5$ and different
values of the dispersion parameters. The shapes of the deviance function
show that the proposed parametrization is better for both computation
and asymptotic (normal-based) inference. Furthermore, it is interesting
to note that the deviance function shape under strong overdispersion
($\phi=-1.6$) is not as well behaved as the others; this is due to the
difficulty of the distribution in dealing with strong overdispersion in
low counts (see dispersion index plot in the
\autoref{fig:indexes-plot}). This also explains the results of
Cov$(\hat{\beta}_{22}, \hat{\phi})$ in the first panel of
\autoref{fig:ortho-plot}, since $\beta_{22}$ is negative and associated
with low counts.
\section{Case studies}
\label{case-studies}
In this section, we report three illustrative examples of count data
analysis. We considered as alternative models for the analysis the
standard Poisson regression model, the COM-Poisson model in the two
forms (original and new parametrization) and the quasi-Poisson
regression model. The data sets and \texttt{R} code for their analysis
are available as supplementary material.
\subsection{Artificial defoliation in cotton phenology}
\label{case-cotton}
This example relates to cotton plants (\textit{Gossypium hirsutum})
submitted to five levels of artificial defoliation (\texttt{def}) and
crossed with five growth stages (\texttt{est}). The main goal of this
study was to assess the effect of defoliation levels at different growth
stages of cotton plants on the cotton production, expressed by the
number of bolls produced. The study was conducted in a greenhouse and
the experimental design was completely randomized with five
replicates. This data set was analyzed by~\citet{Zeviani2014} using the
Gamma-Count distribution.
Following~\citet{Zeviani2014}, the linear predictor is given by
$$\log(\mu_{ij}) = \beta_0 + \beta_{1j} \texttt{def}_i + \beta_{2j}
\texttt{def}_i^2$$
where $\mu_{ij}$ is the expected number of cotton bolls for the $i$-th
defoliation level ($i=$ 1: 0\%, 2: 25\%, 3: 50\%, 4: 75\% and 5: 100\%)
and $j$-th growth stage ($j$ = 1: vegetative, 2: flower bud, 3: blossom,
4: boll, 5: boll open), i.e. we have a second-order effect of
defoliation in each growth stage. The parameters estimates and
goodness-of-fit measures for the Poisson, COM-Poisson, COM-Poisson$_\mu$
and quasi-Poisson regression models are presented in
\autoref{tab:coef-cotton}.
\begin{table}[!ht]
\centering \small
\caption{Parameter estimates (Est) and ratio between estimate and
standard error (SE) for the four model strategies for the analysis
of the cotton experiment.}
\label{tab:coef-cotton}
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{2}{c}{Poisson} &
\multicolumn{2}{c}{COM-Poisson} &
\multicolumn{2}{c}{COM-Poisson$_\mu$} &
\multicolumn{2}{c}{Quasi-Poisson} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& Est & Est/SE & Est & Est/SE & Est & Est/SE & Est & Est/SE \\
\midrule
$\phi\,,\,\sigma$ & & & 1.5847 & 12.4166 & 1.5817 & 12.3922 & 0.2410 & \\
$\beta_0$ & 2.1896 & 34.5724 & 10.8967 & 7.7594 & 2.1905 & 74.6397 & 2.1896 & 70.4205 \\
$\beta_{11}$ & 0.4369 & 0.8473 & 2.0187 & 1.7696 & 0.4350 & 1.8194 & 0.4369 & 1.7260 \\
$\beta_{12}$ & 0.2897 & 0.5706 & 1.3431 & 1.2109 & 0.2876 & 1.2227 & 0.2897 & 1.1622 \\
$\beta_{13}$ & $-$1.2425 & $-$2.0581 & $-$5.7505 & $-$3.8858 & $-$1.2472 & $-$4.4202 & $-$1.2425 & $-$4.1921 \\
$\beta_{14}$ & 0.3649 & 0.6449 & 1.5950 & 1.2975 & 0.3500 & 1.3280 & 0.3649 & 1.3135 \\
$\beta_{15}$ & 0.0089 & 0.0178 & 0.0377 & 0.0346 & 0.0076 & 0.0324 & 0.0089 & 0.0362 \\
$\beta_{21}$ & $-$0.8052 & $-$1.3790 & $-$3.7245 & $-$2.7754 & $-$0.8033 & $-$2.9613 & $-$0.8052 & $-$2.8089 \\
$\beta_{22}$ & $-$0.4879 & $-$0.8613 & $-$2.2647 & $-$1.8051 & $-$0.4858 & $-$1.8499 & $-$0.4879 & $-$1.7544 \\
$\beta_{23}$ & 0.6728 & 0.9892 & 3.1347 & 2.0837 & 0.6788 & 2.1349 & 0.6728 & 2.0149 \\
$\beta_{24}$ & $-$1.3103 & $-$1.9477 & $-$5.8943 & $-$3.6567 & $-$1.2875 & $-$4.0951 & $-$1.3103 & $-$3.9672 \\
$\beta_{25}$ & $-$0.0200 & $-$0.0361 & $-$0.0901 & $-$0.0755 & $-$0.0189 & $-$0.0740 & $-$0.0200 & $-$0.0736 \\
\specialrule{0.01em}{0.3em}{0.3em}
LogLik & \multicolumn{2}{c}{$-255.803$} & \multicolumn{2}{c}{$-208.250$} & \multicolumn{2}{c}{$-208.398$} & \multicolumn{2}{c}{$ -$}\\
AIC & \multicolumn{2}{c}{$533.606$} & \multicolumn{2}{c}{$440.500$} & \multicolumn{2}{c}{$440.795$} & \multicolumn{2}{c}{$ -$}\\
BIC & \multicolumn{2}{c}{$564.718$} & \multicolumn{2}{c}{$474.440$} & \multicolumn{2}{c}{$474.735$} & \multicolumn{2}{c}{$ -$} \\
\bottomrule
\end{tabular}
\end{table}
The results presented in \autoref{tab:coef-cotton} show that the
goodness-of-fit measures (log-likelihood, AIC and BIC) are quite similar
for the COM-Poisson and COM-Poisson$_\mu$ models. This suggests that the
reparametrization does not change the model fit, as expected. The
Poisson model is clearly unsuitable, being overly conservative. The
likelihood-ratio statistic comparing the Poisson and COM-Poisson$_{\mu}$
models (twice the difference in log-likelihoods) was $94.811$, which in
turn indicates the better fit of the COM-Poisson$_{\mu}$ model; a
chi-square test supports this conclusion. Finally, the estimated value
of the dispersion parameter $\hat{\phi} = 1.582$ suggests
underdispersion.
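That chi-square comparison can be reproduced from the log-likelihoods in \autoref{tab:coef-cotton}; a minimal sketch follows (the Poisson is the COM-Poisson$_\mu$ with the dispersion parameter fixed, so the test has one degree of freedom, and the $\chi^2_1$ tail probability reduces to a complementary error function):

```python
import math

ll_poisson, ll_cmpmu = -255.803, -208.398   # log-likelihoods from the table

lr = 2 * (ll_cmpmu - ll_poisson)            # likelihood-ratio statistic
p_value = math.erfc(math.sqrt(lr / 2))      # P(chi2_1 > lr)

print(round(lr, 2))                         # 94.81
print(p_value < 1e-6)                       # True
```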
Furthermore, results in \autoref{tab:coef-cotton} also show the
advantage of the COM-Poisson$_\mu$ model, since the estimates are quite
similar to the ones obtained by the Poisson model, whereas estimates
obtained from the COM-Poisson model in the original parametrization are
on a non-interpretable scale. The ratios between estimates and their
respective standard errors for the COM-Poisson models are very close to
the ratios obtained by the quasi-Poisson model. However, it is important
to note that the COM-Poisson model is a fully parametric approach, i.e.\
there is a probability distribution associated with the counts. On the
other hand, the quasi-Poisson model is a specification based only on
second-moment assumptions.
\autoref{fig:pred-cotton} presents the observed and fitted values with
confidence intervals (95\%) as a function of the defoliation level for
each growth stage. The fitted values are the same for the Poisson and
COM-Poisson$_{\mu}$ models; however, the confidence intervals are larger
for the Poisson model because of the equidispersion assumption. The
results from the COM-Poisson$_{\mu}$ model are consistent with those
from the Gamma-Count model~\citep{Zeviani2014},
Poisson-Tweedie~\citep{Bonat2017} and the alternative parametrization of
the COM-Poisson distribution proposed by~\citet{Huang2017}. In all
strategies the models indicated underdispersion and significant effects
of defoliation for the vegetative, blossom and boll growth stages.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=1\textwidth]{figure/pred-cotton-1}
}
\caption[Scatterplots of the observed data and curves of fitted values with 95\% confidence intervals as functions of the defoliation level for each growth stage]{Scatterplots of the observed data and curves of fitted values with 95\% confidence intervals as functions of the defoliation level for each growth stage.}\label{fig:pred-cotton}
\end{figure}
\end{knitrout}
In order to assess the relation between $\bm{\mu}$ and $\phi$ in the
COM-Poisson$_\mu$ parametrization, \autoref{tab:corr-cotton} presents
the empirical correlations between the regression and dispersion
parameters, as computed by the asymptotic covariance matrix of the
estimators, i.e.\ the inverse of the observed information. The
correlations are practically null for the COM-Poisson$_\mu$
parametrization. On the other hand, for the original parametrization
the correlations are quite large, in particular for the parameter
$\beta_0$ (due to the effects parametrization of the linear
predictor). This result explains the better performance of the
maximization algorithm under the new parametrization. It is important
to note that the initial values for the \texttt{BFGS} algorithm are
provided by the Poisson model, so in the COM-Poisson$_{\mu}$ model the
initial values are practically the maximum likelihood estimates and the
maximization effort concerns the dispersion parameter $\phi$ only. To
compare the computational times of the two parametrizations we repeated
the fitting $50$ times. In this case the COM-Poisson$_\mu$ fit was, on
average, $38$\% faster than under the original parametrization.
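The correlations in \autoref{tab:corr-cotton} come from standardizing the asymptotic covariance matrix (the inverse of the observed information). That step can be sketched generically as follows; the covariance matrix here is an arbitrary illustrative example, not the fitted one.

```python
import numpy as np

def cov_to_corr(V):
    """Correlation matrix from a covariance matrix: D^{-1/2} V D^{-1/2}."""
    d = 1.0 / np.sqrt(np.diag(V))
    return V * np.outer(d, d)

V = np.array([[4.0, 1.2, -0.6],
              [1.2, 1.0,  0.3],
              [-0.6, 0.3, 9.0]])
R = cov_to_corr(V)
print(np.round(R, 3))  # unit diagonal, off-diagonal entries are correlations
```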
\begin{table}[ht]
\centering
\caption{Empirical correlations between $\hat{\phi}$ and
$\hat{\bm{\beta}}$ for the two parametrizations of COM-Poisson model
fit to underdispersed data.}
\label{tab:corr-cotton}
\begingroup\small
\begin{tabular}{lrrrrrr}
\toprule
& $\hat{\beta}_0$ & $\hat{\beta}_{11}$ & $\hat{\beta}_{12}$ & $\hat{\beta}_{13}$ & $\hat{\beta}_{14}$ & $\hat{\beta}_{15}$ \\
\midrule
COM-Poisson & 0.9952 & 0.2229 & 0.1526 & $-$0.4895 & 0.1614 & 0.0043 \\
COM-Poisson$_\mu$ & 0.0005 & $-$0.0002 & $-$0.0002 & $-$0.0007 & $-$0.0015 & $-$0.0002 \\
\midrule
& $\hat{\beta}_{21}$ & $\hat{\beta}_{22}$ & $\hat{\beta}_{23}$ & $\hat{\beta}_{24}$ & $\hat{\beta}_{25}$ & $ $ \\
\midrule
COM-Poisson & $-$0.3496 & $-$0.2276 & 0.2629 & $-$0.4578 & $-$0.0095 & \\
COM-Poisson$_\mu$ & 0.0001 & 0.0002 & 0.0006 & 0.0018 & 0.0001 & \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\subsection{Soil moisture and potassium doses on soybean culture}
\label{case-soybean}
The second example is a $5\times 3$ factorial experiment in a randomized
complete block design. The aim of this study was to evaluate the effects
of potassium doses (\texttt{K}) applied to soil (0, 0.3, 0.6, 1.2 and
1.8 $\times$ 100mg dm$^{-3}$) and soil moisture (\texttt{umid}) levels
(37.5, 50, 62.5\%) on soybean (\emph{Glycine max}) production. The
experiment was carried out in a greenhouse, in pots with two plants, and
the count variable measured was the number of bean seeds per
pot~\citep{Serafim2012}. \autoref{fig:desc-soy} (left) shows the number
of bean seeds recorded for each combination of potassium dose and
moisture level; note the indication of a quadratic effect of the
potassium levels, as shown by the smoothing curves. Most points in the
sample variance \textit{versus} sample mean dispersion diagram (right)
are above the identity line, suggesting overdispersion (block effect not
yet removed).
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!ht]
{\centering \includegraphics[width=1\textwidth]{figure/desc-soy-1}
}
\caption[Number of bean seeds per pot for each potassium dose and moisture level (left) and sample mean against sample variance of the five replicates for each experimental treatment (right)]{Number of bean seeds per pot for each potassium dose and moisture level (left) and sample mean against sample variance of the five replicates for each experimental treatment (right). Solid lines are the smoothing curves on the left and the least-squares curve on the right.}\label{fig:desc-soy}
\end{figure}
\end{knitrout}
Based on the descriptive analysis (\autoref{fig:desc-soy}), we proposed
the following linear predictor for the analysis of this data set
$$
\log(\mu_{ijk}) = \beta_0 + \gamma_i + \tau_j +
\beta_{1}\texttt{K}_k + \beta_{2}\texttt{K}_k^2 +
\beta_{3j}\texttt{K}_k
$$
where $\gamma_i$ is the effect of the $i$-th block ($i=$ 1: block II,
2: block III, 3: block IV and 4: block V), $\tau_j$ is the effect of
the $j$-th moisture level ($j=$ 1: 50\% and 2: 62.5\%), $\texttt{K}_k$
is the $k$-th potassium dose ($k=$ 1: 0.0, 2: 0.3, 3: 0.6, 4: 1.2, 5:
1.8 $\times$ 100mg dm$^{-3}$) and $\beta_{3j}$ is the interaction
between the first-order potassium effect (\texttt{K}) and the $j$-th
moisture level (\texttt{umid}). \autoref{tab:coef-soy} presents
the estimates, ratio between estimate and standard error and
goodness-of-fit measures for the alternative models.
The results in \autoref{tab:coef-soy} show that the two parametrizations
of the COM-Poisson model presented very similar goodness-of-fit measures
and a better fit than the Poisson model. The likelihood-ratio statistic
comparing the Poisson and COM-Poisson models (twice the difference in
log-likelihoods) was $29.697$, indicating that $\phi$ is significantly
different from zero. The estimate of $\phi$ ($-0.782$) indicates
overdispersion, corroborating the descriptive analysis. Concerning the
regression parameters, the similarities between the models are
analogous to those in the previous section. Both models indicate effects
of block, potassium dose and moisture level; however, the Poisson model
indicates the effects with greater significance, because it does not
take the extra variability into account.
\begin{table}[!ht]
\centering \small
\caption{Parameter estimates (Est) and ratio between estimate and
standard error (SE) for the four model strategies for the analysis of
the soybean experiment.}
\label{tab:coef-soy}
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{2}{c}{Poisson} &
\multicolumn{2}{c}{COM-Poisson} &
\multicolumn{2}{c}{COM-Poisson$_\mu$} &
\multicolumn{2}{c}{Quasi-Poisson} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& Est & Est/SE & Est & Est/SE & Est & Est/SE & Est & Est/SE \\
\midrule
$\phi\,,\,\sigma$ & & & $-$0.7785 & $-$4.7208 & $-$0.7821 & $-$4.7371 & 2.6151 & \\
$\beta_0$ & 4.8666 & 144.2886 & 2.2320 & 6.0415 & 4.8666 & 97.7808 & 4.8666 & 89.2254 \\
$\gamma_{1}$ & $-$0.0194 & $-$0.7302 & $-$0.0089 & $-$0.4939 & $-$0.0194 & $-$0.4950 & $-$0.0194 & $-$0.4516 \\
$\gamma_{2}$ & $-$0.0366 & $-$1.3733 & $-$0.0169 & $-$0.9212 & $-$0.0366 & $-$0.9306 & $-$0.0366 & $-$0.8492 \\
$\gamma_{3}$ & $-$0.1056 & $-$3.8890 & $-$0.0486 & $-$2.4223 & $-$0.1056 & $-$2.6338 & $-$0.1056 & $-$2.4049 \\
$\gamma_{4}$ & $-$0.0916 & $-$3.2997 & $-$0.0422 & $-$2.1020 & $-$0.0917 & $-$2.2366 & $-$0.0916 & $-$2.0405 \\
$\tau_{1}$ & 0.1320 & 3.6471 & 0.0609 & 2.2949 & 0.1320 & 2.4715 & 0.1320 & 2.2553 \\
$\tau_{2}$ & 0.1243 & 3.4319 & 0.0573 & 2.1772 & 0.1243 & 2.3258 & 0.1243 & 2.1222 \\
$\beta_1$ & 0.6160 & 11.0139 & 0.2839 & 4.7291 & 0.6161 & 7.4640 & 0.6160 & 6.8108 \\
$\beta_2$ & $-$0.2759 & $-$10.2501 & $-$0.1272 & $-$4.5890 & $-$0.2760 & $-$6.9458 & $-$0.2759 & $-$6.3385 \\
$\beta_{31}$ & 0.1456 & 4.2680 & 0.0670 & 2.6138 & 0.1456 & 2.8922 & 0.1456 & 2.6392 \\
$\beta_{32}$ & 0.1648 & 4.8294 & 0.0759 & 2.8843 & 0.1648 & 3.2723 & 0.1648 & 2.9864 \\
\specialrule{0.01em}{0.3em}{0.3em}
LogLik & \multicolumn{2}{c}{$-340.082$} & \multicolumn{2}{c}{$-325.241$} & \multicolumn{2}{c}{$-325.233$} & \multicolumn{2}{c}{$ -$}\\
AIC & \multicolumn{2}{c}{$702.164$} & \multicolumn{2}{c}{$674.482$} & \multicolumn{2}{c}{$674.467$} & \multicolumn{2}{c}{$ -$}\\
BIC & \multicolumn{2}{c}{$727.508$} & \multicolumn{2}{c}{$702.130$} & \multicolumn{2}{c}{$702.116$} & \multicolumn{2}{c}{$ -$} \\
\bottomrule
\end{tabular}
\end{table}
The infinite sum $Z(\mu, \phi)$ requires a larger upper bound to reach
convergence in the case of overdispersed count data, which makes the
evaluation of the log-likelihood function computationally expensive. For
the data set considered, the upper bound was fixed at $700$. The
\texttt{BFGS} algorithm evaluated the log-likelihood function $264$ and
$20$ times to reach convergence under the original and new
parametrizations of the COM-Poisson distribution, respectively. In terms
of computational time, for $50$ repetitions of the fit, the proposed
reparametrization was on average $110\%$ faster than the original one.
This is probably due to the better behaviour of the log-likelihood
function as well as the better initial values obtained from the Poisson
fit. In \autoref{tab:corr-soy}, we present the empirical correlations
between the regression and dispersion parameter estimates. The
correlations are close to zero for the COM-Poisson$_\mu$ model,
indicating empirical orthogonality between $\mu$ and $\phi$.
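The truncated evaluation of the normalizing constant can be sketched as follows (a generic illustration that accumulates the series in log scale for numerical stability, written in terms of the original $(\lambda, \nu)$ pair; it is not the implementation in our supplementary routines):

```python
import math

def log_z(lam, nu, upper=700):
    """log Z(lam, nu) = log sum_{j=0}^{upper} lam^j / (j!)^nu."""
    terms = [j * math.log(lam) - nu * math.lgamma(j + 1.0)
             for j in range(upper + 1)]
    m = max(terms)                        # log-sum-exp trick
    return m + math.log(sum(math.exp(t - m) for t in terms))

# sanity check: nu = 1 recovers the Poisson case, where Z = exp(lam)
print(round(log_z(5.0, 1.0), 6))          # 5.0
# overdispersion (nu < 1) inflates the terms, hence the larger upper bound
print(log_z(5.0, 0.5) > log_z(5.0, 1.0))  # True
```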
\begin{table}[ht]
\centering
\caption{Empirical correlations between $\hat{\phi}$ and
$\hat{\bm{\beta}}$ for the two parametrizations of COM-Poisson model
fit to overdispersed data.}
\label{tab:corr-soy}
\begingroup\small
\begin{tabular}{lrrrrrrr}
\toprule
& $\hat{\beta}_0$ & $\hat{\beta}_{11}$ & $\hat{\beta}_{12}$ & $\hat{\beta}_{13}$ & $\hat{\beta}_{14}$ & $\hat{\beta}_{15}$ & $\hat{\beta}_{21}$ \\
\midrule
COM-Poisson & 0.9952 & 0.2229 & 0.1526 & $-$0.4895 & 0.1614 & 0.0043 & $-$0.3496 \\
COM-Poisson$_\mu$ & 0.0005 & $-$0.0002 & $-$0.0002 & $-$0.0007 & $-$0.0015 & $-$0.0002 & 0.0001 \\
\midrule
& $\hat{\beta}_{22}$ & $\hat{\beta}_{23}$ & $\hat{\beta}_{24}$ & $\hat{\beta}_{25}$ & $ $ & $ $ & $ $ \\
\midrule
COM-Poisson & $-$0.2276 & 0.2629 & $-$0.4578 & $-$0.0095 & & & \\
COM-Poisson$_\mu$ & 0.0002 & 0.0006 & 0.0018 & 0.0001 & & & \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
The observed and fitted counts for each humidity level with confidence
intervals are shown in \autoref{fig:pred-soy}. The fitted values are
identical for the Poisson and COM-Poisson$_{\mu}$ models, leading to the
same conclusions. On the other hand, confidence intervals for the
Poisson model are smaller than the ones from the COM-Poisson$_{\mu}$,
due to the equidispersion assumption underlying the Poisson model. The
confidence intervals from the COM-Poisson$_{\mu}$ and quasi-Poisson
models are very similar, which again shows the already highlighted
similarity between these approaches; however, only the
COM-Poisson$_{\mu}$ model corresponds to a fully specified probability
model.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!ht]
{\centering \includegraphics[width=1\textwidth]{figure/pred-soy-1}
}
\caption[Dispersion diagrams of bean seed counts as a function of potassium doses and humidity levels with fitted curves and confidence intervals (95\%)]{Dispersion diagrams of bean seed counts as a function of potassium doses and humidity levels with fitted curves and confidence intervals (95\%).}\label{fig:pred-soy}
\end{figure}
\end{knitrout}
\subsection{Assessing toxicity of nitrofen in aquatic systems}
Nitrofen is a herbicide that was used extensively for the control of
broad-leaved and grass weeds in cereals and rice. Although it is
relatively non-toxic to adult mammals, nitrofen is a significant
teratogen and mutagen. It is also acutely toxic and reproductively toxic
to cladoceran zooplankton. Nitrofen is no longer in commercial use in
the U.S., having been the first pesticide to be withdrawn due to
teratogenic effects~\citep{Bailer1994}.
The data set comes from an experiment to measure the reproductive
toxicity of the herbicide, nitrofen, on a species of zooplankton
(\textit{Ceriodaphnia dubia}). Fifty animals were randomized into
batches of ten and each batch was put in a solution with a measured
concentration of nitrofen ($0, 0.8, 1.6, 2.35$ and $3.10$
$\mu$g$/10^2$litre) (\texttt{dose}). Then the number of live offspring
was recorded.
For this data set we consider three models with linear predictors,
\begin{center}
\begin{minipage}{12cm}
Linear: $\log(\mu_i) = \beta_0 + \beta_1 \texttt{dose}_i$\\
Quadratic: $\log(\mu_i) = \beta_0 + \beta_1 \texttt{dose}_i +
\beta_2 \texttt{dose}_i^2$\\
Cubic: $\log(\mu_i) = \beta_0 + \beta_1 \texttt{dose}_i +
\beta_2 \texttt{dose}_i^2 + \beta_3 \texttt{dose}_i^3$.
\end{minipage}
\end{center}
\begin{table}[!ht]
\centering \small
\caption{Model fit measures and comparisons between predictors and
models fitted to the nitrofen data.}
\label{tab:anova-ovos}
\begin{tabularx}{\textwidth}{lCCCCCrC}
\toprule
Poisson & np & $\ell$ & AIC & 2(diff $\ell$) & diff np & P($>\rchi^2$) & \\
\midrule
Predictor 1 & 2 & $-$180.667 & 365.335 & & & & \\
Predictor 2 & 3 & $-$147.008 & 300.016 & 67.319 & 1 & 2.31E$-$16 & \\
Predictor 3 & 4 & $-$144.090 & 296.180 & 5.835 & 1 & 1.57E$-$02 & \\
\specialrule{0em}{0.5em}{0em}
COM-Poisson & np & $\ell$ & AIC & 2(diff $\ell$) & diff np &
P($>\rchi^2$) & $\hat{\phi}$ \\
\midrule
Predictor 1 & 3 & $-$167.954 & 341.908 & & & & $-$0.893 \\
Predictor 2 & 4 & $-$146.964 & 301.929 & 41.980 & 1 & 9.22E$-$11 & $-$0.059 \\
Predictor 3 & 5 & $-$144.064 & 298.129 & 5.800 & 1 & 1.60E$-$02 & 0.048 \\
\specialrule{0em}{0.5em}{0em}
COM-Poisson$_\mu$ & np & $\ell$ & AIC & 2(diff $\ell$) & diff np &
P($>\rchi^2$) & $\hat{\phi}$ \\
\midrule
Predictor 1 & 3 & $-$167.652 & 341.305 & & & & $-$0.905 \\
Predictor 2 & 4 & $-$146.950 & 301.900 & 41.405 & 1 & 1.24E$-$10 & $-$0.069 \\
Predictor 3 & 5 & $-$144.064 & 298.127 & 5.773 & 1 & 1.63E$-$02 & 0.047 \\
\specialrule{0em}{0.5em}{0em}
Quasi-Poisson & np & QDev & AIC & F & diff np & P($>F$) & $\hat{\sigma}$ \\
\midrule
Predictor 1 & 3 & 123.929 & & & & & 2.262 \\
Predictor 2 & 4 & 56.610 & & 60.840 & 1 & 5.07E$-$10 & 1.106 \\
Predictor 3 & 5 & 50.774 & & 5.659 & 1 & 2.16E$-$02 & 1.031 \\
\bottomrule
\end{tabularx}
\footnotesize \raggedright np, number of parameters; diff $\ell$,
difference in log-likelihoods; QDev, quasi-deviance; F, F statistic
based on quasi-deviances; diff np, difference in np.
\end{table}
\autoref{tab:anova-ovos} summarizes the results of the fitted models and
likelihood ratio tests comparing the sequence of predictors. All models
indicate the significance of the cubic effect of the nitrofen
concentration. Under this predictor there is evidence of equidispersed
counts: the $\phi$ estimate of the COM-Poisson is close to zero and the
$\sigma$ of the quasi-Poisson is close to one. It is interesting to
note that if we omit the higher-order effects the models show evidence
of overdispersion. This exemplifies the discussion on the causes of
overdispersion made in \autoref{introduction}. We can also note that
the quasi-Poisson approach, although robust to departures from the
equidispersion assumption, shows larger $p$-values than the parametric
models; that is, the tests under the parametric models are more powerful
than those under the quasi-Poisson model in the equidispersed case.
In \autoref{tab:coef-ovos}, we present the estimates of the regression
parameters considering the cubic dose model. The interpretations are
similar to those in the other case studies; however, in this case the
Poisson model is also suitable and indicates the significance of the
covariate effects. In addition, note that the parameter estimates of
the original COM-Poisson model are comparable to those of the other
models. This occurs because we are in the particular case where
$\phi = 0$, which implies $\lambda = \mu$.
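This special case can be verified directly from the mean-to-rate mapping of the reparametrization, here assumed to take the approximate form $\lambda = \left(\mu + \tfrac{\nu-1}{2\nu}\right)^{\nu}$ with $\nu = e^{\phi}$ (the function name is ours):

```python
import math

def lam_from_mu(mu, phi):
    """Rate parameter lambda from the mean mu (assumed approximate mapping)."""
    nu = math.exp(phi)
    return (mu + (nu - 1.0) / (2.0 * nu)) ** nu

# at phi = 0 (nu = 1) the correction term vanishes and lambda equals mu
print(lam_from_mu(8.5, 0.0))  # 8.5
```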
\begin{table}[!ht]
\centering
\caption{Parameter estimates (Est) and ratio between estimate and
standard error (SE) for the four model strategies for the analysis of
the nitrofen experiment.}
\label{tab:coef-ovos}
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{2}{c}{Poisson} &
\multicolumn{2}{c}{COM-Poisson} &
\multicolumn{2}{c}{COM-Poisson$_\mu$} &
\multicolumn{2}{c}{Quasi-Poisson} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& Est & Est/SE & Est & Est/SE & Est & Est/SE & Est & Est/SE \\
\midrule
$\beta_{0}$ & 3.4767 & 62.8167 & 3.6494 & 4.8499 & 3.4769 & 64.3083 & 3.4767 & 61.8599 \\
$\beta_{1}$ & $-$0.0860 & $-$0.4328 & $-$0.0914 & $-$0.4475 & $-$0.0879 & $-$0.4523 & $-$0.0860 & $-$0.4262 \\
$\beta_{2}$ & 0.1529 & 0.8634 & 0.1612 & 0.8783 & 0.1547 & 0.8938 & 0.1529 & 0.8502 \\
$\beta_{3}$ & $-$0.0972 & $-$2.3978 & $-$0.1021 & $-$2.2294 & $-$0.0976 & $-$2.4635 & $-$0.0972 & $-$2.3612 \\
\bottomrule
\end{tabular}
\end{table}
\autoref{fig:pred-ovos} shows the number of live offspring observed and
fitted curves along with confidence intervals for all model strategies
adopted. The fitted values and confidence intervals are identical and
have a complete overlap. It shows that the estimation of the extra
dispersion parameter does not affect the estimation of the regression
coefficients in the case of equidispersed counts.
\begin{knitrout}
\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\color{fgcolor}\begin{figure}[!htp]
{\centering \includegraphics[width=0.7\textwidth]{figure/pred-ovos-1}
}
\caption[Number of live offspring observed for each nitrofen concentration level with fitted curves and 95\% confidence intervals]{Number of live offspring observed for each nitrofen concentration level with fitted curves and 95\% confidence intervals.}\label{fig:pred-ovos}
\end{figure}
\end{knitrout}
Finally, in \autoref{tab:corr-ovos} we present the empirical
correlations between the regression and dispersion parameter estimates.
The results show that even in the special case ($\phi=0$), the empirical
correlations for the original COM-Poisson model are not zero. For the
reparametrized model, as discussed in the previous sections, the
correlations are practically null. The computational times for fifty
repetitions of the fit are similar: the average fitting times for the
COM-Poisson$_\mu$ and COM-Poisson models are $1.19$ and $1.09$ seconds,
respectively.
\begin{table}[ht]
\centering
\caption{Empirical correlations between $\hat{\phi}$ and $\hat{\bm{\beta}}$ for the two parametrizations of COM-Poisson model fit to equidispersed data.}
\label{tab:corr-ovos}
\begingroup\small
\begin{tabular}{lrrrr}
\toprule
& $\hat{\beta}_{0}$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ \\
\midrule
COM-Poisson & 0.9972 & $-$0.0771 & 0.1562 & $-$0.4223 \\
COM-Poisson$_\mu$ & $-$0.0003 & 0.0023 & $-$0.0029 & 0.0033 \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\section{Concluding remarks}
\label{conclusion}
In this paper, we presented and characterized a novel parametrization of
the COM-Poisson distribution and the associated regression model. The
novel parametrization was based on a simple asymptotic approximation for
the expectation and variance of the COM-Poisson distribution. The main
advantage of the proposed reparametrization is the simple interpretation
of the regression coefficients in terms of the expectation of the
response variable as usual in the generalized linear models
context. Thus, it is possible to compare the results of the COM-Poisson
model with those from standard approaches such as the Poisson and
quasi-Poisson regression models. Furthermore, in the novel
parametrization the COM-Poisson distribution is indexed by the
expectation $\mu$ and an extra dispersion parameter $\phi$, which our
data analyses suggest are orthogonal. This is similar to
\Citeauthor{Huang2017}'s (\citeyear{Huang2017}) parametrization but is
simpler, because $\mu$ is obtained by simple algebra.
We evaluated the accuracy of the asymptotic approximations for the
expectation and variance of the COM-Poisson distribution by considering
quadratic approximation errors. The results showed that the
approximations are accurate for a large part of the parameter space,
which in turn supports our reparametrization. Moreover, we discussed
the properties and flexibility of the distribution to deal with count
data through dispersion, zero-inflation and heavy-tail indexes. We
carried out a simulation study to assess the properties of the
reparametrized COM-Poisson model under different levels of dispersion
as well as the properties of the maximum likelihood estimators. The
results of our simulation study suggested that the maximum likelihood
estimators of the regression and dispersion parameters are unbiased and
consistent. The empirical coverage rates of the confidence intervals
computed based on the asymptotic distribution of the maximum likelihood
estimators are close to the nominal level for sample sizes greater than
$100$. The worst scenario is when we have small sample sizes and
strongly overdispersed counts. In general, we recommend the use of the
asymptotic confidence intervals for computational simplicity.
The data analyses have shown that the COM-Poisson regression model is a
suitable choice to deal with dispersed count data. The observed
empirical correlations between the regression and dispersion parameter
estimators suggest orthogonality between $\mu$ and $\phi$ in the
COM-Poisson$_\mu$ distribution. Thus, the computational procedure based
on the proposed reparametrization is faster than under the original
parametrization.
In general, the results presented by the reparametrized COM-Poisson
models were satisfactory and comparable to the conventional approaches.
Therefore, its use in the analysis of count data is encouraged. The
computational routines for fitting the original and reparametrized
COM-Poisson regression models are available in the supplementary
material\textsuperscript{\ref{papercompanion}}.
There are many possible extensions to the model discussed in this paper,
including simulation studies to assess the model's robustness against
model misspecification and to assess the theoretical approximations for
$Z(\lambda, \nu)$ (or $Z(\mu,\phi)$). Another simple extension of the
proposed model is to model both the $\mu$ and $\phi$ parameters as
functions of covariates in a double generalized linear model framework.
Finally, the reparametrized version of the COM-Poisson model also
encourages the specification of generalized linear mixed models using
this distribution.
\section{Introduction}
One of the major goals of evolutionary genomics is the identification of genomic changes that underpin phenotypic differences between species \cite{anisimova2012detecting}.
To identify such changes it is important to identify regions or even individual sites in molecular sequences that have been under differing selective pressure through evolutionary history. While some of these phenotype-inducing changes occur in regulatory regions that are not responsible for encoding protein sequences, a large subset will change the amino acid sequence of a protein \citep{barrett2011molecular}.
At the interspecific level, several classes of codon models have been described that enable detection of changes in selective pressure in protein encoding genes across a phylogeny.
Markov models of codon substitution were introduced by \citet{goldman1994codon} and \citet{muse1994likelihood} and have subsequently been extended so that the ratio of nonsynonymous to synonymous nucleotide substitution rates can be estimated over branches of a phylogenetic tree \citep{yang1998likelihood} or on specific sites undergoing positive selection on particular branches \citep{Zhang2005}.
In this discussion, we use the term ``model'' to mean a set of substitution matrices, and their corresponding rate matrices, which are parametrized in a particular way, such as the ``Jukes-Cantor'' (JC) DNA model \citep{jukes1969evolution} (which only has one parameter and therefore exactly one rate matrix up to scaling) or the ``general time reversible'' (GTR) DNA model \citep{tavare1986some} (which has nine free parameters).
Further, when we say a matrix is ``in a model'', we mean that its entries, be they substitution probabilities or rates, are obtainable by some parametrization of that model.
When codon models are used to detect positive selection, they allow selection to differ between branches.
For instance, the parameter $\omega$ that controls the ratio of non-synonymous to synonymous changes might be free to vary in different branches \citep{yang1998likelihood}.
Recent work by Sumner \etal\ (2012) has shown that modelling a heterogeneous process as homogeneous can be inaccurate if the underlying substitution models do not have the mathematical property of (multiplicative) {\em closure}.
Mathematically, closure is the idea that if one multiplies two substitution matrices from some model, then their product is also in the same model.
For instance, it is known that, in general, two GTR substitution matrices, when multiplied together, need not yield another GTR matrix; on the other hand, the product of two F81 substitution matrices \cite{felsenstein1981} is always an F81 matrix.
Both results are demonstrated in \cite{sumner2012lie}.
From this it is clear that some of the standard phylogenetic models currently in use are closed and some are not.
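The F81 closure claim is easy to check numerically (a verification sketch rather than the algebraic argument in the cited work): an F81 substitution matrix can be written as $aI + (1-a)\Pi$, where every row of $\Pi$ is the base-frequency vector $\pi$, and since $\Pi$ is idempotent the product of two such matrices is again of that form, with parameter $a_1 a_2$.

```python
import numpy as np

def f81(pi, a):
    # F81 substitution matrix P = a*I + (1 - a)*Pi, where each row of Pi is pi
    # (a plays the role of exp(-beta*t) in the usual time parametrization)
    pi = np.asarray(pi)
    return a * np.eye(len(pi)) + (1.0 - a) * np.tile(pi, (len(pi), 1))

pi = np.array([0.1, 0.2, 0.3, 0.4])
P = f81(pi, 0.7) @ f81(pi, 0.5)

# the product is again an F81 matrix, with parameter 0.7 * 0.5
print(np.allclose(P, f81(pi, 0.35)))     # True
print(np.allclose(P.sum(axis=1), 1.0))   # rows still sum to one
```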
To rectify this situation a complete hierarchy of closed DNA models, coined Lie-Markov models, has been developed; in particular allowing for distinguished nucleotide pairings \citep{fernandez2015lie, woodhams2015new}.
In this paper we focus on generalisations of two different, but closely related, types of codon model: those introduced by \citet{muse1994likelihood} (MG-type models), and those introduced by \citet{yang1998synonymous} and \citet{nielsen1998likelihood} derived from \citet{goldman1994codon} (GY-type models).
In these codon models DNA substitution rates are scaled by either 0, 1, or $\omega$, depending on whether, following the genetic code, the codon substitution is prohibited, or induces an amino acid substitution that is synonymous or non-synonymous, respectively.
To distinguish from DNA substitution rates and probabilities, we will refer to the parameters that arise from the genetic code as ``augmentation'' parameters, comprising both selection parameter $\omega$ and the existence of stop codons.
As discussed in \citet{lindsay2008pitfalls}, following the GTR paradigm of model construction, both MG- and GY-type models can be decomposed into a symmetric relative rate component and a base-frequency component.
The difference is that, for example, if codon AAA goes to AAG, in the GY-type models the overall rate depends on the frequency of the codon AAG whereas in the MG-type models it only depends on the frequency of the nucleotide G.
For reasons detailed below, the MG-type models have a more sensible mathematical structure, in the sense that if the underlying DNA model has the closure property then this at least carries over to the resulting MG-type \emph{triplet} model (see later), which can be thought of as the MG-type \emph{codon} model without the augmentation parameters. Further, MG-type models are better justified biologically from the perspective that the same DNA-level mutational process is likely to act at each of the three codon positions, with only selection differing. This is born out in their providing less biased estimates of selective parameters \cite{Spielman2015}.
This property is not satisfied by the GY-type models because the multiplication of the relative rates with the codon frequencies necessarily introduces non-linear constraints.
For DNA models, \citet{sumner2012general} found that lack of closure can cause serious mis-estimation of model parameters.
In the case of codon models, there are two important and independent reasons why they do not have the closure property.
The first is that codon models are built up from DNA models acting independently at each codon position, and these underlying models need not themselves be closed.
The second is due to the augmentation parameters 0, 1, $\omega$, used to reflect the structure of the genetic code.
In the section that follows we will tease apart these two effects and formally establish the lack of closure for codon models.
The lack of closure raises a somewhat alarming prospect, that the resulting artefacts could cause mis-estimation of the parameters used to understand selection.
To make this concrete, consider a scenario where for a period of time one substitution matrix $M_1$ governs codon evolution, which is then followed by a period governed by a distinct substitution matrix $M_2$.
Suppose also that selective pressure has not changed, i.e., $\omega$ is the same in both matrices, but there has been a change in the underlying DNA substitution rates.
We need not assume any change in the underlying DNA model, only in its substitution rate parameter values.
Even with these simple assumptions, the combined process, given by the matrix product $M_1M_2$, is not, in general, obtainable using the same model.
However, there will be some alternative codon substitution matrix $M'$ in the model that \emph{best fits} the observed data, and the $\omega$ parameter in the ``compromise'' $M'$ may not be the same as the $\omega$ in models $M_1$ and $M_2$.
In this paper, we seek to evaluate whether the lack of closure of codon models, be it caused by the underlying DNA model and/or by the augmentation parameters, has a significant detrimental effect on estimation of model parameters.
We simulate cases where the DNA rate parameters differ over two lineages while $\omega$ remains constant.
We measure the resulting error in estimation of both branch lengths and $\omega$, and explore whether the use of closed DNA models reduces these errors.
This setup explores the errors that can arise when fitting a homogeneous model to what is truly a heterogeneous process.
For clarity, since we allow DNA rate parameters to differ \emph{across} lineages and then attempt a homogeneous fit, the model misspecification is \emph{not} necessarily cured by applying a multiplicatively closed model.
Our results are consistent with this: whether we apply time-reversible or multiplicatively closed underlying DNA models in a codon model, we obtain errors in $\omega$.
We do however frame our discussion in the context of multiplicative closure, since the scale of the errors is due to the non-linearity present in the parametrization of these models and it is this property that is brought to light by consideration of multiplicative closure.
Additionally, the attraction of multiplicatively closed models is that they can be consistently used when the process is heterogeneous along a single lineage, and thus can be used to consistently fit a heterogeneous model to a truly heterogeneous process on a tree, even in the presence of subsampling of taxa.
This consistency is not shared by most of the family of time-reversible DNA models or any of the presently available codon models.
This motivates the need for development of closed models, similar to the newly introduced hierarchy of Lie-Markov models for DNA \citep{fernandez2015lie, woodhams2015new}, that apply at a \emph{codon} level.
We conclude the paper by giving some first thoughts on what such models might look like.
\section{Closure properties for codon models}
We are interested in drawing on algebraic properties of substitution models to better understand closure properties, or lack thereof, of both MG- and GY-type models.
To properly make these mathematical connections we present a generalised formulation in what follows.
In particular, we will follow the framework of embedding the models within the general Markov model of codon substitutions, making connections to the GTR paradigm when needed for further understanding.
Our approach will be to first construct what we will call a \emph{triplet} model by assuming independence of the substitutions at the three nucleotide positions, as is typical in practice.
We use ``triplet model'' to distinguish it from the more complex codon models, which include the augmentation parameters.
We then modify the triplet model to account for synonymous and nonsynonymous amino acid substitutions and stop codons, as dictated by the genetic code, yielding a \emph{codon} model.
We will see how the first part of the construction can be understood in algebraic means that are perfectly compatible with the goal of producing a multiplicatively closed model, whereas the second part, involving the augmentation parameters, is incompatible with that goal.
In this way, we are led to a deeper understanding of the obstructions involved in producing a multiplicatively closed codon model that respects the genetic code.
Suppose $M$, $N$, and $P$ are $4\times 4$ DNA substitution matrices whose entries give the probabilities of DNA substitution at nucleotide positions 1, 2, and 3 respectively.
Assuming substitutions at different positions are independent, it follows that the probability of transition from triplet $xyz$ to triplet $x'y'z'$ is given by the product
\[
\text{prob}(xyz \rightarrow x'y'z') = M_{xx'}N_{yy'}P_{zz'}.
\]
This array of numbers can be organised using a matrix ``Kronecker'' product $\otimes$, so $\text{prob}(xyz \rightarrow x'y'z')$ is equal to the corresponding entry of the $64\times 64$ matrix $M\otimes N\otimes P$.
The Kronecker product of two matrices $D$, $E$ is obtained by replacing each entry in $D$ by that entry multiplied by the matrix $E$.
To illustrate, the Kronecker product of two $2\times 2$ matrices
\[
D =\left[\begin{array}{cc} 1 & 0 \\ 1 & -2 \end{array}\right],
\quad
E = \left[\begin{array}{cc} 6 & -3 \\ 0 & -1\end{array}\right]
\]
is given by
\[\arraycolsep=5pt
D\otimes E =\left[\begin{array}{c@{~}c} E & 0 \\ E & -2E \end{array}\right]= \left[\begin{array}{c@{~}c@{~}c@{~~}c} 6 & -3 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 6 & -3 & -12 & 6\\ 0 & -1 & 0 & 2\end{array}\right].
\]
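As a concrete check on the worked example above, the same product can be computed with NumPy, whose `kron` function follows the same entry-replacement convention (a minimal sketch; the use of NumPy is purely illustrative):

```python
import numpy as np

# The 2x2 worked example from the text: each entry of D is replaced
# by that entry multiplied by the whole matrix E.
D = np.array([[1,  0],
              [1, -2]])
E = np.array([[6, -3],
              [0, -1]])

DkronE = np.kron(D, E)

expected = np.array([[6, -3,   0, 0],
                     [0, -1,   0, 0],
                     [6, -3, -12, 6],
                     [0, -1,   0, 2]])
assert np.array_equal(DkronE, expected)
```

The same call with three $4\times 4$ substitution matrices, `np.kron(M, np.kron(N, P))`, produces the $64\times 64$ triplet substitution matrix.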
Performing this Kronecker product on the three $4\times 4$ DNA substitution matrices $M$, $N$ and $P$ yields the $4^3\times 4^3 = 64\times 64$ triplet substitution matrix $M\otimes N\otimes P$, where the ordering of triplets is implicitly determined by the properties of the Kronecker product.
Turning to a continuous time formulation, each substitution matrix is the matrix exponential of some (but, to remain general, possibly different) rate matrix, i.e., $M(t)=e^{At}$, $N(t)=e^{Bt}$, and $P(t)=e^{Ct}$ where $t$ is time.
We can recover the DNA rate matrix, $A$, via the derivative of $M(t)$ evaluated at $t\!=\!0$, giving $A=\left.\frac{d}{dt}M(t)\right|_{t=0}$.
Correspondingly, we can express the $64\times 64$ rate matrix as
\begin{align}
\label{eq:triplet}
R_{\text{triplet}} & =\left.\frac{d}{dt} (M(t)\otimes N(t)\otimes P(t)) \right|_{t=0} \notag \\ & =A\otimes \mathbf{I} \otimes \mathbf{I} +\mathbf{I} \otimes B \otimes \mathbf{I} +\mathbf{I} \otimes \mathbf{I} \otimes C,
\end{align}
where $\mathbf{I}$ is the $4 \times 4$ identity matrix.
(This result follows as an implementation of the usual rule for the derivative of a product.)
This formulation also makes intuitive sense, since the independence assumption implies that the instantaneous rate of substitution between any two triplets with variation at more than one position should be zero, and this can be confirmed by checking the matrix entries of each summand above.
Indeed, one finds that
\[
\text{rate}(xyz \rightarrow x'y'z')=
\left\{
\begin{array}{cl}
A_{xx'}, & \text{ if }y=y',z=z';\\
B_{yy'}, & \text{ if }x=x',z=z';\\
C_{zz'}, & \text{ if }x=x',y=y';\\
0, & \text{ otherwise},
\end{array}
\right.
\]
which is consistent with the structure of MG-type codon models.
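The construction of $R_{\text{triplet}}$ in (\ref{eq:triplet}) and the rate structure just described can be verified numerically. The sketch below (assuming NumPy; the three rate matrices are arbitrary) builds the $64\times 64$ matrix and confirms that any substitution altering more than one position has rate exactly zero:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

def random_rate_matrix(n=4):
    """Generic DNA rate matrix: non-negative off-diagonals, zero row sums."""
    Q = rng.random((n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

A, B, C = (random_rate_matrix() for _ in range(3))
I = np.eye(4)

# R_triplet = A (x) I (x) I + I (x) B (x) I + I (x) I (x) C
R_triplet = (np.kron(np.kron(A, I), I)
             + np.kron(np.kron(I, B), I)
             + np.kron(np.kron(I, I), C))

def codon_positions(i):
    """Decode a triplet index (Kronecker ordering: 16*x + 4*y + z)."""
    return (i // 16, (i // 4) % 4, i % 4)

# Rates between triplets differing at more than one position are zero.
for i in range(64):
    for j in range(64):
        ndiff = sum(a != b for a, b in
                    zip(codon_positions(i), codon_positions(j)))
        if ndiff > 1:
            assert R_triplet[i, j] == 0.0
```

Single-position changes pick out the corresponding entry of $A$, $B$ or $C$, exactly as in the displayed rate formula.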
Following the basic properties of Kronecker products, a simple argument then shows that such a triplet model is closed if, and only if, each rate matrix $A,B,C$ belongs to a (possibly different) closed DNA substitution model (see Supplementary Material).
Given this generalized version of triplet models, we now turn to accounting for the genetic code using the augmentation parameters.
We refer to these models as \emph{codon} models.
We describe the introduction of the augmentation parameters, in particular the dN/dS ratio $\omega$, into the codon rate matrix via an ``overlay matrix''.
We create the overlay matrix by encoding the genetic code into a $64\times 64$ matrix $G$ where each entry is either a $0, 1,$ or $\omega$, depending on whether the corresponding substitution is prohibited (stop codons), synonymous, or nonsynonymous, respectively.
A substitution is prohibited if it is to or from a stop codon.
We note that it is not necessary to explicitly prohibit multiple simultaneous DNA substitutions in the same codon, as these are automatically prohibited by the underlying triplet model.
This is a direct reflection of the Kronecker product construction of $R_\text{triplet}$ above.
Our generalized model is then expressed as a simple entry-wise multiplication of the triplet rate matrix $R_{\text{triplet}}$ with the matrix $G$, followed by adjustment of diagonal entries to ensure zero row-sums.
When needed, we write this two-step process using the notation
\begin{equation}
\label{eq:codonG}
R_\text{codon}=R_\text{triplet}\ast G,
\end{equation}
where the off-diagonal $i,j$-th entry in $R_\text{codon}$ is given by the product of the $i,j$-th entries of $R_\text{triplet}$ and $G$, and the diagonal entries are determined by demanding zero row-sums.
We note that this codon model is equivalent to the General Nucleotide Codon model of \citet{kaehler2017standard}.
We refer to any codon model constructed in this way as being of \emph{MG-type}, since, in the special case where the underlying DNA rate matrices are the same and are selected from the F81 model, we obtain precisely the Muse-Gaut codon model \citep{muse1994likelihood}.
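The two-step construction $R_\text{codon}=R_\text{triplet}\ast G$ can be sketched in code. Two conventions in the sketch are assumptions, not fixed by the model: the standard genetic code, and a codon ordering induced by the Kronecker product with bases ordered T, C, A, G:

```python
import numpy as np

# Standard genetic code; codon index = 16*x + 4*y + z with bases T,C,A,G.
AA = ("FFLLSSSSYY**CC*W"   # TTT ... TGG
      "LLLLPPPPHHQQRRRR"   # CTT ... CGG
      "IIIMTTTTNNKKSSRR"   # ATT ... AGG
      "VVVVAAAADDEEGGGG")  # GTT ... GGG

def overlay_matrix(omega):
    """Overlay G: 0 (to/from a stop codon), 1 (synonymous), omega (nonsynonymous)."""
    G = np.empty((64, 64))
    for i in range(64):
        for j in range(64):
            if AA[i] == "*" or AA[j] == "*":
                G[i, j] = 0.0
            elif AA[i] == AA[j]:
                G[i, j] = 1.0
            else:
                G[i, j] = omega
    return G

def codon_rate_matrix(R_triplet, omega):
    """R_codon = R_triplet * G entrywise; diagonal reset for zero row sums."""
    R = R_triplet * overlay_matrix(omega)
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(R, -R.sum(axis=1))
    return R

# Demo with the Jukes-Cantor DNA model at every position.
Q = np.full((4, 4), 0.25)
np.fill_diagonal(Q, -0.75)
I = np.eye(4)
R_triplet = (np.kron(np.kron(Q, I), I) + np.kron(np.kron(I, Q), I)
             + np.kron(np.kron(I, I), Q))
R_codon = codon_rate_matrix(R_triplet, omega=0.5)
```

In this demo the synonymous substitution TTT$\to$TTC has rate $0.25$, while the nonsynonymous TTT$\to$TTA has rate $0.25\,\omega=0.125$, and the rows and columns for stop codons are identically zero.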
This gives a convenient mathematical description of MG-type models but, unlike the Kronecker product operation $\otimes$, the introduction of the augmentation parameters via $G$ and the operation $\ast$ described above does \emph{not} preserve the closure property.
As we will presently establish, the introduction of the augmentation parameters causes the model to become non-linear, which by itself, destroys any chance of multiplicative closure.
Expressed in terms of constraints on rate matrices, for a model to be multiplicatively closed, the set of rate matrices obtainable from the model must form a \emph{linear space} (along with an additional condition not needed here involving `Lie brackets', or `commutators' --- see \citep{sumner2012lie,sumner2017multiplicatively} and also the Supplementary Material for more details).
This linearity condition means that the sum of any two rate matrices from the model is another rate matrix in the model.
Na\"ively (albeit mostly accurately) this property can be detected by inspecting the parametrization of the rates of substitutions: almost always, if the substitution rates are expressed using products of parameters, the model cannot be linear.
For example, the rates for the GTR and HKY models involve products of parameters for the equilibrium distribution and what are sometimes referred to as `relative' rates.
These DNA models are not linear, and hence are not multiplicatively closed.
On the other hand, the rates of substitution in the Kimura 2ST (K2ST) and the F81 models are expressed without products of parameters, and indeed these models are linear.
To illustrate the issue with any codon model constructed as in (\ref{eq:codonG}), suppose the underlying DNA model is identical at each position (that is, $A=B=C$), with, for example, the F81 model selected as the underlying DNA model (recall this is a closed DNA model).
In this DNA model the rate matrix can be expressed as
\[\arraycolsep=5pt
Q_{\text{F81}} = \left[\begin{array}{c@{~}c@{~}c@{~}c}
- & \alpha_2 & \alpha_3 & \alpha_4 \\
\alpha_1 & - & \alpha_3 & \alpha_4 \\
\alpha_1 & \alpha_2 & - & \alpha_4 \\
\alpha_1 & \alpha_2 & \alpha_3 & - \\
\end{array} \right],
\]
where `$-$' is a value fixed such that the row sum is 0, together with the triplet rate matrix given by
\[
R_{\text{triplet}}=Q_{\text{F81}}\otimes I\otimes I+I\otimes Q_{\text{F81}}\otimes I+I\otimes I\otimes Q_{\text{F81}}.
\]
The parameters $\alpha_i$ are not normalised in any particular way; however the DNA equilibrium base frequencies $\pi_i$ can be calculated as
\[
\pi_i = \frac{\alpha_i}{\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4}.
\]
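As a quick sanity check (a sketch with arbitrary $\alpha_i$ values), the claimed equilibrium distribution can be verified directly:

```python
import numpy as np

# F81-style rate matrix: the rate into base j is alpha_j regardless of origin.
alpha = np.array([0.3, 0.2, 0.4, 0.1])  # arbitrary illustrative values
Q = np.tile(alpha, (4, 1))              # every row is (alpha_1, ..., alpha_4)
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

pi = alpha / alpha.sum()                # claimed equilibrium frequencies
assert np.allclose(pi @ Q, 0.0)         # pi is indeed stationary
```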
Now notice that the distinct entries of the resulting codon rate matrix $R_\text{codon}=R_\text{triplet}\ast G$ in this model are all either 0 (which we can, and will, ignore in the following argument), $\alpha_i$, or $\alpha_j\omega$, depending on whether the substitution is prohibited, synonymous or nonsynonymous, respectively.
If we add two such matrices together then the non-zero entries have the form $\alpha_i+\alpha_i'$ and $\alpha_j\omega+\alpha_j'\omega'$.
For the matrix sum to be an element of our codon model, these new entries must be expressible in the form
\begin{align}
\label{eq:sum}
\widehat{\alpha}_i &= \alpha_i+\alpha_i', \nonumber\\
\widehat{\alpha}_j\widehat{\omega} & = \alpha_j\omega+\alpha_j'\omega'
\end{align}
for some choices of $\widehat{\alpha}$ and $\widehat{\omega}$.
Consistent with all matrices in this codon model, the resulting matrix sum should contain multiple non-linear relationships between rates; in particular, the obvious equality
\begin{equation}
\frac{\widehat{\alpha}_1\widehat{\omega}}{\widehat{\alpha}_1}= \frac{\widehat{\alpha}_2\widehat{\omega}}{\widehat{\alpha}_2}.
\end{equation}
\noindent then produces the following constraint on parameters:
\[
\frac{\alpha_1\omega+\alpha_1'\omega'}{\alpha_1+\alpha_1'} =
\frac{\alpha_2\omega+\alpha_2'\omega'}{\alpha_2+\alpha_2'},
\]
which is clearly violated for general choices.
Thus, we conclude that this codon model is not closed.
Provided the underlying DNA models themselves have free parameters (beyond an overall scaling), a similar argument establishes that any codon model constructed as in (\ref{eq:codonG}) is not closed.
Even in the special case where the preceding caveat is false (such as the Jukes-Cantor DNA model) and the resulting codon model is linear, it nonetheless follows that the structure of the genetic code and the augmentation parameters 0, 1, $\omega$ themselves cause the model not to be closed.
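The violation in (\ref{eq:sum}) is easy to confirm numerically. In the sketch below the parameter values are arbitrary; for generic choices no single $\widehat{\omega}$ can reproduce the summed rates:

```python
import random

random.seed(1)  # arbitrary seed; the conclusion holds for generic values

# Two F81-based codon processes: per-base rates and selection parameters.
alpha   = [random.uniform(0.1, 1.0) for _ in range(4)]
alpha_p = [random.uniform(0.1, 1.0) for _ in range(4)]
omega, omega_p = 0.5, 1.5

# If the sum of the two rate matrices lay in the same model, the ratio of
# nonsynonymous to synonymous rate (the would-be omega-hat) would have to
# agree for every base i:
ratios = [(alpha[i] * omega + alpha_p[i] * omega_p) / (alpha[i] + alpha_p[i])
          for i in range(4)]
# Generically the ratios differ, so no omega-hat exists: the model is not closed.
```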
We close this section with a discussion of how the GY-type models fall within the general perspective given thus far.
The major difference with the MG-type models is the treatment, within the GTR framework, of equilibrium frequencies and `relative' rates.
For GY-type models, it is the matrix of relative rates that can be expressed using the Kronecker product operation.
For instance, if we take $S$ as a symmetric $4\times 4$ DNA rate matrix, we can construct a symmetric $64\times 64$ matrix of relative rates by taking:
\[
S_{\text{triplet}}=S\otimes \mathbf{I} \otimes \mathbf{I} +\mathbf{I} \otimes S \otimes \mathbf{I} +\mathbf{I} \otimes \mathbf{I} \otimes S.
\]
(As mentioned above, this construction can be generalized by using a different symmetric DNA rate matrix at each codon position whilst maintaining the codon site independence assumption.)
The GY-type models are then obtained by taking a $64\times 64$ diagonal matrix $D$ of codon equilibrium frequencies (chosen in various ways) and, as is usual in the GTR framework, taking the product $S_{\text{triplet}}D$ and then introducing the augmentation parameters by defining:
\[
R_{\text{GY}}=S_{\text{triplet}}D\ast G.
\]
In the particular case that each symmetric DNA matrix $S$ is taken from the K2ST model, we obtain the original model used by Goldman and Yang [\citeyear{goldman1994codon}].
As discussed in the introduction, we prefer, and will focus on, the MG-type construction.
\section{Methods}
\subsection{General approach of simulation study}
\begin{figure}
\begin{center}
\begin{tikzpicture}[line width=1pt]
\draw(2,3) node(root) {$\bm{\pi}_{0}$} (0,0) node(left) {} (4,0) node(right) {};
\path[->](root) edge [above left] node {$M_{1} = \exp(R_{1}t_{1})$} (left);
\path[->](root) edge [above right] node {$M_{2} = \exp(R_{2}t_{2})$} (right);
\end{tikzpicture}
\end{center}
\caption{
Initial codon frequencies, $\bm{\pi}_0$, are either: (1) the average of the equilibrium codon frequencies given by $M_1$ and $M_2$; or (2) the equilibrium codon frequency distribution of a matrix $M$, where $M$ is chosen randomly in the same way as $M_1$ and $M_2$.
We then evolve by Markov matrices $M_{1}$ and $M_{2}$.
\label{fig:GY}}
\end{figure}
As outlined in the introduction, there is potential for the lack of closure of codon models to cause over- or under-estimation of model parameters.
We wanted to know if fitting a homogeneous codon model to a heterogeneous process could lead to mis-estimation of either $\omega$ or branch lengths in simple scenarios with two taxa and a single change of process.
We explored what magnitude of errors were generated under a range of biologically reasonable evolutionary scenarios.
We were also interested in the follow-up question of whether or not different choices of underlying DNA model had a significant effect on the magnitude of errors.
In particular we wanted to test if there was any advantage to constructing codon models from closed (i.e., Lie-Markov) DNA models.
In order to address these questions, we applied the following simulation procedure to MG-type models
(see Figure~\ref{fig:GY} for illustration):
\begin{enumerate}\compactlist
\item Fix a DNA model of interest, e.g., the HKY model.
(For a complete list of models tested see Table \ref{tab:model_families}.)
\item Randomly choose parameters for two DNA rate matrices from this model.
\item Choose branch lengths $t_1$ and $t_2$ and a fixed value of $\omega$.
\item Generate the corresponding codon rate matrices $R_1$ and $R_2$ and measure how different they are.
\item Either set the initial codon frequency distribution $\bm{\pi}_0$ to be the average of the equilibrium frequencies for $R_1$ and $R_2$, or randomly select the initial codon frequencies.
\item Evolve the codon frequencies on the two branches (one for each of $R_1$, $R_2$) to generate a joint probability matrix $J$.
\item Treat $J$ as representing the initial and final states of a single MG-type model, and find the best fitting choice of DNA rate matrix $\widehat{Q}$, selection parameter $\widehat{\omega}$ and branch length $\widehat{t}$.
\item Compare the estimated selection parameter $\widehat{\omega}$ and estimated branch length $\widehat{t}$ to the true values $\omega$ and $t_1 + t_2$.
\end{enumerate}
\subsection{Controlling for the effect of differences in models}
As discussed in the background section, codon models are constructed by first assuming an underlying DNA model.
When generating our heterogeneous process at the codon level, we begin by selecting two random instances from the same DNA model.
DNA models differ in their numbers of free parameters, e.g., beyond an overall scaling, the Jukes-Cantor model has no free parameters, whereas the HKY model has four (the transition/transversion ratio and three degrees of freedom in the choice of base frequencies).
The base frequencies in the underlying DNA model can have a range of different constraints, from all being the same (no degrees of freedom) to all being permitted to vary (three degrees of freedom); we refer to the Base frequency Degrees of Freedom as the BDF.
Interestingly, while DNA models that are in common use have either BDF~=~0 (with $\pi_A=\pi_G=\pi_C=\pi_T$) or BDF~=~3 (with $\pi_A,\pi_C,\pi_G,\pi_T$ unconstrained), the hierarchy of Lie-Markov DNA models presented in \citet{woodhams2015new} displays two additional options: BDF~=~1 resulting from the constraints $\pi_G=\pi_A$ and $\pi_C=\pi_T$, and BDF~=~2, from the constraint $\pi_G+\pi_A=\frac{1}{2}=\pi_C+\pi_T$.
Hence we also attempt to account for this potential confounding variable by including a range of models with BDF~=~0, 1, 2 and 3 (see Table~\ref{tab:model_families}).
At this point it is important to define the distance between two matrices.
The distance between two codon rate matrices is computed by first scaling them so they have trace $-1$, and then taking the square root of the sum of squared differences of the off-diagonal entries.
Because DNA models differ in their numbers of free parameters, choosing parameters at random is expected to yield larger distances between the codon rate matrices $R_1$ and $R_2$ for some DNA models than for others.
In order to sensibly compare the performance of different classes of DNA models, e.g., time-reversible (TR) models vs.\ Lie-Markov (LM) models, it is important to control for such potential confounding variables.
For this reason we chose DNA models with a range of features: number of parameters, base frequency degrees of freedom (BDF), LM or not, and TR or not (see Table~\ref{tab:model_families}).
For each simulation we also recorded the distance between the $R_1$ and $R_2$ codon rate matrices and the difference in the equilibrium base frequencies.
We were then able to fit a linear regression analysis with error (in either selection parameter $\widehat{\omega}$ or branch length $\widehat{t}$) as the response variable, and characteristics of the models and simulations as predictors.
This framework allowed us to statistically test whether LM models have significantly smaller errors than TR models.
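The distance between (scaled) rate matrices used here can be sketched as follows (the example matrices are arbitrary):

```python
import numpy as np

def rate_matrix_distance(R1, R2):
    """Scale each rate matrix to trace -1, then take the square root of the
    sum of squared differences of the off-diagonal entries."""
    S1 = R1 / -np.trace(R1)
    S2 = R2 / -np.trace(R2)
    D = S1 - S2
    np.fill_diagonal(D, 0.0)
    return float(np.sqrt((D ** 2).sum()))

# The scaling makes the distance invariant to an overall rate factor:
Q1 = np.array([[-3.0, 1, 1, 1],
               [1, -3.0, 1, 1],
               [1, 1, -3.0, 1],
               [1, 1, 1, -3.0]])
assert rate_matrix_distance(Q1, 2 * Q1) < 1e-12
```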
\begin{table*}
\begin{centering}
\begin{tabular}{ccccc}
\hline
BDF=0 & BDF=1 & BDF=2 & BDF=3 & \#~parameters \tabularnewline
\hline
\bf\em 2.2b (K2P) & \bf\em 3.4 & & \em 5.6b & 1\tabularnewline
\bf\em 3.3a (K3P) & \em 4.5a & \em 5.7a & \em 6.7a & 2\tabularnewline
\bf\em 3.3c (TrNef) & \bf\em 4.4b & \em 5.11a & \em 6.8a & 2\tabularnewline
\em 5.6a & \em 6.6 & & \em 8.10a & 5\tabularnewline
\em 5.11c & & 6.8b & \em 8.16 & 5\tabularnewline
\em 9.20b (DS) & & & \em 12.12 (GM) & 8\tabularnewline
\hline
\bf\em K2P (2.2b) & \bf K2P+1 & \bf HKY-1 & \bf HKY & 1\tabularnewline
\bf\em TrNef (3.3c) & \bf TrN-2 & \bf TrN-1 & \bf TrN & 2\tabularnewline
\bf TVMef & \bf TVM-2 & \bf TVM-1 & \bf TVM & 4\tabularnewline
\bf SYM & \bf SYM+1 & \bf GTR-1 & \bf GTR & 5\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{The DNA models considered in this study.
Within each row, models on the left are sub-models of models to their right, differing only in equilibrium base frequency degrees of freedom (BDF).
The mid-line separates models in the Lie-Markov hierarchy (above) from models in the time reversible hierarchy (below).
`\#~parameters' is the degrees of freedom of the BDF~=~0 model.
Models in {\em italic} are multiplicatively closed (Lie-Markov), and models in {\bf bold} are time reversible (some models are both).
\label{tab:model_families}}
\end{table*}
\subsection{Constructing the heterogeneous process}
For all instances, we first randomly generated DNA equilibrium base frequencies $\bm{\pi}=[\pi_i]$, then generated a random DNA rate matrix $Q$ in the model such that $Q$ had $\bm{\pi}$ as its equilibrium distribution.
The procedure for generating $\bm{\pi}$ depends on the BDF of the DNA model, and is explained in the following.
In choosing random DNA rate matrices we wanted to avoid parameters that were biologically unrealistic.
So that individual base frequencies were more likely to be close to 0.25, we used triangular distributions and ensured that no base could have a frequency less than 0.1.
No random generation was required for BDF~=~0 models as these have $\bm{\pi}=(0.25,0.25,0.25,0.25)$.
For DNA models with BDF~=~1, $\pi_A$ + $\pi_G$ was chosen from a triangular distribution centred on 0.5 with 0.2 and 0.8 as extremes.
For DNA models with BDF~=~2, $\pi_A$ and $\pi_C$ were independently chosen from a triangular distribution centred on 0.25 with 0.1 and 0.4 as extremes, with $\pi_G$ = $0.5 - \pi_A$ and $\pi_T$ = $0.5 - \pi_C$.
For BDF~=~3 (unconstrained base frequencies) the procedure for generating the $\pi_i$ is complex and is explained in full in the Supplementary Material.
However, the basic process was to first generate a random $\bm{\pi}^\prime$ as extreme as possible (i.e., containing at least one zero) and then form a weighted average of $\bm{\pi}^\prime$ and $(0.25,0.25,0.25,0.25)$, where the weight of $\bm{\pi}^\prime$ was chosen from a triangular distribution such that the minimum possible value in the average $\bm{\pi}$ was 0.1.
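The triangular-distribution sampling for BDF~=~1 and BDF~=~2 described above can be sketched as follows (Python standard library only; the seed is arbitrary):

```python
import random

random.seed(2)  # arbitrary seed for reproducibility

def base_freqs_bdf1():
    """BDF = 1: pi_G = pi_A and pi_C = pi_T, with pi_A + pi_G drawn from a
    triangular distribution centred on 0.5 with extremes 0.2 and 0.8."""
    ag = random.triangular(0.2, 0.8, 0.5)  # (low, high, mode)
    return {"A": ag / 2, "G": ag / 2, "C": (1 - ag) / 2, "T": (1 - ag) / 2}

def base_freqs_bdf2():
    """BDF = 2: pi_A + pi_G = 0.5 = pi_C + pi_T, with pi_A and pi_C drawn
    independently from a triangular distribution centred on 0.25 with
    extremes 0.1 and 0.4."""
    pa = random.triangular(0.1, 0.4, 0.25)
    pc = random.triangular(0.1, 0.4, 0.25)
    return {"A": pa, "G": 0.5 - pa, "C": pc, "T": 0.5 - pc}
```

Both samplers guarantee that every base frequency lies in $[0.1, 0.4]$, so no base is rarer than 0.1, as required.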
The symmetric part of the time-reversible models was generated via the method of \emph{basis matrices}, in imitation of the construction of Lie-Markov models presented in \cite{woodhams2015new}.
Details are given in the Supplementary Material.
Under this construction, both time-reversible and Lie-Markov models have parametrizations where parameters must be in the range $[-1,1]$, and if any parameter is $+1$ or $-1$, then the rate matrix will have a zero entry.
In other words, if a parameter is on the boundary of allowed values, $Q$ will be on the boundary of being a valid (stochastic) rate matrix.
If all parameters are zero, $Q$ will be the Jukes-Cantor matrix.
In either case (LM or TR), we draw parameters from a triangular distribution in the range $[-0.8,0.8]$, so parameters tend to be close to zero and rate matrices tend to be close to the Jukes-Cantor matrix.
Given a DNA model, to construct the corresponding MG-type model we require two additional parameters: $\omega$ and $t$.
In our simulations, $\omega$ was fixed and chosen from $\{0.2, 0.5, 1, 1.5, 2\}$, and time $t$ was selected uniformly in the range [0.03,0.18].
We have found that these parameter values create conditions in which it is feasible to compare fairly the performance of phylogenetic inference using the resulting simulated data.
Following (\ref{eq:codonG}), the codon substitution matrix $M_i$ for the $i$-th branch is given by
\begin{eqnarray}
\label{eq:M}
M_i=\exp\left(R_i t_i \right)
\end{eqnarray}
where $R_i=\left(Q_i\otimes \mathbf{I} \otimes \mathbf{I} +\mathbf{I} \otimes Q_i \otimes \mathbf{I} +\mathbf{I} \otimes \mathbf{I} \otimes Q_i\right)\ast G$, and $Q_i$ is the DNA rate matrix and $t_i$ the time on the $i$-th branch.
Having randomly selected codon rate matrices $R_1$ and $R_2$, we then explored two different methods for setting the codon equilibrium frequency $\bm{\pi}_0$ at the root.
In the first set of simulations we set $\bm{\pi}_0$ to be the average of the equilibrium frequencies for $R_1$ and $R_2$; in the second set of simulations we chose $\bm{\pi}_0$ at random as described above.
In either case, we then generated a joint probability matrix $J$ for the overall process by setting the initial codon frequencies to $\bm{\pi}_0$ and evolving down each branch, using the rate matrices $R_1$ and $R_2$ respectively (Figure \ref{fig:GY}).
$J$ is calculated by
\[
J = M_{1}^{T}\cdot \mbox{diag}(\bm{\pi}_{0})\cdot M_{2}
\]
where $M_{i}$ is constructed according to (\ref{eq:M}) from a single DNA rate matrix $Q_i$ that applies at each codon position and which has randomly chosen parameters $\theta_i$.
The $i,j$-th entry of $J$ is the probability that at an arbitrary site, the left leaf has codon $i$ and the right leaf has codon $j$.
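The construction of $J$ and its marginalizations can be illustrated with a two-state toy analogue of the $64$-state computation (a sketch; the matrices and root distribution are arbitrary):

```python
import numpy as np

# Toy two-state analogue of J = M1^T diag(pi0) M2.
pi0 = np.array([0.6, 0.4])                 # root distribution
M1 = np.array([[0.9, 0.1], [0.2, 0.8]])    # branch-1 substitution matrix
M2 = np.array([[0.7, 0.3], [0.4, 0.6]])    # branch-2 substitution matrix

J = M1.T @ np.diag(pi0) @ M2

# J[i, j] = prob(left leaf in state i, right leaf in state j);
# marginalising J recovers the distributions at the two leaves.
pi_left = J.sum(axis=1)    # equals pi0 @ M1
pi_right = J.sum(axis=0)   # equals pi0 @ M2

# Analogue of the Delta-pi statistic used later in the paper.
delta_pi = float(np.linalg.norm(pi_left - pi_right))
```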
\subsection{Model-fitting and performance measures}
We now treat the matrix $J$ as our ``data'' and attempt to fit a single homogeneous process to it.
The fact that the codon models are not closed means that we will not be able to do this exactly, but we can look for the best fitting homogeneous model.
We optimize over $\omega$, the parameters $\theta_\mathcal{M}$ of a single model $\mathcal{M}$, and $t$, so as to best match $J^\ast$ to $J$, where $J^\ast$ is calculated as follows:
\begin{equation*}
\begin{split}
J^\ast(\omega,\theta_\mathcal{M},t) & = \exp(R(\omega,\theta_\mathcal{M})t)^{T}\\ & \qquad \times\mbox{diag}(\mbox{eqbm}(R(\omega,\theta_\mathcal{M}))) \\ & \qquad \times\exp(R(\omega,\theta_\mathcal{M})t)
\end{split}
\end{equation*}
\noindent where $\text{eqbm}(R(\omega,\theta_\mathcal{M}))$ is the equilibrium distribution of $R(\omega,\theta_\mathcal{M})$.
For each simulation, we calculate $\Delta\pi$, a measure of the difference in codon frequencies at the two leaves. This is the Euclidean (root-sum-of-squares) difference in codon frequencies, i.e.,
\[
\Delta\pi = ||\bm{\pi}_1 - \bm{\pi}_2||_{2} = \sqrt{(\bm{\pi}_1-\bm{\pi}_2)\cdot(\bm{\pi}_1-\bm{\pi}_2)^T}
\]
where, taking $\bm{e}$ as the 64-long vector of ones, $\bm{\pi}_1=\bm{e}\cdot J$, and $\bm{\pi}_2=\bm{e}\cdot J^T$ (the column and row marginalizations of $J$ respectively).
The optimization is done by maximum likelihood, where the log-likelihood is
\[
\log(L(\omega,\theta_\mathcal{M},t|J))\propto\sum_{i,j}J_{ij}\log(J^\ast(\omega,\theta_\mathcal{M},t)_{ij})
\] and we denote $\widehat{\omega}$, $\widehat{\theta}_\mathcal{M}$, $\widehat{t}$ as the parameters which maximize this log-likelihood.
If the MG-type model is robust to inhomogeneity, we should find $\widehat{\omega} \approx \omega$ and $2\widehat{t} \approx t_1+t_2$.
Simulations were completed for each model shown in Table~\ref{tab:model_families}, for values of $\omega \in \{0.2, 0.5, 1, 1.5, 2\}$.
We performed 500 replicates for each parameter set.
For each simulation we record both a raw error and a relative error.
Raw error is recorded as $\widehat{\omega} - \omega$ and $\widehat{t} - (t_1 + t_2)$, and relative error is defined as the absolute value of the raw error divided by the true value.
\section{Results}
\subsection{Does lack of closure lead to mis-estimation of $\omega$ or branch lengths?}
In this subsection we report results for the MG-style codon model that embeds HKY as a DNA model (PAML \citep{yang2007paml}, the most widely used software for fitting codon models, is based on the HKY DNA model). Here the lack of closure results from a change in model parameters such as the equilibrium frequencies of the bases. Biologically, this is justified by several phenomena, including shifts in environmental temperature in bacteria \citep{groussin2011adaptation}, site-wise shifts in amino acid frequencies \citep{pollock2012amino}, and laterally transferred genes that show shifts in base frequencies between the old genome and the new genome \citep{daubin2003source}.
Our results for this model show that lack of closure does cause $\omega$ to be mis-estimated (Figure \ref{fig:error_MGHKY_nasty}(a), Table \ref{tab:mean_error_omega_nasty}; Supplementary Figure~S1, Supplementary Table~S4).
Over- or under-estimation seems about equally likely for values of true $\omega$ less than or equal to 1, but $\omega$ is more likely to be overestimated for true $\omega$ values of 1.5 and 2.
Mis-estimation is usually less than 2\%, but can be larger (the maximum relative error was 11.6\% in simulations with $\bm{\pi}_0$ chosen to be intermediate, and 16.9\% in the simulations where $\bm{\pi}_0$ was chosen randomly).
\begin{table*}
\begin{centering}
\begin{tabular}{cccccc}
\hline
true $\omega$ & 0.2 & 0.5 & 1 & 1.5 & 2 \tabularnewline
\hline
Mean raw error & -0.001065 & -0.002390 & -0.003419 & 0.007429 & 0.014113 \tabularnewline
Mean relative error & 0.018394 & 0.018873 & 0.017968 & 0.016775 & 0.017175 \tabularnewline
Maximum relative error & 0.126965 & 0.168885 & 0.139984 & 0.134708 & 0.169300 \tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Mean raw error, mean relative error and maximum relative error in the estimate of $\omega$ for different true values of $\omega$ for simulations where $\bm{\pi}_0$ is chosen at random.
\label{tab:mean_error_omega_nasty}}
\end{table*}
\begin{figure*}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.49\textwidth]{raw_omega_error_nasty.pdf}
& \includegraphics[width=0.49\textwidth]{raw_b_error_nasty.pdf} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{Error in (a) $\omega$ and (b) branch length for the MG-style model that embeds the HKY DNA model. Boxplots display 500 simulations for each true value of $\omega$. Boxes show the lower quartile, median and upper quartile. Errors more than 1.5 times the interquartile range from the lower/upper quartiles are shown as individual points.
Results are shown for simulations where the root distribution $\bm{\pi}_0$ was chosen at random.}
\label{fig:error_MGHKY_nasty}
\end{figure*}
Branch lengths are also mis-estimated (Figure~\ref{fig:error_MGHKY_nasty}(b), Table~\ref{tab:mean_error_bl_nasty}; Supplementary Figure~S2, Supplementary Table~S5).
In the simulations where $\bm{\pi}_0$ was chosen to be intermediate, branch lengths are about equally likely to be over- or underestimated for values of true $\omega$ less than or equal to 1, and are more likely to be overestimated for true $\omega$ of 1.5 and 2.
Mis-estimation is usually less than 1\% of true branch length but can be up to 8.5\%.
For the set of simulations where the root distribution was chosen at random, branch lengths were far larger and also far more likely to be overestimated: for these experiments, the error was usually around 5\% but could be up to 83\% of the true branch length (Table~\ref{tab:mean_error_bl_nasty}).
\begin{table*}
\begin{centering}
\begin{tabular}{cccccc}
\hline
true $\omega$ & 0.2 & 0.5 & 1 & 1.5 & 2 \tabularnewline
\hline
Mean raw error & 0.010231 & 0.017138 & 0.028055 & 0.040812 & 0.050529 \tabularnewline
Mean relative error & 0.047293 & 0.051239 & 0.052850 & 0.053294 & 0.052124 \tabularnewline
Maximum relative error & 0.544113 & 0.514956 & 0.584356 & 0.730939 & 0.827849 \tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Mean raw error, mean relative error and maximum relative error in the estimate of branch length for different true values of $\omega$ for simulations where $\bm{\pi}_0$ is chosen at random.
\label{tab:mean_error_bl_nasty}}
\end{table*}
As the underlying process becomes more heterogeneous the size of the errors increases. The largest errors in estimation of both $\omega$ and branch lengths occur when the difference in the rate matrices and/or difference in base frequencies are large (Figure \ref{fig:error_fanout_MGHKY_nasty}, Figure~S3).
\begin{figure*}
\includegraphics[width=\textwidth]{error_fanout_nasty.pdf}
\caption{Raw errors in $\omega$ (top panels) and errors in branch lengths (bottom panels)
for increasingly heterogeneous processes as measured by the difference in rate matrices (left-hand panels) and difference in base frequencies (right-hand panels).
Results are shown for simulations where the root distribution $\bm{\pi}_0$ was chosen at random.
\label{fig:error_fanout_MGHKY_nasty}}
\end{figure*}
\subsection{Does choice of DNA model affect estimation of $\omega$ or branch lengths?}
As the results above indicate, lack of closure can cause mis-estimation of both $\omega$ and branch lengths. We were interested in whether the choice of underlying DNA model had an effect on accuracy.
In particular we explored if using closed DNA models would reduce errors.
To assess this we fit a linear model with relative error (in either $\omega$ or branch lengths) as the response variable and the following predictor variables: model class (Lie-Markov \textit{vs.}\ non-LM),
number of parameters in the base model (treated as a scaled variable), number of base frequency degrees of freedom (treated as a categorical variable), difference in rate matrices, and difference in base frequencies.
The relationship between the first three of these predictor variables and the models we used can be seen in Table~\ref{tab:model_families}.
We restricted to data sets where the true value of $\omega$ was 1.
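The regression just described can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the actual analysis: the generating coefficients, sample size and predictor distributions are assumptions made only to keep the example self-contained. As in the tables below, BDF is dummy-coded against a BDF~=~0 reference level, and the fit is ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the predictors described in the text
# (the real analysis used relative errors from the simulations).
model_class = rng.integers(0, 2, n)          # 0 = Lie-Markov, 1 = time reversible
n_params    = rng.integers(1, 12, n)         # parameters in the base DNA model
bdf         = rng.integers(0, 4, n)          # base frequency degrees of freedom
diff_Q      = rng.uniform(0, 0.2, n)         # difference in rate matrices
diff_pi     = rng.uniform(0, 0.05, n)        # difference in base frequencies

# Assumed generating model: error driven by process heterogeneity plus noise.
rel_err = 0.01 * diff_Q + 0.1 * diff_pi + rng.normal(0, 1e-3, n)

# Scale the parameter count and dummy-code BDF as a categorical variable
# (BDF = 0 is the reference level, matching the intercept rows in the tables).
n_params_sc = (n_params - n_params.mean()) / n_params.std()
bdf_dummies = np.column_stack([(bdf == k).astype(float) for k in (1, 2, 3)])

X = np.column_stack([np.ones(n), model_class, n_params_sc,
                     bdf_dummies, diff_Q, diff_pi])
beta, *_ = np.linalg.lstsq(X, rel_err, rcond=None)
print(dict(zip(["intercept", "class_TR", "n_params", "BDF1", "BDF2",
                "BDF3", "diff_Q", "diff_pi"], beta.round(4))))
```

With enough simulations the fitted coefficients recover the assumed effects of the heterogeneity measures, which is the pattern the real fits in the tables below exhibit.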
\begin{table*}
\begin{centering}
\begin{tabular}{cllll}
\hline
Variable & Estimate & Std.\ Error & t value & p-value \tabularnewline
\hline
(Intercept) & -3.306e-03 & 1.646e-04 & -20.080 & {\bf $\leq$ 2e-16} \tabularnewline
Model class (TR) & -1.333e-03 & 1.062e-04 & -12.551 & {\bf $\leq$ 2e-16} \tabularnewline
\# parameters & 6.618e-04 & 2.472e-05 & 26.768 & {\bf $\leq$ 2e-16} \tabularnewline
BDF=1 & -2.972e-04 & 1.553e-04 & -1.913 & 0.0558 \tabularnewline
BDF=2 &-1.536e-04 & 1.678e-04 & -0.915 & 0.3602 \tabularnewline
BDF=3 &2.877e-05 & 1.908e-04 & 0.151 & 0.8802 \tabularnewline
diff.\ in rate matrices & 1.691e-02 & 7.807e-04 & 21.661 & {\bf $\leq$ 2e-16} \tabularnewline
diff.\ in base frequencies & 1.376e-01 & 6.917e-03 & 19.888 & {\bf $\leq$ 2e-16} \tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Results of fitting linear model with relative error in $\omega$ as the response variable.
With regard to the categorical variables, the intercept corresponds to models in the Lie-Markov (LM) class with BDF~=~0. As low errors are good, categories with negative coefficients in the `Estimate' column are performing well. In particular, TR (time reversible) models generate lower errors in $\omega$ than do Lie-Markov based models.
\label{tab:lm_omega}}
\end{table*}
\begin{table*}
\begin{centering}
\begin{tabular}{cllll}
\hline
Variable & Estimate & Std.\ Error & t value & p-value \tabularnewline
\hline
(Intercept) & -2.508e-03 & 1.272e-04 & -19.714& {\bf $\leq$ 2e-16} \tabularnewline
Model class (TR) & 8.547e-04 & 8.205e-05 & 10.418 & {\bf $\leq$ 2e-16} \tabularnewline
\# parameters & 2.839e-04 & 1.910e-05 & 14.861 & {\bf $\leq$ 2e-16} \tabularnewline
BDF=1 & 1.611e-03 & 1.200e-04 & 13.423 & {\bf $\leq$ 2e-16} \tabularnewline
BDF=2 & -8.560e-04 & 1.297e-04 & -6.600 & {\bf 4.21e-11} \tabularnewline
BDF=3 &4.678e-04 & 1.475e-04 & 3.172 & {\bf 0.00151} \tabularnewline
diff. in rate matrices & 1.098e-02 & 6.033e-04 & 18.198 & {\bf $\leq$ 2e-16} \tabularnewline
diff. in base frequencies & 2.697e-01 & 5.345e-03 & 50.452 & {\bf $\leq$ 2e-16} \tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Results of fitting linear model with relative error in branch length as the response variable.
With regard to the categorical variables, the intercept corresponds to models in the Lie-Markov (LM) class with BDF~=~0. The interpretation of this table is similar to the previous one, and we see that TR models are worse (positive estimate coefficient) than Lie-Markov based models.
\label{tab:lm_bl}}
\end{table*}
These results are in Tables \ref{tab:lm_omega} and \ref{tab:lm_bl}. In these tables, ``\# parameters'', ``BDF'', ``diff. in rate matrices'' and ``diff. in base frequencies'' are potentially confounding variables being accounted for.
The row of greatest interest is the ``Model class'' row, which shows how time reversible models perform relative to Lie-Markov based models.
For the $\omega$ relative error (Table~\ref{tab:lm_omega}), the negative coefficient in ``Model class'' indicates that, other variables being equal, time reversible models give lower error than Lie-Markov based models.
For the branch length relative error, the converse is true --- Lie-Markov based models give lower errors than time reversible models.
In both cases, the effect size is small (on the order of 0.1\%) so these differences are of little importance.
The results shown are for the case where $\bm{\pi}_0$ was chosen to be intermediate; however, the random $\bm{\pi}_0$ results are similar (time reversible models being better at estimating $\omega$ and worse at estimating branch length).
\section{Discussion}
In this paper we introduced an algebraic formulation of codon models that allows us to separate the effect of independent evolution of sites across triplets (implemented via the Kronecker product) from the overlaying effect of the genetic code.
This gives a flexible method for constructing codon models from any chosen DNA model (or even three different DNA models acting at different codon positions).
By using this formulation, we can show that while closure properties of DNA models carry over to triplet models, the overlaying effect of the genetic code removes the closure property from codon models.
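The triplet construction just described can be illustrated numerically. The following sketch builds a 64-state triplet rate matrix as a Kronecker sum of a DNA rate matrix acting independently at the three positions, then applies a toy genetic-code overlay that rescales non-synonymous rates by $\omega$. The base frequencies, the value of $\omega$, and in particular the fake ``code'' (in which the amino acid of a codon is just its first base) are assumptions chosen only to keep the example self-contained; they are not the real genetic code.

```python
import numpy as np

# A simple F81-style DNA rate matrix: the off-diagonal rate to base j is pi_j.
pi = np.array([0.1, 0.2, 0.3, 0.4])          # assumed equilibrium base frequencies
Q = np.tile(pi, (4, 1))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))          # rows sum to zero

# Independent evolution at the three codon positions: the triplet rate
# matrix is the Kronecker sum Q (+) Q (+) Q acting on the 64 triplet states.
I = np.eye(4)
Q_triplet = (np.kron(np.kron(Q, I), I)
             + np.kron(np.kron(I, Q), I)
             + np.kron(np.kron(I, I), Q))

# Toy genetic-code overlay: multiply each "non-synonymous" rate by omega.
# The code used here (amino acid = first base) is a stand-in only.
omega = 0.5
amino = lambda c: c // 16                    # codon index -> first base
Q_codon = Q_triplet.copy()
for i in range(64):
    for j in range(64):
        if i != j and amino(i) != amino(j):
            Q_codon[i, j] *= omega
np.fill_diagonal(Q_codon, 0.0)
np.fill_diagonal(Q_codon, -Q_codon.sum(axis=1))

print(Q_triplet.shape)                       # 64 x 64 triplet generator
```

Note that the overlay acts entry-wise, not as a matrix operation on the Kronecker factors, which is the structural reason the closure properties of the DNA model do not survive it.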
Given that previous work suggests lack of closure could exacerbate misestimation of heterogeneous processes \cite{sumner2012general}, we used two-taxon simulations to investigate the effect of using homogeneous codon models to fit a heterogeneous process.
Specifically, we investigated cases where the selection parameter $\omega$ was constant but where parameters of the underlying DNA model were different in different branches.
We found errors in the estimates of both $\omega$ and branch length; these errors became larger on average as the processes on the two branches differed more.
Further, where the ancestral codon frequencies were not intermediate, we encountered far larger errors than where they were intermediate.
On average, the effect sizes are not large (less than 1\% for $\omega$ and less than 5\% for branch lengths), however, it was possible to get errors greater than 10\% for $\omega$ and greater than 50\% for branch lengths.
\citet{kaehler2017standard} demonstrate that homogeneous time reversible (hence stationary) codon models are biased to overestimate $\omega$ when the sequences have differing codon frequencies.
Our Monte Carlo simulations find this bias only when the true $\omega$ is strictly greater than 1.
We are simulating non-stationary non-homogeneous data and analysing it as stationary, homogeneous, but not necessarily time reversible.
As non-reversible (Lie-Markov) DNA models perform no better in our results than time reversible DNA models, we can conclude that the misestimation of $\omega$ found by \citet{kaehler2017standard} is not due to the time reversibility assumption, but rather due to one or both of the stationarity or homogeneity assumptions.
Intriguingly, as pointed out in \citet{kaehler2017standard}, while non-stationary models pass absolute goodness-of-fit tests for nucleotide data, even the most general non-stationary model of codon evolution (GNC) is still frequently rejected by absolute goodness-of-fit tests. It seems plausible that the issues with lack of closure explored here may offer some explanation for the failure of these models to fit codon sequence data adequately.
To judge to what extent these errors were specifically due to lack of closure, we repeated the simulations on triplet models with no $G$ matrix applied; in this case, errors for both closed and non-closed DNA models are about an order of magnitude smaller (Supplementary Material).
This result, combined with the observation that codon models formed from closed DNA models (i.e.\ Lie-Markov models) perform no better than models formed from other DNA models, suggests that it is the lack of closure introduced by the genetic code that is the main source of the problem.
This raises the question of whether there exist any biologically realistic codon models that are closed.
This is currently an open problem.
Mathematically, it appears to be a rather difficult task.
The key issue is that applying augmentation parameters to allow the model to respect the genetic code introduces non-linearity into the model which, in itself, is enough to rule out the possibility of the model being closed.
A reasonable way around this difficulty is to ask, for a given non-closed codon model: ``What is the simplest closed codon model that contains this particular model as a submodel?''. The mathematical procedure for answering this question is computationally straightforward (albeit intensive), and our calculations have shown that the resulting models have many, many parameters (in the order of thousands) and are hence not of practical use for phylogenetics.
This does however prompt a modified question that has the potential to produce a more reasonable answer: ``What is the simplest \emph{linear} codon model that contains an MG-type model?'' In general a linear model is not closed, but our previous work \citep{sumner2012general} has shown that, at least in the case of DNA models, the errors caused by non-closure are largely resolved by moving from a non-closed non-linear model to a non-closed linear model.
For the MG-type models, the smallest containing linear model has quite an interesting structure, which we now describe.
To fix ideas, we discuss the MG model with F81 as the underlying DNA model and augmentation parameters 0, 1, and $\omega$.
To find the smallest linear codon model containing this model, one may proceed by finding the set of codon rate matrices obtained by taking sums of codon rate matrices from this model.
This results in the replacement of the substitution rates $(\alpha_i, \alpha_i\omega)$ --- which, as discussed above, exhibit non-linear constraints --- with an independent set $(\alpha_i, \mu_i)$.
This has the effect of removing the non-linear constraints on the model, at the expense of moving from a five parameter model to an eight parameter model.
This does however produce a linear codon model which is consistent with the genetic code structure and allows for the recoverability of \emph{multiple} analogues of the $dN/dS$ via the definitions:
\[
\omega_i\equiv \frac{\mu_i}{\alpha_i}.
\]
While there are intriguing possibilities inspired by signals from protein biophysics embedded in the genetic code through differing amino acid properties, we leave analysis and application of this model to future work.
\subsubsection*{Acknowledgement}
This research was supported by Australian Research Council (ARC) Discovery Grant DP150100088.
The authors would like to thank Simon Whelan for suggesting the line of enquiry explored in this work after JGS's presentation at the Phylogenetics: New Data, New Phylogenetic Challenges Workshop, University of Cambridge 2011.
\bibliographystyle{natbib}
\section{Introduction}
\setcounter{equation}{0}
Let $f$ be a rational or a transcendental entire function. For $n \in \mathbb{N} $, let $ f^{n}$ denote the $n^{th}$ iterate of $f$. The set
$ F(f) = \{ z : (f^n) \textrm{ is normal in some neighbourhood of } z \}$ is called the Fatou set of $f$, and its complement, denoted by $J(f)$, is called the Julia set of $f$. For properties of these two sets one can refer, for instance, to \cite{Beardon}, \cite{Bergweiler}, \cite{Morosawa}. Here we observe that $\hat{\mathbb{C} }= \mathbb{C} \cup \{ \infty \}$ has been partitioned into two different classes of sets, viz.\ the Fatou set and the Julia set, with $ \infty \in J(f) $ when $f$ is transcendental entire. There are other ways in which $\hat{\mathbb{C} }$ can be partitioned, which are also linked with the Fatou and Julia sets.
For a transcendental entire function, Eremenko \cite{Eremenko} considered the Escaping set
$ I(f) = \{z \in \mathbb{C} : f^n(z) \rightarrow \infty \textrm{ as } n \rightarrow \infty \} $
and showed that $ I(f) \cap J(f) \not= \phi $, $ \partial I(f) = J(f) $ and all components of $ \overline{I(f)}$ are unbounded. He further conjectured that all the components of $I(f)$ should be unbounded. This conjecture, though not yet solved, has given rise to rich developments in the field. Points which tend to infinity with different ``speeds'', such as fast escaping points, slow escaping points, relatively fast escaping points, etc., have been studied by various authors (see for instance \cite{Rippon}, \cite{Rippon 1}, \cite{ Singh}, \cite{ Wang}, \cite{Zheng}). In contrast, there are points whose iterates under $f$ remain bounded. The set of such points is denoted by $ K(f) $ and is defined by
$ K(f) = \{z: \textrm{ there exists } R > 0 \textrm{ such that } | f^n(z) | \leq R \textrm{ for } n \geq 0 \}. $
For a non-linear polynomial $P$, $ K(P) $ is known as the filled Julia set, which has been extensively studied. For a transcendental entire function, $K(f) $ has been studied, for instance, by Bergweiler \cite{ Bergweiler 1} and Osborne \cite{Osborne}. There are also points whose iterates are neither bounded nor tending to $\infty$; the set of such points has been considered by Osborne and Sixsmith, who called it the Bungee set. More specifically, Osborne and Sixsmith \cite{Osborne 1} defined the Bungee set $ BU(f) $ as $ BU(f) = \mathbb{C} \setminus ( I(f) \cup K(f) )$. Note that as each of $I(f)$ and $K(f)$ is completely invariant, it follows that $ BU(f)$ is also completely invariant, and the sets $I(f)$, $K(f)$, and $ BU(f)$ partition the complex plane $ \mathbb{C} $ into three classes. Rather than studying the Bungee set as the complement of $ I(f) \cup K(f) $, we give here an alternate definition of the Bungee set, which is very easy to use, and prove some results using this definition.
\begin{definition} Let $f$ be a rational or a transcendental entire function. We define the Bungee set of $f$, denoted by $BU(f)$, by:
$ BU(f) = \{ z : \textrm{ there exist at least two subsequences } \{f^{n_k}\}, \{f^{m_k}\} \\ \textrm{ with } {n_k} \rightarrow \infty \textrm{ and } {m_k} \rightarrow \infty \textrm{ as } k \rightarrow \infty \textrm{ and a constant } R > 0 \textrm{ such that } |f^{n_k}(z) | \leq R \textrm{ for } k =1, 2, \dots, \textrm{ and }$ $ f^{m_k}(z) \rightarrow \infty \textrm{ as } m_k \rightarrow \infty \}. $
\end{definition}
Note that the subsequences in the above definition cannot have a ``pattern" in the sense that for the point $z_o \in BU(f) $ there do not exist sequences $ {n_k}$ and $ {m_k} $ such that $ n_k = k n_o, m_k = k m_o, $ where $ n_o, m_o \geq 1, k = 1, 2, \dots $ and such that $ |f^{n_k}(z_o) | \leq R \textrm{ for } k =1, 2, \dots, \textrm{ and } f^{m_k}(z_o ) \rightarrow \infty \textrm{ as } k \rightarrow \infty $, for if they do exist, then choose $k$ sufficiently large say $l$ such that $ |f^{m_k}(z_o) | > R$ for all $ k \geq l $, and so $ |f^{k m_o}(z_o) | > R$ for all $ k \geq l $, and in particular $ |f^{k m_o n_o}(z_o ) | > R $ for $ k \geq l $, contradicting $ |f^{t n_o}(z_o ) | = |f^{n_t}(z_o ) | \leq R$ for all $t$. Also, note that for $z_o \in BU(f)$ if $ ( n_k, m_k, R, l_o )$ denote the sequences $ \{n_k\}$ and $ \{m_k\} $ such that $ |f^{n_k}(z_o) | \leq R \textrm{ for } k =1, 2, \dots, \textrm{ and } | f^{m_k}(z_o ) | > R $, for all $ k \geq l_o $, then if
$ n_k = p n_{k-1} $ and $ m_k = q m_{k-1}, $ then $ p \not= ( \frac{m_o}{n_o} q^l )^{\frac{1}{k}}$ for all $ l \geq l_o. $ For if for some $ l \geq l_o $ $ p = ( \frac{m_o}{n_o} q^l )^{\frac{1}{k}}$, then $ n_k = p^k n_o = q^l m_o = m_l, $ and so $ |f^{n_k}(z_o) | = |f^{m_l}(z_o) | $ which is not possible as left equality is bounded by $R$ and the right equality is greater than $R$.
\\
For a non-linear polynomial $P$, $BU(P) = \phi $, while for a transcendental entire function $f$, $BU(f) \cap J(f) \not= \phi $ \cite{Baker 1}, and there are also examples where $BU(f) \cap F(f) \not= \phi $ (see for instance \cite{Bergweiler 1}, \cite{Bishop}). For rational functions, the Bungee set may coincide with the Fatou set: for instance, if $R(z) = \frac{1}{z^2}$ then $BU(R) = (|z|<1)\cup (|z|>1) $ and $J(R) = \partial (BU(R))$. It is interesting to note that if $R$ and $S$ are permutable rational functions (i.e., $ R \circ S = S \circ R$), then $J(R)= J(S)$, whereas this is still an open question for transcendental entire functions, though quite some progress has been made (see for instance \cite{Baker}, \cite{Singh}). However, the corresponding result on the Bungee set is not true for rational functions. For instance, if $R(z) = \frac{1}{z^2} $ and $S(z) = z^2 $ then clearly $ R(S(z)) = S(R(z)) $, and $ BU(R)= (|z|<1) \cup (|z|>1) \textrm{ whereas } BU(S) = \phi $. It seems the same type of statement would hold if $f$ is transcendental entire, though we do not have an example of it. Thus the Bungee set may behave quite differently from the Julia set for rational functions and transcendental entire functions. It is also well known that for any $n$, $J(f^n) = J(f)$ and $F(f^n) = F(f)$. However, this is not true for Bungee sets: we only have $ BU(R^n) \subset BU(R)$ in general, since, for instance, if $R(z) = \frac{1}{z^2}$, then $ BU(R^2) = \phi$, whereas $ BU(R)= (|z|<1) \cup (|z|>1)$. We are not sure whether the equality holds for transcendental entire functions, but we can at least show $ BU(f^n) \subset BU(f)$. For this, observe that if $z_o \in BU(f^n)$, then there exist $\{n_k\}$ and $\{m_k\} $ tending to $ \infty$ such that $|(f^n)^{n_k}(z_o)| \leq R $ for some positive $R$, $ k = 1, 2, \dots$, and $(f^n)^{m_k}(z_o) \rightarrow \infty $
as $ m_k \rightarrow \infty $. Let $ t_k = n \cdot n_k $ and $p_k = n \cdot m_k$; then $|(f)^{t_k}(z_o)| \leq R , k = 1, 2, \dots$, and $f^{p_k}(z_o) \rightarrow \infty $ as $ p_k \rightarrow \infty $, and so $z_o \in BU(f)$.\\
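The behaviour of the example $R(z) = \frac{1}{z^2}$ can be checked numerically. Since $|R(z)| = |z|^{-2}$, the log-modulus $t = \log|z|$ simply follows $t \mapsto -2t$, and tracking $t$ avoids floating-point overflow. The thresholds and iteration count below are arbitrary choices, and a finite computation can only illustrate, not prove, membership of $BU(R)$.

```python
import math

# Heuristic orbit classifier for R(z) = 1/z^2.  The modulus evolves as
# |R(z)| = |z|^(-2), so t = log|z| follows t -> -2t exactly.
def classify(z0, steps=60, big=50.0):
    t = math.log(abs(z0))
    seen_big = seen_small = False
    for _ in range(steps):
        t = -2.0 * t
        if t > big:          # |z| astronomically large at this iterate
            seen_big = True
        if t < -big:         # |z| astronomically small (still bounded)
            seen_small = True
    if seen_big and seen_small:
        return "bungee"
    return "bounded" if not seen_big else "escaping"

print(classify(0.5))   # moduli alternate between huge and tiny: bungee
print(classify(1.0))   # the unit circle is invariant: bounded
```

Every starting point off the unit circle is classified as ``bungee'' by this heuristic, consistent with $BU(R) = (|z|<1) \cup (|z|>1)$ noted above.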
\newpage
\section{Main results and their proofs.}
\setcounter{equation}{0}
In this paper we concentrate on Bungee sets of transcendental entire functions, and more specifically with composition of transcendental entire functions.\\
If $U$ is a component of $F(f)$, then by complete invariance, $f(U)$ lies in some component of $F(f)$. If $U_n \cap U_m = \phi $ for $ n \not= m$, where $U_n$ denotes the component of $F(f)$ which contains $f^n(U)$, then $U$ is called a wandering domain; otherwise it is either a periodic or a pre-periodic domain. A complete classification of periodic domains is well known (see for instance \cite{Morosawa}). Also, it is well known \cite{Sullivan} that rational functions have no wandering domains, though the same is not true for transcendental entire functions; in fact, transcendental entire functions may have wandering domains. There are, however, transcendental entire functions, such as the functions in the Speiser class (i.e.\ entire functions whose singularities are finite in number),
which do not have wandering domains. There are other classes of transcendental entire functions which do not have wandering domains (see for instance \cite{Bergweiler}, \cite{Morosawa}).
If $P$ is a bounded periodic component of $F(f)$ then obviously no point $z_0 \in P$ belongs to $BU(f)$. But what can be said about points on $\partial P$? We shall show that even $ \partial P$ does not contain points of $BU(f)$. Thus we shall prove
\begin{Theorem}\label{Theorem 1}
Let $f$ be transcendental entire function. Let $P$ be a bounded periodic component of $ F(f)$. Then $ \partial P \cap BU(f) = \phi $.
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 1}. Note that as $P$ is a bounded periodic Fatou component, there exists a constant $M$ such that $ |f^n(P)| \leq M $ for all $ n \in \mathbb{N}$.
Suppose $ \xi_o \in \partial P \cap BU(f). $
Then there exists a sequence $\{ m_k\}$ such that $ f^{m_k}(\xi_o) \to \infty $ as $ m_k \rightarrow \infty. $
Hence for sufficiently large $m_k$, say $ M_k$, $ | f^{M_k}(\xi_o)| > 1+M.$ Also $ f^{M_k}$ is continuous at
$\xi_o$ and hence there exists $ \delta > 0$ such that $ | z - \xi_o | < \delta $ implies
$ | f^{M_k}(z) - f^{M_k}(\xi_o) | < 1$. In particular for $ t \in ( | z - \xi_o | < \delta )\cap P $ we have $ | f^{M_k}(t) - f^{M_k}(\xi_o) | < 1$, and on the other hand
$ | f^{M_k}(t) - f^{M_k}(\xi_o) | \geq | f^{M_k}(\xi_o) |- |f^{M_k}(t) | >1. $ This contradiction proves the theorem.\\
{\bf Question}: Is $ \partial P \cap BU(f) = \phi $ if $P$ is unbounded Fatou component ? \\
We next show that, at least for a transcendental entire function with no wandering domain, the Bungee set cannot be a closed set.
\begin{Theorem}\label{Theorem 2}
Let $f$ be transcendental entire function without wandering domain. Then $ BU(f) $ cannot be a closed subset of $ \mathbb{C} $.
\end{Theorem}
For the proof of the theorem we shall need the following theorem of
Osborne and Sixsmith \cite{Osborne}
\begin{Theorem}\label{Theorem 3}
Let $f$ be a transcendental entire function.\\
(a) If $U$ is a Fatou component of $f$ and $U \cap BU(f) \not= \phi,$ then $ U \subset BU(f) $ and $U$ is a wandering domain.\\
(b) $J(f) = \partial BU(f).$\\
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 2}. Suppose $ BU(f) $ is a closed subset of $ \mathbb{C}. $ As $ BU(f) \not= \phi $, it contains infinitely many points. Also, clearly $ BU(f) $ is a completely invariant set. As $J(f)$ is the smallest closed completely invariant set having at least three points, it follows that $ J(f) \subset BU(f)$. If $ J(f) \not= BU(f)$, then there exists $ z_o \in BU(f) \cap F(f) $, and consequently there exists $ U \subset F(f) $ which intersects $BU(f)$. By Theorem \ref{Theorem 3}, $U$ must be a wandering domain. This contradiction proves $ J(f) = BU(f).$ But this is also not possible, as repelling periodic points lie in $J(f) $ and periodic points obviously do not lie in $BU(f)$. This completes the proof of the Theorem.\\
Our next two theorems deal with pre-images of bungee sets corresponding to permutable transcendental entire functions.
\begin{Theorem}\label{Theorem 4}
Let $f$ and $g$ be permutable transcendental entire functions. Let $U \subset BU(f).$ If
$g^{-1} (U) \not= \phi $, then $g^{-1} (U) \cap ( I(f) \cup BU(f) ) \not= \phi $\\
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 4}. Suppose $ g^{-1} (U) \cap ( I(f) \cup BU(f) ) = \phi $. Let $z_o \in g^{-1}(U) $. Then there exists a constant $A$ such that $ |f^n(z_o)| \leq A $ for all $n \in \mathbb{N} $, and consequently $ | g(f^n(z_o))| \leq M(A, g) $ for all $ n \in \mathbb{N}$. Also there exists $\xi_o \in U$ such that $ g(z_o) = \xi_o$, and as
$ U \subset BU(f)$ there exists a sequence $\{n_k\}$ such that $f^{n_k}(\xi_o) \rightarrow \infty $ as $ n_k \rightarrow \infty$. So choose $n_k$ sufficiently large so that $ |f^{n_k}(\xi_o) | > M(A,g) +1$. Then for such $n_k$, $ |f^{n_k}(g(z_o) )| > M(A,g) +1$, whereas $ |f^{n_k}(g(z_o) )| = |g(f^{n_k}(z_o) ) | \leq M(A,g) .$
This contradiction proves the theorem.\\
\begin{Theorem}\label{Theorem 5}
Let $f$ and $g$ be permutable transcendental entire functions. Further let there exist a non-linear polynomial $P$ such that $ P\circ f = f \circ g $. If $U \subset BU(f), $
then $g^{-1} (U) \subset BU(f).$
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 5}. Following as in Theorem \ref{Theorem 4}, if $z_o \in g^{-1}(U) $ and if $ |f^n(z_o)| \leq A $ for all $n \in \mathbb{N} $, then we have obtained a contradiction. We next show that $ f^n(z_o) \not \rightarrow \infty $ as $ n \rightarrow \infty$, and so $z_o \in BU(f)$. So, let $ f^n(z_o) \rightarrow \infty $ as $ n \rightarrow \infty$. Then there exist $\xi_o \in U $, a constant $K$ and a sequence $n_k \rightarrow \infty $ such that $ | f^{n_k}(\xi_o)| < K $ for all $ k=1,2,
\dots $. Let $ p_k = n_{k+1}-n_k $ for all $k$. As $P(z)$ is a non-linear polynomial, there exists $R$ sufficiently large such that $|P(z)| > |z| $ for all $z \in (|z| >R) $. As $ f^n(z_o) \rightarrow \infty $ as $ n \rightarrow \infty$, we can choose $n_k $ sufficiently large so that\\
$ |f^{n_k}(z_o) | > \max \{R, K+1\} , k=1,2,\dots .$ Let $ f^{n_k}(z_o) = \eta $. Then $ f^{n_k}(g(z_o)) = f^{n_k}(\xi_o)= t $, say, where $|t|< K$. And so by permutability, $ t = g(f^{n_k}(z_o))= g(\eta) $, and so
$ | f^{p_k}(g(\eta ))| = | f^{p_k}(g(f^{n_k}(z_o)))| = | f^{p_k}(f^{n_k}g(z_o))|= | f^{n_{k+1}}(g(z_o))| = | f^{n_{k+1}}(\xi_o))| < K $.\\
On the other hand, $| f^{p_k}(g(\eta))| = | f^{n_{k+1}}(g(z_o))| = | P(f^{n_{k+1}}(z_o))| \geq | (f^{n_{k+1}}(z_o)| > K + 1$, and so $ z_o \in BU(f)$. Thus $g^{-1}(U) \subset BU(f)$.\\
If $f$ is a transcendental entire function and $z_o \in BU(f)$, then there exists $R > 0 $ and sequences
$\{n_k\}$ and $\{m_k\} $ tending to $ \infty$ such that $ | f^{n_k}(z_o)| \leq R $ for some positive $R$, $ k = 1, 2, \dots$ and $ f^{m_k}(z_o) \rightarrow \infty $
as $ m_k \rightarrow \infty $. We shall denote such a domain $ ( |z| \leq R ) $ by $D_R(z_o)$. (Note that such a $D_R(z_o)$ is not unique.)\\
\begin{Theorem}\label{Theorem 6}
Let $f$ and $g$ be permutable transcendental entire functions. Let $P$ be a polynomial of degree $\geq 2$ and let $h$ be a transcendental entire function such that $ P\circ f = h \circ g$. Let $z_0 \in BU(f)$. Then $ g(D_R(z_o))$ cannot be a periodic domain of $f$.
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 6}. Let $z_o \in BU(f)$. Then there exists $R > 0 $ and sequences
$\{n_k\}$ and $\{m_k\} $ tending to $ \infty$ such that $| f^{n_k}(z_o)| \leq R $ for positive $R$, $ k = 1, 2, \dots$ and $ f^{m_k}(z_o) \rightarrow \infty $
as $ m_k \rightarrow \infty.$\\
First suppose $ |f^n(g(D_R(z_o)))| \leq A $ for some constant $A$ and all $ n \in \mathbb{N}$. Then
$ |h(f^n(g(D_R(z_o))))| \leq M(A,h) < B $ say, for all $n \in \mathbb{N}$ where $ M(A,h) = \max _{|z|=A}|h(z)|$.
As $P$ is a polynomial of degree $ \geq 2 $, we can choose $S$ sufficiently large so that $ |P(z)|>|z|$ for all $|z|>S$.\\
Now choose $M_k$ sufficiently large so that $ | f^{m_k}(z_o)| > \max \{S, B \} $ for all $ m_k \geq M_k.$\\
Thus for $ m_k \geq M_k, $ $ | P( f^{m_k}(z_o))| > | f^{m_k}(z_o)| > B.$\\
On the other hand $ | P( f^{m_k}(z_o))| = | P( f^{m_k- n_1}(f^{n_1}(z_o)))| = | P( f^{m_k-n_1}(t))| = | h( f^{m_k-n_1-1}g(t))| \leq M(A,h) < B $ where $t= f^{n_1}(z_o)$. This contradiction shows that $ f^n(g(D_R(z_o))) $ cannot be bounded for all $n \in \mathbb{N}$ and hence $g(D_R(z_o))$ cannot be a periodic Fatou component of $f$ unless it is a periodic Baker domain, which also is not possible as $g(D_R(z_o))$ is bounded.\\
If $f$ and $g$ are permutable transcendental entire functions and if $z_0 \in BU(f)$ as well as in $F(f)$ then we have the following result.
\begin{Theorem}\label{Theorem 7}
Let $f$ and $g$ be permutable transcendental entire functions. Let $h$ be a transcendental entire function and $P$ be a non-linear polynomial such that $ P \circ f = h \circ g $. If $ z_0 \in F(f) \cap BU(f)$ then $ g^{-1}(z_0) \in F(f) \cap BU(f)$.
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 7}. Since $ z_0 \in F(f) $ and $F(f)$ is open, there exists a neighbourhood of $z_0$, and consequently a component $U$ of the Fatou set of $f$, such that $U \cap BU(f) \neq \phi$; so by Theorem \ref{Theorem 3}, $U \subset BU(f)$, $U$ is a wandering domain and $\partial BU(f) = J(f)$. If $ g^{-1}(z_0) \in J(f)$, then by complete invariance $ g( g^{-1}(z_0)) \in J(f)$. Thus $z_0 \in J(f)$, contradicting $z_0 \in F(f).$
We now show that $ g^{-1}(z_0) \in BU(f)$. For suppose $ |f^n(g^{-1}(z_0))| \leq A $ for some constant $A$ and for all $n \in \mathbb{N}$; then $ |g(f^n(g^{-1}(z_0))) | \leq M(A, g) $, and so, as $f$ and $g$ are permutable, $ |f^n(z_0) | \leq M(A, g) $ for all $n\in \mathbb{N}$, contradicting $ z_0 \in BU(f)$. Next suppose $ | f^n(g^{-1}(z_0)) | \rightarrow \infty$ as $ n \rightarrow \infty. $ Now, as $ z_0 \in BU(f)$, there exist a subsequence
$\{n_k \} \textrm { tending to }\infty $ and a constant $ c > 0 $ such that $ | f^{n_k}(z_0)| \leq c, k=1,2,\dots. $
Let $ D = M(c, h)$. Since $ P $ is a non-linear polynomial, we can choose $ r > D+1$ sufficiently large so that for all $ z \in (|z|>r), |P(z)| > |z|$. Since $ | f^n(g^{-1}(z_0)) | \rightarrow \infty $ as $ n \rightarrow \infty $, we select $n_t$ from the subsequence so that
$ | f^{n_t}(g^{-1}(z_0)) | > r $ and $ | f^{n_t+1}(g^{-1}(z_0)) | > r $.
Now $ (f^{n_t}\circ g)(g^{-1}(z_0)) = f^{n_t}(z_0) = \xi_0 $, say, where $ |\xi_0 | \leq c $.\\
And so\\
$ \xi_0 = (f^{n_t}\circ g)(g^{-1}(z_0)) = g(f^{n_t}(g^{-1}(z_0))) = g(\eta ) $, say, where $|\eta| = |f^{n_t}(g^{-1}(z_0))| > r$.
Now $ | (h \circ g) (\eta )| = |h(\xi_0)| \leq M(c, h) = D$, and also, as $ |f^{n_t+1}(g^{-1}(z_0)) | > r $, we have\\
$|(h \circ g) (\eta )| = | (P\circ f)(\eta) | = | P( f^{n_t+1}(g^{-1}(z_0))) | > | f^{n_t+1}(g^{-1}(z_0)) | > r > D+1.
$ This contradiction proves the theorem.\\
\begin{Theorem}\label{Theorem 8}
Let $f$ and $g$ be transcendental entire functions. If $ z_0 \in BU(f \circ g)$ then $ g(z_0) \in BU(g \circ f)$.
\end{Theorem}
\noindent \textit{Proof of Theorem} \ref{Theorem 8}. Suppose $ g(z_0) \notin BU(g \circ f)$. Then either (i) $ | (g \circ f )^n (g(z_0)) | \leq A $ for all
$n \in \mathbb{N} $ and some $ A > 0$, or (ii) $ \lim_{n \rightarrow \infty }(g\circ f)^n (g(z_0)) = \infty $. \\
Now if (i) holds, then $ | g((f\circ g)^n(z_0)) | \leq A $ for all $n \in \mathbb{N} $, and so $ | f(g((f\circ g)^n(z_0)))|\leq M(A, f) $ for all $ n \in \mathbb{N},$ where $ M(A, f) = \max_{|z|=A}|f(z)| $. Thus $ | (f\circ g)^{n+1}(z_0) | \leq M(A, f) $ for all $n\in \mathbb{N}$, contradicting $ z_0 \in BU(f \circ g).$\\
Next, if (ii) holds, then since $z_0 \in BU(f \circ g)$, there exists a sequence $\{n_k\}$ such that $ |(f \circ g)^{n_k}(z_0) | \leq \beta$ for some $\beta > 0 $. And so $ | g((f\circ g)^{n_k}(z_0))| \leq M( \beta, g), k = 1, 2, \dots $. Thus $ | (g\circ f)^{n_k}(g(z_0))| \leq M( \beta , g)$, contradicting (ii). This proves the theorem.\\
\section{Introduction}
PKS\,1510--089\ is a bright flat spectrum radio quasar (FSRQ) located at a redshift of $z=0.36$ \citep{ta96}.
It was the second FSRQ to be detected in the very-high-energy (VHE, $>100$\,GeV) range \citep{ab13}.
The source is monitored by various instruments spanning the full range from radio up to VHE gamma rays \citep[see e.g.][]{ma10, al14, ah17}.
Similarly to other FSRQs, the GeV gamma-ray emission of PKS\,1510--089\ is strongly variable \citep{ab10, br13, sa13, pr17}.
Multiple optical flares have been observed from PKS\,1510--089\ \citep{ll75,za16}\footnote{\kom{see also \protect\burl{http://users.utu.fi/kani/1m/PKS\_1510-089\_jy.html}}}.
Significant VHE gamma-ray emission from PKS\,1510--089\ has been observed on a few occasions: during enhanced optical and GeV states in 2009 \citep{ab13} and 2012 \citep{al14} and during short flares in 2015 \citep{ah17, za16} and 2016 \citep{za17}.
Interestingly, no variability in VHE gamma rays has been observed during (or between) the high optical/GeV states in 2009 and 2012 \citep{ab13,al14}.
The GeV state of PKS\,1510--089\ can be studied using the \textit{Fermi}-LAT\ all-sky monitoring data.
MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes designed for observations of gamma rays with energies from a few tens of GeV up to a few tens of TeV \citep{al16a}.
Since the detection of VHE gamma-ray emission from PKS\,1510--089\ in 2012, a monitoring program is being performed with the MAGIC telescopes.
The monitoring aims at a cadence of 2-6 pointings per month, with individual exposures of 1-3\,hrs.
The source is visible to MAGIC for 5\,months of the year.
We use the \textit{Fermi}-LAT\ data to select periods of low gamma-ray emission of PKS\,1510--089 .
Then, we select a subsample of the MAGIC telescope data taken between 2012 and 2017, and contemporaneous multiwavelength data from a number of other instruments, in order to study the quiescent VHE gamma-ray state of the source.
Such low emission can then be used as a baseline for modeling of flaring states.
In Section~\ref{sec:data} we briefly introduce the instruments that provided multiwavelength data, describe the data reduction procedures and explain the principle of low-state data selection.
In Section~\ref{sec:results} we present the results of the observations, and the broadband emission modelling is illustrated in Section~\ref{sec:model}.
The most important findings are summarized in Section~\ref{sec:conc}.
\section{Data}\label{sec:data}
The continuous monitoring of PKS\,1510--089\ in the GeV band provided by \textit{Fermi}-LAT\ allows us to identify the low emission states of the source.
Multiwavelength light curves from the radio band up to the GeV band are shown in Fig.~\ref{fig:lc}.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.95\textwidth]{lc_mwl2.eps}
\caption{
Multiwavelength light curve of PKS\,1510--089\ between 2012 and 2017. From top to bottom:
\textit{Fermi}-LAT\ flux above 1\,GeV;
\textit{Swift}-XRT\ flux 2-10\,keV;
U band flux from UVOT;
KVA, SMARTS and MAPCAT optical flux in R-band;
IR flux from REM and SMARTS in J band;
radio 229\,GHz flux measured by POLAMI;
radio 37\,GHz flux measured by Mets\"ahovi;
15\,GHz flux measured by OVRO.
The red points show the observations within 12\,h (or 3 days for the radio measurements) when MAGIC data have been taken during the time that \textit{Fermi}-LAT\ flux is above $3\times10^{-8}\mathrm{cm^{-2}s^{-1}}$, while the \kom{blue} points are observations in time bins with \textit{Fermi}-LAT\ flux below this flux value (i.e. the low-state sample).
IR, optical and UV data have been dereddened using \citet{sf11}.
}\label{fig:lc}
\end{figure*}
\subsection{\textit{Fermi}-LAT}\label{sec:fermi}
\textit{Fermi}-LAT\ monitors the high energy gamma-ray sky in the energy range from 20 MeV to beyond 300 GeV \citep{Atwood09}.
For this work, we have analyzed the Pass 8 SOURCE class events within a region of interest (ROI) of $10^\circ$ radius centered at the position of PKS\,1510--089\ in the energy range from 100 MeV to 300 GeV.
A zenith angle cut of $<90^\circ$ was applied to reduce the contamination from the Earth’s limb.
The analysis was performed with the \texttt{ScienceTools} software package version \texttt{v11r7p0} using the \texttt{P8R2\_SOURCE\_V6}\footnote{\burl{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/IRF_overview.html}} instrument response function and the \texttt{gll\_iem\_v06} and \texttt{iso\_P8R2\_SOURCE\_V6\_v06} models\footnote{\burl{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}} for the Galactic and isotropic diffuse emission \citep{ac16}, respectively.
A first unbinned likelihood fit was performed for the events collected within five months from 01 February to 30 June 2013 (MJD 56324--56474) using \texttt{gtlike}, including in the model file all 3FGL sources \citep{Acero15} within $20^\circ$ of PKS\,1510--089 .
We repeat the same 5-month analysis using the preliminary 8-year Source List\footnote{\burl{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/fl8y/}} instead of the 3FGL catalog to search for bright sources within 20$^\circ$ of PKS\,1510--089 .
No new strong sources were found.
The model generated from the 3FGL catalog was used for the subsequent analysis.
As we are interested in short time scale (daily) fluxes of PKS\,1510--089 , the purpose of this first fit is to identify weak nearby sources that can be removed from the source model, simplifying it.
Hence, the sources with a test statistic (TS; \citealp{ma96}) below 5 were removed from the model file.
Next, the optimized output model file was used to produce the PKS\,1510--089\ light curve with 1-day time bins above 1 GeV in the full time period from 5 December 2011 to 7 August 2017 (MJD 55900--57972).
The same optimized output model is later also used for the spectral analysis.
In the light curve calculations the spectra of PKS\,1510--089\ were modeled as power law leaving both the flux normalization and the spectral index as free parameters.
The normalization of the Galactic and isotropic diffuse emission models was left to vary freely during the calculation of both the light curves and the spectrum.
In addition, the spectra of all sources except PKS\,1510--089\ and the highly variable source 3FGL J1532.7-1319 (located 6.45$^\circ$ from PKS\,1510--089 , and having a variability index of 1924.7 in the 3FGL catalog) were fixed to the catalog values.
In order to estimate when the flux can be considered being in a low state, we first calculate a light curve in relatively wide bins of 30 days in the full time period.
This allows us to estimate the flux with relative uncertainty $\lesssim20\%$ for all the points and hence disentangle intrinsic variability from the fluctuations of the measured flux induced by statistical uncertainties.
In Fig.~\ref{fig:fermidist} we present the distribution of the flux above 1\,GeV, which shows that during the low state the flux is between (1--3) $\times10^{-8}\,\mathrm{cm^{-2}s^{-1}}$ in contrast to the value of $>3\times 10^{-8} \mathrm{cm^{-2}s^{-1}}$ during active (flaring) periods.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{lc_months_days_dist.eps}
\caption{
Distribution of flux above 1\,GeV measured with \textit{Fermi}-LAT\ in 30-day (thick line) or 1-day (thin line) bins.
The vertical dashed line shows the value of the cut separating the low state.
}\label{fig:fermidist}
\end{figure}
Hence, we select the days of low state if:
\begin{equation}
F_{>1\mathrm{GeV}}< 3\times 10^{-8} \mathrm{cm^{-2}s^{-1}}. \label{eq:lowstate}
\end{equation}
The cut separates the low-\kom{flux} peak of the daily flux distribution from the power-law extension of the high-flux days (see Fig.~\ref{fig:fermidist}).
The effect that choosing a different energy threshold would have on the data selection is discussed in Appendix~\ref{app:datasel}.
We note, however, that due to the low state and short exposure times the flux measurements during single nights are quite uncertain.
The typical uncertainty of the flux in those time bins is $\sim1.5\times 10^{-8} \mathrm{cm^{-2}s^{-1}}$.
We also include in the low-state sample nights for which the \textit{Fermi}-LAT\ flux did not reach TS of 4.
The average 95\% C.L. flux upper limit on those nights is also $\sim3\times 10^{-8} \mathrm{cm^{-2}s^{-1}}$.
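The night-by-night selection logic of Eq.~\ref{eq:lowstate} can be sketched as follows (a minimal illustration with hypothetical daily fluxes and upper limits, not the actual \texttt{ScienceTools} output; the function name and data layout are ours):

```python
# Daily Fermi-LAT fluxes above 1 GeV, in units of 1e-8 cm^-2 s^-1.
# A flux of None marks a night with TS < 4, where only a 95% C.L.
# upper limit is available; such nights are kept in the low-state
# sample when the limit itself is below the cut, as in the text.
LOW_STATE_CUT = 3.0  # x 1e-8 cm^-2 s^-1, Eq. (1)

def is_low_state(flux, upper_limit=None):
    """Return True if the night qualifies as low state."""
    if flux is None:  # TS < 4: fall back to the upper limit
        return upper_limit is not None and upper_limit <= LOW_STATE_CUT
    return flux < LOW_STATE_CUT

nights = [
    (1.2, None),   # quiescent night -> included
    (5.7, None),   # flaring night -> excluded
    (None, 2.9),   # non-detection with constraining upper limit -> included
]
selected = [i for i, (f, ul) in enumerate(nights) if is_low_state(f, ul)]
```

Only nights passing this cut (and having MAGIC coverage) enter the stacked low-state sample.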
For the low-state spectrum we combine individual one day integration windows selected with flux fulfilling Eq.~\ref{eq:lowstate}.
The spectrum, calculated above 100 MeV, is well described as a power law with spectral index $2.56\pm0.04$, with a TS$=1656.0$.
The possible curvature in the spectrum is investigated by fitting the spectrum with a logparabola which yields a TS$=1655.42$ and negligible curvature ($\beta=0.06\pm0.04$).
Therefore, no hint of spectral curvature was found during the low-state periods considered in this analysis.
The selection of \textit{Fermi}-LAT\ observation days according to the flux $>1$\,GeV can bias the reconstructed average spectrum in this energy range.
To investigate such possible bias for the selected low-emission time periods we also calculate the spectral index in the energy range 0.1--1\,GeV \kom{(not affected by the data selection)} and obtain $2.41\pm0.06$.
Moreover, the \textit{Fermi}-LAT\ spectral energy points above 1\,GeV are $\sim 25\%$ lower than the extrapolation of the spectrum below 1\,GeV\kom{, suggesting that indeed there is an up to $25\%$ underestimation bias in the obtained \textit{Fermi}-LAT\ flux above 1\,GeV}.
\subsection{MAGIC}
MAGIC is a system of two imaging atmospheric Cherenkov telescopes.
The telescopes are located in the Canary Islands, on La Palma ($28.7^\circ$\,N, $17.9^\circ$\,W), at a height of 2200 m above sea level \citep{al16a}.
The large mirror dish diameter of 17\,m, resulting in low energy threshold, makes it a perfect instrument for studies of soft-spectrum sources such as FSRQs.
As PKS\,1510--089\ is a southern source, only observable at zenith angle $>38^\circ$, the corresponding trigger threshold \kom{would be} $\gtrsim 90$\,GeV for a Crab nebula-like spectrum \citep{al16b}, about 1.7 times larger than for the low zenith observations.
\kom{
About 70\% of the data of PKS\,1510--089\ was taken at the culmination, with zenith angle $<40^\circ$.
Moreover, PKS\,1510--089\ is intrinsically a soft-spectrum source; the analysis energy threshold is only $\sim 80$\,GeV for a source with a spectral index of $\sim -3.3$.
Note also that the energy threshold of Cherenkov telescopes is not a sharp one and the unfolding procedure allows us to reconstruct the source spectrum slightly below the nominal value of the threshold.}
Between 2012 and 2017 the MAGIC telescopes observed PKS\,1510--089\ during 151 nights, out of which 115 passed at least partially the data quality selection cuts.
We then select the nights corresponding to the \textit{Fermi}-LAT\ periods fulfilling the Eq.~\ref{eq:lowstate} condition.
This procedure results in a low-state data set stacked from 76 nights, amounting to a total observation time of 75\,hrs.
The cut on the flux $>1$\,GeV excludes the MAGIC data reporting the detections of the two flares observed in 2015 \citep{ah17} and 2016 \citep{za17}, as well as most of the data used for the detection during the high state of 2012 \citep{al14}.
The data were analyzed using MARS, the standard analysis package of MAGIC \citep{za13, al16b}.
Due to evolving telescope performance the data have been divided into 6 analysis periods.
Within each analysis period proper Monte Carlo simulations are used for the analysis.
At the last stage the analysis results from all the periods are merged together.
This low-state data set shows a gamma-ray excess with a significance of $9.5\sigma$ (see Fig.~\ref{fig:th2}).
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{pks1510_lowstate_2012-2017_odie_le.eps}
\caption{
Distribution of $\theta^2$, the squared angular distance between the reconstructed arrival direction of individual events and the nominal source position (points) or background estimation region (gray filled area) for MAGIC observations of PKS\,1510--089 .
Dashed line shows the value of the $\theta^2$ up to which the significance of the detection (see the inset text) is calculated.
}\label{fig:th2}
\end{figure}
\subsection{\textit{Swift}-XRT}
Since 2006, the source is monitored in the X-ray band by the XRT instrument on board the \textit{Neil Gehrels Swift Observatory} \citep[\textit{XRT},][]{2004SPIE.5165..201B}.
In total, 243 raw images are publicly available in SWIFTXRLOG (\textit{Swift}-XRT Instrument, Log)\footnote{\burl{https://heasarc.gsfc.nasa.gov/W3Browse/swift/swiftxrlog.html}}.
From those we selected 17 images based on simultaneity with the GeV low-flux state and contemporaneity with the MAGIC observations.
\kom{The standard \textit{Swift}-XRT\ analysis\footnote{\burl{http://www.swift.ac.uk/analysis/xrt/}} is described in detail in \cite{ev09}.
The data} are processed following the procedure described by \citet{2017A&A...608A..68F}, assuming a fixed equivalent Galactic hydrogen column density $n_H = 6.89 \times 10^{20}\,\rm cm^{-2}$ reported by \citet{2005A&A...440..775K}.
We defined the source region as a circle of 20 pixels ($\sim47$'') centered on the source, and a background region as a ring also centered on the source with inner and outer radii of 40 ($\sim$94'') and 80 pixels ($\sim$188''), respectively.
In order to calculate the average low-state X-ray spectrum of PKS\,1510--089\ we have combined all selected individual \kom{\textit{Swift}-XRT{}} pointings (see the \kom{blue} points in Fig.~\ref{fig:lc}), adding up to a total exposure of 30\,ks.
The 2--10\,keV flux measured during those 17 pointings shows clear variability.
Fitting the flux with a constant value yields $\chi^2/N_{\rm dof}=84/16$; however, the amplitude of the variability is moderate (the RMS of the points is $\sim30\%$ of the mean flux).
The average spectrum is well fitted ($\chi^2/N_{\mathrm{dof}} = 187.7/214$) with a power law with an index of $1.382\pm0.020$ and $F_{2-10\,\mathrm{keV}} = 8.14^{+0.25}_{-0.19}\times 10^{-12}\mathrm{\,erg\,cm^{-2}s^{-1}}$.
The spectral index does not show \kom{significant} variability (fit to constant yields $\chi^2/N_{\mathrm{dof}}$=31.19/16, which translates to a chance probability of $\sim 1.2\%$).
A harder-when-brighter trend is \kom{only} hinted at, with a Pearson correlation coefficient between flux and spectral index of 0.81 (2-10 keV) and 0.74 (0.3-10 keV).
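The constant-flux tests quoted above (here and in the later sections) are plain $\chi^2$ fits against the weighted mean. A minimal sketch, with made-up fluxes rather than the actual XRT measurements:

```python
import numpy as np

def constant_fit_chi2(flux, err):
    """Chi-square of a constant-flux model, taking the weighted
    mean of the points as the fitted constant."""
    w = 1.0 / err**2
    mean = np.sum(w * flux) / np.sum(w)
    chi2 = np.sum(((flux - mean) / err) ** 2)
    return chi2, len(flux) - 1  # N_dof = N - 1 for one fitted parameter

# Hypothetical 2-10 keV fluxes (1e-12 erg cm^-2 s^-1) and errors:
flux = np.array([7.0, 8.5, 9.1, 6.4, 10.2])
err = np.array([0.5, 0.5, 0.6, 0.5, 0.7])
chi2, ndof = constant_fit_chi2(flux, err)
```

A $\chi^2/N_{\rm dof}$ well above 1, as obtained for the XRT fluxes, indicates variability beyond the measurement uncertainties.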
\subsection{Optical observations}
PKS\,1510--089\ is regularly monitored as part of the Tuorla blazar monitoring program\footnote{\url{http://users.utu.fi/kani/1m}} in the R band using a 35\,cm Celestron telescope attached to the KVA (Kungliga Vetenskapsakademien) telescope located at La Palma.
The monitoring covers the period of 2012-2017 and the observations were mostly contemporaneous with the MAGIC observations of the source.
The data analysis was performed with the semi-automatic pipeline using the standard analysis procedures (Nilsson et al. in prep).
The differential photometry was performed using the comparison star magnitudes from \citet{villata97}.
Calar Alto data were acquired as part of the MAPCAT (Monitoring AGN with Polarimetry at the Calar Alto 2.2m Telescope) project\footnote{\url{http://www.iaa.es/~iagudo/_iagudo/MAPCAT.html}}, see \citet{ag12}.
The MAPCAT data presented here were reduced following the procedure explained in \citet{jo10}.
Additionally, we used the publicly available data in the R band from the Small and Moderate Aperture Research Telescope System (SMARTS) instrument located at Cerro Tololo Interamerican Observatory (CTIO) in Chile \citep{bo12}, processed as described in \citet{ah17}.
The KVA, MAPCAT and SMARTS R-band data have been corrected for Galactic extinction using $\mathrm{A_R} = 0.217$ \citep{sf11}.
In the optical range, PKS\,1510--089\ shows mostly low emission throughout 2012--2014 and during 2017.
Strong flares are seen in 2015 and 2016 at the times of high GeV state.
The individual measurements performed in the optical range have very small statistical uncertainties, well below the variability observed during the selected low-state nights.
Therefore, for the modelling, we use the average optical flux from 53 nights of observations (47 with KVA, 3 with MAPCAT and 13 with SMARTS; some nights were covered by more than one instrument).
We take as the uncertainty the RMS spread of the measurements.
By applying this procedure, we obtain a mean optical flux during the low state of $1.55\pm 0.57$\,mJy.
\kom{In the B band we combine \textit{Swift}-UVOT data (see the next section) with the SMARTS data, bringing the total number of observing nights to 20 and the average flux to $1.22\pm 0.46$\,mJy}.
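The averaging used here (and again for the UV, IR and radio bands below) is simply the sample mean with the RMS scatter taken as the uncertainty; a sketch with hypothetical nightly fluxes:

```python
import numpy as np

def band_average(fluxes_mjy):
    """Mean flux with the RMS spread of the measurements taken as
    the uncertainty (the procedure described in the text)."""
    fluxes = np.asarray(fluxes_mjy, dtype=float)
    return fluxes.mean(), fluxes.std()  # population RMS about the mean

# Hypothetical nightly R-band fluxes in mJy (illustration only):
mean_flux, rms = band_average([1.1, 1.4, 2.3, 0.9, 1.8])
```

The RMS, rather than the (much smaller) per-night statistical errors, is what propagates the intrinsic night-to-night variability into the SED points.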
\subsection{\textit{Swift}-UVOT}
The Ultraviolet/Optical Telescope (UVOT, \citealp{po08}) is an instrument on board the \textit{Swift} satellite operating in the 180--600\,nm wavelength range.
The source counts were extracted from a circular region centered on the source with 5'' radius, the background counts from an annular region centered on the source with inner and outer radius of 15'' and 25'' respectively.
The data calibration was done following \cite{ra10}, where the effective wavelength, counts-to-flux conversion factor, and Galactic extinction for each filter were calculated in an iterative procedure by taking into account the filter's effective area and the source's spectral shape.
\kom{The Galactic extinction values derived from the re-calibration procedure are
$A_v=0.28$\,mag,
$A_b=0.37$\,mag,
$A_u=0.44$\,mag,
$A_{w1}=0.63$\,mag,
$A_{m2}=0.78$\,mag,
$A_{w2}=0.74$\,mag.}
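Applying an extinction of $A$ magnitudes to a flux follows the standard relation $F_{\rm intrinsic} = F_{\rm observed}\times10^{0.4A}$; a sketch using the U-band value quoted above (the input flux is hypothetical):

```python
def deredden(flux, extinction_mag):
    """Correct an observed flux for Galactic extinction of A magnitudes:
    F_intrinsic = F_observed * 10**(0.4 * A)."""
    return flux * 10 ** (0.4 * extinction_mag)

# UVOT U band with A_u = 0.44 mag (value quoted in the text);
# flux in arbitrary units:
corrected = deredden(1.0, 0.44)  # ~50% flux increase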
The variability of the UV flux during the low-state nights is rather minor.
The average flux of the quiescent state was derived in the same way as for optical data.
The number of quasi-simultaneous UVOT observations, contemporaneous to MAGIC observations during the \textit{Fermi}-LAT\ low gamma-ray state, is 9--13, depending on the filter.
\subsection{IR}
We use infrared observations of PKS\,1510--089\ performed with the REM (Rapid Eye Mount, \citealp{ze01, co04}) 60 cm diameter telescope located at La Silla, Chile.
The observations were performed with J, H and Ks filters, with individual exposures ranging from 12\,s to 30\,s.
After calibration using neighbouring stars, the magnitudes were converted to fluxes using the zero magnitude fluxes from \cite{me90}.
Additionally, we used the publicly available data in the J and K bands from SMARTS \citep{bo12}, processed as described in \citet{ah17}.
Since the data were taken independently from MAGIC, only a limited number of nights of MAGIC observations have quasi-simultaneous REM or SMARTS data.
The data taken during the times classified as low state consist of 5 nights of REM data for H filter and 13 nights of REM or SMARTS data from J and K filters.
Moreover, one of the nights observed by SMARTS, MJD 57181, showed a major IR flare during which the flux increased by a factor of $\sim$5--6 with respect to the average flux of the rest of the selected data.
We nevertheless apply the same procedure as for the R-band KVA data, averaging the IR flux over those low-state observations but excluding the night of the IR flare.
We obtain $F_K=7.3\pm 2.7$\,mJy, $F_H=4.2\pm2.4$\,mJy, $F_J=2.3\pm1.0$\,mJy.
Including the night of the IR flare in the sample would change the $F_J$ and $F_K$ fluxes relatively mildly ($\sim 30\%$ increase); however, it would increase the RMS considerably, to a value comparable to the flux itself.
\subsection{Radio}
We use radio monitoring observations of PKS\,1510--089\ performed by OVRO (15\,GHz, \citealp{ri11}), Mets\"ahovi (37\,GHz, \citealp{te98}) and POLAMI (86\,GHz, 229\,GHz).
We also use CARMA data taken at 95\,GHz between August 2012 and November 2014 \citep{ra16}.
POLAMI (Polarimetric Monitoring of AGN at Millimetre Wavelengths)\footnote{\url{http://polami.iaa.es}} \citep{ag18a,ag18b, th18} is a long-term program to monitor the polarimetric properties (Stokes I, Q, U, and V) of a sample of $\sim$40 bright AGN at 3.5 and 1.3 millimeter wavelengths with the IRAM 30m Telescope near Granada, Spain.
The program has been running since October 2006, and it currently has a time sampling of $\sim$2 weeks.
The XPOL polarimetric observing setup has been routinely used as described in \cite{th08} since the start of the program.
The reduction and calibration of the POLAMI data presented here are described in detail in \cite{ag10,ag14,ag18a}.
The 37 GHz observations were made with the 13.7 m diameter Aalto University Mets\"ahovi radio telescope\footnote{\url{http://metsahovi.aalto.fi/en/}}, which is a radome enclosed Cassegrain type antenna situated in Finland.
The measurements were made with a 1 GHz-band dual beam receiver centered at 36.8 GHz.
The HEMPT (High Electron Mobility Pseudomorphic Transistor) front end operates at room temperature.
The observations are Dicke switched ON--ON observations, alternating between the source and the sky in each feed horn.
A typical integration time to obtain one flux density data point is between 1200 and 1800 s.
The detection limit of the telescope at 37 GHz is of the order of 0.2\,Jy under optimal conditions.
Data points with a signal-to-noise ratio $<4$ are considered non-detections.
The flux density calibration is set by observations of the HII region DR 21.
The sources NGC 7027, 3C 274 and 3C 84 are used as secondary calibrators.
A detailed description of the data reduction and analysis is given in \cite{te98}.
The error estimate in the flux density includes the contribution from the measurement RMS and the uncertainty of the absolute calibration.
The Owens Valley Radio Observatory (OVRO) 40-Meter Telescope uses off-axis dual-beam optics and a cryogenic receiver with 3~GHz bandwidth centered at 15~GHz.
Atmospheric and ground contributions as well as gain fluctuations are removed with the double switching technique \citep{1989ApJ...346..566R} where the observations are conducted in an ON-ON fashion so that one of the beams is always pointed on the source.
Until May 2014 the two beams were rapidly alternated using a Dicke switch.
Since then a new pseudo-correlation receiver replaced the old one, and a 180$^\circ$ phase switch is used.
Relative calibration is obtained with a temperature-stable noise diode to compensate for gain drifts.
The primary flux density calibrator for those observations was 3C~286 with an assumed value of 3.44~Jy \citep{1977A&A....61...99B}, while DR21 is used as a secondary calibrator source.
Details of the observation and data reduction procedures are given in \citet{ri11}.
The radio flux at all frequencies shows slow variability, not simultaneous with the flares observed at higher energies.
In order to obtain the average emission during the low gamma-ray state we apply the same procedure as for the R-band flux; however, we allow a larger time margin, using the data within $\pm$3 days of the MAGIC observations during low \textit{Fermi}-LAT\ flux.
We obtain
$F_{\mathrm{15\,GHz}}=4.4\pm 1.2$\,Jy (average over 22 observations),
$F_{\mathrm{37\,GHz}}=3.9\pm 1.1$\,Jy (59 observations),
$F_{\mathrm{86\,GHz}}=3.14\pm0.86$\,Jy (6 observations),
$F_{\mathrm{95\,GHz}}=2.16\pm0.13$\,Jy (9 observations),
$F_{\mathrm{229\,GHz}}=1.76\pm0.42$\,Jy (4 observations).
\section{Low gamma-ray state of PKS\,1510--089}\label{sec:results}
The low-state spectrum of PKS\,1510--089\ observed by the MAGIC telescopes was reconstructed between 63 and 430\,GeV and is shown in Fig.~\ref{fig:sed}.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{pks1510_historical_sed.eps}
\caption{
Spectral Energy Distribution (SED) of PKS\,1510--089\ during the low state (red filled points and shaded magenta region) compared to historical measurements (open symbols):
high state in March 2009 (gray stars, \citealp{ab13}), high state in February-March 2012 (black diamonds, \citealp{al14}), flare in May 2015 (blue crosses, \citealp{ah17}) and flare in May 2016 (magenta squares, \citealp{za17}).
The spectra are not deabsorbed from the EBL extinction.
}\label{fig:sed}
\end{figure}
The observed spectrum can be described by a power law:
$dN/dE = (4.66\pm0.59_{\mathrm{\kom{stat}}})\times 10^{-11}(E/175\,\mathrm{GeV})^{-3.97\pm0.23_{\mathrm{\kom{stat}}}} \mathrm{cm^{-2}s^{-1}TeV^{-1}}$.
Correcting for the absorption due to the interaction with the extragalactic background light (EBL) according to \cite{do11}, we obtain the following intrinsic spectrum:
$dN/dE = (7.9\pm1.1_{\mathrm{\kom{stat}}})\times 10^{-11}(E/175\,\mathrm{GeV})^{-3.26\pm0.30_{\mathrm{\kom{stat}}}} \mathrm{cm^{-2}s^{-1}TeV^{-1}}$.
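The deabsorption multiplies the observed differential flux by $e^{\tau(E)}$; comparing the two normalizations quoted above implies an optical depth of $\tau\approx0.53$ at 175\,GeV for this source in the adopted EBL model. A minimal sketch (the $\tau$ value is inferred from the quoted normalizations, not read from the model tables):

```python
import math

def deabsorb(dnde_obs, tau):
    """Intrinsic differential flux from the observed one:
    dN/dE_int = dN/dE_obs * exp(tau)."""
    return dnde_obs * math.exp(tau)

# Normalizations at E0 = 175 GeV quoted in the text (cm^-2 s^-1 TeV^-1):
obs_norm = 4.66e-11
tau_175 = math.log(7.9e-11 / 4.66e-11)  # ~0.53, implied optical depth
int_norm = deabsorb(obs_norm, tau_175)
```

The energy dependence of $\tau$ is also what hardens the intrinsic index ($-3.26$) relative to the observed one ($-3.97$).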
\kom{Since the excess to residual background ratio is of the order of 6\%, the systematic uncertainty on the flux normalization (without the effect of the energy scale) is $\pm$20\%, larger than for typical MAGIC observations \citep{al16b}.
The systematic uncertainty on the spectral slope is likewise increased by the low excess to residual background ratio; following the prescription of Section~5.1 in \cite{al16b}, it can be estimated as $\pm0.4$.
The uncertainty of the energy scale is $\pm15\%$. }
Comparing with previous measurements, the high-state detection in 2012 \citep{al14} yielded a flux $\sim 1.7$ times larger than the low state studied here.
On the other hand, the most luminous flare observed from PKS\,1510--089 , in May 2016, reached a flux $\sim$40--80 times higher than the low state (for the MAGIC and H.E.S.S. observation windows, respectively; see \citealp{za17}).
Interestingly, the intrinsic spectral index of $-3.26\pm0.30_{\mathrm{stat}}$ is consistent within the uncertainties with the one obtained during the high state in the 2012 ($-2.5\pm0.6_{\mathrm{stat}}$, \citealp{al14}), the 2015 flare ($-3.17\pm0.80_{\mathrm{stat}}$, \citealp{ah17}) and the 2016 flare ($-2.9\pm0.2_{\mathrm{stat}}$, $-3.37\pm0.09_{\mathrm{stat}}$, \citealp{za17}).
As reported in Section~\ref{sec:data}, the IR to UV low-state data show variability at the level of $\sim40$\%.
We search for possible variability in the MAGIC data taken during the defined low state by computing light curves using different binnings (see Fig.~\ref{fig:magiclc}).
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{pks1510_lowstate_yearly_daily_LC_minus3.eps}
\caption{
Light curve of PKS\,1510--089\ obtained with MAGIC observations during the low state in daily (red thin lines) and yearly (black thick lines) binning.
For clarity three nights with short exposure (resulting in flux estimation uncertainty $>15\times10^{-12}\mathrm{cm^{-2}s^{-1}}$) are omitted from the plot.
}\label{fig:magiclc}
\end{figure*}
Neither the daily ($\chi^2/\mathrm{N_{dof}} = 51.9/74$) nor the yearly ($\chi^2/\mathrm{N_{dof}} = 3.08/5$) light curve shows any evidence of variability when fitted with a constant flux model.
The gamma-ray flux of PKS\,1510--089\ during the low state is, however, too weak for probing variability with a similar relative amplitude at GeV energies with MAGIC or \textit{Fermi}-LAT\ as observed in IR-UV.
The average emission of the low state above 150\,GeV is $(4.27\pm0.61_{\rm stat})\times 10^{-12}\,\mathrm{cm^{-2}s^{-1}}$, which is also below the all-time average of the H.E.S.S. observations ($(5.5\pm0.4_{\rm stat})\times 10^{-12}\,\mathrm{cm^{-2}s^{-1}}$, \citealp{za16}).
\section{Modelling}\label{sec:model}
The multiwavelength SED constructed from the data selected according to the low flux above 1\,GeV, taken between 2012 and 2017 is shown in Fig.~\ref{fig:mwlsed}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.49\textwidth]{1510_lowstate_sed_r7e17.eps}
\includegraphics[width=0.49\textwidth]{1510_1pc_large.eps}
\caption{
Multiwavelength SED of PKS\,1510--089\ obtained from the data contemporaneous to MAGIC observations performed during \textit{Fermi}-LAT\ low state (red points).
The gray band shows the \textit{Swift}-BAT 105 months average spectrum \citep{oh18}.
Gray dot markers show the historical data from SSDC (\protect\url{www.asdc.asi.it}).
IR, optical and UV data have been dereddened; MAGIC data have been corrected for the absorption by the EBL according to the \citet{do11} model.
Observed MAGIC spectral points are shown in cyan.
The green short-long-dashed curve shows the synchrotron component, the orange dot-dashed curve the EC component, and the cyan dotted line the SSC component.
The long-dashed and short-dashed black lines show the dust torus and accretion disk emission, respectively.
The solid blue line shows the total emission (including absorption in EBL).
The left panel is for the ``close'' model, the right panel for the ``far'' model (see the text).
}\label{fig:mwlsed}
\end{figure*}
We model the broad-band emission using an External Compton scenario (see, e.g., \citealp{sbr94,gh10}) in which the gamma-ray emission is produced due to inverse Compton scattering of a radiation field external to the jet by electrons located in an emission region inside the jet.
We use a particular scenario applied already to model a high state and a flare from PKS\,1510--089\ \citep{al14,ah17}, with the external photon field being the accretion disk radiation reflected by the broad line region (BLR) and dust torus (DT).
We apply the same BLR and DT parameters as in \cite{ah17}, namely a radius of $R_{\rm BLR}=2.6\times 10^{17}$\,cm and $R_{\rm IR}=6.5\times 10^{18}$\,cm respectively.
The BLR and DT reprocess fractions $f_{\rm BLR}=0.1$ and $f_{\rm DT}=0.6$, respectively (the so-called covering factors), of the accretion disk radiation, $L_{\rm disk}=6.7\times 10^{45}$ erg s$^{-1}$.
The DT temperature is set to 1000\,K.
The emission region is located at the distance $r$ from the base of the jet and has a radius of $R$.
As in the model employed in \cite{ah17}, jet plasma flows through the emission region.
The lack of strong variability of the low-state emission and the fact that it reaches sub-TeV energies suggests that the emission region should be beyond the BLR.
At such distances the cross section of the jet is large, making it difficult to explain any short-term variability, but the absorption of gamma rays on the BLR radiation is negligible.
We consider two scenarios for the location of the emission region: the ``close'' scenario with $r=7\times 10^{17}$\,cm and ``far'' scenario with $r=3\times 10^{18}$\,cm.
In both cases, the dominating radiation field comes from the DT.
Such distances of the emission region have been applied for modelling of the 2015 flare \citep{ah17} and 2012 high state \citep{al14} respectively.
The size of the emission region, $R=2\times 10^{16}$\,cm (for the ``close'' scenario) and $R=3\times 10^{17}$\,cm (for the ``far'' scenario), is of the order of the cross-sectional radius of the jet at the distance $r$.
Although it is not a dominant emission component, the model also calculates the synchrotron-self-Compton emission of the source.
The model parameters for both scenarios are summarized and compared with earlier modelling in Table~\ref{tab:param}.
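The statement that $R$ is of the order of the jet cross section can be checked by assuming a conical jet with opening angle $\theta_j \approx 1/\Gamma$ (a common assumption, not itself a parameter of the model), so that the jet radius at distance $r$ is roughly $r/\Gamma$:

```python
GAMMA = 20.0  # bulk Lorentz factor adopted in both low-state models

def jet_radius(r_cm, gamma=GAMMA):
    """Cross-sectional radius of a conical jet with opening
    angle ~1/Gamma at distance r from the jet base."""
    return r_cm / gamma

r_close, r_far = 7.0e17, 3.0e18  # emission-region distances (cm)
R_close, R_far = 2.0e16, 3.0e17  # adopted emission-region radii (cm)
# r/Gamma gives 3.5e16 cm ("close") and 1.5e17 cm ("far"):
# within a factor ~2 of the adopted radii in both scenarios.
```

This back-of-the-envelope check shows both parameter sets are geometrically self-consistent.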
\begin{table*}
\centering
\begin{tabular}{cccccccccccc}
\hline
\hline
& $\gamma _{\rm min}$ & $\gamma _{\rm b}$& $\gamma _{\rm max}$& $n_1$& $n_2$ &$B$ &$K$ & $\delta$ & $\Gamma$ & $r$ & $R$ \\
\hline
Low State (close)&$2.5$& $130$ & $3\times 10^{5}$ & $1.9$ & $3.5$ & $0.35$ & $3\times 10^{4}$ & $25$ & $20$ & $7.0\times 10^{17}$ & $2.0\times 10^{16}$ \\
Low State (far) &$2$ & $300$ & $3\times 10^{5}$ & $1.9$ & $3.7$ & $0.05$ & $80$ & $25$ & $20$ & $3.0\times 10^{18}$ & $3.0\times 10^{17}$ \\
\hline
2012 &$3$ & $900$ &$6.5\times 10^{4}$& $1.9$ & $3.85$& $0.12$ & $20$ & $20$ &$20$& $3.1\times 10^{18}$ & $3.0\times 10^{17}$ \\
2015, Period A &$1$ & $150$ \& $800$ & $4\times 10^{4}$ & 1 \& $2$ & $3.7$ & $0.23$ & $3.0\times 10^{4}$ & $25$ & $20$ & $6.0\times 10^{17}$ & $2.8\times 10^{16}$ \\
2015, Period B &$1$ & $150$ \& $500$ & $3\times 10^{4}$ & 1 \& $2$ & $3.7$ & $0.34$ & $2.6\times 10^{4}$ & $25$ & $20$ & $6.0\times 10^{17}$ & $2.8\times 10^{16}$ \\
\hline
\hline
\end{tabular}
\caption{
Input model parameters for the EC models of PKS\,1510--089\ emission in the low state for the ``close'' and ``far'' scenarios.
For comparison, the model parameters obtained for the 2012 high state \citep{al14} and the 2015 flare \citep{ah17} are also quoted.
The individual columns are minimum, break and maximum electron Lorentz factor ($\gamma _{\rm min}$, $\gamma _{\rm b}$, $\gamma _{\rm max}$ respectively),
slope of the electron energy distribution below and above $\gamma_b$ ($n_1$ and $n_2$ respectively),
magnetic field in G ($B$),
normalization of the electron distribution in units of cm$^{-3}$ ($K$),
Doppler factor, Lorentz factor, distance and radius of the emission region ($\delta$, $\Gamma$, $r$ (in cm), $R$ (in cm), respectively).
}
\label{tab:param}
\end{table*}
However, these sets of parameters are not unique solutions for describing the low-state SED, as a certain degree of parameter degeneracy occurs in this kind of model (see e.g. the degeneracy of synchrotron self-Compton (SSC) model parameters discussed in \citealp{ah17b}).
The data are compared with the model in Fig.~\ref{fig:mwlsed}.
Both scenarios can reproduce relatively well the IC peak.
The gamma-ray data of MAGIC and \textit{Fermi}-LAT\ are well explained as the high-energy part of the EC component, with the exception of the two highest energy \textit{Fermi}-LAT\ points which are $>1$\,GeV and hence are probably underestimated by the data selection procedure (see Section~\ref{sec:fermi}).
\textit{Swift}-XRT and historical \textit{Swift}-BAT data follow the rising part of the EC component (with a small contribution of the SSC process in the soft X-ray range for the ``close'' scenario).
The UV data form a bump that can be well explained by the direct accretion disk radiation included in the model.
In the IR range, the model curve underestimates the data points, especially in the case of the ``far'' model.
Among the quiescent data selected, the IR data show the highest variability.
The higher IR variability might come from a separate region, not associated with the GeV gamma-ray emission region.
In such a case, the IR emission associated with the low gamma-ray state would likely be at a level closer to the lower edge of the observed spread in IR fluxes (reflected in the quoted uncertainty bar in Fig.~\ref{fig:mwlsed}).
The ``far'' model can reasonably reproduce the radio observations, while the ``close'' model underestimates the data due to strong synchrotron-self-absorption effects given by the compactness of the emission region.
This is not surprising, since the radio core observed at 15\,GHz is estimated to be located at a distance of 17.7\,pc from the base of the jet \citep{pu12}.
Using the typical scaling in which the core distance is inversely proportional to the frequency, we find that the radio core corresponding to the highest-frequency radio point, at 229\,GHz, should be located at $\sim1$\,pc.
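A quick numerical check of this scaling (a back-of-the-envelope sketch; the input values are the estimates quoted above):

```python
# Back-of-the-envelope check of the radio-core scaling r_core ~ 1/nu,
# using the 15 GHz core distance of 17.7 pc quoted from Pushkarev et al. (2012).
r_15 = 17.7      # pc, distance of the 15 GHz core from the jet base
nu_ref = 15.0    # GHz, reference frequency
nu = 229.0       # GHz, highest radio frequency in the SED
r_229 = r_15 * nu_ref / nu
print(f"estimated 229 GHz core distance: {r_229:.2f} pc")  # ~1.2 pc
```

This reproduces the $\sim1$\,pc estimate used in the text.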
Therefore, most of the radio emission should be produced at or beyond the region considered in the ``far'' scenario.
However, the magnetic field considered in the ``far'' scenario, $B=0.05$\,G, is an order of magnitude smaller than the magnetic field of $0.73$\,G estimated from the radio observations at $r=1$\,pc \citep{pu12}.
Larger values would result in a much smaller Compton dominance than observed in the broadband SED.
It is curious that an optical/GeV high state, a days-long flare and the low state can all be roughly described (except for the IR data) in the framework of the same External Compton scenario without a major change of the model parameters.
This suggests a common origin of the gamma-ray emission of PKS\,1510--089\ in different activity states, with the observed differences caused by changes in the content of the plasma flowing through the emission region\footnote{Note that a fast flare observed in 2016 from PKS\,1510--089\ \citep{za17} might have nevertheless a different origin.}.
We note, however, that the model used here is rather simple.
It is natural to assume that the low-state, broad-band emission is an integral of the emission over a range of distances from the jet base, with varying conditions (such as the $B$ field) along the jet, rather than originating in a single homogeneous region (see e.g. \citealp{pc13}).
\subsection{\kom{Limits on the absorption of sub-TeV photons in the BLR}}
\kom{If the emission region is located inside, or close to, the BLR, the gamma-ray spectrum should carry an imprint of absorption on the BLR photons \citep{dp03}.
The presence or lack of such absorption can therefore be used to constrain the location of the emission region.
We use the emission-model-independent approach of \cite{ce17} to place such constraints on the location of the low-state emission region of PKS\,1510--089 .
We first make a power-law fit to the \textit{Fermi}-LAT\ spectrum of PKS\,1510--089\ in the energy range of 0.1--1\,GeV, which is unbiased by the data selection.
Next, we extrapolate the fit to the energy range observed by the MAGIC telescopes and apply an absorption by a factor of $\exp(-\tau)$, where $\tau$ is the so-called optical depth (see Fig.~\ref{fig:gammatau}).
}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{pks1510_lowstate_fermi_magic_sed_tau.eps}
\caption{
\kom{Gamma-ray spectrum of PKS\,1510--089\ during low state measured by \textit{Fermi}-LAT\ (squares) and MAGIC (filled circles).
The 68\% confidence band of the extrapolation of the \textit{Fermi}-LAT\ spectrum to sub-TeV energies is shown as the gray shaded region.
The extrapolation in the MAGIC energy range, assuming absorption in the BLR following \citet{be16} for an emission region located at a distance of $1$, $0.82$ and $0.74$ of the outer radius of the broad line region, is shown with black solid, blue dotted and green dashed lines, respectively.
Empty circles show the effect of the systematic uncertainties on the MAGIC spectrum.
}
}\label{fig:gammatau}
\end{figure}
\kom{
We compare those extrapolations with the reconstructed MAGIC spectrum taking into account the statistical uncertainties as well as the systematic uncertainty both in the energy scale and in the flux normalization.
The systematic uncertainties of \textit{Fermi}-LAT\ are negligible in those calculations.
Due to source-intrinsic effects (e.g. an intrinsic break or cut-off in the accelerated electron spectrum, or the Klein--Nishina effect) the sub-TeV spectrum might fall below the GeV extrapolation.
Hence no lower limit on the absorption can be derived in a model-independent way.
However, it is natural to assume that there is no upturn in the photon spectrum; therefore, we can place an upper limit on the maximum absorption in the BLR.
To estimate such a limit we perform a toy Monte Carlo study in which we vary the extrapolated \textit{Fermi}-LAT\ flux and the measured MAGIC flux (corrected for the EBL absorption) 10000 times according to their uncertainties.
Next, at each investigated energy we calculate a histogram of $\tau(E)=\ln(F_{\mathrm{ext}}(E)/F_{\mathrm{obs}}(E))$, where
$F_{\mathrm{ext}}(E)$ and $F_{\mathrm{obs}}(E)$ are the randomized extrapolated and measured fluxes, respectively.
The limit on $\tau$ is obtained as a 95\% quantile of such a distribution.
We include the systematic uncertainties of MAGIC by shifting its energy scale and normalization according to their systematic limits (see empty circles in Fig.~\ref{fig:gammatau}) and taking the least constraining one.
We obtain $\tau(\mathrm{110\,GeV})<1.4$, $\tau(\mathrm{180\,GeV})<1.7$ and $\tau(\mathrm{290\,GeV})<2.3$.
}
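The toy Monte Carlo described above can be sketched in a few lines (a minimal illustration at a single energy; the flux values and uncertainties below are placeholders, not the measured ones, and Gaussian errors are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder values (NOT the measured fluxes): extrapolated Fermi-LAT flux
# and EBL-corrected MAGIC flux at a single energy, with 1-sigma uncertainties.
f_ext, sig_ext = 4.0e-11, 1.0e-11
f_obs, sig_obs = 1.2e-11, 0.4e-11

n_trials = 10_000
ext = rng.normal(f_ext, sig_ext, n_trials)   # randomized extrapolated flux
obs = rng.normal(f_obs, sig_obs, n_trials)   # randomized measured flux
mask = (ext > 0) & (obs > 0)                 # drop unphysical draws

# Optical-depth realizations tau = ln(F_ext / F_obs); the 95% quantile of
# the distribution gives the upper limit on the BLR optical depth.
tau = np.log(ext[mask] / obs[mask])
tau_limit = float(np.quantile(tau, 0.95))
print(f"95% upper limit: tau < {tau_limit:.2f}")
```

In the analysis this procedure is repeated at each investigated energy, with the energy scale and flux normalization additionally shifted within the MAGIC systematic uncertainties.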
\kom{
Applying a model of absorption by the BLR, these limits on the optical depth can be converted into lower limits on the location of the emission region, $r$.
We test the above procedure using two BLR models for PKS\,1510--089 .
First, we use the optical depth calculations of \cite{be16} assuming that 10\% of the accretion disk radiation is reprocessed in the BLR.
Note that \cite{be16} assumes a homogeneous BLR in PKS\,1510--089\ spanning between $6.9\times 10^{17}$\,cm and $8.4\times 10^{17}$\,cm, which reflects 10\% of the disk luminosity $L_D=10^{46}\,\mathrm{erg\,s^{-1}}$.
We obtain that the above limits result in $r>6.3 \times 10^{17}$\,cm (i.e. above $0.74$ of the outer radius of the BLR).
As an additional check we calculate the optical depths using a code adapted from \cite{sb08} with the line intensities and BLR geometry of \cite{lb06}.
We use the same radius and luminosity of the BLR as in Section~\ref{sec:model}.
However, we apply the same ratio of the outer to inner radius of the BLR as in \cite{be16}, resulting in a BLR spanning from $2.34\times 10^{17}$\,cm to $2.86\times 10^{17}$\,cm.
For such a BLR model, the limits on the optical depth obtained above force us to place the emission region farther than $3.2\times 10^{17}$\,cm (i.e. beyond $1.1$ of the outer radius of the BLR).
}
\kom{
It should be noted that the method relies on a number of simplifications and underlying assumptions.
First, the emission region is assumed to be small compared to its distance from the black hole.
This is not necessarily true, in particular for the low-state emission, which can be generated in a more extended region (the broad-band emission modelling presented in the previous section further supports such a hypothesis).
Second, if the gamma-ray emission is not produced by a single process, the spectrum can have a complicated (including convex) shape.
Note that for another FSRQ, B0218+357, the gamma-ray emission was explained as a combination of the SSC and EC processes \citep{ah16}.
Third, the optical depth is dependent on the assumed geometry of the BLR.
For example, the size of the BLR derived in \citet{be16} is a factor of $\sim 3$ larger than that of \citet{gt09}.
In addition, the difference between a spherical and a disk-like geometry can easily change the optical depths by a factor of a few \citep{ap17}.
Finally, the radial stratification of the BLR and the total fraction of the light reflected in the BLR introduce further uncertainties.
}
\section{Conclusions}\label{sec:conc}
We have performed the analysis of MAGIC data searching for a possible low state of VHE gamma-ray emission from PKS\,1510--089 .
Selecting the data taken during periods when the \textit{Fermi}-LAT\ flux above 1\,GeV was below $3\times10^{-8}\,\mathrm{cm^{-2}s^{-1}}$, we have collected 75\,hrs of MAGIC data over 76 individual nights, resulting in a significant detection of the low state of VHE gamma-ray emission.
The measured flux is $\sim0.6$ of the flux measured during the high optical and GeV state at the beginning of 2012 \citep{al14} and $\sim0.75$ of the lowest previously known flux from this source (the average over all H.E.S.S. observations, \citealp{za16}).
Nevertheless, the spectral shape is consistent with previous measurements, despite a factor of 80 difference with respect to the flux during the strongest flare observed so far from this source.
This makes PKS\,1510--089\ the first FSRQ to be detected in a persistent low state with no hints of yearly variations in the observed flux.
Future observations with the Cherenkov Telescope Array should be able to probe if the low-state flux is also stable on shorter time scales \citep{ach13,ach17}.
\kom{Previous VHE gamma-ray observations of FSRQs in flaring states suggested that the emission region during such states should be located beyond the BLR and that the emission is mostly compatible with an EC scenario.}
The low-state broadband emission \kom{of PKS\,1510--089} from IR to VHE gamma-rays can be explained in the framework of an EC scenario, similarly to the previous VHE gamma-ray detections of the source.
The presence of the sub-TeV gamma-ray emission \kom{also} suggests that the emission region is located beyond the BLR, where the dominating seed radiation field for the EC process is the dust torus.
\kom{A comparison of the extrapolated \textit{Fermi}-LAT\ spectrum with the spectrum measured by MAGIC, using two distinct BLR models, allows us to constrain the location of the emission region to beyond $0.74$ of the outer radius of the BLR.
The emission scenario placing the dissipation region beyond the BLR is in} line with the recent studies of \cite{co18} showing that most of the \textit{Fermi}-LAT\ detected blazars (including PKS\,1510--089 ) have GeV emission consistent with lack of BLR absorption.
\begin{acknowledgements}
We would like to thank the Instituto de Astrof\'{\i}sica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The financial support of the German BMBF and MPG, the Italian INFN and INAF, the Swiss National Fund SNF, the ERDF under the Spanish MINECO (FPA2015-69818-P, FPA2012-36668, FPA2015-68378-P, FPA2015-69210-C6-2-R, FPA2015-69210-C6-4-R, FPA2015-69210-C6-6-R, AYA2015-71042-P, AYA2016-76012-C3-1-P, ESP2015-71662-C2-2-P, CSD2009-00064), and the Japanese JSPS and MEXT is gratefully acknowledged. This work was also supported by the Spanish Centro de Excelencia ``Severo Ochoa'' SEV-2012-0234 and SEV-2015-0548, and Unidad de Excelencia ``Mar\'{\i}a de Maeztu'' MDM-2014-0369, by the Croatian Science Foundation (HrZZ) Project IP-2016-06-9782 and the University of Rijeka Project 13.12.1.3.02, by the DFG Collaborative Research Centers SFB823/C4 and SFB876/C3, the Polish National Research Centre grant UMO-2016/22/M/ST9/00382 and by the Brazilian MCTIC, CNPq and FAPERJ.
The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat \`a l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique
Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana
and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K.~A.~Wallenberg Foundation, the Swedish Research Council and the
Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully
acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre
National d'\'Etudes Spatiales in France. This work performed in part under DOE
Contract DE-AC02-76SF00515.
This paper has made use of up-to-date SMARTS optical/near-infrared light curves that are available at \url{www.astro.yale.edu/smarts/glast/home.php}.
IA acknowledges support by a Ramón y Cajal grant of the Ministerio de Economía, Industria, y Competitividad (MINECO) of Spain.
Acquisition and reduction of the POLAMI and MAPCAT data was supported in part by MINECO through grants AYA2010-14844, AYA2013-40825-P, and AYA2016-80889-P, and by the Regional Government of Andalucía through grant P09-FQM-4784.
The POLAMI observations were carried out at the IRAM 30m Telescope.
IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
The MAPCAT observations were carried out at the German-Spanish Calar Alto Observatory, which is jointly operated by the Max-Planck-Institut f\"ur Astronomie and the Instituto de Astrofísica de Andalucía-CSIC.
This research has made use of data from the OVRO 40-m monitoring program \citep{ri11} which is supported in part by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911.
This publication makes use of data obtained at the Mets\"ahovi Radio Observatory, operated by Aalto University, Finland.
The authors would like to thank a number of people who provided comments to the manuscript: R.~Angioni, N.~MacDonald, M.~Giroletti, R.~Caputo, D.~Thompson and the anonymous journal reviewer.
\kom{We would also like to thank M.~B\"ottcher for providing us the numerical values of the optical depths calculated in \cite{be16}.}
\end{acknowledgements}
\section{Introduction}
Let $k$ be a field. The Grothendieck ring of varieties $K_0(\mathbf{Var}_{/k})$ is defined as a group by generators the isomorphism classes $[X]$, where $X$ is a variety over $k$, and relations $[X-Y] + [Y] = [X]$ whenever $Y \hookrightarrow X$ is a closed inclusion. The multiplication is induced by the Cartesian product of varieties. This ring is a fundamental object of study for algebraic geometers: it is a universal home for Euler characteristics of varieties, called motivic measures, as well as an easy version of motives. It has further deep ties to stable birational geometry, and a number of interesting statements in that field can be phrased in terms of the structure of $K_0 (\mathbf{Var}_{/k})$ (see, e.g. \cite{liu_sebag, larsen_lunts}). The Grothendieck ring of varieties also arises as the target for ``motivic integration'' \cite{looijenga}, a technique invented by Kontsevich for producing rational invariants of Calabi--Yau varieties. In his setup, the target for such an integral is a ring closely related to $K_0 (\mathbf{Var}_{/k})$. In general, any ring homomorphism $K_0 (\mathbf{Var}_{/k}) \to A$ can be used as a measure for motivic integration, hence the term motivic measure.
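To fix ideas, here is a standard computation with the defining relation (a well-known example, included for illustration). Applying $[X] = [Y] + [X-Y]$ to the closed inclusion of a hyperplane $\mathbf{P}^{n-1}_k \hookrightarrow \mathbf{P}^n_k$ and iterating over the usual cell decomposition of projective space gives
\[
[\mathbf{P}^n_k] = 1 + [\mathbf{A}^1_k] + [\mathbf{A}^1_k]^2 + \cdots + [\mathbf{A}^1_k]^n;
\]
in particular $[\mathbf{P}^1_k] = [\mathbf{A}^1_k] + 1$.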
The construction of motivic measures is a powerful technique for understanding the structure of $K_0 (\mathbf{Var}_{/k})$, and a number of authors have constructed interesting ones. For example, in \cite{larsen_lunts} the authors construct a motivic measure $K_0 (\mathbf{Var}_{/k}) \to \mathbf{Z}[SB]$ where the latter denotes the free group ring on stable birational classes of varieties. Furthermore, they show that the kernel of that ring map is the ideal generated by the class of the affine line $[\mathbf{A}^1_k]$. The same motivic measure was also used in \cite{larsen_lunts} to prove the irrationality of a certain motivic zeta function. Another slightly more exotic motivic measure was produced in \cite{bondal_larsen_lunts}: a ring map $K_0 (\mathbf{Var}_{/k}) \to K_0 (\mathbf{PT})$ where $\mathbf{PT}$ is the category of small pre-triangulated categories.
Given a Grothendieck ring $K_0$, topologists and algebraic $K$-theorists have come to expect concomitant higher $K$-groups, $K_i$, that arise as homotopy groups of spaces or spectra. This is the case with the algebraic $K$-theory of rings \cite{bass,milnor,quillen}, and the algebraic $K$-theory of topological spaces \cite{waldhausen}. In the case of $K_0 (\mathbf{Var}_{/k})$ Zakharevich \cite{zakharevich} showed, using her formalism of assemblers, there is indeed an underlying spectrum and used the result to prove a number of results relating to cut-paste conjectures. We will call the spectrum she defined $K(\mathbf{Var}_{/k})$.
It is interesting to study the higher homotopy groups of $K(\mathbf{Var}_{/k})$ and there are concrete reasons to believe the higher homotopy contains a great deal of geometric information. For example, Zakharevich \cite{zakharevich_annihilator} has used $\pi_1 K(\mathbf{Var}_{/k})$ very effectively to study questions in birational geometry. For other flavors of algebraic $K$-theory, the typical way to study higher $K$-theory is to produce maps from $K(\mathbf{Var}_{/k})$ to target spectra with computable homotopy groups --- in our case such maps correspond to ``derived'' motivic measures. Unfortunately, assemblers are very difficult to define maps out of, and a different construction of $K(\mathbf{Var}_{/k})$ is needed. This paper provides such a construction. We note now that it takes work to prove the equivalence of the models. The comparison will appear in future work of the author, Jesse Wolfson, and Inna Zakharevich.
The standard way of defining higher algebraic $K$-theory begins with a category $\mathcal{E}$ with some notion of ``exact sequence'', for example Quillen's exact categories \cite{quillen} or triangulated categories. One then defines $K_0 (\mathcal{E})$ in the usual way by splitting exact sequences. Roughly, the higher $K$-groups are defined by using simplicial machinery to keep track of \textit{how} the sequences split. Waldhausen realized that in fact this type of machine works for a much less restrictive structure on the underlying category. One needs a zero object, ``cofibrations'', which are maps $X \to Y$ where one can define a quotient $Y/X$, and some mild categorical conditions on the existence of certain colimits \cite{waldhausen}. Granted this structure on a category $\mathcal{C}$ one can define a spectrum $K(\mathcal{C})$ by again using simplicial machinery to keep track of the ways in which $Y$ splits into $X$ and $Y/X$. In this case, the category together with the necessary structure is called a Waldhausen category, and the machinery is called the Waldhausen $S_\bullet$-construction.
One could hope to define a higher $K$-theory of varieties using such standard constructions. Unfortunately, there are immediate problems --- for example, $\mathbf{Var}_{/k}$ cannot be a Waldhausen category since it has no zero object, nor does it have quotients or pushouts in general. However, these objections can be remedied, and we introduce a new formalism where a modified $S_\bullet$-construction can be run. The production of this modified $S_\bullet$-construction is the main point of this paper.
First, the category $\mathbf{Var}_{/k}$ has just enough pushouts: pushouts where both legs are closed inclusions exist. Also, ``quotients'' in our setting will be replaced by ``subtraction,'' $Y - X$ for closed inclusions $X \hookrightarrow Y$ of varieties -- it is these ``subtraction sequences'' $X \hookrightarrow Y \leftarrow Y - X$ that we will split. We also observe that a zero object is not actually needed, but an initial object is and the empty variety will work in this case. Proceeding in this way, we create a new formalism of SW-categories (for semi-Waldhausen or scissors-Waldhausen or subtractive-Waldhausen...) and a suitably modified Waldhausen $S_\bullet$-construction called the $\widetilde{S}_\bullet$-construction (see Section 3 for details).
For Waldhausen's $S_\bullet$-construction, the main theorem, and the theorem from which almost all $K$-theory theorems follow \cite{staffeldt} is the Additivity Theorem \cite[Thm. 1.4.2]{waldhausen}. The main theorem of this paper is the following analogue.
\begin{thm}
Let $\mathcal{C}$ be an SW-category. Then the category $\mathbf{Sub}(\mathcal{C})$ of subtraction diagrams $X \hookrightarrow Y \leftarrow Y-X$ can also be made into an SW-category. Furthermore
\[
\widetilde{S}_\bullet \mathbf{Sub}(\mathcal{C}) \to \widetilde{S}_\bullet \mathcal{C} \times \widetilde{S}_\bullet \mathcal{C}
\]
is a weak equivalence of simplicial sets, with the map given by projecting onto the first and last components of $X \hookrightarrow Y \leftarrow Y-X$.
\end{thm}
With some work, this implies the following theorem.
\begin{thm}
For $\mathcal{C}$ an SW-category, $K(\mathcal{C})$ is an infinite loop space.
\end{thm}
Since $\mathbf{Var}_{/k}$ is an SW-category, we obtain a spectrum $K(\mathbf{Var}_{/k})$.
\begin{prop}
The components of the spectrum $K(\mathbf{Var}_{/k})$ coincide with the Grothendieck group of varieties:
\[
\pi_0 K(\mathbf{Var}_{/k}) = K_0 (\mathbf{Var}_{/k}).
\]
\end{prop}
The category $\mathbf{Var}_{/k}$ is endowed with a product, and we can descend this product to the spectrum level giving us an even stronger statement:
\begin{thm}
The cartesian product on varieties gives $K(\mathbf{Var}_{/k})$ the structure of an $E_\infty$-ring spectrum. Furthermore, $\pi_0 K(\mathbf{Var}_{/k})$ coincides with the Grothendieck ring of varieties.
\end{thm}
Once the spectrum $K(\mathbf{Var}_{/k})$ has been defined using a relative of well-studied machinery, we proceed to define maps in and out of $K(\mathbf{Var}_{/k})$. These may be considered to be ``derived'' versions of motivic measures. The ability to do this is one of the main virtues of defining $K(\mathbf{Var}_{/k})$ in this way.
First, one can define a model for the unit map $S \to K(\mathbf{Var}_{/k})$. Next, when $k$ is a finite field, a point-counting functor defines a map from $K(\mathbf{Var}_{/k})$ to the sphere spectrum. One may also consider a complex variety as a topological space and relate this to Waldhausen's $K$-theory of spaces, $A(\ast)$ \cite{waldhausen}. Finally, one can define maps to $K(\mathbf{Q})$ by using derived versions of the Euler characteristics. Summarizing, we have
\begin{thm}
There are non-trivial spectrum maps (and explicit models for them)
\begin{enumerate}
\item $S \to K(\mathbf{Var}_{/k})$ which on $\pi_0$ gives the map $\mathbf{Z} \to K_0 (\mathbf{Var}_{/k})$ sending
\[
[n] \mapsto \underbrace{\operatorname{Spec}(k) \amalg \cdots \amalg \operatorname{Spec}(k)}_{n \ \text{times}}
\]
\item $K(\mathbf{Var}_{/k}) \to S$, $k$ finite, which on $\pi_0$ gives the point-counting map $[X] \mapsto \#X(k)$.
\item $K(\mathbf{Var}_{/\mathbf{C}}) \to A(\ast)$, which on $\pi_0$ is $[X] \mapsto \chi(X)$.
\item $K(\mathbf{Var}_{/\mathbf{C}}) \to K(\operatorname{Ch}^{hb} (\mathbf{Q}))$ and $K(\mathbf{Var}_{/\mathbf{C}}) \to K(\mathbf{Q})$, where $\operatorname{Ch}^{hb}(\mathbf{Q})$ denotes the category of homologically bounded chain complexes. On $\pi_0$ this is also $[X] \mapsto \chi(X)$.
\end{enumerate}
\end{thm}
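As a toy illustration of the point-counting measure in item (2), one can count $\mathbf{F}_q$-points of a variety by brute force (a naive sketch over a small field; the curve below is a hypothetical example, not one from the text):

```python
# Naive illustration of the point-counting motivic measure on pi_0:
# brute-force count of F_q-points of the affine curve y^2 = x^3 + x.
# (Illustrative only; real point counts would use proper machinery.)
q = 7
points = [(x, y) for x in range(q) for y in range(q)
          if (y * y - (x ** 3 + x)) % q == 0]
print(len(points))  # prints 7
```

On $\pi_0$ the measure sends the class of such a curve to this integer, and the scissor relation is respected because a variety's points are partitioned by any closed/open decomposition.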
There should be maps from $K(\mathbf{Var}_{/k})$ into much ``larger'' and more interesting ring spectra. As a putative example of how to produce such a map, we consider the following. Instead of discarding information by simply counting points or taking cohomology, one could instead pass to derived categories, i.e. assign to a variety $X$ its derived category $\mathcal{D}(X)$. Done carefully, this procedure should produce a functor from varieties to stable $\infty$-categories. This would give us a conjectural map $K(\mathbf{Var}_{/k}) \to K(\mathcal{C}\text{at}^{Ex}_\infty)$ where $\mathcal{C}\text{at}^{Ex}_\infty$ is the $\infty$-category of stable $\infty$-categories \cite{blumberg_gepner_tabuada, lurie}. A more concrete manifestation of this map is the following conjecture.
\begin{conj}
There is a map $K(\mathbf{Var}_{/k}) \to K(K(S))$ or $K(K(k))$ of $E_\infty$-ring spectra.
\end{conj}
\begin{rmk}
The conjecture above would essentially supply a lift of the Bondal--Larsen--Lunts motivic measure $K_0 (\mathbf{Var}_{/k}) \to K_0 (\mathbf{PT})$.
\end{rmk}
There are many other possible ways of producing interesting motivic measures, and this will be the subject of future work.
\subsection{Acknowledgements}
This paper grew out of a seminar conducted with Andrew Blumberg at UT-Austin in Fall of 2014. I thank Andrew for suggesting the seminar topic, many helpful conversations about this paper, and general supportive enthusiasm for the project. At various points Sean Keel, Jen Berg, and Ben Williams have answered very naive questions about algebraic geometry. I am indebted to Jesse Wolfson for suggesting parts of the key definition \ref{w_exact}. Finally, I thank Inna Zakharevich for interest in the current work and encouragement. As is hopefully clear from the text, she was first to define the $K(\mathbf{Var}_{/k})$ spectrum, and this paper represents an alternate approach to work she has done. Inna also pointed out to me that Torsten Ekedahl was apparently thinking of an approach to $K(\mathbf{Var}_{/k})$ similar to the one below at the time of his death \cite{ekedhal_overflow}. Denis-Charles Cisinski made very helpful comments on an earlier draft of this paper. The comments of an anonymous referee greatly improved the structure and exposition of this paper.
\section{Scheme-Theoretic Preliminaries}
In topological contexts, the construction of $K$-theory via Waldhausen categories \cite{waldhausen}, depends heavily on having certain categorical limits and colimits. We cannot take for granted the existence of all (or any) limits and colimits in the category of varieties. However, in this section we show that all of the limits and colimits that will be necessary do, in fact, exist and we collect a number of other useful results. The author first learned this material in \cite{schwede}, but the material exists in the Stacks Project \cite[Tag 07RS]{stacks-project} as well.
\begin{defn}
In what follows a \textbf{variety} will be a finite-type, separated scheme over an arbitrary base scheme $X$.
\end{defn}
\begin{nota}
Throughout, closed immersions in both varieties and schemes will be denoted with a hooked arrow $Z \hookrightarrow X$. Similarly, an open immersion will be denoted by $Y \xrightarrow{\circ} X$.
\end{nota}
We will need two results.
\begin{thm}\cite[Thm. 3.3]{schwede}, \cite[Tag 07RS]{stacks-project}
Suppose $A, B$ are rings and suppose $I \subset A$ is an ideal, and that there exists a map $f: B \to A/I$. Consider the diagram
\[
\xymatrix{
Z = \operatorname{Spec} A/I \ar[d] \ar[r] & X = \operatorname{Spec} B \\
Y = \operatorname{Spec} A &
}
\]
Then,
\begin{enumerate}
\item The pushout $X \amalg_Z Y$ exists and is affine
\item $Y \to X \amalg_Z Y$ is a closed immersion
\item both $X \to X \amalg_Z Y$ and $Y \to X \amalg_Z Y$ are morphisms of schemes.
\end{enumerate}
\end{thm}
\begin{thm}\cite[Cor. 3.7]{schwede}, \cite[Tag 07RS]{stacks-project}\label{pushouts_exist}
Let $Z \hookrightarrow X$ and $Z \hookrightarrow Y$ be closed immersions of schemes. Then $X \amalg_Z Y$ exists.
\end{thm}
\begin{comment}
\begin{rmk}
It will be useful for us to record how the pushout is constructed locally. In the proof of the first theorem, it turns out that $X \amalg_Z Y = \operatorname{Spec} C$ where
\[
C = B \times_{A/I} A = \{(a, b): f(b) = a + I\}
\]
\end{rmk}
\end{comment}
Although not stated explicitly in Schwede, it is a consequence of the proof of Thm. \ref{pushouts_exist} that closed inclusions are preserved by cobase change:
\begin{cor}
In the situation of Thm.~\ref{pushouts_exist}, $X \to X \amalg_Z Y$ and $Y \to X \amalg_Z Y$ are closed immersions.
\end{cor}
However, we will need more. Since we will be working in the category of varieties alone, we need pushouts to exist in that category as well.
\begin{prop}\label{pushout_varieties}
Let $Z \to X$, $Z \to Y$ be closed embeddings of varieties. Form the pushout $X \amalg_Z Y$ in the category of schemes. Then $X \amalg_Z Y$ is a variety.
\end{prop}
In the category of varieties, an ``exact sequence'' will be a sequence
\[
X \hookrightarrow Y \xleftarrow{\circ} Y - X
\]
where the first map is a closed embedding. These will be the sequences we want to split. In order to view them as the input to a $K$-theory machine, however, we have to verify a number of categorical properties. In the rest of the section we collect these properties.
First, we define how to subtract schemes.
\begin{defn}
Let $i: Z \hookrightarrow X$ be a closed immersion. We define $X - Z$ as follows. The immersion $i$ determines a homeomorphism onto a closed subset $i(Z) \subset X$, which in turn determines an open subset $X - i(Z)$ of $X$. To view this as a scheme, we restrict the structure sheaf $\mathcal{O}_X$ to $X - i(Z)$. That is,
\[
X - Z = (X - i(Z), \mathcal{O}_X|_{X - i(Z)})
\]
\end{defn}
\begin{rmk}
This is a good time to remark on the functoriality of subtraction. It is clear that given a diagram
\begin{equation}\label{functorial_subtraction_diagram}
\xymatrix{
W \ar@{^{(}->}[r] \ar@{^{(}->}[d] & X\ar@{^{(}->}[d] \\
Z \ar@{^{(}->}[r] & Y
}
\end{equation}
there need not be an induced map $X - W \to Y - Z$. Indeed, if $X = Y$ and $W$ is strictly contained in $Z$, then $X-W$ \textit{contains} $Y-Z$. This is fixed, however, if we require that the diagram be cartesian. On the level of sets, this corresponds to intersecting $X$ and $Z$ inside of $Y$. We then get a map $X - W \hookrightarrow Y- Z$.
In the case (\ref{functorial_subtraction_diagram}) is cartesian, there is more we can do. We can extend it to a diagram
\[
\xymatrix{
W \ar@{^{(}->}[r] \ar@{^{(}->}[d] & X \ar@{^{(}->}[d] & X - W \ar@{^{(}->}[d] \ar[l]_{\circ}\\
Z \ar@{^{(}->}[r] & Y & Y - Z \ar[l]_{\circ} & \\
Z-W \ar[u]_{\circ}\ar@{^{(}->}[r] & Y - X \ar[u]_{\circ} & \square \ar[u]^{\circ}\ar[l]_{\circ}
}
\]
where
\[
(Y-Z)-(X-W) = \square = (Y-X)-(Z-W),
\]
and all maps along the bottom and right border are uniquely determined.
\end{rmk}
In general, however, this is only one choice of subtraction; there are many other choices isomorphic to it. A better way to encode subtraction is the following:
\begin{defn}\label{subtraction_sequence_varieties}
We define the collection of maps $Z \hookrightarrow X \xleftarrow{\circ} Y$ such that the left map is a closed immersion, the right map is an open immersion and the underlying topological space of $X$ is the disjoint union of the underlying topological spaces of $Z$ and $Y$ to be the \textbf{subtraction sequences}.
\end{defn}
Note that with $X - Z$ defined as above, $Z \hookrightarrow X \xleftarrow{\circ} X-Z$ is a subtraction sequence. However, working with subtraction sequences allows for choices of isomorphic subtractions.
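For concreteness, here is a subtraction sequence in $\mathbf{Var}_{/k}$.
\begin{example}
The point at infinity and the standard affine chart give a subtraction sequence
\[
\{\infty\} \hookrightarrow \mathbb{P}^1 \xleftarrow{\circ} \mathbb{A}^1
\]
in $\mathbf{Var}_{/k}$: the left map is a closed immersion, the right map is an open immersion, and the underlying space of $\mathbb{P}^1$ is the disjoint union of the other two. Any variety isomorphic to $\mathbb{A}^1$ over its image serves equally well in the third spot, which is exactly the flexibility the definition is designed to allow.
\end{example}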
A property of these subtraction sequences is that they are closed under pullback:
\begin{prop}\label{subtractions_pullback}
Subtraction sequences are closed under pullback: given a subtraction sequence $Z \hookrightarrow X \xleftarrow{\circ} Y$ and a map $X' \to X$, the bottom row in the diagram below is a subtraction sequence:
\[
\xymatrix{
Z \ar@{^{(}->}[r] & X & Y \ar[l]^\circ\\
X' \times_X Z \ar[u] \ar@{^{(}->}[r] & X'\ar[u] & X' \times_X Y \ar[l]^\circ \ar[u]
}
\]
\end{prop}
\begin{proof}
The statement is clearly true for the underlying topological spaces, and both open and closed immersions are stable under pullback.
\end{proof}
We record a useful corollary of Prop.~\ref{subtractions_pullback}.
\begin{cor}
If $i: X \hookrightarrow Y$ and $j: Y \hookrightarrow Z$ are closed immersions, then $Y - X \to Z - X$ is a closed immersion.
\end{cor}
We restate the observation of Schwede (in \cite{schwede}, above Lem. 3.8) that if $X$ and $Y$ are closed in an ambient scheme $W$ with ideal sheaves $\mathcal{I}_X$ and $\mathcal{I}_Y$ respectively, and $Z$ is the scheme-theoretic intersection, then $X \amalg_Z Y$ is cut out by $\mathcal{I}_X \cap \mathcal{I}_Y$ in $W$.
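A worked instance of this observation may be useful.
\begin{example}
Let $W = \mathbb{A}^2 = \operatorname{Spec} k[x,y]$, and let $X = V(y)$ and $Y = V(x)$ be the coordinate axes, so that $Z = X \cap Y = V(x,y)$ is the origin. Since $(x) \cap (y) = (xy)$ in $k[x,y]$, the pushout is
\[
X \amalg_Z Y = V(xy) \subset \mathbb{A}^2,
\]
the union of the two axes glued along the origin.
\end{example}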
\begin{prop}\cite{schwede}\label{pushout_product}
Given a cocartesian diagram of varieties
\[
\xymatrix{
Z \ar[r]\ar[d] & X\ar[d] \\
Y \ar[r] & W
}
\]
where all maps are cofibrations, the map $X \amalg_Z Y \to W$ is a cofibration.
\end{prop}
\begin{rmk}\label{pushout_pullback}
We also note that cocartesian diagrams as above are also cartesian squares.
\end{rmk}
Pushout also interacts in a controlled way with subtraction sequences:
\begin{prop}\label{var_sub_pushout}
Given a diagram
\[
\xymatrix{
X ' \ar@{^{(}->}[d] & W' \ar@{_{(}->}[l]\ar@{^{(}->}[r] \ar@{^{(}->}[d] & Y'\ar@{^{(}->}[d]\\
X & W \ar@{^{(}->}[r] \ar@{_{(}->}[l] & Y\\
X'' \ar[u]^\circ & W'' \ar@{^{(}->}[r] \ar@{_{(}->}[l] \ar[u]^\circ & Y'' \ar[u]^\circ
}
\]
such that the columns are subtraction sequences, and both top squares are cartesian squares, the pushouts of the rows form a subtraction sequence
\[
Y' \amalg_{W'} X' \hookrightarrow Y \amalg_W X \xleftarrow{\circ} Y'' \amalg_{W''} X''
\]
\end{prop}
\begin{proof}
We examine the diagram
\[
\xymatrix@C=.3cm@R=.3cm{
W' \ar[rr] \ar[dr] \ar[dd] & & Y' \ar[dr]\ar[dd] & \\
& X' \ar[dd] \ar[rr] & & X' \amalg_{W'} Y' \ar[dd]\\
W \ar[rr]\ar[dr] & & Y \ar[dr] \\
& X \ar[rr] & & X \amalg_{W} Y
}
\]
where the back and left faces are cartesian and all arrows except the rightmost are closed immersions. It suffices to show that the rightmost arrow is a closed immersion.
The left square is cartesian and the bottom square is cocartesian and so cartesian by Rmk. \ref{pushout_pullback}. Thus the composite square
\[
\xymatrix{
W' \ar[d]\ar[r] & X'\ar[d]\\
Y \ar[r] & X \amalg_W Y
}
\]
is cartesian. Thus, by Prop. \ref{pushout_product} the map $X' \amalg_{W'} Y \to X \amalg_W Y$ is a closed immersion. But, $X' \amalg_{W'} Y' \to X' \amalg_{W'} Y$ is a closed immersion as well, and closed immersions compose.
\end{proof}
\section{The $K$-Theory Spectrum of Varieties}
Typically, the input for algebraic $K$-theory is a category imbued with some notion of cofiber sequence that $K$-theory then splits: for cofiber sequences $A \to B \to C$ in $\mathcal{C}$, we have the relation $[B] = [A] + [C]$ in $K_0 (\mathcal{C})$ with more subtle information encoded in higher $K$-groups. One of the more general constructions of algebraic $K$-theory is due to Waldhausen, and the categories on which Waldhausen's machine operates are, naturally, called Waldhausen categories \cite{waldhausen}. We review the construction of the $K$-theory of a Waldhausen category $\mathcal{C}$ below.
For the category $\mathbf{Var}_{/k}$, the axioms of a Waldhausen category are certainly not satisfied: the sequences we would like to split, sequences of the form $Z \hookrightarrow X \leftarrow X-Z$, do not even have morphisms in the appropriate directions. To circumvent this issue, we introduce the formalism of subtractive categories (Def. \ref{subtractive_category}) and a modified version of Waldhausen's construction of $K$-theory for such categories.
Once the $K$-theory space is constructed, we can show that it is in fact an infinite loop space, or spectrum. This is done by proving the additivity theorem, a rigorous version of the statement that $K$-theory splits exact sequences. The key point is that subtraction sequences interact well enough with pushouts to allow adaptations of proofs of additivity (e.g. \cite{mccarthy}) to go through in the new context.
\begin{comment}
In this section we show that we can produce $K$-theory from a structure slightly different from a Waldhausen category. The category $\mathbf{Var}_{/k}$ is simply not a Waldhausen category, but if one modifies the definition, a suitable replacement notion can be found and a delooping produced. The path to all of this is via ``additivity.''
The standard slogan for algebraic $K$-theory is that it is a ``universal additive invariant'' \cite{blumberg_gepner_tabuada, barwick}. That is, it is initial among invariants of categories that split exact sequences. In \cite{waldhausen}, Waldhausen shows his definition satisfies additivity and then uses that property to produce a delooping of the algebraic $K$-theory space. Of course, Waldhausen's constructions and proofs use only Waldhausen categories, which have certain category theoretic requirements that $\mathbf{Var}_{/k}$ does not satisfy. We introduce a modification of Waldhausen categories that will allow us to define an $S_\bullet$-construction for $\mathbf{Var}_{/k}$, and thus $K(\mathbf{Var}_{/k})$. After producing this $S_\bullet$-construction, we present a version of it that allows for immediate recognition of $K(\mathbf{Var}_{/k})$ as a symmetric spectrum, and then a version that allows for recognition of multiplicative properties.
After defining the appropriate $S_\bullet$ constructions, we go on to prove additivity in this context, following McCarthy's \cite{mccarthy} method. From there, modifications of a few more theorems of Waldhausen are needed to show that we can deloop the $K$-theory of varieties to produce an infinite loop space, and thus produce a quasi-fibrant symmetric spectrum.
\end{comment}
\subsection{Waldhausen Categories and the $S_\bullet$-construction}
We give a rapid review of Waldhausen categories and the construction of the $K$-theory spectrum for Waldhausen categories.
\begin{defn}\cite[Sect. 1.1]{waldhausen}
A \textbf{Waldhausen category} $\mathcal{C}$ is a category with an initial and terminal object $\ast$, equipped with two distinguished subcategories
\begin{enumerate}
\item cofibrations, denoted $\mathbf{co}(\mathcal{C})$, with arrows in $\mathbf{co}(\mathcal{C})$ denoted $\hookrightarrow$
\item weak equivalences, denoted $\mathbf{w}(\mathcal{C})$, with arrows in $\mathbf{w}(\mathcal{C})$ denoted $\xrightarrow{\sim}$.
\end{enumerate}
These are required to satisfy the following axioms
\begin{itemize}
\item The isomorphisms of $\mathcal{C}$ are in both $\mathbf{co}(\mathcal{C})$ and $\mathbf{w}(\mathcal{C})$
\item For every $C \in \mathcal{C}$, the map $\ast \to C$ is a cofibration.
\item Given a cofibration $C \hookrightarrow D$ and any arrow $C \to C'$, the pushout $C' \amalg_C D$ exists and furthermore $C' \to C' \amalg_C D$ is a cofibration.
\item Given a diagram
\[
\xymatrix{
D \ar[d]^{\sim} & C \ar[d]^{\sim}\ar@{_{(}->}[l]\ar[r] & E \ar[d]^{\sim}\\
D' & C' \ar@{_{(}->}[l]\ar[r] & E'
}
\]
the induced map $D \amalg_C E \to D' \amalg_{C'} E'$ is a weak equivalence.
\end{itemize}
\end{defn}
\begin{rmk}
Given a cofibration $C \hookrightarrow D$, one can form a pushout along $C \to \ast$ to form a quotient $D/C$. The resulting sequence $C \hookrightarrow D \to D/C$ is called a cofiber sequence.
\end{rmk}
This is all the structure that is required to define $K$-theory. Note first that there is certainly a notion of Grothendieck group for a Waldhausen category $\mathcal{C}$: it has as generators the isomorphism classes $[C]$ with $C \in \mathcal{C}$, and relations $[C]+[D/C] = [D]$ for cofiber sequences $C \hookrightarrow D\to D/C$.
Before going on, we define the notion of functor between Waldhausen categories.
\begin{defn}
Let $\mathcal{C}$ and $\mathcal{D}$ be Waldhausen categories. Then a functor $F: \mathcal{C} \to \mathcal{D}$ is \textbf{exact} if it preserves zero objects, cofibrations, pushouts along cofibrations and weak equivalences.
\end{defn}
To define higher $K$-groups, we need the $S_\bullet$ construction.
\begin{defn}
Let $\mathcal{C}$ be a Waldhausen category. Let $\operatorname{Ar}[n]$ denote the arrow category: the objects are pairs $(i, j)$ with $0 \leq i \leq j \leq n$ and morphisms are $(i, j) \to (i', j')$ with $i \leq i'$ and $j \leq j'$. We define a category $S_n \mathcal{C}$ to be the full subcategory of functors $F: \operatorname{Ar}[n] \to \mathcal{C}$ such that
\begin{enumerate}
\item $F(i, i) = \ast$
\item $F(i, j) \to F(i, k)$ is a cofibration for all $i \leq j \leq k$
\item The square
\[
\xymatrix{
F(i, j) \ar[d]\ar[r] & F(i, k)\ar[d]\\
F(j, j)=\ast \ar[r] & F(j, k)
}
\]
is cocartesian for all $i \leq j \leq k$.
\end{enumerate}
The categories $S_n \mathcal{C}$ assemble into a simplicial category, which we will denote $S_\bullet \mathcal{C}$. The simplicial face maps $d_i: S_n \mathcal{C} \to S_{n-1} \mathcal{C}$ are given by deleting the $i$th row and $i$th column from a diagram in $S_n \mathcal{C}$. The degeneracies are given by inserting identity maps in the appropriate places.
\end{defn}
We may now define the $K$-theory space.
\begin{defn}
Let $w S_n \mathcal{C}$ be the subcategory of $S_n \mathcal{C}$ where morphisms are given by level-wise weak equivalences. We may form the simplicial category $w S_\bullet \mathcal{C}$ and take the level-wise nerve $Nw S_\bullet \mathcal{C}$, which we denote $w_\bullet S_\bullet \mathcal{C}$. The level-wise nerve $w_\bullet S_\bullet \mathcal{C}$ is a bisimplicial set. We define the \textbf{algebraic $K$-theory space of $\mathcal{C}$} to be
\[
K(\mathcal{C}):= \Omega |w_\bullet S_\bullet \mathcal{C}|
\]
where $|-|$ denotes the realization of a bisimplicial set.
\end{defn}
Waldhausen shows that in fact $K(\mathcal{C})$ is an infinite loop space. The crucial step is the additivity theorem:
\begin{thm}\cite[Prop 1.3.2]{waldhausen}
Let $\mathcal{C}$ be a Waldhausen category and let $\mathcal{E}$ be the category whose objects are cofiber sequences $A \hookrightarrow B \to C$ in $\mathcal{C}$, with level-wise morphisms. Then $\mathcal{E}$ can be given the structure of a Waldhausen category.
Furthermore, there are functors $s, q: \mathcal{E} \to \mathcal{C} \times \mathcal{C}$ given by taking $(A \to B \to C)$ to $A$ and $C$ respectively, and these induce an equivalence of simplicial sets
\[
w S_\bullet \mathcal{E} \xrightarrow{(s,q)} w S_\bullet \mathcal{C} \times w S_\bullet \mathcal{C}
\]
\end{thm}
\subsection{Subtractive Categories}
In this section, we define ``categories with subtraction'', the minimal categorical input needed to define an analogue of the Waldhausen $S_\bullet$-construction. We go on to define ``subtractive categories'', which are more restrictive and which provide just enough structure to mimic standard proofs of the additivity theorem.
\begin{defn}\label{cat_w_cofibs}
A \textbf{category with subtraction} is a category $\mathcal{C}$, equipped with a subcategory of cofibrations, $\mathbf{co}(\mathcal{C})$ and a subcategory of fibrations, $\mathbf{fib}(\mathcal{C})$. The arrows of $\mathbf{co}(\mathcal{C})$ will be denoted by ``$\hookrightarrow$'' and those of $\mathbf{fib}(\mathcal{C})$ will be denoted by $\xrightarrow{\circ}$. The following axioms must hold:
\begin{enumerate}
\item There is an initial object, typically referred to as the empty object, $\emptyset$.
\item Isomorphisms are cofibrations and fibrations
\item (\textbf{pullbacks}) Pullbacks along cofibrations and fibrations exist, and cofibrations and fibrations are stable under base change.
\item There is a notion of subtraction: that is, there is a collection of \textbf{subtraction sequences} $\{Z \hookrightarrow X \leftarrow Y\}$ which are required to satisfy the following axioms
\begin{enumerate}
\item $A \to A \amalg B \leftarrow B$ is a subtraction sequence
\item Every cofibration $Z \hookrightarrow X$ participates in a subtraction sequence $Z \hookrightarrow X \leftarrow Y$ where $Y$ is unique up to unique isomorphism. The same statement holds for fibrations. We will informally denote $Y$ by $X-Z$.
\item Subtraction is functorial in fiber squares. Given a fiber square where all arrows are cofibrations, we can form the diagram below where all of the rows and columns are subtraction sequences
\[
\xymatrix{
W \ar@{^{(}->}[r] \ar@{^{(}->}[d] & X \ar@{^{(}->}[d] & X - W \ar@{^{(}->}[d] \ar[l]_{\circ}\\
Z \ar@{^{(}->}[r] & Y & Y - Z \ar[l]_{\circ} & \\
Z-W \ar[u]_{\circ}\ar@{^{(}->}[r] & Y - X \ar[u]_{\circ} & \square \ar[u]^{\circ}\ar[l]_{\circ}
}
\]
where here $\square$ denotes what would informally be called $(Y-Z)-(X-W)$ or $(Y-X)-(Z-W)$. In this diagram, we require that the arrows along the bottom and right of the diagram be uniquely determined and that the bottom right square be cartesian.
The dual statement is required for fibrations.
\item Subtraction is respected by base change. That is, given a subtraction sequence $Z \hookrightarrow X \xleftarrow{\circ} Y$ and a map $W \to X$ we can form the diagram where both squares are cartesian:
\[
\xymatrix{
Z \ar@{^{(}->}[r] & X & Y \ar[l]_{\circ}\\
Z \times_W X \ar@{^{(}->}[r]\ar[u] & W \ar[u]& W \times_X Y \ar[l]_{\circ}\ar[u]
}
\]
The bottom row is required to be a subtraction sequence.
\end{enumerate}
\end{enumerate}
\end{defn}
\begin{rmk}
The definition of subtraction may seem somewhat odd, given that we do not specify what, exactly, the subtraction should be, only that it exists. It is, however, all that we need for any of the arguments below. It also leaves room for ``relative'' subtraction sequences: given an inclusion of categories with subtraction $\mathcal{C} \to \mathcal{D}$, we could define a new subtraction structure on $\mathcal{D}$ by declaring $Z \hookrightarrow X \leftarrow Y$ to be a subtraction sequence if $X - Z \cong Y \amalg C$ in the old structure for some $C$.
\end{rmk}
\begin{rmk}
The axiom for the functoriality of subtraction is necessitated by the fact that subtraction does not satisfy any good categorical properties: the intuitively suggested properties of subtraction must be inserted by fiat.
\end{rmk}
\begin{rmk}
The definition bears a strong resemblance to the definition of exact categories in \cite{quillen}.
\end{rmk}
There are a large number of examples of categories with subtraction. The most important for us will be the following.
\begin{example}\label{varieties_are_categories_with_subtraction}
Let $X$ be a scheme. Then $\mathbf{Sch}_{/X}$ and $\mathbf{Var}_{/X}$ are categories with subtraction: the cofibrations are the closed immersions, the fibrations are the open immersions, and the subtraction sequences are the subtraction sequences of schemes or varieties of Defn.~\ref{subtraction_sequence_varieties}. It is clear that cofibrations, fibrations, and subtraction sequences all satisfy base change, so the axioms hold.
\end{example}
\begin{example}
The category $\mathbf{Sch}^{\text{sm}}_{/X}$ of smooth schemes over $X$, with cofibrations the closed inclusions or the open inclusions, is also a category with subtraction.
\end{example}
Categories with subtraction are useful, but we will need a refinement of them in order to prove additivity.
\begin{defn}\label{subtractive_category}
A \textbf{subtractive category}, $\mathcal{C}$, is a category with subtraction such that
\begin{enumerate}
\item (\textbf{pushouts}) The pushout of a diagram in which both legs are cofibrations exists and satisfies base change. Furthermore, cocartesian diagrams of this form are required to be cartesian.
\item (\textbf{pushout products}) In a cartesian square
\[
\xymatrix{
W \ar[r]\ar[d] & X\ar[d] \\
Y \ar[r] & Z
}
\]
where all arrows are cofibrations, the map $X \amalg_W Y \to Z$ is a cofibration.
\item (\textbf{subtraction and pushouts}) Given a diagram
\[
\xymatrix{
X ' \ar@{^{(}->}[d] & W' \ar@{_{(}->}[l]\ar@{^{(}->}[r] \ar@{^{(}->}[d] & Y'\ar@{^{(}->}[d]\\
X & W \ar@{^{(}->}[r] \ar@{_{(}->}[l] & Y\\
X'' \ar[u]^{\circ} & W'' \ar@{^{(}->}[r] \ar@{_{(}->}[l] \ar[u]^{\circ} & Y'' \ar[u]^{\circ}
}
\]
where the columns are subtraction sequences and the top two squares are cartesian, the pushouts along the rows form a subtraction sequence
\[
\xymatrix{
X' \amalg_{W'} Y' \ar[r] & X \amalg_W Y & X'' \amalg_{W''} Y'' \ar[l]
}
\]
\end{enumerate}
\end{defn}
We need an appropriate notion of functor between two subtractive categories.
\begin{defn}
A functor $F: \mathcal{C} \to \mathcal{D}$ of subtractive categories is \textbf{exact} if
\begin{enumerate}
\item $F$ preserves the initial object: $F(\emptyset) = \emptyset$
\item $F$ preserves subtraction sequences: If $X \hookrightarrow Z \leftarrow Y$ is a subtraction sequence, then
\[
F(X) \hookrightarrow F(Z) \leftarrow F(Y)
\]
is a subtraction sequence.
\item $F$ preserves cocartesian diagrams.
\end{enumerate}
\end{defn}
\begin{rmk}
In Waldhausen's work, item 2 is subsumed by item 3. In our case quotients (i.e. pushouts along a map to the final object) and subtraction are not the same, so we must posit an extra condition.
\end{rmk}
The work of Section 2 gives us the following.
\begin{cor}
$\mathbf{Sch}_{/X}$ and $\mathbf{Var}_{/X}$ are subtractive categories with cofibrations the closed inclusions.
\end{cor}
\begin{proof}
First, these are all categories with subtraction by Example \ref{varieties_are_categories_with_subtraction}.
Furthermore, pushout diagrams where both legs are closed immersions exist (Thm.~\ref{pushouts_exist}), the pushout product axiom holds (Prop.~\ref{pushout_product}), and the final axiom regarding subtraction and pushout holds (Prop.~\ref{var_sub_pushout}). Thus $\mathbf{Sch}_{/X}$ and $\mathbf{Var}_{/X}$ are subtractive categories.
\end{proof}
\begin{rmk}
Note that $\mathbf{Sch}^{sm}_{/X}$ is \textit{not} a subtractive category, as pushing out along closed inclusions introduces singularities.
\end{rmk}
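This failure can be seen in the smallest possible example.
\begin{example}
Gluing two copies of $\mathbb{A}^1$ along the closed point $\operatorname{Spec} k \hookrightarrow \mathbb{A}^1$ given by the origin yields
\[
\mathbb{A}^1 \amalg_{\operatorname{Spec} k} \mathbb{A}^1 \cong V(xy) \subset \mathbb{A}^2,
\]
the union of the coordinate axes, which is singular at the origin even though both inputs are smooth.
\end{example}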
As in Waldhausen \cite[Lem. 1.1.1]{waldhausen}, we will proceed to show that the arrow category $F_1 \mathcal{C}$ of a subtractive category is also a subtractive category.
\begin{defn}
Let $\mathcal{C}$ be a subtractive category. Let $F_1 \mathcal{C}$ denote the category with objects cofibrations $Z \hookrightarrow X$ and morphisms cartesian diagrams.
\end{defn}
\begin{prop}
Let $f: (W \hookrightarrow X) \to (Y \hookrightarrow Z)$ be a map in $F_1 \mathcal{C}$. We define $f$ to be a cofibration if $W \to Y$ and $X \to Z$ are. This turns $F_1 \mathcal{C}$ into a subtractive category.
\end{prop}
\begin{proof}
First, the cofibrations form a category; this is clear from the usual properties of pullbacks. The category $F_1 \mathcal{C}$ has an initial object $(\emptyset \to \emptyset)$ and the isomorphisms are cofibrations. Pullbacks exist, are defined point-wise, and are easily seen to satisfy base change. Furthermore, subtractions exist by the pullback axiom: in the diagram below, we consider the left square to be a cofibration in $F_1 \mathcal{C}$ and the right vertical map is the corresponding subtraction guaranteed by the axioms:
\[
\xymatrix{
Z \ar@{^{(}->}[d] \ar@{^{(}->}[r] & X \ar@{^{(}->}[d] & X - Z \ar@{^{(}->}[d]\ar[l]\\
Z' \ar@{^{(}->}[r] & X' & X'-Z' \ar[l]
}
\]
This proves that $F_1 \mathcal{C}$ is a category with subtraction.
To see that it is a subtractive category, we note that pushouts can be defined point-wise. The map produced by the pushout is a cofibration by the subtraction and pushout axiom. Pushout product follows from the definition of pullback, and the pushout product in $\mathcal{C}$. The interaction of subtraction and pushout follows easily, though tediously, from the work above.
\end{proof}
For future use, we introduce one more new category.
\begin{defn}
Let $F^+_1 \mathcal{C}$ denote the category whose objects are subtraction sequences $Z \hookrightarrow X \xleftarrow{\circ} Y$ in $\mathcal{C}$ and morphisms are diagrams
\begin{equation}\label{morphism_f1C}
\xymatrix{
Z \ar@{^{(}->}[r]\ar[d] & X \ar[d] & Y \ar[l]_\circ \ar[d] \\
Z'\ar@{^{(}->}[r] & X' & Y' \ar[l]_\circ
}
\end{equation}
where both squares are cartesian.
\end{defn}
\begin{defn}\label{stq}
We define three functors $s, t, q: F^+_1 \mathcal{C} \to \mathcal{C}$ on objects by
\begin{enumerate}
\item $s (Z \hookrightarrow X \leftarrow Y) = Z$
\item $t (Z \hookrightarrow X \leftarrow Y) = X$
\item $q (Z \hookrightarrow X \leftarrow Y) = Y$
\end{enumerate}
\end{defn}
\begin{lem}
The functors $s, t, q$ are exact.
\end{lem}
\begin{proof}
Only the fact that $q$ is exact requires proof. First, $q$ takes cofibrations to cofibrations. To see this, note that a cofibration is a diagram such as (\ref{morphism_f1C}) where all vertical arrows are cofibrations. That $q$ takes subtraction sequences to subtraction sequences is a consequence of the preservation of subtraction sequences under pullback. That $q$ preserves cocartesian diagrams is exactly Defn. \ref{subtractive_category} Axiom 3.
\end{proof}
\subsection{$SW$-categories and $\widetilde{S}_\bullet$}
We finally introduce a modification of Waldhausen's $S_\bullet$-construction. Before doing so, we define a type of subtractive category where we allow for the presence of weak equivalences. In the main example in this paper, the category $\mathbf{Var}_{/k}$, the weak equivalences will simply be the isomorphisms. We introduce the definition below in order to allow for weaker notions of equivalence (e.g. birational equivalence) in future work.
\begin{defn}
An \textbf{SW-category} (subtractive Waldhausen category) is a subtractive category equipped with a category of weak equivalences, $w \mathcal{C}$, such that
\begin{enumerate}
\item The isomorphisms are contained in $w\mathcal{C}$
\item Gluing holds: Given the diagram where all horizontal arrows are cofibrations
\[
\xymatrix{
Y\ar[d]_{\simeq} & X\ar[d]_{\simeq} \ar@{_{(}->}[l]\ar@{^{(}->}[r] & Z\ar[d]_{\simeq}\\
Y' & X'\ar@{_{(}->}[l]\ar@{^{(}->}[r] & Z'
}
\]
we have
\[
Y \amalg_X Z \simeq Y' \amalg_{X'} Z'
\]
\item Subtraction is respected: If we have a commuting square
\[
\xymatrix{
X \ar@{^{(}->}[r] \ar[d]_{\simeq} & Y \ar[d]^{\simeq}\\
X' \ar@{^{(}->}[r] & Y'
}
\]
then there is an induced weak equivalence $Y - X \xrightarrow{\simeq} Y' - X'$.
\end{enumerate}
\end{defn}
\begin{defn}
A functor $F: \mathcal{C} \to \mathcal{D}$ between SW-categories is \textbf{exact} if $F$ preserves weak equivalences and $F$ is exact as a functor of subtractive categories.
\end{defn}
\begin{rmk}
Note that for any subtractive category $\mathcal{C}$, if we declare the isomorphisms in $\mathcal{C}$ to be the weak equivalences, we obtain an SW-category.
\end{rmk}
Now, the development above proves
\begin{prop}\label{var_SW_cat}
Let $X$ be a scheme and let $\mathbf{Var}_{/X}$ be the category of separated, finite-type schemes over $X$. Then $\mathbf{Var}_{/X}$ is an SW-category with cofibrations the closed immersions and weak equivalences the isomorphisms of schemes.
\end{prop}
We proceed to give the version of Waldhausen's construction of $K$-theory appropriate to SW-categories. This will be a modification of his $S_\bullet$-construction.
To cleanly state the construction we need to define a useful indexing category.
\begin{defn}
Let $[n]$ denote the ordered set $\{0, \dots, n\}$ considered as a category, i.e. there is a map $i \to j$ if $i \leq j$. Define $\widetilde{\operatorname{Ar}}[n]$ to be the full subcategory of $[n]^{\text{op}} \times [n]$ consisting of pairs $(i, j)$ with $i \leq j$.
\end{defn}
\begin{example}
$\widetilde{\operatorname{Ar}}[2]$ may be visualized as
\[
\xymatrix{
(0,0) \ar[r] & (0,1)\ar[r] & (0,2)\\
& (1,1)\ar[u] \ar[r] & (1,2)\ar[u]\\
& & (2,2) \ar[u]
}
\]
and will be referred to colloquially as ``flags'' below.
\end{example}
\begin{defn}[$\widetilde{S}_\bullet$-construction]\label{s_dot}
Let $\mathcal{C}$ be an SW-category. We define $\widetilde{S}_n \mathcal{C}$ to be the set of functors
\[
X: \widetilde{\operatorname{Ar}}[n] \to \mathcal{C}
\]
subject to the conditions
\begin{itemize}
\item $X_{i,i} = \emptyset$, the initial object (for $\mathbf{Var}_{/k}$, the empty variety).
\item Every $X_{i,j} \to X_{i,k}$ where $j < k$ is a cofibration.
\item The sub-diagram
\[
X_{i,j} \to X_{i,k} \leftarrow X_{j, k}
\]
is a subtraction sequence for all $i \leq j \leq k$.
\item For $i < j < k < l$, the subdiagram
\[
\xymatrix{
X_{ik} \ar@{^{(}->}[r] & X_{il}\\
X_{jk} \ar@{^{(}->}[r]\ar[u]^{\circ} & X_{jl} \ar[u]^{\circ}
}
\]
is cartesian.
\end{itemize}
This defines a simplicial set as follows. The face maps are
\begin{enumerate}
\item $d_0: \widetilde{S}_n \mathcal{C} \to \widetilde{S}_{n-1} \mathcal{C}$ is given by removing the first row.
\item $d_k: \widetilde{S}_n \mathcal{C} \to \widetilde{S}_{n-1} \mathcal{C}$ is given by deleting the $k$th row and column and composing the remaining maps.
\end{enumerate}
The $i$th degeneracy maps are given by inserting identity maps $X_{i,j} \xrightarrow{=} X_{i, j}$ for all $j$. From this it is clear that the simplicial relations hold.
\end{defn}
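It may help to spell out the low-dimensional simplices.
\begin{example}
An element of $\widetilde{S}_0 \mathcal{C}$ is trivial, an element of $\widetilde{S}_1 \mathcal{C}$ is simply an object $X_{01}$ of $\mathcal{C}$ (with $X_{00} = X_{11} = \emptyset$), and an element of $\widetilde{S}_2 \mathcal{C}$ is precisely a subtraction sequence $X_{01} \hookrightarrow X_{02} \xleftarrow{\circ} X_{12}$. In $\mathbf{Var}_{/k}$, for instance,
\[
\{0\} \hookrightarrow \mathbb{A}^1 \xleftarrow{\circ} \mathbb{A}^1 - \{0\}
\]
is a $2$-simplex.
\end{example}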
In fact, $\widetilde{S}_\bullet \mathcal{C}$ can be considered as a simplicial category (i.e. a simplicial object in categories). First, we introduce some notation. Let $i_0: [n] \hookrightarrow \widetilde{\operatorname{Ar}}[n]$ be given by $j \mapsto (0, j)$.
\begin{defn}
We consider $\widetilde{S}_n \mathcal{C}$ as a category as follows. The objects are the functors $X: \widetilde{\operatorname{Ar}}[n] \to \mathcal{C}$ as above and the morphisms are functors $Y: \widetilde{\operatorname{Ar}}[n] \times [1] \to \mathcal{C}$ with the additional restriction that all squares in $i^\ast_0 Y : [n] \times [1]\to \mathcal{C}$ are cartesian. Composition is given in the obvious way.
\end{defn}
\begin{rmk}
The requirement that the squares $[n]\times[1] \to \mathcal{C}$ be cartesian is a consequence of the fact that we only have functoriality of subtraction with respect to cartesian squares.
\end{rmk}
\begin{rmk}
This makes $\widetilde{S}_\bullet \mathcal{C}$ into a simplicial \textit{category}.
\end{rmk}
The category $\widetilde{S}_n \mathcal{C}$, built from a subtractive category, can itself be given the structure of a subtractive category.
\begin{lem}
The category $\widetilde{S}_n \mathcal{C}$ is a subtractive category. The cofibrations are given by functors $Y: \widetilde{\operatorname{Ar}}[n] \times [1] \to \mathcal{C}$ such that each restriction $Y((i,j)) : [1] \to \mathcal{C}$ is a cofibration. The fibrations are given similarly. The subtraction sequences are level-wise subtraction sequences.
\end{lem}
\begin{proof}
This is entirely analogous to the proof for $F_1 \mathcal{C}$.
\end{proof}
\begin{rmk}
This will allow us to iterate the $S_\bullet$-construction.
\end{rmk}
We can finally define our \textit{space} $K(\mathcal{C})$ --- more machinery is needed to prove that it may be delooped.
\begin{defn}
Let $\mathcal{C}$ be an SW-category. We define the space $\underline{K}(\mathcal{C})$ to be $\Omega |w_\bullet \widetilde{S}_\bullet \mathcal{C}|$, where $|-|$ denotes the realization of a bisimplicial set. Here, $w \mathcal{C}$ denotes the subcategory of all objects with morphisms the weak equivalences, and $w_\bullet \mathcal{C}$ denotes the simplicial nerve of that category.
\end{defn}
Of course, the salient property of this space holds.
\begin{prop}
$\pi_0 \underline{K}(\mathcal{C}) = K_0 (\mathcal{C})$.
\end{prop}
\begin{proof}
This follows by standard methods; see, for example, \cite{weibel}. For any simplicial space (or bisimplicial set) $X_\bullet$, we can compute $\pi_1 |X_\bullet|$ via generators and relations:
\[
\pi_1 |X_\bullet| = \langle \pi_0 (X_1) \rangle / (d_1 (x) = d_2 (x) + d_0 (x)) \qquad x \in \pi_0 (X_2).
\]
Here our simplicial space is $X_n = |w_\bullet \widetilde{S}_n \mathcal{C}|$. Therefore, $\pi_0 (X_1)$ is the set of objects of $\mathcal{C}$ up to weak equivalence, and $X_2$ is the set of equivalence classes of subtraction sequences. For a subtraction sequence $X \hookrightarrow Y \xleftarrow{\circ} Y - X$, call it $c$, we have $d_0 (c) = Y- X$, $d_1 (c) = Y$ and $d_2 (c) = X$. Therefore, the relations are
\[
[Y] = [X] + [Y-X].
\]
\end{proof}
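For $\mathcal{C} = \mathbf{Var}_{/k}$ this recovers the familiar scissor relations.
\begin{example}
In $K_0 (\mathbf{Var}_{/k})$, the subtraction sequence $\{\infty\} \hookrightarrow \mathbb{P}^1 \xleftarrow{\circ} \mathbb{A}^1$ gives the relation
\[
[\mathbb{P}^1] = [\operatorname{pt}] + [\mathbb{A}^1],
\]
which in the Grothendieck ring of varieties is usually written as $1 + \mathbb{L}$, with $\mathbb{L} = [\mathbb{A}^1]$ the Lefschetz class.
\end{example}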
\begin{rmk}
We can define $\underline{K}(\mathcal{C})$ for $\mathcal{C}$ any category with subtraction. We have not yet used any other structure. However, in order that the space $\underline{K}(\mathcal{C})$ deloop to a spectrum $K(\mathcal{C})$, we will need $\mathcal{C}$ to be a subtractive or SW-category.
\end{rmk}
We now produce $K(\mathcal{C})$ as a symmetric spectrum by iterating the $\widetilde{S}_\bullet$ construction in an appropriate way; that is, the following is what one gets if we consider $\widetilde{S}_\bullet \mathcal{C}$ as an SW-category and iterate the $\widetilde{S}_\bullet$-construction. We will show in the subsequent section that this is a quasi-fibrant symmetric spectrum \cite{hovey_shipley_smith, mmss}.
\begin{defn} \label{iterated_S_dot_construction}
Let $\mathcal{C}$ be an SW-category. We consider the category of functors
\[
F : \widetilde{\text{Ar}}[n_1] \times \cdots \times \widetilde{\text{Ar}}[n_k] \to \mathcal{C}
\]
and write each object of $\widetilde{\text{Ar}}[n_\ell]$ as $(i_\ell, j_\ell)$. Let $S^{(k)}_{n_1, \dots, n_k} \mathcal{C}$ be the full subcategory consisting of functors $F$ such that
\begin{enumerate}
\item $F ((i_1, j_1), \dots, (i_k, j_k)) = \ast$ whenever $i_\ell = j_\ell$ for some $\ell$.
\item The subfunctor $F((0, i_1), \dots, (0,i_k)): [n_1] \times \cdots \times [n_k] \to \mathcal{C}$ defines a cube such that every sub-face is cartesian.
\item Given $((i_1, j_1), \dots, (i_k, j_k))$ in $\widetilde{\text{Ar}}[n_1] \times \cdots \times \widetilde{\text{Ar}}[n_k]$, each $1 \leq \ell \leq k$, and every $j_\ell \leq m \leq n_\ell$, the sequence
\[
\xymatrix{
F((i_1, j_1), \dots, (i_\ell, j_\ell), \dots, (i_k, j_k)) \ar[r] & F((i_1, j_1), \dots, (i_\ell, m), \dots, (i_k, j_k))\\
& F((i_1, j_1), \dots, (j_\ell, m), \dots, (i_k, j_k)) \ar[u]
}
\]
is a subtraction sequence.
\item Given $i_\ell < n < j_\ell < m$ the diagram
\[
\xymatrix{
F((i_1, j_1), \dots, (i_\ell, j_\ell), \dots, (i_k, j_k)) \ar[r] & F((i_1, j_1), \dots, (i_\ell, m), \dots, (i_k, j_k))\\
F((i_1, j_1), \dots, (n, j_\ell), \dots, (i_k, j_k)) \ar[u]\ar[r] & F((i_1, j_1), \dots, (n,m), \dots, (i_k, j_k))\ar[u]
}
\]
is cartesian.
\end{enumerate}
\end{defn}
Using this, we may define the symmetric spectrum $K(\mathcal{C})$.
\begin{defn}\label{K_symmetric_spectrum}
Let $\mathcal{C}$ be an SW-category and define
\[
K(\mathcal{C})(k) = |N_\bullet (w S^{(k)}_{\bullet, \dots, \bullet} \mathcal{C})|.
\]
This space has a $\Sigma_k$-action given by permuting the simplicial directions.
\end{defn}
\section{Additivity}
The slogan for algebraic $K$-theory is that it is the universal machine for splitting exact sequences. A more precise statement is that the ``Additivity Theorem'' holds and that $K$-theory is the universal functor for which this theorem holds. This was recently proven in \cite{blumberg_gepner_tabuada,barwick}, though it has been a guiding principle of the field since its inception. As one would expect, almost every other standard property of $K$-theory follows from additivity \cite{staffeldt}. In our situation, we cannot hope to prove the full array of theorems that additivity usually provides; we settle for using it to prove that $K(\mathbf{Var}_{/k})$ is an infinite loop space.
The additivity theorem for SW-categories is as follows; this section is devoted to its proof.
\begin{thm}[Additivity]\label{additivity}
Let $\mathcal{C}$ be an SW-category. Consider the map
\[
A = (s, q): F^+_1(\mathcal{C}) \to \mathcal{C} \times \mathcal{C}.
\]
Upon applying $\widetilde{S}_\bullet$ we get a homotopy equivalence of simplicial sets
\[
\widetilde{S}_\bullet F^+_1(\mathcal{C}) \xrightarrow{\sim} \widetilde{S}_\bullet \mathcal{C} \times \widetilde{S}_\bullet \mathcal{C}.
\]
\end{thm}
For Waldhausen categories, the cleanest proof of additivity is due to McCarthy \cite{mccarthy}. We will mimic his proof to show that additivity holds for SW-categories; the key point is that while pushouts are used extensively in the proof, \textit{only} pushouts where \textit{both} legs are cofibrations are needed. As pointed out in Section 2, these are exactly the types of pushouts that we do have.
We pause here to recall the definition of a simplicial homotopy, since it will be used frequently below.
\begin{defn}\label{simplicial_homotopy}
Let $X, Y$ be simplicial sets and $f, g: X \to Y$ simplicial maps. A \textbf{simplicial homotopy} is a simplicial map $X \times \Delta^1 \to Y$ such that restricting to the first vertex of $\Delta^1$ gives $f$ and restricting to the second vertex gives $g$.
These requirements can be packaged combinatorially as follows. A simplicial homotopy is a family of maps $h_i: X_n \to Y_{n+1}$ with $0 \leq i \leq n$ for each $n$. The following identities are required to hold:
\[
\begin{cases}
d_0 h_0 = f \\
d_{n+1} h_n = g
\end{cases}
\ \
\begin{cases}
d_i h_j = h_{j-1} d_i & i < j\\
d_{j+1} h_{j+1} = d_{j+1} h_j & \\
d_i h_j = h_j d_{i-1} & i > j+1
\end{cases}
\ \
\begin{cases}
s_i h_j = h_{j+1} s_i & i \leq j \\
s_i h_j = h_j s_{i-1} & i > j
\end{cases}
\]
\end{defn}
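Geometrically, the maps $h_i$ are the restrictions of the homotopy $X \times \Delta^1 \to Y$ to the $(n+1)$-simplices of the standard prism decomposition of $\Delta^n \times \Delta^1$: the simplex indexed by $i$ has vertices $(0,0), \dots, (i,0), (i,1), \dots, (n,1)$, and the identities above record how these simplices glue along their faces and interact with the degeneracies.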
We begin with a useful construction.
\begin{defn}\label{mixing_category}
Let $\mathcal{C}, \mathcal{D}$ be categories with subtraction and let $f: \mathcal{C} \to \mathcal{D}$ be an exact functor. Define a bisimplicial set $\mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D}$ by setting \[(\mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D})([m],[n])\] to be the set of pairs of diagrams in $\widetilde{S}_m \mathcal{C}$ and $\widetilde{S}_{m+n} \mathcal{D}$ (we are omitting the rows below the first):
\begin{equation}\label{mixing_diagram}
\xymatrix{
X_0 \ar@{^{(}->}[r] & X_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & X_m & & \\
Y_0 \ar@{^{(}->}[r] & Y_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & Y_m \ar@{^{(}->}[r]& \cdots\ar@{^{(}->}[r] & Y_{m+n}
}
\end{equation}
such that $f(X_i) = Y_i$ and $f(X_i \to X_{i+1}) = (Y_i \to Y_{i+1})$. The face and degeneracy maps are given by composition and repetition, respectively.
\end{defn}
\begin{defn}
Let $X_\bullet$ be a simplicial set. Then $X^R$ will denote a bisimplicial set $X^R ([m],[n]) = X([n])$. Similarly, $X^L$ will denote the bisimplicial set $X^L([m],[n]) = X([m])$.
\end{defn}
\begin{defn}
We define a bisimplicial map $\rho: \mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D} \to \widetilde{S}_\bullet \mathcal{D}^R$ by taking (\ref{mixing_diagram}) to
\[
Y_{m+1} - Y_{m+1} \hookrightarrow Y_{m+2}-Y_{m+1} \hookrightarrow \cdots \hookrightarrow Y_{m+n}-Y_{m+1}
\]
\end{defn}
\begin{prop}\cite[p.326]{mccarthy}
The following are equivalent
\begin{enumerate}
\item $\widetilde{S}_\bullet f: \widetilde{S}_\bullet \mathcal{C} \to \widetilde{S}_\bullet \mathcal{D}$ is a homotopy equivalence
\item The map $\rho : \mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D} \to \widetilde{S}_\bullet \mathcal{D}^R$ is a homotopy equivalence.
\end{enumerate}
\end{prop}
\begin{proof}
Consider the commutative diagram of bisimplicial sets:
\[
\xymatrix{
\widetilde{S}_\bullet \mathcal{D}^R \ar@{=}[d] & \mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D}\ar[l] \ar[d]^f \ar[r]^{\mathbf{1}} & \widetilde{S} _\bullet \mathcal{C}^L\ar[d]^f \\
\widetilde{S}_\bullet \mathcal{D}^R & \mathcal{D} \otimes_{\widetilde{S}_\bullet \operatorname{Id}} \mathcal{D} \ar[r]^{\mathbf{2}}\ar[l]_{\mathbf{3}} & \widetilde{S}_\bullet \mathcal{D}^L
}
\]
The map labelled $\mathbf{1}$ is obtained by forgetting the ``$\mathcal{D}$''-portion of $\mathcal{C}\otimes_{S_\bullet f} \mathcal{D}$. We now fix $m$ to obtain maps between simplicial sets (indexed by $n$)
\[
(\mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D})([m],[n]) \to \widetilde{S}_\bullet \mathcal{C}^L ([m],[n]) = \widetilde{S}_\bullet \mathcal{C} ([m]).
\]
The simplicial set on the right is constant. The simplicial set on the left is homotopy equivalent to $\widetilde{S}_m \mathcal{C}$. To see this, fix the $\widetilde{S}_m \mathcal{C}$ portion of the pair and consider the resulting simplicial set. It is contractible by the same argument that contracts the nerve of a category with an initial object. Thus, levelwise, $\mathcal{C}\otimes_{\widetilde{S}_\bullet f} \mathcal{D}$ and $\widetilde{S}_\bullet \mathcal{C}^L$ are equivalent, and thus homotopy equivalent as bisimplicial sets by the realization lemma. The maps $\mathbf{2}$ and $\mathbf{3}$ are shown to be homotopy equivalences in exactly the same way.
Thus, the vertical right arrow will be a homotopy equivalence if and only if the upper left horizontal arrow is a homotopy equivalence.
\end{proof}
This reduces the study of homotopy equivalences $\widetilde{S}_\bullet \mathcal{C} \to \widetilde{S}_{\bullet} \mathcal{D}$ to the study of the maps $\rho$.
Now define a self-map
\[
E_n: (\mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D})(-,[n]) \to (\mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D})(-,[n])
\]
via a subtraction procedure. It sends a diagram (\ref{mixing_diagram}) to (again, omitting the rows below the first)
\[
\xymatrix@R=.3cm{
\emptyset \ar@{=}[r] & \cdots\ar@{=}[r] & \emptyset & & & \\
\emptyset \ar@{=}[r] & \cdots \ar@{=}[r] & \emptyset \ar@{=}[r]& Y_{m+1} - Y_{m+1} \ar@{^{(}->}[r] & Y_{m+2} - Y_{m+1} \ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & Y_{m+n} - Y_{m+1} }
\]
The above proposition implies
\begin{cor}\cite[p.326]{mccarthy}
If the $E_n$ are homotopy equivalences for all $n$, then $\widetilde{S}_\bullet f: \widetilde{S}_\bullet \mathcal{C} \to \widetilde{S}_\bullet \mathcal{D}$ is a homotopy equivalence.
\end{cor}
\begin{proof}
For a fixed $n$ define a map $I_n: \widetilde{S}_\bullet \mathcal{D}^L ([m],[n]) \to \mathcal{C} \otimes_{\widetilde{S}_\bullet f} \mathcal{D} ([m],[n])$ by sending
\[
\emptyset = Y_0 \hookrightarrow Y_1 \hookrightarrow \cdots \hookrightarrow Y_n
\]
to
\[
\xymatrix@R=.3cm{
Y_0 \ar@{=}[r] & \cdots & Y_0 \\
Y_0 \ar@{=}[r] & \cdots & Y_0 \ar@{=}[r] & Y_0 \ar@{^{(}->}[r] & Y_1 \ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & Y_n
}
\]
Note that $\rho \circ I_n = \operatorname{Id}$ and $I_n \circ \rho = E_n$. If $E_n$ are homotopy equivalences, then so are $\rho$ and $I_n$. But if $\rho$ is a homotopy equivalence then $\widetilde{S}_\bullet f$ is as well.
\end{proof}
Let $A: F^+_1(\mathcal{C}) \xrightarrow{(s,q)} \mathcal{C} \times \mathcal{C}$ be the functor defined by the additivity functors (Defn. \ref{stq}). In order to apply the techniques above to this map, we need to consider the category
\[
F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2
\]
and the diagrams in this category. Here is their typical form (as always, omitting the rows after the first):
\begin{equation}\label{additivity_diagram}
\xymatrix@C=.5cm@R=.5cm{
\emptyset\ar@{=}[r] & A_0 \ar@{^{(}->}[r]\ar@{^{(}->}[d] & A_1 \ar@{^{(}->}[r]\ar@{^{(}->}[d] & \cdots \ar@{^{(}->}[r] & A_m\ar@{^{(}->}[d]\\
\emptyset \ar@{=}[r]& C_0 \ar@{^{(}->}[r] & C_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & C_m\\
\emptyset \ar@{=}[r]&B_0\ar@{^{(}->}[r]\ar[u]_{\circ} & B_1\ar@{^{(}->}[r] \ar[u]_{\circ} & \cdots\ar@{^{(}->}[r] & B_m \ar[u]_{\circ}\\
\emptyset \ar@{=}[r]& A_0\ar@{^{(}->}[r] & A_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & A_m\ar@{^{(}->}[r] & S_0\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & S_n\\
\emptyset\ar@{=}[r] & B_0\ar@{^{(}->}[r] & B_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & B_m\ar@{^{(}->}[r] & T_0 \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & T_n
}
\end{equation}
\begin{rmk}\label{diagram_labelling}
In what follows, we will need to refer to the rows below the pictured rows in (\ref{additivity_diagram}) --- the pictured rows are the zeroth rows of a flag. The elements in the $k$th row will be referred to by $A_{k,l}$, $B_{k,l}$ and $C_{k,l}$.
\end{rmk}
\begin{rmk}
We note that the only difference between these diagrams and the diagrams that appear in \cite{mccarthy} is the fact that the arrows between the $B$s and $C$s go in opposite directions.
\end{rmk}
We will now show that $E_n$ for the functor $(s,q): F^+_1(\mathcal{C}) \to \mathcal{C} \times \mathcal{C}$ is a homotopy equivalence. Recall that $E_n$ will be a map
\[
E_n : F^+_1 (\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 \to F^+_1 (\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2
\]
To show that this is a weak equivalence, McCarthy defines a map of simplicial sets
\[
\Gamma: F^+_1 (\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2(-,[n]) \to F^+_1 (\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n])
\]
and shows
\begin{enumerate}
\item $\Gamma$ is a retraction onto some subspace $\mathcal{X} \subset F^+_1 (\mathcal{C}) \otimes_{\widetilde{S}_\bullet A} \mathcal{C}^2 (-,[n])$
\item $\Gamma \simeq \operatorname{Id}$
\item $E_n|_{\mathcal{X}} \simeq \operatorname{Id}_{\mathcal{X}}$
\item $E_n \circ \Gamma = E_n$
\end{enumerate}
Taken together, these imply that $E_n$ is a homotopy equivalence.
We will use exactly the same procedure here.
\begin{defn}
The map $\Gamma: F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n]) \to F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n])$ is defined by taking diagrams (\ref{additivity_diagram}) to diagrams of the following form (as always, omitting rows below the first):
\begin{equation}\label{En_diagram}
\xymatrix@C=.5cm@R=.5cm{
\emptyset \ar@{=}[r]& \emptyset \ar@{=}[r]\ar@{^{(}->}[d] & \emptyset \ar@{=}[r]\ar@{^{(}->}[d] & \cdots\ar@{=}[r] & \emptyset\ar@{^{(}->}[d]\\
\emptyset \ar@{=}[r] & B_0 \ar@{=}[d] \ar@{^{(}->}[r] & B_1\ar@{=}[d]\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & B_m\ar@{=}[d]\\
\emptyset \ar@{=}[r] & B_0 \ar@{^{(}->}[r] & B_1 \ar@{^{(}->}[r]& \cdots\ar@{^{(}->}[r] & B_m \\
\emptyset\ar@{=}[r] & \emptyset \ar@{=}[r]& \emptyset\ar@{=}[r] & \cdots\ar@{=}[r] & \emptyset\ar@{=}[r] & S_0 - S_0\ar@{^{(}->}[r] & S_1 - S_0 \ar@{^{(}->}[r]& \cdots\ar@{^{(}->}[r] & S_n - S_0 \\
\emptyset\ar@{=}[r] & B_0\ar@{^{(}->}[r] & B_1\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & B_m \ar@{^{(}->}[r]& T_0 \ar@{^{(}->}[r]& T_1 \ar@{^{(}->}[r]& \cdots\ar@{^{(}->}[r] & T_n
}
\end{equation}
\end{defn}
We have already defined $E_n$ in general above, but it is useful to spell out what it is in this context.
\begin{defn}
$E_n$ takes diagrams of the form (\ref{additivity_diagram}) to
\[
\xymatrix@C=.3cm@R=.3cm{
\emptyset \ar@{=}[r]\ar@{^{(}->}[d] & \cdots \ar@{=}[r] & \emptyset\ar@{^{(}->}[d]\\
\emptyset \ar@{=}[r] & \cdots \ar@{=}[r] & \emptyset\\
\emptyset \ar@{=}[r]\ar[u] & \cdots \ar@{=}[r] & \emptyset \ar[u]\\
\emptyset \ar@{=}[r] & \cdots\ar@{=}[r] & \emptyset \ar@{=}[r] & S_0 - S_0\ar@{^{(}->}[r] & S_1 - S_0\ar@{^{(}->}[r] & \cdots\ar@{^{(}->}[r] & S_n - S_0 \\
\emptyset \ar@{=}[r] & \cdots\ar@{=}[r] &\emptyset \ar@{=}[r] & T_0 - T_0\ar@{^{(}->}[r] & T_1 - T_0 \ar@{^{(}->}[r]& \cdots\ar@{^{(}->}[r] & T_n - T_0
}
\]
\end{defn}
\begin{defn}
Let $\mathcal{X} \subset F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n])$ denote the subspace in which all of the $A_i$ are $\emptyset$.
\end{defn}
\begin{prop}
$E_n|_{\mathcal{X}} \simeq \operatorname{Id}_{\mathcal{X}}$
\end{prop}
\begin{proof}
This follows by exactly the same argument that contracts the nerve of a category with a final object: one contracts the string of $B$s in (\ref{En_diagram}) onto $T_0$.
\end{proof}
We note that $\Gamma$ is a retraction of $F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n])$ onto $\mathcal{X}$. To complete the proof that $E_n$ is a homotopy equivalence, and thus the proof of additivity, we need to show that $\Gamma$ is homotopic to the identity.
This is done by producing an explicit simplicial homotopy
\[
h: (F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2(-,[n])) \times \Delta^1 \to F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 (-,[n])
\]
Recall that a simplicial homotopy can be expressed combinatorially (Defn. \ref{simplicial_homotopy}) via maps $h_i$. We fix $m$; for $e \in F^+_1(\mathcal{C}) \otimes_{S_\bullet A} \mathcal{C}^2 ([m],[n])$ (which, recall, is of the form (\ref{additivity_diagram})) we define $h_i (e)$, for $0 \leq i \leq m$, to be
\begin{equation}\label{main_homotopy_diagram}
\xymatrix@C=.2cm @R=.5cm{
0=A_0 \ar@{^{(}->}[r] \ar@{^{(}->}[d] & A_1 \ar@{^{(}->}[r] \ar@{^{(}->}[d] & \cdots & A_i \ar@{^{(}->}[d] \ar@{^{(}->}[r] & S_0 \ar@{=}[r] \ar@{^{(}->}[d] & S_0 \ar@{=}[r] \ar@{^{(}->}[d] & \cdots \ar@{=}[r] & S_0 \ar@{^{(}->}[d]\\
C_0 \ar@{^{(}->}[r] & C_1\ar@{^{(}->}[r] & \cdots & C_i \ar@{^{(}->}[r] & C_i \amalg_{A_i} S_0 \ar@{^{(}->}[r] & C_{i+1} \amalg_{A_{i+1}} S_0 & \cdots \ar@{^{(}->}[r] & C_m \amalg_{A_m} S_0\\
B_0 \ar[u]^{\circ} \ar@{^{(}->}[r] & B_1 \ar[u]^{\circ} \ar@{^{(}->}[r] & \cdots & B_i \ar[u]^{\circ} \ar@{=}[r] & B_i \ar[u]^{\circ} \ar@{^{(}->}[r] & B_{i+1} \ar@{^{(}->}[r] \ar[u]^{\circ} & \cdots \ar@{^{(}->}[r] & B_m \ar[u]^{\circ}\\
0= A_0 \ar@{^{(}->}[r] & A_1\ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & A_i \ar@{^{(}->}[r] & S_0 \ar@{=}[r] & S_0 \ar@{=}[r] & \cdots \ar@{=}[r] & S_0\\
0 = B_0 \ar@{^{(}->}[r] & B_1 \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & B_i \ar@{=}[r] & B_i \ar@{^{(}->}[r] & B_{i+1} \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & B_m
}
\end{equation}
Note that here we are using the existence of pushouts provided by Th. \ref{pushouts_exist}. This is one of the critical points where that fact is used.
Although we are not displaying the levels below the upper row in the diagrams above, we will need to reference the rows below. For the diagram $e$, we retain the conventions of Rmk. \ref{diagram_labelling}: the choices of subtraction in the diagram $e$ will be referred to by $A_{k,l}$, $B_{k,l}$ and $C_{k,l}$. For the diagram $h_i (e)$ we make the convention that the symbol $h_i (e)^A$ represents the flag corresponding to the first row in (\ref{main_homotopy_diagram}), and similarly for $h_i (e)^B$ and $h_i (e)^C$. Thus, the hidden parts of the flags are indexed by $h_i(e)^A_{k,l}, h_i(e)^B_{k,l}, h_i(e)^C_{k,l}$. We now explicitly identify these flags. For $i \geq 0$ define
\begin{align*}
h_i (e)^A_{k,l} &=
\begin{cases}
A_{k,l} & k, l \leq i \\
S_0 - A_{0,k} & k \leq i, l > i\\
\emptyset & \text{otherwise}
\end{cases} \\
h_i (e)^C_{k,l} &=
\begin{cases}
C_{k,l} & k, l \leq i \\
C_{k,l-1} \amalg_{A_{k,l-1}} h_i (e)^A_{k,l} & k < i, l > i\\
h_i(e)_{k,l} & k= i, l = i+1\\
C_{k,l-1} \amalg_{A_{k,l-1}} h_i(e)^A_{k,l} & k = i, l > i+1\\
h_i(e)^B_{k,l} & l, k > i
\end{cases} \\
h_i (e)^B_{k,l} &=
\begin{cases}
B_{k,l} & k, l \leq i \\
B_{k,i} & l = i+1, k = i+1, l \neq k \\
\emptyset & l = k = i+1\\
B_{k,l-1} & \text{otherwise}
\end{cases}
\end{align*}
The appendix depicts a few of these diagrams.
For the most part, the maps in (\ref{main_homotopy_diagram}) are clear. One that requires comment is the map in $h_i(e)$ from $B_{k,l}$ to $C_{k,l} \amalg_{A_{k,l}} (S_0 - A_k)$. This will be the composition
\[
B_{k,l} \xrightarrow{\cong} C_{k,l} - A_{k,l} \xrightarrow{\cong} C_{k,l} \amalg_{A_{k,l}} (S_0 - A_k) - (S_0 - A_k) \hookrightarrow C_{k,l} \amalg_{A_{k,l}} (S_0 - A_k)
\]
Each of these isomorphisms and inclusions is uniquely determined by data in $e$. The other maps that require comment are those from $h^B_{k,l}$ to $h^C_{k,l}$ --- whenever both of them are $B_{k,l}$ the map between them will be the identity.
We now have to verify two assertions. The first is that the flags in (\ref{main_homotopy_diagram}) satisfy the requirements of Defn. \ref{s_dot} and the second is that $h_i$ satisfies the relations of simplicial homotopy in Defn. \ref{simplicial_homotopy}.
For the first assertion, it is clear that the flags below the rows $h_i(e)^A$ and $h_i (e)^B$ remain of the form required by Defn. \ref{s_dot}. The following proposition verifies the statement for the $h_i(e)^C$ row.
\begin{prop}
For any $k, l, s$ with $k < l < s$
\[
h^C_i(e)_{k,l} \to h^C_i(e)_{k,s} \leftarrow h^C_i(e)_{l,s}
\]
is a subtraction sequence.
\end{prop}
\begin{proof}
We show this in the case $k = 0$. The other cases are dealt with similarly. We proceed by dividing this into the sub-cases $l, s \leq i$, $l \leq i, s > i$ and $l, s > i$.
For $k < l < s \leq i$, the statement follows since $C_{0,l} \to C_{0,s} \leftarrow C_{l,s}$ is a subtraction sequence.
For $l \leq i$, $i<s$ this is the statement that
\[
C_{0,l} \to C_{0,s-1} \amalg_{A_{0,s-1}} (S_0 - A_{0,0}) \leftarrow C_{l,s-1} \amalg_{A_{l,s-1}} (S_0 - A_{0,l})
\]
is a subtraction sequence. To see this, consider the diagram
\[
\xymatrix{
C_{k,l}\ar@{^{(}->}[d] & A_{k,l} \ar@{^{(}->}[d]\ar@{^{(}->}[r]\ar@{_{(}->}[l] & A_{k,l}\ar@{^{(}->}[d]\\
C_{k,s-1} & A_{k,s-1}\ar@{^{(}->}[r]\ar@{_{(}->}[l] & S_0 - A_{0,k}\\
C_{l,s-1} \ar[u]^\circ & A_{l,s-1} \ar@{^{(}->}[r]\ar@{_{(}->}[l]\ar[u]^\circ & S_0 - A_{0,l} \ar[u]^\circ
}
\]
The top squares are cartesian (by definition of the $A$ and $C$ flags). Thus, this satisfies Axiom 3 of Defn. \ref{subtractive_category}, and the statement follows.
For $i < l, s$ the statement is that
\[
C_{0,l-1} \amalg_{A_{0,l-1}} S_0 \hookrightarrow C_{0,s-1} \amalg_{A_{0,s-1}} S_0 \leftarrow B_{l-1,s-1}
\]
is a subtraction sequence. To see this, we consider the diagram induced by functoriality
\[
\xymatrix{
S_0 \ar@{=}[r] \ar@{^{(}->}[d] & S_0 \ar@{^{(}->}[d] & \emptyset \ar[l]\ar@{^{(}->}[d]\\
C_{0,l-1} \amalg_{A_{0,l-1}} S_0 \ar@{^{(}->}[r] & C_{0,s-1} \amalg_{A_{0,s-1}} S_0 & B_{l-1,s-1} \ar[l]^{\circ}\\
B_{0,l-1} \ar[u]^\circ \ar@{^{(}->}[r] & B_{0,s-1} \ar[u]^\circ & B_{l-1,s-1} \ar@{=}[u]\ar[l]_{\circ}
}
\]
The first and second columns are easily seen to be subtraction sequences, as are the top and bottom rows. This forces the middle row to be a subtraction sequence as well.
\end{proof}
We now verify that $h_i$ is a simplicial homotopy. Recall that this means that the following identities hold:
\[
\begin{cases}
d_0 h_0 = \Gamma \\
d_{n+1} h_n = \operatorname{Id}
\end{cases}
\ \
\begin{cases}
d_i h_j = h_{j-1} d_i & i < j\\
d_{j+1} h_{j+1} = d_{j+1} h_j & \\
d_i h_j = h_j d_{i-1} & i > j+1
\end{cases}
\ \
\begin{cases}
s_i h_j = h_{j+1} s_i & i \leq j \\
s_i h_j = h_j s_{i-1} & i > j
\end{cases}
\]
First, it is clear that $d_{m+1} h_m = \operatorname{Id}$. It is also clear that $d_0 h_0 = \Gamma$.
The identities involving degeneracy hold trivially.
The middle group of identities is not hard:
$\boxed{d_i h_j = h_{j-1} d_i}$ when $i < j$. This part only involves the $C_{k,l}$ and thus holds by the simplicial identities in the $C_{k,l}$ part of $e$.
$\boxed{d_{j+1} h_{j+1} = d_{j+1} h_j}$. This identity is clear from the definitions.
$\boxed{d_i h_j = h_j d_{i-1}}$ when $i > j$. Again, this is not difficult. The identity comes from the simplicial identities on the $B_{k,l}$ part of $e$ and the fact that pushouts are chosen functorially and based on maps in $e$. (See the appendix for a picture).
With these verifications, we know that $h_i$ is a simplicial homotopy, and this completes the proof of the additivity theorem.
\subsection{Delooping}
Of course, additivity is a stepping stone to delooping for us. From the $\widetilde{S}_\bullet$ construction, we can produce a map $K(\mathcal{C})(k) \to \Omega K(\mathcal{C})(k+1)$ (the construction is reviewed below). A consequence of additivity will allow us to show that this map is a weak equivalence, which exhibits $K(\mathcal{C})(1)$ as an infinite loop space, and $K(\mathcal{C})$ as a quasi-fibrant symmetric spectrum.
We will approach delooping as Waldhausen does. However, we need a definition first.
\begin{defn}\cite[Defn. 1.5.4]{waldhausen}
Let $P X_\bullet$ denote the simplicial path space of the simplicial set $X_\bullet$. Then for SW-categories $\mathcal{A}$ and $\mathcal{B}$ with an exact functor $f: \mathcal{A} \to \mathcal{B}$ we define $\widetilde{S}_n (f: \mathcal{A} \to \mathcal{B})$ via pullback:
\[
\xymatrix{
\widetilde{S}_n (f: \mathcal{A} \to \mathcal{B}) \ar[r]\ar[d] & (P\widetilde{S}_\bullet \mathcal{B})_{n+1} \ar[d]\\
\widetilde{S}_n \mathcal{A} \ar[r] & \widetilde{S}_n \mathcal{B}
}
\]
\end{defn}
For a simplicial path space $PX_\bullet$ there is a sequence of maps $X_1 \to PX_\bullet \to X_\bullet$, and since $PX_\bullet$ is contractible, on realization this gives a map $|X_1| \to \Omega |X_\bullet|$. For $\mathcal{C}$ an SW-category, we may consider
\[
w \mathcal{C} = (\widetilde{S}_\bullet \mathcal{C})_1 \to P(\widetilde{S}_\bullet \mathcal{C}) \to \widetilde{S}_\bullet \mathcal{C}
\]
and obtain a map $|w \mathcal{C}| \to \Omega|\widetilde{S}_\bullet \mathcal{C}|$. This map will in general not be an equivalence, but upon applying $\widetilde{S}_\bullet$ one more time, it will be. To make this precise, we need the following proposition.
\begin{prop}\cite[Prop. 1.5.5]{waldhausen}
Let $\mathcal{A}, \mathcal{B}$ be SW-categories. Suppose $f: \mathcal{A} \to \mathcal{B}$ is exact. Then
\[
w \widetilde{S}_\bullet \mathcal{B} \to w \widetilde{S}_\bullet \widetilde{S}_\bullet (f: \mathcal{A} \to \mathcal{B}) \to w \widetilde{S}_\bullet \widetilde{S}_\bullet \mathcal{A}
\]
is a fibration up to homotopy.
\end{prop}
\begin{proof}
This is exactly as in Waldhausen \cite[Prop. 1.5.5]{waldhausen}.
\end{proof}
When $\mathcal{A} = \mathcal{B} = \mathcal{C}$ for an SW-category $\mathcal{C}$ and $f = \operatorname{Id}$, we have $w \widetilde{S}_\bullet \widetilde{S}_\bullet (\operatorname{Id}: \mathcal{C} \to \mathcal{C}) = P(w \widetilde{S}_\bullet \widetilde{S}_\bullet \mathcal{C})$, and we immediately obtain the following corollary.
\begin{cor}
The sequence
\[
w \widetilde{S}_\bullet \mathcal{C} \to P (w \widetilde{S}_\bullet \widetilde{S}_\bullet \mathcal{C}) \to w \widetilde{S}_\bullet \widetilde{S}_\bullet \mathcal{C}
\]
is a fibration sequence up to homotopy, i.e.
\[
|w \widetilde{S}_\bullet \mathcal{C}| \simeq \Omega |w \widetilde{S}_\bullet \widetilde{S}_\bullet \mathcal{C}|
\]
\end{cor}
Finally, we have
\begin{thm}
Let $\mathcal{C}$ be a subtractive Waldhausen category. Then $\underline{K}(\mathcal{C})$ is an infinite loop space. More precisely, $K(\mathcal{C})$ (see Defn. \ref{K_symmetric_spectrum}) is a quasi-fibrant symmetric spectrum.
\end{thm}
\begin{rmk}
We will denote the associated delooped spectrum by $K(\mathcal{C})$.
\end{rmk}
The main object of study is then obtained as a corollary:
\begin{cor}
Let $X$ be a scheme. There is a spectrum $K(\mathbf{Var}_{/X})$ such that $\pi_0 K(\mathbf{Var}_{/X})$ is the Grothendieck group of varieties over $X$.
\end{cor}
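We spell out what this says in degree zero. The group $\pi_0 K(\mathbf{Var}_{/X})$ is the free abelian group on isomorphism classes $[Y]$ of varieties over $X$, modulo the scissor relations
\[
[Y] = [Z] + [Y - Z]
\]
for every closed immersion $Z \hookrightarrow Y$ over $X$. For example, over a field $k$, the subtraction sequence $\{\infty\} \hookrightarrow \mathbf{P}^1 \leftarrow \mathbf{A}^1$ imposes the familiar relation $[\mathbf{P}^1] = [\mathrm{pt}] + [\mathbf{A}^1]$ in $K_0(\mathbf{Var}_{/k})$.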
The next subsection shows that we can do even better.
\subsection{Multiplicative Structure}
There is more structure on the category $\mathbf{Var}_{/k}$ than we have used thus far; in particular, there is a cartesian product: given $k$-varieties $X, Y$ we may form $X \times_k Y$. This is what produces the ring structure on $K_0 (\mathbf{Var}_{/k})$. It will also produce a homotopy-coherent product structure on $K(\mathbf{Var}_{/k})$: an $E_\infty$-ring structure.
Before going on, we introduce a useful construction in order to define products. We will follow Geisser-Hesselholt \cite{geisser_hesselholt} in defining products, and so follow them in defining an $S_\bullet$-construction appropriate to the task. The only modification is to consider $S^Q \mathcal{C}$ where $Q$ is a finite set. That is, instead of indexing on numbers, we index on finite sets. This serves to make the action by the symmetric group more transparent.
\begin{defn}
Let $Q$ be a finite set. Consider positive integers $n_i$ indexed on $Q$, i.e. where $i \in Q$. Then $\widetilde{S}^Q_{n_1, \dots, n_{|Q|}} \mathcal{C}$ is a functor from the arrow category
\[
F: \widetilde{\text{Ar}}[n_1]\times \cdots \times \widetilde{\text{Ar}}[n_{|Q|}] \to \mathcal{C}
\]
satisfying the same requirements as Defn. \ref{iterated_S_dot_construction}.
\end{defn}
We now define
\begin{defn}
Let $\mathcal{C}$ be an SW-category. The $K$-theory spectrum is given by
\[
K(\mathcal{C})(k) = |w_\bullet S^Q_\bullet \mathcal{C}|
\]
with $Q = \{1, \dots, k\}$.
\end{defn}
We want to introduce a product structure on $K(\mathcal{C})$ from the product structure on $\mathcal{C}$. This can be done by exact analogy to the case of Waldhausen categories explained carefully in \cite{blumberg_mandell_koszul,geisser_hesselholt}. The structure necessary on $\mathcal{C}$ is that it be a permutative category (see, e.g. \cite{elmendorf_mandell} for an introduction to permutative categories) and that the product behave well with respect to subtractive structure (see Defn. \ref{bi_exact} below). Typically, however, we are given a symmetric monoidal structure, not a permutative structure on $\mathcal{C}$. Luckily, this presents no difficulty as symmetric monoidal categories can always be rigidified to \textit{equivalent} permutative categories \cite{isbell}. Since this procedure produces an equivalence of categories, the SW-structure may be carried along the equivalence.
The requirement that the product structure interact nicely with the subtractive structure amounts to the following.
\begin{defn}\label{bi_exact}
Let $\mathcal{C}$ be a permutative SW-category. Then a symmetric monoidal structure $\otimes: \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is \textbf{biexact} if
\begin{enumerate}
\item $X \otimes \emptyset$ and $\emptyset \otimes X$ are both $\emptyset$
\item $X \otimes (-)$ and $(-) \otimes X$ are exact functors
\item For $X \to X'$ and $Y \to Y'$ cofibrations the pushout-product
\[
X' \otimes Y \amalg_{X \otimes Y} X \otimes Y' \to X' \otimes Y'
\]
is a cofibration.
\end{enumerate}
\end{defn}
We then have (see \cite[p.40]{geisser_hesselholt})
\begin{defn}
Let $\mathcal{C}$ be a permutative SW-category with biexact product. There is an induced product
\[
\widetilde{S}^Q_\bullet \mathcal{C} \times \widetilde{S}^{Q'}_\bullet \mathcal{C} \to \widetilde{S}^{Q \amalg Q'}_{\bullet} \mathcal{C}
\]
given by amalgamating the morphisms in the arrow categories. This gives a $\Sigma_m \times \Sigma_n$-equivariant map
\[
K(\mathcal{C})_m \times K(\mathcal{C})_n \to K(\mathcal{C})_{m+n}
\]
which descends to
\[
K(\mathcal{C})_m \wedge K(\mathcal{C})_n \to K(\mathcal{C})_{m+n}.
\]
\end{defn}
\begin{thm}\cite[Th 2.8]{blumberg_mandell_koszul}\cite[Prop. 6.1.1]{geisser_hesselholt}
Let $\mathcal{C}$ be a symmetric monoidal SW-category with $\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ biexact. Let $\overline{\mathcal{C}}$ denote the rigidification of $\mathcal{C}$. Then $K(\overline{\mathcal{C}}) \simeq K(\mathcal{C})$ is an $E_\infty$-ring spectrum.
\end{thm}
Of course, we would like this result for $\mathcal{C} = \mathbf{Var}_{/k}$. That means that we have to show that the cartesian product is biexact. It is clear that properties 1 and 2 of biexactness hold. Property 3 is the content of the proposition below.
\begin{prop}
Let $X\hookrightarrow X'$ and $Y \hookrightarrow Y'$ be cofibrations of varieties. Then the pushout-product
\[
X \times Y' \amalg_{X \times Y} X' \times Y \to X' \times Y'
\]
is a cofibration.
\end{prop}
\begin{proof}
The diagram
\[
\xymatrix{
X\times Y \ar[d]\ar[r] & X' \times Y \ar[d]\\
X \times Y' \ar[r] & X' \times Y'
}
\]
is cartesian. Since we have verified the axioms for $\mathbf{Var}_{/k}$, this means the pushout-product of this diagram is a cofibration.
\end{proof}
\begin{cor}
The usual product induces a pairing $\mathbf{Var}_{/k} \times \mathbf{Var}_{/k} \to \mathbf{Var}_{/k}$ which descends to a product on $K(\mathbf{Var}_{/k})$. Thus, $K(\mathbf{Var}_{/k})$ is an $E_\infty$-ring spectrum.
\end{cor}
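On homotopy groups in degree zero, this $E_\infty$-structure recovers the classical ring structure on the Grothendieck group of varieties:
\[
[X] \cdot [Y] = [X \times_k Y] \in K_0 (\mathbf{Var}_{/k}),
\]
with unit $[\operatorname{Spec} k]$.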
\section{Maps out of $K(\mathbf{Var}_{/k})$}
We come to the main point of the paper, which is to produce derived motivic measures, i.e. maps out of $K(\mathbf{Var}_{/k})$. Even the structure of $K_0 (\mathbf{Var}_{/k})$ is difficult to get one's hands on, and the progress made thus far has been through the use of motivic measures (see, e.g. \cite{larsen_lunts} for a beautiful example). In order to understand the structure of the higher homotopy groups of $K(\mathbf{Var}_{/k})$, it thus seems necessary to produce higher motivic measures. These maps take the form of spectrum maps $K(\mathbf{Var}_{/k}) \to R$ where $R$ is any spectrum. Given a map of this form, we can take components to obtain $K_0 (\mathbf{Var}_{/k}) \to \pi_0 R$, which is a classical motivic measure. As a first attempt at producing derived motivic measures, we can thus ask for ones that lift known classical motivic measures. In this section, we will lift a number of classical motivic measures to such spectrum maps. This shows that in many known cases, classical motivic measures are the shadow of a much richer homotopical picture.
Before we begin, an example illustrates the issue we will contend with:
\begin{example}
Consider the category of complex varieties $\mathbf{Var}_{/\mathbf{C}}$. There is a motivic measure from $K_0 (\mathbf{Var}_{/\mathbf{C}})$ to $K_0(\mathbf{Vect}_{\mathbf{Q}})$ obtained by taking compactly supported cohomology with $\mathbf{Q}$-coefficients. For subtraction sequences $Z \hookrightarrow X \leftarrow X- Z$ this procedure is covariant with respect to closed inclusions and contravariant with respect to open inclusions, and yields long exact sequences
\[
\cdots \to H^i_c (Z) \to H^i_c (X) \to H^i_c (X-Z) \to H^{i+1}_c (X) \to \cdots
\]
and so if we assign
\[
X \mapsto \chi(X) : = \sum [H^i_c (X;\mathbf{Q})] \in K_0 (\mathbf{Vect}_{\mathbf{Q}})
\]
we get a well-defined motivic measure.
\end{example}
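The cut-and-paste behaviour of $\chi$ can be checked numerically on simple examples. The sketch below is ours (the helper names are hypothetical); it assumes only the standard fact $\chi_c(\mathbf{A}^n) = 1$ together with the scissor relation $\chi_c(X) = \chi_c(Z) + \chi_c(X - Z)$, and computes $\chi_c(\mathbf{P}^n)$ from the stratification $\mathbf{P}^n = \mathbf{A}^n \sqcup \mathbf{P}^{n-1}$.

```python
# Compactly supported Euler characteristics via the scissor relation
# chi_c(X) = chi_c(Z) + chi_c(X - Z) for a closed subvariety Z of X.
# The only input is chi_c(A^n) = 1 for complex affine n-space.

def chi_affine(n):
    """chi_c of affine n-space; equals 1 for every n >= 0."""
    return 1

def chi_projective(n):
    """chi_c(P^n), computed from the stratification P^n = A^n ⊔ P^(n-1)."""
    if n < 0:
        return 0  # the empty variety
    return chi_affine(n) + chi_projective(n - 1)

def chi_torus(n):
    """chi_c(G_m^n); G_m = A^1 - {pt} gives chi_c(G_m) = 1 - 1 = 0."""
    chi_gm = chi_affine(1) - chi_affine(0)
    return chi_gm ** n

print(chi_projective(2))  # 3, matching chi(CP^2)
print(chi_torus(3))       # 0
```

The recursion is exactly the additivity that makes $\chi$ descend to $K_0(\mathbf{Var}_{/\mathbf{C}})$.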
To obtain a map $K(\mathbf{Var}_{/k}) \to K(\mathcal{C})$ where $\mathcal{C}$ is a Waldhausen category, we need a map from the simplicial set $i \widetilde{S}_\bullet \mathbf{Var}_{/k}$ into the simplicial set $w S_\bullet \mathcal{C}$. In order to have such maps, we will have to use functors that behave differently with respect to open and closed inclusions, because of the differences in vertical arrows in the respective $S_\bullet$-constructions. In fact, we will have to deal with functors that are only \textit{really} functors on the subcategories of closed inclusions and of open inclusions, respectively.
The definition below is inspired by proper base change theorems in algebraic geometry. It was suggested to the author by Jesse Wolfson. He also pointed out that it is quite close to \cite[Defn. 3.3]{getzler}.
\begin{defn}\label{w_exact}
Let $\mathcal{C}$ be an SW-category and let $\mathcal{W}$ be a Waldhausen category. We define a \textbf{W-exact functor} from $\mathcal{C}$ to $\mathcal{W}$ to be a pair of functors $(F_!, F^!)$ such that
\begin{enumerate}
\item $F_!$ is a functor $F_!: \mathbf{co}(\mathcal{C}) \to \mathcal{W}$. For $i$ a map we often denote $F_! (i)$ by $i_!$.
\item $F^!$ is a functor $F^!: \mathbf{fib}(\mathcal{C})^{\text{op}} \to \mathcal{W}$. For $j$ a map we often denote $F^!(j)$ by $j^!$.
\item $F_! (X) = F^! (X)$ for $X \in \mathcal{C}$. We denote the common value by $F(X)$.
\item (\textbf{base change}) The cartesian diagram in $\mathcal{C}$
\[
\xymatrix{
X \ar@{^{(}->}[d]_i \ar[r]^j_{\circ} & Z \ar@{^{(}->}[d]^{i'}\\
Y \ar[r]^{\circ}_{j'} & W
}
\]
produces a diagram
\[
\xymatrix{
F(X) \ar[d]_{i_!} & F(Z) \ar[d]^{(i')_!} \ar[l]_{j^!}\\
F(Y) & F(W) \ar[l]^{(j')^!}
}
\]
and we require that the diagram commute, i.e.
\[
i_! \circ j^! = (j')^! \circ (i')_!
\]
\item (\textbf{excision}) For a subtraction sequence
\[
\xymatrix{
X \ar@{^{(}->}[r]^i & Y & Y - X \ar[l]^{\circ}_j
}
\]
the induced sequence
\[
F (X) \xrightarrow{i_!} F (Y) \xrightarrow{j^!} F(Y - X)
\]
is a cofiber sequence in $\mathcal{W}$.
\end{enumerate}
For ease, we will write a W-exact functor as $(F_!, F^!): \mathcal{C} \to \mathcal{W}$ with the understanding that there is no underlying functor on the category $\mathcal{C}$.
\end{defn}
We record the following consequence of the definition.
\begin{prop}\label{pseudoexact}
Given a W-exact functor, there is a map of simplicial sets $i \widetilde{S}_\bullet \mathcal{C} \to w S_\bullet \mathcal{W}$ which induces a map of spectra \[K(\mathcal{C}) \to K(\mathcal{W}).\]
\end{prop}
\begin{proof}
Consider an $n$-simplex $X \in \widetilde{S}_n \mathcal{C}$. Recall (Defn. \ref{s_dot}) that this means that $X$ is a functor $X: \widetilde{\operatorname{Ar}}[n] \to \mathcal{C}$ such that $X_{j, j} = \emptyset$ and every sub-diagram $X_{i,j} \to X_{i, k} \leftarrow X_{j, k}$ is a subtraction sequence.
Apply $F_!$ to every cofibration and $F^!$ to every fibration in the diagram $X$. We note that by definition of $W$-exact functor, $F(X_{i,j}) \to F(X_{i,k})$ will be a cofibration in $\mathcal{W}$ and
\[
F(X_{i,j}) \to F(X_{i,k}) \to F(X_{j,k})
\]
will be a cofiber sequence in $\mathcal{W}$. Thus, the image of $F$ lies in $S_\bullet \mathcal{W}$.
\end{proof}
\begin{comment}
\begin{proof}
It is much easier to consider the proof by example. Consider a flag in $\widetilde{S}_3 \mathcal{C}$ where we let $i_{a,b}$ denote the $a$th map at quotient level $b$, and $j_{a,b}$ denotes the map $a$th map from quotient level $b$ to the one above (note that $i$ is a closed inclusion and $j$ is an open inclusion):
\[
\xymatrix @C=.5cm@R=.5cm{
X_0 \ar[r]^{i_{0}} & X_1 \ar[r]^{i_{1}} & X_2\ar[r]^{i_2} & X_3\\
& X_1 - X_0 \ar[r]^{i_{1,0}}\ar[u]_{j_{1,0}} & X_2 - X_0\ar[r]^{i_{2,0}}\ar[u]_{j_{2,0}} & X_3 - X_0\ar[u]_{j_{3,0}}\\
& & X_2 - X_1\ar[u]_{j_{2,1}}\ar[r]^{i_{2,1}} & X_3 - X_1 \ar[u]_{j_{3,1}}\\
& & & X_3 - X_2\ar[u]_{j_{3,2}}
}
\]
We may apply $F_!$ to the closed inclusions and $F^!$ to open inclusions to obtain the flags below.
\[
\xymatrix@R=1cm{
F (X_0) \ar[r]^{F_! (i_0)} & F (X_1)\ar[d]_{ F^!(j_{1,0})} \ar[r]^{F_! (i_1)}& F(X_2)\ar[d]_{ F^! (j_{2,0})}\ar[r]^{F_! (i_2)} & F (X_3)\ar[d]_{F^! (j_{3,0})} \\
& F(X_1 - X_0)\ar[r]^{F_!(i_{1,0})} & F (X_2 - X_0) \ar[d]_{F^! (j_{2,0})}\ar[r]^{F_! (i_{2,0})}& F (X_3 - X_0)\ar[d]_{F^!(j_{3,1})}\\
& & F (X_2 - X_1)\ar[r]^{F_! (i_{2,1})} & F(X_3 - X_1)\ar[d]_{F^! (j_{3,2})} \\
& & & F (X_3 - X_2)
}
\]
This diagram commutes by base change for $W$-exact functors. Also, $F_!$ and $F^!$ are functors, it is clear that the simplicial maps are compatible. Furthermore, by the axioms we see that every map
\[
F (X_i) \to F (X_j) \to F (X_j - X_i)
\]
is a cofiber sequence. Altogether, this means that the image flag is an object of $S_n \mathcal{C}$.
\end{proof}
\end{comment}
We also need a dual definition to produce maps \textit{from} a Waldhausen category to an SW-category. This situation seems to arise less commonly in practice, but will be useful below (Prop. \ref{splitting}).
\begin{defn}
Let $\mathcal{W}$ be a Waldhausen category and $\mathcal{C}$ an SW-category. An \textbf{op-W-exact functor} is a pair of functors $(G_\ast, G^\ast)$ such that
\begin{enumerate}
\item $G_\ast$ is a functor $G_\ast : \text{co}(\mathcal{W}) \to \mathcal{C}$
\item $G^\ast$ is a functor $G^\ast: \text{fib}(\mathcal{W})^{\text{op}} \to \mathcal{C}$
\item For $X \in \mathcal{W}$, $G^\ast (X) = G_\ast (X)$. We refer to the common value as $G(X)$.
\item Given a diagram in $\mathcal{W}$
\[
\xymatrix{
X \ar@{->>}[d]_j\ar[r]^i & Z \ar@{->>}[d]_{j'}\\
Y \ar[r]_{i'} & W
}
\]
where the horizontal maps are cofibrations and vertical maps are fibrations, we get the corresponding diagram in $\mathcal{C}$
\[
\xymatrix{
G(X) \ar[r]^{i_\ast} & G(Z) \\
G(Y) \ar[u]_{j^\ast} \ar[r]_{(i')_\ast} & G(W)\ar[u]_{(j')^\ast}
}
\]
We require that the diagram commute, i.e.
\[
i_\ast \circ j^\ast = (j')^\ast \circ (i')_\ast
\]
\item Given a cofiber sequence in $\mathcal{W}$
\[
\xymatrix{
X \ar[r]^i & Y \ar@{->>}[r]^j & Z
}
\]
we get a subtraction sequence in $\mathcal{C}$
\[
G(X) \xrightarrow{i_\ast} G(Y) \xleftarrow{j^\ast} G(Z)
\]
\end{enumerate}
\end{defn}
\begin{rmk}
Because of the rigidity of the category of varieties, these will be harder to produce in practice; in fact, the only example we know is the one below.
\end{rmk}
By a proof entirely dual to that of Prop. \ref{pseudoexact}, we obtain
\begin{thm}\label{op_W_exact}
Given a Waldhausen category $\mathcal{C}$, an SW-category $\mathcal{W}$ and an op-$W$-exact functor $(G_\ast, G^\ast)$ we get a map on $K$-theory spectra
\[
K(\mathcal{W}) \to K(\mathcal{C}).
\]
\end{thm}
In the subsections below, we will have occasion to use the category of pointed finite sets a number of times, so it is worth defining before we get to work.
\begin{defn}
Let $\mathbf{FinSet}_+$ be the category of pointed finite sets. We choose a skeleton of it so that the objects are the pointed sets $[\mathbf{n}]_+$ with $n$ elements. Morphisms are maps preserving the basepoint, which we denote $\ast$.
\end{defn}
The salient property of this category for us is the following celebrated theorem.
\begin{thm}[Barratt-Priddy-Quillen]
Consider $\mathbf{FinSet}_+$ as a Waldhausen category by defining cofibrations to be injective maps. Then
\[
K(\mathbf{FinSet}_+) \simeq S
\]
where $S$ is the sphere spectrum.
\end{thm}
Thus, $\mathbf{FinSet}_+$ will be our category-level model of the sphere spectrum.
Below it will be necessary to view $\mathbf{FinSet}_+$ as a Waldhausen category and also to understand some of its combinatorics.
First, we note that $\mathbf{FinSet}_+$ can be made into a Waldhausen category by declaring that cofibrations are monomorphisms and weak equivalences are isomorphisms. We record the following definition for future use.
\begin{defn}
A map $p: [\mathbf{n}_1]_+ \to [\mathbf{n}_2]_+$ will be said to be a \textbf{fibration} if it arises as a pushout
\[
\xymatrix{
[\mathbf{n}_0]_+ \ar[r]^i\ar[d] & [\mathbf{n}_1]_+\ar[d]_p\\
\ast \ar[r] & [\mathbf{n}_2]_+
}
\]
where $i$ is a cofibration. More concretely, $p$ is a fibration if it is surjective and for each non-basepoint $i \in [\mathbf{n}_2]_+$, $p^{-1}(i)$ has one element.
\end{defn}
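The fibration condition above is a finite check, and it may help to see it spelled out. The sketch below is ours (a hypothetical encoding of pointed maps as Python dicts, with $0$ playing the role of the basepoint $\ast$).

```python
# A pointed map [n1]_+ -> [n2]_+ modeled as a dict; 0 is the basepoint.
# Per the definition above, p is a fibration iff it is surjective and
# every non-basepoint element of the target has exactly one preimage.

def is_fibration(p, n1, n2):
    assert p[0] == 0, "pointed maps must preserve the basepoint"
    image = {p[x] for x in range(n1 + 1)}
    if image != set(range(n2 + 1)):
        return False  # not surjective
    return all(sum(1 for x in range(n1 + 1) if p[x] == i) == 1
               for i in range(1, n2 + 1))  # singleton fibers off basepoint

# Collapse the element 2 to the basepoint: a fibration [2]_+ -> [1]_+.
print(is_fibration({0: 0, 1: 1, 2: 0}, 2, 1))  # True
# Fiber over 1 is {1, 2}, so this map is not a fibration.
print(is_fibration({0: 0, 1: 1, 2: 1}, 2, 1))  # False
```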
We define two flavors of wrong way maps in $\mathbf{FinSet}_+$.
\begin{defn}\label{cofibration_backward_finset}
Let $f: [\mathbf{n}_1]_+ \to [\mathbf{n}_2]_+$ be a monomorphism in $\mathbf{FinSet}_+$. We define $f^\ast: [\mathbf{n}_2]_+ \to [\mathbf{n}_1]_+$ by mapping the complement of the image to the basepoint and each $i \in \operatorname{Im}(f)$ to $f^{-1}(i)$.
\end{defn}
\begin{defn}\label{fibration_backward_finset}
Let $p: [\mathbf{n}_1]_+ \to [\mathbf{n}_2]_+$ be a fibration. We define $p^\ast$ as follows. For $i \in \operatorname{Im}(p)$, define $p^\ast (i) = p^{-1}(i)$ and then map the basepoint to the basepoint.
\end{defn}
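Both wrong-way maps are easy to realize concretely. The sketch below is ours (hypothetical helper names; pointed maps are dicts with basepoint $0$, as before); it checks the expected identities $f^\ast \circ f = \mathrm{id}$ for a monomorphism and $p \circ p^\ast = \mathrm{id}$ for a fibration.

```python
# Wrong-way maps in FinSet_+; pointed maps are dicts with basepoint 0.

def mono_backward(f, n1, n2):
    """f^* for a monomorphism f: send Im(f) back along f, the rest to 0."""
    inv = {f[x]: x for x in range(n1 + 1)}
    return {i: inv.get(i, 0) for i in range(n2 + 1)}

def fib_backward(p, n1, n2):
    """p^* for a fibration p: pick the unique non-basepoint preimage."""
    sect = {p[x]: x for x in range(n1 + 1) if x != 0}
    sect[0] = 0  # the basepoint goes to the basepoint
    return {i: sect[i] for i in range(n2 + 1)}

f = {0: 0, 1: 2, 2: 3}            # a monomorphism [2]_+ -> [3]_+
fs = mono_backward(f, 2, 3)
print(all(fs[f[x]] == x for x in range(3)))   # True: f^* . f = id

p = {0: 0, 1: 1, 2: 0}            # a fibration [2]_+ -> [1]_+
ps = fib_backward(p, 2, 1)
print(all(p[ps[i]] == i for i in range(2)))   # True: p . p^* = id
```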
We now consider commutative diagrams
\[
\xymatrix{
[\mathbf{n}_1]_+ \ar@{->>}[d]_{p_1} \ar@{^{(}->}[r]^{i_1} & [\mathbf{n}_2]_+\ar@{->>}[d]^{p_2}\\
[\mathbf{n}_3]_+ \ar@{^{(}->}[r]_{i_2} & [\mathbf{n}_4]_+
}
\]
Commutativity in this case means that $p^{-1}_1 (\ast) = i^{-1}_1 (p^{-1}_2 (\ast))$ and that for $i \in [\mathbf{n}_4]_+$, $(i_2 \circ p_1)^{-1}(i) = (p_2 \circ i_1)^{-1} (i)$.
This observation has the following simple, but useful, consequence.
\begin{lem}\label{backwards_commute_finset}
Given a commutative diagram as above, the following also commutes
\[
\xymatrix{
[\mathbf{n}_1]_+ \ar[r]^{i_1} & [\mathbf{n}_2]_+\\
[\mathbf{n}_3]_+ \ar[u]^{p^\ast_1}\ar[r]_{i_2} & [\mathbf{n}_4]_+ \ar[u]_{p^\ast_2}
}
\]
\end{lem}
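The lemma can be verified by hand on a small instance. The following is our own sanity check (hypothetical names; maps are dicts with basepoint $0$): we take a commuting square of cofibrations $i_1, i_2$ against fibrations $p_1, p_2$ and confirm that the square obtained by replacing the fibrations with their wrong-way maps $p_1^\ast, p_2^\ast$ still commutes.

```python
def fib_backward(p, n_src, n_tgt):
    """The wrong-way map p^* of a fibration p, as defined above."""
    sect = {p[x]: x for x in range(n_src + 1) if x != 0}
    sect[0] = 0
    return {i: sect[i] for i in range(n_tgt + 1)}

# Commuting square (basepoint 0):
#   [2]_+ --i1--> [3]_+      i1, i2 cofibrations (injective),
#     |p1           |p2      p1, p2 fibrations, i2 . p1 = p2 . i1.
#   [1]_+ --i2--> [2]_+
i1 = {0: 0, 1: 1, 2: 2}
p1 = {0: 0, 1: 1, 2: 0}
i2 = {0: 0, 1: 1}
p2 = {0: 0, 1: 1, 2: 0, 3: 2}

# the original square commutes
assert all(i2[p1[x]] == p2[i1[x]] for x in range(3))

p1s = fib_backward(p1, 2, 1)
p2s = fib_backward(p2, 3, 2)

# the square with the fibrations reversed commutes as well
print(all(i1[p1s[i]] == p2s[i2[i]] for i in range(2)))   # True
```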
\subsection{The Unit Map}
Since it is a spectrum, $K(\mathbf{Var}_{/k})$ naturally has a unit map from the sphere spectrum $S \to K(\mathbf{Var}_{/k})$. It will be useful for us to have a model for this map. When working with $K$-theoretic functors, finite pointed sets are always a proxy for the sphere spectrum, by Barratt-Priddy-Quillen. We construct functors out of this category to model maps out of the sphere spectrum.
\begin{defn}
We define an op-W-exact functor $(G_*,G^*): \mathbf{FinSet}_+ \to \mathbf{Var}_{/k}$ as follows.
\begin{enumerate}
\item $G_\ast: \mathbf{FinSet}_+ \to \mathbf{Var}_{/k}$ is defined on objects by
\[
G_\ast ([\mathbf{n}_1]_+) = \coprod^{n_1}_{i=0} \operatorname{Spec} (k).
\]
On cofibrations, i.e. inclusions, it is defined by the corresponding inclusions of copies of $\operatorname{Spec} (k)$. On fibrations, it is defined by the corresponding fold maps.
\item $G^\ast: \mathbf{FinSet}_+ \to \mathbf{Var}_{/k}$ is defined on objects as above. Given a cofibration $i: [\mathbf{n}_1]_+ \to [\mathbf{n}_2]_+$, we define $G^\ast (i)$ to be $G_\ast (i^\ast)$ with $i^\ast$ defined as in \ref{cofibration_backward_finset}. Given a fibration $p: [\mathbf{n}_1]_+ \to [\mathbf{n}_2]_+$ we define $G^\ast (p)$ to be $G_\ast (p^\ast)$ with $p^\ast$ defined as in \ref{fibration_backward_finset}.
\end{enumerate}
\end{defn}
\begin{prop}
The map above is in fact op-W-exact.
\end{prop}
\begin{proof}
The first three conditions are trivial. To check the fourth, we consider a diagram in $\mathbf{FinSet}_+$
\[
\xymatrix{
[\mathbf{n}_1]_+ \ar@{->>}[d]_j \ar[r]^i & [\mathbf{n}_2]_+ \ar@{->>}[d]^{j'}\\
[\mathbf{n}_3]_+ \ar[r]_{i'} & [\mathbf{n}_4]_+
}
\]
This induces a diagram of varieties
\[
\xymatrix{
G([\mathbf{n}_1]_+) \ar[r]^{G_\ast (i)} & G([\mathbf{n}_2]_+)\\
G([\mathbf{n}_3]_+) \ar[r]_{G_\ast (i')}\ar[u]^{G^\ast (j)} & G([\mathbf{n}_4]_+) \ar[u]_{G^\ast (j')}
}
\]
We now check that the two maps we need to agree in fact agree. That is, we need
\[
G_\ast (i) \circ G^\ast (j) = G^\ast (j') \circ G_\ast (i')
\]
However, this is the content of Lem. \ref{backwards_commute_finset}.
\end{proof}
\begin{cor}
The op-W-exact functors descend to a map of spectra $S \to K(\mathbf{Var}_{/k})$.
\end{cor}
\begin{rmk}
It is not hard to see that we get an $E_\infty$-map $S \to K(\mathbf{Var}_{/k})$, but this will not be needed.
\end{rmk}
\subsection{Point Counting}
One of the fundamental goals of algebraic geometry is to systematically count points on algebraic varieties over finite fields. This procedure would take an algebraic variety over a finite field $k$ and return the number of $k$-points $|X(k)|$. Such a procedure behaves well with respect to subtracting varieties, and so it descends to a motivic measure $K_0 (\mathbf{Var}_{/k}) \to \mathbf{Z}$. This is the first motivic measure that we will lift.
\begin{defn}
Define a W-exact functor $(-(k)_!, -(k)^!): \mathbf{Var}_{/k} \to \mathbf{FinSet}_+$ as follows. On objects, we define the functor to be $X(k)_+$, the set of $k$-points of $X$ with a disjoint basepoint added. We assign a linear order to the points, once and for all. We send a closed inclusion $Z \hookrightarrow X$ to the obvious inclusion $Z(k)_+ \to X(k)_+$. For an open inclusion $X \xleftarrow{\circ} Y$, define $X(k)_+ \to Y(k)_+$ by restriction, coupled with the requirement that if $p \in X(k)$ but $p \notin Y(k)$, then $p$ maps to the basepoint.
\end{defn}
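The key point is that point counting is additive along subtraction sequences: $|X(k)| = |Z(k)| + |(X-Z)(k)|$. This can be seen by brute force on small examples; the sketch below is ours (hypothetical helper names) and checks it for the sequence $\{0\} \hookrightarrow \mathbf{A}^1 \leftarrow \mathbf{G}_m$ over $\mathbf{F}_7$.

```python
from itertools import product

def count_points(p, n, condition):
    """Brute-force count of F_p-points of a locally closed subset of A^n,
    cut out by an arbitrary boolean condition on coordinates mod p."""
    return sum(1 for pt in product(range(p), repeat=n) if condition(pt))

p = 7
affine_line = count_points(p, 1, lambda pt: True)        # A^1
origin      = count_points(p, 1, lambda pt: pt[0] == 0)  # the closed point 0
torus       = count_points(p, 1, lambda pt: pt[0] != 0)  # G_m = A^1 - {0}

# additivity along the subtraction sequence {0} -> A^1 <- G_m
print(affine_line == origin + torus)   # True: 7 = 1 + 6
```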
\begin{prop}
The functors $-(k)^!$ and $-(k)_!$ assemble into a W-exact functor $\mathbf{Var}_{/k} \to \mathbf{FinSet}_+$.
\end{prop}
\begin{proof}
We need to verify the conditions of Defn. \ref{w_exact}. Suppose we have a commutative square
\[
\xymatrix{
X \ar@{^{(}->}[r]^j\ar[d]^{\circ}_i & Z\ar[d]^{i'}_{\circ} \\
Y \ar@{^{(}->}[r]_{j'} & W
}
\]
where $j, j'$ are closed and $i$,$i'$ are open. Then we get an induced square in $\mathbf{FinSet}_+$
\[
\xymatrix{
X(k)_+ \ar[r]^{j_!} & Z(k)_+ \\
Y(k)_+ \ar[r]_{(j')_!} \ar[u]^{i^!} & W(k)_+\ar[u]_{(i')^!}
}
\]
which we would like to be commutative. However, this is a consequence of Lem. \ref{backwards_commute_finset}.
\end{proof}
Note that if we have two $k$-varieties $X, Y$ then the number of $k$-points in $X \times_k Y$ is the product of the number of $k$-points in $X$ and $Y$. This product can be made functorial.
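Multiplicativity of point counts is just as easy to check by brute force. The snippet below is ours (hypothetical names) and verifies $|(\mathbf{G}_m \times \mathbf{G}_m)(\mathbf{F}_5)| = |\mathbf{G}_m(\mathbf{F}_5)|^2$.

```python
from itertools import product

def count(p, n, cond):
    """Count F_p-points of a subset of A^n given by a boolean condition."""
    return sum(1 for pt in product(range(p), repeat=n) if cond(pt))

p = 5
gm  = count(p, 1, lambda pt: pt[0] != 0)                  # G_m
gm2 = count(p, 2, lambda pt: pt[0] != 0 and pt[1] != 0)   # G_m x G_m

print(gm2 == gm * gm)   # True: 16 = 4 * 4
```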
\begin{thm}
There is a map of spectra $K(\mathbf{Var}_{/k}) \to S$.
\end{thm}
\begin{proof}
By the previous proposition, we have a $W$-exact functor $\mathbf{Var}_{/k} \to \mathbf{FinSet}_+$. By Thm. \ref{pseudoexact} this induces a map of spectra $K(\mathbf{Var}_{/k}) \to K(\mathbf{FinSet}_+)$. Barratt-Priddy-Quillen finishes the proof.
\end{proof}
\begin{rmk}
This too is a map of $E_\infty$-ring spectra.
\end{rmk}
\begin{prop}\label{splitting}
The composition of the point-counting map with the unit map is the identity, thus the sphere spectrum splits off of $K(\mathbf{Var}_{/k})$ and we may write $K(\mathbf{Var}_{/k}) \simeq S \vee \widetilde{K}(\mathbf{Var}_{/k})$.
\end{prop}
\begin{proof}
We consider the composition of W-exact and op-W-exact functors
\[
\mathbf{FinSet}_+ \xrightarrow{(G_\ast, G^\ast)} \mathbf{Var}_{/k} \xrightarrow{(-(k)_!, -(k)^!)} \mathbf{FinSet}_+.
\]
It is easy to see that this is the identity. The first map is op-W-exact and the second is $W$-exact. Thus, by Thm. \ref{pseudoexact} and Thm. \ref{op_W_exact} we obtain
\[
S \to K(\mathbf{Var}_{/k}) \to S.
\]
\end{proof}
\subsection{Map to Waldhausen A-Theory}
Throughout this subsection we work over the base field $\mathbf{C}$. In this case varieties may be considered as topological spaces. However, there is already a $K$-theory of topological spaces, namely, Waldhausen's $A$-theory \cite[p.383]{waldhausen}. We produce a map $K(\mathbf{Var}_{/\mathbf{C}}) \to A(\ast)$ relating these two $K$-theories.
First, we recall the definition of Waldhausen's $A(\ast)$.
\begin{defn}\cite[p.379]{waldhausen}
Let $\mathcal{R}^{hf}_{/\ast}$ be the Waldhausen category of homotopy finite retractive spaces. These are retractive spaces homotopy equivalent to a finite complex, with cofibrations the maps satisfying the homotopy extension property and weak equivalences the usual weak equivalences.
\end{defn}
\begin{defn}
The \textbf{Waldhausen $A$-theory} of a point is
\[
A(\ast) = \Omega |w S_\bullet \mathcal{R}^{hf}_{/\ast}|
\]
\end{defn}
In order to produce a map from $K(\mathbf{Var}_{/\mathbf{C}})$ to $A(\ast)$, we need to produce a W-exact map $\mathbf{Var}_{/\mathbf{C}} \to \mathcal{R}^{hf}_{/\ast}$. First, there is a forgetful functor $\mathbf{Var}_{/\mathbf{C}} \to \mathbf{Top}$ given by considering the smooth variety as a topological space.
The following result is folklore \cite{cisinski}.
\begin{prop}\label{sep_fin_type}
Consider $X$ a separated, finite-type, complex scheme. If we consider it as a topological space and consider the one point compactification $X^+$, then $X^+$ is homotopy equivalent to a finite CW-complex.
\end{prop}
\begin{prop}
The one-point compactification functor $((-)^+_!, (-)^{+,!}): \mathbf{Var}_{/\mathbf{C}} \to \mathcal{R}^{hf}_{/\ast}$ is W-exact.
\end{prop}
\begin{proof}
One point compactification is covariant with respect to proper maps between topological spaces and contravariant with respect to open inclusions. The necessary diagrams obviously commute.
\end{proof}
We thus obtain
\begin{thm}
There is a map of spectra
\[
K(\mathbf{Var}_{/\mathbf{C}}) \to A(\ast)
\]
\end{thm}
\begin{rmk}
The homotopy groups and homotopy type of $A(\ast)$ have recently been computed \cite{blumberg_mandell_A_point,blumberg_mandell_A_point_II}. It would be very interesting to know what parts of this are picked up by $K(\mathbf{Var}_{/\mathbf{C}})$.
\end{rmk}
\begin{rmk}
Using trace methods there is a map $A(\ast) \to S$, and thus a composition
\[
K(\mathbf{Var}_{/\mathbf{C}}) \to A(\ast) \to S.
\]
This is likely the analogue of point-counting or the Euler characteristic.
\end{rmk}
\begin{rmk}
The reliance on Prop. \ref{sep_fin_type} is somewhat unsatisfactory. However, there are much cleaner, more ``motivic'', ways of producing this map, as suggested to the author by Denis-Charles Cisinski \cite{cisinski}. We will pursue these in future work.
\end{rmk}
Granted the above map, we can also obtain a map to any $K(R)$ for $R$ a ring or ring spectrum. The $A$-theory of a point is equivalent to the spectrum $K(S)$. There is a functor from $\mathbf{Var}_{/\mathbf{C}}$ to spectra (i.e. $S$-modules) specified by $X \mapsto \Sigma^\infty X(\mathbf{C})_+$. By smashing with any ring spectrum $R$ we obtain a functor $\mathbf{Var}_{/\mathbf{C}} \to \operatorname{Mod}_{R}$. In the case when $R$ is an Eilenberg-MacLane spectrum $HA$, this is equivalent to considering the compactly-supported cohomology of $X$ with coefficients in $A$.
\section{Conjectures and Future Work}
This paper has set up a model for investigating $K(\mathbf{Var}_{/k})$. There are of course further points to investigate. Not only are there many more derived motivic measures, but one may wonder about the relationship with other aspects of $K_0 (\mathbf{Var}_{/k})$, for example, whether motivic integration could be lifted.
Let us briefly discuss a conjectural motivic measure. When looking for a motivic measure, we of course have to produce W-exact functors, and thus need functors with certain specific variance properties. We consider one such functor presently.
Let $X$ be a Noetherian scheme. Quillen defines $K'(X)$ to be $K(\mathbf{Coh}(X))$, that is he defines it to be the $K$-theory of the abelian category of coherent sheaves on $X$ \cite{quillen}. He also proves the following proposition
\begin{prop}\cite[3.1]{quillen}
Let $X \hookrightarrow Y$ be a closed immersion. Then there is a cofibration sequence of spectra
\[
K'(X) \to K'(Y) \to K'(Y-X)
\]
\end{prop}
This means that $K'(-)$ is exactly the sort of functor that we need. It is covariant with respect to closed inclusions, and contravariant with respect to open inclusions. It thus gives us a W-exact functor $K': \mathbf{Var}_{/k} \to \mathbf{Sp}$ where the latter is the category of spectra considered as a Waldhausen category via its model structure. Furthermore, every $K$-theory spectrum $K'(X)$ is a $K(S)$-module. Thus the $K'$ functor is actually an exact functor
\[
K': \mathbf{Var}_{/k} \to \mathbf{Mod}_{K(S)}
\]
where the latter denotes the modules over the $E_\infty$-ring $K(S)$. We would like this to produce a map on $K$-theory. However, by the Eilenberg swindle, the $K$-theory of $\mathbf{Mod}_{K(S)}$ vanishes. In order to get a map $K(\mathbf{Var}_{/k}) \to K(K(S))$ we require that $K'$ land in \textit{compact} (or perhaps dualizable) $K(S)$-modules. To put this more succinctly, we have two conjectures, the former implied by the latter.
\begin{conj}
There is a map of ring spectra
\[
K(\mathbf{Var}_{/k}) \to K(K(S))
\]
\end{conj}
\begin{conj}
Let $X$ be a smooth scheme. Then $K(X)$ is compact or dualizable as a $K(S)$-module.
\end{conj}
\begin{rmk}
When $X$ is a $k$-variety, $K'(X)$ is also a $K(k)$-module. It is also possible that $K'(X)$ could be compact as a $K(k)$-module, in which case we would have a map
\[
K(\mathbf{Var}_{/k}) \to K(K(k))
\]
\end{rmk}
\section{Appendix: Simplicial Homotopy}
In this appendix we present a few diagrams to aid intuition for the simplicial homotopy produced in the additivity theorem. The simplex $h_3(e)$, where $e$ is a 5-simplex, looks like
\[
\xymatrix@C=.5cm@R=.5cm{
A_0 \ar@{^{(}->}[r]& A_1 \ar@{^{(}->}[r] & A_2 \ar@{^{(}->}[r] & A_3 \ar@{^{(}->}[r] & S_0 \ar@{=}[r]& S_0 \ar@{=}[r] & S_0\\
& A_{1, 1} \ar@{^{(}->}[r]\ar[u]^{\circ} & A_{1,2} \ar[u]^{\circ} \ar@{^{(}->}[r] & A_{1,3} \ar@{^{(}->}[r] \ar[u]^{\circ} & S_0 - A_1 \ar@{=}[r] \ar[u]^{\circ} & S_0 - A_1 \ar@{=}[r] \ar[u]^{\circ} & S_0 - A_1 \ar[u]^{\circ} \\
& & A_{2,2} \ar[u]^{\circ} \ar@{^{(}->}[r]& A_{2,3} \ar[u]^{\circ}\ar@{^{(}->}[r]& S_0 - A_2 \ar[u]^{\circ}\ar@{=}[r] & S_0 - A_2 \ar[u]^{\circ}\ar@{=}[r] & S_0 - A_2 \ar[u]^{\circ} \\
& & & A_{3,3} \ar[u]^{\circ} \ar@{^{(}->}[r] & S_0 - A_3 \ar[u]^{\circ}\ar@{=}[r] & S_0 - A_3 \ar[u]^{\circ} \ar@{=}[r]& S_0 - A_3 \ar[u]^{\circ} & \\
& & & &\emptyset & \emptyset & \emptyset \\
& & & & & \emptyset& \emptyset \\
& & & & & & \emptyset
}
\]
The more important part of the simplicial homotopy is the $h^C_i(e)$ simplex. The picture below is of $h^C_3 (e)$ when $e$ is a 5-simplex. For compactness we write $S_{i,0} := S_0 - A_i$.
\[
\xymatrix@C=.2cm @R=.5cm{
C_0 \ar@{^{(}->}[r]& C_1 \ar@{^{(}->}[r] & C_2 \ar@{^{(}->}[r] & C_3 \ar@{^{(}->}[r] & C_3 \amalg_{A_3} S_0 \ar@{^{(}->}[r] & C_4 \amalg_{A_4} S_0 \ar@{^{(}->}[r] & C_5 \amalg_{A_5} S_0\\
& C_{1,1} \ar[u]^\circ \ar@{^{(}->}[r] & C_{1,2} \ar[u]^\circ \ar@{^{(}->}[r] & C_{1,3} \ar[u]^\circ \ar@{^{(}->}[r] & C_{1,3} \amalg_{A_{1,3}} S_{1,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{1,4} \amalg_{A_{1,4}} S_{1,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{1,5} \amalg_{A_{1,5}} S_{1,0} \ar[u]^\circ\\
& & C_{2,2} \ar[u]^\circ \ar@{^{(}->}[r] & C_{2,3} \ar[u]^\circ \ar@{^{(}->}[r] & C_{2,3} \amalg_{A_{2,3}} S_{2,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{2,4} \amalg_{A_{2,4}} S_{2,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{2,5} \amalg_{A_{2,5}} S_{2,0} \ar[u]^\circ \\
& & & C_{3,3} \ar[u]^\circ \ar@{^{(}->}[r] & S_{3,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{3,4} \amalg_{A_{3,4}} S_{3,0} \ar[u]^\circ \ar@{^{(}->}[r] & C_{3,5} \amalg_{A_{3,5}} S_{3,0} \ar[u]^\circ \\
& & & &\emptyset & B_{3,4} \ar[u]^\circ \ar@{^{(}->}[r] & B_{3,5} \ar[u]^\circ \\
& & & & & B_{4,4} \ar[u]^\circ\ar@{^{(}->}[r] & B_{4,5} \ar[u]^\circ \\
& & & & & & B_{5,5} \ar[u]^\circ \\
}
\]
For completeness, we include $h^B_3(e)$ in this case as well.
\[
\xymatrix@C=.5cm@R=.5cm{
B_0\ar@{^{(}->}[r]& B_1\ar@{^{(}->}[r] & B_2\ar@{^{(}->}[r] & B_3\ar@{=}[r] & B_3 \ar@{^{(}->}[r]& B_4\ar@{^{(}->}[r] & B_5\\
& B_{1,1}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{1,2}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{1,3}\ar@{=}[r]\ar[u]^{\circ} & B_{1,3}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{1,4}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{1,5}\ar[u]^{\circ}\\
& & B_{2,2}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{2,3}\ar@{=}[r]\ar[u]^{\circ} & B_{2,3}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{2,4}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{2,5}\ar[u]^{\circ}\\
& & & B_{3,3}\ar@{=}[r]\ar[u]^{\circ} & B_{3,3} \ar@{^{(}->}[r]\ar[u]^{\circ}& B_{3,4} \ar@{^{(}->}[r]\ar[u]^{\circ}& B_{3,5}\ar[u]^{\circ}\\
& & & & B_{3,3}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{3,4}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{3,5}\ar[u]^{\circ}\\
& & & & &B_{4,4}\ar@{^{(}->}[r]\ar[u]^{\circ} & B_{4,5}\ar[u]^{\circ} \\
& & & & & & B_{5,5}\ar[u]^{\circ}
}
\]
\bibliographystyle{amsplain}
\section{Introduction}
We study programs with {\em monotone} constraints \cite{mt04,mnt03,mnt06}
and introduce a related class of programs with {\em convex} constraints.
These formalisms allow constraints to appear in the heads of program
rules, which sets them apart from other recent proposals for integrating
constraints into logic programs \cite{p04,pdb04,pdb06,dlv-agg-03,flp04},
and makes them suitable as an abstract basis for formalisms such as {\em
lparse} programs \cite{sns02}.
We show that several results from normal logic programming generalize to
programs with monotone constraints. We also discuss how these techniques
and results can be extended further to the setting of programs with
convex constraints. We then apply some of our general results to design
and implement a method to compute stable models of {\em lparse} programs
and show that it is often much more effective than {\em smodels}
\cite{sns02}.
Normal logic programming with the semantics of stable models is an
effective knowledge representation formalism, mostly due to its ability
to express default assumptions \cite{baral03,GelLeo02}. However,
modeling numeric constraints on sets in normal logic programming is
cumbersome, requires auxiliary atoms and leads to large programs
hard to process efficiently. Since such constraints, often called {\em
aggregates}, are ubiquitous, researchers proposed extensions of normal
logic programming with explicit means to express aggregates, and
generalized the stable-model semantics to the extended settings.
Aggregates imposing bounds on weights of sets of atoms and literals,
called {\em weight} constraints, are especially common in practical
applications and are included in all recent extensions of logic programs
with aggregates. Typically, these extensions do not allow aggregates to
appear in the heads of rules. A notable exception is the formalism of
{\em programs with weight constraints} \cite{nss99,sns02},
which we refer to as {\em lparse} programs\footnote{Aggregates in the
heads of rules have also been studied recently by \citeA{sp06} and
\citeA{spt06}.}.
{\em Lparse} programs are logic programs whose rules have weight
constraints in their heads and whose bodies are conjunctions of weight
constraints. Normal logic programs can be viewed as a subclass of {\em
lparse} programs and the semantics of {\em lparse} programs generalizes
the stable-model semantics of
normal logic programs \cite{gl88}.
{\em Lparse} programs are
one of the most commonly used extensions of logic programming with
weight constraints.
Since rules in {\em lparse} programs may have weight constraints as
their heads, the concept of one-step provability is nondeterministic,
which hides direct parallels between {\em lparse} and normal logic
programs. An explicit connection emerged when
\citeA{mt04} and \citeA{mnt03,mnt06}
introduced {\em logic programs with monotone constraints}. These programs
allow aggregates in the heads of rules and support nondeterministic
computations.
\citeA{mt04} and \citeA{mnt03,mnt06}
proposed a
generalization of the van Emden-Kowalski one-step provability operator
to account for that nondeterminism, defined supported and stable models
for programs with monotone constraints that mirror their normal logic
programming counterparts, and showed encodings of {\em smodels} programs
as programs with monotone constraints.
In this paper, we continue investigations of programs with monotone
constraints. We show that the notions of uniform and strong equivalence
of programs \cite{lpv01,lin02,tu03,ef03} extend to programs with
monotone constraints, and that their characterizations \cite{tu03,ef03}
generalize, too.
We adapt to programs with monotone constraints the notion
of a {\em tight} program \cite{el03} and generalize Fages' Lemma
\cite{fag94}.
We introduce extensions of propositional logic with monotone
constraints. We define the completion of a monotone-constraint
program with respect to this logic, and generalize the notion of a
loop formula. We then prove the loop-formula characterization of stable
models of programs with monotone constraints, extending to the setting
of monotone-constraint programs results obtained for normal logic
programs
by \citeA{cl78} and \citeA{lz02}.
Programs with monotone constraints make explicit references to the
default negation operator. We show that by allowing a more general class
of constraints, called {\em convex}, default negation can be eliminated
from the language. We argue that all results in our paper extend to
programs with convex constraints.
Our paper shows that programs with monotone and convex constraints have
a rich theory that closely follows that of normal logic programming.
It implies that programs with monotone and convex constraints form
an abstract generalization of extensions of normal logic programs. In
particular, all results we obtain in the abstract setting of programs
with monotone and convex constraints specialize to {\em lparse} programs
and, in most cases, yield results that are new.
These results have practical implications. The properties of the program
completion and loop formulas, when specialized to the class of {\em
lparse} programs, yield a method to compute stable models of {\em
lparse} programs by means of solvers of {\em pseudo-boolean} constraints,
developed by the propositional satisfiability and integer programming
communities \cite{es03,arms02,wal97,pbcomp05,lt03}. We describe
this method in detail and present experimental results on its performance.
The results show that our method on problems we used for testing
typically outperforms {\em smodels}.
\section{Preliminaries}
\label{prel}
We consider the propositional case only and assume a fixed set $\mathit{At}$
of propositional atoms. It does not lead to loss of generality, as it
is common to interpret programs with variables in terms of their
propositional groundings.
The definitions and results we present in this section come from
papers by \citeA{mt04} and \citeA{mnt06}.
Some of them
are more general as we allow constraints with
infinite domains and programs with inconsistent constraints in the
heads.
\noindent
{\bf Constraints.} A {\em constraint} is an expression $A=(X,C)$,
where $X\subseteq \mathit{At}$ and $C\subseteq {\cal P}(X)$ (${\cal P}(X)$
denotes the powerset of $X$). We call the set $X$ the {\em domain}
of the constraint $A=(X,C)$ and denote it by $\mathit{Dom}(A)$. Informally
speaking, a constraint $(X,C)$ describes a property of subsets of its
domain, with $C$ consisting precisely of those subsets of $X$ that
{\em satisfy} the constraint.
In the paper, we identify truth assignments (interpretations) with the
sets of atoms they assign the truth value {\em true}. That is, given an
interpretation $M\subseteq \mathit{At}$, we have $M\models a$ if and only if $a
\in M$. We say that an interpretation $M \subseteq\mathit{At}$ {\em satisfies} a
constraint $A=(X,C)$ ($M\models A$), if $M\cap X \in C$. Otherwise,
$M$ does not satisfy $A$, ($M\not \models A$).
A constraint $A=(X,C)$ is {\em consistent} if there is $M$ such
that $M\models A$. Clearly, a constraint $A=(X,C)$ is consistent if and
only if $C\not= \emptyset$.
We note that propositional atoms can be regarded as constraints. Let $a
\in \mathit{At}$ and $M\subseteq \mathit{At}$. We define $C(a)=(\{a\},\{\{a\}\})$. It is
evident that $M\models C(a)$ if and only if $M\models a$. Therefore, in
the paper we often write $a$ as a shorthand for the constraint $C(a)$.
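To make these definitions concrete, the following Python sketch (an illustration added here, not part of the formal development) represents a finite constraint as a pair of its domain and its family of satisfying subsets, and checks satisfaction as $M\cap X\in C$:

```python
def satisfies(M, constraint):
    """M |= (X, C) iff M ∩ X ∈ C, for a finite constraint (X, C)."""
    X, C = constraint
    return frozenset(set(M) & X) in C

def atom_constraint(a):
    """The constraint C(a) = ({a}, {{a}}) induced by an atom a."""
    return (frozenset({a}), {frozenset({a})})

# The constraint ({a, b}, {{a}, {b}}): exactly one of a, b is true.
A = (frozenset({"a", "b"}), {frozenset({"a"}), frozenset({"b"})})
print(satisfies({"a", "c"}, A))   # True: {a, c} ∩ {a, b} = {a} ∈ C
print(satisfies({"a", "b"}, A))   # False: {a, b} ∉ C
print(satisfies({"a"}, atom_constraint("a")))   # True
```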
\noindent
{\bf Constraint programs.} Constraints are building blocks of rules and
programs.
\citeA{mt04}
defined {\em constraint programs} as sets of {\em
constraint} rules
\begin{equation}
\label{eq1a}
A \leftarrow A_1, \ldots, A_k, \mathbf{not}(A_{k+1}),\ldots,\mathbf{not}(A_m)
\end{equation}
where $A$, $A_1,\ldots,A_m$ are constraints and $\mathbf{not}$ is the {\em default
negation} operator.
In the context of constraint programs, we refer to constraints and
negated constraints as {\em literals}. Given a rule $r$ of the form
(\ref{eq1a}), the constraint (literal) $A$ is the {\em head} of $r$
and the set $\{A_1,\ldots,A_k,\mathbf{not}(A_{k+1}),\ldots,\mathbf{not}(A_m)\}$
of literals is the {\em body} of $r$\footnote{Sometimes we view the
body of a rule as the {\em conjunction} of its literals.}. We denote
the head and the body of $r$ by $\mathit{hd}(r)$ and $\mathit{bd}(r)$, respectively.
We define the {\em headset} of $r$, written $\mathit{hset}(r)$, as the domain
of the head of $r$. That is, $\mathit{hset}(r)=\mathit{Dom}(\mathit{hd}(r))$.
For a constraint program $P$, we denote by $\mathit{At}(P)$ the set of atoms
that appear in the domains of constraints in $P$. We define the {\em
headset} of $P$, written $\mathit{hset}(P)$, as the union of the headsets of all
rules in $P$.
\noindent
{\bf Models.}
The concept of satisfiability extends in a standard way to literals
$\mathbf{not}(A)$ ($M\models \mathbf{not}(A)$ if $M\not\models A$), to sets (conjunctions)
of literals and, finally, to constraint programs.
\noindent
{\bfseries {\slshape M}-applicable rules.} Let $M\subseteq \mathit{At}$ be an
interpretation. A rule $r$ of the form (\ref{eq1a}) is {\em $M$-applicable}
if $M$ satisfies every literal in $\mathit{bd}(r)$. We denote by $P(M)$ the set of
all $M$-applicable rules in $P$.
\noindent
{\bf Supported models.}
Supportedness is a property of models. Intuitively, every atom $a$ in a
supported model must have ``reasons'' for being ``in''. Such reasons
are $M$-applicable rules whose heads contain $a$ in their domains.
Formally, let $P$ be a constraint program and $M$ a subset of $\mathit{At}(P)$.
A model $M$ of $P$ is {\em supported} if $M\subseteq \mathit{hset}(P(M))$.
\noindent
{\bf Examples.}
We illustrate the concept with examples. Let $P$ be the constraint
program that consists of the following two rules:
\begin{quote}
$(\{c, d, e\}, \{\{c\}, \{d\}, \{e\}, \{c,d,e\}\})\leftarrow $\\
$(\{a, b\}, \{\{a\}, \{b\}\}) \leftarrow (\{c, d\},\{\{c\}, \{c,
d\}\}), \mathbf{not}((\{e\}, \{\{e\}\}))$
\end{quote}
A set $M=\{a,c\}$ is a model of $P$ as $M$ satisfies the heads of the
two rules. Both rules in $P$ are $M$-applicable. The first of them
provides the support for $c$, the second one --- for $a$. Thus, $M$ is
a supported model.
A set $M'=\{a, c, d, e\}$ is also a model of $P$. However, $a$ has no
support in $P$. Indeed, $a$ only appears in the headset of the second
rule. This rule is not $M'$-applicable and so, it does not support
$a$. Therefore, $M'$ is not a supported model of $P$.
\hfill$\bigtriangleup$
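The supportedness check of the example above can be replayed mechanically. In the sketch below (our illustration; the encoding of a rule as a triple of a head constraint, positive body literals, and negated body literals is an assumption of the illustration, not part of the paper's formalism), a set is a supported model if it satisfies the head of every applicable rule and is covered by the headsets of the applicable rules:

```python
def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def applicable(M, rule):
    head, pos, neg = rule
    return all(sat(M, A) for A in pos) and not any(sat(M, A) for A in neg)

def is_supported_model(M, P):
    M = set(M)
    fired = [r for r in P if applicable(M, r)]
    is_model = all(sat(M, r[0]) for r in fired)
    hset = set()
    for r in fired:
        hset |= r[0][0]          # domain of the head of r
    return is_model and M <= hset

# The two rules of the example program P.
h1 = (frozenset("cde"), {frozenset(s) for s in ("c", "d", "e", "cde")})
h2 = (frozenset("ab"), {frozenset("a"), frozenset("b")})
b1 = (frozenset("cd"), {frozenset("c"), frozenset("cd")})
b2 = (frozenset("e"), {frozenset("e")})
P = [(h1, [], []), (h2, [b1], [b2])]
print(is_supported_model({"a", "c"}, P))            # True
print(is_supported_model({"a", "c", "d", "e"}, P))  # False: a lacks support
```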
\noindent
{\bf Nondeterministic one-step provability.} Let $P$ be a constraint
program and $M$ a set of atoms. A set $M'$ is {\em nondeterministically
one-step provable} from $M$ by means of $P$, if $M'\subseteq \mathit{hset}(P(M))$
and $M' \models \mathit{hd}(r)$, for every rule $r$ in $P(M)$.
The {\em nondeterministic one-step provability operator} $T_P^{nd}$ for
a program $P$ is an operator on ${\cal P}(\mathit{At})$ such that for every $M
\subseteq \mathit{At}$, $T_P^{nd}(M)$ consists of all sets that are
nondeterministically one-step provable from $M$ by means of $P$.
The operator $T_P^{nd}$ is {\em nondeterministic} as it assigns to each
$M\subseteq \mathit{At}$ a {\em family} of subsets of $\mathit{At}$, each being a
possible outcome of applying $P$ to $M$. In general, $T_P^{nd}$ is {\em
partial}, since there may be sets $M$ such that $T_P^{nd}(M)=\emptyset$
(no set can be derived from $M$ by means of $P$). For instance, if
$P(M)$ contains a rule $r$ such that $\mathit{hd}(r)$ is inconsistent, then
$T_P^{nd}(M)=\emptyset$.
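For finite programs, $T_P^{nd}(M)$ can be computed by brute force: enumerate the subsets of $\mathit{hset}(P(M))$ and keep those that satisfy the head of every $M$-applicable rule. A sketch (our illustration, with the same assumed triple encoding of rules); the second program shows partiality due to an inconsistent head:

```python
from itertools import chain, combinations

def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def t_nd(P, M):
    """All sets nondeterministically one-step provable from M by P."""
    fired = [r for r in P
             if all(sat(M, A) for A in r[1])
             and not any(sat(M, A) for A in r[2])]
    hset = set()
    for r in fired:
        hset |= r[0][0]
    candidates = chain.from_iterable(
        combinations(sorted(hset), k) for k in range(len(hset) + 1))
    return [set(c) for c in candidates
            if all(sat(c, r[0]) for r in fired)]

# P = { ({a,b}, {{a},{b}}) <- }: the head forces exactly one of a, b.
head = (frozenset("ab"), {frozenset("a"), frozenset("b")})
print(t_nd([(head, [], [])], set()))   # [{'a'}, {'b'}]

bad = (frozenset("a"), set())          # inconsistent constraint
print(t_nd([(bad, [], [])], set()))    # []: T_P^nd(M) is empty
```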
\noindent
{\bf Monotone
constraints.} A constraint $(X,C)$ is
{\em monotone} if $C$ is closed under supersets, that is, for every $W,
Y \subseteq X$, if $W \in C$ and $W \subseteq Y$ then $Y\in C$.
Cardinality and weight constraints provide examples of monotone
constraints. Let $X$ be a {\em finite} set and let $C_k(X)=\{Y\colon
Y \subseteq X,\ k\leq |Y|\}$, where $k$ is a non-negative integer.
Then $(X,C_k(X))$ is a constraint expressing the property that a
subset of $X$ has at least $k$ elements. We call it a {\em lower-bound
cardinality constraint} on $X$ and denote it by $kX$.
A more general class of constraints are {\em weight constraints}. Let
$X$ be a finite set, say $X=\{x_1,\ldots,x_n\}$, and let $w, w_1,\ldots,
w_n$ be non-negative reals. We interpret each $w_i$ as the {\em weight}
assigned to $x_i$. A {\em lower-bound weight constraint} is a constraint
of the form $(X,C_w)$, where $C_w$ consists of those subsets of $X$
whose total weight (the sum of weights of elements in the subset) is at
least $w$. We write it as
\[
w[x_1=w_1,\ldots, x_n=w_n].
\]
If all weights are equal to 1 and $w$ is an integer, weight constraints
become cardinality constraints. We also note that the constraint $C(a)$
is a cardinality constraint $1\{a\}$ and also a weight constraint
$1[a=1]$. Finally, we observe that lower-bound cardinality and weight
constraints are monotone.
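Over a finite domain, a lower-bound weight constraint can be materialized explicitly as a pair $(X,C_w)$. The sketch below (our illustration) also exhibits cardinality constraints as the unit-weight special case and checks closure under supersets:

```python
from itertools import chain, combinations

def weight_constraint(bound, weights):
    """w[x1=w1,...,xn=wn] as an explicit finite constraint (X, C_w)."""
    X = frozenset(weights)
    subsets = chain.from_iterable(
        combinations(sorted(X), k) for k in range(len(X) + 1))
    C = {frozenset(s) for s in subsets
         if sum(weights[x] for x in s) >= bound}
    return (X, C)

def cardinality_constraint(k, atoms):
    """kX is the special case with all weights equal to 1."""
    return weight_constraint(k, {a: 1 for a in atoms})

X, C = weight_constraint(2, {"a": 1, "b": 1, "c": 2})
print(frozenset({"c"}) in C)        # True: weight 2 >= 2
print(frozenset({"a"}) in C)        # False: weight 1 < 2
# Monotonicity: C is closed under supersets within X.
print(all((s | {x}) in C for s in C for x in X))   # True
```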
Cardinality and weight constraints (in a somewhat more general form)
appear in the language of {\em lparse} programs \cite{sns02}, which
we discuss later in the paper. The
notation we adopted for these constraints in this paper follows
the one proposed by \citeA{sns02}.
We use cardinality and weight constraints in some of our examples.
They are also the focus of the last part of the paper, where we use
our abstract results to design a new algorithm to compute models of
{\em lparse} programs.
\noindent
{\bf Monotone-constraint programs.} We call constraint programs built
of monotone constraints --- {\em monotone-constraint programs} or
{\em programs with monotone constraints}. That is, monotone-constraint
programs consist of rules of the form (\ref{eq1a}),
where $A$, $A_1,\ldots,A_m$ are {\em monotone} constraints.
From now on, unless explicitly stated otherwise, programs we consider
are monotone-constraint programs.
\subsection{Horn Programs and Bottom-up Computations}
Since we allow constraints with infinite domains and inconsistent
constraints in heads of rules, the results given in this subsection are
more general than their counterparts
by \citeA{mt04} and \citeA{mnt03,mnt06}.
Thus, for the
sake of completeness, we present them with proofs.
A rule (\ref{eq1a}) is {\em Horn} if $k=m$ (no occurrences of the
negation operator in the body or, equivalently, only monotone
constraints). A constraint program is {\em Horn} if every rule in the
program is Horn.
With a Horn constraint program we associate {\em bottom-up} computations,
generalizing the corresponding notion of a bottom-up computation for
a normal Horn program.
\begin{definition}
\label{defPC}
Let $P$ be a Horn program. A {\em $P$-computation} is a (transfinite)
sequence $\langle X_\alpha\rangle$ such that
\begin{enumerate}
\item $X_0 = \emptyset$,
\item for every ordinal number $\alpha$, $X_\alpha\subseteq
X_{\alpha+1}$ and $X_{\alpha+1} \in T_P^{nd}(X_\alpha)$,
\item for every {\em limit} ordinal $\alpha$, $X_{\alpha}
=\bigcup_{\beta<\alpha} X_\beta$.
\end{enumerate}
\end{definition}
Let $t=\langle X_\alpha \rangle$ be a $P$-computation. Since for every
$\beta < \beta'$, $X_\beta\subseteq X_{\beta'} \subseteq \mathit{At}$,
there is a least ordinal number $\alpha_t$ such that $X_{\alpha_t
+1} = X_{\alpha_t}$; in other words, the least ordinal at which the
$P$-computation stabilizes. We refer to $\alpha_t$ as the {\em length} of
the $P$-computation $t$.
\noindent
{\bf Examples.}
Here is a simple example showing that some programs have computations
of length exceeding $\omega$ and so, the transfinite induction in the
definition cannot be avoided. Let $P$ be the program consisting of the
following rules:
\begin{quote}
$(\{a_0\},\{\{a_0\}\}) \leftarrow .$\\
$(\{a_i\},\{\{a_i\}\}) \leftarrow (X_{i-1},\{X_{i-1}\})$,
for $i=1,2,\ldots$\\
$(\{a\},\{\{a\}\}) \leftarrow (X_\infty,\{X_\infty\})$,
\end{quote}
where $X_i=\{a_0,\ldots, a_i\}$, $0\leq i$, and $X_\infty=\{a_0,a_1,
\ldots\}$.
Since the body of the last rule contains a constraint with an infinite
domain $X_\infty$, it does not become applicable at any finite step of
the computation. However, it does become applicable at step $\omega$ and
so, $a\in X_{\omega+1}$. Consequently, $X_{\omega+1}\not=X_\omega$.
\hfill$\bigtriangleup$
For a $P$-computation $t=\langle X_\alpha\rangle$, we call $\bigcup_
{\alpha} X_\alpha$ the {\em result} of the computation and denote it by
$R_t$. Directly from the definitions, it follows that
$R_t=X_{\alpha_t}$.
\begin{proposition}
\label{propresmod}
Let $P$ be a Horn constraint program and $t$ a $P$-computation.
Then $R_t$ is a supported model of $P$.
\end{proposition}
\begin{proof}
Let $M=R_t$ be the result of a $P$-computation $t = \langle
X_\alpha\rangle$. We need to show that: (1) $M$ is a model of $P$;
and (2) $M\subseteq \mathit{hset}(P(M))$.
\noindent
(1) Let us consider a rule $r\in P$ such that $M\models \mathit{bd}(r)$. Since
$M=R_t=X_{\alpha_t}$ (where $\alpha_t$ is the length of $t$),
$X_{\alpha_t} \models \mathit{bd}(r)$. Thus, $X_{\alpha_t+1}\models \mathit{hd}(r)$.
Since $M=X_{\alpha_t+1}$, $M$ is a model of $r$ and, consequently,
of $P$, as well.
\noindent
(2) We will prove by induction that, for every set $X_\alpha$ in the
computation $t$, $X_\alpha\subseteq \mathit{hset}(P(M))$. The base case holds
since $X_0 = \emptyset \subseteq \mathit{hset}(P(M))$.
If $\alpha=\beta+1$, then $X_\alpha \in T_P^{nd}(X_{\beta})$. It
follows that $X_\alpha \subseteq \mathit{hset}(P(X_{\beta}))$. Since $P$ is
a Horn program and $X_{\beta}\subseteq M$, $\mathit{hset}(P(X_{\beta}))
\subseteq \mathit{hset}(P(M))$. Therefore, $X_\alpha \subseteq \mathit{hset}(P(M))$.
If $\alpha$ is a limit ordinal, then $X_\alpha=\bigcup_{\beta<\alpha}
X_\beta$. By the induction hypothesis, for every $\beta<\alpha$,
$X_\beta\subseteq \mathit{hset}(P(M))$. Thus, $X_\alpha\subseteq \mathit{hset}(P(M))$.
By induction, $M\subseteq \mathit{hset}(P(M))$.
\end{proof}
\noindent
{\bf Derivable models.} We use computations to define {\em derivable}
models of Horn constraint programs. A set $M$ of atoms is a {\em
derivable model} of a Horn constraint program $P$ if for some
$P$-computation $t$, we have $M=R_t$. By Proposition \ref{propresmod},
derivable models of $P$ are supported models of $P$ and so, also models
of $P$.
Derivable models are similar to the least model of a normal Horn
program in that both can be derived from a program by means of a
bottom-up computation. However, due to the nondeterminism of
bottom-up computations of Horn constraint programs, derivable models
are, in general, neither unique nor minimal.
\noindent
{\bf Examples.}
For example, let $P$ be the following Horn constraint program:
\[
P = \{ 1\{a, b\} \leftarrow \}
\]
Then $\{a\}$, $\{b\}$ and $\{a, b\}$ are its derivable models. The
derivable models $\{a\}$ and $\{b\}$ are minimal models of $P$.
The third derivable model, $\{a, b\}$, is not a minimal model of $P$.
\hfill$\bigtriangleup$
Since inconsistent monotone constraints may appear in the heads of Horn
rules, there are Horn programs $P$ and sets $X\subseteq \mathit{At}$, such that
$T_P^{nd}(X)=\emptyset$. Thus, some Horn constraint programs have no
computations and no derivable models. However, if a Horn constraint
program has models, the existence of computations and derivable models
is guaranteed.
To see this, let $M$ be a model of a Horn constraint program $P$. We
define a {\em canonical computation} $t^{P,M} = \langle X_\alpha^{P,M}
\rangle$ by specifying the choice of the next set in the computation
in part (2) of Definition \ref{defPC}. Namely, for every ordinal
$\beta$, we set
\[
X_{\beta+1}^{P,M} = \mathit{hset}(P(X_\beta^{P,M})) \cap M.
\]
That is, we include in $X_{\beta+1}^{P,M}$ {\em all} those atoms
occurring in the headsets of $X_\beta^{P,M}$-applicable rules that belong
to $M$. We denote the result of $t^{P,M}$ by $Can(P,M)$. Canonical
computations are indeed $P$-computations.
\begin{proposition}
\label{canIsComp}
Let $P$ be a Horn constraint program. If $M \subseteq \mathit{At}$ is a
model of $P$, the sequence $t^{P,M}$ is a $P\mbox{-}$computation.
\end{proposition}
\begin{proof}
As $P$ and $M$ are fixed, to simplify the notation in the proof we will
write $X_\alpha$ instead of $X^{P,M}_\alpha$.
To prove the assertion, it suffices to show that
(1) $\mathit{hset}(P(X_{\alpha})) \cap M \in T_P^{nd}(X_{\alpha})$,
and (2) $X_{\alpha} \subseteq \mathit{hset}(P(X_{\alpha})) \cap M$, for every ordinal
$\alpha$.
\noindent
(1) Let $X\subseteq M$ and $r\in P(X)$. Since all constraints in $\mathit{bd}(r)$
are monotone, and $X\models \mathit{bd}(r)$, $M\models \mathit{bd}(r)$, as well. From
the fact that $M$ is a model of $P$ it follows now that $M\models
\mathit{hd}(r)$. Consequently, $M\cap \mathit{hset}(P(X)) \models \mathit{hd}(r)$ for every $r\in
P(X)$. Since $M\cap \mathit{hset}(P(X))\subseteq \mathit{hset}(P(X))$,
\[
M\cap \mathit{hset}(P(X)) \in T^{nd}_P(X).
\]
Directly from the definition of the canonical computation for $P$ and
$M$ we obtain that for every ordinal $\alpha$, $X_\alpha\subseteq M$.
Thus, (1) follows.
\noindent
(2) We proceed by induction. The basis is evident as $X_0=\emptyset$.
Let us consider an ordinal $\alpha > 0$ and let us assume that (2) holds
for every ordinal $\beta <\alpha$. If $\alpha=\beta+1$, then
$X_\alpha=X_{\beta+1}=\mathit{hset}(P(X_\beta)) \cap M$. Thus, by the induction
hypothesis, $X_\beta\subseteq X_{\alpha}$. Since $P$ is a Horn
constraint program, it follows that
$P(X_\beta)\subseteq P(X_{\alpha})$.
Thus
\[
X_\alpha=X_{\beta+1}=\mathit{hset}(P(X_\beta))\cap M \subseteq \mathit{hset}(P(X_{\alpha}))
\cap M.
\]
If $\alpha$ is a limit ordinal then for every $\beta<\alpha$, $X_\beta
\subseteq X_\alpha$ and, as before, also $P(X_\beta)\subseteq
P(X_{\alpha})$. Thus, by the induction hypothesis for every $\beta<
\alpha$,
\[
X_\beta \subseteq \mathit{hset}(P(X_\beta))\cap M \subseteq \mathit{hset}(P(X_\alpha))\cap M,
\]
which implies that
\[
X_\alpha = \bigcup_{\beta<\alpha} X_\beta \subseteq \mathit{hset}(P(X_\alpha))\cap
M.
\]
\end{proof}
Canonical computations have the following {\em fixpoint} property.
\begin{proposition}\label{propnewfp}
Let $P$ be a Horn constraint program. For every model $M$ of $P$, we
have
\[
\mathit{hset}(P(Can(P,M)))\cap M = Can(P,M).
\]
\end{proposition}
\begin{proof}
Let $\alpha$ be the length of the canonical computation $t^{P,M}$.
Then, $X^{P,M}_{\alpha+1}=X^{P,M}_\alpha=Can(P,M)$. Since
$X^{P,M}_{\alpha+1}= \mathit{hset}(P(X^{P,M}_\alpha))\cap M$, the assertion follows.
\end{proof}
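For a {\em finite} Horn program, the canonical computation requires no transfinite steps: iterating $X_{\beta+1}=\mathit{hset}(P(X_\beta))\cap M$ reaches the fixpoint $Can(P,M)$ after finitely many iterations. A sketch (our illustration; Horn rules are encoded, by assumption, as pairs of a head constraint and a list of positive body constraints):

```python
def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def can(P, M):
    """Can(P, M) for a finite Horn program P and a model M of P."""
    M, X = set(M), set()
    while True:
        hset = set()
        for head, body in P:
            if all(sat(X, A) for A in body):
                hset |= head[0]
        nxt = hset & M            # X_{beta+1} = hset(P(X_beta)) ∩ M
        if nxt == X:
            return X
        X = nxt

# P = { ({a,b}, {{a},{b},{a,b}}) <- }, i.e. the program { 1{a, b} <- }.
h = (frozenset("ab"), {frozenset("a"), frozenset("b"), frozenset("ab")})
P = [(h, [])]
print(can(P, {"a", "b"}) == {"a", "b"})   # True
print(can(P, {"a"}) == {"a"})             # True
```

In particular, the two calls above compute the greatest derivable models of $P$ contained in $\{a,b\}$ and in $\{a\}$, respectively.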
We now gather properties of derivable models that extend properties of
the least model of normal Horn logic programs.
\begin{proposition}
\label{grtdm}
Let $P$ be a Horn constraint program. Then:
\begin{enumerate}
\item For every model $M$ of $P$, $Can(P,M)$ is the greatest derivable
model of $P$ contained in $M$;
\item A model $M$ of $P$ is a derivable model if and only if $M=Can(P,M)$;
\item If $M$ is a minimal model of $P$ then $M$ is a derivable model of
$P$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $M'$ be a derivable model of $P$ such that $M'\subseteq M$.
Let $t=\langle X_\alpha\rangle$ be a $P$-computation such that $M'=R_t$.
We will prove that for every ordinal $\alpha$, $X_\alpha\subseteq
X^{P,M}_\alpha$. We proceed by transfinite induction. Since $X_0=X^{P,
M}_0=\emptyset$, the basis for the induction is evident. Let us
consider an ordinal $\alpha>0$ and assume that for every ordinal
$\beta <\alpha$, $X_\beta\subseteq X^{P,M}_\beta$.
If $\alpha=\beta+1$, then $X_{\alpha} \in T^{nd}_P(X_\beta)$ and
so, $X_{\alpha} \subseteq \mathit{hset}(P(X_\beta))$. By the induction
hypothesis and by the monotonicity of the constraints in the bodies of
rules in $P$, $X_{\alpha}\subseteq \mathit{hset}(P(X^{P,M}_\beta))$. Thus, since
$X_\alpha\subseteq R_t=M'\subseteq M$,
\[
X_\alpha\subseteq \mathit{hset}(P(X^{P,M}_\beta))\cap M = X^{P,M}_{\beta+1}=
X^{P,M}_{\alpha}.
\]
The case when $\alpha$ is a limit ordinal is straightforward as
$X_\alpha=\bigcup_{\beta<\alpha}X_\beta$ and
$X^{P,M}_\alpha=\bigcup_{\beta<\alpha}X^{P,M}_\beta$.
\noindent
(2) ($\Leftarrow$) If $M=Can(P,M)$, then $M$ is the result of the
canonical $P$-computation $t^{P,M}$. In particular, $M$ is a
derivable model of $P$.
\noindent
($\Rightarrow$) If $M$ is a derivable model of $P$, then $M$ is also a
model of $P$. From (1) it follows that $Can(P,M)$ is the greatest
derivable model of $P$ contained in $M$. Since $M$ itself is derivable,
$M=Can(P,M)$.
\noindent
(3) From (1) it follows that $Can(P,M)$ is a derivable model of $P$
and that $Can(P,$ $M) \subseteq M$. Since $M$ is a minimal model,
$Can(P,M)=M$ and, by $(2)$, $M$ is a derivable model of $P$.
\end{proof}
\subsection{Stable Models}
In this section, we will recall and adapt to our setting the definition
of stable models proposed and studied
by \citeA{mt04} and \citeA{mnt03,mnt06}.
Let $P$ be a
monotone-constraint program and $M$ a subset of $At(P)$. The {\em
reduct} of $P$, denoted by $P^M$, is a program obtained from $P$ by:
\begin{enumerate}
\item removing from $P$ all rules whose body contains a literal
$\mathbf{not}(B)$ such that $M\models B$;
\item removing literals $\mathbf{not}(B)$ from the bodies of the remaining
rules.
\end{enumerate}
The reduct of a monotone-constraint program is Horn since it contains
no occurrences of default negation. Therefore, the following definition
is sound.
\begin{definition}
Let $P$ be a monotone-constraint program. A set of atoms $M$ is a {\em
stable} model of $P$ if $M$ is a derivable model of $P^M$. We denote
the set of stable models of $P$ by $St(P)$.
\end{definition}
The definitions of the reduct and stable models follow and generalize
those proposed for normal logic programs, since in the setting of Horn
constraint programs, derivable models play the role of a least model.
As in normal logic programming and its standard extensions, stable
models of monotone-constraint programs are supported models and,
consequently, models.
\begin{proposition}
Let $P$ be a monotone-constraint program. If $M \subseteq At(P)$ is a
stable model of $P$, then $M$ is a supported model of $P$.
\end{proposition}
\begin{proof}
Let $M$ be a stable model of $P$. Then, $M$ is a derivable model of
$P^M$ and, by Proposition \ref{propresmod}, $M$ is a supported model
of $P^M$.
It follows that $M$ is a model of $P^M$. Directly from the
definition of the reduct it follows that $M$ is a model of $P$.
It also follows that $M \subseteq \mathit{hset}(P^M(M))$. For every rule $r$
in $P^M(M)$, there is a rule $r'$ in $P(M)$, which has the same head
and the same non-negated literals in the body as $r$. Thus, $\mathit{hset}(P^M
(M))\subseteq \mathit{hset}(P(M))$ and, consequently, $M\subseteq \mathit{hset}(P(M))$. It
follows that $M$ is a supported model of $P$.
\end{proof}
\noindent
{\bf Examples.}
Here is an example of stable models of a monotone-constraint program.
Let $P$ be a monotone-constraint program that contains the following
rules:
\begin{quote}
$2\{a, b, c\}\leftarrow 1\{a, d\}, \mathbf{not}(1\{c\})$\\
$1\{b, c, d\}\leftarrow 1\{a\}, \mathbf{not}(3\{a, b, d\})$\\
$1\{a\}\leftarrow$
\end{quote}
Let $M=\{a, b\}$. Then $M\not\models 1\{c\}$ and $M\not\models
3\{a, b, d\}$. Hence the reduct $P^M$ contains the following three
Horn rules:
\begin{quote}
$2\{a, b, c\}\leftarrow 1\{a, d\}$\\
$1\{b, c, d\}\leftarrow 1\{a\}$\\
$1\{a\}\leftarrow$
\end{quote}
Since $M=\{a, b\}$ is a derivable model of $P^M$, $M$ is a stable model
of $P$.
Let $M'=\{a, b, c\}$. Then $M'\models 1\{c\}$ and $M'\not\models
3\{a,b,d\}$. Therefore, the reduct $P^{M'}$ contains two Horn rules:
\begin{quote}
$1\{b, c, d\}\leftarrow 1\{a\}$\\
$1\{a\}\leftarrow$
\end{quote}
Since $M'=\{a,b,c\}$ is a derivable model of $P^{M'}$, $M'$ is also
a stable model of $P$. We note that stable models of a
monotone-constraint program, in general, do not form an anti-chain.
\hfill$\bigtriangleup$
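For finite programs built of cardinality constraints, the reduct and the stability test can be carried out mechanically. The sketch below (our illustration; the triple encoding of rules is an assumption) checks the sets from the example above by testing whether $M$ is a model of $P^M$ with $M=Can(P^M,M)$, which characterizes derivable models among models by Proposition \ref{grtdm}:

```python
from itertools import chain, combinations

def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def card(k, atoms):
    """Lower-bound cardinality constraint k{atoms}, materialized."""
    X = frozenset(atoms)
    subs = chain.from_iterable(
        combinations(sorted(X), n) for n in range(len(X) + 1))
    return (X, {frozenset(s) for s in subs if len(s) >= k})

def reduct(P, M):
    """P^M: drop rules with a satisfied negated literal, strip negation."""
    return [(h, pos) for h, pos, neg in P
            if not any(sat(M, B) for B in neg)]

def is_model(M, horn):
    return all(sat(M, h) for h, pos in horn
               if all(sat(M, A) for A in pos))

def can(horn, M):
    M, X = set(M), set()
    while True:
        hset = set()
        for h, pos in horn:
            if all(sat(X, A) for A in pos):
                hset |= h[0]
        nxt = hset & M
        if nxt == X:
            return X
        X = nxt

def is_stable(M, P):
    PM = reduct(P, M)
    return is_model(M, PM) and set(M) == can(PM, M)

# The example program P.
P = [(card(2, "abc"), [card(1, "ad")], [card(1, "c")]),
     (card(1, "bcd"), [card(1, "a")], [card(3, "abd")]),
     (card(1, "a"), [], [])]
print(is_stable({"a", "b"}, P))        # True
print(is_stable({"a", "b", "c"}, P))   # True
print(is_stable({"a"}, P))             # False: not a model of the reduct
```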
If a normal logic program is Horn then its least model is its (only)
stable model. Here we have an analogous situation.
\begin{proposition}
\label{DerIsStable}
Let $P$ be a Horn monotone-constraint program. Then $M \subseteq At(P)$
is a derivable model of $P$ if and only if $M$ is a stable model of $P$.
\end{proposition}
\begin{proof}
For every set $M$ of atoms $P=P^M$. Thus, $M$ is a derivable model of
$P$ if and only if it is a derivable model of $P^M$ or, equivalently,
a stable model of $P$.
\end{proof}
In the next four sections of the paper we show that several fundamental
results concerning normal logic programs extend to the class of
monotone-constraint programs.
\section{Strong and Uniform Equivalence of Monotone-cons\-traint Programs}
Strong equivalence and uniform equivalence concern the problem of
replacing some rules in a logic program with others without changing
the overall semantics of the program. More specifically, strong
equivalence concerns replacements in the context of {\em arbitrary}
programs, while uniform equivalence concerns replacements in the
context of arbitrary sets of {\em facts}. In each case, the stipulation is that the
resulting program must have the same stable models as the original
one. Strong (and uniform) equivalence is an important concept
due to its potential uses in program rewriting and optimization.
Strong and uniform equivalence have been studied in the literature
mostly for normal logic programs \cite{lpv01,lin02,tu03,ef03}.
\citeA{tu03}
presented an elegant characterization of strong equivalence
of {\em smodels} programs, and
\citeA{ef03}
described a similar
characterization of uniform equivalence of normal and disjunctive
logic programs. We show that both characterizations can be adapted to
the case of monotone-constraint programs. In fact, one can show that
under the representations of normal logic programs as monotone-constraint
programs \cite{mnt03,mnt06} our definitions and characterizations
of strong and uniform equivalence reduce to those introduced and
developed originally for normal logic programs.
\subsection{{\bfseries {\slshape M}}-maximal Models}
A key role in our approach is played by models of Horn constraint
programs satisfying a certain maximality condition.
\begin{definition}
\label{defmax}
Let $P$ be a Horn constraint program and let $M$ be a model of $P$.
A set $N\subseteq M$ such that $N$ is a model of $P$ and $M\cap
\mathit{hset}(P(N))\subseteq N$ is an {\em $M$-maximal} model of $P$, written
$N \models_M P$.
\end{definition}
Intuitively, $N$ is an $M$-maximal model of $P$ if $N$ satisfies each
rule $r\in P(N)$ ``maximally'' with respect to $M$. That is, for every
$r\in P(N)$, $N$ contains all atoms in $M$ that belong to $\mathit{hset}(r)$ ---
the domain of the head of $r$.
To illustrate this notion, let us consider a Horn constraint program
$P$ consisting of a single rule:
\[
1 \{ p, q, r \} \leftarrow 1 \{ s, t \} .
\]
Let $M= \{ p, q, s, t \}$ and $N=\{ p, q, s \}$. One can verify that
both $M$ and $N$ are models of $P$. Moreover, since the only rule in
$P$ is $N$-applicable, and $M \cap \{p, q, r\} \subseteq N$, $N$ is
an $M$-maximal model of $P$. On the other hand, $N'=\{ p, s \}$
is not $M$-maximal even though $N'$ is a model of $P$ and it is
contained in $M$.
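The $M$-maximality test of Definition \ref{defmax} is directly executable for finite programs. The sketch below (our illustration; Horn rules are assumed encoded as head/positive-body pairs) verifies the example just discussed:

```python
def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def is_M_maximal(N, M, horn):
    """N |=_M P: N ⊆ M, N a model of P, and M ∩ hset(P(N)) ⊆ N."""
    N, M = set(N), set(M)
    fired = [(h, pos) for h, pos in horn
             if all(sat(N, A) for A in pos)]
    hset = set()
    for h, _ in fired:
        hset |= h[0]
    return (N <= M and all(sat(N, h) for h, _ in fired)
            and M & hset <= N)

# P = { 1{p,q,r} <- 1{s,t} }
h = (frozenset("pqr"), {frozenset(s) for s in
                        ("p", "q", "r", "pq", "pr", "qr", "pqr")})
b = (frozenset("st"), {frozenset(s) for s in ("s", "t", "st")})
P = [(h, [b])]
M = {"p", "q", "s", "t"}
print(is_M_maximal({"p", "q", "s"}, M, P))   # True
print(is_M_maximal({"p", "s"}, M, P))        # False: q ∈ M ∩ hset, q ∉ N'
```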
There are several similarities between properties of models of
normal Horn programs and $M$-maximal models of Horn constraint programs.
We state and prove here one of them that turns out to be especially
relevant to our study of strong and uniform equivalence.
\begin{proposition}
\label{max}
Let $P$ be a Horn constraint program and let $M$ be a model of $P$. Then
$M$ is an $M$-maximal model of $P$ and $Can(P,M)$ is the least
$M$-maximal model of $P$.
\end{proposition}
\begin{proof}
The first claim follows directly from the definition. To prove the
second one, we simplify the notation: we will write $N$ for
$Can(P,M)$ and $X_\alpha$ for $X_\alpha^{P,M}$.
We first show that $N$ is an $M$-maximal model of $P$. Clearly,
$N\subseteq M$. Moreover, by Proposition \ref{propnewfp},
$\mathit{hset}(P(N))\cap M= N$. Thus, $N$ is indeed an $M$-maximal model of $P$.
We now show $N$ is the least $M$-maximal model of $P$.
Let $N'$ be any $M$-maximal model of $P$. We will show by transfinite
induction that $N\subseteq N'$. Since $X_0=\emptyset$, the basis for the
induction holds. Let us consider an ordinal $\alpha > 0$ and let us
assume that $X_\beta\subseteq N'$, for every $\beta<\alpha$. To show
$N\subseteq N'$, it is sufficient to show that $X_{\alpha}\subseteq N'$.
Let us assume that $\alpha=\beta+1$ for some $\beta<\alpha$. Then, since
$X_\beta\subseteq N'$ and $P$ is a Horn constraint program, we have
$P(X_\beta) \subseteq P(N')$. Consequently,
\[
X_\alpha=X_{\beta+1}= \mathit{hset}(P(X_\beta))\cap M \subseteq \mathit{hset}(P(N'))\cap M
\subseteq N',
\]
the last inclusion follows from the fact that $N'$ is an $M$-maximal
model of $P$.
If $\alpha$ is a limit ordinal, then $X_\alpha=\bigcup_{\beta<\alpha}
X_\beta$ and the inclusion $X_\alpha\subseteq N'$ follows directly from
the induction hypothesis.
\end{proof}
\subsection{Strong Equivalence and SE-models}
Monotone-constraint programs $P$ and $Q$ are {\em strongly equivalent},
denoted by $P \equiv_s Q$, if for every monotone-constraint program $R$,
$P\cup R$ and $Q\cup R$ have the same set of stable models.
To study the strong equivalence of monotone-constraint programs, we
generalize the concept of an {\em SE-model}
due to \citeA{tu03}.
There are close connections between strong equivalence of normal logic
programs and the logic here-and-there. The semantics of the
logic here-and-there is given in terms of Kripke models with two worlds
which, when rephrased in terms of pairs of interpretations (pairs of
sets of propositional atoms), give rise to SE-models.
\begin{definition}
\label{defse}
Let $P$ be a monotone-constraint program and let $X, Y$ be sets of
atoms. We say that $(X,Y)$ is an {\em SE-model} of $P$ if the following
conditions hold: (1) $X \subseteq Y$; (2) $Y \models P$; and (3) $X
\models_Y P^Y$. We denote by $SE(P)$ the set of all SE-models of $P$.
\end{definition}
\noindent
{\bf Examples.}
To illustrate the notion of an SE-model of a monotone-constraint
program, let $P$ consist of the following two rules:
\begin{quote}
$2\{p,q,r\}\leftarrow 1\{q, r\}, \mathbf{not}(3\{p,q,r\})$\\
$1\{p,s\}\leftarrow 1\{p, r\}, \mathbf{not}(2\{p,r\})$
\end{quote}
We observe that $M=\{p,q\}$ is a model of $P$. Let $N=\emptyset$.
Then $N\subseteq M$ and $P^M(N)$ is empty. It follows that $M\cap
\mathit{hset}(P^M(N)) =\emptyset\subseteq N$ and so, $N\models_M P^M$. Hence,
$(N,M)$ is an SE-model of $P$.
Next, let $N'=\{p\}$. It is clear that $N'\subseteq M$. Moreover,
$P^M(N')= \{1\{p,s\}\leftarrow 1\{p, r\}\}$. Hence $M\cap \mathit{hset}(P^M(N'
))=\{p\} \subseteq N'$ and so, $N'\models_M P^M$. That is, $(N',M)$ is
another SE-model of $P$.
\hfill$\bigtriangleup$
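The three conditions of Definition \ref{defse} can be checked directly for finite programs. The sketch below (our illustration, with the same assumed triple encoding of rules) verifies the SE-models found above and rejects a pair that violates condition (3):

```python
from itertools import chain, combinations

def sat(M, A):
    X, C = A
    return frozenset(set(M) & X) in C

def card(k, atoms):
    X = frozenset(atoms)
    subs = chain.from_iterable(
        combinations(sorted(X), n) for n in range(len(X) + 1))
    return (X, {frozenset(s) for s in subs if len(s) >= k})

def reduct(P, M):
    return [(h, pos, []) for h, pos, neg in P
            if not any(sat(M, B) for B in neg)]

def applicable(M, r):
    h, pos, neg = r
    return all(sat(M, A) for A in pos) and not any(sat(M, A) for A in neg)

def is_model(M, P):
    return all(sat(M, r[0]) for r in P if applicable(M, r))

def is_se_model(X, Y, P):
    """(X, Y) is an SE-model: X ⊆ Y, Y |= P, and X |=_Y P^Y."""
    PY = reduct(P, Y)
    hset = set()
    for r in PY:
        if applicable(X, r):
            hset |= r[0][0]
    return (set(X) <= set(Y) and is_model(Y, P)
            and is_model(X, PY) and hset & set(Y) <= set(X))

# The example program P.
P = [(card(2, "pqr"), [card(1, "qr")], [card(3, "pqr")]),
     (card(1, "ps"), [card(1, "pr")], [card(2, "pr")])]
M = {"p", "q"}
print(is_se_model(set(), M, P))     # True
print(is_se_model({"p"}, M, P))     # True
print(is_se_model({"q"}, M, P))     # False: {q} is not a model of P^M
```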
SE-models yield a simple characterization of strong equivalence of
monotone-constraint programs. To state and prove it, we need several
auxiliary results.
\begin{lemma}\label{newlemma}
Let $P$ be a monotone-constraint program and let $M$ be a model of $P$.
Then $(M,M)$ and $(Can(P^M,M),M)$ are both SE-models of $P$.
\label{canse}
\end{lemma}
\begin{proof}
The requirements $(1)$ and $(2)$ of an SE-model hold for $(M,M)$.
Furthermore, since $M$ is a model of $P$, $M\models P^M$. Finally,
we also have $\mathit{hset}(P^M(M))\cap M\subseteq M$. Thus, $M\models_M P^M$.
Similarly, the definition of a canonical computation and
Proposition \ref{propresmod}, imply the first two requirements of the
definition of SE-models for $(Can(P^M,M), M)$. The third
requirement follows from Proposition \ref{max}.
\end{proof}
\begin{lemma}
\label{sesteq}
Let $P$ and $Q$ be two monotone-constraint programs such that $SE(P)=
SE(Q)$. Then $St(P)=St(Q)$.
\end{lemma}
\begin{proof}
If $M\in St(P)$, then $M$ is a model of $P$ and, by
Lemma \ref{newlemma}, $(M,M)\in SE(P)$. Hence, $(M,M)\in SE(Q)$ and,
in particular, $M\models Q$. By Lemma \ref{newlemma} again,
\[
(Can(Q^M, M),M)\in SE(Q).
\]
By the assumption,
\[
(Can(Q^M,M),M)\in SE(P)
\]
and so, $Can(Q^M,M)\models_M P^M$ or, in other terms, $Can(Q^M,M)$ is
an $M$-maximal model of $P^M$. Since $M\in St(P)$, $M=Can(P^M,M)$.
By Proposition \ref{max}, $M$ is the least $M$-maximal model of $P^M$.
Thus, $M \subseteq Can(Q^M,M)$. On the other hand, we have
$Can(Q^M,M)\subseteq M$ and so, $M=Can(Q^M,M)$. It follows that $M$
is a stable model of $Q$. The
other inclusion can be proved in the same way.
\end{proof}
\begin{lemma}
\label{secupcap}
Let $P$ and $R$ be two monotone-constraint programs. Then $SE(P\cup R)=
SE(P)\cap SE(R)$.
\end{lemma}
\begin{proof}
The assertion follows from the following two simple observations.
First, for every set $Y$ of atoms, $Y\models (P\cup R)$ if and only
if $Y\models P$ and $Y\models R$. Second, for every two sets $X$ and
$Y$ of atoms, $X\models_Y(P\cup R)^Y$ if and only if $X\models_Y P^Y$
and $X\models_Y R^Y$.
\end{proof}
\begin{lemma}
\label{semd}
Let $P$, $Q$ be two monotone-constraint programs. If $P\equiv_s Q$, then
$P$ and $Q$ have the same models.
\end{lemma}
\begin{proof}
Let $M$ be a model of $P$. By $r$ we denote a constraint rule
$(M,\{M\})\leftarrow\ $. Then, $M\in St(P\cup \{r\})$. Since $P$
and $Q$ are strongly equivalent, $M\in St(Q\cup \{r\})$. It follows
that $M$ is a model of $Q\cup \{r\}$ and so, also a model of $Q$.
The converse inclusion can be proved in the same way.
\end{proof}
\begin{theorem}
\label{sethm}
Let $P$ and $Q$ be monotone-constraint programs. Then $P\equiv_s Q$ if
and only if $SE(P)=SE(Q)$.
\end{theorem}
\begin{proof}
($\Leftarrow$) Let $R$ be an arbitrary monotone-constraint program.
Lemma \ref{secupcap} implies that $SE(P\cup R)= SE(P)\cap SE(R)$ and
$SE(Q\cup R)=SE(Q)\cap SE(R)$. Since $SE(P)=SE(Q)$, we have that $SE(P
\cup R)= SE(Q\cup R)$. By Lemma \ref{sesteq}, $P\cup R$ and $Q\cup R$ have
the same stable models. Hence, $P\equiv_s Q$ holds.
\noindent
($\Rightarrow$)
Let us assume $SE(P)\setminus SE(Q)\not=\emptyset$ and let us consider
$(X,Y) \in SE(P)\setminus SE(Q)$. It follows that $X\subseteq Y$ and
$Y\models P$. By Lemma \ref{semd}, $Y\models Q$. Since $(X,Y)\notin
SE(Q)$, $X\not\models_Y Q^Y$. It follows that $X\not\models Q^Y$ or
$\mathit{hset}(Q^Y(X))\cap Y \not\subseteq X$. In the first case, there is a
rule $r\in Q^Y(X)$ such that $X\not\models \mathit{hd}(r)$. Since $X\subseteq
Y$ and $Q^Y$ is a Horn constraint program, $r\in Q^Y(Y)$. Let us
recall that $Y\models Q$ and so, we also have $Y\models Q^Y$. It
follows that $Y\models\mathit{hd}(r)$. Since
$\mathit{hset}(r) \subseteq \mathit{hset}(Q^Y(X))$, $Y\cap \mathit{hset}(Q^Y(X))\models \mathit{hd}(r)$.
Thus, $\mathit{hset}(Q^Y(X)) \cap Y \not\subseteq X$ (otherwise, by the
monotonicity of $\mathit{hd}(r)$, we would have $X\models \mathit{hd}(r)$).
The same property holds in the second case. Thus, it follows that
\[
(\mathit{hset}(Q^Y(X))\cap Y)\setminus X\not=\emptyset.
\]
We define
\[
X'= (\mathit{hset}(Q^Y(X))\cap Y)\setminus X.
\]
Let $R$ be a constraint program consisting of the following two
rules:
\begin{quote}
$(X,\{X\})\leftarrow$\\
$(Y,\{Y\})\leftarrow (X',\{X'\})$.
\end{quote}
Let us consider a program $Q_0=Q\cup R$. Since $Y\models Q$ and $X
\subseteq Y$, $Y\models Q_0$. Thus, $Y\models Q_0^Y$ and, in
particular, $Can(Q_0^Y,Y)$ is well defined. Since $R\subseteq
Q_0^Y$, $X\subseteq Can(Q_0^Y,Y)$. Thus, we have
\[
\mathit{hset}(Q_0^Y(X))\cap Y \subseteq \mathit{hset}(Q_0^Y(Can(Q_0^Y,Y))) \cap Y =
Can(Q_0^Y,Y)
\]
(the last equality follows from Proposition \ref{propnewfp}). We
also have $Q\subseteq Q_0$ and so,
\[
X'\subseteq\mathit{hset}(Q^Y(X))\cap Y \subseteq \mathit{hset}(Q_0^Y(X))\cap Y.
\]
Thus, $X'\subseteq Can(Q_0^Y,Y)$. Consequently, by Proposition
\ref{propnewfp}, $Y\subseteq Can(Q_0^Y,Y)$. Since
$Can(Q_0^Y,Y)$ $\subseteq Y$, $Y=Can(Q_0^Y,Y)$ and so, $Y\in St(Q_0)$.
Since $P$ and $Q$ are strongly equivalent, $Y\in St(P_0)$, where
$P_0=P\cup R$. Let us recall that $(X,Y)\in SE(P)$. By Proposition
\ref{max}, $Can(P^Y,Y)$ is the least $Y$-maximal model of $P^Y$. Since
$X$ is a $Y$-maximal model of $P^Y$ (as $X\models_Y P^Y$), it follows
that $Can(P^Y,Y)\subseteq X$. Since $X'\not\subseteq X$, the body of the
second rule of $R$ is false in $X$ and so, $X\models_Y P_0^Y$. Thus,
$Can(P_0^Y,Y)\subseteq X$. Finally, since $X'\subseteq Y$ and
$X'\not\subseteq X$, $Y\not\subseteq X$. Thus,
$Y\not=Can(P_0^Y,Y)$, a contradiction.
It follows that $SE(P)\setminus SE(Q)=\emptyset$. By symmetry,
$SE(Q)\setminus SE(P)=\emptyset$, too. Thus, $SE(P)=SE(Q)$.
\end{proof}
\subsection{Uniform Equivalence and UE-models}
Let $D$ be a set of atoms. By $r_D$ we denote
a monotone-constraint rule
\[
r_D = \ \ (D,\{D\})\leftarrow .
\]
Adding a rule $r_D$ to a program forces all atoms in $D$ to be true
(independently of the program).
Monotone-constraint programs $P$ and $Q$ are {\em uniformly equivalent},
denoted by $P \equiv_u Q$, if for every set of {\em atoms} $D$,
$P\cup \{r_D\}$ and $Q\cup \{r_D\}$ have the same stable models.
An SE-model $(X,Y)$ of a monotone-constraint program $P$
is a {\em UE-model} of $P$ if for every SE-model $(X',Y)$ of $P$ with
$X\subseteq X'$, either $X=X'$ or $X'=Y$ holds. We write $UE(P)$ to
denote the set of all UE-models of $P$. Our notion of a UE-model is a
generalization of the notion of a UE-model
due to \citeA{ef03}
to the
setting of monotone-constraint programs.
\noindent
{\bf Examples.}
Let us look again at the program we used to illustrate the concept of
an SE-model. We showed there that $(\emptyset,\{p,q\})$ and $(\{p\},
\{p,q\})$ are SE-models of $P$. Directly from the definition of
UE-models it follows that $(\{p\}, \{p,q\})$ is a UE-model of $P$.
\hfill$\bigtriangleup$
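The maximality condition defining UE-models is easy to test once SE-models
are available explicitly. The following Python sketch is an illustration
only: SE-models are represented as pairs of frozensets, and we use just the
two SE-models of $P$ quoted above.

```python
def ue_models(se_models):
    """Filter the UE-models out of a set of SE-models: keep (X, Y) when
    every SE-model (X', Y) with X <= X' has X' = X or X' = Y."""
    return {(x, y) for (x, y) in se_models
            if all(xp == x or xp == y
                   for (xp, yp) in se_models if yp == y and x <= xp)}

# The two SE-models of P quoted above, as pairs of frozensets.
E, p, pq = frozenset(), frozenset({"p"}), frozenset({"p", "q"})
se = {(E, pq), (p, pq)}

ue = ue_models(se)
```

With respect to this set, $(\emptyset,\{p,q\})$ is discarded because
$(\{p\},\{p,q\})$ lies strictly between $\emptyset$ and $\{p,q\}$, while
$(\{p\},\{p,q\})$ satisfies the maximality condition.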
We will now present a characterization of uniform equivalence of
monotone-con\-straint programs under the assumption that their sets of
atoms are finite. One can prove a characterization of uniform equivalence
of arbitrary monotone-cons\-traint programs, generalizing one of the
results
by \citeA{ef03}.
However, both the characterization and its proof
are more complex and, for brevity, we restrict our attention to the
finite case only.
We start with an auxiliary result, which allows us to focus only on
atoms in $\mathit{At}(P)$ when deciding whether a pair $(X,Y)$ of sets of atoms
is an SE-model of a monotone-constraint program $P$.
\begin{lemma}
\label{semdat}
Let $P$ be a monotone-constraint program, $X\subseteq Y$ two sets of atoms.
Then $(X, Y)\in SE(P)$ if and only if $(X\cap At(P), Y\cap At(P))\in
SE(P)$.
\end{lemma}
\begin{proof}
Since $X\subseteq Y$ is given, and $X\subseteq Y$ implies $X\cap
At(P)\subseteq Y\cap At(P)$, the first condition of the definition
of an SE-model holds on both sides of the equivalence.
Next, we note that for every constraint $C$, $Y\models C$ if and
only if $Y\cap \mathit{Dom}(C) \models C$. Therefore, $Y\models P$ if and
only if $Y\cap At(P) \models P$. That is, the second condition of the
definition of an SE-model holds for $(X,Y)$ if and only if it holds
for $(X\cap At(P), Y\cap At(P))$.
Finally, we observe that $P^Y = P^{Y\cap At(P)}$
and $P(X) = P(X\cap At(P))$. Therefore,
\[
Y\cap \mathit{hset}(P^Y(X)) = Y\cap \mathit{hset}(P^{Y\cap At(P)}(X\cap At(P))).
\]
Since $\mathit{hset}(P^{Y\cap At(P)}(X\cap At(P)))\subseteq At(P)$, it
follows that
\[
Y\cap \mathit{hset}(P^Y(X))\subseteq X
\]
if and only if
\[
Y\cap \mathit{At}(P)\cap \mathit{hset}(P^{Y\cap \mathit{At}(P)}(X\cap \mathit{At}(P))) \subseteq
X\cap \mathit{At}(P).
\]
Thus, $X\models_Y P^Y$ if and only if $X\cap At(P)
\models_{Y \cap At(P)} P^{Y\cap At(P)}$. That is, the third
condition of the definition of an SE-model holds for $(X,Y)$ if and
only if it holds for $(X\cap At(P), Y\cap At(P))$.
\end{proof}
\begin{lemma}
\label{exmax}
Let $P$ be a monotone-constraint program such that $\mathit{At}(P)$ is finite.
Then for every $(X,Y)\in SE(P)$ such that $X\not=Y$,
the set
\begin{equation}
\label{eq211}
\{X'\colon X\subseteq X'\subseteq Y,\ X'\not=Y,\ (X',Y)\in SE(P)\}
\end{equation}
has a maximal element.
\end{lemma}
\begin{proof}
If $\mathit{At}(P)\cap X =\mathit{At}(P)\cap Y$, then for every element $y\in
Y\setminus X$, $Y\setminus \{y\}$ is a maximal element of the set
(\ref{eq211}). Indeed,
since $(X, Y)\in SE(P)$, by Lemma \ref{semdat}, $(X\cap At(P), Y\cap
At(P))\in SE(P)$. Since $X\cap At(P)=Y\cap At(P)$ and $y\not\in
At(P)$, $X\cap At(P)=(Y\setminus\{y\})\cap At(P)$. Therefore,
$((Y\setminus\{y\})\cap At(P), Y\cap At(P))\in SE(P)$. Then from Lemma
\ref{semdat} and the fact $Y\setminus\{y\}\subseteq Y$, we have
$(Y\setminus\{y\}, Y)\in SE(P)$.
Therefore, $Y\setminus\{y\}$ belongs to the set (\ref{eq211})
and so, it is a maximal element of this set.
Thus, let us assume that $\mathit{At}(P)\cap X \not=\mathit{At}(P)\cap Y$. Let us define
$X'=X\cup (Y\setminus \mathit{At}(P))$. Then $X\subseteq X' \subseteq Y$ and
$X'\not=Y$. Moreover, no element in $X'\setminus X$ belongs to $\mathit{At}(P)$.
That is, $X'\cap At(P)=X\cap At(P)$. Thus, by Lemma \ref{semdat},
$(X',Y)\in SE(P)$ and so, $X'$ belongs to the set (\ref{eq211}).
Since $Y\setminus X'\subseteq \mathit{At}(P)$, by the finiteness of $\mathit{At}(P)$ it
follows that the set (\ref{eq211}) contains a maximal element containing
$X'$. In particular, it contains a maximal element.
\end{proof}
\begin{theorem}
\label{uethm}
Let $P$ and $Q$ be two monotone-constraint programs such that
$\mathit{At}(P)\cup\mathit{At}(Q)$ is finite. Then $P\equiv_u Q$ if and only if
$UE(P)=UE(Q)$.
\end{theorem}
\begin{proof}
($\Leftarrow$) Let $D$ be an arbitrary set of atoms and $Y$ be a stable
model of $P\cup \{r_D\}$. Then $Y$ is a model of $P\cup \{r_D\}$.
In particular, $Y$ is a model of $P$ and so, $(Y,Y) \in UE(P)$. It
follows that $(Y,Y)\in UE(Q)$, too. Thus, $Y$ is a model of $Q$. Since
$Y$ is a model of $r_D$, $D\subseteq Y$. Consequently, $Y$ is a model
of $Q\cup \{r_D\}$ and thus, also of $(Q\cup \{r_D\})^Y$.
Let $X=Can((Q\cup \{r_D\})^Y,Y)$. Then $D\subseteq X\subseteq Y$ and,
by Proposition \ref{max}, $X$ is a $Y$-maximal model of $(Q\cup
\{r_D\})^Y$. Consequently, $X$ is a $Y$-maximal model of $Q^Y$.
Since $X\subseteq Y$ and $Y\models Q$, $(X,Y)\in SE(Q)$.
Let us assume
that $X\not=Y$. Then, by Lemma \ref{exmax}, there is a maximal set
$X'$ such that $X\subseteq X'\subseteq Y$, $X'\not= Y$ and $(X',Y)\in
SE(Q)$. It follows that $(X',Y)\in UE(Q)$. Thus, $(X',Y)\in UE(P)$ and
so, $X'\models_Y P^Y$. Since $D\subseteq X'$, $X'\models_Y (P\cup\{r_D
\})^Y$. We recall that $Y$ is a stable model of $P\cup \{r_D\}$. Thus,
$Y=Can((P\cup\{
r_D\})^Y,Y)$. By Proposition \ref{max}, $Y\subseteq X'$ and so we get
$X'= Y$, a contradiction. It follows that $X=Y$ and,
consequently, $Y$ is a stable model of $Q\cup \{r_D\}$.
By symmetry, every stable model of $Q\cup \{r_D\}$ is also a stable
model of $P\cup \{r_D\}$.
\noindent
($\Rightarrow$)
First, we note that $(Y,Y)\in UE(P)$ if and only if $Y$ is a model
of $P$. Next, we note that $P$ and $Q$ have the same models. Indeed,
the argument used in the proof of Lemma \ref{semd} works also under
the assumption that $P\equiv_u Q$. Thus, $(Y,Y)\in UE(P)$ if and only
if $(Y,Y)\in UE(Q)$.
Now let us assume that $UE(P)\neq UE(Q)$. Let $(X,Y)$ be an element
of $(UE(P)\setminus UE(Q))\cup (UE(Q)\setminus UE(P))$.
Without loss of generality, we can assume that $(X,Y)
\in UE(P)\setminus UE(Q)$. Since $(X,Y)\in UE(P)$, it follows that
\begin{enumerate}
\item $X\subseteq Y$
\item $Y\models P$ and, consequently, $Y\models Q$
\item $X\not=Y$ (otherwise, by our earlier observations, $(X,Y)$
would belong to $UE(Q)$).
\end{enumerate}
Let $R=(Q\cup \{r_X\})^Y$. Clearly, $R$ is a Horn constraint program.
Moreover, since $Y\models Q$ and $X\subseteq Y$, $Y\models R$. Thus,
$Can(R,Y)$ is defined. We have $X\subseteq Can(R,Y) \subseteq Y$. We
claim that $Can(R,Y)\neq Y$. Let us assume to the contrary that
$Can(R,Y)=Y$. Then $Y\in St(Q\cup \{r_X\})$. Hence, $Y\in St(P\cup
\{r_X\})$, that is, $Y=Can((P\cup \{r_X\})^Y,Y)$. By Proposition
\ref{max}, $Y$ is the least $Y$-maximal model of $(P\cup \{r_X\})^Y$
and $X$ is a $Y$-maximal model of $(P\cup \{r_X\})^Y$ (since $(X,Y)\in
SE(P)$, $X\models_Y P^Y$ and so, $X\models_Y (P\cup \{r_X\})^Y$, too).
Consequently, $Y \subseteq X$ and, as $X\subseteq Y$, $X=Y$, a
contradiction.
Thus, $Can(R,Y)\neq Y$.
By Proposition \ref{max}, $Can(R,Y)$ is a $Y$-maximal model of $R$.
Since $Q^Y \subseteq R$, it follows that $Can(R,Y)$ is a $Y$-maximal
model of $Q^Y$ and so, $(Can(R,Y),Y)\in SE(Q)$. Since $Can(R,Y)\not=
Y$, from Lemma \ref{exmax} it follows that there is a maximal set $X'$
such that $Can(R,Y)\subseteq X'\subseteq Y$, $X'\not=Y$ and $(X',Y)\in
SE(Q)$. By the definition, $(X',Y)\in UE(Q)$. Since $(X,Y)\notin
UE(Q)$, $X\not=X'$. Consequently, since $X\subseteq X'$, $X'\not=Y$
and $(X,Y)\in UE(P)$, $(X',Y)\notin UE(P)$.
Thus, $(X',Y)\in UE(Q)\setminus UE(P)$. By applying now the same
argument as above to $(X',Y)$ we show the existence of $X''$ such
that $X'\subseteq X''\subseteq Y$, $X'\not=X''$,
$X''\not=Y$ and $(X'',Y)\in SE(P)$. Consequently, we have $X\subseteq
X''$, $X\not= X''$ and $Y\not=X''$, which contradicts the fact that
$(X,Y)\in UE(P)$. It follows then that $UE(P)=UE(Q)$.
\end{proof}
\noindent
{\bf Examples.}
Let $P=\{ 1 \{ p, q \} \leftarrow \mathbf{not}(2 \{ p, q \}) \}$, and
$Q = \{ p \leftarrow \mathbf{not}(q)$, $q \leftarrow \mathbf{not}(p) \}$.
We claim that $P$ and $Q$ are strongly equivalent. We note that both
programs have
$\{ p \}$, $\{ q \}$, and $\{ p, q \}$ as models. Furthermore, $(\{p\},
\{p\})$, $(\{q\},\{q\})$, $(\{p\},\{p,q\})$, $(\{q\},\{p,q\})$,
$(\{p,q\},\{p,q\})$ and $(\emptyset,\{p,q\})$ are ``all'' SE-models of
the two programs.\footnote{From Lemma \ref{semdat} and Theorem
\ref{sethm}, it follows that only those SE-models that contain atoms
only from $At(P)\cup At(Q)$ are the essential ones.}
Thus, by Theorem
\ref{sethm}, $P$ and $Q$ are strongly equivalent.
We also observe that the first five SE-models are precisely the
UE-models of $P$ and $Q$. Therefore, by Theorem \ref{uethm}, $P$ and $Q$ are also
uniformly equivalent.
It is possible for two monotone-constraint programs to be uniformly
but not strongly equivalent. If we add rule $p \leftarrow $
to $P$, and rule $p \leftarrow q$ to $Q$, then the two resulting
programs, say $P'$ and $Q'$, are uniformly equivalent. However, they
are not strongly
equivalent. The programs $P'\cup\{ q \leftarrow p \}$ and $Q'\cup\{ q
\leftarrow p \}$ have different stable models. Another way to show it
is by observing that $(\emptyset, \{p, q\})$ is an SE-model of $Q'$ but
not an SE-model of $P'$.
\hfill$\bigtriangleup$
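The claims in this example can also be verified mechanically. The Python
sketch below is an illustration only and not part of the formal
development: it represents a constraint $(X,C)$ extensionally as a pair of
a frozenset and a family of frozensets, a rule as a triple (head, positive
body, negated body), and enumerates SE- and UE-models over $\{p,q\}$
directly from the definitions.

```python
from itertools import combinations

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def sat(m, con):                      # M |= (X, C)  iff  M ∩ X ∈ C
    dom, fam = con
    return frozenset(m) & dom in fam

def card(k, atoms):                   # monotone cardinality constraint k {atoms}
    dom = frozenset(atoms)
    return (dom, frozenset(s for s in powerset(dom) if len(s) >= k))

def atom(a):                          # convention: a stands for ({a}, {{a}})
    return (frozenset({a}), frozenset({frozenset({a})}))

def is_model(m, prog):                # a rule is (head, positive, negated)
    return all(sat(m, h) for (h, pos, neg) in prog
               if all(sat(m, a) for a in pos)
               and not any(sat(m, a) for a in neg))

def reduct(prog, y):                  # P^Y: drop blocked rules, drop negation
    return [(h, pos) for (h, pos, neg) in prog
            if not any(sat(y, a) for a in neg)]

def se_models(prog, atoms):
    out = set()
    for y in powerset(atoms):
        if not is_model(y, prog):
            continue
        red = reduct(prog, y)
        for x in powerset(y):
            app = [(h, pos) for (h, pos) in red
                   if all(sat(x, a) for a in pos)]     # rules of P^Y(X)
            if not all(sat(x, h) for (h, _) in app):
                continue                               # X |= P^Y fails
            heads = frozenset(a for (h, _) in app for a in h[0])
            if heads & y <= x:                         # hset(P^Y(X)) ∩ Y ⊆ X
                out.add((x, y))
    return out

def ue_models(se):
    return {(x, y) for (x, y) in se
            if all(xp == x or xp == y
                   for (xp, yp) in se if yp == y and x <= xp)}

P = [(card(1, "pq"), [], [card(2, "pq")])]
Q = [(atom("p"), [], [atom("q")]), (atom("q"), [], [atom("p")])]

se_p, se_q = se_models(P, "pq"), se_models(Q, "pq")
```

On this input the enumeration returns the six SE-models listed above for
both programs, and exactly five UE-models, agreeing with Theorems
\ref{sethm} and \ref{uethm}.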
\section{Fages Lemma}
In general, supported models and stable models of a logic program (both
in the normal case and the monotone-constraint case) do not coincide.
Fages Lemma \cite{fag94}, later extended by \citeA{el03},
establishes a sufficient condition under which a supported model of a
normal logic program is stable. In this section, we show that Fages
Lemma extends to programs with monotone constraints.
\begin{definition}
A monotone-constraint program $P$ is called {\em tight}
on a set $M \subseteq
At(P)$ of atoms, if there exists a mapping $\lambda$ from $M$ to
ordinals such that for every rule $A \leftarrow A_1, \ldots, A_k,
\mathbf{not}(A_{k+1}),$ $\ldots,\mathbf{not}(A_m)$ in $P(M)$, if $X$ is the domain of $A$ and
$X_i$ the domain of $A_i$, $1\leq i\leq k$, then for every $x \in M \cap
X$ and for every $a \in M \cap \bigcup_{i=1}^{k} X_i$, $\lambda(a) <
\lambda(x)$.
\end{definition}
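For a finite set $M$, a level mapping $\lambda$ as required by the
definition exists precisely when the induced dependency relation on $M$ is
acyclic, in which case any topological order supplies $\lambda$. The sketch
below is illustrative only; it assumes the rules of $P(M)$ are passed in a
pre-evaluated abstract form, each as a pair consisting of the head domain
and the list of positive body domains.

```python
def tight_on(m, rules):
    """Decide whether a level mapping lambda exists for a *finite* set m.
    Each rule of P(M) is given abstractly as (head_domain,
    positive_body_domains).  lambda exists iff the graph with an edge
    a -> x, for a in a positive body domain and x in the head domain
    (both restricted to m), is acyclic; a topological order then
    defines lambda."""
    depends = {x: set() for x in m}
    for head_dom, body_doms in rules:
        for x in set(head_dom) & set(m):
            for dom in body_doms:
                depends[x] |= set(dom) & set(m)
    # Kahn-style elimination: strip atoms whose dependencies are resolved.
    remaining = set(m)
    changed = True
    while changed:
        changed = False
        for x in list(remaining):
            if not depends[x] & remaining:
                remaining.discard(x)
                changed = True
    return not remaining              # acyclic iff every atom was stripped

# q can get level 0 and p level 1, so the first instance is tight;
# in the second, p depends positively on itself, so it is not.
T = [({"p"}, [{"q"}]), ({"q"}, [])]
U = [({"p"}, [{"p"}])]
```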
We will now show that tightness provides a sufficient condition for
a supported model to be stable. In order to prove a general result, we
first establish it in the Horn case.
\begin{lemma} \label{fages.horn}
Let $P$ be a Horn monotone-constraint program and let $M$ be a supported
model of $P$. If $P$ is tight on $M$, then $M$ is a stable model of $P$.
\end{lemma}
\begin{proof}
Let $M$ be an arbitrary supported model of $P$ such that $P$ is tight
on $M$. Let $\lambda$ be a mapping showing the tightness of $P$ on $M$.
We will show that for every ordinal $\alpha$ and for every atom $x\in M$
such that $\lambda(x)\leq \alpha$, $x\in Can(P,M)$. We will proceed by
induction.
For the basis of the induction, let us consider an atom $x\in M$ such
that $\lambda(x)=0$. Since $M$ is a supported model for $P$ and
$x \in M$, there exists a rule $r\in P(M)$ such that $x\in \mathit{hset}(r)$.
Moreover, since $P$ is tight on $M$, for every $A\in \mathit{bd}(r)$ and for
every $y\in \mathit{Dom}(A)\cap M$, $\lambda(y) < \lambda(x) = 0$. Thus, for
every $A\in \mathit{bd}(r)$, $\mathit{Dom}(A)\cap M=\emptyset$. Since $M\models \mathit{bd}(r)$
and since $P$ is a Horn monotone-constraint program, it follows that
$\emptyset\models \mathit{bd}(r)$. Consequently, $\mathit{hset}(r)\cap M \subseteq Can
(P,M)$ and so, $x\in Can(P,M)$.
Let us assume that the assertion holds for every ordinal $\beta <
\alpha$ and let us consider $x\in M$ such that $\lambda(x)=\alpha$.
As before, since $M$ is a supported model of $P$, there exists a rule
$r\in P(M)$ such that $x\in \mathit{hset}(r)$. By the assumption, $P$ is tight
on $M$ and, consequently, for every $A\in \mathit{bd}(r)$ and for every $y\in
\mathit{Dom}(A)\cap M$, $\lambda(y) < \lambda(x)=\alpha$. By the induction
hypothesis, for every $A\in \mathit{bd}(r)$, $\mathit{Dom}(A)\cap M \subseteq Can(P,M)$.
Since $P$ is a Horn monotone-constraint program, $Can(P,M)\models
\mathit{bd}(r)$. By Proposition \ref{propnewfp}, $\mathit{hset}(r)\cap M\subseteq Can(P,
M)$ and so, $x\in Can(P,M)$.
It follows that $M\subseteq Can(P,M)$. By the definition of a
canonical computation, we have $Can(P,M) \subseteq M$. Thus, $M=Can(P,
M)$. By Proposition \ref{DerIsStable}, $M$ is a stable model of $P$.
\end{proof}
Given this lemma, the general result follows easily.
\begin{theorem}
\label{fages.thm}
Let $P$ be a monotone-constraint program and let $M$ be a supported
model of $P$. If $P$ is tight on $M$, then $M$ is a stable model of $P$.
\end{theorem}
\begin{proof}
One can check that if $M$ is a supported model of $P$, then it
is a supported model of the reduct $P^M$. Since $P$ is tight on $M$,
the reduct $P^M$ is tight on $M$, too. Thus, $M$ is a stable model of
$P^M$ (by Lemma \ref{fages.horn}) and, consequently, a derivable model
of $P^M$ (by Proposition \ref{DerIsStable}). It follows that $M$ is a
stable model of $P$.
\end{proof}
\section{Logic $\mathit{PL^{mc}}$ and the Completion of
a Monotone-con\-straint Program}
\label{secplmc}
The {\em completion} of a normal logic program \cite{cl78} is a
propositional theory whose models are precisely supported models of the
program. Thus, supported models of normal logic programs can be computed
by means of SAT solvers. Under some conditions, for instance, when the
assumptions of Fages Lemma hold, supported models are stable. Thus,
computing models of the completion can yield stable models, an idea
implemented in the first version of {\em cmodels} software
\cite{cmodels}.
Our goal is to extend the concept of the completion to programs with
monotone constraints. The completion, as we define it, retains much of
the structure of monotone-constraint rules and allows us, in the
restricted setting of {\em lparse} programs, to use pseudo-boolean
constraint solvers to compute supported models of such programs. In this
section we define the completion and prove a result relating supported
models of programs to models of the completion. We discuss extensions
of this result in the next section and their practical computational
applications in Section \ref{sec-appl}.
To define the completion, we first introduce an extension of
propositional logic with monotone constraints, a formalism we denote by
$\mathit{PL^{mc}}$. A {\em formula} in the logic $\mathit{PL^{mc}}$ is an expression built
from monotone constraints by means of boolean connectives $\wedge$,
$\vee$ (and their {\em infinitary} counterparts), $\rightarrow$ and
$\neg$. The notion of a model of a constraint, which we discussed
earlier, extends in a standard way to the class of formulas in the
logic $\mathit{PL^{mc}}$.
For a set $L =\{A_1,\ldots,A_k, \mathbf{not}(A_{k+1}),\ldots, \mathbf{not}(A_m)\}$ of
literals, we define
\[
L^\wedge = A_1\wedge \ldots\wedge A_k\wedge \neg
A_{k+1}\wedge\ldots\wedge \neg A_m.
\]
Let $P$ be a monotone-constraint program. We form the {\em completion}
of $P$, denoted $\mathit{Comp}(P)$, as follows:
\begin{enumerate}
\item For every rule $r\in P$ we include in $\mathit{Comp}(P)$ a $\mathit{PL^{mc}}$ formula
\[
[\mathit{bd}(r)]^\wedge \rightarrow \mathit{hd}(r)
\]
\item For every atom $x\in \mathit{At}(P)$, we include in $\mathit{Comp}(P)$ a $\mathit{PL^{mc}}$
formula
\[
x \rightarrow \bigvee \{[\mathit{bd}(r)]^\wedge\colon r\in P, x\in
\mathit{hset}(r)\}
\]
(we note that when the set of rules in $P$ is infinite, the
disjunction may be infinitary).
\end{enumerate}
The following theorem generalizes a fundamental result on the program
completion from normal logic programming \cite{cl78} to the case of
programs with monotone constraints.
\begin{theorem}
\label{cmp.thm}
Let $P$ be a monotone-constraint program. A set $M\subseteq \mathit{At}(P)$
is a supported model of $P$ if and only if $M$ is a model
of $\mathit{Comp}(P)$.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let us suppose that $M$ is a supported model of $P$.
Then $M$ is a model of $P$, that is, for each rule $r \in P$, if $M
\models \mathit{bd}(r)$ then $M \models \mathit{hd}(r)$. Since $M\models \mathit{bd}(r)$ if
and only if $M\models[\mathit{bd}(r)]^\wedge$, it follows that all formulas
in $\mathit{Comp}(P)$ of the first type are satisfied by $M$.
Moreover, since $M$ is a supported model of $P$, $M \subseteq
\mathit{hset}(P(M))$. That is, for every atom $x\in M$, there exists at least
one rule $r$ in $P$ such that $x\in\mathit{hset}(r)$ and $M\models\mathit{bd}(r)$.
Therefore, all formulas in $\mathit{Comp}(P)$ of the second type are satisfied
by $M$, too.
\noindent
$(\Leftarrow)$ Let us now suppose that $M$ is a model of $\mathit{Comp}(P)$.
Since $M\models \mathit{bd}(r)$ if and only if $M\models[\mathit{bd}(r)]^\wedge$,
and since $M$ satisfies formulas of the first type in $\mathit{Comp}(P)$,
$M$ is a model of $P$.
Let $x\in M$. Since $M$ satisfies the formula $x \rightarrow \bigvee
\{[\mathit{bd}(r)]^\wedge\colon r\in P, x\in \mathit{hset}(r)\}$, it follows that
$M$ satisfies $\bigvee \{[\mathit{bd}(r)]^\wedge\colon r\in P, x\in\mathit{hset}(r)\}$.
That is, there is $r\in P$ such that $M$ satisfies $[\mathit{bd}(r)]^\wedge$
(and so, $\mathit{bd}(r)$, too) and $x\in\mathit{hset}(r)$. Thus, $x\in \mathit{hset}(P(M))$.
Hence, $M$ is a supported model of $P$.
\end{proof}
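For finite programs with extensionally represented constraints, the two
groups of formulas in $\mathit{Comp}(P)$ can be evaluated directly; by Theorem
\ref{cmp.thm}, this tests whether $M$ is a supported model. The Python
sketch below is illustrative only; the representation of constraints and
rules is an assumption made for the example.

```python
from itertools import combinations

def sat(m, con):                     # M |= (X, C)  iff  M ∩ X ∈ C
    dom, fam = con
    return frozenset(m) & dom in fam

def body_holds(m, pos, neg):         # M |= [bd(r)]^wedge
    return all(sat(m, a) for a in pos) and not any(sat(m, a) for a in neg)

def models_completion(m, prog, atoms):
    """Check the two groups of formulas in Comp(P) for a finite program;
    a rule is a triple (head, positive body, negated body)."""
    m = frozenset(m)
    # 1. [bd(r)]^wedge -> hd(r), for every rule r.
    rule_formulas = all(sat(m, h) for (h, pos, neg) in prog
                        if body_holds(m, pos, neg))
    # 2. x -> \/ { [bd(r)]^wedge : x in hset(r) }, for every atom x.
    support_formulas = all(any(body_holds(m, pos, neg)
                               for (h, pos, neg) in prog if x in h[0])
                           for x in m & frozenset(atoms))
    return rule_formulas and support_formulas

def card(k, atoms):                  # cardinality constraint  k {atoms}
    dom = frozenset(atoms)
    subsets = [frozenset(c) for r in range(len(dom) + 1)
               for c in combinations(sorted(dom), r)]
    return (dom, frozenset(s for s in subsets if len(s) >= k))

# Example: P = { 1 {p, q} <- }, whose supported models are {p}, {q}, {p,q}.
P = [(card(1, "pq"), [], [])]
```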
Theorems \ref{fages.thm} and \ref{cmp.thm} have the following
corollary.
\begin{corollary}
Let $P$ be a monotone-constraint program. A set $M\subseteq At(P)$ is a
stable model of $P$ if $P$ is tight on $M$ and $M$ is a model of $\mathit{Comp}(P)$.
\end{corollary}
We observe that for the material in this section it is not necessary to
require that constraints appearing in the bodies of program rules be
monotone. However, since we are only interested in this case, we adopted
the monotonicity assumption here, as well.
\section{Loops and Loop Formulas in Monotone-constraint Programs}
\label{secloop}
The completion alone is not quite satisfactory as it relates {\em
supported}, not {\em stable}, models of monotone-constraint programs to
models of $\mathit{PL^{mc}}$ theories. Loop formulas, proposed
by \citeA{lz02},
provide a way to eliminate those supported models of normal logic
programs, which are not stable. Thus, they allow us to use SAT solvers
to compute stable models of {\em arbitrary} normal logic programs and
not only those for which supported and stable models coincide.
We will now extend this idea to monotone-constraint programs. In this
section, we will restrict our considerations to programs $P$ that are
{\em finitary}, that is, $\mathit{At}(P)$ is finite. This restriction implies
that monotone constraints that appear in finitary programs have finite
domains.
Let $P$ be a finitary monotone-constraint program. The {\em positive
dependency graph} of $P$ is the directed graph $G_P=(V,E)$, where $V=
At(P)$ and $\langle u, v \rangle$ is an edge in $E$ if there exists a
rule $r\in P$ such that $u\in \mathit{hset}(r)$ and $v\in \mathit{Dom}(A)$ for some
monotone constraint $A\in \mathit{bd}(r)$ (that is, $A$ appears non-negated in
$\mathit{bd}(r)$). We note that positive dependency graphs of finitary
programs are finite.
Let $G=(V,E)$ be a directed graph. A set $L\subseteq V$ is a {\em loop}
in $G$ if the subgraph of $G$ induced by $L$ is strongly connected.
A loop is {\em maximal} if it is not a
proper subset of any other loop in $G$. Thus, maximal loops are vertex
sets of strongly connected components of $G$.
A maximal loop is {\em terminating} if there is no edge in $G$
from $L$ to any other maximal loop.
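For finite graphs, maximal loops and terminating loops are easy to
compute: maximal loops are the strongly connected components (under the
definition above, a single vertex always induces a strongly connected
subgraph), and a maximal loop is terminating exactly when no edge leaves
it. The following Python sketch is illustrative; it uses a simple
quadratic reachability fixpoint, where in practice one would use a
linear-time SCC algorithm such as Tarjan's.

```python
def reachable(graph):
    """reachable[v]: vertices reachable from v (v itself included)."""
    r = {v: {v} for v in graph}
    changed = True
    while changed:
        changed = False
        for v in graph:
            new = set(r[v])
            for u in list(r[v]):
                new |= graph.get(u, set())
            if new != r[v]:
                r[v], changed = new, True
    return r

def maximal_loops(graph):
    """Vertex sets of strongly connected components (mutual reachability)."""
    r = reachable(graph)
    return {frozenset(u for u in graph if v in r[u] and u in r[v])
            for v in graph}

def terminating_loops(graph):
    """Maximal loops with no outgoing edge (equivalently, with no edge
    to any other maximal loop, since maximal loops partition the vertices)."""
    return {l for l in maximal_loops(graph)
            if all(graph.get(v, set()) <= l for v in l)}

# Example graph: a <-> b (a loop), plus an edge b -> c.
g = {"a": {"b"}, "b": {"a", "c"}, "c": set()}
```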
These concepts can be extended to the case of programs. By a {\em loop}
({\em maximal loop}, {\em terminating loop}) of a monotone-constraint
program $P$, we mean the loop (maximal loop, terminating loop) of the
positive dependency graph $G_P$ of $P$. We observe that every finitary
monotone-constraint program $P$ has a terminating loop, since $G_P$ is
finite.
Let $X\subseteq \mathit{At}(P)$. By $G_P[X]$ we denote the subgraph of $G_{P}$
{\em induced} by $X$. We observe that if $X\not=\emptyset$ then every
loop of $G_P[X]$ is a loop of $G_P$.
Let $P$ be a monotone-constraint program. For every model $M$ of
$P$ (in particular, for every model $M$ of $\mathit{Comp}(P)$), we define $M^-=
M \setminus Can(P^M,M)$. Since $M$ is a model of $P$, $M$ is a model of
$P^M$. Thus, $Can(P^M,M)$ is well defined and so is $M^-$.
For every loop in the graph $G_P$ we will now define the corresponding
loop formula. First, for a constraint $A=(X,C)$ and a set $L\subseteq
\mathit{At}$, we set $A_{|L}=(X, \{Y\in C\colon Y\cap L=\emptyset\})$ and
call $A_{|L}$ the {\em restriction} of $A$ to $L$. Next, let $r$ be a
monotone-constraint rule, say
\[
r=\ \ A \leftarrow A_1,\ldots,A_k,\mathbf{not}(A_{k+1}),\ldots, \mathbf{not}(A_m).
\]
If $L\subseteq \mathit{At}$, then define a $\mathit{PL^{mc}}$ formula $\beta_L(r)$
by setting
\[
\beta_L(r) = {A_1}_{|L}\wedge\ldots\wedge {A_k}_{|L}\wedge \neg A_{k+1}
\wedge \ldots \wedge \neg A_m.
\]
Let $L$ be a loop of a monotone-constraint program $P$. Then, the {\em
loop formula} for $L$, denoted by $LP(L)$, is the $\mathit{PL^{mc}}$ formula
\[
LP(L) = \bigvee L \rightarrow \bigvee \{\beta_L(r)\colon r\in P\ \mbox{
and}\ L\cap \mathit{hset}(r) \neq \emptyset\}
\]
(we recall that we use the convention of writing $a$ for the constraint
$C(a) = (\{a\},\{\{a\}\})$). A {\em loop completion} of a finitary
monotone-constraint program $P$ is the $\mathit{PL^{mc}}$ theory
\[
LComp(P) = \mathit{Comp}(P)\cup \{LP(L) \colon \mbox{$L$ is a loop in $G_P$}\}.
\]
The following theorem exploits the concept of a loop formula to provide
a necessary and sufficient condition for a model to be a stable model.
\begin{theorem}
\label{loop.thm}
Let $P$ be a finitary monotone-constraint program. A set $M\subseteq
\mathit{At}(P)$ is a stable model of $P$ if and only if $M$ is a
model of $LComp(P)$.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $M$ be a stable model of $P$. Then $M$ is a
supported model of $P$ and, by Theorem \ref{cmp.thm}, $M \models
\mathit{Comp}(P)$.
Let $L$ be a loop in $P$.
If $M\cap L=\emptyset$ then $M\models LP(L)$.
Thus, let us assume that $M\cap L\not=\emptyset$.
Since $M$ is a stable model of $P$, $M$ is a derivable model of
$P^M$, that is, $M=Can(P^M,M)$. Let $(X_n)_{n=0,1,\ldots}$ be the
canonical $P^M$-derivation with respect to $M$ (since $P$ is finitary,
$\mathit{At}(P)$ is finite and so, $P$-derivations reach their results in
finitely many steps). Since
$Can(P^M,M) \cap L = M\cap L \not=\emptyset$, there is a smallest
index $n$ such that $X_n\cap L\not=\emptyset$. In particular, it
follows that $n>0$ (as $X_0=\emptyset$) and $L\cap X_{n-1}=
\emptyset$.
Since $X_n=\mathit{hset}(P^M(X_{n-1}))\cap M$ and $X_n\cap L\not=\emptyset$, there
is a rule $r\in P^M(X_{n-1})$ such that $\mathit{hset}(r)\cap L\not=\emptyset$.
Let $r'$ be a rule in $P$, which
contributes $r$ to $P^M$. Then, for every literal $\mathbf{not}(A)\in \mathit{bd}(r')$,
$M\models \mathbf{not}(A)$. Let $A\in \mathit{bd}(r')$. Then $A\in \mathit{bd}(r)$ and so,
$X_{n-1}\models A$. Since $X_{n-1}\cap L=\emptyset$, $X_{n-1}\models
A_{|L}$, too. By the monotonicity of $A_{|L}$, $M\models A_{|L}$.
Thus, $M\models\beta_L(r')$. Since $\mathit{hset}(r')\cap L\not=\emptyset$,
$\beta_L(r')$ is one of the disjuncts of $LP(L)$
and so, $M\models LP(L)$. Thus, $M\models LComp(P)$.
\noindent
$(\Leftarrow)$ Let us consider a set $M\subseteq \mathit{At}(P)$ such that
$M$ is not a stable model of $P$. If $M$ is not a supported model of
$P$ then, by Theorem \ref{cmp.thm}, $M\not\models \mathit{Comp}(P)$ and so, $M$ is not a model of
$LComp(P)$. Thus, let us assume that $M$ is a supported
model of $P$. It follows that $M^-\not=\emptyset$. Let $L\subseteq
M^-$ be a terminating loop for $G_P[M^-]$.
Let $r'$ be an arbitrary rule in $P$ such that $L\cap \mathit{hset}(r') \neq
\emptyset$, and let $r$ be the rule obtained from $r'$ by removing
negated constraints from its body. Now, let us assume that $M\models
\beta_L(r')$. It follows that for every literal $\mathbf{not}(A)\in \mathit{bd}(r')$, $M
\models \mathbf{not}(A)$. Thus, $r\in P^M$. Moreover, since $L$ is a terminating
loop for $G_P[M^-]$, for every constraint $A\in \mathit{bd}(r')$, $\mathit{Dom}(A)\cap
M^- \subseteq L$. Since $M\models A_{|L}$, it follows that $Can(P^M,M)
\models A$. Consequently, $\mathit{hset}(r')\cap L \subseteq
\mathit{hset}(r')\cap M\subseteq Can(P^M,M)$ and so, $L\cap Can(P^M,M)\not=
\emptyset$, a contradiction. Thus, $M\not\models \bigvee\{\beta_L(r')
\colon r'\in P\ \mbox{and}\ L\cap \mathit{hset}(r') \neq \emptyset\}$. Since
$M\models \bigvee L$, it follows that $M\not\models LP(L)$ and so,
$M\not\models LComp(P)$.
\end{proof}
The following result follows directly from the proof of Theorem
\ref{loop.thm} and provides us with a way to filter out specific
non-stable supported models from $\mathit{Comp}(P)$.
\begin{theorem}
\label{loop.cor}
Let $P$ be a finitary monotone-constraint program and $M$ a model
of $\mathit{Comp}(P)$. If $M^-$ is not empty, then $M$ violates the loop
formula of every terminating loop of $G_P[M^-]$.
\end{theorem}
Finally, we point out that Theorem \ref{loop.thm} does not hold without
the assumption that the program is finitary. Here is a counterexample:
\noindent
{\bf Examples.}
Let $P$ be the set of following rules:
\begin{quote}
\noindent
$1\{a_0\} \leftarrow 1\{a_1\}$\\
$1\{a_1\} \leftarrow 1\{a_2\}$\\
$\cdots$\\
$1\{a_n\} \leftarrow 1\{a_{n+1}\}$\\
$\cdots$
\end{quote}
Let $M=\{a_0,\ldots,a_n,\ldots\}$. Then $M$ is a supported model of $P$.
The only stable model of $P$ is $\emptyset$. However, $M^-=M\setminus
\emptyset=M$ and $G_P[{M^-}]$ does not contain any terminating loop. The
problem arises because there is an infinite simple path in $G_P[{M^-}]$:
the graph has no sink and, since all its maximal loops are singletons,
none of them is terminating.
\hfill$\bigtriangleup$
The results of this section, concerning the program completion and
loop formulas --- most importantly, the loop-completion theorem ---
form the basis of a new software system to compute stable models of
{\em lparse} programs. We discuss this matter in Section \ref{sec-appl}.
\section{Programs with Convex Constraints}
\label{secconvex}
We will now discuss programs with convex constraints, which are closely
related to programs with monotone constraints. Programs with convex
constraints are of interest as they do not involve explicit occurrences
of the default negation operator $\mathbf{not}$, yet are as expressive as programs
with monotone-constraints. Moreover, they directly subsume an essential
fragment of the class of {\em lparse} programs \cite{sns02}.
A constraint $(X,C)$ is {\em convex}, if for every $W,Y,Z \subseteq X$
such that $W \subseteq Y \subseteq Z$ and $W,Z\in C$, we have $Y \in
C$.
A constraint rule of the form
(\ref{eq1a})
is a
{\em convex-constraint rule} if $A$, $A_1,\ldots,A_m$ are convex
constraints and $m=k$ (that is, if the rule contains no occurrence of
$\mathbf{not}$).
Similarly, a constraint program built of
convex-constraint rules is a {\em convex-constraint program}.
The concept of a model discussed in Section \ref{prel} applies to
convex-constraint programs. To define supported and stable models
of convex-constraint programs, we view them as special programs with
monotone-constraints.
To this end, we define the {\em upward} and {\em downward closures} of
a constraint $A=(X,C)$ to be constraints $A^+=(X,C^+)$ and $A^-=(X,C^-)$,
respectively, where
\begin{quote}
$C^+= \{Y\subseteq X\colon \mbox{for some $W\in C$, $W\subseteq Y$}\}$,
{and}\\
$C^-= \{Y\subseteq X\colon \mbox{for some $W\in C$, $Y\subseteq W$}\}$.
\end{quote}
We note that the constraint $A^+$ is monotone. We call a constraint
$(X,C)$ {\em antimonotone} if $C$ is closed under subset, that is, for
every $W, Y \subseteq X$, if $Y \in C$ and $W \subseteq Y$ then $W\in
C$. It is clear that the constraint $A^-$ is antimonotone.
The upward and downward closures allow us to represent any convex
constraint as the ``conjunction'' of a monotone constraint and an
antimonotone constraint. Namely, we have the following property of convex
constraints.
\begin{proposition}
\label{can}
A constraint $(X,C)$ is convex if and only if $C=C^+ \cap C^-$.
\end{proposition}
\begin{proof}
($\Leftarrow$) Let us assume that $C=C^+\cap C^-$ and let us consider
a set $M$ such that $M'\subseteq M \subseteq M''$, where $M',M''\in C$.
It follows that $M'\in C^+$ and $M''\in C^-$. Thus, $M\in C^+$ and $M
\in C^-$. Consequently, $M\in C$, which implies that $(X,C)$ is
convex.
\noindent
($\Rightarrow$) The definitions directly imply that $C\subseteq C^+$
and $C\subseteq C^-$. Thus, $C\subseteq C^+\cap C^-$. Let us consider
$M\in C^+\cap C^-$. Then there are sets $M',M''\in C$ such that $M'
\subseteq M$ and $M\subseteq M''$. Since $C$ is convex, $M\in C$.
Thus, $C^+\cap C^-\subseteq C$ and so, $C=C^+\cap C^-$.
\end{proof}
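Proposition \ref{can} can be verified mechanically on small constraints.
The Python sketch below is illustrative only; constraints are represented
extensionally. It computes the closures $C^+$ and $C^-$ within the power
set of $X$ and compares $C^+\cap C^-$ with a direct check of the
definition of convexity.

```python
from itertools import combinations

def powerset(x):
    x = sorted(x)
    return [frozenset(c) for r in range(len(x) + 1)
            for c in combinations(x, r)]

def up(x, c):       # C^+: sets (within X) containing some member of C
    return frozenset(s for s in powerset(x) if any(w <= s for w in c))

def down(x, c):     # C^-: sets contained in some member of C
    return frozenset(s for s in powerset(x) if any(s <= w for w in c))

def convex(x, c):   # direct check: W ⊆ Y ⊆ Z with W, Z ∈ C implies Y ∈ C
    return all(y in c
               for y in powerset(x)
               for w in c for z in c if w <= y <= z)

# Example: the "interval" constraint 1 {a,b,c} 2 is convex, while the
# family {∅, {a,b}} is not (it misses the intermediate set {a}).
X = frozenset("abc")
C = frozenset(s for s in powerset(X) if 1 <= len(s) <= 2)
C2 = frozenset({frozenset(), frozenset("ab")})
```

As the proposition predicts, $C^+\cap C^-$ recovers $C$ exactly for the
convex constraint and fails to do so for the non-convex one.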
Proposition \ref{can} suggests an encoding of convex-constraint programs as
monotone-constraint programs. To present it, we need more notation.
For a constraint $A=(X,C)$, we call the constraint $(X,\overline{C})$,
where $\overline{C}= {\cal P}(X) \setminus C$, the {\em dual constraint}
for $A$. We denote it by $\overline{A}$.
It is a direct consequence of the
definitions that a constraint $A$ is monotone if and only if its
dual $\overline{A}$ is antimonotone.
Let $C$ be a convex constraint. We set $mc(C)=\{C\}$ if $C$ is monotone.
We set $mc(C)=\{\mathbf{not}(\overline{C})\}$, if $C$ is antimonotone. We define
$mc(C)=\{C^+,\mathbf{not}(\overline{C^-})\}$, if $C$ is neither monotone nor
antimonotone. Clearly, $C$ and $mc(C)$ have the same models.
Let $P$ be a convex-constraint program. By $mc(P)$ we denote the program
with monotone constraints obtained by replacing every rule
$r$ in $P$ with a rule $r'$ such that
\[
\mathit{hd}(r')=\mathit{hd}(r)^+\ \ \mbox{and}\ \ \mathit{bd}(r')=\bigcup\{mc(A)\colon A\in
\mathit{bd}(r)\}
\]
and, if $\mathit{hd}(r)$ is {\em not} monotone, also with an additional rule
$r''$ such that
\[
\mathit{hd}(r'')= (\emptyset, \emptyset)\ \ \mbox{and}\ \ \mathit{bd}(r'')=
\{\overline{\mathit{hd}(r)^-}\}\cup \mathit{bd}(r').
\]
By our observation above, all constraints appearing in rules of $mc(P)$
are indeed monotone, that is, $mc(P)$ is a program with monotone
constraints.
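To illustrate the encoding $mc$ on a single constraint, the following hypothetical sketch evaluates the pair $\{C^+, \mathbf{not}(\overline{C^-})\}$ by brute force and confirms that its models coincide with those of a small convex constraint (the set-of-frozensets representation is again our own choice):

```python
from itertools import chain, combinations

def subsets(X):
    """All subsets of X, as frozensets."""
    X = list(X)
    return [frozenset(s) for s in chain.from_iterable(
        combinations(X, r) for r in range(len(X) + 1))]

def mc_models(X, C):
    """Models of mc(C) = {C^+, not(dual(C^-))} for a convex constraint C."""
    Cplus = {M for M in subsets(X) if any(Mp <= M for Mp in C)}
    Cminus = {M for M in subsets(X) if any(M <= Mpp for Mpp in C)}
    dual_Cminus = set(subsets(X)) - Cminus        # the dual constraint of C^-
    # M satisfies not(A) precisely when M is not a model of A.
    return {M for M in subsets(X)
            if M in Cplus and M not in dual_Cminus}

# A convex example constraint: "pick 1 or 2 elements of {a,b,c}".
X = {"a", "b", "c"}
C = {M for M in subsets(X) if 1 <= len(M) <= 2}
```

For this constraint, `mc_models(X, C)` returns exactly `C`, matching the claim that $C$ and $mc(C)$ have the same models.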
It follows from Proposition \ref{can} that $M$ is a model of $P$ if and
only if $M$ is a model of $mc(P)$. We extend this correspondence to
supported and stable models of a convex constraint program $P$ and the
monotone-constraint program $mc(P)$.
\begin{definition}
Let $P$ be a convex constraint program. Then a set of atoms $M$ is
a supported (or stable) model of $P$ if $M$ is a supported (or
stable) model of $mc(P)$.
\end{definition}
With these definitions, monotone-constraint programs can be viewed
(almost) directly as convex-constraint programs. Namely, we note that
monotone and antimonotone constraints are convex. Next, we observe that
if $A$ is a monotone constraint, the expression $\mathbf{not}(A)$ has the same
meaning as the antimonotone constraint $\overline{A}$ in the sense that
for every interpretation $M$, $M\models \mathbf{not}(A)$ if and only if $M\models
\overline{A}$.
Let $P$ be a monotone-constraint program. By $cc(P)$ we denote the
program obtained from $P$ by
replacing every rule $r$ of the form (\ref{eq1a}) in $P$ with $r'$
such that
\[
\mathit{hd}(r')=\mathit{hd}(r)\ \ \mbox{and}\ \ \mathit{bd}(r')=\bigcup\{A_i\colon i=1,\ldots,k\}
\cup \bigcup\{\overline{A_j}\colon j=k+1,\ldots,m\}
\]
One can show that
programs $P$ and $cc(P)$ have the same models, supported models and
stable models. In fact, for every monotone-constraint program $P$ we
have $P=mc(cc(P))$.
\noindent
{\bf Remark.}
Another consequence of our discussion is that the default negation
operator can be eliminated from the syntax at the price of allowing
antimonotone constraints and using them in place of negated
literals.
\hfill$\Box$
Due to the correspondences we have established above, one can extend to
convex-constraint programs all concepts and results we discussed earlier
in the context of monotone-constraint programs. In many cases, they can
also be stated {\em directly} in the language of convex-constraints. The
most important for us are the notions of the completion and loop formulas,
as they lead to new algorithms for computing stable models of {\em
lparse} programs. Therefore, we will now discuss them in some detail.
As we just mentioned, we could use $\mathit{Comp}(mc(P))$ as a definition of
the completion $\mathit{Comp}(P)$ for a convex-constraint logic program $P$.
Under this definition, Theorem \ref{convexloop.thm} extends to the case
of convex-constraint programs. However, $\mathit{Comp}(mc(P))$ involves monotone
constraints and their negations and {\em not} convex constraints that
appear in $P$. Therefore, we will now propose another approach, which
preserves convex constraints of $P$.
To this end, we first extend the logic $\mathit{PL^{mc}}$ with convex constraints.
In this extension, which we denote by $\mathit{PL^{cc}}$ and refer to as the {\em
propositional logic with convex-constraints}, formulas are boolean
combinations of convex constraints. The semantics of such formulas is
given by the notion of a model obtained by extending over boolean
connectives the concept of a model of a convex constraint.
Thus, the only difference between the logic $\mathit{PL^{mc}}$, which we used to
define the completion and loop completion for monotone-constraint programs
and the logic $\mathit{PL^{cc}}$ is that the former uses monotone constraints as
building blocks of formulas, whereas the latter is based on convex
constraints. In fact, since monotone constraints are special convex
constraints, the logic $\mathit{PL^{mc}}$ is a fragment of the logic $\mathit{PL^{cc}}$.
Let $P$ be a convex-constraint program. The completion of $P$,
denoted by
$\mathit{Comp}(P)$, is the following set of $\mathit{PL^{cc}}$ formulas:
\begin{enumerate}
\item For every rule $r\in P$ we include in $\mathit{Comp}(P)$ a $\mathit{PL^{cc}}$ formula
\[
[\mathit{bd}(r)]^\wedge \rightarrow \mathit{hd}(r)
\]
(as before, for a set of convex constraints $L$, $L^\wedge$ denotes the
conjunction of the constraints in $L$)
\item For every atom $x\in \mathit{At}(P)$, we include in $\mathit{Comp}(P)$ a $\mathit{PL^{cc}}$
formula
\[
x \rightarrow \bigvee \{[\mathit{bd}(r)]^\wedge\colon r\in P,\ x\in
\mathit{hset}(r)\}
\]
(again, we note that when the set of rules in $P$ is infinite, the
disjunction may be infinitary).
\end{enumerate}
One can now show the following theorem.
\begin{theorem}
Let $P$ be a convex-constraint program and let $M\subseteq \mathit{At}(P)$.
Then $M$ is a supported model of $P$ if and only if
$M$ is a model of $\mathit{Comp}(P)$.
\end{theorem}
\begin{proof}
(Sketch) By the definition, $M$ is a supported model of $P$ if and only
if $M$ is a supported model of $mc(P)$. It is a matter of routine
checking that $\mathit{Comp}(mc(P))$ and $\mathit{Comp}(P)$ have the same models. Thus
the assertion follows from Theorem \ref{cmp.thm}.
\end{proof}
Next, we restrict attention to {\em finitary} convex-constraint programs,
that is, programs with a finite set of atoms, and extend to this class of
programs the notions of the positive dependency graph and loops.
To this end, given a finitary convex-constraint program $P$, we exploit
its representation as the monotone-constraint program $mc(P)$. That is,
we define the positive dependency graph, loops and loop formulas for $P$
as the positive dependency graph, loops and loop formulas of $mc(P)$,
respectively. In particular, $L$ is a loop of $P$ if and only if $L$ is
a loop of $mc(P)$ and the loop formula for $L$, with respect to a
convex-constraint program $P$, is defined as the loop formula $LP(L)$
with respect to the program $mc(P)$\footnote{There is one minor
simplification one might employ. For a monotone constraint $A$, $\neg A$
and $\overline{A}$ are equivalent and $\overline{A}$ is antimonotone and
so, convex. Thus, we can eliminate the operator $\neg$ from loop
formulas of convex-constraint programs by writing $\overline{A}$ instead
of $\neg A$.}. We note that since loop formulas for monotone-constraint
programs only modify non-negated literals in the bodies of rules and
leave negated literals intact, there seems to be no simple way to extend
the notion of a loop formula to the case of a convex-constraint program
$P$ without making references to $mc(P)$.
We now define a {\em loop completion} of a finitary convex-constraint
program $P$ as the $\mathit{PL^{cc}}$ theory
\[
LComp(P) = \mathit{Comp}(P)\cup \{LP(L) \colon \mbox{$L$ is a loop of $P$}\}.
\]
We have the following theorem that provides a necessary and
sufficient condition for a set of atoms to be a stable model of a
convex-constraint program.
\begin{theorem}
\label{convexloop.thm}
Let $P$ be a finitary convex-constraint program. A set $M\subseteq
\mathit{At}(P)$ is a stable model of $P$ if and only if $M$ is a model of
$LComp(P)$.
\end{theorem}
\begin{proof}
(Sketch)
Since $M$ is a stable model of $P$ if and only if $M$ is a stable
model of $mc(P)$, Theorem \ref{loop.thm} implies that $M$ is a
stable model of $P$ if and only if $M$ is a stable model of
$LComp(mc(P))$. It is a matter of routine checking that $LComp(mc
(P))$ and $LComp(P)$ have the same models. Thus, the result follows.
\end{proof}
In a similar way, Theorem \ref{loop.cor} implies the following result for
convex-constraint programs.
\begin{theorem}
\label{convexloop.cor}
Let $P$ be a finitary convex-constraint program and $M$ a model
of $\mathit{Comp}(P)$. If $M^-$ is not empty, then $M$ violates the loop
formula of every terminating loop of $G_P[M^-]$.
\end{theorem}
We emphasize that one could simply use $LComp(mc(P))$ as a definition
of the loop completion for a convex-constraint logic program. However,
our definition of the completion component of the loop completion
retains the structure of constraints in a program $P$, which is
important when using loop completion for computation of stable
models, the topic we address in the next section of the paper.
\section{Applications}
\label{sec-appl}
In this section, we will use theoretical results on the program
completion, loop formulas and loop completion of programs with convex
constraints to design and implement a new method for computing stable
models of {\em lparse} programs \cite{sns02}.
\subsection{{\em Lparse} Programs}
\label{wa}
\citeA{sns02} introduced and studied an extension of normal logic
programming with weight atoms. Formally, a {\em weight atom} is an
expression
\[
A = l[a_1=w_1,\ldots,a_k=w_k]u,
\]
where $a_i$, $1\leq i\leq k$ are propositional atoms, and $l,u$ and
$w_i$, $1\leq i\leq k$ are non-negative integers. If all weights $w_i$
are equal to 1, $A$ is a {\em cardinality atom}, written as $l\{a_1,
\ldots,a_k\}u$.
An {\em lparse rule} is an expression of the form
\[
A\leftarrow A_1,\ldots,A_n
\]
where $A$, $A_1,\ldots,A_n$ are weight atoms. We refer to sets of {\em
lparse} rules as {\em lparse programs}. \citeA{sns02} defined for {\em
lparse} programs the semantics of stable models.
A set $M$ of atoms is a {\em model} of (or {\em satisfies}) a weight atom
$l[a_1=w_1,\ldots, a_k=w_k]u$ if
\[
l\leq \sum_{i=1}^k \{w_i\colon a_i\in M\} \leq u.
\]
With this semantics a weight atom $l[a_1=w_1,\ldots, a_k= w_k]u$ can be
identified with a constraint $(X,C)$, where $X=\{a_1,\ldots,a_k\}$ and
\[
C=\{Y\subseteq X\colon l\leq \sum_{i=1}^k \{w_i\colon a_i\in Y\} \leq u\}.
\]
We notice that all weights in a weight atom $W$ are non-negative.
Therefore, if $M\subseteq M'\subseteq M''$ and both $M$ and $M''$
are models of $W$, then $M'$ is also a model of $W$. It follows
that the constraint $(X,C)$ we define above is convex.
Since $(X,C)$ is convex, weight atoms represent a class of convex
constraints and {\em lparse} programs syntactically are a class of
programs with convex constraints. This relationship extends to
the stable-model semantics. Namely, \citeA{mt04} and \citeA{mnt03,mnt06}
showed that {\em
lparse} programs can be encoded as programs with monotone constraints so
that the concept of a stable model is preserved. The transformation used
there coincides with the encoding $mc$ described in the previous section,
when we restrict the latter to {\em lparse} programs. Thus, we have the
following theorem.
\begin{theorem}
\label{lparse}
Let $P$ be an lparse program. A set $M\subseteq \mathit{At}(P)$ is a stable model
of $P$ according to the definition
by \citeA{sns02}
if and only if $M$
is a stable model of $P$ according to the definition given in the
previous section (when $P$ is viewed as a convex-constraint
program).
\end{theorem}
It follows that to compute stable models of {\em lparse} programs we
can use the results obtained earlier in the paper, specifically the
results on program completion and loop formulas for convex-constraint
programs.
\noindent
{\bf Remark.}
To be precise, the syntax of {\em lparse} programs is more
general. It allows both atoms and negated atoms to appear within weight
atoms. It also allows weights to be negative. However, negative weights
in {\em lparse} programs are treated just as a notational convenience.
Specifically, an expression of the form $a=w$ within a weight atom (where
$w<0$) represents the expression $\mathbf{not}(a)=-w$ (eliminating negative weights
in this way from a weight atom requires modifications of the bounds
associated with this weight atom). Moreover, by introducing new
propositional variables one can remove occurrences of negative literals
from programs. These transformations preserve stable models (modulo
new atoms). \citeA{mt04} and \citeA{mnt03,mnt06} provide
a detailed discussion
of this transformation.
In addition to weight atoms, the bodies of {\em lparse} rules may contain
propositional literals (atoms and negated atoms) as conjuncts. We can
replace these propositional literals with weight atoms as follows: an
atom $a$ can be replaced with the cardinality atom $1\{a\}$, and a
literal $\mathbf{not}(a)$ --- with the cardinality atom $\{a\}0$. This
transformation preserves stable models, too. Moreover, the size of the
resulting program does not increase more than by a constant factor.
Thus, through the transformations discussed here, monotone- and
convex-constraint programs capture arbitrary {\em lparse} programs.
\hfill$\Box$
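The replacement of propositional literals by cardinality atoms described in the remark above can be sanity-checked with a hypothetical satisfaction function (the representation below is ours; a missing lower bound is read as $0$ and a missing upper bound as the number of atoms):

```python
def sat_card(M, l, atoms, u):
    """M |= l{a1,...,ak}u  iff  l <= |M intersected with atoms| <= u."""
    n = sum(1 for a in atoms if a in M)
    return l <= n <= u

# An atom a behaves like the cardinality atom 1{a} (the upper bound 1 is
# trivially satisfied), and a negated literal not(a) behaves like {a}0
# (the lower bound 0 is trivially satisfied).
```

Under this reading, `1{a}` holds exactly when `a` is in the model, and `{a}0` holds exactly when it is not, so the transformation preserves satisfaction of rule bodies.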
\subsection{Computing Stable Models of {\em Lparse} Programs}
In this section we present an algorithm for computing stable models
of {\em lparse} programs. Our method uses the results we obtained in
Section \ref{secconvex} to reduce the problem to that of computing
models of the loop completion of an {\em lparse} program. The loop
completion is a formula in the logic $\mathit{PL^{cc}}$, in which the class of
convex constraints is restricted to weight atoms, as defined
in the previous subsection. We will denote the fragment of the logic
$\mathit{PL^{cc}}$ consisting of such formulas by $\mathit{PL^{wa}}$.
To make the method practical, we need programs to compute models of
theories in the logic $\mathit{PL^{wa}}$. We will now show a general way to
adapt to this task off-the-shelf {\em pseudo-boolean constraint
solvers}
\cite{es03,arms02,wal97,pbcomp05,lt03}.
{\em Pseudo-boolean constraints} ($\mathit{PB}$ for short) are integer
programming constraints
in which variables have 0-1 domains. We will write them as inequalities
\begin{equation}
\label{pbeq}
w_1\times x_1 + \ldots + w_k\times x_k \mathit{\ comp\ } w,
\end{equation}
where $\mathit{\ comp\ }$ stands for one of the relations $\leq$, $\geq$, $<$ and
$>$, $w_i$'s and $w$ are integer coefficients (not necessarily
non-negative), and $x_i$'s are integers taking value 0 or 1. A set of
pseudo-boolean constraints is a {\em pseudo-boolean theory}.
Pseudo-boolean constraints can be viewed as constraints in the sense used
throughout this paper. The basic idea
is to treat each 0-1 variable $x$ as a propositional atom (which we will
denote by the same letter). Under this
correspondence, a pseudo-boolean constraint (\ref{pbeq}) is equivalent
to the constraint $(X,C)$, where $X =\{x_1,\ldots,x_k\}$ and
\[
C=\{Y\subseteq X\colon \sum_{i=1}^k\{w_i\colon x_i\in Y\} \mathit{\ comp\ } w\}
\]
in the sense that solutions to (\ref{pbeq}) correspond to models of
$(X,C)$ ($x_i=1$ in a solution if and only if $x_i$ is true in the
corresponding model). In particular, if all coefficients $w_i$ and the
bound $w$ in (\ref{pbeq}) are non-negative, and if $\mathit{\ comp\ }=\mbox{
`$\geq$'}$, then the constraint (\ref{pbeq}) is equivalent to a monotone
lower-bound weight atom $w[x_1=w_1,\ldots,x_k=w_k]$.
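This correspondence between a pseudo-boolean constraint and the constraint $(X,C)$ can be made concrete as follows; the encoding of 0-1 assignments as dictionaries is our own, hypothetical choice:

```python
import operator

# The four comparison relations allowed in a PB constraint.
OPS = {"<=": operator.le, ">=": operator.ge,
       "<": operator.lt, ">": operator.gt}

def pb_holds(assignment, weights, comp, w):
    """Evaluate w_1*x_1 + ... + w_k*x_k  comp  w for a 0-1 assignment."""
    total = sum(wi * assignment[x] for x, wi in weights.items())
    return OPS[comp](total, w)

# 2*x + 3*y >= 3: non-negative coefficients and '>=', so it corresponds
# to the monotone lower-bound weight atom 3[x=2, y=3].
weights = {"x": 2, "y": 3}
```

For instance, the assignment $x=1, y=0$ (total $2$) violates the constraint, while $x=0, y=1$ (total $3$) satisfies it, matching the models of the weight atom $3[x=2, y=3]$.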
It follows that an arbitrary weight atom can be represented by one or
two pseudo-boolean constraints. More generally, an arbitrary $\mathit{PL^{wa}}$
formula $F$ can be encoded as a set of $\mathit{PB}$ constraints. We will describe
the translation as a two-step process.
The first step consists of converting $F$ to a {\em
clausal} form $\mathit{\tau_{cl}}(F)$\footnote{A $\mathit{PL^{wa}}$ {\em clause} is any formula
$B_1\wedge \ldots \wedge B_m \rightarrow H_1\vee\ldots\vee H_n$, where
$B_i$ and $H_j$ are weight atoms.}. To control the size of the
translation, we introduce auxiliary propositional atoms. Below, we
describe the translation $F \mapsto \mathit{\tau_{cl}}(F)$ under the assumption
that $F$ is a formula of the loop completion of an {\em lparse} program
$P$. Our main motivation is to compute stable models of logic programs
and to this end algorithms for computing models of loop completions
are sufficient.
Let $F$ be a formula in the loop completion of an {\em lparse}-program
$P$. We define $\mathit{\tau_{cl}}(F)$ as follows (in the transformation, we use a
propositional atom $x$ as a shorthand for the cardinality atom $C(x)=
1\{x\}$).
\noindent
1. If $F$ is of the form $A_1 \wedge \ldots \wedge A_n \rightarrow A$,
then $\mathit{\tau_{cl}}(F)=F$.\\
2. If $F$ is of the form $ x \rightarrow ([\mathit{bd}(r_1)]^{\wedge}) \vee
\ldots \vee ([\mathit{bd}(r_l)]^{\wedge})$,
then we introduce new propositional atoms $b_{r,1},\ldots,b_{r,l}$ and
set $\mathit{\tau_{cl}}(F)$ to consist of the following $\mathit{PL^{wa}}$ clauses:
\[
x \rightarrow b_{r,1} \vee \ldots \vee b_{r,l}
\]
\[
[\mathit{bd}(r_i)]^{\wedge} \rightarrow b_{r,i}, \textrm{ for every }\mathit{bd}(r_i)
\]
\[
b_{r,i} \rightarrow A_j,\textrm{ for every }\mathit{bd}(r_i)\textrm{ and }
A_j\in \mathit{bd}(r_i)
\]
3. If $F$ is of the form $\bigvee L \rightarrow \bigvee_r
\{\beta_L(r)\}$,
where $L$ is a set of atoms, and every $\beta_L(r)$ is a conjunction of
weight atoms, then we introduce new propositional atoms $bdf_{L,r}$ for
every $\beta_L(r)$ in $F$ and represent $\bigvee L$ as the weight atom
$W_L=1[l_i=1:l_i\in L]$. We then define $\mathit{\tau_{cl}}(F)$ to consist of
the following clauses:
\[
W_L \rightarrow \bigvee bdf_{L,r}
\]
\[
\beta_L(r) \rightarrow bdf_{L,r},\textrm{ for every }\beta_L(r)\in F
\]
\[
bdf_{L,r} \rightarrow A_j,\textrm{ for every }\beta_L(r)\in F
\textrm{ and }A_j\in \beta_L(r).
\]
It is clear that the size of $\mathit{\tau_{cl}}(F)$ is linear in the size of $F$.
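Case 2 of $\mathit{\tau_{cl}}$ is essentially a Tseitin-style renaming of rule bodies. The following hypothetical sketch checks, by exhaustive enumeration and with weight atoms treated as opaque propositional placeholders ($A_1$, $A_2$, $A_3$ below are stand-ins of our own), that the clausal form has the same models as the original formula once the auxiliary atoms are projected away:

```python
from itertools import product

def models(formula, atoms):
    """All assignments (as dicts) over atoms satisfying formula(assignment)."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

# F:  x -> (A1 and A2) or A3, with the weight atoms treated as atoms.
F = lambda v: (not v["x"]) or (v["A1"] and v["A2"]) or v["A3"]

# tau_cl(F): clauses with fresh atoms b1, b2 naming the two rule bodies.
def Tcl(v):
    return (((not v["x"]) or v["b1"] or v["b2"]) and        # x -> b1 v b2
            ((not (v["A1"] and v["A2"])) or v["b1"]) and    # A1 ^ A2 -> b1
            ((not v["A3"]) or v["b2"]) and                  # A3 -> b2
            ((not v["b1"]) or v["A1"]) and                  # b1 -> A1
            ((not v["b1"]) or v["A2"]) and                  # b1 -> A2
            ((not v["b2"]) or v["A3"]))                     # b2 -> A3

base = ["x", "A1", "A2", "A3"]
orig = {tuple(sorted(a for a in base if m[a])) for m in models(F, base)}
clausal = {tuple(sorted(a for a in base if m[a]))
           for m in models(Tcl, base + ["b1", "b2"])}
```

In this example `orig` and `clausal` coincide: every model of $F$ extends uniquely (by the forced values of `b1`, `b2`) to a model of the clausal form, and conversely.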
The second step of the translation converts a $\mathit{PL^{wa}}$ formula $C$ in
a clausal form into a set of $\mathit{PB}$ constraints, $\mathit{\tau_{pb}}(C)$.
To define the translation $C\mapsto \mathit{\tau_{pb}}(C)$, let us consider a
$\mathit{PL^{wa}}$ clause $C$ of the form
\begin{equation}
\label{clause}
B_1 \wedge \ldots \wedge B_m \rightarrow H_1 \vee \ldots \vee H_n,
\end{equation}
where $B_i$'s and $H_i$'s are weight atoms.
We introduce new propositional atoms $b_1,\ldots,b_m$ and $h_1,\ldots,
h_n$ to represent each weight atom in the clause. As noted earlier in
the paper, we simply write $x$ for a weight atom of the form $1[x=1]$.
With the new atoms, the clause (\ref{clause}) becomes a propositional
clause $b_1\wedge \ldots \wedge b_m \rightarrow h_1\vee \ldots \vee
h_n$. We represent it by the following $\mathit{PB}$ constraint:
\begin{equation}
\label{pbclause}
- b_1 - \ldots - b_m + h_1 + \ldots + h_n \geq 1 - m.
\end{equation}
Here and later in the paper, we use the same symbols to denote
propositional variables and the corresponding 0-1 integer variables.
The context will always imply the correct meaning of the symbols.
Under this convention, it is easy to see that a propositional clause
$b_1\wedge \ldots \wedge b_m \rightarrow h_1\vee \ldots \vee h_n$ and
its $\mathit{PB}$ constraint (\ref{pbclause}) have the same models.
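The equivalence of the propositional clause and its $\mathit{PB}$ constraint (\ref{pbclause}) can be verified exhaustively for small $m$ and $n$; the minimal sketch below (our own) does so for $m=n=2$:

```python
from itertools import product

def clause_holds(bs, hs):
    """b1 ^ ... ^ bm -> h1 v ... v hn, over booleans."""
    return (not all(bs)) or any(hs)

def pb_clause_holds(bs, hs):
    """-b1 - ... - bm + h1 + ... + hn >= 1 - m, over 0-1 values."""
    m = len(bs)
    return -sum(bs) + sum(hs) >= 1 - m

# Check the equivalence exhaustively for a clause with m = 2 and n = 2.
ok = all(clause_holds(bs, hs) == pb_clause_holds([int(b) for b in bs],
                                                 [int(h) for h in hs])
         for bs in product([False, True], repeat=2)
         for hs in product([False, True], repeat=2))
```

The check succeeds: the clause fails only when all $b_i$ are true and all $h_j$ are false, exactly the assignments in which the left-hand side of (\ref{pbclause}) equals $-m < 1-m$.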
We introduce next $\mathit{PB}$ constraints that enforce the equivalence of the
newly introduced atoms $b_i$ (or $h_i$) and the corresponding weight
atoms $B_i$ (or $H_i$).
Let $B=l[a_1=w_1,\ldots,a_k=w_k]u$ be a weight atom and $b$ a
propositional atom. We split $B$ to $B^+$ and $B^-$ and introduce two
more atoms $b^+$ and $b^-$. To model $B\equiv b$, we model with
pseudo-boolean constraints the following three equivalences:
$b\equiv b^+ \wedge b^-$, $b^+ \equiv B^+$, and $b^- \equiv B^-$.
\noindent
1. The first equivalence can be captured with three propositional
clauses. Hence the following three $\mathit{PB}$ constraints model that
equivalence:
\begin{equation}
\label{pb3eq1}
-b + b^+ \geq 0
\end{equation}
\begin{equation}
\label{pb3eq2}
-b + b^- \geq 0
\end{equation}
\begin{equation}
\label{pb3eq3}
-b^+ - b^- + b \geq -1
\end{equation}
2. The second equivalence, $b^+\equiv B^+$, can be modeled by the
following two $\mathit{PB}$ constraints
\begin{equation}
\label{pblb1}
(-l)\times b^+ + \sum_{i=1}^k(a_i\times w_i) \geq 0
\end{equation}
\begin{equation}
\label{pblb2}
-(\sum_{i=1}^k w_i -l + 1)\times b^+ +
\sum_{i=1}^k(a_i\times w_i) \leq l-1
\end{equation}
3. Similarly, the third equivalence, $b^-\equiv B^-$, can be modeled
by the following two $\mathit{PB}$ constraints
\begin{equation}
\label{pbub1}
(\sum_{i=1}^k w_i - u)\times b^- +
\sum_{i=1}^k(a_i\times w_i) \leq \sum_{i=1}^k w_i
\end{equation}
\begin{equation}
\label{pbub2}
(u + 1)\times b^- + \sum_{i=1}^k(a_i\times w_i) \geq u+1
\end{equation}
We define now $\mathit{\tau_{pb}}(C)$, for a $\mathit{PL^{wa}}$ clause $C$, as the set of all
pseudo-boolean constraints (\ref{pbclause}) and (\ref{pb3eq1}),
(\ref{pb3eq2}), (\ref{pb3eq3}), (\ref{pbub1}), (\ref{pbub2}),
(\ref{pblb1}), (\ref{pblb2}) constructed for every weight atom occurring
in $C$. One can verify that the size of $\mathit{\tau_{pb}}(C)$ is linear in the
size of $C$. Therefore, $\mathit{\tau_{pb}}(\mathit{\tau_{cl}}(F))$ has size linear in the size of
$F$.
In the special case where all $B_i$'s and $H_j$'s are weight atoms of
the form $1[b_i=1]$ and $1[h_j=1]$, we do not need to introduce any
new atoms and $\mathit{PB}$ constraints (\ref{pb3eq1}), (\ref{pb3eq2}),
(\ref{pb3eq3}), (\ref{pbub1}), (\ref{pbub2}), (\ref{pblb1}),
(\ref{pblb2}). Then $\mathit{\tau_{pb}}(C)$ consists of a single $\mathit{PB}$ constraint
(\ref{pbclause}).
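The correctness of the splitting $b\equiv b^+\wedge b^-$, $b^+\equiv B^+$, $b^-\equiv B^-$ can be checked by brute force for a small weight atom. In the hypothetical sketch below, for every assignment to the atoms of $B$ there is exactly one extension to $b$, $b^+$, $b^-$ satisfying the constraints (\ref{pb3eq1})--(\ref{pbub2}), and in that extension $b$ is true precisely when $B$ is satisfied:

```python
from itertools import product

def split_encoding_holds(atoms_w, l, u, v):
    """All PB constraints modeling b <-> l[...]u, for a 0-1 assignment v."""
    S = sum(w * v[a] for a, w in atoms_w.items())   # weight of the assignment
    W = sum(atoms_w.values())                       # sum of all weights
    b, bp, bm = v["b"], v["b+"], v["b-"]
    return (-b + bp >= 0 and                        # (pb3eq1): b -> b+
            -b + bm >= 0 and                        # (pb3eq2): b -> b-
            -bp - bm + b >= -1 and                  # (pb3eq3): b+ ^ b- -> b
            -l * bp + S >= 0 and                    # (pblb1)
            -(W - l + 1) * bp + S <= l - 1 and      # (pblb2)
            (W - u) * bm + S <= W and               # (pbub1)
            (u + 1) * bm + S >= u + 1)              # (pbub2)

# A hypothetical weight atom B = 1[a1=1, a2=2]2: b must be true exactly
# when 1 <= 1*a1 + 2*a2 <= 2.
atoms_w, l, u = {"a1": 1, "a2": 2}, 1, 2
ok = True
for a1, a2 in product([0, 1], repeat=2):
    exts = [(b, bp, bm) for b, bp, bm in product([0, 1], repeat=3)
            if split_encoding_holds(atoms_w, l, u,
                                    {"a1": a1, "a2": a2,
                                     "b": b, "b+": bp, "b-": bm})]
    S = a1 + 2 * a2
    ok = ok and len(exts) == 1 and exts[0][0] == int(l <= S <= u)
```

The check succeeds because (\ref{pblb1})--(\ref{pblb2}) force $b^+=1$ exactly when $S\geq l$, (\ref{pbub1})--(\ref{pbub2}) force $b^-=1$ exactly when $S\leq u$, and the three clause constraints force $b=b^+\wedge b^-$.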
We have the following theorem establishing the correctness of the
composition of the transformations $\mathit{\tau_{cl}}$ and $\mathit{\tau_{pb}}$. The proof of the
theorem is straightforward.
\begin{theorem}
Let $F$ be a loop completion formula in logic $\mathit{PL^{wa}}$, and $M$ a
set of atoms, $M\subseteq \mathit{At}(F)$. Then $M$ is a model of $F$ in
$\mathit{PL^{wa}}$ logic if and only if $M$ has a unique extension $M'$ by
some of the new atoms in $\mathit{At}(\mathit{\tau_{pb}}(\mathit{\tau_{cl}}(F)))$ such that $M'$
is a model of the pseudo-boolean theory $\mathit{\tau_{pb}}(\mathit{\tau_{cl}}(F))$.
\end{theorem}
We note that when we use solvers designed for $\mathit{PL^{wa}}$ theories, the
translation $\mathit{\tau_{pb}}$ is no longer needed. The benefit of using such
solvers is that we do not need to split weight atoms in the $\mathit{PL^{wa}}$
theories and do not need the auxiliary atoms introduced in $\mathit{\tau_{pb}}$.
\subsubsection{The Algorithm}
We follow the approach proposed
by \citeA{lz02}.
As in that paper, we
first compute the completion of an {\em lparse} program. Then, we iteratively
compute models of the completion using a $\mathit{PB}$ solver. Whenever a
model is found, we test it for stability. If the model is not a
stable model of the program, we extend the completion by loop formulas
identified in Theorem \ref{convexloop.cor}. Often, adding a single loop
formula filters out several models of $\mathit{Comp}(P)$ that are not stable
models of $P$.
The results given in the previous section ensure that our algorithm is
correct. We present it in Figure \ref{fig.alg}. We note that, in the
worst case, exponentially many loop formulas may be needed before the
first stable model is found or we determine that no stable models
exist \cite{lz02}. However, that problem arises only rarely in
practical situations\footnote{In fact, in many cases programs turn out
to be tight with respect to their supported models. Therefore, supported
models are stable and no loop formulas are necessary at all.}.
\begin{figure}
\noindent
\rule{12.2cm}{0.5mm}\\
Input: $P$ --- an {\em lparse} program;\\
\hspace*{0.4in} $A$ --- a pseudo-boolean solver
\noindent
{\bf BEGIN}\\
\mbox{}\ \ \ compute the completion $\mathit{Comp}(P)$ of $P$;\\
\mbox{}\ \ \ $T$ := $\mathit{\tau_{pb}}(\mathit{\tau_{cl}}(\mathit{Comp}(P)))$;\\
\mbox{}\ \ \ {\bf do}\\
\mbox{}\ \ \ \ \ \ {\bf if} (solver $A$ finds no models of $T$)\\
\mbox{}\ \ \ \ \ \ \ \ \ \ \ output ``no stable models found'' and terminate;\\
\mbox{}\ \ \ \ \ \ \ $M$ := a model of $T$ found by $A$;\\
\mbox{}\ \ \ \ \ \ \ {\bf if} ($M$ is stable) output $M$ and terminate;\\
\mbox{}\ \ \ \ \ \ \ compute the reduct $P^M$ of $P$ with respect to $M$;\\
\mbox{}\ \ \ \ \ \ \ compute the greatest stable model $M'$, contained in
$M$, of $P^M$;\\
\mbox{}\ \ \ \ \ \ \ $M^-$ := $M\setminus M'$;\\
\mbox{}\ \ \ \ \ \ \ find all terminating loops in $M^-$;\\
\mbox{}\ \ \ \ \ \ \ compute loop formulas and convert them into $\mathit{PB}$
constraints using\\
\mbox{}\ \ \ \ \ \ \ \ \ \ \ $\mathit{\tau_{pb}}$ and $\mathit{\tau_{cl}}$;\\
\mbox{}\ \ \ \ \ \ \ add all $\mathit{PB}$ constraints computed in the previous
step to $T$;\\
\mbox{}\ \ \ {\bf while} ({\bf true});\\
{\bf END}\\
\rule{12.2cm}{0.5mm}\\
\caption{Algorithm of $\mathit{pbmodels}$}
\label{fig.alg}
\end{figure}
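The control flow of Figure \ref{fig.alg} can be sketched abstractly as follows. This is a schematic skeleton only, not the actual implementation of $\mathit{pbmodels}$: the solver, the stability test, and the computation of loop formulas are passed in as stand-in functions (all hypothetical), and the toy ``solver'' simply scans a fixed list of candidate models.

```python
def pbmodels_loop(find_model, is_stable, loop_formulas):
    """Skeleton of the pbmodels iteration: find a model of the growing
    PB theory, return it if stable, otherwise add loop formulas and retry."""
    theory = []                       # extra PB constraints added so far
    while True:
        M = find_model(theory)
        if M is None:
            return None               # "no stable models found"
        if is_stable(M):
            return M
        theory.extend(loop_formulas(M))

# Toy illustration: two candidate models; only {a} is "stable".
candidates = [frozenset({"a", "b"}), frozenset({"a"})]

def find_model(theory):
    """Stand-in PB solver: first candidate satisfying all constraints."""
    for M in candidates:
        if all(c(M) for c in theory):
            return M
    return None

is_stable = lambda M: M == frozenset({"a"})

def loop_formulas(M):
    """Stand-in for loop formulas: a constraint excluding the failed model."""
    return [lambda N, M=M: N != M]
```

On this toy input the loop first finds $\{a,b\}$, rejects it as unstable, adds a constraint excluding it, and on the second iteration returns the stable model $\{a\}$, mirroring the iteration in Figure \ref{fig.alg}.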
The implementation of $\mathit{pbmodels}$ supports several $\mathit{PB}$ solvers such as
{\em satzoo} \cite{es03}, {\em pbs} \cite{arms02}, {\em wsatoip}
\cite{wal97}. It also supports a program {\em wsatcc} \cite{lt03} for
computing models of $\mathit{PL^{wa}}$ theories. When this last program is used,
the transformation $\mathit{\tau_{pb}}$ from ``clausal'' $\mathit{PL^{wa}}$ theories to
pseudo-boolean theories is not needed. The first two of these four programs are
complete $\mathit{PB}$ solvers. The latter two are local-search solvers based
on {\em wsat} \cite{skc94}.
We output the message ``no stable model found'' in the first line
of the loop and not simply ``no stable models exist'' since in the case
when $A$ is a local-search algorithm, failure to find a model of the
completion (extended with loop formulas in iteration two and the
subsequent ones) does not imply that no models exist.
\subsection{Performance}
In this section, we present experimental results concerning the
performance of $\mathit{pbmodels}$. The experiments compared $\mathit{pbmodels}$, combined with
several $\mathit{PB}$ solvers, to $\mathit{smodels}$ \cite{sns02} and $\mathit{cmodels}$ \cite{cmodels}.
We focused our experiments on problems whose statements explicitly involve
pseudo-boolean constraints, as we designed $\mathit{pbmodels}$ with such problems in
mind.
For most benchmark problems we tried, $\mathit{cmodels}$ did not perform well.
Only in one case (the vertex-cover benchmark) was the performance of $\mathit{cmodels}$
competitive, although even in this case it was not the best performer.
Therefore, we do not report here results we compiled for $\mathit{cmodels}$. For a
complete set of results we obtained in the experiments we refer to
\url{http://www.cs.uky.edu/ai/pbmodels}.
In the experiments we used instances of the following problems: {\em
traveling salesperson}, {\em weighted $n$-queens}, {\em weighted Latin
square}, {\em magic square}, {\em vertex cover}, and {\em Towers of
Hanoi}. The {\em lparse} programs we used for the first four problems
involve general pseudo-boolean constraints. Programs modeling the
last two problems contain cardinality constraints only.
\noindent
{\bf Traveling salesperson problem (TSP)}. An instance consists of a
weighted complete graph with $n$ vertices, and a bound $w$. All edge
weights and $w$ are non-negative integers. A solution to an instance is
a Hamiltonian cycle whose total weight (the sum of the weights of all
its edges) is less than or equal to $w$.
We randomly generated $50$ weighted complete graphs with $20$ vertices.
To this end, in each case we assigned to every edge of a complete
undirected graph an integer weight selected uniformly at random from the
range $[1..19]$. By setting $w$ to $100$ we obtained a set of ``easy''
instances, denoted by {\em TSP-e} (the bound is high enough for every
instance in the set to have a solution). From the same collection of
graphs, we also created a set of ``hard'' instances, denoted by {\em
TSP-h}, by setting $w$ to $62$. Since the requirement on the total weight
is stronger, the instances in this set generally take more time to solve.
\noindent
{\bf Weighted $n$-queens problem (WNQ)}. An instance of the problem
consists of a weighted $n\times n$ chess board and a bound $w$. All
weights and the bound are non-negative integers. A solution to an
instance is a placement of $n$ queens on the chess board so that no two
queens attack each other and the weight of the placement (the sum of the
weights of the squares with queens) is not greater than $w$.
We randomly generated $50$ weighted chess boards of the size $20\times
20$, where each chess board is represented by a set of $n\times n$
integer weights $w_{i,j}$, $1\leq i,j\leq n$, all selected uniformly at
random from the range $[1..19]$. We then created two sets of instances,
easy (denoted by {\em wnq-e}) and hard (denoted by {\em wnq-h}), by
setting the bound $w$ to 70 and 50, respectively.
\noindent
{\bf Weighted Latin square problem (WLSQ)}. An instance consists of an
$n\times n$ array of weights $w_{i,j}$, and a bound $w$. All weights
$w_{i,j}$ and $w$ are non-negative integers. A solution to an instance
is an $n\times n$ array $L$ with all entries from $\{1,\ldots,n\}$ and
such that each element in $\{1,\ldots,n\}$ occurs exactly once in each
row and in each column of $L$, and $\sum_{i=1}^n\sum_{j=1}^n L[i,j]
\times w_{i,j} \leq w$.
We set $n=10$ and we randomly generated $50$ sets of integer weights,
selecting them uniformly at random from the range $[1..9]$. Again we
created two families of instances, easy ({\em wlsq-e}) and hard ({\em
wlsq-h}), by setting $w$ to $280$ and $225$, respectively.
\noindent
{\bf Magic square problem}. An instance consists of a positive integer
$n$. The goal is to construct an $n\times n$ array using each integer
$1,\ldots,n^2$ as an entry in the array exactly once in such a way that
entries in each row, each column and in both main diagonals sum up to
$n(n^2+1)/2$. For the experiments we used the magic square problem for
$n=4,5$ and $6$.
\noindent
{\bf Vertex cover problem}. An instance consists of a graph with $n$
vertices and $m$ edges, and a non-negative integer $k$ --- a bound. A
solution to the instance is a subset of vertices of the graph with no
more than $k$ vertices and such that at least one end vertex of every
edge in the graph is in the subset.
We randomly generated $50$ graphs, each with $80$ vertices and $400$
edges. For each graph, we set $k$ to be the smallest integer such that
a vertex cover with that many elements still exists.
\noindent
{\bf Towers of Hanoi problem}. This is a slight generalization of the
original problem. We considered the case with six disks
and three pegs.
An instance
consists of an initial configuration of disks that satisfies the
constraint of the problem (a larger disk must not be on top of a smaller
one) but does not necessarily require that all disks are on one peg.
These initial configurations were selected so that they were 31,
36, 41 and 63 steps away from the goal configuration (all disks from the
largest to the smallest on the third peg), respectively. We also
considered a standard version of the problem with seven disks, in which
the initial configuration is $127$ steps away from the goal.
We encoded each of these problems as a program in the general syntax of
{\em lparse}, which allows the use of relation symbols and variables
\cite{syr99a}. The programs are available at
\url{http://www.cs.uky.edu/ai/pbmodels}. We then used these programs in
combination with appropriate instances as inputs to {\em lparse}
\cite{syr99a}. In this way, for each problem and each set of instances
we generated a
family of ground (propositional) {\em lparse} programs so that stable
models of each of these programs represent solutions to the corresponding
instances of the problem (if there are no stable models, there are no
solutions). We used these families of {\em lparse} programs as inputs
to solvers we were testing. All these ground programs are also available
at \url{http://www.cs.uky.edu/ai/pbmodels}.
In the tests, we used $\mathit{pbmodels}$ with the following four $\mathit{PB}$ solvers:
$\mathit{satzoo}$ \cite{es03}, $\mathit{pbs}$ \cite{arms02}, $\mathit{wsatcc}$ \cite{lt03},
and $\mathit{wsatoip}$ \cite{wal97}. In particular, $\mathit{wsatcc}$ deals with
$\mathit{PL^{wa}}$ theories directly.
All experiments were run on machines with 3.2GHz Pentium 4 CPU, 1GB
memory, running Linux with kernel version 2.6.11, gcc version 3.3.4.
In all cases, we used 1000 seconds as the timeout limit.
We first show the results for the {\em magic square} and {\em towers of
Hanoi} problems. In Table \ref{tab.msth}, for each solver and each
instance, we report the corresponding running time in seconds.
Local-search solvers were unable to solve any of the instances
in the two problems and so are not included in the table.
\begin{table}[ht]
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\emph{Benchmark}&
\emph{smodels}&
\emph{pbmodels-satzoo}&
\emph{pbmodels-pbs}\tabularnewline
\hline
\hline
\emph{magic square $(4\times4)$}&
$1.36$&
$1.70$&
$2.41$\tabularnewline
\hline
\emph{magic square $(5\times5)$}&
$>1000$&
$28.13$&
$0.31$\tabularnewline
\hline
\emph{magic square $(6\times6)$}&
$>1000$&
$75.58$&
$>1000$\tabularnewline
\hline
\emph{towers of Hanoi $(d=6,t=31)$}&
$16.19$&
$18.47$&
$1.44$\tabularnewline
\hline
\emph{towers of Hanoi $(d=6,t=36)$}&
$32.21$&
$31.72$&
$1.54$\tabularnewline
\hline
\emph{towers of Hanoi $(d=6,t=41)$}&
$296.32$&
$49.90$&
$3.12$\tabularnewline
\hline
\emph{towers of Hanoi $(d=6,t=63)$}&
$>1000$&
$>1000$&
$3.67$\tabularnewline
\hline
\emph{towers of Hanoi $(d=7,t=127)$}&
$>1000$&
$>1000$&
$22.83$\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Magic square and towers of Hanoi problems}
\label{tab.msth}
\end{footnotesize}
\end{table}
Both $\mathit{pbmodels}\mbox{-}\mathit{satzoo}$ and $\mathit{pbmodels}\mbox{-}\mathit{pbs}$ perform better than $\mathit{smodels}$ on programs
obtained from the instances of both problems. We observe that
$\mathit{pbmodels}\mbox{-}\mathit{pbs}$ performs exceptionally well on the towers of Hanoi problem. It
is the only solver that can compute a plan for $7$ disks, which requires
127 steps. The magic square and towers of Hanoi problems are highly regular.
Such problems are often a challenge for local-search solvers, which
may explain the poor performance we observed for $\mathit{pbmodels}\mbox{-}\mathit{wsatcc}$ and
$\mathit{pbmodels}\mbox{-}\mathit{wsatoip}$ on these two benchmarks.
For the remaining four problems, we used 50-element families of
instances, which we generated randomly in the way discussed above. We
studied the performance of complete solvers ($\mathit{smodels}$, $\mathit{pbmodels}\mbox{-}\mathit{satzoo}$ and
$\mathit{pbmodels}\mbox{-}\mathit{pbs}$) on all instances. We then included local-search solvers
($\mathit{pbmodels}\mbox{-}\mathit{wsatcc}$ and $\mathit{pbmodels}\mbox{-}\mathit{wsatoip}$) in the comparisons but restricted attention
only to instances that were determined to be satisfiable (as
local-search solvers are, by their design, unable to decide
unsatisfiability). In Table \ref{tab.suminst}, for each family we list
how many of its instances are satisfiable, unsatisfiable, and for how
many of the instances none of the solvers we tried was able to decide
satisfiability.
\begin{table}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
&
\emph{\# of SAT instances}&
\emph{\# of UNSAT instances}&
\emph{\# of UNKNOWN instances}\tabularnewline
\hline
\hline
\emph{TSP-e}&
$50$&
$0$&
$0$\tabularnewline
\hline
\emph{TSP-h}&
$31$&
$1$&
$18$\tabularnewline
\hline
\emph{wnq-e}&
$49$&
$0$&
$1$\tabularnewline
\hline
\emph{wnq-h}&
$29$&
$0$&
$21$\tabularnewline
\hline
\emph{wlsq-e}&
$45$&
$4$&
$1$\tabularnewline
\hline
\emph{wlsq-h}&
$8$&
$41$&
$1$\tabularnewline
\hline
\emph{vtxcov }&
$50$&
$0$&
$0$\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Summary of Instances}
\label{tab.suminst}
\end{footnotesize}
\end{table}
In Table \ref{tab.sum}, for each of the seven families of instances
and for each {\em complete} solver, we report two values $s/w$,
where $s$ is the number of instances solved by the solver and $w$ is
the number of times it was the fastest among the three.
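The $s/w$ pairs just described can be tallied mechanically from per-instance runtimes. A sketch of the tally (the solver names are real, but the timings below are invented for illustration and ties count as wins for every tied solver):

```python
TIMEOUT = 1000.0  # seconds, the cutoff used in the experiments

def summarize(times):
    """times maps solver name -> list of per-instance runtimes (aligned).
    Returns solver -> (s, w): s = instances solved within the timeout,
    w = instances on which the solver was the fastest."""
    solvers = list(times)
    n_inst = len(times[solvers[0]])
    summary = {s: [0, 0] for s in solvers}
    for i in range(n_inst):
        row = {s: times[s][i] for s in solvers}
        best = min(row.values())
        for s, t in row.items():
            if t < TIMEOUT:
                summary[s][0] += 1
                if t == best:
                    summary[s][1] += 1
    return {s: tuple(v) for s, v in summary.items()}

# Three illustrative instances (timings are made up):
demo = {"smodels":         [1.4, 1000.0, 5.0],
        "pbmodels-satzoo": [1.7,   28.1, 1000.0],
        "pbmodels-pbs":    [2.4,    0.3, 1000.0]}
print(summarize(demo))
# → {'smodels': (2, 2), 'pbmodels-satzoo': (2, 0), 'pbmodels-pbs': (2, 1)}
```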
\begin{table}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
&
\emph{smodels}&
\emph{pbmodels-satzoo}&
\emph{pbmodels-pbs}\tabularnewline
\hline
\hline
\emph{TSP-e}&
$45/17$&
$50/30$&
$18/3$\tabularnewline
\hline
\emph{TSP-h}&
$7/3$&
$16/14$&
$0/0$\tabularnewline
\hline
\emph{wnq-e}&
$11/5$&
$26/23$&
$0/0$\tabularnewline
\hline
\emph{wnq-h}&
$2/2$&
$0/0$&
$0/0$\tabularnewline
\hline
\emph{wlsq-e}&
$21/1$&
$49/29$&
$46/19$\tabularnewline
\hline
\emph{wlsq-h}&
$0/0$&
$47/26$&
$47/23$\tabularnewline
\hline
\emph{vtxcov }&
$50/40$&
$50/1$&
$47/3$\tabularnewline
\hline
\emph{sum over all}&
$136/68$&
$238/123$&
$158/48$\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Summary on all instances}
\label{tab.sum}
\end{footnotesize}
\end{table}
The results in Table \ref{tab.sum} show that overall $\mathit{pbmodels}\mbox{-}\mathit{satzoo}$ solved
more instances than $\mathit{pbmodels}\mbox{-}\mathit{pbs}$, followed by $\mathit{smodels}$. When we look at the
number of times a solver was the fastest one, $\mathit{pbmodels}\mbox{-}\mathit{satzoo}$ was a clear
winner overall, followed by $\mathit{smodels}$ and then by $\mathit{pbmodels}\mbox{-}\mathit{pbs}$. Looking
at the seven families of tests individually, we see that $\mathit{pbmodels}\mbox{-}\mathit{satzoo}$
performed better than the other two solvers on five of the families.
On the other two, $\mathit{smodels}$ was the best performer (although it is a clear
winner only on the vertex-cover benchmark; all solvers were essentially
ineffective on \emph{wnq-h}).
We also studied the performance of $\mathit{pbmodels}$ combined with local-search
solvers $\mathit{wsatcc}$ \cite{lt03} and $\mathit{wsatoip}$ \cite{wal97}. For this
study, we considered only those instances in the seven families that
we knew were satisfiable. Table \ref{tab.sumsat} presents results for all
solvers we studied (including the complete ones). As before, each entry
provides a pair of numbers $s/w$, where $s$ is the number of solved
instances and $w$ is the number of times the solver performed better
than its competitors.
\begin{table}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
&
\emph{smodels}&
\emph{pbmd-satzoo}&
\emph{pbmd-pbs}&
\emph{pbmd-wsatcc}&
\emph{pbmd-wsatoip}\tabularnewline
\hline
\hline
\emph{TSP-e}&
$45/3$&
$50/5$&
$18/2$&
$32/7$&
$47/34$\tabularnewline
\hline
\emph{TSP-h}&
$7/0$&
$16/2$&
$0/0$&
$19/6$&
$28/22$\tabularnewline
\hline
\emph{wnq-e}&
$11/0$&
$26/0$&
$0/0$&
$49/45$&
$49/4$\tabularnewline
\hline
\emph{wnq-h}&
$2/0$&
$0/0$&
$0/0$&
$29/15$&
$29/14$\tabularnewline
\hline
\emph{wlsq-e}&
$21/0$&
$45/0$&
$44/0$&
$45/33$&
$45/14$\tabularnewline
\hline
\emph{wlsq-h}&
$0/0$&
$7/0$&
$8/0$&
$7/1$&
$8/7$\tabularnewline
\hline
\emph{vtxcov }&
$50/0$&
$50/0$&
$47/0$&
$50/36$&
$50/15$\tabularnewline
\hline
\emph{sum over all}&
$136/3$&
$194/7$&
$117/2$&
$231/143$&
$256/110$\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Summary on SAT instances}
\label{tab.sumsat}
\end{footnotesize}
\end{table}
The results show superior performance of $\mathit{pbmodels}$ combined with
local-search solvers. They solve more instances than complete
solvers (including $\mathit{smodels}$). In addition, they are significantly faster,
winning much more frequently than complete solvers do (complete solvers
were faster only on 12 instances, while local-search solvers were
faster on 253 instances).
Our results demonstrate that $\mathit{pbmodels}$ with solvers of pseudo-boolean
constraints outperforms $\mathit{smodels}$ on several types of search problems
involving pseudo-boolean (weight) constraints.
We note that we also analyzed the run-time distributions for each of
these families of instances. A run-time distribution is regarded as
a more accurate and detailed measure of the performance of algorithms
on randomly generated instances\footnote{\citeA{hs05} provide a detailed
discussion of this matter in the context of local-search methods.}. The
results are consistent with the summary results presented above and
confirm our conclusions. As the discussion of run-time distributions
requires much space, we do not include this analysis here.
They are available at the website \url{http://www.cs.uky.edu/ai/pbmodels}.
\section{Related work}
Extensions of logic programming with means to model properties of {\em
sets} (typically consisting of ground terms) have been extensively
studied. Usually, these extensions are referred to by the common term
of {\em logic programming with aggregates}. The term comes from the fact
that most properties of sets of practical interest are defined through
``aggregate'' operations such as sum, count, maximum, minimum and
average. We chose the term {\em constraint} to stress that we speak
about abstract properties that define constraints on truth assignments
(which we view as sets of atoms).
\citeA{mum90}, and \citeA{ks91} were among the first to study
logic programs with aggregates. Recently, \citeA{nss99} and
\citeA{sns02}
introduced the class of {\em lparse} programs. We discussed this
formalism in detail earlier in this paper.
\citeA{p04} and \citeA{pdb06} studied a more general class of aggregates
and developed
a systematic theory of aggregates in logic programming based on the
approximation theory \cite{dmt00a}. The resulting theory covers not only
the stable models semantics but also the supported-model semantics and
extensions of 3-valued Kripke-Kleene and well-founded semantics. The
formalism introduced and studied by \citeA{p04} and \citeA{pdb06}
allows for arbitrary aggregates (not only monotone ones) to appear in
the bodies of rules. However, it does not
allow for aggregates to appear in the heads of program clauses. Due to
differences in the syntax and the scope of semantics studied, there is no
simple way to relate
\citeS{p04} and \citeS{pdb06} formalism
to programs with monotone (convex)
constraints. We note though that programs with abstract monotone
constraints with the heads of rules of the form $C(a)$ can be viewed
almost literally as programs in the formalism by \citeA{p04} and
\citeA{pdb06} and that they have the same stable models according to
the definitions we used in this paper and those by \citeA{p04}
and \citeA{pdb06}.
\citeA{flp04} developed the theory of {\em disjunctive} logic programs
with aggregates. Similarly as \citeA{p04} and \citeA{pdb06},
\citeA{flp04} do not allow for aggregates to appear in the heads of
program clauses. This is one
of the differences between that approach and programs with monotone
(convex) constraints we studied here. The other major difference is
related to the postulate of the minimality of stable models (called {\em
answer sets} in the context of the formalism considered
by \citeR{flp04}).
In keeping with the spirit of the original answer-set semantics
\cite{gl90b}, answer sets of disjunctive programs with aggregates, as
defined
by \citeA{flp04},
are minimal models. Stable models of programs
with abstract constraints do not have this property. However, for the
class of programs with abstract monotone constraints with the heads of
rules of the form $C(a)$ the semantics of answer sets defined
by \citeA{flp04}
coincides with the semantics of stable models by
\citeA{mt04} and \citeA{mnt03,mnt06}.
Yet another approach to aggregates in logic programming was presented
by \citeA{sp06}.
That approach considered programs with syntax similar
to programs with monotone abstract constraints. It allowed arbitrary
constraints (not only monotone ones), but not under the scope of the $\mathbf{not}$
operator. A general principle behind the definition of the stable-model
semantics
by \citeA{sp06}
is to view a program with constraints
as a concise representation of a set of its ``instances'', each being a
normal logic program. Stable models of the program with constraints are
defined as stable models of its instances; this definition is quite different from
the operator-based definition
by \citeA{mt04} and \citeA{mnt03,mnt06}.
However, for programs
with {\em monotone} constraint atoms which fall in the scope of the
formalism of \citeA{sp06} both approaches coincide.
We also note that recently \citeA{spt06} presented a {\em conservative}
extension of the syntax proposed by \citeA{mt04} and \citeA{mnt06}, in which
clauses are built of arbitrary constraint atoms.
Finally, we point out the work by
\citeA{fl04} and \citeA{fer05}
which treats
aggregates as {\em nested expressions}. In particular, \citeA{fer05}
introduces a propositional logic with a certain nonclassical semantics,
and shows that it extends several approaches to programs with aggregates,
including those by
\citeA{sns02}
(restricted to core lparse programs) and
\citeA{flp04}.
The nature of the relationship of the formalism
by \citeA{fer05}
and programs with abstract constraints remains an open
problem.
\section{Conclusions}
Our work shows that concepts, techniques and results from normal logic
programming, concerning strong and uniform equivalence, tightness and
Fages' lemma, program completion and loop formulas, generalize to the
abstract setting of programs with monotone and convex constraints.
These general properties specialize to {\em new} results about {\em
lparse} programs (with the exception of the characterization of strong
equivalence of {\em lparse} programs, which was first obtained
by \citeR{tu03}).
Given these results, we implemented a new software system, {\em pbmodels}, for
computing stable models of {\em lparse} programs. The approach reduces
the problem to that of computing models of theories consisting of
pseudo-boolean constraints, for which several fast solvers exist
\cite{pbcomp05}. Our experimental results show that {\em pbmodels}
with $\mathit{PB}$ solvers, especially local search $\mathit{PB}$ solvers, performs better
than $\mathit{smodels}$ on several types of search problems we tested. Moreover, as
new and more efficient solvers of pseudo-boolean constraints become
available (the problem is receiving much attention in the satisfiability
and integer programming communities), the performance of {\em pbmodels}
will improve accordingly.
\section*{Acknowledgments}
We acknowledge the support of NSF grants IIS-0097278 and IIS-0325063.
We are grateful to the reviewers for their useful comments and
suggestions.
This paper combines and extends results included in conference papers
\cite{lt05,lt05b}.
{\small
Polarons emerge naturally from cold atom mixtures with an extreme population
imbalance where minority atoms are so outnumbered by majority atoms that they may be considered impurities submerged in a host medium. Polaron studies have undergone an exciting revival in recent years, sparked by
the experimental realization of polarons in mixtures of fermionic atoms
\cite{schirotzek09PhysRevLett.102.230402,nascimbene09PhysRevLett.103.170402},
with properties that are in excellent agreement with theoretical predictions
\cite{prokofev08PhysRevB.77.020408,mora09PhysRevA.80.033607}. This
resurgence, which originally centered on the Fermi polaron problem where
background atoms are fermions (see
\cite{chevy10RepProgPhys.73.112401,massignan13arXiv:1309.0219} for a review), has spread rapidly to its bosonic cousin, where background atoms are
bosons, and led to recent detailed experimental studies \cite{jorgensen16arXiv:1604.07883,hu16arXiv:1605.00729}. This so-called Bose polaron problem has been the subject of theoretical studies using a variety of tools, including a weak coupling ansatz
\cite{Huang09ChinesePhysicsLetters.26.080302,shashi14PhysRevA.89.053617,kain14PhysRevA.89.023612},
a strong coupling approach
\cite{cucchietti06PhysRevLett.96.210401,sacha06PhysRevA.73.063604,casteels11LaserPhysics.21.1480}
involving the Landau and Pekar treatment
\cite{landau46ZhEkspTeorFiz.16.341,landau48ZhEkspTeorFiz.18.419}, a
variational approach \cite{tempere09PhysRevB.80.184504} based on Feynman's
path integral formalism \cite{feynman55PhysRev.97.660}, those
\cite{li14PhysRevA.90.013618,levinsen15PhysRevLett.115.125302} inspired by
a Chevy-type variational ansatz \cite{chevy06PhysRevA.74.063628}, exact
numerical simulation \cite{vlietinck15NewJournalOfPhysics.17.033023} based
upon the diagrammatic quantum Monte Carlo (MC) method
\cite{prokofev98PhysRevLett.81.2514,mishchenko00PhysRevB.62.6317} and the diffusion Monte Carlo method \cite{ardila_giorgini}, and a
systematic perturbation expansion
\cite{rath13PhysRevA.88.053632,sogaard15PhysRevLett.115.160401} involving
use of the $T$-matrix \cite{fetter71ManyParticleSystemsBook}.
Our interest here is in the Fr\"{o}hlich model
\cite{mahan00ManyParticlePhysicsBook}, a generic polaron model describing a
single mobile impurity interacting with a bath of bosonic particles.
Interest in this model has remained virtually unabated ever since Landau and
Pekar \cite{landau46ZhEkspTeorFiz.16.341,landau48ZhEkspTeorFiz.18.419} likened
a polaron to an impurity dressed in a cloud of nearby phonons and Fr\"{o}hlich
\cite{frohlich54AdvPhys.3.325} formulated the problem in its present form more
than half a century ago (see \cite{alexandrov07Book} for a review). The recent upsurge of interest in the Bose polaron problem has once again brought the
Fr\"{o}hlich polaron to the forefront, examples of which include those in
Refs.\
\cite{cucchietti06PhysRevLett.96.210401,tempere09PhysRevB.80.184504,casteels11PhysRevA.83.033631}
for large (continuous) polarons and those in Refs.\
\cite{bruderer07PhysRevA.76.011605,bruderer08NewJournalOfPhysics.10.033015,privitera10PhysRevA.82.063614,tao15PhysRevA.92.063635}
for small (Holstein) polarons.
The present work has been motivated by recent studies
\cite{shashi14PhysRevA.89.053617,shchadilova14arXiv:1410.5691,grusdt15ScientificReports.5.12124,grusdt15arXiv:1510.04934}
that applied the well-known Lee, Low, and Pine (LLP) transformation
\cite{lee53PhysRev.90.297} to convert the Fr\"{o}hlich model, in which
impurities interact with non-interacting phonons, to the LLP-Fr\"ohlich model (or LLP model for short), which describes an interacting phonon system free of impurity degrees of freedom. When described within mean-field (MF) theory, the phonon ground state is a direct product of coherent
states at different momentum modes \cite{lee53PhysRev.90.297}---quantum
fluctuations (correlations), which can be of vital importance to a strongly
interacting system, are notably absent. We are particularly inspired by
recent attempts to overcome this weakness inherent in the MF product state by
Shchadilova \textit{et al}.\ \cite{shchadilova14arXiv:1410.5691} using a
correlated Gaussian wave function (CGW) ansatz
\cite{altanhan93JPhysCondensMatter.5.6729,kandemir94JPhysCondensMatter.6.4505}
and Grusdt \textit{et al}.\
\cite{grusdt15ScientificReports.5.12124,grusdt15arXiv:1510.04934} using a
renormalization group (RG) approach \cite{hewson97HeavyFermionsBook}.
We adapt the self-consistent Hartree-Fock-Bogoliubov (HFB) approach to the interacting phonons in the LLP model. The HFB-based approach
shall be similar, in spirit, to the CGW ansatz, where various cross mode
correlations are automatically built in. However, instead of independent
variables housed in a symmetric matrix, we parametrize quantum fluctuations
between various momentum modes with dependent variables (which will be the density and pair
correlation functions) housed in a single-particle density matrix. As a
result, instead of an unconstrained minimization we perform a constrained
minimization of energy with respect to the variational parameters
characterizing the quasiparticle vacuum defined via a
generalized Bogoliubov transformation. This approach allows Fr\"{o}hlich
polarons to be studied self consistently without having to introduce
additional small perturbative parameters.
We test our HFB formalism by applying it to
Fr\"{o}hlich polarons in quasi-1D cold atom mixtures. A remarkable feature of cold atom systems is that system parameters, such as
dimensionality and coupling strength, can be tuned precisely
\cite{bloch08RevModPhys.80.885}. Potential avenues for realizing Bose polarons include Bose-Fermi mixtures with fermionic
impurities, e.g.\ $^{7}$Li-$^{6}$Li
\cite{schreck01PhysRevLett.87.080403,truscott01Science.291.2570,ferrierBarbut14Science.345.1035},
$^{23}$Na-$^{6}$Li
\cite{hadzibabic02PhysRevLett.88.160401,stan04PhysRevLett.93.143001,
schuster12PhysRevA.85.042721}, $^{87}$Rb-$^{40}$K
\cite{ferrari02PhysRevLett.89.053202,roati02PhysRevLett.89.150403,inouye04PhysRevLett.93.183201,ferlaino06PhysRevA.73.040702},
$^{23}$Na-$^{40}$K \cite{park12PhysRevA.85.051602}, $^{87}$Rb-$^{6}$Li
\cite{silber05PhysRevLett.95.170408}, and $^{4}$He-$^{3}$He
\cite{macnamara06PhysRevLett.97.080404}, Bose-Bose mixtures with bosonic
impurities, e.g.\ $^{85}$Rb-$^{87}$Rb \cite{bloch01PhysRevA.64.021402}, $^{87}$Rb-$^{41}$K
\cite{modugno01Science.294.1320,catani08PhysRevA.77.011603,catani12PhysRevA.85.023623},
and $^{87}$Rb-$^{133}$Cs
\cite{mcCarron11PhysRevA.84.011603,spethmann12PhysRevLett.109.235301}, and
ion-Bose mixtures with ionic impurities, e.g.\ Ba$^{+}$-$^{87}$Rb
\cite{schmid10PhysRevLett.105.133202}. 1D systems have the nice property that
particle interactions can be resonantly enhanced by confinement-induced
resonance \cite{olshanii98PhysRevLett.81.938,bergeman03PhysRevLett.91.163201},
in addition to the usual Feshbach resonance \cite{chin10RevModPhys.82.1225}.
Bose polarons in quasi-1D cold atoms have recently been experimentally
\cite{catani12PhysRevA.85.023623} and theoretically
\cite{catani12PhysRevA.85.023623,casteels12PhysRevA.86.043614} investigated. The importance of HFB type quantum fluctuations in 1D Bose polarons has been stressed by Sacha and Timmermans \cite{sacha06PhysRevA.73.063604} in connection with impurity self-localization.
An important goal of the present work is to gain clean insight into how
phonon-phonon interactions in the LLP model affect quantum fluctuations, which in turn affect the underlying polaron states. In 3D (as well as 2D \cite{casteels12PhysRevA.86.043614}) atomic models, computing the polaron energy
involves a momentum integral that contains an ultraviolet divergence
\cite{tempere09PhysRevB.80.184504}. At the MF level
\cite{kain14PhysRevA.89.023612,shashi14PhysRevA.89.053617}, regularization
based on the Lippmann-Schwinger equation
\cite{fetter71ManyParticleSystemsBook} can remove this divergence, but such
regularization is unable to stem the log-divergence expected to arise in more
elaborate, e.g.\ RG and CGW, methods
\cite{grusdt15ScientificReports.5.12124,shchadilova14arXiv:1410.5691}. By
contrast, 1D models do not suffer from such a problem. Thus, testing the HFB
theory using 1D models provides us with a ``proof-of-principle'' opportunity, allowing us to interpret our results in a manner free
of complications due to the ultraviolet divergence.
Our paper is organized as follows. In Sec.\ II we review and adapt the HFB
theory to the generic LLP model. We construct the energy functional,
assuming the system to be in a generalized Bogoliubov quasiparticle vacuum
parameterized in terms of phonon fields describing a MF coherent state and
density and pair correlation functions describing quantum fluctuations.
We apply the constrained Ritz variational principle to arrive at a set of
HFB equations specific to the LLP model. In Sec.\ III we focus on a quasi-1D
Bose polaron in the context of cold atom physics and solve the problem using
our HFB theory self consistently. For comparison we also solve the problem analytically using MF theory and numerically using Feynman's variational approach. We discuss how phonon-phonon interactions
can enrich the polaron state and how quantum fluctuations
included in our HFB approach, which are absent in MF
theory, can help lower the polaron energy to a level in fairly good agreement
with Feynman's result, even in the regime of relatively light impurity and
strong coupling. We conclude in Sec.\ V.
\section{Theory: Self-Consistent HFB Formulation of Fr\"{o}hlich Polarons}
We begin with the generic Fr\"{o}hlich Hamiltonian
\cite{mahan00ManyParticlePhysicsBook}
\begin{equation}
\hat{H}^{\prime}=\frac{\mathbf{\hat{p}}^{2}}{2m_{I}}+\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}+\sum_{\mathbf{k}}\frac{g_{\mathbf{k}}}{\sqrt{\mathcal{V}}}e^{i\mathbf{k\cdot\hat{r}}}\left( \hat{b}_{\mathbf{k}}+\hat{b}_{-\mathbf{k}}^{\dag}\right)
,\label{H-Frohlich}
\end{equation}
which describes a single mobile impurity with mass $m_{I}$, momentum operator
$\mathbf{\hat{p}}$, and position operator $\mathbf{\hat{r}}$ interacting with
phonons with field operator $\hat{b}_{\mathbf{k}}$ for annihilating a phonon of
momentum $\hbar\mathbf{k}$ and energy $\hbar\omega_{\mathbf{k}}$, where
$g_{\mathbf{k}}$ is the impurity-phonon coupling strength and $\mathcal{V}$ is
the quantization length in 1D, area in 2D, and volume in 3D.
Different systems are characterized by a different set of $\omega_{\mathbf{k}}$ and $g_{\mathbf{k}}$. In the solid-state Einstein
model (containing longitudinal optical phonons), $\omega_{\mathbf{k}}$ is
modeled as a constant and $g_{\mathbf{k}}$ as inversely proportional to $k$.
In the solid-state acoustic model, $\omega_{\mathbf{k}}$ and $g_{\mathbf{k}}$ are approximated as proportional to $k$ and $\sqrt{k}$, respectively
\cite{peeters85PhysRevB.32.3515}. In cold atom systems where impurities
are immersed in a Bose-Einstein condensate (BEC) of density $n_{B}$, phonons
are identified with Bogoliubov quasiparticles arising from BEC density
fluctuations, and $\omega_{\mathbf{k}}$ and $g_{\mathbf{k}}$ are given by
\begin{equation}
\omega_{\mathbf{k}}=v_{B}k\sqrt{1+\left( \xi_{B}k\right) ^{2}} \label{w_k}
\end{equation}
and
\begin{equation}
g_{\mathbf{k}}=g_{IB}\sqrt{n_{B}\hbar k^{2}/\left( 2m_{B}\omega_{\mathbf{k}}\right) },\label{g_k}
\end{equation}
where $v_{B}=\sqrt{n_{B}g_{BB}/m_{B}}$ is the phonon speed, $\xi_{B}=\hbar/\sqrt{4m_{B}n_{B}g_{BB}}$ is the healing length, and $g_{BB}=4\pi\hbar^{2}a_{BB}/m_{B}$ and $g_{IB}=4\pi\hbar^{2}a_{IB}/[ m_{IB}\equiv2m_{I}m_{B}/\left( m_{I}+m_{B}\right) ] $ are, respectively, the
boson-boson and impurity-boson interaction strengths, with $a_{BB}$ and $a_{IB}$
the corresponding $s$-wave scattering lengths. In this cold atom case, Hamiltonian (\ref{H-Frohlich}) is measured
relative to the bare impurity-condensate interaction energy, $n_{B}g_{IB}$, which
accounts for the interaction of the impurity with the condensed bosons. In
reduced dimensions, $n_{B}$, $a_{BB}$, and $a_{IB}$ (hence $g_{BB}$ and
$g_{IB}$) are their effective versions for the corresponding dimensions.
In what follows, we adopt a unit convention in which $\hbar=1$ (unless keeping $\hbar$ helps elucidate physics).
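As an illustrative numerical sketch of the limiting behavior of $\omega_{\mathbf{k}}$ and $g_{\mathbf{k}}$ above (the parameter values below are arbitrary dimensionless choices, not taken from any experiment): the dispersion is linear, $\omega_{\mathbf{k}}\approx v_{B}k$, for $k\xi_{B}\ll1$ and quadratic, $\omega_{\mathbf{k}}\approx\hbar k^{2}/2m_{B}$, for $k\xi_{B}\gg1$, while the coupling grows as $\sqrt{k}$ at small $k$ and saturates at $g_{IB}\sqrt{n_{B}}$ at large $k$:

```python
import numpy as np

# Illustrative parameters in dimensionless units (hbar = 1).
hbar, m_B, n_B, g_BB, g_IB = 1.0, 1.0, 1.0, 0.5, 0.3

v_B = np.sqrt(n_B * g_BB / m_B)               # phonon (sound) speed
xi_B = hbar / np.sqrt(4 * m_B * n_B * g_BB)   # healing length

def omega(k):
    # Bogoliubov dispersion: phonon-like (linear) at small k,
    # free-particle-like (quadratic) at large k.
    return v_B * k * np.sqrt(1 + (xi_B * k) ** 2)

def g(k):
    # Impurity-phonon coupling: vanishes as sqrt(k) at small k,
    # saturates to g_IB * sqrt(n_B) at large k.
    return g_IB * np.sqrt(n_B * hbar * k ** 2 / (2 * m_B * omega(k)))

for k in (1e-4, 1.0, 100.0):
    print(k, omega(k), g(k))
```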
\subsection{Fr\"{o}hlich Hamiltonian After Lee-Low-Pine Transformation}
The Lee-Low-Pine (LLP) transformation \cite{lee53PhysRev.90.297} is defined by
\begin{equation}
\mathcal{\hat{S}}=\exp\left( i\sum_{\mathbf{k}}\mathbf{k}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}\mathbf{\cdot\hat{r}}\right) \label{S}
\end{equation}
and is a unitary transformation under which the phonon vacuum is invariant (since any power of $\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}$ gives
zero when acting on it). Following \cite{shashi14PhysRevA.89.053617,shchadilova14arXiv:1410.5691,grusdt15ScientificReports.5.12124} we apply the LLP transformation to the Hamiltonian in (\ref{H-Frohlich}), $\hat{H}=\mathcal{\hat{S}}\hat{H}^{\prime}\mathcal{\hat{S}}^{-1}$, which gives
\begin{align}
\hat{H} & =\frac{\left( \mathbf{\hat{p}}-\sum_{\mathbf{k}}\mathbf{k}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}\right) ^{2}}{2m_{I}}\nonumber\\
&\qquad+ \sum_{\mathbf{k}}\omega_{\mathbf{k}}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}+\frac{1}{\sqrt{\mathcal{V}}}\sum_{\mathbf{k}}g_{\mathbf{k}}\left( \hat{b}_{\mathbf{k}}+\hat{b}_{-\mathbf{k}}^{\dag}\right)
,\label{H LLP}
\end{align}
where we used $\mathcal{\hat{S}}\mathbf{\hat{p}}\mathcal{\hat{S}}^{-1}=\mathbf{\hat{p}}-\sum_{\mathbf{k}}\mathbf{k}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}$ and $\mathcal{\hat{S}}\hat{b}_{\mathbf{k}}\mathcal{\hat{S}}^{-1}=\hat{b}_{\mathbf{k}}\exp( -i\mathbf{k}\cdot\mathbf{\hat{r}})$. Since $\hat{\mathbf{p}}$ commutes with $\hat{H}$, it is a constant of motion and may be replaced with its $c$-number equivalent $\mathbf{p}$, allowing Eq.\ (\ref{H LLP}) to be written as
\begin{align}
\hat{H} & =\frac{p^{2}}{2m_{I}}+\frac{1}{\sqrt{\mathcal{V}}}\sum_{\mathbf{k}}g_{\mathbf{k}}\left( \hat{b}_{\mathbf{k}}+\hat{b}_{-\mathbf{k}}^{\dag}\right) \nonumber\\
&\qquad+ \sum_{\mathbf{k}}\left( \omega_{\mathbf{k}}+\frac{k^{2}}{2m_{I}}-\frac{\mathbf{k\cdot p}}{m_{I}}\right) \hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}+\hat{H}_{int},\label{H after LLP}
\end{align}
where
\begin{equation}
\hat{H}_{int}=\frac{1}{2}\sum_{\mathbf{k},\mathbf{k}^{\prime}}\frac{\mathbf{k}\cdot\mathbf{k}^{\prime}}{m_{I}}\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}^{\prime}}^{\dag}\hat{b}_{\mathbf{k}^{\prime}}\hat{b}_{\mathbf{k}}\label{four Boson interaction}
\end{equation}
is a normal-ordered four-boson interaction term, representing the phonon-phonon interaction.
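For completeness, the two operator identities used after the LLP transformation can be checked with the Baker-Campbell-Hausdorff lemma, $e^{X}\hat{O}e^{-X}=\hat{O}+[X,\hat{O}]+\frac{1}{2!}[X,[X,\hat{O}]]+\cdots$, taking $X=i\sum_{\mathbf{k}}(\mathbf{k}\cdot\mathbf{\hat{r}})\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}$ so that $\mathcal{\hat{S}}=e^{X}$ (a sketch, in units $\hbar=1$):

```latex
% [X, b_k] = -i (k . r) b_k, and r commutes with X, so the series resums
% into an exponential:
[X,\hat{b}_{\mathbf{k}}]=-i(\mathbf{k}\cdot\mathbf{\hat{r}})\,\hat{b}_{\mathbf{k}}
\;\Longrightarrow\;
\mathcal{\hat{S}}\hat{b}_{\mathbf{k}}\mathcal{\hat{S}}^{-1}
=\hat{b}_{\mathbf{k}}\,e^{-i\mathbf{k}\cdot\mathbf{\hat{r}}},
% [X, p] is a pure phonon operator that commutes with X, so the series
% terminates after the first commutator:
[X,\mathbf{\hat{p}}]=-\sum_{\mathbf{k}}\mathbf{k}\,\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}
\;\Longrightarrow\;
\mathcal{\hat{S}}\mathbf{\hat{p}}\mathcal{\hat{S}}^{-1}
=\mathbf{\hat{p}}-\sum_{\mathbf{k}}\mathbf{k}\,\hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}.
```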
The Fr\"{o}hlich Hamiltonian (\ref{H-Frohlich}) prior to the LLP transformation
describes an impurity-phonon system where phonons are non-interacting but are
coupled to the impurity via terms involving $e^{i\mathbf{k}\cdot\mathbf{\hat{r}}}$, which account for the impurity recoil during emission and absorption
of a phonon.
The LLP transformation takes us into a frame moving at a velocity
determined by the total phonon momentum,
\begin{equation}
\mathbf{p}_{ph}=\sum_{\mathbf{k}}\mathbf{k}\left\langle \hat{b}_{\mathbf{k}}^{\dag}\hat{b}_{\mathbf{k}}\right\rangle .\label{p_ph}
\end{equation}
This transformation is motivated by the fact that the total momentum (the
impurity momentum plus the total phonon momentum) is a constant of motion so
that in a moving frame defined by the total phonon momentum, the impurity
momentum $\mathbf{\hat{p}}$ becomes the total momentum and is thus a constant
of motion, replaceable with a $c$-number.
As promised, the LLP transformation has transformed the Fr\"{o}hlich
Hamiltonian (\ref{H-Frohlich}) to Eq.\ (\ref{H after LLP}) which is free of
impurity degrees of freedom, but at the expense of phonons interacting via the
four-boson interaction in Eq.\ (\ref{four Boson interaction}).
\subsection{Generalized Bogoliubov Transformation and Polaron Energy
Functional}
From this point forward we describe our system as a many-body phonon system
free of impurities. The only indication of the impurity in the Hamiltonian
(\ref{H after LLP}) is $\mathbf{p}$, which we treat as a parameter (i.e.\
quantum number). The impurity-phonon scattering term, $\sum_{\mathbf{k}}g_{\mathbf{k}}( \hat{b}_{\mathbf{k}}+\hat{b}_{-\mathbf{k}}^{\dag
})$, being linear in the phonon field, leads to a nonzero average,
$\langle \hat{b}_{\mathbf{k}}\rangle \equiv z_{\mathbf{k}}$.
It is convenient to move to the shifted phonon field,
\begin{equation}
\hat{c}_{\mathbf{k}}=\hat{b}_{\mathbf{k}}-z_{\mathbf{k}}, \label{shifting}
\end{equation}
whose average vanishes, $\langle \hat{c}_{\mathbf{k}}\rangle =0$.
The Hamiltonian (\ref{H after LLP}) then describes phonons in terms of
$z_{\mathbf{k}}$ and $\hat{c}_{\mathbf{k}}$.
In anticipation of the use of the Ritz variational principle in the next
subsection, we choose as the trial state, $\vert \phi\rangle $,
the quasiparticle vacuum state defined by field operator $\hat
{d}_{\mathbf{k}}$, i.e.\ $\hat{d}_{\mathbf{k}}\vert \phi\rangle
=0$, where $\hat{d}_{\mathbf{k}}$ is defined through the generalized
Bogoliubov transformation, $\hat{d}_{\mathbf{k}}=\sum_{\mathbf{k}^{\prime}}( U_{\mathbf{kk}^{\prime}}^{\ast}\hat{c}_{\mathbf{k}^{\prime}}-V_{\mathbf{kk}^{\prime}}^{\ast}\hat{c}_{\mathbf{k}^{\prime}}^{\dag})$, which may equivalently be written
\begin{equation}
\left(
\begin{array}
[c]{c}
\hat{d}\\
\hat{d}^{\dag}
\end{array}
\right) =\mathcal{T}\left(
\begin{array}
[c]{c}
\hat{c}\\
\hat{c}^{\dag}
\end{array}
\right) ,\label{Bogliubov T}
\end{equation}
where
\begin{equation}
\mathcal{T}=\left(
\begin{array}
[c]{cc}
U^{\ast} & -V^{\ast}\\
-V & U
\end{array}
\right),
\quad \mathcal{T}^{-1}=\left(
\begin{array}
[c]{cc}
U^{T} & V^{\dag}\\
V^{T} & U^{\dag}
\end{array}
\right) .\label{T and T inverse}
\end{equation}
In Eqs.\ (\ref{Bogliubov T}) and (\ref{T and T inverse}), $\hat{c}$ ($\hat{c}^{\dag}$) is a column vector with elements $\hat{c}_{i}\equiv\hat{c}_{\mathbf{k}_{i}}$ ($\hat{c}_{i}^{\dag}\equiv\hat{c}_{\mathbf{k}_{i}}^{\dag}$) [a similar definition applies to $\hat{d}$ ($\hat{d}^{\dag}$)], and
$U$ ($V$) is a square matrix with matrix elements $U_{ij}\equiv U_{\mathbf{k}_{i}\mathbf{k}_{j}}$ ($V_{ij}\equiv V_{\mathbf{k}_{i}\mathbf{k}_{j}}$). The
number of elements depends on the number of $\mathbf{k}$ values included in the calculation.
Defining
\begin{equation}
\eta=\left(
\begin{array}
[c]{cc}
I & 0\\
0 & -I
\end{array}
\right) ,
\quad \gamma=\left(
\begin{array}
[c]{cc}
0 & I\\
I & 0
\end{array}
\right) ,\label{eta and gamma}
\end{equation}
we note that for $\mathcal{T}$ in the form given in Eq.\
(\ref{T and T inverse}), $\gamma\mathcal{T}\gamma=\mathcal{T}^{\ast}$
holds automatically and the only requirement for the Bogoliubov transformation
(\ref{Bogliubov T}) to remain canonical is
\begin{equation}
\mathcal{T}\eta\mathcal{T}^{\dag}\eta=1,\label{cannonical condition}
\end{equation}
which, together with Eq.\ (\ref{T and T inverse}), amounts to requiring $U$ and
$V$ to obey
\begin{subequations}
\label{UV}
\begin{align}
UU^{\dag}-VV^{\dag} & =I, & UV^{T}-VU^{T}&=0,\label{UV 1}\\
U^{\dag}U-V^{T}V^{\ast} & =I, & U^{T}V^{\ast}-V^{\dag}U&=0.\label{UV2}
\end{align}
Let $z_{i}\equiv z_{\mathbf{k}_{i}}$, $\omega_{i}\equiv\omega_{\mathbf{k}_{i}}$, $g_{i}\equiv g_{\mathbf{k}_{i}}$, and $\sum_{i}\equiv\sum_{\mathbf{k}_{i}}$. The average of the Hamiltonian in the quasiparticle
vacuum, $E_{p}\equiv\langle \phi\vert \hat{H}\vert
\phi\rangle $, then reads
\end{subequations}
\begin{widetext}
\begin{align}
E_{p} & =\frac{p^{2}}{2m_{I}}+\sum_{i}\left( \omega_{i}-\frac{\mathbf{k}_{i}\cdot\mathbf{p}}{m_{I}}+\frac{k_{i}^{2}}{2m_{I}}\right) \left\vert
z_{i}\right\vert ^{2}+\sum_{i}\frac{g_{i}}{\sqrt{\mathcal{V}}}\left(
z_{i}+z_{i}^{\ast}\right) +\sum_{i,j}\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{2m_{I}}\left\vert z_{i}\right\vert ^{2}\left\vert z_{j}\right\vert
^{2}\nonumber\\
&\qquad+ \sum_{i}\left( \omega_{i}-\frac{\mathbf{k}_{i}\cdot\mathbf{p}}{m_{I}}+\frac{k_{i}^{2}}{2m_{I}}+\frac{\mathbf{k}_{i}}{m_{I}}\cdot\sum_{j}
\mathbf{k}_{j}\left\vert
z_{j}\right\vert ^{2}\right) \rho_{ii}+\sum_{ij}\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{2m_{I}}\left( z_{i}^{\ast}z_{j}^{\ast}\kappa_{ij}+z_{i}z_{j}\kappa_{ij}^{\ast}+z_{i}^{\ast}z_{j}\rho_{ij}+z_{i}z_{j}^{\ast}\rho_{ij}^{\ast}\right) \nonumber\\
&\qquad+ \sum_{ij}\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{2m_{I}}\left(
\kappa_{ij}^{\ast}\kappa_{ij}+\rho_{ij}^{\ast}\rho_{ij}+\rho_{ii}\rho_{jj}\right) ,\label{<H>}
\end{align}
\end{widetext}
where we have introduced the single-particle density matrix $\rho$ and
single-particle pair matrix $\kappa$ of state $\left\vert \phi\right\rangle $
whose matrix elements are defined, respectively, as
\begin{subequations}
\label{rho and kappa}
\begin{align}
\rho_{ij} & =\rho_{ji}^{\ast}=\left\langle \phi\right\vert \hat{c}_{j}^{\dag}\hat{c}_{i}\left\vert \phi\right\rangle ,\\
\kappa_{ij} & =\kappa_{ji}=\left\langle \phi\right\vert \hat{c}_{j}\hat{c}_{i}\left\vert \phi\right\rangle ,
\end{align}
which become, with the help of the generalized Bogoliubov transformation
(\ref{Bogliubov T}),
\end{subequations}
\begin{subequations}
\label{rho and kappa 1}
\begin{align}
\rho_{ij} & =\left( V^{\dag}V\right) _{ij}=\sum_{n}V_{ni}^{\ast}V_{nj},\\
\kappa_{ij} & =\left( V^{\dag}U\right) _{ij}=\sum_{n}V_{ni}^{\ast}U_{nj}.
\end{align}
The first line in Eq.\ (\ref{<H>}) follows from the part of the Hamiltonian
that is independent of the field operators ($\hat{c}_\mathbf{k},\hat{c}_\mathbf{k}^\dag$), the second line follows from the part quadratic in the field
operators, and the last
line represents the average of the four-boson term, $\hat{c}_{\mathbf{k}}^{\dag}\hat{c}_{\mathbf{k}^{\prime}}^{\dag}\hat{c}_{\mathbf{k}^{\prime}}\hat{c}_{\mathbf{k}}$, which can be computed using Wick's theorem \cite{fetter71ManyParticleSystemsBook}.
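As an illustrative numerical check of the structure above (not part of the derivation), consider the minimal single-mode case with an assumed real squeezing parameter $r$: the choice $U=\cosh r$, $V=\sinh r$ satisfies the canonical conditions, and the quasiparticle-vacuum correlations follow directly from the matrix products $V^{\dag}V$ and $V^{\dag}U$.

```python
import numpy as np

# Single-mode illustration with an assumed squeezing parameter r.
r = 0.7
U = np.array([[np.cosh(r)]])
V = np.array([[np.sinh(r)]])

# Canonical conditions: U U^dag - V V^dag = I and U V^T - V U^T = 0.
assert np.allclose(U @ U.conj().T - V @ V.conj().T, np.eye(1))
assert np.allclose(U @ V.T - V @ U.T, np.zeros((1, 1)))

# Correlations of the quasiparticle vacuum: rho = V^dag V, kappa = V^dag U.
rho = V.conj().T @ V
kappa = V.conj().T @ U
print(rho[0, 0], kappa[0, 0])   # sinh^2(r) and sinh(r)cosh(r)
```

For a single mode this reproduces the familiar squeezed-vacuum occupation $\sinh^{2}r$ and anomalous average $\sinh r\cosh r$.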
\subsection{Ritz Variational Principle and Self-Consistent HFB Equations}
The self-consistent HFB method is typically employed to solve many-body
problems with a fixed (average) number of particles in nuclear and condensed
matter physics \cite{ring04Book,blaizot96QuantumTheoryBook}. In
comparison, the average number of phonons in our system is not given a priori;
it depends on the impurity-phonon interaction and is therefore unknown and
must be determined self-consistently. We thus use a canonical, instead of
grand canonical, Hamiltonian, which explains the absence of a chemical
potential in the energy functional (\ref{<H>}) compared to the usual HFB formulation.
Minimizing the energy in Eq.\ (\ref{<H>}) with respect to $\left(
z_{\mathbf{k}},z_{\mathbf{k}}^{\ast}\right) $, we arrive at a matrix
equation,
\end{subequations}
\begin{equation}
\left(
\begin{array}
[c]{cc}
C & D\\
D^{\ast} & C^{\ast}
\end{array}
\right) \left(
\begin{array}
[c]{c}
z\\
z^{\ast}
\end{array}
\right) =-\frac{1}{\sqrt{\mathcal{V}}}\left(
\begin{array}
[c]{c}
g\\
g
\end{array}
\right) ,\label{CD equation}
\end{equation}
where $C=C^{\dag}$ and $D=D^{T}$ are matrices defined as
\begin{subequations}
\label{C and D}
\begin{align}
C_{ij} & =\mathcal{C}_{i}\delta_{i,j}+\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{m_{I}}\rho_{ij},\\
D_{ij} & =\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{m_{I}}\kappa_{ij}.
\end{align}
Here,
\end{subequations}
\begin{equation}
\mathcal{C}_{i}=\omega_{i}-\frac{\mathbf{k}_{i}\cdot\left( \mathbf{p}-\mathbf{p}_{ph}\right) }{m_{I}}+\frac{k_{i}^{2}}{2m_{I}},\label{H_i}
\end{equation}
which is the only surviving term in the MF theory when density and pair
correlation functions $\rho$ and $\kappa$ are neglected, and
\begin{equation}
\mathbf{p}_{ph}=\sum_{j}\mathbf{k}_{j}\left( \left\vert z_{j}\right\vert
^{2}+\rho_{jj}\right) \label{p_ph 1}
\end{equation}
is the expectation value of the total phonon momentum [Eq.\ (\ref{p_ph})]
with respect to the quasiparticle vacuum $\left\vert \phi\right\rangle $.
The next step would normally be to minimize the energy with respect to $\rho$
and $\kappa$, but a word of caution is in order---$\rho$ and $\kappa$ cannot
be treated as independent variables. This is because $\rho$ and $\kappa$ are made up of $U$ and $V$ [Eq.\ (\ref{rho and kappa 1})], which are not
independent [Eq.\ (\ref{UV})]. This may be contrasted with the correlated
Gaussian wave function approach \cite{shchadilova14arXiv:1410.5691} where the energy functional is parameterized in terms of a symmetric matrix with
independent parameters. The restrictions imposed on $\rho$ and
$\mathbf{\kappa}$ can be understood, perhaps most conveniently, with the help
of the generalized density matrix \cite{blaizot96QuantumTheoryBook}
\begin{equation}
\mathcal{R}=\left\langle \phi\right\vert \left(
\begin{array}
[c]{cc}
\hat{c}_{j}^{\dag}\hat{c}_{i} & \hat{c}_{j}\hat{c}_{i}\\
\hat{c}_{j}^{\dag}\hat{c}_{i}^{\dag} & \hat{c}_{j}\hat{c}_{i}^{\dag}
\end{array}
\right) \left\vert \phi\right\rangle =\left(
\begin{array}
[c]{cc}
\rho & \kappa\\
\kappa^{\ast} & 1+\rho^{\ast}
\end{array}
\right) , \label{R definition}
\end{equation}
for field $\hat{c}_{\mathbf{k}}$, and
\begin{equation}
\mathcal{R}^{\prime}=\left\langle \phi\right\vert \left(
\begin{array}
[c]{cc}
\hat{d}_{j}^{\dag}\hat{d}_{i} & \hat{d}_{j}\hat{d}_{i}\\
\hat{d}_{j}^{\dag}\hat{d}_{i}^{\dag} & \hat{d}_{j}\hat{d}_{i}^{\dag}
\end{array}
\right) \left\vert \phi\right\rangle =\left(
\begin{array}
[c]{cc}
0 & 0\\
0 & 1
\end{array}
\right) ,
\end{equation}
for quasiparticle field $\hat{d}_{\mathbf{k}}$. By virtue of the Bogoliubov
transformation in Eq.\ (\ref{Bogliubov T}), $\mathcal{R}^{\prime}$ is linked to
$\mathcal{R}$ according to
\begin{equation}
\mathcal{R}^{\prime}=\mathcal{TRT}^{\dag},
\end{equation}
which, together with Eq.\ (\ref{cannonical condition}), means that
\begin{equation}
\left( \eta\mathcal{R}\right) ^{2}=-\eta\mathcal{R}. \label{constraint}
\end{equation}
Equation (\ref{constraint}) encapsulates all relations among $\rho$ and
$\kappa$.
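The constraint can likewise be verified numerically in the single-mode case, where a pure quasiparticle vacuum obeys $\kappa^{2}=\rho\left( 1+\rho\right)$ (an illustrative sketch with an assumed squeezing parameter $r$):

```python
import numpy as np

# Single-mode generalized density matrix with an assumed squeezing parameter.
r = 0.7
rho = np.sinh(r)**2
kappa = np.sinh(r) * np.cosh(r)   # note kappa^2 = rho*(1 + rho)

R = np.array([[rho, kappa],
              [kappa, 1.0 + rho]])
eta = np.diag([1.0, -1.0])

# The quasiparticle-vacuum constraint: (eta R)^2 = -eta R.
assert np.allclose((eta @ R) @ (eta @ R), -eta @ R)
print("constraint holds")
```

The check fails for generic independent $(\rho,\kappa)$, which illustrates why the two matrices cannot be varied freely.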
We now minimize the total energy $E_{p}$ in Eq.\ (\ref{<H>}) with respect to
$\mathcal{R}$ (or equivalently $\rho$ and $\kappa$) subject to condition
(\ref{constraint}), i.e.
\begin{equation}
\delta\left\{ E_{p}-\text{Tr}\left[ \Lambda\left( \left( \eta
\mathcal{R}\right) ^{2}+\eta\mathcal{R}\right) \right] \right\} =0,
\end{equation}
where $\Lambda$ is a matrix of Lagrange multipliers implementing the constraint (\ref{constraint}). By carrying out the variation explicitly and
then eliminating $\Lambda$, we arrive at the HFB equation
\begin{equation}
\left[ \eta\mathcal{M},\mathcal{R}\eta\right] =0, \label{HFB equation}
\end{equation}
where
\begin{equation}
\mathcal{M}=\left(
\begin{array}
[c]{cc}
A & B\\
B^{\ast} & A^{\ast}
\end{array}
\right) ,
\end{equation}
and $A=A^{\dag}$ and $B=B^{T}$ are matrices defined below in Eq.\
(\ref{A and B}).
At this point, we observe that the matrices in Eqs.\ (\ref{C and D}) and (\ref{A and B}) are local in the sense that a matrix element in the $i$th row
and $j$th column is determined by the matrix elements of $\rho$ and $\kappa$ in the same row
and same column, e.g., $C_{ij}$ depends on $\rho_{ij}$ but not on $\rho_{i^{\prime}\neq i,j^{\prime}\neq j}$. This local property is unique to the four-boson term in Eq.\
(\ref{four Boson interaction}) as we now explain. If particles were to
interact via, for example, the usual two-body $s$-wave potential, the four-boson
term would be of the form $\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}\hat{b}_{\mathbf{k}+\mathbf{q}}^{\dag}\hat{b}_{\mathbf{k}^{\prime}-\mathbf{q}}^{\dag}\hat{b}_{\mathbf{k}^{\prime}}\hat{b}_{\mathbf{k}}$, in
which momentum $\mathbf{q}$ is exchanged in each scattering event. The
local property would then not hold, e.g., $C_{ij}$ would depend not only on
$\rho_{ij}$ but also on $\rho_{i^{\prime}\neq i,j^{\prime}\neq j}$. The
four-boson term in Eq.\ (\ref{four Boson interaction}) is, however, of a very
different origin, arising artificially from the LLP transformation:\ in the ``boosted" LLP frame, phonons appear to interact without momentum exchange, i.e.\ $\mathbf{q}=0$. It is this lack of momentum exchange that is responsible for the local property, and it allows us to formulate a much simplified HFB description of the Fr\"{o}hlich model compared to if the model had the usual two-body interaction.
Returning to Eq.\ (\ref{HFB equation}), the fact that $\eta\mathcal{M}$ and
$\mathcal{R}\eta$ commute means that solving for $\mathcal{R}$ from Eq.\
(\ref{HFB equation}) amounts to finding a set of simultaneous eigenstates of
$\eta\mathcal{M}$ and $\mathcal{R}\eta$. Consider first the eigenstates of
$\eta\mathcal{M}$, which, because $\gamma\mathcal{M}\gamma=\mathcal{M}^{\ast}$, are grouped into pairs with eigenvalues $\pm w_{n}$; for each eigenstate
$\left\vert w_{n}^{+}\right\rangle $ with a positive (real)
eigenvalue, $w_{n}>0$, there exists an eigenstate $\left\vert w_{n}^{-}\right\rangle =\gamma\left\vert w_{n}^{+}\right\rangle ^{\ast}$ with the negative of that eigenvalue, $-w_{n}$:
\begin{equation}
\eta\mathcal{M}\left\vert w_{n}^{+}\right\rangle =w_{n}\left\vert w_{n}^{+}\right\rangle,
\quad
\eta\mathcal{M}\left\vert w_{n}^{-}\right\rangle
=-w_{n}\left\vert w_{n}^{-}\right\rangle . \label{w+ w-}
\end{equation}
The set of states $\left\vert w_{n}^{\pm}\right\rangle $ is complete in the
sense that they obey orthonormality conditions with metric $\eta$
\begin{equation}
\left\langle w_{n}^{\pm}\right\vert \eta\left\vert w_{m}^{\pm}\right\rangle
=\pm\delta_{n,m},
\quad
\left\langle w_{n}^{+}\right\vert \eta\left\vert w_{m}^{-}\right\rangle =0.
\end{equation}
Next consider the eigenstates of $\mathcal{R}\eta$. From Eq.\ (\ref{constraint}) we have $\left( \mathcal{R}\eta\right) ^{2}=-\mathcal{R}\eta$, which allows us to divide the eigenstates into two
groups, one group with eigenvalue $0$ and the other group with eigenvalue
$-1$. $\mathcal{R}$, the solution to the HFB equation (\ref{HFB equation}),
must then take the form
\begin{equation} \label{R self-consistent}
\mathcal{R}=\sum_{n}\left\vert w_{n}^{-}\right\rangle \left\langle w_{n}^{-}\right\vert =\sum_{n}\gamma\left\vert w_{n}^{+}\right\rangle ^{\ast}\left\langle w_{n}^{+}\right\vert ^{\ast}\gamma
\end{equation}
in the space spanned by $\left\{ \left\vert w_{n}^{\pm}\right\rangle
\right\} $, from which we easily find that $\left\vert w_{n}^{\pm
}\right\rangle $ are also eigenstates of $\mathcal{R}\eta$:
\begin{equation}
\mathcal{R}\eta\left\vert w_{n}^{+}\right\rangle =0\left\vert w_{n}^{+}\right\rangle ,
\quad
\mathcal{R}\eta\left\vert w_{n}^{-}\right\rangle
=-1\left\vert w_{n}^{-}\right\rangle .
\end{equation}
We have now defined two expressions for $\mathcal{R}$, one in Eq.\
(\ref{R self-consistent}) in terms of the eigenstates of $\eta\mathcal{M}$
[Eq.\ (\ref{w+ w-})] and the other earlier in Eq.\ (\ref{R definition}) in terms
of the $U$ and $V$ matrices [Eq.\ (\ref{rho and kappa 1})]. Self-consistency
requires that they be equivalent, which can be accomplished by making the
$n$th row of the matrices $U$ and $V$ in Eq.\ (\ref{T and T inverse}) equal to the
$n$th eigenstate $\vert w_{n}^{+}\rangle =( U_{n},V_{n}) ^{T}$ of Eq.\ (\ref{w+ w-}), i.e., by explicitly constructing $U$
and $V$ from those states with positive eigenvalues in Eq.\ (\ref{w+ w-}), which in matrix form reads
\begin{equation}
\left(
\begin{array}
[c]{cc}
A & B\\
-B^{\ast} & -A^{\ast}
\end{array}
\right) \left(
\begin{array}
[c]{c}
U\\
V
\end{array}
\right) =w\left(
\begin{array}
[c]{c}
U\\
V
\end{array}
\right) ,\label{AB equation}
\end{equation}
where $A=A^{\dag}$ and $B=B^{T}$ are defined as
\begin{subequations}
\label{A and B}
\begin{align}
A_{ij} & =\mathcal{C}_{i}\delta_{i,j}+\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{m_{I}}\left( \rho_{ij}+z_{i}z_{j}^{\ast}\right) ,\\
B_{ij} & =\frac{\mathbf{k}_{i}\cdot\mathbf{k}_{j}}{m_{I}}\left( z_{i}z_{j}+\kappa_{ij}\right) ,
\end{align}
with ($\rho_{ij},\kappa_{ij}$) and $\mathcal{C}_{i}$ already given in Eqs.\
(\ref{rho and kappa 1}) and (\ref{H_i}), respectively.
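The $\pm w_{n}$ pairing of the eigenvalues can be illustrated numerically for a small system. The two-mode matrices below are assumed for illustration only (not derived from a polaron solution), chosen so that $A+B$ and $A-B$ are positive definite, which guarantees a real, stable spectrum:

```python
import numpy as np

# Assumed two-mode matrices (illustrative only): A Hermitian, B symmetric.
A = np.array([[2.0, 0.3], [0.3, 1.5]])
B = np.array([[0.2, 0.1], [0.1, 0.15]])

# Non-Hermitian eigenproblem of the Bogoliubov type.
K = np.block([[A, B], [-B.conj(), -A.conj()]])
w = np.sort(np.linalg.eigvals(K).real)

# Eigenvalues come in (+w_n, -w_n) pairs.
assert np.allclose(w, -w[::-1])
print(w)
```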
In summary, following the generalized HFB approach
\cite{ring04Book,blaizot96QuantumTheoryBook}, we have arrived at the closed
set of equations (\ref{rho and kappa 1}), (\ref{CD equation}), (\ref{p_ph 1}),
and (\ref{AB equation}), which constitutes our HFB formulation of Fr\"{o}hlich
polarons. Although we apply these equations to cold atom systems in the next section, we stress that they were derived generally, and we have in mind their widespread use for the many applications of the Fr\"ohlich model.
\section{Application: Quasi-1D Bose Polarons}
This section is devoted to the study of a Bose polaron in a 1D cold atom
mixture where atoms are confined, by sufficiently high harmonic trap
potentials along the transverse dimensions, to a 1D waveguide where the
transverse degrees of freedom are ``frozen" to the zero-point oscillation. This problem has been
investigated by Casteels \textit{et al.}\ \cite{casteels12PhysRevA.86.043614} at finite temperature using
Feynman's variational method \cite{feynman55PhysRev.97.660}. In the present work, we focus exclusively on the zero temperature limit.
We will explore various polaron properties in terms of the
polaronic coupling constant $\alpha^{\left( 1\right) }$ [defined below in
Eq.\ (\ref{alpha})] and the boson-impurity mass ratio $m_{B}/m_{I}$. The
former can be tuned via a combination of Feshbach resonance and confinement-induced resonance \cite{olshanii98PhysRevLett.81.938,bergeman03PhysRevLett.91.163201}
while the latter can be treated practically as a tunable parameter owing to
the rich variety of atomic elements and their isotopes found in nature. The Fr\"ohlich Hamiltonian omits a quartic interaction term (which is quadratic in both the impurity and the BEC operators). This term describes scattering between the impurity and a Bogoliubov mode and is essential to correctly describe strong interactions near a Feshbach resonance between the impurity and the BEC. The absence of this term places an upper bound on the impurity-BEC coupling strength. A thorough analysis of 3D Bose polarons in cold atomic systems [37, 38] indicates that an intermediate coupling regime is accessible to current technology involving interspecies Feshbach resonances. As in other studies of strongly interacting Bose polarons (see e.g.\ \cite{cucchietti06PhysRevLett.96.210401, tempere09PhysRevB.80.184504, casteels11PhysRevA.83.033631, shchadilova14arXiv:1410.5691, grusdt15ScientificReports.5.12124, grusdt15arXiv:1510.04934}), we extend our theory into the strongly interacting regime with the understanding that such results have only qualitative meaning.
\subsection{Polaron States}
Before presenting the full HFB description, we first consider the MF
description of polarons, which is described by $z_{k}$, governed by Eq.\
(\ref{CD equation}), in the MF limit where all correlations vanish (i.e., $\kappa=\rho=0$):
\end{subequations}
\begin{equation}
z_k=-\frac{1}{\sqrt{\mathcal{V}}}\frac{g_k}{\omega_k+\frac{k^{2}}{2m_{I}}-\frac{k\left( p-p_{ph}\right) }{m_{I}}}, \label{MF z}
\end{equation}
where $k$ ranges from $-\infty$ to $+\infty$. The only unknown in Eq.\
(\ref{MF z}) is $p_{ph}$, which is given by Eq.\ (\ref{p_ph 1}). If we can
solve for $p_{ph}$, $z_{k}$ is completely determined. Inserting Eq.\ (\ref{MF z}) into
Eq.\ (\ref{p_ph 1}) and moving to an integral in terms of the scaled quantities
$\left( \bar{k},\bar{p},\bar{p}_{ph}\right) =\left( k,p,p_{ph}\right)
\xi_{B}$ and $\bar{m}_{B}=m_{B}/m_{I}$, we obtain
\begin{align}
\bar{p}_{ph} & =4\alpha^{\left( 1\right) }\bar{m}_{B}\left( 1+\bar{m}_{B}\right) ^{2}\left( \bar{p}-\bar{p}_{ph}\right) \int_{0}^{\infty}d\bar{k}\nonumber\\
&\times \frac{1+\bar{m}_{B}\bar{k}/\sqrt{1+\bar{k}^{2}}}{\left[ \left(
\sqrt{1+\bar{k}^{2}}+\bar{m}_{B}\bar{k}\right) ^{2}-4\bar{m}_{B}^{2}\left(
\bar{p}-\bar{p}_{ph}\right) ^{2}\right] ^{2}},\label{p_ph bar}
\end{align}
where
\begin{equation}
\alpha^{\left( 1\right) }=a_{IB}^{2}\xi_{B}/a_{BB}\label{alpha}
\end{equation}
is the 1D dimensionless polaron coupling constant.\footnote{The 1D coupling constant used by Casteels \textit{et al.}\ in \cite{casteels12PhysRevA.86.043614} (also labeled $\alpha^{(1)}$) equals $2\sqrt{2}\pi\alpha^{(1)}$.} Evaluating the integral in Eq.\ (\ref{p_ph bar}), we find that $p_{ph}$ corresponds to the root of the following transcendental equation:
\begin{widetext}
\begin{equation}
\bar{p}_{ph}=\alpha^{\left( 1\right) }\left( 1+\bar{m}_{B}\right)
^{2} \left\{
\begin{array}{ll}
\frac{\bar{b}\bar{m}_{B}}{\left( 1-\bar{b}^{2}\right) \bar{r}}+\frac{\bar{b}}{\bar{r}\sqrt{\left\vert \bar{r}\right\vert }}\left( \tanh^{-1}\frac{1+\bar{m}_{B}+\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}-2\tanh^{-1}\frac{\bar{m}_{B}}{\sqrt{\left\vert \bar{r}\right\vert }}+\tanh^{-1}\frac{1+\bar{m}_{B}-\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}\right) &\text{ if }\bar{r}>0\\
\frac{\bar{b}\bar{m}_{B}}{\left( 1-\bar{b}^{2}\right) \bar{r}}-\frac{\bar{b}}{\bar{r}\sqrt{\left\vert \bar{r}\right\vert }}\left( \tan^{-1}\frac{1+\bar{m}_{B}+\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}-2\tan^{-1}\frac{\bar{m}_{B}}{\sqrt{\left\vert \bar{r}\right\vert }}+\tan^{-1}\frac{1+\bar{m}_{B}-\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}\right)
&\text{ if }\bar{r}<0,
\end{array}
\right. \label{p_ph implicit}
\end{equation}
\end{widetext}
which changes smoothly across $\bar{r}=0$, at which $\bar{p}_{ph}=\alpha^{\left( 1\right) }\left( 1+\bar{m}_{B}\right) ^{2}2\bar{b}/\left(
3\bar{m}_{B}^{3}\right)$, and where $\bar{b}$ and $\bar{r}$ are functions of $\bar{p}_{ph}$ given by
\begin{align}
\bar{b} & =2\bar{m}_{B}\left( \bar{p}-\bar{p}_{ph}\right) ,\quad 0<\bar{b}<1,\\
\bar{r} & =\bar{m}_{B}^{2}+4\bar{m}_{B}^{2}\left( \bar{p}-\bar{p}_{ph}\right) ^{2}-1.
\end{align}
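For readers who prefer a direct numerical route, the fixed point of Eq.\ (\ref{p_ph bar}) can also be obtained by quadrature and root bracketing. The parameter values below are illustrative assumptions, chosen in the subsonic regime $\bar{b}<1$ where the integrand is regular:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative (assumed) parameters: alpha^(1), m_B/m_I, and p*xi_B.
alpha1, mB, p = 0.5, 1.0, 0.2

def rhs(pph):
    """Right-hand side of the p_ph fixed-point relation (scaled units)."""
    b = p - pph
    def integrand(k):
        e = np.sqrt(1.0 + k**2)
        return (1.0 + mB * k / e) / ((e + mB * k)**2 - 4.0 * mB**2 * b**2)**2
    val, _ = quad(integrand, 0.0, np.inf)
    return 4.0 * alpha1 * mB * (1.0 + mB)**2 * b * val

# Bracket the root between 0 and p: the cloud carries part of the momentum.
pph = brentq(lambda x: x - rhs(x), 0.0, p)
print(pph)
```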
We now consider the HFB description encoded in $z_{k}$ and correlation
functions $\rho_{kk^{\prime}}$ and $\kappa_{kk^{\prime}}$. We solve for them
using the above MF solution as the initial guess in a self-consistent loop
which iteratively updates $z_{k}$, $\rho_{kk^{\prime}}$, and $\kappa_{kk^{\prime}}$
by solving Eqs.\ (\ref{CD equation}) and (\ref{AB equation}), in conjunction
with Eqs.\ (\ref{rho and kappa 1}) and (\ref{p_ph 1}), until values of a prescribed accuracy are reached.
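The structure of this loop can be sketched generically as follows. The update map below is a placeholder toy function standing in for one sweep of Eqs.\ (\ref{CD equation}) and (\ref{AB equation}); linear mixing is added for stability, as is common in such self-consistent solvers:

```python
import numpy as np

# Generic damped fixed-point iteration; `update` is a placeholder for one
# sweep of the coupled HFB equations in the actual solver.
def self_consistent(update, x0, mix=0.5, tol=1e-10, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - mix) * x + mix * update(x)   # linear mixing
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("self-consistent loop did not converge")

# Toy demonstration (assumption): the fixed point of x = cos(x).
sol = self_consistent(np.cos, np.array(0.0))
print(sol)
```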
The first column in Fig.\ \ref{Fig:phonon property}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[width=3.3in]{figure1.\filetype}
\caption{(Color online) The phonon momentum density $kz_{k}^{2}$, the density correlation
$\rho_{k,k^{\prime}}$, and the pair correlation $\kappa_{k,k^{\prime}}$
characterizing a many-phonon system. The first column is for the polaron ground state ($p=0$) and the second column is for a polaron state at finite momentum $p\xi_B = 1.5$. Both columns have $m_B / m_I = 1$ and $\alpha^{(1)} = 2$.}
\label{Fig:phonon property}
\end{center}
\end{figure}displays an example with $p=0$, $m_{B}/m_{I}=1$, and $\alpha^{(1)}=2$. Interesting details emerge
from the 2D contour plots of $\rho_{kk^{\prime}}$ and $\kappa_{kk^{\prime}}$
in Figs.\ 1(b) and 1(c). First, the vertical and horizontal lines at $k=0$ and
$k^{\prime}=0$ are the zero contour lines along which the correlation functions
vanish or are ``transparent." This ``transparency" occurs because the effective phonon
interaction in momentum space is given by $kk^{\prime}/m_{I}$ [Eq.\
(\ref{four Boson interaction})] and thus vanishes when $k=0$ or $k^{\prime}=0$. Second, the $k=0$ and $k^{\prime}=0$ lines divide each contour plot
into two regions, one with $kk^{\prime}>0$ (the first and third quadrants) and
the other with $kk^{\prime}<0$ (the second and fourth quadrants). Each
correlation is seen to have opposite signs in these two regions. Third,
correlations develop peaks near but not at the origin, along the positive
diagonal ($k=k^{\prime}$) and negative diagonal ($k=-k^{\prime}$), but
decrease rapidly towards zero as momentum increases. This can be explained
as follows. As $k$ and $k^{\prime}$ increase, phonons interact more
strongly, but are tuned farther away from resonance, owing to an increase in
the effective single-phonon energy, $\mathcal{C}_{i}$ [Eq.\ (\ref{H_i})] in the
diagonal elements of matrix $A$ in Eq.\ (\ref{A and B}). In the limit of large
$k$ and $k^{\prime}$, being tuned away from resonance dominates and $\rho
_{kk^{\prime}}$ and $\kappa_{kk^{\prime}}$ become diminishingly small. The
peaks at intermediate momenta are the outcome of the competition between these
two opposing factors.
The second column in Fig.\ \ref{Fig:phonon property} is the same as the
first column except that $p=1.5\xi_{B}^{-1}$, i.e.,\ a polaronic system prepared
adiabatically from one in which the impurity has a momentum $p=1.5\xi_{B}^{-1}$. In contrast to the $p=0$ case, where $p_{ph}=0$ and all diagrams
[Figs.\ \ref{Fig:phonon property}(a), \ref{Fig:phonon property}(b), and \ref{Fig:phonon property}(c)] are symmetric, nonzero $p$ leads to nonzero $p_{ph}$ and an
asymmetry develops:\ the $k>0$ peak has a larger magnitude than the $k<0$
peak for $kz_{k}^{2}$ in Fig.\ \ref{Fig:phonon property}(d), with similar scenarios for the peaks along the diagonal
elements of the correlation functions, $\rho_{kk}$ and $\kappa_{kk}$ in Figs.\ \ref{Fig:phonon property}(e) and \ref{Fig:phonon property}(f). This is consistent
with the expectation that for nonzero $p$, a moving impurity drags a phonon
cloud with it, leading to nonzero phonon momentum $p_{ph}$. However, nonzero
$p$ does not affect the symmetry of correlations between opposite momenta,
$\rho_{k,-k}$ and $\kappa_{k,-k}$, as can be seen in Figs.\ 1(e) and 1(f). The reason
is that $\rho$ and $\kappa$ are symmetric matrices and therefore $\rho_{k,-k}$ and $\kappa_{k,-k}$ must be even functions of $k$, independent of $p$.
In order to better understand the phonon cloud, such as the statistical character
of the quantum fluctuations, we follow \cite{shchadilova14arXiv:1410.5691} and
examine
\begin{equation}
g_{kk^{\prime}}^{\left( 2\right) }=\frac{\left\langle \hat{b}_{k}^{\dag}\hat{b}_{k^{\prime}}^{\dag}\hat{b}_{k^{\prime}}\hat{b}_{k}\right\rangle
}{\left\langle \hat{b}_{k}^{\dag}\hat{b}_{k}\right\rangle \left\langle \hat{b}_{k^{\prime}}^{\dag}\hat{b}_{k^{\prime}}\right\rangle },
\end{equation}
which is the multi-mode generalization of the single-mode second-order
correlation, $g_{kk}^{( 2) }$, popular in the study of quantum
optics \cite{walls08Book}, where $\langle \hat{b}_{k}^{\dag}\hat{b}_{k^{\prime}}\rangle =z_{k}z_{k^{\prime}}+\rho_{kk^{\prime}}$ and
$\langle \hat{b}_{k}^{\dag}\hat{b}_{k^{\prime}}^{\dag}\hat{b}_{k^{\prime}}\hat{b}_{k}\rangle =( z_{k}z_{k^{\prime}}+\kappa_{kk^{\prime}}) ^{2}+z_{k}^{2}\rho_{k^{\prime}k^{\prime}}+z_{k^{\prime}}^{2}\rho_{kk}+2z_{k}z_{k^{\prime}}\rho_{kk^{\prime}}+\rho_{kk^{\prime}}^{2}+\rho_{kk}\rho_{k^{\prime}k^{\prime}}$, which are valid when all quantities are real. In Fig.\
\ref{Fig:g^2},
\begin{figure}
[ptb]
\begin{center}
\includegraphics[width=3.4in]{figure2.\filetype}
\caption{(Color online) Two perspectives for the second order correlation function $g_{k,k^{\prime}}^{(2)}$ as a
function of $k$ and $k^{\prime}$ for the example in the first column of Fig.\ \ref{Fig:phonon property}.}
\label{Fig:g^2}
\end{center}
\end{figure} the thick black lines passing through the origin indicate the $g_{kk^{\prime}}^{(2) }=1$ plane (not shown), which is the value of $g_{kk^{\prime}}^{(2) }$ if phonons are prepared in a MF coherent state. The
region $kk^{\prime}<0$ exhibits phonon bunching, $g_{kk^{\prime}}^{(
2) }>1$, while the region $kk^{\prime}>0$ exhibits phonon anti-bunching,
$g_{kk^{\prime}}^{( 2) }<1$. In particular, $g_{k,-k}^{( 2) }$ decreases from 1 and saturates at a value less than 1, while
$g_{k,k}^{( 2) }$ increases from 1 and saturates at a value
larger than $1$, a phenomenon first observed in an analogous 3D model
\cite{shchadilova14arXiv:1410.5691} and believed to be accessible by noise
correlation analysis in time-of-flight experiments
\cite{ehud04PhysRevA.70.013603,simon05Nature.434.481,rom06Nature.434.481}. A
qualitative explanation may be that in the region $kk^{\prime}<0$, the phonon
interaction is attractive and thus tends to cause phonons to cluster,
leading to phonon bunching, while in the region $kk^{\prime}>0$, the phonon
interaction is repulsive and thus tends to cause phonons to spread, leading to
phonon anti-bunching.
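The expressions above for $g^{(2)}$ are straightforward to evaluate. As a sanity check, the formula reduces to the coherent-state value $g^{(2)}=1$ when $\rho=\kappa=0$, while assumed (illustrative, not solver-derived) correlation values between opposite-momentum modes shift it away from 1:

```python
import numpy as np

# g2 built from the mean field z and the correlations rho, kappa (real case).
def g2(zk, zkp, rho_kk, rho_pp, rho_kp, kap_kp):
    n_k = zk**2 + rho_kk
    n_kp = zkp**2 + rho_pp
    num = ((zk * zkp + kap_kp)**2 + zk**2 * rho_pp + zkp**2 * rho_kk
           + 2.0 * zk * zkp * rho_kp + rho_kp**2 + rho_kk * rho_pp)
    return num / (n_k * n_kp)

print(g2(0.3, -0.2, 0.0, 0.0, 0.0, 0.0))       # MF coherent state: exactly 1
# Assumed correlation values for two opposite-momentum modes:
print(g2(0.3, -0.2, 0.05, 0.04, 0.0, -0.03))   # bunching, g2 > 1 here
```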
\subsection{Polaron Energy}
Having discussed the variables parameterizing the polaron, we now investigate
the polaron energy for a system with total momentum $p$. The
polaron energy was given in Eq.\ (\ref{<H>}), which may be simplified, with the
help of Eqs.\ (\ref{p_ph 1}) and (\ref{CD equation}), to
\begin{align}
E_{p} & =\frac{p^{2}}{2m_{I}}+\frac{1}{\sqrt{\mathcal{V}}}\sum_{i}g_{i}z_{i}-\frac{p_{ph}^{2}}{2m_{I}}\nonumber\\
&\qquad + \sum_{i}\mathcal{C}_{i}\rho_{ii}+\frac{1}{2m_{I}}\sum_{i,j}\left(
k_{i}k_{j}\right) \left( \kappa_{ij}^{2}+\rho_{ij}^{2}\right) ,\label{E2}
\end{align}
which is valid in equilibrium where all variables are real.
As in the previous subsection, we begin with the MF limit where the trial
state is chosen as a product of coherent states parameterized by only $z_{k}$.
In this limit, the polaron energy (\ref{E2}) may be evaluated analytically and
gives (where $\bar{E}_{p}\equiv E_{p}/[\hbar^{2}/( m_{B}\xi_{B}^{2}) ]$)
\begin{align}
\bar{E}_{p} & =\frac{\bar{m}_{B}}{2}\bar{p}^{2}-\frac{\bar{m}_{B}}{2}\bar{p}_{ph}^{2}-\frac{\alpha^{\left( 1\right) }}{2}\frac{\bar{m}_{B}^{2}}{\sqrt{\left\vert \bar{r}\right\vert }}\left( 1+\frac{1}{\bar{m}_{B}}\right)
^{2}\nonumber\\
&\quad \times\left\{
\begin{array}{ll}
\coth^{-1}\frac{1+\bar{m}_{B}+\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}+\coth^{-1}\frac{1+\bar{m}_{B}-\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }} & \text{ if }\bar{r}>0\\
\cot^{-1}\frac{1+\bar{m}_{B}+\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }}+\cot^{-1}\frac{1+\bar{m}_{B}-\bar{b}}{\sqrt{\left\vert \bar{r}\right\vert }} &\text{ if }\bar{r}<0,
\end{array}
\right. \label{Ep MF}
\end{align}
which changes smoothly across $\bar{r}=0$, at which $\bar{E}_{p} = \bar{m}_B[
\bar{p}^{2}-\bar{p}_{ph}^{2}-\alpha^{\left( 1\right) }\left( 1+\bar{m}_{B}^{-1}\right) ^{2}] /2$. The polaron energy $\bar{E}_{p}$
depends on the total momentum $p$. However, it has been long established
depends on the total momentum $p$. However, it has been long established
\cite{spohn86JPhysicsA.19.533} that the ground state, where the polaron energy
is lowest, occurs at $p=0$. This is a general statement, and is thus true
for both the HFB and MF descriptions. For the MF description, the ground
state polaron energy is then obtained from Eq.\ (\ref{Ep MF}) by setting
$p=0$:
\begin{equation}
\bar{E}_{0}=-\alpha^{\left( 1\right) }\frac{\left( 1+\bar{m}_{B}\right)
^{2}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert }}\left\{
\begin{array}{ll}
\coth^{-1}\frac{1+\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert }} & \text{ if }\bar{m}_{B}>1\\
\cot^{-1}\frac{1+\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert }} & \text{ if }\bar{m}_{B}<1,
\end{array}
\right. \label{E0 MF}
\end{equation}
and $\bar{E}_{0}=-2\alpha^{(1)}$ when $\bar{m}_{B}=1$.
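Equation (\ref{E0 MF}) is straightforward to evaluate numerically, using the identities $\coth^{-1}x=\tanh^{-1}(1/x)$ and $\cot^{-1}x=\tan^{-1}(1/x)$ for $x>0$; a minimal sketch, including a check of smoothness across $\bar{m}_{B}=1$:

```python
import numpy as np

# MF ground-state polaron energy in units of hbar^2/(m_B xi_B^2).
def E0_mf(alpha1, mB):
    if np.isclose(mB, 1.0):
        return -2.0 * alpha1                      # the m_B = m_I limit
    r = abs(mB**2 - 1.0)
    x = (1.0 + mB) / np.sqrt(r)
    f = np.arctanh(1.0 / x) if mB > 1.0 else np.arctan(1.0 / x)
    return -alpha1 * (1.0 + mB)**2 / np.sqrt(r) * f

print(E0_mf(2.0, 1.0))                        # -4.0
print(E0_mf(2.0, 0.999), E0_mf(2.0, 1.001))   # smooth across m_B/m_I = 1
```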
We benchmark our HFB model by comparing its prediction for the ground state polaron energy with the predictions of MF theory [Eq.\ (\ref{E0 MF})] and of Feynman's path integral formalism, which has been regarded as a superior
all-coupling approximation \cite{tempere09PhysRevB.80.184504}. Feynman's
method amounts to applying the Feynman-Jensen inequality on a
variational action describing two (classical) particles coupled via a harmonic
force, where one is the impurity and the other is a fictitious particle.
Steps involved in integrating out the degrees of freedom for the fictitious
particle leading to an effective variational action for the impurity are
highlighted in Appendix A.
The first column in Fig.\ \ref{Fig:Polaron Energy}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[width=3.2093in]{figure3.\filetype}
\caption{(Color online) The first column displays the ground state polaron energy $E_{0}$ in units of $\hbar^{2}/\left( m_{B}\xi_{B}^{2}\right) $ as a function of the
dimensionless polaronic coupling constant $\alpha^{\left( 1\right) }$ when
(a) $m_{B}/m_{I}=0.5$, (b) $1$, and (c) $5$. The second column shows the
polaron energy divided by $\alpha^{\left( 1\right) }$, $E_{0}/\alpha^{\left( 1\right) }$, in units of $\hbar^{2}/\left( m_{B}\xi_{B}^{2}\right) $ as a function of the boson-impurity mass ratio, $m_{B}/m_{I}$,
when (d) $\alpha^{\left( 1\right) }=0.5$, (e) $1$, and (f) $5$. In each plot the solid black curve is our HFB result, the dashed blue curve is the MF result, and the dash-dotted red curve is Feynman's path integral result.}
\label{Fig:Polaron Energy}
\end{center}
\end{figure}displays the ground state
polaron energy, $\bar{E}_{0}$, as a function of the coupling constant
$\alpha^{( 1) }$ for various boson-impurity mass ratios, $\bar
{m}_{B}=m_{B}/m_{I}$. The dashed blue curves are obtained from the MF theory [Eq.\
(\ref{E0 MF})], the solid black curves are obtained from our HFB theory, and the dash-dotted red
curves from Feynman's path integral formalism. The MF variational ansatz
for finite $\bar{m}_{B}$ is motivated by the observation that the MF theory
becomes exact in the limit of heavy impurity $\bar{m}_{B}\rightarrow0$, where
$\hat{H}_{int}$ in Eq.\ (\ref{H after LLP}) is negligible and the shifting
operation (\ref{shifting}) with $z_{k}=-(g_{k}/\omega_{k})/\sqrt{\mathcal{V}}$
alone can diagonalize Eq.\ (\ref{H after LLP}). Indeed, results from all
three approaches, although not shown, would be plotted virtually atop one
another for roughly $\bar{m}_{B}<0.2$. As the impurity becomes increasingly
less massive, i.e.\ $\bar{m}_{B}$ increases, Figs.\ \ref{Fig:Polaron Energy}(a),
\ref{Fig:Polaron Energy}(b), and \ref{Fig:Polaron Energy}(c) illustrate that the MF results become increasingly larger than
Feynman's, in sharp contrast to the HFB results which match nicely with
Feynman's, demonstrating that correlations, which are excluded from the MF
theory, are an important part of the ground polaron state.
The second column in Fig.\ \ref{Fig:Polaron Energy} displays $\bar{E}_{0}/\alpha^{( 1) }$ as a function of the mass ratio, $\bar
{m}_{B}$, for various values of the coupling constant $\alpha^{(
1) }$. Equation (\ref{E0 MF}) tells us the MF $\bar{E}_{0}$ is
proportional to $\alpha^{( 1) }$ and thus $\bar{E}_{0}/\alpha^{( 1) }$ is independent of $\alpha^{( 1) }$,
as illustrated by identical dashed curves in the second column. In the limit
of heavy impurity mass, Eq.\ (\ref{E0 MF}) asymptotes to
\begin{equation}
\bar{E}_{0}/\alpha^{\left( 1\right) }\approx-\frac{\pi}{4}+\frac{1}{2}\bar{m}_{B}-\frac{5\pi}{8}\bar{m}_{B}^{2}+\cdots,\label{E0 heavy}
\end{equation}
where, as explained above, the MF result becomes an exact solution. As can
be seen from the second column, the HFB and Feynman results agree very well
with the MF results in this limit. In the limit of light impurity, Eq.\
(\ref{E0 MF}) asymptotes to
\begin{equation}
\frac{\bar{E}_{0}}{\alpha^{\left( 1\right) }}
\approx-\ln\left( 2\bar{m}_{B}\right)
\left( \frac{\bar{m}_{B}}{2}+1\right) +\frac{1-6\ln\left( 2\bar{m}_{B}\right) }{8\bar{m}_{B}}+\cdots.\label{E0 light}
\end{equation}
In this case we do not expect the MF result to be accurate and we find again
that the HFB and Feynman results disagree strongly with the MF results but
agree well with each other, indicating as before that neglecting quantum
fluctuations in the light impurity limit can lead to significant errors. The HFB and Feynman energies are seen to decrease rapidly with decreasing
impurity mass (increasing $\bar{m}_{B}$), while the MF energy changes slowly
due to the existence of a logarithmic function in the leading term in Eq.\
(\ref{E0 light}).
\subsection{Effective Polaron Mass}
Finally, we turn our attention to the effective polaron mass $m_{I}^{\ast}$
defined by
\begin{equation}
m_{I}^{\ast}=\left( \left. \frac{\partial^{2}E_{p}}{\partial p^{2}}\right\vert _{p=0}\right) ^{-1}, \label{m*}
\end{equation}
which follows from expansion of the polaron energy through second order in the
total momentum $p$, $E_{p}\approx E_{0}+p^{2}/2m_{I}^{\ast}$, where $E_{0}$ is
the ground state polaron energy studied in Fig.\ \ref{Fig:Polaron Energy}.
$m_{I}^{\ast}$ emerges naturally from Landau's concept of a mobile
polaron, in which an impurity drags with it a cloud of nearby background
particles, leading to an effective mass $m_{I}^{\ast}$ heavier than its bare
mass $m_{I}$. This picture together with the conservation of momentum means
the impurity momentum $p_{I}$ equals the total momentum minus the momentum of
the phonon cloud $p_{ph}$: $p_{I}=p-p_{ph}$, leading to the formula
\cite{shashi14PhysRevA.89.053617}
\begin{equation}
\frac{1}{m_{I}^{\ast}}=\frac{1}{m_{I}}-\frac{1}{m_{I}}\lim_{p\rightarrow
0}\frac{p_{ph}}{p},\text{ } \label{1/m_I}
\end{equation}
which is consistent with Eq.\ (\ref{m*}).
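Equation (\ref{m*}) also suggests a simple numerical recipe: given any computed dispersion $E_{p}$, the effective mass follows from a central finite difference at $p=0$. The Python sketch below illustrates this on a toy quadratic dispersion; the numerical values (an effective mass of $1.5$ in arbitrary units) are purely illustrative and not taken from the text.

```python
def effective_mass(E, h=1e-3):
    """Estimate m* = 1 / E''(0) from a dispersion E(p) via a central difference."""
    second_deriv = (E(h) - 2.0 * E(0.0) + E(-h)) / h**2
    return 1.0 / second_deriv

# Toy quadratic dispersion E_p = E_0 + p^2 / (2 m*), with illustrative values
E0, m_star = -2.0, 1.5
E = lambda p: E0 + p**2 / (2.0 * m_star)
print(effective_mass(E))  # recovers m* = 1.5
```

For a quadratic dispersion the central difference is exact up to rounding; for a dispersion known only at discrete momenta, a polynomial fit near $p=0$ plays the same role.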
Figure \ref{Fig:polaron mass}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
width=3.3in
]{figure4.\filetype}
\caption{(Color online) A comparison of effective polaron masses according to MF theory (dashed blue curves), our HFB theory (solid black curves), and Feynman's variational method
(dash-dotted red curves). In the first column, $m_{I}^{\ast}/m_{I} -1$ is plotted as a function of $\alpha^{(1)}$ for (a) $m_{B}=0.5$, (b) $1.0$, and (c)
$2.0$. In the second column, $(m_I^*/m_I - 1)/\alpha^{(1)}$ is plotted as a function of $m_{B}/m_{I}$ for
(d) $\alpha^{(1)}=0.5$, (e) $1.0$, and (f)
$2.0$.
\label{Fig:polaron mass}}
\end{center}
\end{figure}displays the effective polaron mass $m_{I}^{\ast}$. We show $\bar{m}_{I}^{\ast}$ as a function of
$\alpha^{\left( 1\right) }$ for various values of $\bar{m}_{B}$ in the first column and as a function of $\bar{m}_{B}$ for various values of
$\alpha^{\left( 1\right) }$ in the
second column. In both columns the solid black curves are from our
HFB method, and the dashed blue curves are from the MF theory, which, as in the
previous subsection, can be computed analytically:
\begin{align}
&\bar{m}_{I}^{\ast} =1+\frac{2\alpha^{\left( 1\right) }\bar{m}_{B}^{2}\left( 1+\bar{m}_{B}\right) }{\bar{m}_{B}-1}+\frac{4\alpha^{\left( 1\right) }\bar{m}_{B}\left( 1+\bar{m}_{B}\right) }{\left( \bar{m}_{B}-1\right) \sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert }}
\nonumber\\
& \times \left\{
\begin{array}{ll}
\tanh^{-1}\frac{1+\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert
}}-\tanh^{-1}\frac{\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert
}}&\text{ if }\bar{m}_{B}>1,\\
-\tan^{-1}\frac{1+\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert
}}+\tan^{-1}\frac{\bar{m}_{B}}{\sqrt{\left\vert \bar{m}_{B}^{2}-1\right\vert
}} & \text{ if }\bar{m}_{B}<1,
\end{array}
\right.
\end{align}
and $\bar{m}_{I}^{\ast}=1+\frac{16}{3}\alpha^{( 1) }$ if
$\bar{m}_{B}=1$. Figure \ref{Fig:polaron mass} also includes the effective
mass obtained from Feynman's approach using Eq.\ (\ref{m* Feynman}) in Appendix
A as the dash-dotted red curves. Figure \ref{Fig:polaron mass} demonstrates
that the HFB theory consistently gives a heavier effective mass than the MF
theory and that it can be significantly heavier for small $\bar{m}_{B}$ or
large $\alpha^{( 1) }$. The effective mass from Feynman's
method, while consistently heavier than that of the other two methods, is much closer to our
HFB result, once again demonstrating the nonclassical nature of the phonon
cloud inside of which phonons are highly correlated. A difference between
Feynman's and our HFB masses is expected since Feynman's approach cannot compute
the polaron energy at finite $p$ and hence defines the effective mass
differently from Eq.\ (\ref{m*}).
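As a numerical sanity check on the closed-form MF expression above, both branches of the piecewise formula can be verified to approach the $\bar{m}_{B}=1$ value $1+\frac{16}{3}\alpha^{(1)}$. Note that for $\bar{m}_{B}>1$ the arguments of the inverse hyperbolic tangents exceed unity, so they must be evaluated in the complex plane, where the imaginary parts of the two terms cancel. The following Python sketch (an illustration, not part of the paper's numerics) implements this check:

```python
import math, cmath

def mf_mass(alpha, mB):
    """Mean-field m*_I / m_I from the closed-form expression in the text."""
    if mB == 1.0:
        return 1.0 + 16.0 * alpha / 3.0
    s = math.sqrt(abs(mB**2 - 1.0))
    term = 2.0 * alpha * mB**2 * (1.0 + mB) / (mB - 1.0)
    pref = 4.0 * alpha * mB * (1.0 + mB) / ((mB - 1.0) * s)
    if mB > 1.0:
        # atanh arguments exceed 1 here; the imaginary parts of the two
        # complex branches cancel, so only the real part survives
        branch = (cmath.atanh((1.0 + mB) / s) - cmath.atanh(mB / s)).real
    else:
        branch = -math.atan((1.0 + mB) / s) + math.atan(mB / s)
    return 1.0 + term + pref * branch

for mB in (0.999, 1.0, 1.001):
    print(mf_mass(1.0, mB))  # all close to 1 + 16/3 = 6.333...
```

The individually divergent terms on either side of $\bar{m}_{B}=1$ cancel to the finite limit, as the printed values show.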
We conclude this subsection by noting that in the heavy impurity limit, Eq.\
(\ref{m* Feynman}) is found numerically to agree with the MF result
\begin{equation}
\bar{m}_{I}^{\ast}\approx1+\alpha^{\left( 1\right) }\pi\bar{m}_{B}+\alpha^{\left( 1\right) }\left( 2\pi-4\right) \bar{m}_{B}^{2}+\cdots,\label{m_I MF}
\end{equation}
while the variational mass $M$ is found to depart significantly from the above
MF result. Thus, in Feynman's method, $M$ does not agree with the effective polaron mass formula in Eq.\ (\ref{m* Feynman}) and we must use Eq.\ (\ref{m* Feynman}) to compute the effective polaron mass.
\section{Conclusion}
We considered the Fr\"ohlich model in a moving frame defined by the LLP transformation, where the original impurity-phonon system is transformed to an interacting many-phonon system free of impurities. This LLP model distinguishes itself with the four-boson interaction term in Eq.\ (\ref{four Boson interaction}) where an interaction between two phonons with momentum $\mathbf{k}$ and $\mathbf{k}'$ does not involve any
momentum exchange and is facilitated by a
``potential'' that depends on $\mathbf{k}\cdot\mathbf{k}'$. In the
spirit of generalized HFB theory, we formulated a field theoretical description of the LLP model where phonons are subject to this unique phonon-phonon interaction. As an application, we applied our theory to Bose polarons in quasi-1D cold atom mixtures and investigated polaron properties such as energy and mass by solving the HFB equations self-consistently and the HFB equations in the MF limit analytically.
We found that in the regime of relatively light impurity and strong coupling, our HFB results were significantly closer to those from Feynman's method than the predictions of MF theory. The agreement between our HFB approach and Feynman's method on the polaron energy was particularly impressive. We found that in the strongly interacting region the polaron ground state contains highly correlated phonon pairs. In any many-body system at (or close to) zero temperature, the exact nature of the ground state depends crucially on how particles interact with each other. We attributed the rich structure exhibited in various correlation functions, and the bunching and anti-bunching statistics exhibited in the second-order correlation function, to the existence of both repulsive (in the region $kk^{\prime}>0$) and attractive (in the region $kk^{\prime}<0$) phonon-phonon interactions.
We expect the 3D polaron to behave differently than our 1D polaron since their densities of states differ. Nevertheless, it is worth pointing out that for the 3D polaron, as the polaron coupling constant increases the ground state energy first rises above the impurity-condensate
interaction energy and then decreases below it, while in our 1D case the
ground state energy is always below it and decreases monotonically with increasing
$\alpha^{\left( 1\right) }$. This difference may be traced to the fact
that the 3D case suffers from an ultraviolet divergence, a
complication that does not occur in our 1D model. As a result, the 1D system
has allowed us to focus our attention on our main purpose:\ gaining clean
insight into the role the effective phonon-phonon interaction and quantum
fluctuations play in polaronic states.
Finally, we comment that the recent upsurge of interest in Bose polarons has
been largely spurred by the prospect that the rich toolbox and the flexibility
of cold atom systems may allow polaron theories to be tested, to great
precision, in cold atom experiments. However, many observables, which occur as
correlation functions involving various field operators, are inaccessible to
Feynman's method. Our HFB theory, however, can cast these observables
into forms which are, at least in principle, amenable to numerical analysis.
As a concrete example, in Appendix B we express, in terms of the variational
parameters of polarons, a time-dependent overlap function that lies at the
heart of radio-frequency (rf) spectroscopy, which has emerged as a
powerful tool in the study of cold atom physics in general and polaron physics
in particular.
\section*{Acknowledgments}
B.K.\ is grateful to ITAMP and the Harvard-Smithsonian Center for Astrophysics for their hospitality during the beginning stages of this work. H.Y.L.\ is supported in part by the U.S. National Science Foundation under Grant No.\ PHY 11-25915.
\section{Introduction}
Submodularity is an important property that models a diminishing-return phenomenon: the marginal value of adding an element to a set decreases as the set expands. It has been extensively studied in the literature, mainly in connection with maximization or minimization problems of set functions \cite{nemhauser1978best,khuller1999budgeted}. Mathematically, a set function $f: 2^V \to \mathbb{R}$ is submodular if for any two sets $A\subseteq B \subseteq V$ and any element $v\in V\backslash B$, we have $f(A\cup v) - f(A) \ge f(B\cup v) -f(B)$. This property finds a wide range of applications in machine learning, combinatorial optimization, economics, and beyond \cite{krause2005near,lin2011class,Li2022HERO,shi2021profit,kirchhoff2014submodularity,gabillon2013adaptive,kempe2003maximizing,wang2021efficient}.
Further equipped with monotonicity, i.e., $f(A)\le f(B)$ for any $A\subseteq B\subseteq V$, a submodular set function can be maximized under a cardinality constraint by the classic greedy algorithm, which achieves an approximation ratio of $1-1/e$ \cite{nemhauser1978best}, nearly the best possible. Since then, the study of submodular functions has been extended to a variety of scenarios, such as the non-monotone, adaptive, and continuous settings \cite{feige2007maximizing,golovin2011adaptive,das2011submodular,bach2019submodular,shi2019adaptive}.
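As a minimal illustration of the classic greedy algorithm mentioned above, the Python sketch below maximizes the canonical monotone submodular coverage function $f(S)=|\bigcup_{s\in S} s|$ under a cardinality constraint; the tiny instance and set names are hypothetical:

```python
def greedy_max_cover(sets, k):
    """Classic cardinality-constrained greedy for a monotone submodular
    objective; here f(S) = size of the union of the chosen sets."""
    chosen, covered = [], set()
    for _ in range(k):
        # pick the set with the largest marginal coverage gain
        best = max((s for s in sets if s not in chosen),
                   key=lambda s: len(sets[s] - covered), default=None)
        if best is None:
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

sets = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}}  # toy instance
print(greedy_max_cover(sets, 2))  # (['s3', 's1'], 7)
```

By the result cited above, the greedy value is within a factor $1-1/e$ of the optimum for any such monotone submodular instance.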
The above works focus on set functions. In real applications, however, the order in which elements are added plays an important role and can affect the function value significantly. Recently, submodularity has been generalized to sequence functions \cite{zhang2015string,tschiatschek2017selecting,streeter2008online,zhang2013near}. Considering sequences instead of sets causes an exponential increase in the size of the search space, while allowing for much more expressive models.
In this paper, we consider elements that are networked by a directed graph. The edges encode the additional value obtained when the connected elements are selected in a particular order. This setting has not previously been given a specific name; to distinguish it from classic submodularity, we call it networked submodularity (Net-submodularity for short). More specifically, a Net-submodular function $f(\sigma)$ is a sequence function that is not submodular over the element set induced by $\sigma$ but is submodular over the edge set induced by $\sigma$. Net-submodularity was first considered in \cite{tschiatschek2017selecting}, which mainly focuses on the case where the underlying graph is a directed acyclic graph. General graphs and hypergraphs are considered in \cite{mitrovic2018submodularity}.
Recently, robust versions of the submodular maximization problem have arisen \cite{orlin2018robust,mitrovic2017streaming,bogunovic2017robust,sallam2020robust} to meet the increasing demand for system stability. The robustness of a model mainly concerns its ability to handle malfunctions or adversarial attacks, i.e., the removal of a subset of elements from the selected set or sequence. Examples of element removal in real-world scenarios include items selling out or being discontinued in recommendation \cite{mitrovic2018submodularity}, web failures or user logout in link prediction \cite{mitrovic2019adaptive}, and equipment malfunction in sensor allocation or activation \cite{zhang2015string}. In this paper, we take one step further and study a new problem of \underline{ro}bust \underline{se}quence \underline{net}worked \underline{s}ubmodular maximization (RoseNets). We show an example in Figure 1 to illustrate the importance of the RoseNets problem.
Consider Figure 1. Suppose all edge weights in sequence A are 0.9, in sequence B are 0.5, and in sequence C are 0.4. Let the net-submodular utility function $f$ of a sequence be the sum of the weights of the edge set induced by the sequence. Such a utility function is obviously monotone but not submodular.\footnote{It is easy to see that in sequence B, the utility of $\{B4\}$ is 0 while that of $\{B3,B4\}$ is 0.5, which violates submodularity.} However, it is submodular on the edge set. The utilities of sequences A, B, and C are 2.7 (largest), 2.5, and 2.4, respectively. One can check that if one node is removed from each sequence, the worst-case utilities after removal for A, B, and C are 0.9, 1.0 (largest), and 0.8. If two nodes are removed from each sequence, the utilities of A, B, and C become 0, 0, and 0.4 (largest). Thus, as the number of removed nodes varies, the three sequences exhibit different robustness. An existing non-robust algorithm may select sequence A since it has the largest utility, yet sequences B and C are more robust against node removal.
\begin{figure}
\centering
\includegraphics[width=2.5in]{robustEX}\\
\caption{Example of RoseNets}
\vspace{-1.5em}
\end{figure}
Given a net-submodular function and the corresponding network, the RoseNets problem aims to select a sequence of elements under a cardinality constraint, such that the value of the sequence function is maximized when a certain number of the selected elements may be removed. Where sequence functions and net-submodularity are concerned, the design and analysis of robust algorithms face novel technical difficulties: the impact of removing an element from a sequence depends both on its position in the sequence and on its position in the network. This makes existing robust algorithms inapplicable here. It is unclear what conditions are sufficient for designing an efficient robust algorithm with a provable approximation ratio for the RoseNets problem. We aim to take a step toward answering this question in this paper. Our contributions are summarized as follows.
\begin{enumerate}
\item To the best of our knowledge, this is the first work that considers the RoseNets problem. Combining robust optimization and sequence net-submodular maximization requires subtle yet critical theoretical efforts.
\item We design a robust greedy algorithm that is robust against the removal of an arbitrary subset of the selected sequence. The theoretical approximation ratio depends both on the number of the removed elements and the network topology.
\item We conduct experiments on real applications of recommendation and link prediction. The experimental results demonstrate the effectiveness and robustness of the proposed algorithm, against existing sequence submodular baselines. We hope that this work serves as an important first step towards the design and analysis of efficient algorithms for robust submodular optimization.
\end{enumerate}
\section{Related Works}
Submodular maximization has been extensively studied in the literature. Efficient approximation algorithms have been developed for maximizing a submodular set function in various settings \cite{nemhauser1978best,khuller1999budgeted,calinescu2011maximizing,chekuri2014submodular}. To meet the increasing demand for system stability, robust versions of submodular maximization have recently been studied extensively. These works aim at selecting a set of elements that is robust against the removal of a subset of elements. The first algorithm for the cardinality-constrained robust submodular maximization problem is studied in \cite{orlin2018robust}, where a constant-factor approximation ratio is achieved: the selected $k$-sized set is robust against the removal of any $\tau$ elements of the selected set, and the constant approximation ratio is valid as long as $\tau= O(\sqrt{k})$. An improvement is made in \cite{bogunovic2017robust}, which provides an algorithm that guarantees the same constant approximation ratio but allows the removal of a larger number of elements (i.e., $\tau= O(k)$). Under a mild assumption, the algorithm proposed in \cite{mitrovic2017streaming} allows the removal of an arbitrary number of elements. The restriction on $\tau$ is relaxed in \cite{tzoumas2017resilient}, although the derived approximation ratio is parameterized by $\tau$. This work is extended to a multi-stage setting in \cite{tzoumas2018resilient} and \cite{tzoumas2020robust}, where the decision at each stage takes into account the failures that happened in previous stages. Other constraints combined with robust optimization include fairness, privacy, and so on \cite{mirzasoleiman2017deletion,kazemi2018scalable}.
The concept of sequence (or string) submodularity for sequence functions is a generalization of submodularity and has been introduced recently in several studies \cite{zhang2015string,streeter2008online,zhang2013near}.
The above works all consider element-based robust submodular maximization. Networked submodularity is considered in \cite{tschiatschek2017selecting,mitrovic2018submodularity}, where the sequential relationship among elements is encoded by a directed acyclic graph. Following the networked submodularity setting, the work in \cite{mitrovic2019adaptive} introduces the idea of adaptive sequence submodular maximization, which aims to utilize the feedback obtained in previous iterations to improve the current decision. In this paper, we follow the networked submodularity setting and study the RoseNets problem. It is unclear whether the above algorithms can be properly extended to our problem, as converting a set function to a sequence function, and submodularity to networked submodularity, could result in arbitrarily bad performance. Establishing approximation guarantees for the RoseNets problem requires a more sophisticated analysis, which calls for more in-depth theoretical efforts.
\section{System Model and Problem Definition}
In this paper, we follow the networked submodular sequence setting \cite{tschiatschek2017selecting,mitrovic2018submodularity}. Let $V = \{v_1,v_2,...,v_n\}$ be a set of $n$ elements. A set of edges $E$ represents the additional utility of picking certain elements in a certain order. More specifically, an edge $e_{ij} = (v_i,v_j)$ represents that there is additional utility in selecting $v_j$ after $v_i$ has already been chosen. Self-loops (i.e., edges that begin and end at the same element) represent the individual utility of selecting an element.
Given a directed graph $G = (V,E)$, a non-negative monotone submodular set function $h: 2^E \rightarrow \mathbb{R}_{\ge 0}$, and a parameter $k$, the objective is to select a non-repeating sequence $\sigma$ of $k$ unique elements that maximizes the objective function:
\[
f(\sigma)=h(E(\sigma)),
\]
where $E(\sigma)$ contains all the edges $(v_i,v_j)\in E$ such that $v_i$ is selected before $v_j$ in $\sigma$.
We say $E(\sigma)$ is the set of edges induced by the sequence $\sigma$. It is important to note that the function $h$ is a submodular set function over the edges, not over the elements. Furthermore, the objective function $f$ is neither a set function, nor is it necessarily submodular on the elements. We call such a function $f(\sigma)$ as a \textit{networked submodular} function.
We define $f(\sigma - Z)$ to represent the residual value of the objective function after the removal of elements in set $Z$. In this paper, the \underline{ro}bust \underline{se}quence \underline{net}worked \underline{s}ubmodular maximization (RoseNets) problem is formally defined below.
\begin{definition}
Given a directed graph $G=(V,E)$, a networked submodular function $f(\cdot)$ and robustness parameter $\tau$, the RoseNets problem aims at finding a sequence $\sigma$ such that it is robust against the worst possible removal of $\tau$ nodes:
\[
\max_{\sigma:|\sigma|\le k} \min_{Z\subseteq \sigma,|Z|\le \tau} f(\sigma - Z).
\]
\end{definition}
The robustness parameter $\tau$ represents the size of the subset $Z$ that is removed. After the removal, the objective value should remain as large as possible. For $\tau = 0$, the problem reduces to the classic sequence submodular maximization problem \cite{mitrovic2018submodularity}.
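To make the objective concrete, the following Python sketch evaluates $f(\sigma)=h(E(\sigma))$ and the robust objective by brute force, assuming for illustration a simple modular choice of $h$ (a weighted sum over induced edges); the toy graph and its weights are our own construction, not drawn from the figures:

```python
from itertools import combinations

def f(seq, w):
    """f(sigma) = h(E(sigma)) with a modular h: sum the weights of induced edges."""
    pos = {v: i for i, v in enumerate(seq)}
    return sum(wt for (u, v), wt in w.items()
               if u in pos and v in pos and (pos[u] < pos[v] or u == v))

def g_tau(seq, w, tau):
    """Worst-case residual utility after removing any subset Z, |Z| <= tau."""
    return min(f([v for v in seq if v not in Z], w)
               for r in range(tau + 1)
               for Z in combinations(seq, r))

w = {("A", "B"): 0.9, ("B", "C"): 0.9}  # toy two-edge path, illustrative weights
print(f(["A", "B", "C"], w))            # 1.8
print(g_tau(["A", "B", "C"], w, 1))     # 0: removing B leaves no induced edge
```

The brute-force inner minimum is exponential in $\tau$ and is shown only to pin down the objective; the algorithm in the next section avoids this enumeration.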
\section{Robust Algorithm and Theoretical Results}
Directly applying the Sequence Greedy algorithm \cite{mitrovic2018submodularity} to the RoseNets problem can return an arbitrarily bad solution. We construct a simple example as an illustration. See Figure 2. Let the edge weights of $(A,B),(B,C),(B,E),(B,F)$ be 0.9 and those of $(C,D),(C,G),(D,G)$ be 0.5. Let the net-submodular utility function $f$ of the selected sequence be the sum of the weights of the edge set induced by the sequence. Suppose we are to select a sequence of 5 elements. The Sequence Greedy algorithm selects the sequence $\langle A,B,C,E,F \rangle$\footnote{Suppose elements are selected in alphabetic order when edge weights are equal. Similar examples can easily be constructed when ties are broken at random.} to maximize the utility, i.e., $0.9\cdot 4=3.6$. However, if $\tau=2$, i.e., two elements are removed, then removing $B$ together with any other element (the worst case) reduces the utility to 0.
\begin{figure}
\centering
\includegraphics[width=2.2in]{example2}\\
\caption{Example of Sequence Greedy}
\vspace{-1.5em}
\end{figure}
\subsection{RoseNets Algorithm}
We wish to design an algorithm that is robust against the removal of an arbitrary subset of $\tau$ selected elements. In this paper, we propose the RoseNets Algorithm, which approximately solves the RoseNets problem and is shown in Algorithm 1. Note that we consider the case $k\ge 3$.
The limitation of the Sequence Greedy algorithm is that the selected sequence is vulnerable: the overall utility might be concentrated in the first few elements. Algorithm 1 is motivated by this key observation and works in two steps. In Step 1 (the first \textit{while} loop), we select a sequence $\sigma_1$ of $\tau$ elements from $V$ in a greedy manner, as in Sequence Greedy. In Step 2 (the second \textit{while} loop), we select another sequence $\sigma_2$ of $k-\tau$ elements from $V\backslash \sigma_1$, again in a greedy manner. Note that when we select $\sigma_2$, we perform the greedy selection as if sequence $\sigma_1$ did not exist at all. This ensures that the value of the final returned sequence $\sigma = \sigma_1 \oplus \sigma_2$ is not concentrated in either $\sigma_1$ or $\sigma_2$. The complexity of Algorithm 1 is $O(k|E|)$, measured in the number of function evaluations.
To show the differences and benefits of the RoseNets algorithm, we revisit the example in Figure 2. When $k=5$ and $\tau=2$, the RoseNets algorithm selects the sequences $\sigma_1=\langle A,B \rangle$ and $\sigma_2=\langle C,D,G \rangle$. When selecting $\sigma_2$, the RoseNets algorithm does not consider element $B$ in $\sigma_1$; thus elements $E$ and $F$ are regarded as making no contribution to the utility function. The RoseNets algorithm returns the sequence $\sigma=\langle A,B,C,D,G \rangle$. The worst case of removing $\tau=2$ elements is to remove $B$ and any one element in $\{C,D,G\}$, leaving a residual utility of 0.5. Recall that for the Sequence Greedy algorithm, the worst-case residual utility is 0. This example shows the benefit of the RoseNets algorithm.
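The numbers in this example can be checked mechanically. The snippet below (again with a modular $h$ summing induced edge weights, and brute-force enumeration of the worst-case removal) reproduces the residual utilities $0$ and $0.5$ quoted above:

```python
from itertools import combinations

# Edge weights of the Figure 2 example
w = {("A","B"): 0.9, ("B","C"): 0.9, ("B","E"): 0.9, ("B","F"): 0.9,
     ("C","D"): 0.5, ("C","G"): 0.5, ("D","G"): 0.5}

def f(seq):
    """Modular h: total weight of the edges induced by the sequence."""
    pos = {v: i for i, v in enumerate(seq)}
    return sum(wt for (u, v), wt in w.items()
               if u in pos and v in pos and pos[u] < pos[v])

def worst_case(seq, tau):
    """Residual utility under the worst removal of exactly tau elements
    (removing fewer never gives a smaller value, since h is monotone)."""
    return min(f([v for v in seq if v not in Z]) for Z in combinations(seq, tau))

greedy_seq   = ["A", "B", "C", "E", "F"]   # Sequence Greedy's choice
rosenets_seq = ["A", "B", "C", "D", "G"]   # RoseNets' choice
print(f(greedy_seq), worst_case(greedy_seq, 2))      # ~3.6 and 0
print(f(rosenets_seq), worst_case(rosenets_seq, 2))  # ~3.3 and 0.5
```

Although RoseNets gives up some nominal utility ($3.3$ versus $3.6$), its worst-case residual utility is strictly better.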
Both the examples in Figure 1 and Figure 2 imply that a robust sequence should have a complex network structure: the utility should not be concentrated in a central element but aggregated from all the edges among elements in the sequence. Such a robust sequence can only be selected by multiple trials of greedy selection, with the trials neglecting each other. Otherwise, a central element (if one exists) and its high-edge-weight neighbors are likely to be selected, as node $B$ is in Figure 2; if such a node is removed, the utility drops sharply. In this paper, we implement this intuitive strategy with two trials of greedy selection. Intuitively, invoking more greedy selection trials may improve the approximation ratio. However, since this paper takes a first step on the RoseNets problem and our theoretical analysis is already non-trivial, we leave the design of a multi-part selection algorithm and its approximation ratio analysis for future work.
\begin{algorithm}[t]
\small
\caption{RoseNets Algorithm}
$\sigma=\emptyset$, $\sigma_1=\emptyset$, $\sigma_2=\emptyset$\;
\While{$|\sigma_1| < \tau$}{
\If{$|\sigma_1|=\tau-1$}{
$E'=\{e_{ij}| (v_j \notin \sigma_1) \land (v_i=v_j \vee v_i \in \sigma_1)\}$\;
$e_{ij}=\arg\max_{e_{ij} \in E'} h(e|E(\sigma_1))$\;
$\sigma_1 = \sigma_1 \oplus v_j$\;
}
\Else{
$e_{ij}=\arg\max_{\{e_{ij}|v_j\notin \sigma_1\}} h(e|E(\sigma_1))$\;
\If{$v_j=v_i$ or $v_i\in \sigma_1$}{
$\sigma_1 = \sigma_1 \oplus v_j$\;
}
\Else{
$\sigma_1 = \sigma_1 \oplus v_i \oplus v_j$\;
}
}
}
\While{$|\sigma_2| < k-\tau$}{
\If{$|\sigma_2|=k-\tau-1$}{
$E'=\{e_{ij}| (v_j \notin \sigma_1 \cup \sigma_2) \land (v_i=v_j \vee v_i \in \sigma_2)\}$\;
$e_{ij}=\arg\max_{e_{ij}\in E'} h(e|E(\sigma_2))$\;
$\sigma_2 = \sigma_2 \oplus v_j$\;
}
\Else{
$e_{ij}=\arg\max_{\{e_{ij}|v_i\notin \sigma_1, v_j \notin \sigma_1 \cup \sigma_2\}} h(e|E(\sigma_2))$\;
\If{$v_j=v_i$ or $v_i\in \sigma_2$}{
$\sigma_2 = \sigma_2 \oplus v_j$\;
}
\Else{
$\sigma_2 = \sigma_2 \oplus v_i \oplus v_j$\;
}
}
}
$\sigma=\sigma_1\oplus \sigma_2$\;
\Return $\sigma$
\end{algorithm}
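A minimal Python sketch of Algorithm 1 is given below, under the simplifying assumption that $h$ is modular (a weighted sum of induced edges), so that the marginal gain $h(e|E(\sigma))$ of a not-yet-induced edge reduces to its weight and each greedy step just scans edges in decreasing weight. The implementation details are ours and only sketch the pseudocode above; on the Figure 2 instance it reproduces the sequence $\langle A,B,C,D,G \rangle$ discussed earlier.

```python
def greedy_phase(w, forbidden, length):
    """One Sequence-Greedy pass restricted to elements outside `forbidden`."""
    seq = []
    while len(seq) < length:
        chosen = None
        for (u, v), wt in sorted(w.items(), key=lambda kv: -kv[1]):
            if v in seq or v in forbidden or u in forbidden:
                continue
            needs = 1 if (u == v or u in seq) else 2   # elements to append
            if len(seq) + needs <= length:             # last-slot guard of Alg. 1
                chosen = (u, v, needs)
                break
        if chosen is None:
            break
        u, v, needs = chosen
        if needs == 2:
            seq.append(u)
        seq.append(v)
    return seq

def rosenets(w, k, tau):
    s1 = greedy_phase(w, forbidden=set(), length=tau)
    s2 = greedy_phase(w, forbidden=set(s1), length=k - tau)  # ignore sigma_1
    return s1 + s2

# Figure 2 graph; dict order breaks weight ties alphabetically, as in the text
w = {("A","B"): 0.9, ("B","C"): 0.9, ("B","E"): 0.9, ("B","F"): 0.9,
     ("C","D"): 0.5, ("C","G"): 0.5, ("D","G"): 0.5}
print(rosenets(w, k=5, tau=2))  # ['A', 'B', 'C', 'D', 'G']
```

The second phase forbids every element of $\sigma_1$, mirroring the restriction $v_i,v_j\notin\sigma_1$ in the second \textit{while} loop of Algorithm 1.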
\subsection{Theoretical Results}
Let $\alpha=2 d_{\text{in}}+1$, $\beta=1+d_{\text{in}}+d_{\text{out}}$, $\gamma=e^{\frac{k-3}{k-2}}$, $\eta=e^{\frac{k-2\tau-1}{k-\tau-1}}$. Note that $d_{\text{in}}$ and $d_{\text{out}}$ are the maximum in- and out-degrees of the network, respectively. For convenience, we denote by $f(v|\sigma)$ and $f(\sigma'|\sigma)$ the marginal gains of appending $v$ and $\sigma'$ to sequence $\sigma$, respectively. We denote by $\sigma^*(V,k,\tau)$ the optimal solution of the RoseNets problem with element set $V$, cardinality $k$, and robustness parameter $\tau$, and by $g_\tau(\sigma)$ the minimum value of $f(\sigma)$ after $\tau$ elements are removed from $\sigma$.
\begin{theorem}
Consider $\tau=1$, Algorithm 1 achieves an approximation ratio of \[\max\{\frac{1-e^{-(1-1/k)}}{\alpha\beta},\frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\beta \gamma^{\frac{1}{d_{\text{in}}}}-1}\}.\]
\end{theorem}
\begin{theorem}
Consider $1\le \tau \le k$, Algorithm 1 achieves an approximation ratio of \[\max\{\frac{1-e^{-(1-1/k)}}{\alpha\beta},\frac{\tau\alpha\beta(\eta^{\frac{1}{d_{\text{in}}}}-1)}{\tau\alpha\eta^{\frac{1}{d_{\text{in}}}}- \beta (1-e^{-(1-1/k)}) }\}.\]
\end{theorem}
In Theorem 1, it is hard to compare the two approximation ratios directly due to their complex mathematical expressions. We therefore consider specific network settings to show the respective advantages of the two terms.
First, it is easy to verify that both terms are monotonically increasing functions of $k$. When $k=3$, we have $\frac{1-e^{-(1-1/k)}}{\alpha\beta}-\frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\beta \gamma^{\frac{1}{d_{\text{in}}}}-1} = \frac{1-e^{-2/3}}{\alpha \beta}>0$. Thus, when $k$ is small, the first term is larger. When $k \to \infty$, the first term has the limit value $(1-1/e)/\alpha\beta$, and the second term has the limit value $\frac{e^{\frac{1}{d_{\text{in}}}}-1}{\beta e^{\frac{1}{d_{\text{in}}}}-1}$. When $d_{\text{in}}=1$, we have $\alpha=3$ and $(1-1/e)/\alpha\beta-\frac{e^{\frac{1}{d_{\text{in}}}}-1}{\beta e^{\frac{1}{d_{\text{in}}}}-1} =(1-1/e)/3\beta-\frac{e-1}{\beta e-1}=\frac{-1-2\beta e +\frac{1}{e}+2\beta}{3\beta(\beta e-1)}<0$. In this case, the second term is larger than the first term in Theorem 1. Thus we conclude that under specific network structures, the second term is larger for large $k$.
Similarly, for Theorem 2, when $k=3$ and $\tau=1$, $\frac{1-e^{-(1-1/k)}}{\alpha\beta}-\frac{\tau\alpha\beta(\eta^{\frac{1}{d_{\text{in}}}}-1)}{\tau\alpha\eta^{\frac{1}{d_{\text{in}}}}- \beta (1-e^{-(1-1/k)})} = \frac{1-e^{-2/3}}{\alpha \beta}>0$. Thus, when $k$ and $\tau$ are small, the first term is larger. When $k \to \infty$ while $\tau$ remains constant, the first term has the limit value $(1-1/e)/\alpha\beta$, and the second term has the limit value $\frac{\tau\alpha\beta(e^{\frac{1}{d_{\text{in}}}}-1)}{\tau\alpha e^{\frac{1}{d_{\text{in}}}}- \beta (1-e^{-1}) }$. When $d_{\text{in}}=1$ and $d_{\text{out}}<\frac{3\tau e^2}{e-1}-2$, we have $\alpha=3$, $\beta < \frac{3\tau e^2}{e-1}$, and $(1-1/e)/\alpha\beta-\frac{\tau\alpha\beta(e^{\frac{1}{d_{\text{in}}}}-1)}{\tau\alpha e^{\frac{1}{d_{\text{in}}}}- \beta (1-e^{-1}) } = \frac{(1-\frac{1}{e})(3\tau e -\beta(1-\frac{1}{e}))-9\beta^2 \tau(e-1)}{3\beta(3\tau e -\beta(1-\frac{1}{e}))}<0$. In this case, the second term is larger than the first term in Theorem 2. Thus we conclude that under specific network structures, the second term is larger for large $k$.
According to the above analysis, the values of $k$ and $\tau$, together with the network topology, significantly affect the approximation ratio. It would be an interesting future direction to explore how the approximation ratio changes as these parameters and the network topology vary.
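For intuition, the crossover between the two bounds in Theorem 1 can be observed numerically. The short script below evaluates both terms for the illustrative choice $d_{\text{in}}=d_{\text{out}}=1$ (so $\alpha=3$, $\beta=3$); the specific degree values are an assumption made only for this illustration:

```python
import math

def term1(k, d_in, d_out):
    """First bound in Theorem 1: (1 - e^{-(1-1/k)}) / (alpha * beta)."""
    alpha, beta = 2 * d_in + 1, 1 + d_in + d_out
    return (1.0 - math.exp(-(1.0 - 1.0 / k))) / (alpha * beta)

def term2(k, d_in, d_out):
    """Second bound in Theorem 1, with gamma = e^{(k-3)/(k-2)}."""
    beta = 1 + d_in + d_out
    g = math.exp((k - 3.0) / (k - 2.0)) ** (1.0 / d_in)   # gamma^(1/d_in)
    return (g - 1.0) / (beta * g - 1.0)

d_in = d_out = 1   # illustrative topology
print(term1(3, d_in, d_out), term2(3, d_in, d_out))      # first term dominates at k = 3
print(term1(100, d_in, d_out), term2(100, d_in, d_out))  # second term dominates at large k
```

At $k=3$ the second term vanishes (since $\gamma=1$), while for large $k$ it approaches $\frac{e-1}{3e-1}\approx 0.24$ and overtakes the first term's limit $(1-1/e)/9\approx 0.07$, matching the analysis above.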
To prove the above two theorems, we need the following three auxiliary lemmas. Due to space limitations, we assume here that Lemmas 1, 2, and 3 hold and show the proof of Theorem 1 below. The proofs of Lemmas 1, 2, and 3 and of Theorem 2 are provided in the supplementary material.
\begin{lemma}
For sequences $\sigma_1$ and $\sigma_2$, there exists an element $v$ such that $f(v|\sigma_1) \ge \frac{1}{d_{\text{in}}|\sigma_2|} f(\sigma_2|\sigma_1)$.
\end{lemma}
\begin{lemma}
Consider $c \in (0,1]$ and $1 \le k' \le k$. Suppose that the sequence selected is $\sigma$ with $|\sigma| = k$ and that there exists a sequence $\sigma'$ with $|\sigma'| = k-k'$ such that $\sigma' \subseteq \sigma$ and $f(\sigma') \ge c f(\sigma)$. Then we have $f(\sigma) \ge \frac{e^{\frac{k'}{d_{\text{in}}k}}-1} {e^{\frac{k'}{d_{\text{in}}k}}-c} f(\sigma^*(V,k,0))$.
\end{lemma}
\begin{lemma}
Consider $1 \le \tau \le k$. The following holds for any $Z \subseteq V$ with $|Z| \le \tau$: $g_\tau (\sigma^*(V,k,\tau)) \le f(\sigma^*(V-Z,k-\tau,0))$.
\end{lemma}
\subsection{Proof of Theorem 1}
Given $\tau=1$, the selected sequence $\sigma_1$ has one element, and we have $|\sigma_2|=k-1$. Let $\sigma_1=\{v_1\}$. Then the final sequence is $\sigma= \{v_1\} \oplus \sigma_2$. Suppose the vertex removed from the sequence is $z$.
First, we show a lower bound on $f(\sigma_2)$:
\begin{equation}
\begin{aligned}
f(\sigma_2) & \ge \frac{1-e^{-(1-1/k)}}{\alpha} f(\sigma^*(V\backslash v_1,k-1,0)) \\
& \ge \frac{1-e^{-(1-1/k)}}{\alpha} g_\tau(\sigma^*(V,k,\tau))
\end{aligned}
\end{equation}
The first inequality is due to the approximation ratio of Sequence Greedy algorithm for net-submodular maximization \cite{mitrovic2018submodularity}. The second inequality is due to Lemma 3.
Now, we can see the removed element $z$ can be either $v_1$ or an element in $\sigma_2$. In the following, we will consider these two cases:
\textbf{Case 1.} Let $z=v_1$. Then we have
\begin{equation}
f(\sigma -z ) = f(\sigma_2) \ge \frac{1-e^{-(1-1/k)}}{\alpha} g_\tau(\sigma^*(V,k,\tau))
\end{equation}
\textbf{Case 2.} Let $z\in \sigma_2$. We then further consider two cases:
\textbf{Case 2.1.} Let $f(\sigma_2) \le f(\sigma_2-z)$.
In this case, the removal does not reduce the overall value of the remaining sequence $\sigma_2-\{z\}$. Then we have
\begin{equation}
\begin{aligned}
f(\sigma-z) & =f(v_1 \oplus (\sigma_2-z)) \ge f(\sigma_2-z) \\
& \ge f(\sigma_2) \ge \frac{1-e^{-(1-1/k)}}{\alpha} g_\tau(\sigma^*(V,k,\tau))
\end{aligned}
\end{equation}
\textbf{Case 2.2.} Let $f(\sigma_2) > f(\sigma_2-z)$.
We define $q= \frac{f(\sigma_2)-f(\sigma_2-z)}{(d_{\text{in}}+d_{\text{out}})f(\sigma_2)}$, to represent the ratio of the loss of removing element $z$ from sequence $\sigma_2$ to the value of the sequence $\sigma_2$. Obviously, we have $q\in (0,\frac{1}{d_{\text{in}}+d_{\text{out}}}]$ since $f(\sigma_2) > f(\sigma_2-z)$.
First, we have
\begin{equation}
\begin{aligned}
(d_{\text{in}} + & d_{\text{out}}) q f(\sigma_2)= f(\sigma_2) - f(\sigma_2-z) \\
& = f(\sigma_2^1 \oplus z \oplus \sigma_2^2) - f(\sigma_2^1 \oplus \sigma_2^2) \\
& = f(\sigma_2^1) + f(z|\sigma_2^1)+f(\sigma_2^2| (\sigma_2^1\oplus z)) \\
& \quad\quad\quad\quad\quad\quad - f(\sigma_2^1) -f(\sigma_2^2|\sigma_2^1) \\
& = f(z|\sigma_2^1)+f(\sigma_2^2| (\sigma_2^1\oplus z)) - f(\sigma_2^2|\sigma_2^1) \\
& \le d_{\text{in}} h(e_z^{\text{in}}) + d_{\text{out}} h(e_z^{\text{out}}) \\
& \le (d_{\text{in}} + d_{\text{out}}) \max \{h(e_z^{\text{in}}),h(e_z^{\text{out}})\}
\end{aligned}
\end{equation}
where $h(e_z^{\text{in}})$/$h(e_z^{\text{out}})$ denote the maximum utility over all the incoming/outgoing edges of $z$. The first inequality is due to the fact that the marginal gain of a vertex $z$ to the prefix and subsequent sequence is at most $d_{\text{in}} h(e_z^{\text{in}})$ and $d_{\text{out}} h(e_z^{\text{out}})$, respectively. The second inequality then follows immediately.
Given Equation (4), we prove four inequalities that together yield the theorem.
First, suppose the first vertex of $\sigma_2$ is $v_2$. By the monotonicity of function $f(\cdot)$ and Equation (4), we have
\begin{equation}
\begin{aligned}
f(\sigma-\{z\}) & \ge f(v_1 \oplus v_2) \\
& \ge \max \{h(e_z^{\text{in}}),h(e_z^{\text{out}})\} \ge q f(\sigma_2), \text{ and } \\
f(\sigma-\{z\}) & = f(v_1 \oplus (\sigma_2-z)) \\
& \ge f(\sigma_2 -z) \ge (1-(d_{\text{in}} + d_{\text{out}})q) f(\sigma_2)
\end{aligned}
\end{equation}
Given Equation (5), we have Inequality 1 as below.
\textbf{Inequality 1:}
\[
f(\sigma-z)\ge \max\{ q \cdot f(\sigma_2),(1-(d_{\text{in}} + d_{\text{out}})q) \cdot f(\sigma_2)\}.
\]
We know $\max\{x,1-bx\} \ge \frac{1}{1+b}$ for $x\in (0,\frac{1}{b}]$ and $b>0$.\footnote{As $x$ is monotonically increasing and $1-bx$ is monotonically decreasing, the maximum of the two is minimized when $x=1-bx$.} Thus we have Inequality 2 as below.
\textbf{Inequality 2: }$\max\{q,1-(d_{\text{in}} + d_{\text{out}})q\} \ge 1/\beta.$
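The elementary fact behind Inequality 2 admits a quick numerical check (a sketch; here `b` stands in for $d_{\text{in}} + d_{\text{out}}$):

```python
def envelope(b, x):
    # the quantity max{x, 1 - b*x} bounded below in Inequality 2
    return max(x, 1 - b * x)

b = 2.0
xs = [(i / 10000) * (1 / b) for i in range(1, 10001)]   # grid over (0, 1/b]
worst = min(envelope(b, x) for x in xs)                  # attained near x = 1/(1 + b)
```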
Note that the first two elements $v_2,v_3$ in $\sigma_2$ satisfy that
\[
f(v_2\oplus v_3) \ge \max \{h(e_z^{\text{in}}),h(e_z^{\text{out}})\} \ge q f(\sigma_2)
\]
Thus by replacing the parameters in Lemma 2 and Lemma 3, we have the following result, which implies Inequality 3.
\[
\begin{aligned}
& f(\sigma_2) \ge \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} f(\sigma^*(V\backslash v_1,k-\tau,0)) \\
& \Longrightarrow \text{ \textbf{Inequality 3: }}f(\sigma_2) \ge \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} g_{\tau} (\sigma^*(V,k,\tau))
\end{aligned}
\]
Now define $\ell_1(q) = q \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q}$ and $\ell_2(q) = (1-(d_{\text{in}} + d_{\text{out}})q) \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q}$. It is easy to verify that for $k\ge 3$ and $q\in (0,\frac{1}{d_{\text{in}}+d_{\text{out}}}]$, $\ell_1(q)$/$\ell_2(q)$ is monotonically increasing/decreasing. Note that when $q=\frac{1}{\beta}$, $\ell_1(q)=\ell_2(q)=\frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\beta \gamma^{\frac{1}{d_{\text{in}}}}-1}$. We consider two cases for $q$: (1) when $q \in (0,\frac{1}{\beta}]$, we have $\max\{\ell_1(q),\ell_2(q)\}\ge \ell_2(\frac{1}{\beta})$ as $\ell_2(q)$ is monotonically decreasing; (2) when $q \in (\frac{1}{\beta},\frac{1}{d_{\text{in}}+d_{\text{out}}}]$, we have $\max\{\ell_1(q),\ell_2(q)\}\ge \ell_1(\frac{1}{\beta})$ as $\ell_1(q)$ is monotonically increasing. Thus we have Inequality 4 as below.
\textbf{Inequality 4:}
\[
\max\{ q \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} , (1-(d_{\text{in}} + d_{\text{out}})q) \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} \} \ge \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\beta \gamma^{\frac{1}{d_{\text{in}}}}-1}
\]
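The case analysis behind Inequality 4 can also be verified numerically. In the sketch below, `g` stands in for the constant $\gamma^{1/d_{\text{in}}} > 1$ (its exact value does not matter for the monotonicity argument); the variable names are ours.

```python
def ell1(q, g):
    # l1(q) = q * (g - 1) / (g - q), with g = gamma**(1/d_in)
    return q * (g - 1) / (g - q)

def ell2(q, g, din, dout):
    # l2(q) = (1 - (d_in + d_out) q) * (g - 1) / (g - q)
    return (1 - (din + dout) * q) * (g - 1) / (g - q)

din, dout, g = 1, 2, 2.0
beta = din + dout + 1                      # the beta of Inequality 2
qs = [(i / 1000) * (1 / (din + dout)) for i in range(1, 1001)]
v1 = [ell1(q, g) for q in qs]
v2 = [ell2(q, g, din, dout) for q in qs]
inc = all(a < b for a, b in zip(v1, v1[1:]))    # ell1 monotonically increasing
dec = all(a > b for a, b in zip(v2, v2[1:]))    # ell2 monotonically decreasing
cross = ell1(1 / beta, g)                        # curves meet at q = 1/beta
```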
Combining Inequality 1, Inequality 2 and Equation (1), we can have the first lower bound in Theorem 1:
\[
\begin{aligned}
& f(\sigma -z )\ge \max\{ q \cdot f(\sigma_2),(1-(d_{\text{in}} + d_{\text{out}})q) f(\sigma_2)\} \\
\ge & \max\{q,1-(d_{\text{in}} + d_{\text{out}})q\} \frac{1-e^{-(1-1/k)}}{\alpha} g_\tau(\sigma^*(V,k,\tau)) \\
\ge & \frac{1-e^{-(1-1/k)}}{\alpha\beta} g_\tau(\sigma^*(V,k,\tau))
\end{aligned}
\]
Combining Inequality 1, Inequality 3 and Inequality 4, we can have the second lower bound in Theorem 1:
\[
\begin{aligned}
& f(\sigma -z ) \ge \max\{ q \cdot f(\sigma_2),(1-(d_{\text{in}} + d_{\text{out}})q) \cdot f(\sigma_2)\} \\
& \ge \max\{ q \cdot \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} , \\
& \quad\quad\quad (1-(d_{\text{in}} + d_{\text{out}})q) \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\gamma^{\frac{1}{d_{\text{in}}}}-q} \} g_\tau(\sigma^*(V,k,\tau)) \\
& \ge \frac{\gamma^{\frac{1}{d_{\text{in}}}}-1}{\beta \gamma^{\frac{1}{d_{\text{in}}}}-1} g_\tau(\sigma^*(V,k,\tau))
\end{aligned}
\]
Now we are done.
\section{Experiments}
We compare the performance of our algorithms RoseNets to the non-robust version Sequence Greedy (\textbf{Sequence} for short) \cite{mitrovic2018submodularity}, the existing submodular sequence baseline (\textbf{OMegA}) \cite{tschiatschek2017selecting}, and a naive baseline (\textbf{Frequency}) which outputs the most popular items the user has not yet reviewed.
To evaluate the performance of the algorithms, we use three evaluation metrics in this paper. The first one is \textbf{Accuracy Score}, which simply counts the number of accurately recommended items. While this is a sensible measure, it does not explicitly consider the order of the sequence. Therefore, we also consider the \textbf{Sequence Score}, which is a measure based on the Kendall-Tau distance \cite{kendall1938new}. This metric counts the number of ordered pairs that appear in both the predicted sequence and the true sequence. These two metrics are also used in \cite{mitrovic2019adaptive}. The third metric is the \textbf{Utility Function Value} of the selected sequence.
We use a probabilistic coverage utility function as our Net-submodular function $f$. Mathematically,
\[
f(\sigma)=h(E_1)=\sum_{j\in V} [1- \prod_{(i,j)\in E_1} (1-w_{ij})],
\]
where $E_1\subseteq E$ is the set of edges induced by the sequence $\sigma$. To simulate the worst-case removal, we remove the first $\tau$ elements to evaluate the robustness of the algorithms.
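For concreteness, a minimal implementation of this utility is sketched below. It rests on one assumption about the induced edge set $E_1$: an edge $(i,j)$ is induced when $i$ appears no later than $j$ in $\sigma$ (so self-loops are induced by membership alone), which makes the order of the sequence matter.

```python
def coverage_value(seq, weights):
    """Probabilistic coverage f(sigma) = sum_j [1 - prod_{(i,j) in E1} (1 - w_ij)].

    `weights` maps directed edges (i, j) -> w_ij in [0, 1]. Assumption: an edge
    (i, j) is induced by `seq` when i appears no later than j in it, so the
    self-loop (i, i) is induced whenever i is selected.
    """
    pos = {v: p for p, v in enumerate(seq)}
    miss = {}                      # j -> product over induced edges of (1 - w_ij)
    for (i, j), w in weights.items():
        if i in pos and j in pos and pos[i] <= pos[j]:
            miss[j] = miss.get(j, 1.0) * (1.0 - w)
    return sum(1.0 - m for m in miss.values())

w = {("A", "B"): 0.5, ("A", "A"): 0.2, ("B", "B"): 0.4}
# order matters: the edge (A, B) is induced only when A precedes B
```

With these toy weights, `coverage_value(["A", "B"], w)` is 0.9 while `coverage_value(["B", "A"], w)` is 0.6.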
\subsection{Amazon Product Recommendation}
Using the Amazon Video Games review dataset \cite{ni2019justifying}, we conduct experiments for the task of recommending products to users. In particular, given a specific user with the first 4 products she has purchased, we want to predict the next $k$ products she will buy. We first build a graph $G = (V,E)$, where $V$ is the set of all products and $E$ is the set of edges between these products. The weight of each edge, $w_{ij}$, is defined to be the conditional probability of purchasing product $j$ given that the user has previously purchased product $i$. We compute $w_{ij}$ by taking the fraction of users that purchased $j$ after having purchased $i$ among all the users that purchased $i$. There are also self-loops with weight $w_{ii}$ that represent the fraction of users that purchased product $i$ among all the users. We focus on the products that have been purchased at least 50 times each, leaving us with a total of 9383 unique products. We also select the users that have purchased at least 29 products, leaving us 909 users. We conduct the recommendation task on these 909 users and take the average value of each evaluation metric.
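The weight estimation described above can be sketched as follows. This is our own minimal version, using the first purchase time of each item within a user's history; the thresholds and tie handling of the actual pipeline may differ.

```python
from collections import defaultdict

def edge_weights(histories):
    """w_ij = fraction of users who bought j after i, among users who bought i;
    w_ii = fraction of all users who bought i. `histories` is a list of
    per-user purchase sequences, earliest purchase first."""
    n_users = len(histories)
    bought = defaultdict(int)      # i -> number of users who bought i
    after = defaultdict(int)       # (i, j) -> users who bought j after i
    for seq in histories:
        first = {}                 # item -> position of its first purchase
        for p, item in enumerate(seq):
            first.setdefault(item, p)
        for i in first:
            bought[i] += 1
        for i, pi in first.items():
            for j, pj in first.items():
                if pj > pi:
                    after[(i, j)] += 1
    w = {(i, j): c / bought[i] for (i, j), c in after.items()}
    for i, c in bought.items():
        w[(i, i)] = c / n_users    # self-loop: overall purchase frequency
    return w
```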
Figure 3 shows the performance of the comparison algorithms using the accuracy score, sequence score and utility function value respectively. In Figures 3(a), 3(b), 3(c) and 3(d), we find that after the removal of $\tau$ elements, RoseNets outperforms all the comparisons. Such results demonstrate that the RoseNets algorithm is effective and robust in real applications, since accuracy score and sequence score are common evaluation metrics in practice. In Figures 3(e) and 3(f), the only difference is that the OMegA algorithm performs better when $\tau$ is small or $k$ is large. The OMegA algorithm aims to find a globally optimal solution: it topologically re-sorts all the candidates after each element selection. It can return a solution with better utility function value when $k$ is large and $\tau$ is small, but runs much slower than RoseNets or Sequence. It also shows poor performance in accuracy and sequence score. Thus the RoseNets algorithm is more effective and robust in real applications.
In addition, we also show the case of RoseNets and Sequence with $\tau=0$. On utility function value, Sequence($\tau=0$) is better than RoseNets($\tau=0$). However, on accuracy score and sequence score, RoseNets($\tau=0$) is very close to Sequence($\tau=0$), and sometimes shows better performance. The former result is due to the effectiveness of the greedy framework: the RoseNets algorithm invokes Sequence for two independent trials, which intuitively cannot achieve performance comparable to a single Sequence execution, due to the Net-submodularity. But the latter result shows that directly implementing greedy selection is not always the best choice. This is due to the intrinsic property of the greedy algorithm: though $1-1/e$ is almost the best approximation ratio, some heuristic algorithms may achieve better performance in specific cases. If we invoke more trials of the Sequence algorithm, better robustness and a better approximation ratio might be achieved, but the utility value would become lower, because more independent greedy trials give a high probability of triggering the diminishing-return phenomenon. In real applications, this is a trade-off in designing robust algorithms that requires a balance between robustness and maximization of the utility value.
\begin{figure}[t]
\centering
\subfigure[\small{$\tau=10$ and $k=[11,20]$}]{
\includegraphics[height=1.24in]{rec-acc1}}
\hspace{-0.5em}
\subfigure[\small{$k=15$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{rec-acc1-t}}
\hspace{-0.5em}
\subfigure[\small{$\tau=10$ and $k=[11,20]$}]{
\includegraphics[height=1.24in]{rec-seq1}}
\hspace{-0.5em}
\subfigure[\small{$k=15$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{rec-seq1-t}}
\hspace{-0.5em}
\subfigure[\small{$\tau=10$ and $k=[11,20]$}]{
\includegraphics[height=1.24in]{rec-fun1}}
\hspace{-0.5em}
\subfigure[\small{$k=15$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{rec-fun1-t}}
\caption{Recommendation Application}
\vspace{-1.5em}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[\small{$\tau=5$ and $k=[6,15]$}]{
\includegraphics[height=1.24in]{wik-acc}}
\hspace{-0.5em}
\subfigure[\small{$k=10$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{wik-acc-t}}
\hspace{-0.5em}
\subfigure[\small{$\tau=5$ and $k=[6,15]$}]{
\includegraphics[height=1.24in]{wik-seq}}
\hspace{-0.5em}
\subfigure[\small{$k=10$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{wik-seq-t}}
\hspace{-0.5em}
\subfigure[\small{$\tau=5$ and $k=[6,15]$}]{
\includegraphics[height=1.24in]{wik-fun}}
\hspace{-0.5em}
\subfigure[\small{$k=10$ and $\tau=[0,10]$}]{
\includegraphics[height=1.24in]{wik-fun-t}}
\caption{Link Prediction Application}
\vspace{-1.5em}
\end{figure}
\subsection{Wikipedia Link Prediction}
Using the Wikispeedia dataset \cite{west2009wikispeedia}, we consider users who are surfing through Wikipedia towards some target article. Given a sequence of articles the user has previously visited, we want to guide her to the page she is trying to reach. Since different pages have different valid links, the order of pages we visit is critical to this task. Formally, given the first 4 pages each user visited, we want to predict which page she is trying to reach by making a series of suggestions for which link to follow. In this case, we build the graph $G = (V,E)$, where $V$ is the set of all pages and $E$ is the set of existing links between pages. Similarly to the recommendation case, the weight $w_{ij}$ of an edge $(i,j)\in E$ is the probability of moving to page $j$ given that the user is currently at page $i$, i.e., the fraction of moves from $i$ to $j$ among all the visits of $i$. In this case, we build no self-loops as we assume we can only move using links; thus we cannot jump to random pages. We condense the dataset to include only articles and edges that appeared in a path, leaving us 4170 unique pages and 55147 edges. We run the algorithm on paths with length at least 29, which leaves us 271 paths. We conduct the link prediction task on these 271 paths and take the average value of each evaluation metric.
Figure 4 shows the performance of the comparison algorithms using the accuracy score, sequence score and utility function value respectively. In Figure 4, we find that after the removal of $\tau$ elements, RoseNets outperforms all the comparisons in all the cases. These results demonstrate that the RoseNets algorithm is effective and robust in real applications. The OMegA algorithm no longer shows performance comparable to RoseNets. This is because (1) the path to be predicted is very long, and (2) the overlap between different paths is not as large as in the Amazon recommendation experiments. Thus the global algorithm OMegA cannot exploit its advantages. We can still find in Figures 4(a), 4(c) and 4(e) that when $k$ becomes larger, the performance of the OMegA algorithm increases faster. This in turn demonstrates that the RoseNets algorithm is more general, effective and robust, as it does not assume any specific application scenario.
Another difference in the link prediction application is that RoseNets($\tau=0$) outperforms Sequence($\tau=0$) in almost all the cases on accuracy and sequence score. This again verifies that directly implementing greedy selection may sometimes be far from the optimal solution. As discussed at the end of the recommendation case, an interesting future direction is to explore the trade-off between robustness and the maximization of the utility value by invoking more independent greedy selection trials.
\section{Conclusion}
In this paper, we are the first to study the RoseNets problem, which combines robust optimization and sequence networked submodular maximization. We design a robust algorithm with an approximation ratio that is bounded by the number of removed elements and the network topology. Experiments on real applications of recommendation and link prediction demonstrate the effectiveness of the proposed algorithm. For future work, one direction is to develop robust algorithms that achieve a higher approximation ratio; an intuitive improvement is to invoke multiple trials of independent greedy selection. Another direction is to consider robustness against the removal of edges. This is non-trivial since different removal operations would change the network topology and affect the approximation ratio.
\section{Acknowledgments}
This work is supported by the National Natural Science Foundation of China (Grant No: 62102357, U1866602, 62106221, 62102382), and the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (Grant No: SN-ZJU-SIAS-001). It is also partially supported by the Zhejiang Provincial Key Research and Development Program of China (2021C01164).
\section{\label{sec:level1}Introduction}
This study investigates the localization that appears in the time-average of the absolute square of the time-evolving wave function in a desymmetrized stadium billiard after a Gaussian wavepacket is launched as the initial state.
In chaotic billiards like a stadium, the nodal patterns of stationary states with unique characteristics were discovered approximately three decades ago \cite{heller}. The patterns often have a unique enhancement along classical unstable periodic orbits. Such a phenomenon is called scar in quantum stationary states of a finite chaotic region. The eigen states with scars are called scar states. In contrast, in integrable billiards, the nodal patterns are essentially repetitive and synthetic.
The eigen states are a genuine quantum mechanical concept, whereas the periodic orbits are apparently classical mechanical objects. The scar state is an important discovery expressing a providential quantum-classical correspondence.
\begin{figure}
\includegraphics[width=13cm]{fig1.eps}
\caption{
(a)-(f) Illustrations of time-evolution of the Gaussian wavepacket in a desymmetrized stadium billiard. The $x$-coordinate is set along the bottom line of the stadium, and the $y$-coordinate is on the left straight boundary. Thus, the origin of the coordinate is located on the left bottom corner.
The wavepacket is launched from $\mathbf{r}_0=(1/2, 1/2)$ and begins travelling with the launching angle $\theta = -\pi/4$(a), which is defined in the counterclockwise direction from the direction of the $x$-axis,
$|\mathbf{p}_0| = 250$, and $\sigma_0=0.15$.
The orbit corresponds to periodic orbit No.7 in \cite{Bogomolny}.
After approximately $t=5 \times 10^3$, the wave function has almost diffused all over the stadium (f).
\label{fig.1}}
\end{figure}
A semiclassical approximation emerged as a powerful tool to clarify scar states in quantum systems along the classical unstable periodic orbits. This method has been used to construct theories of scars in coordinate space \cite{ Bogomolny} and phase space \cite{LesHouches, Berry-Wignerdist}; they successfully clarify the contribution of the periodic orbits to the scar states.
Both theories discuss the scars in their energy dependence because the scars were first discovered in the eigen states.
Bogomolny \cite{ Bogomolny} proposed a Green's function in terms of actions of classical periodic orbits to expose the periodic orbits as the origins of the scar in the coordinate space. Berry's theory \cite{ Berry-Wignerdist} utilizes the Wigner function under approximation in the phase space to clarify the cause of the scars.
In particular, Heller's lecture \cite{LesHouches} revealed the dynamical properties of scars, stating that the time-evolving wavepackets propagate near the periodic orbits.
Especially, the Heller group focused on homoclinic orbits and the return of the Gaussian wavepacket
to the neighborhood of its launching point in finite regions. In addition, they realized the importance of the autocorrelation function and its Fourier counterpart: the weighted spectrum \cite{heller2, TH, gaussian, KH-LinearNonlinear, KH-shorttime}.
Finally, the enhancement or localization in the time-average of the time-evolving wavepacket was discovered \cite{ourpaper, ourpaper2}.
In this study, it is called the ``dynamical scar". It has a distinctly close relation to scar states because it also emerges along a periodic orbit \cite{prep}. In this study, the scar states are shown to contribute heavily to the dynamical scars. The window function \cite{St} for the semiclassical approximation to describe the enhancement is derived from the weighted power spectrum.
However, it is known that reflection symmetries of a billiard's shape sometimes prevent the detection of its genuine chaotic characteristics. To remove the discrete symmetries, we studied the localization in a desymmetrized $2\times4$ stadium billiard \cite{Bunimovich}. The desymmetrization eliminates the two discrete mirror symmetries of the full stadium shape and makes the chaotic properties more evident. We use Table I in Ref.\cite{Bogomolny} to distinguish the periodic orbits; however, the table is for a full stadium, not for a desymmetrized stadium, so it should be used with caution.
If the periodic orbits pass over the horizontal and vertical axes of the symmetries, they may have to be folded at the crossing points for the desymmetrized stadium (cf. Fig.2,3).
\section{Gaussian wavepacket as a probe for dynamical properties}
The time-dependent Schr\"{o}dinger equation
\begin{equation}
i \hbar \frac{\partial \Psi}{\partial t}= - \frac{\hbar^2}{2m} \nabla ^2 \Psi + V \Psi
\end{equation}
governs dynamical properties of quantum systems. By adopting the quarter of the $2 \times 4$ stadium (FIG.1$-$3) as the 2D chaotic finite structure, the potential is simply set to $V=0$ inside the billiard and $V=\infty$ outside.
The Gaussian wavepacket is a conventional tool used for elucidating the time-evolution of quantum states \cite{TH, gaussian, KH-LinearNonlinear, KH-shorttime}.
It has been one of the fundamental quantum objects since the early stage of quantum mechanics.
Its initial form in a 2D region is
\begin{equation}
\Psi_0 (\mathbf{r}) = \frac{1}{ \sigma_0 \sqrt{\pi} }
\exp \left[ \frac{i}{\hbar} \mathbf{p}_0 ( \mathbf{r} - \mathbf{r}_0 ) - \frac{(\mathbf{r} - \mathbf{r}_0)^2}{2 {\sigma_0}^2} \right] ,
\end{equation}
where $\mathbf{r}=(x, y)$ is a point inside the nanostructure,
$\mathbf{r_0}=(x_0, y_0)$ is the initial location of the center of the wavepacket,
and ${\mathbf{p}_0}=(p_{0x},p_{0y})$ is the packet's initial momentum.
The standard deviation of the Gaussian packet $\sigma_0$ determines its size.
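As a sanity check, Eq. (2) can be sampled on a grid. The snippet below is our illustration, using the parameters of Fig. 1 in natural units $\hbar = m = 1$ and the $2\times1$ bounding box of the desymmetrized stadium; it confirms the packet is normalized and centered at $\mathbf{r}_0$.

```python
import numpy as np

hbar = m = 1.0
sigma0 = 0.15
r0 = np.array([0.5, 0.5])
p0 = (250 / np.sqrt(2)) * np.array([1.0, -1.0])   # |p0| = 250, launching angle -pi/4

x = np.linspace(0.0, 2.0, 1024)                    # bounding box of the quarter stadium
y = np.linspace(0.0, 1.0, 512)
X, Y = np.meshgrid(x, y, indexing="ij")

phase = (p0[0] * (X - r0[0]) + p0[1] * (Y - r0[1])) / hbar
psi0 = (1 / (sigma0 * np.sqrt(np.pi))) * np.exp(
    1j * phase - ((X - r0[0])**2 + (Y - r0[1])**2) / (2 * sigma0**2))

dA = (x[1] - x[0]) * (y[1] - y[0])
norm = np.sum(np.abs(psi0)**2) * dA               # ~ 1 (tails at the walls are negligible)
```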
\begin{figure}
\includegraphics[width=13cm]{fig2.eps}
\caption{
The time-average of the evolving wavepacket $A(\mathbf{r})$ in FIG.1.
The weak concentration appears along the broken yellow lines which represent the corresponding unstable periodic orbit.
It shows the shape of the desymmetrized orbit No.7 in \cite{Bogomolny}.
\label{fig.2}}
\end{figure}
If the Gaussian wavepacket is placed in a flat infinite space,
it travels as a bunch with the initial velocity of the center of the wavepacket $\mathbf{v}_0=\mathbf{p}_0 / m $.
The absolute value of the wavepacket shows that its shape is always Gaussian;
however, its size increases as $|\sigma(t)|=\sigma_0 \sqrt{1+ \left( \frac{\hbar t}{{m}{\sigma_0}^2} \right)^2 } $. If time is sufficiently long, $\sigma(t) \approx \frac{\hbar}{m \sigma_{0} } t $.
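The spreading law above can be reproduced numerically. A 1D analogue (our sketch, in natural units, with a modest momentum so the packet stays far from the grid edges) propagates the packet with the exact free-particle propagator in momentum space and compares the measured variance with $\sigma(t)^2/2$:

```python
import numpy as np

hbar = m = 1.0
sigma0, p0, t = 0.15, 5.0, 0.5
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = (np.pi**-0.25 / np.sqrt(sigma0)) * np.exp(1j * p0 * x - x**2 / (2 * sigma0**2))
# exact free evolution: multiply each momentum amplitude by exp(-i hbar k^2 t / 2m)
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

rho = np.abs(psi_t)**2
rho /= rho.sum() * dx                              # numerical probability density
mean = (x * rho).sum() * dx                        # should be p0 * t / m
var = ((x - mean)**2 * rho).sum() * dx             # should be sigma(t)^2 / 2

sigma_t = sigma0 * np.sqrt(1 + (hbar * t / (m * sigma0**2))**2)
```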
In this study, the wavepacket travels in the finite region,
and repeated reflections on the boundary eventually diffuse it all around the billiard (Fig.1; cf. \cite{ourpaper, ourpaper2}).
Initially it behaves like a bunch of viscous liquid.
The travelling wavepacket then gradually and progressively shows less specific texture.
Finally, in chaotic billiards, the snapshots of wave function ripple all over the billiard with irregular granular pattern.
On the contrary, the autocorrelation function has, surprisingly, already revealed long-time recurrences \cite{gaussian}. Moreover, this obliquely implies localization on the periodic orbits.
\section{Dynamical scar}
One of the most fundamental concepts in quantum physics is the use of the absolute square of the wave function to derive any physical properties; usually its time average is important to investigate a quantum effect.
Therefore, the time-average of the absolute square of the wave function is as follows:
\begin{equation}
A_{T}(\mathbf{r})= \frac{1}{T} \int_0^T |\Psi(\mathbf{r},t)|^2 dt .
\end{equation}
This is an appropriate tool to detect the localization in question. Here, $T$ expresses the time over which the time-average is measured.
For numerical calculation, it is discretized
as
\begin{equation}
A_{T}(\mathbf{r}_{i})=\frac{1}{N_t}\sum_{j=0}^{N_t} | \Psi( \mathbf{r}_{i}, t_{j} )|^{2} ,
\end{equation}
on the mesh points $\mathbf{r}_i=(x_i,y_i)$,
and the integration over time is the summation over
the discretized times $t_j = j \Delta t $, where $\Delta t$ is a time step. The summation must then be divided by the integer $N_t$ representing the total number of time steps, and clearly $T=N_t \Delta t$.
In this study, the natural units $\hbar=m=1$ are always applied for actual numerical evaluation. The time step is set at $\Delta t= 2.5 \times 10^{-2} $,
$T=9 \times 10^4$, or $N_t = 3.6 \times 10^6$,
and the lattice constant is 0.2.
A typical example of calculated $A_T$ is presented in Fig.2.
The time-average expresses clear localization along unstable periodic orbits despite no specific patterns in the snapshots of the wavepackets (e.g. Fig.1(f)).
It is apparently similar to the scars of a stationary wave function \cite{heller}. Furthermore, different launching conditions exhibit the same phenomena on various periodic orbits, as shown in Fig.3 (also see \cite{prep}). The enhancement appears clearly around the periodic orbit if the initial location of the center of the wavepacket and its velocity are on and along the orbit.
These are referred to as ``dynamical scars" to distinguish them from the scar states in stationary eigen states.
They are enhancements in the time-average of the time-dependent wave function.
Any state in a quantum system can be expanded in the eigenfunctions of the system as
\begin{equation}
\Psi(\mathbf{r},t)=\sum_n c_n \psi_n (\mathbf{r},t)=\sum_n c_n \phi_n (\mathbf{r}) \exp(- \frac{i}{\hbar} E_n t),
\end{equation}
where $\psi_n (\mathbf{r},t) = \phi_n (\mathbf{r}) \exp(- \frac{i}{\hbar} E_n t)$ is the $n$-th eigen state of the system with energy $E_n$.
The expansion coefficient $c_n$ must satisfy the condition $\sum_n |c_n|^2=1$.
In this study, the initial state is set $\Psi(\mathbf{r},t=0)=\Psi_0(\mathbf{r})$.
The expansion coefficient $c_n$ can be determined using the initial wavepacket $\Psi_0$ as
\begin{equation}
c_n = \int \phi_n^* \Psi_0 (\mathbf{r}) d \mathbf{r} .
\end{equation}
Moreover, the expansion can be used to elucidate the time-average of $|\Psi(\mathbf{r},t)|^2$ as
\begin{align}
A(\mathbf{r})=& \lim_{T \to \infty}A_{T}(\mathbf{r})
= \lim_{T \to \infty} \frac{1}{T} \int_0^T |\Psi(\mathbf{r},t)|^2 dt \nonumber \\
=& \lim_{T \to \infty} \frac{1}{T} \int_0 ^T \biggl[ \sum_n |c_n|^2 |\phi_n (\mathbf{r})|^2 + \sum_{n \neq m} c_m^* c_n \phi_m^* \phi_n \exp \Bigl\{ \frac{i}{\hbar} (E_m - E_n) t
\Bigr\} \biggr] dt \nonumber \\
=& \sum_n |c_n|^2 |\phi_n (\mathbf{r})|^2 ,
\end{align}
assuming $E_n \neq E_m$, if $n \neq m$.
In other words, by Eq.(7), if the coefficients $c_n$ of the scar eigen states on the same periodic orbit have dominantly larger values, ``dynamical scars" of the periodic orbits are observed in the time-average $A(\mathbf{r}$) \cite{ourpaper,ourpaper2,prep}.
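Eq. (7) can be illustrated with a toy system whose eigenfunctions are known in closed form. The sketch below (our illustration, not the stadium itself) uses a 1D infinite well, $\phi_n(x)=\sqrt{2/L}\sin(n\pi x/L)$ and $E_n=(n\pi)^2/2$ in natural units, and checks that the long-time average of $|\Psi|^2$ converges to $\sum_n |c_n|^2 |\phi_n|^2$:

```python
import numpy as np

# 1D infinite well on [0, L]: phi_n(x) = sqrt(2/L) sin(n pi x / L), E_n = (n pi)^2 / 2
L = 1.0
x = np.linspace(0.0, L, 201)
ns = np.array([1, 2, 3, 5])
c = np.array([0.6, 0.5, 0.4, 0.48])
c = c / np.linalg.norm(c)                        # enforce sum |c_n|^2 = 1
phi = np.sqrt(2 / L) * np.sin(np.outer(ns, x) * np.pi / L)
E = (ns * np.pi)**2 / 2                           # hbar = m = 1

A = np.zeros_like(x)
ts = np.arange(0.0, 2000.0, 0.1)
for t in ts:
    psi = (c[:, None] * phi * np.exp(-1j * E[:, None] * t)).sum(axis=0)
    A += np.abs(psi)**2
A /= len(ts)                                      # discretized Eq. (4)

A_pred = ((c[:, None]**2) * phi**2).sum(axis=0)   # Eq. (7): cross terms average out
```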
Therefore, at least theoretically, the time-average (7) can be written in energy integration as follows:
\begin{equation}
A(\mathbf{r})
= \int \sum_n |c_n|^2 |\phi_n (\mathbf{r})|^2 \delta (E-E_n) dE .
\end{equation}
However, the Dirac delta function must be treated carefully to allow comparison of numerical results and experimental data. The behavior of the delta functions is often smoothed by the limitation of the precision of numerical calculation and experimental measurement.
Eq.(8) can be considered as the summation of the related wave functions and the specific contribution weight that closely corresponds to the weighted spectrum because it includes the factor $|c_n|^2$. In numerical calculation, the weighted spectrum would be smoothed by the numerical discretization and the precision of the calculation. The Dirac delta function could be replaced with a smoothed function.
\begin{figure}
\includegraphics[width=13cm]{fig3.eps}
\caption{The time-averages of the evolving wavepackets $A(\mathbf{r})$ in the stadium billiard with different initial conditions. In both cases for the initial Gaussian wavepackets, $|\mathbf{p}_0| = 250$ and $\sigma_0=0.15$. (a) The wavepacket is launched from $(1/2, \sqrt{3}/6)$ and its launching angle is $\theta=\pi/6$. This shows the shape of the desymmetrized orbit No.12 in \cite{Bogomolny}. (b) The wavepacket launched from $(1/4, 1/2)$ has an angle defined by $\tan \theta=2$. This corresponds to orbit No.14 in \cite{Bogomolny}. The launching angles are defined as those in Fig.1. The broken yellow lines correspond to the classical unstable periodic orbits.
\label{fig.3}}
\end{figure}
\section{Window function}
The correlation function between the travelling wavepacket (5) and initial state (2) $C_0(t) = \int \Psi_0^* (\mathbf{r}) \Psi(\mathbf{r},t) d\mathbf{r}^2$ closely relates to the weighted spectrum. The autocorrelation function is expressed by the eigenfunction expansion (5) as
\begin{align}
C_0(t) &= \int \Psi_0^*(\mathbf{r}) \Psi(\mathbf{r},t) d^2 \mathbf{r} \nonumber \\
&= \int (\sum_m c_m^* \phi_m^*) ( \sum_n c_n \phi_n e^{-\frac{i}{\hbar}E_n t}) d^2 \mathbf{r} \nonumber \\
&= \sum_n |c_n|^2 e^{-\frac{i}{\hbar}E_n t} .
\end{align}
The weighted spectrum can be defined through its Fourier transform as
\begin{align}
\tilde{C}_0 (E) &=\frac{1}{2\pi} \int_{-\infty}^{\infty} C_0(t) e^{\frac{i}{\hbar}Et}dt \nonumber \\
&=\frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_n |c_n|^2 e^{\frac{i}{\hbar}(E-E_n)t}dt \nonumber \\
&= \hbar \sum_n |c_n|^2 \delta(E-E_n) \nonumber \\
&= \hbar P(E).
\end{align}
This represents the bare weighted power spectrum $P(E)= \sum_n |c_n|^2 \delta(E-E_n)$ multiplied by the Planck constant.
The smoothed version of the weighted spectrum and the Green's function introduce a neat form of the time-average. The smoothed weighted spectrum function (SWSF) can be written as
\begin{equation}
P_\epsilon(E)=\sum_{n} |c_n|^2 \delta_\epsilon (E-E_n) .
\end{equation}
In addition, we have $\lim _ {\epsilon \rightarrow 0} P_{\epsilon}(E) =P(E)$.
When $\epsilon$ becomes infinitesimal, $\lim_{\epsilon \rightarrow 0} \delta_\epsilon (x) = \delta(x)$.
Here, the Lorentzian form of the smoothed version delta function is introduced as
\begin{equation}
\delta_\epsilon (E-E_n) = \frac{\epsilon}{\pi} \frac{1}{(E-E_n)^2 + \epsilon^2} .
\end{equation}
Realistic systems have finite precision and always involve errors due to numerical implementation, limits of measurement, etc.
Owing to these inevitable limitations, the Dirac delta functions must be replaced by finite regular functions: the infinite, singular peaks of the delta function cannot be reproduced exactly in a computation, and the numerical peaks, however large, remain finite. Under such limitations, the width of the Lorentzian $\epsilon$ should be of the order of the mean level spacing $\overline{\Delta E}$, because a much finer energy difference would not be distinguishable. The replacement is therefore justified provided the width $\epsilon$ is equal to or larger than the order of the mean level spacing $\overline{\Delta E}$.
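The normalization and peak height of this Lorentzian can be checked numerically; the following sketch (pure Python, with an illustrative value of $\epsilon$) verifies that $\delta_\epsilon$ integrates to unity and peaks at $1/(\pi\epsilon)$:

```python
import math

def delta_eps(x, eps):
    """Lorentzian-smoothed delta function, Eq.(12)."""
    return (eps / math.pi) / (x * x + eps * eps)

def integrate(f, a, b, n=200000):
    # simple midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

eps = 0.05
# normalization: the Lorentzian integrates to (almost) 1 over a wide window
assert abs(integrate(lambda x: delta_eps(x, eps), -50, 50) - 1.0) < 1e-3
# peak height is 1/(pi*eps), growing as eps shrinks
assert abs(delta_eps(0.0, eps) - 1.0 / (math.pi * eps)) < 1e-12
```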
By using this expression, the smoothed Green's function
\begin{equation}
\rm{Im}G_\epsilon (\mathbf{r},\mathbf{r};E)=-\pi \sum_n |\phi_n (\mathbf{r})|^2 \delta_\epsilon (E-E_n)
\end{equation}
is also introduced.
Under such circumstances, a square of the delta functions can be treated using Berry's method \cite{Berry85}. The smoothed delta function (12) has a remarkable property:
\begin{equation}
\bar{\delta_\epsilon} (E-E_n) =2 \pi \epsilon [\delta_\epsilon (E-E_n)]^2 = \frac{2\epsilon^3}{\pi} \frac{1}{ \{(E-E_n)^2+{\epsilon}^2 \}^2},
\end{equation}
where $\bar{\delta_\epsilon} (E-E_n)$ is another smoothed version of the delta function, also satisfying $\lim_{\epsilon \rightarrow 0}\bar{\delta_\epsilon} (x) = \delta(x)$.
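The identity in Eq.(14) and the normalization of $\bar{\delta_\epsilon}$ can be verified numerically; this sketch checks the pointwise relation $2\pi\epsilon[\delta_\epsilon(x)]^2 = \bar{\delta_\epsilon}(x)$ and that $\bar{\delta_\epsilon}$ also integrates to unity:

```python
import math

def delta_eps(x, eps):
    """Lorentzian-smoothed delta function, Eq.(12)."""
    return (eps / math.pi) / (x * x + eps * eps)

def delta_bar(x, eps):
    """Eq.(14): the second smoothed delta, 2*pi*eps*[delta_eps]^2."""
    return (2.0 * eps ** 3 / math.pi) / (x * x + eps * eps) ** 2

eps = 0.1
# pointwise identity 2*pi*eps*[delta_eps(x)]^2 == delta_bar(x)
for x in [0.0, 0.03, 0.1, 0.5, 2.0]:
    assert abs(2.0 * math.pi * eps * delta_eps(x, eps) ** 2 - delta_bar(x, eps)) < 1e-12
# delta_bar integrates to 1, since int dx / (x^2 + eps^2)^2 = pi / (2 eps^3)
h, total = 1e-3, 0.0
x = -30.0
while x < 30.0:
    total += delta_bar(x + h / 2, eps) * h
    x += h
assert abs(total - 1.0) < 1e-3
```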
Next, we use an alternative practical version of the time-average
\begin{equation}
A_{\epsilon}(\mathbf{r})
= \int \sum_n |c_n|^2 |\phi_n (\mathbf{r})|^2 \bar{\delta_\epsilon} (E-E_n) dE .
\end{equation}
The original time-average $A$ is in the limit $A(\mathbf{r}) = \lim_{\epsilon \rightarrow 0} A_{\epsilon} (\mathbf{r})$.
By multiplying the two terms (11) and (13), we obtain
\begin{align}
P_\epsilon(E)& \rm{Im} \it{G}_\epsilon(\mathbf{r},\mathbf{r};E ) \nonumber \\
&=\sum_{n} |c_n|^2 \delta_\epsilon(E-E_n)
\bigl \{ -\pi \sum_{n'}|\phi_{n'}(\mathbf{r})|^2 \delta_\epsilon(E-E_{n'}) \bigr\} \nonumber \\
&=-\pi \sum_{n,n'} |c_n|^2 |\phi_{n'}(\mathbf{r})|^2 \delta_\epsilon(E-E_n) \delta_\epsilon(E-E_{n'}) \nonumber \\
&= -\pi \sum_{n} |c_n|^2 |\phi_{n}(\mathbf{r})|^2 \left[ {\delta_\epsilon}(E-E_n) \right]^2 \nonumber \\
&= \frac{-1}{2\epsilon} \sum_{n} |c_n|^2 |\phi_{n}(\mathbf{r})|^2 \bar{\delta_\epsilon}(E-E_n) .
\end{align}
Here, the cross terms with $n \neq n'$ are neglected, since the product of two Lorentzians centred at well-separated energies is negligible for $\epsilon \lesssim \overline{\Delta E}$, and Eq.(14) is applied in the last line.
Finally, Eq.(16) is used to provide the following expression for the time-average by using the Green's function
\begin{align}
A_{\epsilon}(\mathbf{r})
&=-2\epsilon\int_{-\infty}^{\infty} P_{\epsilon}(E) \rm{Im} \it{G}_{\epsilon} (\mathbf{r},\mathbf{r};E) dE \nonumber \\
&= \int_{-\infty}^{\infty} w(E) \rm{Im} \it{G}_{\epsilon} (\mathbf{r},\mathbf{r};E) dE,
\end{align}
where the window function $w(E)$ is introduced \cite{St} through SWSF (11) as
\begin{equation}
w(E)=-2 \epsilon P_{\epsilon}(E) = - \frac{2\epsilon}{\hbar} \tilde{C}_0(E).
\end{equation}
In other words, $w(E)$ is the weight for the integration over the energy region to evaluate the time-average $A_{\epsilon} (\mathbf{r})$ from the imaginary part of the smoothed Green's function (13). This window function is central to the quantum phenomenon studied here: it determines where the window should be transparent in the energy spectrum.
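The chain leading to Eq.(17) can be tested in a toy model. The sketch below uses a hypothetical two-level system with levels separated far beyond $\epsilon$ (all numbers are illustrative): the integral $-2\epsilon \int P_\epsilon(E)\,{\rm Im}G_\epsilon(\mathbf{r},\mathbf{r};E)\,dE$ indeed reproduces $\sum_n |c_n|^2 |\phi_n(\mathbf{r})|^2$, since the cross terms are negligible:

```python
import math

eps = 0.01
E_levels = [0.0, 10.0]          # well separated: |E_1 - E_0| >> eps
c2 = [0.4, 0.6]                 # |c_n|^2 (hypothetical)
phi2 = [1.2, 0.7]               # |phi_n(r)|^2 at a fixed point r (hypothetical)

def d_eps(x):
    return (eps / math.pi) / (x * x + eps * eps)

def P_eps(E):                   # smoothed weighted spectrum, Eq.(11)
    return sum(c * d_eps(E - En) for c, En in zip(c2, E_levels))

def ImG_eps(E):                 # smoothed Green's function, Eq.(13)
    return -math.pi * sum(p * d_eps(E - En) for p, En in zip(phi2, E_levels))

# A_eps = -2 eps * int P_eps(E) ImG_eps(E) dE, Eq.(17), by midpoint rule
h, A = 5e-4, 0.0
E = -5.0
while E < 15.0:
    A += -2.0 * eps * P_eps(E + h / 2) * ImG_eps(E + h / 2) * h
    E += h

exact = sum(c * p for c, p in zip(c2, phi2))   # sum |c_n|^2 |phi_n|^2 = 0.90
assert abs(A - exact) < 1e-2
```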
\begin{figure}
\includegraphics[width=13cm]{fig4.eps}
\caption{Window function (weighted spectrum) of the Gaussian wavepacket $w (E)$ (a dotted curve) for orbit No.7 in Fig.2 is compared with its expansion coefficients $|c_n|^2$ (bars). Here, the parameters of the initial Gaussian (Eq.(1)) are the same as those in Fig.1. Insets show the eigen states corresponding to the high peaks. The 4-digit numbers near the insets represent the counts from the ground state to the excited states in the insets.
``Dynamical scars" are often observed on the classical orbit No.7, as in Fig.2.
The plot of $|c_n|^2$ is extremely spiky; however,
it is a typical structure of the ``totalitarian" case in \cite{KH-LinearNonlinear}. (color online)
\label{fig.4}}
\end{figure}
\begin{figure}
\includegraphics[width=13cm]{fig5.eps}
\caption{Window function (weighted spectrum) of the Gaussian wavepacket $w (E)$ (a dotted curve) for orbit No.7 in Fig.2 and its averaged behavior of the expansion coefficients $|c_n|^2$ (a solid curve). Here, the averaging is performed in the energy range of $20\epsilon$. These two lines match very closely. (color online)
\label{fig.5}}
\end{figure}
In a two-dimensional flat and infinite space, the travelling wavepacket can be calculated exactly. The autocorrelation function is then well approximated as,
\begin{equation}
C_f(t) = \int \Psi_0^* (\mathbf{r} ) \Psi(\mathbf{r},t) d^2 \mathbf{r} \approx \exp \left( - \frac{v^2 t^2}{4 \sigma_0^2}-\frac{i}{\hbar} E_0 t \right) ,
\end{equation}
and its real modulus part
\begin{equation}
C_{R}(t) \approx \exp \left( - \frac{v^2 t^2}{4 \sigma_0^2} \right)
\end{equation}
satisfactorily represents the damping behavior of the correlation function $C_f (t)$.
In a chaotic finite region, the autocorrelation function should differ as
\begin{equation}
C(t) \approx \sum_n \exp {\left\{ - \frac{v^2 (t-n\tau)^2}{4 \sigma_0^2}-\frac{i}{\hbar} E_0 (t-n\tau) \right\}}
\exp \left( - \frac{\lambda}{2} |t| \right) ,
\end{equation}
where $\tau$ is the period of a particular periodic orbit,
along which the initial wavepacket is launched \cite{LesHouches}.
The summation implies that the finite region allows the wavepacket to repeatedly return to its original location.
Moreover, its chaoticity makes the wavepacket spread over the entire billiard exponentially, at a rate set by the Lyapunov exponent $\lambda$ of the periodic orbit.
It can be reformed using the Poisson sum rule as
\begin{equation}
C(t)=\sum_{n} \frac{1}{\hbar} \frac{\Delta}{\sqrt{\pi}} \frac{\sigma_0}{v}
\exp{\left\{ -\frac{\sigma_0^2}{v^2 \hbar^2} (E_n-E_0)^2 \right\}}
e^{- \frac{i}{\hbar}E_n t} e^{- \frac{\lambda}{2}|t|} ,
\end{equation}
where $\Delta=2 \pi \hbar / \tau (=\hbar \omega)$,
$E_n=\Delta n$, and $E_{0}=\frac{\mathbf{{p}_0}^2}{2m}$.
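The equality of the lattice-sum form (21) and the Poisson-resummed form (22) can be confirmed numerically. The following sketch uses hypothetical parameter values (with $\hbar = m = 1$); the two expressions agree at arbitrary sample times to numerical precision:

```python
import cmath
import math

hbar, m = 1.0, 1.0
sigma0, v, tau, lam = 0.5, 2.0, 1.0, 0.3   # hypothetical parameters
E0 = 0.5 * m * v * v                       # E_0 = p_0^2/(2m) with v = p_0/m
Delta = 2.0 * math.pi * hbar / tau         # Delta = 2 pi hbar / tau

def C_direct(t):
    """Lattice sum of returning Gaussians, Eq.(21)."""
    s = sum(cmath.exp(-v * v * (t - n * tau) ** 2 / (4.0 * sigma0 ** 2)
                      - 1j * E0 * (t - n * tau) / hbar)
            for n in range(-12, 13))
    return s * math.exp(-0.5 * lam * abs(t))

def C_resummed(t):
    """Poisson-resummed comb of Gaussian-weighted phases, Eq.(22)."""
    pref = (Delta / (hbar * math.sqrt(math.pi))) * (sigma0 / v)
    s = sum(pref * math.exp(-(sigma0 ** 2) * (Delta * n - E0) ** 2 / (v * hbar) ** 2)
            * cmath.exp(-1j * Delta * n * t / hbar)
            for n in range(-12, 13))
    return s * math.exp(-0.5 * lam * abs(t))

for t in [0.0, 0.17, 0.5, 1.3, 2.71]:
    assert abs(C_direct(t) - C_resummed(t)) < 1e-9
```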
The weighted power spectrum can then be derived through the Fourier transform of the autocorrelation function (22) as follows:
\begin{align}
\tilde{C}(E)&=\frac{1}{2 \pi} \int_{-\infty}^{\infty} C(t) e^{\frac{i}{\hbar}Et} dt \nonumber \\
&=\sum_{n=-\infty}^{\infty} \frac{1}{\hbar} \frac{\Delta}{\sqrt{\pi}} \frac{\sigma_0}{v}
\exp{\left\{ -\frac{\sigma_0^2}{v^2 \hbar^2} (E_n-E_0)^2 \right\}} \times \frac{1}{\pi} \frac{\lambda/2}{((E-E_n)/\hbar)^2+(\lambda/2)^2} .
\end{align}
This also includes the Lorentzian function of (12); however, the origin of its peaky behavior is completely different from $\epsilon$. The Lyapunov exponent $\lambda$ is purely due to the chaotic property of our system and does not appear in $C_f (t)$.
Therefore, replacing $\tilde{C}_0 (E)$ with $\tilde{C}(E)$, the relation between the window function and power spectrum should be modified to
\begin{equation}
w(E) \cong - \frac{2\epsilon}{\hbar} \tilde{C}(E) .
\end{equation}
Then, by Eq.(23), the window function is expected to be
\begin{align}
w (E) \approx -2\epsilon & \frac{1}{\sqrt{\pi}} \frac{\sigma_0}{ v }
\exp{\left\{ -\frac{\sigma_0^2}{v^2 \hbar^2} (E-E_0)^2 \right\}}
\times
\nonumber \\
&\times
\frac{ \Delta}{\pi} \sum_{n=-\infty}^{+\infty} \frac{ \lambda/2}{(E-E_p-n\Delta)^2+(\hbar \lambda/2)^2}.
\end{align}
Here, $E_p$ represents the energy at the highest maximum of the serial local peaks, whose width is set by the Lyapunov exponent $\lambda$ of the billiard, and $\Delta$ is the energy gap between the local peaks.
The Gaussian envelope, whose width $v \hbar / \sigma_0$ reflects the size of the initial Gaussian (1), is superposed on the narrow Lorentzian peaks of width set by $\lambda$.
Finally, $w(E)$ is well estimated through Eq.(25) by replacing the eigen energies $E_n$ in the exponential function of Eq.(23) with the ordinary energy variable $E$. In practice, this replacement changes $w(E)$ only slightly. The summation in Eq.(25) then simply adds up the Lorentzian ``delta" functions, which are smoothed by the Lyapunov exponent $\lambda$.
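The structure of Eq.(25) -- a Gaussian envelope modulating a comb of Lorentzians -- can be illustrated with a small numerical sketch (all parameter values below are hypothetical); the local maxima of $|w(E)|$ sit at the comb energies $E_p + n\Delta$:

```python
import math

hbar = 1.0
eps, sigma0, v = 0.01, 0.5, 2.0        # hypothetical parameters
E0 = 2.0                               # centre of the Gaussian envelope
Ep, Delta, lam = 2.0, 1.0, 0.05        # comb offset, spacing, Lyapunov exponent

def window(E):
    """Sketch of Eq.(25): Gaussian envelope times a comb of Lorentzians."""
    env = (-2.0 * eps / math.sqrt(math.pi)) * (sigma0 / v) \
          * math.exp(-(sigma0 ** 2) * (E - E0) ** 2 / (v * hbar) ** 2)
    comb = (Delta / math.pi) * sum(
        (lam / 2) / ((E - Ep - n * Delta) ** 2 + (hbar * lam / 2) ** 2)
        for n in range(-50, 51))
    return env * comb

# |w(E)| is locally maximal at the comb energies E_p + n*Delta
for n in (-1, 0, 1, 2):
    Epk = Ep + n * Delta
    assert abs(window(Epk)) > abs(window(Epk + 0.3 * Delta))
    assert abs(window(Epk)) > abs(window(Epk - 0.3 * Delta))
```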
\begin{figure}
\includegraphics[width=13cm]{fig6.eps}
\caption{
Window function (the weighted spectrum) of the Gaussian wavepacket $w(E)$ (a dotted curve) for orbit No.14 in Fig.3(b) is compared with its expansion coefficients $|c_n|^2$ (bars). Here, the parameters of the initial Gaussian (Eq.(1)) are the same as those in Fig.3(b). Insets show the eigen states corresponding to the high peaks. The 4-digit numbers near the insets represent the counts from the ground state to the excited states in the insets. ``Dynamical scars" are often present on classical orbit No.14.
The extremely spiky characteristic feature of this $|c_n|^2$ plot is the same as that of No.7 (Fig.4). (color online)
\label{fig.6}
}
\end{figure}
\begin{figure}
\includegraphics[width=13cm]{fig7.eps}
\caption{Window function (the weighted spectrum) of the Gaussian wavepacket $w (E)$ (a dotted curve) for orbit No.14 in Fig.3(b), and its averaged behavior of the expansion coefficients $|c_n|^2$ (a solid curve). Here, the averaging is performed in the energy range of $20\epsilon$. These two lines match very closely. (color online)
\label{fig.7}}
\end{figure}
In chaotic billiard systems, the actual weighted power spectrum $\tilde{C}(E)$, which is evaluated from numerically obtained eigen states, is known to have an extremely spiky and oscillatory behavior \cite{LesHouches, gaussian, KH-LinearNonlinear, KH-shorttime}.
The existence of the scar states in chaotic billiard systems means that a relatively small number of selected eigen states contribute dominantly to $A(\mathbf{r})$. The $|c_n|^2$ histograms clearly show this tendency. Figs.4 and 6 show the histograms for No.7 and No.14 respectively, where the numbering refers to a specific periodic orbit in the stadium, as listed in Table 1 of \cite{Bogomolny}.
In Fig.4, the red curve represents $w (E)$ for No.7, with $\lambda=0.418|\mathbf{p}_0|$. The constant 0.418 is the geometric Lyapunov exponent, evaluated from the monodromy matrix of the corresponding periodic orbit \cite{Bogomolny}. In addition, $\epsilon$ is set to the averaged energy level spacing $\overline{\Delta E} =0.0003412 \times 10^4$.
Other parameters related to the initial Gaussian are the same as those in Fig.1.
These curves are simply the linear-dynamical predictions of the window function \cite{LesHouches, gaussian, KH-LinearNonlinear, KH-shorttime}.
The local peaks of the actual weighted spectrum are located at almost equal energy intervals, that is, $\Delta=0.03193 \times 10^4$; this is very close to the theoretical estimation $\Delta_{th}=\frac{\hbar}{m}(\frac{2\pi}{L})|{\mathbf{p}_0|}=0.03253 \times 10^4$, where $L=4.8284$ is the length of the specific periodic orbit.
Through semiclassical approximation, the classical action on the classical periodic orbit is determined as $S_{r}(\xi, \xi; E_0)=\oint_r \mathbf{p} d\mathbf{r} = L \sqrt{2mE_0}$.
The action must increase by $2\pi \hbar$ when $\Delta_{th}$ is added to the energy $E_0$.
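This estimate is easy to check numerically. The sketch below (assuming $\hbar = m = 1$, as the quoted numbers suggest) reproduces the values of $\Delta_{th}$ cited in the text and confirms that the action $S = L\sqrt{2mE}$ grows by approximately $2\pi\hbar$ when $E_0$ is increased by $\Delta_{th}$:

```python
import math

hbar, m, p0 = 1.0, 1.0, 250.0      # values used in the text, assuming hbar = m = 1
E0 = p0 ** 2 / (2.0 * m)

def spacing(L):
    """Delta_th = (hbar/m)(2 pi/L)|p_0| for a periodic orbit of length L."""
    return (hbar / m) * (2.0 * math.pi / L) * p0

# reproduces the quoted predictions (in the same 10^4-scaled units)
assert abs(spacing(4.8284) - 0.03253e4) < 0.5    # orbit No.7
assert abs(spacing(6.47) - 0.02428e4) < 0.5      # orbit No.14

# the classical action S = L*sqrt(2 m E) indeed grows by ~2 pi hbar per step
L = 4.8284
S = lambda E: L * math.sqrt(2.0 * m * E)
assert abs(S(E0 + spacing(L)) - S(E0) - 2.0 * math.pi * hbar) < 0.05
```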
As mentioned above, $w(E)$ is less spiky than the actual $|c_n|^2$ histogram. This corresponds to the ``totalitarian" case in Ref.\cite{KH-LinearNonlinear}: in the weighted spectrum of a ``totalitarian" system, some particular states have dominant contributions, and scars can often be found in such states. Still, its smoothed behavior follows the estimated envelope function, the window $w(E)$. (In the opposite case, called ``egalitarian" in \cite{KH-LinearNonlinear}, the weighted spectrum essentially follows the window function itself.) This simultaneously allows the emergence of ``dynamical scars": similarly to the scar states, if only one primitive periodic orbit has a dominant contribution, the ``dynamical scars" become visible.
In actuality, the eigen states at the peaks often become the scar states of the corresponding periodic orbit (cf. Figs.4 and 6). Of course, the eigen states with larger $c_n$ also contribute to the ``dynamical scars". However, in some cases, the ``dynamical scars" are blurred by the superposition of other orbits on the eigen state.
The histogram of the $|c_n|^2$s is extremely spiky, although its smoothed version (Fig.5) can be elucidated
by averaging over an energy range that is sufficiently larger than the level spacing but much smaller than the relevant energy scale.
It agrees strikingly with the window function $w (E)$.
The same situation occurs for periodic orbit No.14 (Fig.3(b)) in Fig.6, and for orbit No.5, which has already been published in \cite{prep}.
In Fig.6, the red curve represents $w (E)$, with $\lambda=0.3684|\mathbf{p}_0|$.
The energy intervals of the local peaks, $\Delta=0.02340 \times 10^4$, are extremely close to the prediction $\Delta_{th}=\frac{\hbar}{m}(\frac{2\pi}{L}){|\mathbf{p}_0|}=0.02428 \times 10^4$ ($L=6.47$).
Moreover, other parameters related to the initial Gaussian are the same as those in Fig.3(b).
In addition, the processes in the smoothed histogram are the same. The smoothed histogram matches very closely with its window function $w (E)$ (Fig.7).
\begin{figure}
\includegraphics[width=13cm]{fig8.eps}
\caption{The time-average of the evolving wavepacket $A(\mathbf{r})$ on the bouncing ball mode of the stadium billiard. The initial Gaussian wavepacket is set with $|\mathbf{p}_0| = 250$ and $\sigma_0=0.15$. The wavepacket is launched from $(1/2, \sqrt{3}/4)$ and the launching angle is $\theta = \pi / 2$. The broken yellow line corresponds to the classical periodic orbit. It belongs to the one-parameter family of the bouncing ball mode, whose members bounce up and down indefinitely between the two parallel straight sections of the boundary, and the launching point is on this line.
\label{fig.8}}
\end{figure}
\begin{figure}
\includegraphics[width=13cm]{fig9.eps}
\caption{Expansion coefficients $|c_n|^2$ (upper graph) and window function (the weighted spectrum) of the Gaussian wavepacket $w (E)$ (lower graph) for the bouncing ball mode. These graphs are almost identical. The 4-digit numbers near the insets represent the counts from the ground state to the excited states. The parameters of the wavepacket are the same as those in Fig.8. (color online)
\label{fig.9}}
\end{figure}
Moreover, the bouncing ball mode produces a considerably unique result (Fig.8). This exceptional mode is the only nonchaotic periodic orbit in the stadium billiard. It has a zero Lyapunov exponent and no chaotic origin, because it bounces between the parallel walls of the billiard in terms of classical mechanics. However, the parameter $\lambda$ still cannot be set to zero or be infinitesimally small in our numerical calculation, because the Lorentzian approaches the Dirac delta function in such a limit, which cannot be represented exactly in a numerical calculation. The numerical results clarify that only the wave functions with scars on the bouncing ball mode significantly contribute to the ``dynamical scar''. Fig.9 compares the numerical histogram and the estimated weighted spectrum, which are in strikingly good agreement. The numerically calculated interval between the peaks is $\Delta=0.07524$, whereas its theoretical estimation is $\Delta_{th}=\frac{\hbar}{m}(\frac{2\pi}{L})|p_0|=0.07854$ ($L=2$). Note that the width of the sharp peaks $\lambda$ in the weighted spectrum is replaced by the averaged level spacing $\overline{\Delta E}$, instead of the theoretically exact value of the vanishing Lyapunov exponent $\lambda=0$.
It also implies that this system does not have much finer energy resolution than $\overline{\Delta E}$.
As mentioned earlier, given the good agreement between $w(E)$ and the averaged behavior of $|c_n|^2$, the semiclassical approximation can be expected to work satisfactorily in this regime. Moreover, it reminds us of the ``totalitarian" aspect of the system.
If we choose a sufficiently small window size to reasonably suppose that only one eigen state would be in the window simultaneously, it essentially resembles the result of Ref.\cite{Bogomolny} for the scar states. However, in this study the window size is much larger because the initial wavepacket must involve the contribution of eigen states in a broader energy range.
Thus, a scar is not directly observed in the snapshot of time-dependent wave functions (Fig.1(f)). The ``dynamical scar" is the superposition of many corresponding states in the energy window.
\begin{figure}
\includegraphics[width=13cm]{fig10.eps}
\caption{Comparison of the semiclassically approximated time-average of the evolving wavepacket (29) on periodic orbit No.7 (a dotted curve) and its numerically calculated localization (a solid curve). They are presented as functions of the distance $\xi$ from the point (0,1), measured along the broken yellow line in Fig.2. At the distance $\xi_C=\sqrt{2+\sqrt{2}}=1.8478...$, the semiclassical approximation diverges. At distances $0$, $\sqrt{2}=1.4142...$ and $1+\sqrt{2}=2.4142...$, the boundary walls are present. At the boundary, the wave function becomes zero and shows a peculiar rough wavy behavior. (color online)
\label{fig.10}
}
\end{figure}
\section{Semiclassical approximation}
Through semiclassical approximation \cite{Bogomolny},
the localization becomes the summation of two parts:
\begin{equation}
A(\mathbf{r}) \cong \langle \rho_0 (\mathbf{r}, E) \rangle
+ \int w(E)
\rm{Im}
\it{G}_{osc}(\mathbf{r},\mathbf{r};E)dE
=\langle \rho_{\rm{0}} (\mathbf{r}, E) \rangle+A_{osc}(\mathbf{r}) ,
\end{equation}
where
\begin{align}
G_{osc}&(\mathbf{r},\mathbf{r};E) \cong \frac{2}{(2 \pi)^{1/2} \hbar^{3/2}}
\times
\nonumber \\
&\times
\sum_{\gamma, n} \frac{D_{\gamma,n} (\xi)^{1/2}}{v}
\exp \left[ \frac{i}{\hbar} \left( S_{\gamma, n}(\xi, \xi; E) + \frac{W_{\gamma,n} (\xi) }{2}\eta^2 \right)
- i \frac{\pi \nu_{\gamma,n} }{2} -i \frac{3}{4}\pi
\right] .
\nonumber \\
\end{align}
The first term on the right-hand side of Eq.(26) is the smooth part $ \langle \rho_0 \rangle $, and the second is the oscillatory term $A_{osc}$.
Further, the angle brackets $\langle \cdots \rangle $ denote an average over the energy range that the window function $w(E)$ covers, and $ \rho_0 (\mathbf{r}, E) $ is the classical probability density of finding a particle with energy $E$ at point $\mathbf{r}$. Needless to say, $w(E)$ depends on the shape of the (initial) wavepacket.
The $\xi$ axis is set along the concerned periodic orbit, and the $\eta$ axis perpendicular to it at point $\xi$. The classical action
of the $n$-fold repeated orbit can be derived as $S_{\gamma,n}=nS_\gamma$ from the action of the primitive orbit $\gamma$: $S_\gamma$.
Then, $T_{\gamma,n} (\mathbf{r},E)=nT_\gamma$, where $T_\gamma$ is the period of the primitive orbit $\gamma$. The maximal number of conjugate points $\nu_{\gamma,n}=n\nu_\gamma$ can likewise be derived from the primitive $\nu_\gamma$.
In addition,
$W_{\gamma,n} (\xi)$, $D_{\gamma,n} (\xi)$
are versions for the $n$-fold periodic orbit and can be expressed by
$D_\gamma=-(\frac{\partial^2 S_\gamma}{\partial \eta' \partial \eta''})_{\eta'=\eta''=0}$
and $W_\gamma(\xi)=(\frac{\partial^2 S_\gamma}{\partial \eta'^2} + \frac{\partial^2 S_\gamma}{\partial \eta' \partial \eta''} + \frac{\partial^2 S_\gamma}{\partial \eta''^2})_{\eta'=\eta''=0}$ for the primitive orbit. They
can be derived from $D_\gamma$:
$D_{\gamma,n}(\xi)=D_\gamma \frac{\mu_1 - \mu_2}{\mu_1^n - \mu_2^n}$, $W_{\gamma,n}(\xi)=D_{\gamma,n}(\mu_1^n + \mu_2^n - 2)$.
Note that $\mu_1$, $\mu_2=\mu_1^{-1}$ are the eigenvalues of the monodromy matrix of the primitive orbit.
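These relations are straightforward to evaluate; the sketch below uses a hypothetical monodromy eigenvalue $\mu_1 = e^{1.2}$ and an illustrative primitive value $D_\gamma$, and confirms that $D_{\gamma,n}$ decays rapidly with the repetition number $n$:

```python
import math

# hypothetical unstable orbit: monodromy eigenvalues mu and 1/mu
mu = math.exp(1.2)
mu1, mu2 = mu, 1.0 / mu
D_gamma = 1.0                      # primitive-orbit value (illustrative)

def D_n(n):
    """D_{gamma,n} = D_gamma (mu1 - mu2) / (mu1^n - mu2^n)."""
    return D_gamma * (mu1 - mu2) / (mu1 ** n - mu2 ** n)

def W_n(n):
    """W_{gamma,n} = D_{gamma,n} (mu1^n + mu2^n - 2)."""
    return D_n(n) * (mu1 ** n + mu2 ** n - 2.0)

assert abs(D_n(1) - D_gamma) < 1e-12             # n = 1 recovers the primitive orbit
assert all(abs(D_n(n + 1)) < abs(D_n(n)) for n in range(1, 8))   # rapid decay in n
assert W_n(1) == D_n(1) * (mu1 + mu2 - 2.0)
```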
It is assumed that only one specific periodic orbit $\gamma=C$ gives the dominant contribution. Moreover, the primitive orbit term $n=1$ is expected to be dominant, because the factor $D_{C,n}$ vanishes rapidly with increasing $n$. Therefore, the oscillatory part of $A$ can be approximated on the classical orbit $C$ ($\eta=0$) as
\begin{align}
A_{osc}&(\xi) \cong
\frac{2\sqrt{2}}{\pi \hbar^{7/2}} \frac{\sigma_0}{v} \epsilon \Delta
\sum_{j} \exp \left[- \frac{{\sigma_0}^2}{\hbar^2 v^2}(E_j - E_0)^2 \right] \frac{|D_{C}|^{1/2}}{v} \times
\nonumber \\
&\times
\int \frac{1}{\pi}
\frac{ \lambda /2}{ \{(E-E_j)/\hbar\}^2 + (\lambda/2)^2}
{\rm Im}
\{ i \exp [\frac{i}{\hbar} S_{C} -i \frac{\pi}{2}\nu_C + i \pi N_C - i \frac{1}{4} \pi ] \}
dE .
\nonumber \\
\end{align}
Note that $N_C$ is the number of hits on the boundary, when a particle travels around the closed orbit $C$, and $D_C = D_{C,1}$. Under the semiclassical approximation, at $E=E_j$,
it can be well assumed that
$\exp \{ \frac{i}{\hbar} S_{C}(\xi,\xi;E_j)-i \frac{\pi}{2} \nu_C + i \pi N_C -i \frac{1}{4} \pi \} =1 $.
Finally, the integration in Eq.(28) can be performed using the complex integral, and the localization is evaluated as
\begin{align}
A(\xi)&=\langle \rho \rangle + A_{osc}(\xi)
\nonumber \\
&= \frac{1}{Area} + \frac{2\sqrt{2}}{\pi \hbar^{5/2}} \frac{\sigma_0}{v} \epsilon \Delta \frac{|D_{C}(\xi)|^{1/2}}{v}
\sum_j \exp \left[ -\frac{ {\sigma_0}^2 }{\hbar^2 v^2} (E_j - E_0)^2 \right] e^{-T_j \frac{\lambda}{2} } ,
\nonumber \\
\end{align}
where $S_{C}(\xi, \xi;E_j + i \frac{\hbar \lambda}{2} ) \cong S_{C}(\xi, \xi;E_j)+i T_j \frac{\lambda \hbar}{2} $ is used, $T_j$ is the period of the periodic orbit at $E=E_j$, and $Area$ is just the area of the billiard.
Finally, the averaged level spacing $\overline{\Delta E}$, which is the criterion of the energy resolution limit of the billiard system, is adopted for $\epsilon$.
\begin{figure}
\includegraphics[width=13cm]{fig11.eps}
\caption{Comparison of the semiclassically approximated time-average of the evolving wavepacket (29) on periodic orbit No.14 (a dotted curve) and its numerically calculated localization (a solid curve). They are presented as functions of the distance $\xi$ from the point (0,0), measured along the broken yellow line in Fig.3(b). At the distance $\xi_C=\sqrt{5+\sqrt{5}}=2.6900...$, the semiclassical approximation diverges. At distances $0$, $\frac{\sqrt{5}}{2}=1.1180...$, $\sqrt{5}=2.2361...$, and $1+\sqrt{5}=3.2361...$, the boundary walls exist. At the boundary, the wave function becomes zero and shows a peculiar rough wavy behavior. (color online)
\label{fig.11}}
\end{figure}
The evaluated localization $A$ on the periodic orbit No.7 (Fig.2) is presented in Fig.10. Assuming the wave function is completely flat over the finite region, $\langle \rho \rangle $ must be the inverse of the area of the billiard: $ \lbrace (4+\pi)/4 \rbrace ^{-1} =0.5601...$ throughout the stadium.
Owing to the scar or the contribution of the classical periodic orbit, the concentration enhances the absolute square of the wave function by at least $10\%$ on the periodic orbit above the average behavior $\langle \rho \rangle $, except in the neighborhood of the singularity around the conjugate point.
Of course, it cannot recreate the wavy behavior, which is especially sharp close to the boundary, because Eq.(29) does not incorporate the exact effect of the boundary condition; the approximation is determined essentially by the lengths of the orbits and the energy. In fact, the wave must vanish at the boundary according to the Dirichlet condition, and the phases of all dominant eigenfunctions become almost coherent near the boundary.
Fig.11 shows the semiclassical approximation of No.14. In addition, it presents essentially the same results as No.7 (Fig.10).
The singularity at the conjugate point is inevitable for the semiclassical approximation; the approximation simply breaks down in the neighborhood of this point. The semiclassically approximated wave function diverges there because of the factor $D_C = 1/m_{12}$, where $m_{12}$ is the off-diagonal element of the monodromy matrix \cite{Bogomolny} for the unstable classical periodic orbit $C$. In our study, $m_{12}=-2\{(2+\sqrt{2})-\xi^2\}$ for No.7 (Fig.10), and $m_{12}=-2\{(5+\sqrt{5})-\xi^2\}$ for No.14 (Fig.11). In both cases $\xi$ is measured from the left wall and along the orbits. The monodromy matrix element $m_{12}$ becomes zero and $D_C$ diverges at the conjugate point $\xi_{C}$, where the classical orbits near the classical periodic orbit converge. The conjugate points are located at $\xi_{C}=\sqrt{2+\sqrt{2}}$ for No.7, measured from the point $(0,1)$, and $\sqrt{5+\sqrt{5}}$ for No.14, measured from $(0,0)$. In reality, a relatively strong enhancement exists around this point.
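The locations of the conjugate points follow from the zeros of $m_{12}(\xi)$; a short numerical check (using the expressions quoted above) confirms the values of $\xi_C$ given in the figure captions:

```python
import math

# off-diagonal monodromy element along the orbit, from the text
def m12_no7(xi):
    return -2.0 * ((2.0 + math.sqrt(2.0)) - xi ** 2)

def m12_no14(xi):
    return -2.0 * ((5.0 + math.sqrt(5.0)) - xi ** 2)

# the conjugate point xi_C is where m12 vanishes (D_C = 1/m12 diverges)
xi_c7 = math.sqrt(2.0 + math.sqrt(2.0))
xi_c14 = math.sqrt(5.0 + math.sqrt(5.0))

assert abs(m12_no7(xi_c7)) < 1e-12
assert abs(m12_no14(xi_c14)) < 1e-12
assert abs(xi_c7 - 1.8478) < 1e-4      # as in the Fig.10 caption
assert abs(xi_c14 - 2.6900) < 1e-4     # as in the Fig.11 caption
```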
Apart from these properties, the semiclassical approximation works well, and Eq.(29) still matches remarkably with the numerically evaluated time-averages on the orbits.
\section{Conclusion}
The quantum phenomenon of the ``dynamical scar" is analyzed from the viewpoint of the eigen state expansion of the incident wavepacket and the semiclassical approximation. By launching a Gaussian wavepacket along a classical unstable periodic orbit, its weighted power spectrum $\tilde{C}(E)$ matches well with the averaged histogram of the expansion coefficients $|c_n|^2$s.
By utilizing $\tilde{C}(E)$ as the energy window function for the semiclassical approximation, the ``dynamical scars" can be evaluated. The periodic orbit critically contributes to the approximation. However, it has nonphysical singularities close to the conjugate points on the orbit.
The window function $w(E)$, which is manipulated from $\tilde{C}(E)$, plays a crucial role for the approximation.
By setting the window size small so that only one eigen state can exist inside the window energy range, our discussion becomes the same as the scar state theory of Bogomolny \cite{Bogomolny}. In this study, the window size was sufficiently large to include more than several scarred eigen states, making the ``dynamical scar" clearly visible. Simultaneously, this may be why we cannot observe scars in the snapshots of traveling wave functions after they diffuse throughout the billiard (Fig.1(f)). The ``dynamical scar" is the interplay of many related scarred states inside the range of the energy window.
Massive type IIA supergravity \cite{Romans:1985tz} admits a consistent truncation on the six-sphere to maximal supergravity in four dimensions with gauge group $\textrm{ISO}(7) = \textrm{SO}(7) \ltimes \mathbb{R}^7$ \cite{Guarino:2015jca,Guarino:2015vca}. The gauging is dyonic, in the sense of \cite{Dall'Agata:2012bb,Dall'Agata:2014ita} (see also \cite{Inverso:2015viq}). By virtue of the consistency of the truncation, all solutions of the four-dimensional theory uplift on $S^6$ to solutions of massive type IIA supergravity. In particular, the critical points (which can only be AdS) of the four-dimensional scalar potential give rise to supersymmetric and non-supersymmetric ten-dimensional solutions of the form $\textrm{AdS}_4 \times S^6$. This product is generically warped and the metric on $S^6$ displays an isometry group $G \subset \textrm{SO}(7)$ related to the residual symmetry within ISO(7) supergravity of the critical point it uplifts from. Using this technique, new massive type IIA solutions have been found \cite{Guarino:2015jca,Varela:2015uca,Pang:2015vna} and previously known ones \cite{Romans:1985tz,Behrndt:2004km,Lust:2008zd} have been recovered. Other supersymmetric AdS$_4$ solutions of massive type IIA supergravity have been recently found using other methods in \cite{Apruzzi:2015wna,Rota:2015aoa}. Previous constructions of supersymmetric AdS$_4$ solutions in massive type IIA supergravity include \cite{Lust:2004ig,Grana:2006kf,Tomasiello:2007eq,Koerber:2008rx,Petrini:2009ur,Lust:2009mb}.
In this paper, we investigate the ten-dimensional uplift of the ${\cal N}=3$ SO(4)--invariant critical point of dyonic ISO(7) supergravity. This $D=4$ critical point was found in \cite{Gallerati:2014xra}. A local form of its massive type IIA uplift has already appeared in \cite{Pang:2015vna}. Here, we provide an alternate local form of this ${\cal N}=3$ AdS$_4$ solution of massive IIA supergravity (equation (\ref{SO4SolN=3})) and discuss its geometric features. The internal space of the ${\cal N}=3$ solution is topologically $S^6$, endowed with a geometry that can be locally regarded as an $S^2$ bundle over a half-$S^4$. This is a generalisation of the twistor bundle over a quaternionic-K\"ahler manifold of positive curvature, see {\it e.g.}~\cite{Cvetic:2002kj, Tomasiello:2007eq} for reviews. The twistor fibration allows one to engineer nearly-K\"ahler or half-flat geometries on six-manifolds $M_6$ of topology different than $S^6$, see {\it e.g.}~\cite{Tomasiello:2007eq}. In turn, a well known class of ${\cal N}=1$ (direct) product solutions AdS$_4 \times M_6$ of massive IIA supergravity entails a nearly-K\"ahler \cite{Behrndt:2004km,Behrndt:2004mj} or a half-flat structure \cite{Lust:2004ig,Tomasiello:2007eq,Koerber:2008rx} on $M_6$.
It is suggestive that this ${\cal N}=3$ solution formally corresponds to an elaboration of these ${\cal N}=1$ constructions. This is reminiscent of the situation for a well-known class of $D=11$ direct product solutions involving AdS$_4$ and a tri-Sasaki seven-manifold. Recall that the latter can be regarded as an $S^3$ bundle over a quaternionic-K\"ahler base, equipped with an Einstein metric on the total space. This class of solutions is ${\cal N}=3$, see {\it e.g.}~\cite{Acharya:1998db}. On each tri-Sasaki manifold, a second Einstein metric can be obtained by squashing the $S^3$ fibers by a certain constant amount. The resulting $D=11$ AdS$_4$ solution is ${\cal N}=1$, see \cite{Awada:1982pk,Acharya:1998db}. The analogy with these ${\cal N}=3$ and ${\cal N}=1$ solutions of $D=11$ supergravity should not be taken too far, though. The internal metric of the massive IIA ${\cal N}=3$ solution is certainly not Einstein, unlike the ${\cal N}=1$ nearly-K\"ahler solutions of \cite{Behrndt:2004km}. In the IIA ${\cal N}=3$ solution, the $S^2$ fibers are squashed, not by a constant, but by a warping function of the $S^4$ hemisphere base. The connection does not have definite duality properties, unlike in the usual twistor fibration. Finally, the ${\cal N}=3$ solution involves a warped, rather than direct, product of AdS$_4$ and the internal topological $S^6$. Like in the $D=11$ tri-Sasaki case, though, the SO(3) R-symmetry acts on the fibers of the ${\cal N}=3$ massive IIA solution.
The type IIA ${\cal N}=3$ solution displays a local SO(4) symmetry, inherited from that preserved by the ${\cal N}=3$ critical point of the $D=4$ supergravity. More generally, we construct in section \ref{sec:SO(4)-sector} the restricted, in the sense of \cite{Guarino:2015qaa}, duality hierarchy \cite{deWit:2008ta,Bergshoeff:2009ph} of $D=4$ ISO(7) supergravity that is invariant under this SO(4). This result is particularly useful, as it allows us to consistently embed the entire, dynamical SO(4)--invariant sector of the $D=4$ ${\cal N}=8$ supergravity into massive type IIA. The explicit consistent uplift formulae are presented in section \ref{sec:SO4sectorinIIA}. These formulae give the ten-dimensional uplift of any SO(4)--invariant solution of the ISO(7) supergravity, including solutions with running scalars. The local and global features of this consistent embedding formulae are discussed at length, and generalisations are given. Section \ref{sec:urthertruncs} discusses further truncations. The truncation to the dynamical G$_2$--invariant sector of \cite{Guarino:2015vca} is recovered, and an example that illustrates the usefulness of the duality hierarchy approach is worked out. In section \ref{sec:AdSsolutions}, we turn our attention to the massive IIA uplift of solutions of the $D=4$ supergravity, focusing on vacuum solutions. In particular, a new local form of the ${\cal N}=3$ AdS$_4$ solution in massive IIA is provided. Finally, in section \ref{sec:N=3susy}, the solution is demonstrated to indeed be ${\cal N}=3$ by explicitly building the triplet of Killing spinors that it preserves.
\section{A $D=4$, SO(4)--invariant duality hierarchy} \label{sec:SO(4)-sector}
We are interested in the sector of $D=4$ ${\cal N}=8$ dyonically-gauged ISO(7) supergravity \cite{Guarino:2015qaa} that retains the fields that are invariant under the SO(4) subgroup of ISO(7) defined by the embedding \cite{Gallerati:2014xra}
\begin{equation}
\label{embedding_SO4}
\textrm{SO}(7) \, \supset \,
%
\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime \, \supset \,
%
\textrm{SO}(3)_{\textrm{d}} \times \textrm{SO}(3)_{\textrm{R}} \, \equiv \,
%
\textrm{SO}(4) \ ,
\end{equation}
with $\textrm{SO}(4)^\prime \equiv \textrm{SO}(3)_{\textrm{L}} \times \textrm{SO}(3)_{\textrm{R}}$ and $ \textrm{SO}(3)_{\textrm{d}}$ the diagonal subgroup of $\textrm{SO}(3)^\prime \times \textrm{SO}(3)_{\textrm{L}}$. Equivalently, this SO(4) is also the maximal subgroup of the G$_2$ contained in SO(7),
\begin{equation}
\label{embedding_SO4_2}
\textrm{SO}(7) \, \supset \,
\textrm{G}_{2} \, \supset \,
%
%
\textrm{SO}(4) \ .
\end{equation}
The Lagrangian corresponding to this sector of the ${\cal N}=8$ ISO(7) supergravity was given in \cite{Guarino:2015qaa}, and the vacuum structure was studied in detail there. Here, we complete the analysis of the SO(4)--invariant sector by determining the restricted, in the sense of \cite{Guarino:2015qaa}, duality hierarchy \cite{deWit:2008ta,Bergshoeff:2009ph} in this sector.
The SO(4)--invariant sector of ${\cal N}=8$ ISO(7) supergravity corresponds to an ${\cal N}=1$ supergravity coupled to two chiral multiplets that parametrise a K\"ahler submanifold
\begin{eqnarray}
\label{ScalManN=1}
\frac{\textrm{SU}(1,1)}{\textrm{U}(1)} \times \frac{\textrm{SU}(1,1)}{\textrm{U}(1)} \,
\end{eqnarray}
of E$_{7(7)}/$SU(8). The $\textrm{SU}(1,1)^2$ in the numerator is the commutant of the SO(4) in (\ref{embedding_SO4}) or (\ref{embedding_SO4_2}) inside E$_{7(7)}$. According to table 2 of \cite{Guarino:2015qaa}, the SO(4)-singlets of the restricted, SL(7)--covariant tensor hierarchy considered therein give rise to one two-form and two three-form potentials in this sector. To summarise and fix the notation, the SO(4)--invariant, restricted duality hierarchy of $D=4$ ${\cal N}=8$ supergravity contains the following real fields:
\begin{eqnarray} \label{fieldContentHierarchy}
\textrm{1 metric} & : & \quad ds_4^2 \; , \nonumber \\
\textrm{4 scalars} & : & \quad \varphi \; , \; \chi \; , \; \phi \; , \; \rho \; , \nonumber \\
\textrm{1 two-form} & : & \quad B \; ,
\nonumber \\
\textrm{2 three-forms} & : & \quad C^1 \; , \; C^2 \; .
\end{eqnarray}
The embedding of the scalars into the ${\cal N}=8$ E$_{7(7)}/$SU(8) manifold was discussed at length in \cite{Guarino:2015qaa}. In turn, the two- and three-form potentials in (\ref{fieldContentHierarchy}) are embedded into the SL(7)--covariant two-, ${\cal B}_I{}^J$, and three-forms, ${\cal C}^{IJ}$, defined in \cite{Guarino:2015qaa} via
\begin{eqnarray} \label{embeddingTH}
{\cal B}_i{}^j = \tfrac47 \, B \, \delta^j_i \; , \qquad
{\cal B}_{\hat i}{}^{\hat j} = -\tfrac37 \, B \, \delta^{\hat j}_{\hat i} \; , \qquad
{\cal C}^{ij} = C^1 \, \delta^{ij} \; , \qquad
{\cal C}^{\hat{i}\hat{j}} = C^2 \, \delta^{\hat{i}\hat{j}} \; ,
\end{eqnarray}
and ${\cal B}_i{}^{\hat{j}} = {\cal B}_{\hat{j}}{}^i = {\cal C}^{i {\hat{j}}} =0$. Here, we have split the SL(7) indices $I, J = 1 , \ldots ,7$ as $I = (i , {\hat{i}})$, $i = 1,2,3$, ${\hat{i}}=0,1,2,3$, as in appendix \ref{subset:S6Geom}.
In a conventional formulation, only the metric and the scalars in (\ref{fieldContentHierarchy}) enter the $D=4$ Lagrangian. This reads \cite{Guarino:2015qaa}
\begin{equation}
\label{L_SO4}
\mathcal{L} = (R - V) \, \textrm{vol}_4 + \tfrac{6}{2} \left[ d\varphi \wedge * d\varphi + e^{2 \varphi} \, d\chi \wedge * d\chi \right] + \tfrac{1}{2} \left[ d\phi \wedge * d\phi + e^{2 \phi} \, d\rho \wedge * d\rho \right] \ ,
\end{equation}
\begin{center}
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{cc|cccc|cc}
\noalign{\hrule height 1pt}
$\mathcal{N}$ & $G$ & $c^{-1/3} \, \chi$ & $c^{-1/3} \, e^{-\varphi}$ & $c^{-1/3} \, \rho$ & $c^{-1/3} \, e^{-\phi}$ & $g^{-2} \, c^{1/3} \, V$ & ref. \\
\noalign{\hrule height 1pt}
$\mathcal{N}=3$ & $\textrm{SO}(4)$ & $\frac{1}{2^{4/3}}$ & $\frac{3^{1/2}}{2^{4/3}} $ & $-\frac{1}{2^{1/3}}$ & $\frac{3^{1/2}}{2^{1/3}}$ & $-\frac{2^{16/3}}{3^{1/2}} $ & \cite{Gallerati:2014xra} \\[5pt]
$\mathcal{N}=1$ & $\textrm{G}_{2}$ & $-\frac{1}{2^{7/3}} $ & $\frac{5^{1/2} \, 3^{1/2}}{2^{7/3}}$ & $-\frac{1}{2^{7/3}} $ & $\frac{5^{1/2} \, 3^{1/2}}{2^{7/3}}$ & $- \frac{2^{28/3} \, 3^{1/2}}{5^{5/2}} $ & \cite{Borghese:2012qm} \\[5pt]
\hline
$\mathcal{N}=0$ & $\textrm{SO}(7)_+$ & $0$ & $\frac{1}{5^{1/6}}$ & $0$ & $\frac{1}{5^{1/6}}$ & $-3 \, 5^{7/6}$ & \cite{DallAgata:2011aa} \\[5pt]
$\mathcal{N}=0$ & $\textrm{G}_{2}$ & $\frac{1}{2^{4/3}}$ & $\frac{3^{1/2}}{2^{4/3}}$ & $\frac{1}{2^{4/3}}$ & $\frac{3^{1/2}}{2^{4/3}}$ & $-\frac{2^{16/3}}{3^{1/2}}$ & \cite{Borghese:2012qm} \\[5pt]
$\mathcal{N}=0$ & $\textrm{SO}(4)$ & $0.412$ & $0.651$ & $0.068$ & $1.147 $ & $-23.513$ & \cite{Guarino:2015qaa} \\
\noalign{\hrule height 1pt}
\end{tabular}
\end{center}
\caption{\small{Critical points of the scalar potential (\ref{VSO4}), namely, of ${\cal N}=8$ ISO(7)-dyonically-gauged supergravity with invariance equal to or larger than the SO(4) subgroup of SO(7) defined in (\ref{embedding_SO4}). For each point we give the residual supersymmetry ${\cal N}$ and bosonic symmetry $G$ within the full ${\cal N}=8$ theory, its location, the cosmological constant $V$ and the reference where it was first found. We have employed the shorthand $c \equiv m/g$. All of these data are reproduced from \cite{Guarino:2015qaa}.}\normalsize}
\label{Table:SO4Points}
\end{table}
\end{center}
\noindent with the scalar potential given by \cite{Guarino:2015qaa}
\begin{equation}
\label{VSO4}
\begin{array}{lll}
V &=& \frac{1}{2} \, g^{2} \, e^{-\phi } (1+e^{2 \varphi } \chi ^2)
\left[ -24 \, e^{\varphi +\phi } - 8 \, e^{2 \phi } + e^{2 \varphi } \, \Big(-3+ (8 \chi ^2-3 \rho ^2) \, e^{2 \phi } \Big) \right. \\[4mm]
&+& \left. e^{4 \varphi } \, \chi ^2 \, \Big( 9 + (3 \rho +4 \chi )^2 \, e^{2 \phi } \Big) \right]
- g m \, \chi ^2 \, (3 \rho + 4 \chi ) \, e^{6 \varphi +\phi }
+ \frac{1}{2} \, m^2 \, e^{6 \varphi +\phi } \ .
\end{array}
\end{equation}
The constants $g$ and $m$ are the electric and magnetic gauge couplings of the parent ${\cal N}=8$ ISO(7) supergravity.
When $g m \neq 0$, the scalar potential (\ref{VSO4}) contains AdS critical points that spontaneously break the ${\cal N}=8$ supersymmetry and ISO(7) gauge symmetry of the full $D=4$ supergravity to some supersymmetry ${\cal N}$ and residual symmetry $G$. See table \ref{Table:SO4Points} for a summary. The ${\cal N}=3$ SO(4)--invariant point manifests itself as non-supersymmetric within the subtruncation (\ref{L_SO4}), (\ref{VSO4}), see \cite{Guarino:2015qaa} for further details. All these critical points are inherent to the dyonic ISO(7) gauging and disappear in the purely electric, $g \neq 0$, $m=0$, or purely magnetic, $g = 0$, $m \neq 0$, limits. Accordingly, these four-dimensional solutions naturally uplift to massive type IIA supergravity on $S^6$ and do not have direct counterparts in either massless IIA on $S^6$ or massive IIA on $T^6$.
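As a numerical cross-check of table \ref{Table:SO4Points}, the tabulated locations of the ${\cal N}=3$ and ${\cal N}=1$ points can be substituted into the potential (\ref{VSO4}). The following short Python sketch is ours and not part of the derivation; we choose $g=m=1$ (so that $c=1$) and encode $e^{\varphi}$, $e^{\phi}$ as \texttt{ev}, \texttt{ep}:

```python
import math

g = m = 1.0  # our illustrative choice of couplings, so c = m/g = 1

def V(varphi, chi, phi, rho):
    """Scalar potential of the SO(4)-invariant sector, eq. (VSO4)."""
    ev, ep = math.exp(varphi), math.exp(phi)
    return (0.5 * g**2 / ep * (1 + ev**2 * chi**2)
            * (-24*ev*ep - 8*ep**2
               + ev**2 * (-3 + (8*chi**2 - 3*rho**2) * ep**2)
               + ev**4 * chi**2 * (9 + (3*rho + 4*chi)**2 * ep**2))
            - g*m * chi**2 * (3*rho + 4*chi) * ev**6 * ep
            + 0.5 * m**2 * ev**6 * ep)

# N=3 SO(4) point: chi = 2^{-4/3}, e^varphi = 2^{4/3}/sqrt(3), etc.
V3 = V(math.log(2**(4/3) / 3**0.5), 2**(-4/3),
       math.log(2**(1/3) / 3**0.5), -2**(-1/3))
assert abs(V3 + 2**(16/3) / 3**0.5) < 1e-9   # V = -2^{16/3}/3^{1/2}

# N=1 G2 point, which lies on the locus varphi = phi, chi = rho
V1 = V(math.log(2**(7/3) / 15**0.5), -2**(-7/3),
       math.log(2**(7/3) / 15**0.5), -2**(-7/3))
assert abs(V1 + 2**(28/3) * 3**0.5 / 5**2.5) < 1e-9
```

Both assertions pass, reproducing the values of $g^{-2} c^{1/3} V$ quoted in the table.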
The three- and four-form field strengths of the SO(4)--invariant two-form, $B$, and three-form potentials, $C^1$, $C^2$, are
\begin{eqnarray} \label{eq:FieldStrengths}
H_{\sst{(3)}} = d B -2g \, C^1 + 2g \, C^2 \; ,
\qquad H^1_{\sst{(4)}} = d C^1 \; ,
\qquad H^2_{\sst{(4)}} = d C^2 \; .
\end{eqnarray}
These expressions follow from the generic expressions given in (2.8), (2.9) of \cite{Guarino:2015qaa} evaluated on equation (\ref{embeddingTH}) above. These field strengths are subject to the Bianchi identities
\begin{eqnarray} \label{eq:D=4Bianchis}
dH_{\sst{(3)}} = -2g \, H^1_{\sst{(4)}} + 2g \, H^2_{\sst{(4)}} \; ,
\qquad d H^1_{\sst{(4)}} \equiv 0 \; ,
\qquad d H^2_{\sst{(4)}} \equiv 0 \; .
\end{eqnarray}
These in turn correspond to the SO(4)--invariant truncation of the generic, ${\cal N}=8$ SL(7)--covariant Bianchi identities given in (2.13) of \cite{Guarino:2015qaa}.
Not all of the fields in the SO(4)--invariant, restricted tensor hierarchy (\ref{fieldContentHierarchy}) carry independent degrees of freedom: the field strengths of the form potentials are subject to duality relations, see \cite{Bergshoeff:2009ph,Guarino:2015qaa} for a generic discussion. Particularising the SL(7)-covariant duality equations (2.17), (2.18) of \cite{Guarino:2015qaa} to the present case, we find the following duality relations obeyed by the SO(4)--invariant field strengths:
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{eq:FieldStrengthDuality}
H_{\sst{(3)}} &=& * \Big( d\phi - e^{2\phi} \rho \, d\rho - d\varphi + e^{2\varphi} \chi \, d\chi \Big) \; , \nonumber \\[10pt]
H^1_{\sst{(4)}} &=& \Big[ g \, e^{\varphi} \big(1+e^{2\varphi} \chi^2 \big) \Big( 4 - 4 \, e^{\phi+3 \varphi } \rho \chi^3 + e^{\varphi - \phi } \big(1- 3 e^{2\varphi} \chi^2 \big) \big(1+ e^{2\phi} \rho^2 \big) \Big) \nonumber \\[4pt]
&& \quad + m\, e^{\phi + 6\varphi } \rho \chi^2 \Big] \, \textrm{vol}_4 \; , \nonumber \\[10pt]
H^2_{\sst{(4)}} &=& \Big[ g \, \big(1+e^{2\varphi} \chi^2 \big) \Big( 3 e^{\varphi} - 3 \, e^{\phi+4 \varphi } \rho \chi^3 +6 e^{\phi} \big(1+e^{2\varphi} \chi^2 \big) -4 e^{\phi} \big(1+e^{2\varphi} \chi^2 \big)^2 \Big) \nonumber \\[4pt]
&& \quad + m\, e^{\phi + 6\varphi } \chi^3 \Big] \, \textrm{vol}_4 \; .
\end{eqnarray}
}The Bianchi identities (\ref{eq:D=4Bianchis}), combined with the duality relations (\ref{eq:FieldStrengthDuality}), reproduce the scalar equations of motion that follow from the Lagrangian (\ref{L_SO4}), (\ref{VSO4}).
Even though it does not play a critical role in the IIA uplift, it is nevertheless useful to consider the SL(7)--singlet four-form field strength whose duality relation was given in (2.25) of \cite{Guarino:2015qaa}. In the SO(4)--invariant case at hand, this duality relation reads
\begin{eqnarray} \label{eq:H4DualityAdditional}
\tilde{H}_{\sst{(4)}} &=& e^{\phi+6\varphi} \Big[ g \, \chi^2 \big( 3\rho +4 \chi \big) - m \, \Big] \, \textrm{vol}_4 \; .
\end{eqnarray}
Using (\ref{eq:FieldStrengthDuality}), (\ref{eq:H4DualityAdditional}), the scalar potential (\ref{VSO4}) can be checked to be related to the four-form field strengths $H^1_{\sst{(4)}}$, $H^2_{\sst{(4)}}$ and $\tilde{H}_{\sst{(4)}}$ through
\begin{equation}
\label{DualitySU3H4V}
g \, ( 3 H^{1}_{{\sst{(4)}}} + 4 H^{2}_{{\sst{(4)}}} ) + m \, \tilde{H}_{{\sst{(4)}}} = -2 \, V \, \textrm{vol}_{4} \ .
\end{equation}
This is the SO(4)--invariant counterpart of the full ${\cal N}=8$ expressions (2.28), (2.29) of \cite{Guarino:2015qaa}. At any of the critical points of the scalar potential (\ref{VSO4}), that were summarised in table \ref{Table:SO4Points} above, these four-form field strengths turn out to obey
\begin{equation}
\label{EOM_SU3}
g \, (3 H^{1}_{{\sst{(4)}}}|_0 + 4 \, H^{2}_{{\sst{(4)}}}|_0 ) +7 m \, \tilde{H}_{{\sst{(4)}}}|_0 = 0 \ , \qquad
H^1_{\sst{(4)}} |_0 = H^2_{\sst{(4)}} |_0 \ ,
\end{equation}
where $|_0$ denotes evaluation at a critical point.
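Both (\ref{DualitySU3H4V}) and the second condition in (\ref{EOM_SU3}) lend themselves to a quick numerical check. The sketch below is ours (the coupling values are illustrative): it encodes the $\textrm{vol}_4$ coefficients of $H^1_{\sst{(4)}}$, $H^2_{\sst{(4)}}$, $\tilde{H}_{\sst{(4)}}$ read off from (\ref{eq:FieldStrengthDuality}), (\ref{eq:H4DualityAdditional}), and tests them at generic scalars and at the ${\cal N}=3$ point of table \ref{Table:SO4Points}:

```python
import math, random

g, m = 1.3, 0.7  # illustrative coupling values; c = m/g

def V(varphi, chi, phi, rho):
    """Scalar potential, eq. (VSO4)."""
    ev, ep = math.exp(varphi), math.exp(phi)
    return (0.5*g**2/ep * (1 + ev**2*chi**2)
            * (-24*ev*ep - 8*ep**2
               + ev**2*(-3 + (8*chi**2 - 3*rho**2)*ep**2)
               + ev**4*chi**2*(9 + (3*rho + 4*chi)**2*ep**2))
            - g*m*chi**2*(3*rho + 4*chi)*ev**6*ep
            + 0.5*m**2*ev**6*ep)

def fr_coeffs(varphi, chi, phi, rho):
    """vol_4 coefficients of H1_(4), H2_(4), Htilde_(4) from the duality relations."""
    ev, ep = math.exp(varphi), math.exp(phi)
    X = 1 + ev**2*chi**2
    H1 = (g*ev*X*(4 - 4*ep*ev**3*rho*chi**3
                  + (ev/ep)*(1 - 3*ev**2*chi**2)*(1 + ep**2*rho**2))
          + m*ep*ev**6*rho*chi**2)
    H2 = (g*X*(3*ev - 3*ep*ev**4*rho*chi**3 + 6*ep*X - 4*ep*X**2)
          + m*ep*ev**6*chi**3)
    Ht = ep*ev**6*(g*chi**2*(3*rho + 4*chi) - m)
    return H1, H2, Ht

# g (3 H1 + 4 H2) + m Htilde = -2 V holds off-shell, at generic scalars
random.seed(1)
for _ in range(25):
    pt = [random.uniform(-0.5, 0.5) for _ in range(4)]
    H1, H2, Ht = fr_coeffs(*pt)
    assert abs(g*(3*H1 + 4*H2) + m*Ht + 2*V(*pt)) < 1e-8

# H1|_0 = H2|_0 at the N=3 point (table values, with the c-scaling restored)
s = (m/g)**(1/3)
H1c, H2c, _ = fr_coeffs(math.log(2**(4/3)/(3**0.5*s)), s*2**(-4/3),
                        math.log(2**(1/3)/(3**0.5*s)), -s*2**(-1/3))
assert abs(H1c - H2c) < 1e-9 * abs(H1c)
```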
We conclude by recovering two interesting sectors of $D=4$ ${\cal N}=8$ ISO(7) supergravity from the SO(4)--invariant sector. Firstly, according to the branching rule (\ref{embedding_SO4}), the $\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime$--invariant sector is contained in the SO(4) sector. This is recovered by setting the pseudoscalars to zero,
\begin{equation} \label{eq:SO4toSO3pSO4p}
\chi = \rho = 0 \; ,
\end{equation}
while retaining all other fields in the duality hierarchy (\ref{fieldContentHierarchy}). The $\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime$--invariant Lagrangian, tensor field strengths, Bianchi identities and duality relations follow by letting $\chi = \rho = 0 $ in the expressions above. Secondly, as discussed in \cite{Guarino:2015qaa}, the G$_2$--invariant sector can also be recovered from the SO(4)--sector. This is apparent from the branching (\ref{embedding_SO4_2}). The G$_2$--invariant sector is recovered from the SO(4)--invariant sector through the identifications
\begin{equation} \label{eq:SO4toG2}
\varphi = \phi \; , \qquad
\chi = \rho \; , \qquad
B=0 \; , \qquad
C^1 = C^2 \equiv C \; ,
\end{equation}
along with $H_{{\sst{(3)}}} = 0 $ and $H^{1}_{{\sst{(4)}}} = H^{2}_{{\sst{(4)}}} \equiv H_{{\sst{(4)}}} $. These identifications bring the Lagrangian and duality relations to their G$_2$--invariant counterparts, given in section 4 of \cite{Guarino:2015qaa}.
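In particular, on the locus (\ref{eq:SO4toG2}) the equality $H^{1}_{{\sst{(4)}}} = H^{2}_{{\sst{(4)}}}$ can be verified directly from the duality relations (\ref{eq:FieldStrengthDuality}). A small numerical sketch of ours (coupling values are illustrative):

```python
import math, random

g, m = 0.9, 1.4  # illustrative coupling values

def fr_coeffs(varphi, chi, phi, rho):
    """vol_4 coefficients of H1_(4) and H2_(4) from the duality relations."""
    ev, ep = math.exp(varphi), math.exp(phi)
    X = 1 + ev**2*chi**2
    H1 = (g*ev*X*(4 - 4*ep*ev**3*rho*chi**3
                  + (ev/ep)*(1 - 3*ev**2*chi**2)*(1 + ep**2*rho**2))
          + m*ep*ev**6*rho*chi**2)
    H2 = (g*X*(3*ev - 3*ep*ev**4*rho*chi**3 + 6*ep*X - 4*ep*X**2)
          + m*ep*ev**6*chi**3)
    return H1, H2

random.seed(4)
for _ in range(20):
    p, t = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    H1, H2 = fr_coeffs(p, t, p, t)   # G2 locus: varphi = phi, chi = rho
    assert abs(H1 - H2) < 1e-9 * max(1.0, abs(H1))
```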
\section{\mbox{Truncation from type IIA supergravity}} \label{sec:SO4sectorinIIA}
We are now ready to give the complete, non-linear embedding of the dynamical SO(4)--invariant sector of $D=4$ ${\cal N}=8$ ISO(7) supergravity into massive type IIA. As discussed in \cite{Guarino:2015vca}, the embedding of the full ${\cal N}=8$ theory is naturally expressed, at the level of the IIA metric, dilaton and form potentials, in terms of the restricted, SL(7)-duality hierarchy introduced in \cite{Guarino:2015qaa}. Accordingly, the complete IIA embedding of the SO(4)--invariant sector is naturally written in terms of the tensor hierarchy discussed in section \ref{sec:SO(4)-sector}.
\subsection{Consistent embedding formulae}
\label{subsec:SU3UpliftSubsec}
The SO(4)--invariant consistent embedding formulae can be obtained by particularising the ${\cal N}=8$ formulae given in (3.12), (3.13) of \cite{Guarino:2015vca} (see also \cite{Guarino:2015jca}) to the case at hand. It is a matter of simple algebra to find the embedding of the two- and three-form potentials of the $D=4$ tensor hierarchy (\ref{fieldContentHierarchy}) into their $D=10$ counterparts, using their ${\cal N}=8$ embedding (\ref{embeddingTH}). In contrast, as is usually the case, the embedding of the $D=4$ scalars entails a lengthy computation. Here, we give the final result, referring to appendix \ref{subset:S6Geom} for further details on the relevant geometric structures that arise in the calculation.
In order to express the result, it is convenient to introduce constrained coordinates $\tilde{\mu}^i$, $i=1,2,3$, on the two-sphere $S^2$,
\begin{eqnarray} \label{eq:S2}
\delta_{ij} \, \tilde{\mu}^i \tilde{\mu}^j = 1 \; ,
\end{eqnarray}
and right-invariant one-forms\footnote{The right-invariant one-forms $\rho^i$ on $S^3$ should not be confused with the $D=4$ pseudoscalar $\rho$.} $\rho^i$, $i=1,2,3$, on the three-sphere $S^3$. These are subject to the Maurer-Cartan equations
\begin{eqnarray} \label{eq:MC}
d\rho^i = - \tfrac12 \epsilon^i{}_{jk} \, \rho^j \wedge \rho^k \; .
\end{eqnarray}
It is also convenient to introduce the following combinations of $D=4$ scalars \cite{Guarino:2015qaa}
\begin{eqnarray}
X = 1+ e^{2\varphi} \chi^2 \; , \qquad Y = 1 + e^{2\phi} \rho^2 \; , \qquad Z = e^{2\varphi} \chi \big( e^\phi \rho - e^\varphi \chi \big) \; ,
\end{eqnarray}
and the following functions of $D=4$ scalars and an angle $\alpha$ on the internal $S^6$,
\begin{eqnarray} \label{deltas}
&& \Delta_1 = e^{\phi} \sin^2 \alpha + e^{\varphi } \cos^2 \alpha \; , \nonumber \\[5pt]
&& \Delta_2 = e^{\varphi} X \sin^2 \alpha + e^{2\varphi -\phi} Y \cos^2 \alpha \; , \nonumber \\[5pt]
&& \Delta_3 = X \Delta_1 \Delta_2 - Z^2 \sin^2 \alpha \, \cos^2 \alpha \; .
\end{eqnarray}
Using these definitions, the complete nonlinear embedding of the SO(4)--invariant field content (\ref{fieldContentHierarchy}) of ISO(7) supergravity into type IIA reads,
{\setlength\arraycolsep{1pt}
\begin{eqnarray} \label{KKSO4sectorinIIA}
d\hat{s}^2_{10} & = & e^{\frac18 \varphi} X^{1/4} \Delta_1^{1/8} \Delta_3^{1/4} \Big[ \, ds_4^2 \nonumber \\[6pt]
&& \;\; +g^{-2} X \Delta_1 \Delta_3^{-1} \cos^2 \alpha \, \delta_{ij} D \tilde{\mu}^i D \tilde{\mu}^j + g^{-2} e^{-\varphi} X^{-1} d\alpha^2+ g^{-2} X^{-1} \Delta_1^{-1} \sin^2 \alpha \, d\tilde{s}^2 (S^3) \Big] , \nonumber \\[12pt]
e^{\hat{\phi}} &=& e^{\frac{11}{4} \varphi} X^{-1/2} \Delta_1^{3/4} \Delta_3^{-1/2} \; , \nonumber \\[12pt]
\hat{A}_{\sst{(3)}} &=& C^1 \cos^2\alpha + C^2 \sin^2\alpha -g^{-1} \, \sin \alpha \cos \alpha \, B \wedge d\alpha \nonumber \\[5pt]
&& + \tfrac12 \, g^{-3} \, \chi \, \sin \alpha \cos \alpha \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge \rho^k \nonumber \\[5pt]
&& -\tfrac14 \, g^{-3} \, e^{\varphi} \chi X \Delta_3^{-1} \big( X \Delta_1 + Z \cos^2 \alpha \big) \sin^2 \alpha \cos^2 \alpha \, \epsilon_{ijk} \, D \tilde{\mu}^i \wedge D \tilde{\mu}^j \wedge \rho^k \nonumber \\[5pt]
&& +\tfrac14 \, g^{-3} \, e^{\varphi} \chi \Delta_1^{-1} \, \sin^2 \alpha \cos^2 \alpha \,\tilde{\mu}_i D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[5pt]
&& +\tfrac{1}{48} \, g^{-3} \, X^{-1} \Delta_1^{-2} \big( e^\phi \rho X \Delta_1 + e^\varphi \chi Z \cos^2 \alpha \big) \sin^4 \alpha \, \epsilon_{ijk} \, \rho^i \wedge \rho^j \wedge \rho^k \; , \nonumber \\[12pt]
\hat{B}_{\sst{(2)}} &=& -\tfrac12 \, g^{-2} \, e^{2\varphi} \chi X^{-1} \, \sin \alpha \, d\alpha \wedge \tilde{\mu}_i \, \rho^i \nonumber \\[5pt]
&& -\tfrac12 \, g^{-2} \, e^{2\varphi+\phi} \Delta_3^{-1} \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \cos^3 \alpha \, \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k \nonumber \\[5pt]
&& +\tfrac12 \, g^{-2} \, e^{2\varphi+\phi} \chi X^{-1} \Delta_1^{-1} \, \sin^2 \alpha \cos \alpha \, D \tilde{\mu}_i \wedge \rho^i \nonumber \\[5pt]
&& + \tfrac18 \, g^{-2} \, e^{2\varphi} \chi X^{-2} \Delta_1^{-2} \big( e^\varphi X \Delta_1 - e^\phi Z \sin^2 \alpha \big) \sin^2 \alpha \cos \alpha \, \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \, , \nonumber \\[12pt]
\hat{A}_{\sst{(1)}} &=& -\tfrac12 \, g^{-1} \, e^{-2\varphi} Z \Delta_1^{-1} \, \sin^2 \alpha \cos\alpha \, \tilde{\mu}_i \, \rho^i \; ,
\end{eqnarray}
where we use the ten-dimensional Einstein frame conventions of appendix A of \cite{Guarino:2015vca}.
}Indices $i,j$ are raised and lowered with $\delta_{ij}$, and $d\tilde{s}^2 (S^3)$ is the round metric on the $S^3$ on which the $\rho^i$ are defined. We have also introduced the following covariant derivative and one-form ${\cal A}^i$,
\begin{eqnarray} \label{covDerKKGen}
D \tilde{\mu}^i = d\tilde{\mu}^i + \epsilon^i{}_{jk} {\cal A}^j \tilde{\mu}^k \; , \qquad \textrm{with} \qquad
{\cal A}^i = - \tfrac12 Z X^{-1} \Delta_1^{-1} \sin^2 \alpha \, \rho^i \; .
\end{eqnarray}
These embedding formulae depend on the (non-vanishing) $D=4$ electric gauge coupling $g$, but not on the magnetic coupling $m$. Thus, they simultaneously describe the embedding of the dynamical SO(4)--invariant sector of the purely electric, $m = 0$, and dyonic, $m \neq 0$, ISO(7) gauging of $D=4$ ${\cal N}=8$ supergravity into massless and massive, respectively, type IIA supergravity.
The consistent embedding formulae (\ref{KKSO4sectorinIIA}) are valid in full generality for $D=4$ dynamical fields. However, being expressed in terms of the tensor hierarchy (\ref{fieldContentHierarchy}), they contain redundant degrees of freedom. As discussed in general in \cite{Guarino:2015vca}, these redundancies can be eliminated by expressing the consistent embedding in terms of the IIA field strengths and using the $D=4$ duality relations. In the case at hand, the only contributions from the $D=4$ form field strengths (\ref{eq:FieldStrengths}) happen to occur in the IIA four-form $\hat F_{\sst{(4)}}$,
\begin{eqnarray} \label{F4D=4FS}
\hat F_{\sst{(4)}} = H_{\sst{(4)}}^1 \cos^2 \alpha + H_{\sst{(4)}}^2 \sin^2 \alpha +g^{-1} \sin \alpha \cos \alpha \, d\alpha \wedge H_{\sst{(3)}} + \cdots
\end{eqnarray}
where the dots stand for $D=4$ scalar and derivative-of-scalar contributions without Hodge dualisations. Equation (\ref{F4D=4FS}) follows from (\ref{KKSO4sectorinIIA}) after using the $D=4$ definitions (\ref{eq:FieldStrengths}). It thus provides a ten-dimensional cross-check on the four-dimensional calculation of section \ref{sec:SO(4)-sector}. More importantly, equation (\ref{F4D=4FS}) now expresses the consistent embedding in terms of the independent metric and scalar degrees of freedom contained in the $D=4$ Lagrangian (\ref{L_SO4}), (\ref{VSO4}), when the duality relations (\ref{eq:FieldStrengthDuality}) are employed. A simpler example will be presented in section \ref{sec:dilatons}.
A long calculation allows us to compute the scalar contributions to the IIA field strengths. For simplicity, we present the result for constant $D=4$ scalars\footnote{The complete, dynamical IIA field strengths contain the contributions in (\ref{F4D=4FS}), (\ref{KKfieldstrengths}), plus omitted contributions from $d\varphi$, $d\phi$, $d\chi$, $d\rho$ with no Hodge dualisations.}
{\setlength\arraycolsep{1pt}
\begin{eqnarray} \label{KKfieldstrengths}
\hat{F}_{\sst{(4)}} & = & U \, \textrm{vol}_4 \nonumber \\[8pt]
&& + \tfrac14 \, \Big[ m g^{-4} \, e^{4\varphi +\phi } \chi X^{-1} \Delta_3^{-1} \, \big[ \rho X \Delta_1 - \chi Z \sin^2 \alpha \big] \cos^2 \alpha
%
%
-2g^{-3} \, \chi \nonumber \\
%
&& \qquad + 2 g^{-3} \, e^{-\phi} \Delta_1^{-1} \Delta_3^{-2} \sin^2 \alpha \cos^2 \alpha \nonumber \\
&& \qquad\quad\; \times \Big( \big( e^\phi X - e^{\varphi} Y \big) e^\varphi X \Delta_1 + \big( e^\phi - e^{\varphi} \big) e^\phi X \Delta_2 - e^\phi Z^2 \big(\cos^2 \alpha - \sin^2 \alpha \big) \Big) \nonumber \\
%
%
&& \qquad\quad\; \times \Big( e^\phi Z \big[ \rho X \Delta_1 - \chi Z \sin^2 \alpha \big] \cos^2 \alpha +e^\varphi \chi X \Delta_1 \big[ X \Delta_1 + Z \cos^2\alpha \big] \Big) \nonumber \\
&& \qquad - g^{-3} \, e^{\phi} Z \Delta_1^{-1} \Delta_3^{-1} \sin^2 \alpha \cos^2 \alpha \nonumber \\
&& \qquad\quad\quad \times \Big( 2\big[ (e^\phi -e^\varphi) \rho X - \chi Z \big] \cos^2 \alpha -3\big[ \rho X \Delta_1 - \chi Z \sin^2 \alpha \big] \Big) \nonumber \\
%
%
&& \qquad + 2 g^{-3} \, e^{\varphi} \chi X \Delta_3^{-1} \nonumber \\
&& \qquad\quad\; \times \Big( \big[ X \Delta_1 - Z \cos^2 \alpha \big] - \big[ 2 X \Delta_1 - 3 Z \sin^2 \alpha \big] \cos^2 \alpha -\big( e^\phi - e^{\varphi} \big) X \sin^2 \alpha \cos^2 \alpha \Big) \Big] \nonumber \\
%
&& \qquad\qquad \times \sin \alpha \cos \alpha \, d\alpha \wedge \epsilon_{ijk} \, D \tilde{\mu}^i \wedge D \tilde{\mu}^j \wedge \rho^k \nonumber \\[8pt]
&& - \tfrac18 \, \chi e^\varphi \, \Delta_1^{-1}\Delta_3^{-1} \Big[ m g^{-4} \, e^{3\varphi +\phi } X^{-2} \Delta_1^{-1} \, \Big( \chi e^{ \phi } \Delta_3 \sin^2 \alpha \nonumber \\
&& \qquad \qquad \qquad \qquad \qquad \qquad \quad + \big( e^\varphi X \Delta_1 - e^\phi Z \sin^2 \alpha \big) \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \cos^2 \alpha \Big) \nonumber \\
%
%
&& \qquad \qquad \qquad\quad \;\;\; - 2 g^{-3} \, \Big( \Delta_3 + \big( X \Delta_1 + Z \cos^2 \alpha \big) \big(X \Delta_1 +Z \sin^2 \alpha \big) \Big) \Big] \nonumber \\
%
&& \qquad\qquad \times \sin^2 \alpha \cos^2 \alpha \, D \tilde{\mu}_i \wedge D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[8pt]
&& + \tfrac14 \, X^{-1} \Delta_1^{-1} \Big[ m g^{-4} \, e^{4\varphi +\phi } \chi^2 X^{-1} \sin^2 \alpha \nonumber \\
%
&& \qquad \qquad \quad \;\;\; + 2 g^{-3} \, e^\varphi Z \Delta_1^{-2} \Delta_3^{-1} \Big(e^\varphi \chi X \Delta_1 \big[ X \Delta_1 + Z \cos^2 \alpha \big] \nonumber \\
%
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + e^\phi Z \big[ \rho X \Delta_1 -\chi Z \sin^2 \alpha \big] \cos^2 \alpha \Big) \sin^2\alpha \cos^2\alpha \nonumber \\
%
%
&& \qquad \qquad \quad \;\;\; - g^{-3} \, \chi \Delta_1^{-2} \Big( 2 \, \Delta_1^2 \big[ X \Delta_1 + Z \sin^2 \alpha \big] - 2 \, e^\varphi \big[ e^\varphi X \Delta_1 -e^\phi Z \sin^2 \alpha \big] \cos^2\alpha \nonumber \\
%
&& \qquad \qquad \qquad \qquad \qquad \quad \;\; + e^\varphi \Delta_1 \big[ 2 X \Delta_1 + Z \cos^2 \alpha \big] \sin^2 \alpha \Big) \Big] \nonumber \\
%
&& \qquad\qquad \times \sin \alpha \cos \alpha \, d\alpha \wedge \tilde{\mu}_i \, D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[8pt]
&& - \tfrac{1}{48} \, X^{-2} \Delta_1^{-2} \Big[ m g^{-4} \, e^{4\varphi } \chi^2 X^{-1} \big[ e^\varphi X \Delta_1 -e^\phi Z \sin^2 \alpha \big] \nonumber \\
%
&& \qquad \qquad \quad \;\;\; - g^{-3} \, \Delta_1^{-1} \Big( X \Delta_1 \big[ 2 e^\phi \rho X \Delta_1 -3 e^\varphi \chi Z \sin^2 \alpha \big] \nonumber \\
%
&& \qquad \qquad \qquad \qquad \qquad \quad +2 e^\varphi X \big[ e^\phi \rho X \Delta_1 + e^\varphi \chi Z \cos^2 \alpha \big] \nonumber \\
%
&& \qquad \qquad \qquad \qquad \qquad \quad -2 \chi Z \Delta_1 \big[ 3 X \Delta_1 +2 Z \sin^2 \alpha \big] + e^\phi \chi Z^2 \sin^4 \alpha \Big) \Big] \nonumber \\
%
%
&& \qquad\qquad \times \sin^3 \alpha \cos \alpha \, d\alpha \wedge \epsilon_{ijk} \, \rho^i \wedge \rho^j \wedge \rho^k \, , \nonumber \\[25pt]
\hat{H}_{\sst{(3)}} & = & \tfrac12 \, g^{-2} \, e^{2\varphi } \Delta_3^{-2} \, \Big[ 2 \Big( \big( e^\phi X - e^{\varphi} Y \big) e^\varphi X \Delta_1 + \big( e^\phi - e^{\varphi} \big) e^\phi X \Delta_2 \nonumber \\
%
&& \qquad \qquad \qquad \qquad \quad - e^\phi Z^2 \big(\cos^2 \alpha - \sin^2 \alpha \big) \Big) \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \cos^2 \alpha \nonumber \\
%
&& \qquad \qquad \qquad \quad \; -e^\phi \Delta_3 \Big( 2 \big[ \big( e^\phi - e^{\varphi} \big) \rho X - \chi Z \big] \cos^2 \alpha \nonumber \\
%
&& \qquad \qquad \qquad \qquad \quad -3 \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \Big) \Big] \sin \alpha \cos^2 \alpha \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k \nonumber \\[5pt]
&& - \tfrac12 \, g^{-2} \, e^{2\varphi } X^{-1} \Delta_1^{-2} \Delta_3^{-1} \, \Big[ 2 e^{\varphi + \phi } Z \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \cos^2 \alpha \nonumber \\
%
%
&& \qquad \qquad \qquad \qquad \qquad \quad - \chi \, \Delta_3 \, e^\varphi \big( \Delta_1 +2 e^\phi \big) \Big] \sin \alpha \cos^2 \alpha \, d\alpha \wedge D \tilde{\mu}_i \wedge \rho^i \nonumber \\[5pt]
&& + \tfrac18 \, g^{-2} \, X^{-2} \Delta_1^{-2} \Delta_3^{-1} \, \Big[ e^{3\varphi } \chi X \Delta_1 \Delta_3 + e^{2\varphi + \phi } \big( 2 X \Delta_1 + Z \sin^2 \alpha \big) \Big( \chi \Delta_3 \nonumber \\
%
%
&& \qquad\qquad\qquad\qquad \qquad - Z \big( \rho X \Delta_1 - \chi Z \sin^2 \alpha \big) \cos^2 \alpha \Big) \Big] \sin^2 \alpha \cos \alpha \, \epsilon_{ijk} \, D \tilde{\mu}^i \wedge \rho^j \wedge \rho^k \nonumber \\[5pt]
&& + \tfrac18 \, g^{-2} \, e^{2\varphi } \chi X^{-2} \Delta_1^{-2} \, \Big[ 2 \, e^{2\varphi } X \cos^2 \alpha - 2 \, \Delta_1 \big( X \Delta_1 + Z \sin^2\alpha \big) \nonumber \\
%
%
&& \qquad\qquad\qquad\qquad \qquad - \big( e^\varphi X \Delta_1 - e^\phi Z \sin^2 \alpha \big) \sin^2 \alpha \Big] \sin \alpha \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \; , \nonumber \\[25pt]
\hat{F}_{\sst{(2)}} & = & \tfrac12 \, m g^{-2} \, e^{2\varphi+\phi} \Delta_3^{-1} \big( \chi Z \sin^2 \alpha -\rho X \Delta_1 \big) \cos^3 \alpha \, \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k \nonumber \\[5pt]
&&
+\tfrac12 \, \Big[ m g^{-2} \, e^{2\varphi+\phi} \chi X^{-1} - g^{-1} \, e^{-2\varphi} Z \Big] \Delta_1^{-1} \, \sin^2 \alpha \cos \alpha \, D \tilde{\mu}_i \wedge \rho^i \nonumber \\[5pt]
&& -\tfrac12 \, \Big[ m g^{-2} \, e^{2\varphi} \chi X^{-1} + g^{-1} \, e^{- \varphi} Z \Delta_1^{-2} \big( 2\cos^2 \alpha - e^{- \varphi} \sin^2\alpha \, \Delta_1 \big) \Big] \, \sin \alpha \, d\alpha \wedge \tilde{\mu}_i \, \rho^i \nonumber \\[5pt]
&& +\tfrac18 \, X^{-1} \Delta_1^{-2} \, \Big[ m g^{-2} \, e^{2\varphi} \chi X^{-1} \big( e^\varphi X \Delta_1 - e^\phi Z \sin^2 \alpha \big) + 2 g^{-1} \, e^{- 2\varphi} Z\big( X \Delta_1 + Z \sin^2 \alpha \big) \Big] \nonumber \\
&&\qquad\qquad\qquad \; \times \sin^2 \alpha \cos \alpha \, \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \; ,
\end{eqnarray}
}together with $\hat F_{\sst{(0)}} = m$ \cite{Guarino:2015jca}. In agreement with the discussions in \cite{Guarino:2015vca,Varela:2015uca}, the field strengths (\ref{KKfieldstrengths}) now do depend on the magnetic gauge coupling $m$ of the $D=4$ supergravity, unlike the gauge potentials (\ref{KKSO4sectorinIIA}). By the consistency of the truncation, the metric and dilaton in (\ref{KKSO4sectorinIIA}), together with the constant-scalar field strengths (\ref{KKfieldstrengths}), solve the field equations of massive type IIA supergravity at any critical point of the $D=4$ scalar potential (\ref{VSO4}). We will make this explicit for the ${\cal N}=3$ critical point in section \ref{sec:AdSsolutions}.
The Freund--Rubin term $U \, \textrm{vol}_4$ in $\hat F_{\sst{(4)}}$ follows from the general SL(7)--covariant four-form expression given in \cite{Guarino:2015vca}. It can be written in terms of the SO(4)--invariant four-form field strengths $H_{\sst{(4)}}^1$ and $H_{\sst{(4)}}^2$ as
\begin{equation} \label{eq:FR1}
U \, \textrm{vol}_4 = H_{\sst{(4)}}^1 \cos^2 \alpha +H_{\sst{(4)}}^2 \sin^2 \alpha \;
\end{equation}
or, using the dualisation equations (\ref{eq:FieldStrengthDuality}), as
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\label{USO4}
U &=& \Big[ g \, e^{\varphi} \big(1+e^{2\varphi} \chi^2 \big) \Big( 4 - 4 \, e^{\phi+3 \varphi } \rho \chi^3 + e^{\varphi - \phi } \big(1- 3 e^{2\varphi} \chi^2 \big) \big(1+ e^{2\phi} \rho^2 \big) \Big) \nonumber \\[4pt]
&& \quad + m\, e^{\phi + 6\varphi } \rho \chi^2 \Big] \cos^2\alpha \nonumber \\[4pt]
&& + \Big[ g \, \big(1+e^{2\varphi} \chi^2 \big) \Big( 3 e^{\varphi} - 3 \, e^{\phi+4 \varphi } \rho \chi^3 +6 e^{\phi} \big(1+e^{2\varphi} \chi^2 \big) -4 e^{\phi} \big(1+e^{2\varphi} \chi^2 \big)^2 \Big) \nonumber \\[4pt]
&& \quad + m\, e^{\phi + 6\varphi } \chi^3 \Big] \sin^2\alpha \; ,
\end{eqnarray}
}in terms of the $D=4$ scalars. Note that, while the IIA field strengths (\ref{KKfieldstrengths}) are evaluated for constant scalars, the Freund--Rubin term (\ref{USO4}) is valid beyond that assumption: it takes the same form for dynamical scalars as well. Some calculation reveals that $U$ is related to the $D=4$ scalar potential (\ref{VSO4}) and its derivatives via
\begin{equation} \label{UintermsofV}
g \, U = -\tfrac{1}{3} \, V +\tfrac13 \Big( \partial_\phi V -\rho \, \partial_\rho V \Big) \, \cos^2\alpha +\tfrac{1}{12} \Big( \partial_\varphi V -2 \partial_\phi V - \chi \, \partial_\chi V +2 \rho \, \partial_{\rho} V \Big) \, \sin^2\alpha \; .
\end{equation}
At the critical points of the potential, recorded in table \ref{Table:SO4Points}, this expression reduces to
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{UintermsofVCritical}
g \, U_0 &=& -\tfrac{1}{3} \, V_0 \; ,
\end{eqnarray}
}in agreement with the general ${\cal N}=8$ discussion of \cite{Guarino:2015vca}. See respectively \cite{Varela:2015uca} and \cite{Godazgar:2015qia,Varela:2015ywx} for related discussions in the massive IIA on $S^6$ and $D=11$ on $S^7$ contexts.
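The identity (\ref{UintermsofV}) can be tested numerically, for instance by finite differences. The sketch below is ours (coupling values and step size are illustrative); it compares $g\,U$, built from the two bracketed coefficients in (\ref{USO4}), against the right-hand side of (\ref{UintermsofV}) at randomly chosen scalars and angles:

```python
import math, random

g, m = 1.1, 0.6  # illustrative coupling values

def V(varphi, chi, phi, rho):
    """Scalar potential, eq. (VSO4)."""
    ev, ep = math.exp(varphi), math.exp(phi)
    return (0.5*g**2/ep * (1 + ev**2*chi**2)
            * (-24*ev*ep - 8*ep**2
               + ev**2*(-3 + (8*chi**2 - 3*rho**2)*ep**2)
               + ev**4*chi**2*(9 + (3*rho + 4*chi)**2*ep**2))
            - g*m*chi**2*(3*rho + 4*chi)*ev**6*ep
            + 0.5*m**2*ev**6*ep)

def U(varphi, chi, phi, rho, alpha):
    """Freund-Rubin function, eq. (USO4)."""
    ev, ep = math.exp(varphi), math.exp(phi)
    X = 1 + ev**2*chi**2
    U1 = (g*ev*X*(4 - 4*ep*ev**3*rho*chi**3
                  + (ev/ep)*(1 - 3*ev**2*chi**2)*(1 + ep**2*rho**2))
          + m*ep*ev**6*rho*chi**2)
    U2 = (g*X*(3*ev - 3*ep*ev**4*rho*chi**3 + 6*ep*X - 4*ep*X**2)
          + m*ep*ev**6*chi**3)
    return U1*math.cos(alpha)**2 + U2*math.sin(alpha)**2

def d(f, x, i, h=1e-5):
    """Central finite difference of f in its i-th argument."""
    xp, xm = list(x), list(x)
    xp[i] += h; xm[i] -= h
    return (f(*xp) - f(*xm)) / (2*h)

random.seed(2)
for _ in range(10):
    p = [random.uniform(-0.4, 0.4) for _ in range(4)]
    al = random.uniform(0, math.pi/2)
    dvarphi, dchi, dphi, drho = (d(V, p, i) for i in range(4))
    rhs = (-V(*p)/3
           + (dphi - p[3]*drho)/3 * math.cos(al)**2
           + (dvarphi - 2*dphi - p[1]*dchi + 2*p[3]*drho)/12 * math.sin(al)**2)
    assert abs(g*U(*p, al) - rhs) < 1e-6
```

The critical-point relation (\ref{UintermsofVCritical}) then follows immediately from (\ref{UintermsofV}) by setting all derivatives of $V$ to zero.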
\subsection{Local and global structure} \label{sec:Regularity}
For arbitrary values of the $D=4$ scalars, the six-dimensional internal local geometry in (\ref{KKSO4sectorinIIA}) can be regarded as the warped product of an interval $I$, on which $\alpha$ takes values, and a family of five-dimensional spaces parametrised by $\alpha$. At fixed $\alpha$, the five-dimensional space corresponds to an $S^2$ bundle over $S^3$, with connection one-forms $ {\cal A}^i$ given in (\ref{covDerKKGen}). All such bundles are trivial. In the present case, this can be seen from the fact that, at fixed $\alpha$, the curvature of the connection $ {\cal A}^i$ is identically zero by the Maurer-Cartan equations (\ref{eq:MC}). This local characterisation is useful to discuss the global extension of the geometry, to which we now turn. It is not the only possible local description, though. A different local characterisation will be given below.
Globally, the internal geometry extends smoothly into $S^6$. This is expected from the fact that the $D=4$ theory (\ref{L_SO4}), (\ref{VSO4}) arises upon consistent Kaluza--Klein truncation of massive type IIA on $S^6$ via (\ref{KKSO4sectorinIIA}), and the Kaluza--Klein deformations are not expected to change the internal topology. That the topology of the compactification space is indeed $S^6$ is most easily seen by continuously deforming the geometry into the G$_2$--invariant locus (\ref{eq:SO4toG2}). On this locus, the internal metric in (\ref{KKSO4sectorinIIA}) reduces to the usual, round Einstein metric (\ref{RoundS6}) on $S^6$. The local line element (\ref{RoundS6}) is adapted to the topological construction of $S^6$ as the `join' of $S^2$ and $S^3$, provided the $S^6$ angle $\alpha$ is restricted to the interval
\begin{equation} \label{anglealpha}
\alpha \in I \equiv [ 0 , \tfrac{\pi}{2} ] \; .
\end{equation}
On the G$_2$--invariant locus and at $\alpha = 0$, the $S^2$ remains finite and the $S^3$ collapses; at the other endpoint, $\alpha = \frac{\pi}{2}$, the opposite happens.
The expression (\ref{KKSO4sectorinIIA}) makes it straightforward to continuously deform the internal geometry to the round metric on $S^6$, since it is given as a function of the $D=4$ scalar manifold (\ref{ScalManN=1}). However, once the scalars are fixed to their specific values at some critical point of the potential (\ref{VSO4}), as {\it e.g.}~in the explicit ${\cal N}=3$ solution (\ref{SO4SolN=3}) below, tracking down the deformation into the round $S^6$ geometry is no longer obvious. In such cases, it is more useful to characterise the internal $S^6$ directly, by verifying that it still corresponds to the join of $S^2$ and $S^3$: the shrinking patterns of $S^2$ and $S^3$ at each endpoint of the interval $I$ must remain valid away from the G$_2$--invariant locus. To see this, we use the definitions (\ref{deltas}) to compute the behaviour of the relevant metric functions at both endpoints of $I$. At the lower end,
\begin{eqnarray} \label{lowerend}
&& e^{ \varphi} X^2 \Delta_1 \Delta_3^{-1} \, \cos^2\alpha \, \xrightarrow[\alpha \rightarrow 0]{} \, e^{ \phi - \varphi} XY^{-1} - e^{ 2\phi - 2 \varphi} ( X^2 -e^{ -2 \varphi} Z^2 ) Y^{-2} \, \alpha^2 \; + \ {\cal O}(\alpha^4) \; , \nonumber \\[4pt]
&& e^{\varphi} \Delta_1^{-1} \, \sin^2\alpha \, \xrightarrow[\alpha \rightarrow 0]{} \, \alpha^2 +{\cal O}(\alpha^4) \; .
\end{eqnarray}
Thus, $S^2$ remains finite and $S^3$ shrinks to zero size for all values of the $D=4$ scalars. At the upper end,
\begin{eqnarray} \label{upperend}
&& e^{ \varphi} X^2 \Delta_1 \Delta_3^{-1} \, \cos^2\alpha \, \xrightarrow[\alpha \rightarrow \frac{\pi}{2}]{} \, (\tfrac{\pi}{2} -\alpha)^2 +{\cal O}( (\tfrac{\pi}{2}-\alpha)^4) \; , \nonumber \\[4pt]
&& e^{\varphi} \Delta_1^{-1} \, \sin^2\alpha \, \xrightarrow[\alpha \rightarrow \frac{\pi}{2}]{} \, e^{ -\phi + \varphi} + e^{ -2\phi + 2 \varphi} (\tfrac{\pi}{2} -\alpha)^2 +{\cal O}( (\tfrac{\pi}{2}-\alpha)^4) \; ,
\end{eqnarray}
and the opposite happens: $S^2$ shrinks and $S^3$ remains finite for all $D=4$ scalar values.
An alternative local characterisation of the internal geometry in (\ref{KKSO4sectorinIIA}) is the following: the local internal geometry can also be regarded as an $S^2$ bundle over the four-dimensional local geometry $M_4 \equiv I \times S^3$, where $I$ is the interval (\ref{anglealpha}) parametrised by $\alpha$. This local construction is a generalisation of the twistor fibration over a four-dimensional Riemannian space $M_4$. In the usual twistor construction, the metric $ds^2 (M_4)$ on $M_4$ is taken to be Einstein with (anti)self-dual Weyl tensor. The local metric on the six-dimensional twistor bundle is
\begin{eqnarray} \label{eq:twistormetric}
ds^2_6 = \tfrac14 \delta_{ij} D\tilde{\mu}^i D\tilde{\mu}^j + \tfrac12 ds^2 (M_4) \; ,
\end{eqnarray}
see {\it e.g.}~\cite{Gibbons:1989er}. Here, $\tilde{\mu}^i$ parametrise an $S^2$ as in (\ref{eq:S2}), and the covariant derivatives $D\tilde{\mu}^i$ are defined as in the leftmost equation in (\ref{covDerKKGen}), in terms of an $M_4$--valued connection ${\cal A}^i$. Being four-dimensional and Einstein, $M_4$ is automatically quaternionic-K\"ahler. The curvature of the connection,
\begin{equation} \label{eq:curvature}
{\cal F}^i = d{\cal A}^i + \tfrac12 \epsilon^i{}_{jk} {\cal A}^j \wedge {\cal A}^k \; ,
\end{equation}
is proportional to the quaternionic-K\"ahler forms $J^i$ on $M_4$. The self-duality or anti-self-duality of the Weyl tensor on $M_4$ translates into the anti-self-duality or self-duality, respectively, of ${\cal F}^i$ with respect to the metric $ds^2 (M_4)$. For example, the twistor bundle over $M_4 = S^4$ coincides with the three-dimensional complex projective space, $\mathbb{CP}^3$. Taking $ds^2 (M_4)$ to be the usual round metric on $S^4$,
\begin{eqnarray}
ds^2 (S^4) = d\alpha^2 + \sin^2\alpha \, ds^2 (S^3)
\end{eqnarray}
(with $\alpha$ here ranging in $ 0 \leq \alpha \leq \pi$) and
\begin{eqnarray}
{\cal A}^i = \tfrac12 ( 1- \cos\alpha) \, \rho^i ,
\end{eqnarray}
where the $\rho^i$ are the right-invariant Maurer-Cartan one-forms on $S^3$, subject to (\ref{eq:MC}), the twistor bundle metric (\ref{eq:twistormetric}) becomes the homogeneous nearly-K\"ahler metric on $\mathbb{CP}^3$.
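As an independent cross-check of this example, one can verify symbolically that the curvature of the connection above is anti-self-dual with respect to the round $S^4$ metric, as the twistor construction requires. The sketch below is not taken from the text: it assumes the common conventions $d\rho^i = -\tfrac12 \epsilon^i{}_{jk}\rho^j\wedge\rho^k$ for (\ref{eq:MC}) and $ds^2(S^3) = \tfrac14 \delta_{ij}\rho^i\rho^j$, which the formulae above do not fix; reversing the orientation exchanges self-duality and anti-self-duality.

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)

# Twistor connection on S^4:  A^i = f(alpha) rho^i,  f = (1 - cos alpha)/2
f = (1 - sp.cos(a)) / 2

# Assumed conventions (not fixed by the excerpt):
#   Maurer-Cartan:  d rho^i = -(1/2) eps^i_{jk} rho^j /\ rho^k
#   round S^3:      ds^2(S^3) = (1/4) delta_ij rho^i rho^j
c_mc = -sp.Rational(1, 2)

# F^i = dA^i + (1/2) eps^i_{jk} A^j /\ A^k
#     = f' dalpha /\ rho^i + (c_mc f + f^2/2) eps^i_{jk} rho^j /\ rho^k
c_dar = sp.diff(f, a)
c_rr = c_mc * f + f**2 / 2

# Orthonormal frame on S^4: e^0 = dalpha, e^a = (sin(alpha)/2) rho^a, hence
#   dalpha /\ rho^a           = (2/sin a)    e^0 /\ e^a
#   eps^a_{bc} rho^b /\ rho^c = (4/sin^2 a)  eps^a_{bc} e^b /\ e^c
F_0a = sp.simplify(c_dar * 2 / sp.sin(a))    # component along e^0 /\ e^a
F_bc = sp.simplify(c_rr * 4 / sp.sin(a)**2)  # component along eps e^b /\ e^c

# Anti-self-duality *F^i = -F^i amounts to F_bc = -(1/2) F_0a
print(F_0a, F_bc)                  # 1  -1/2
print(sp.simplify(F_0a + 2*F_bc))  # 0
```

With these conventions the frame components come out as $(1,-\tfrac12)$, which is precisely the anti-self-duality condition $2 F_{bc} = -F_{0a}$.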
The local internal metric in (\ref{KKSO4sectorinIIA}) is a generalisation of the twistor construction. In our case, $M_4 \equiv I \times S^3$ is the upper $S^4$ hemisphere, given the range (\ref{anglealpha}) of $\alpha$. The metric $ds^2_4 (M_4)$ induced on it is not self-dual Einstein for any values of the $D=4$ scalars. On the G$_2$--invariant locus (\ref{eq:SO4toG2}) the $S^2$ fibration trivialises, ${\cal A}^i = 0$, and the geometry becomes locally a warped product of $S^2$ and $I \times S^3$. Away from the G$_2$--invariant locus, the $S^2$ is warped (unlike in (\ref{eq:twistormetric})), and non-trivially fibered through (\ref{covDerKKGen}) over $I \times S^3$. The curvature (\ref{eq:curvature}) of the connection ${\cal A}^i$ is
\begin{equation} \label{eq:ConnectionFS}
{\cal F}^i = -e^{\varphi} Z X^{-1} \Delta_1^{-2} \sin \alpha \cos \alpha \, d \alpha \wedge \rho^i + \tfrac18 Z X^{-2} \Delta_1^{-2} (2 X \Delta_1 + Z \sin^2 \alpha ) \sin^2 \alpha \, \epsilon^i{}_{jk} \rho^j \wedge \rho^k \; ,
\end{equation}
and its Hodge dual with respect to the metric induced on $I \times S^3$,
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{eq:ConnectionDualFS}
* {\cal F}^i &=& \tfrac12 e^{-\frac12 \varphi} Z X^{-2} \Delta_1^{-\frac32} (2 X \Delta_1 + Z \sin^2 \alpha ) \sin \alpha \, d \alpha \wedge \rho^i \nonumber \\[4pt]
&& -\tfrac14 e^{\frac32 \varphi} Z X^{-1} \Delta_1^{-\frac52} \sin^2 \alpha \cos \alpha \, \epsilon^i{}_{jk} \rho^j \wedge \rho^k \; .
\end{eqnarray}
}The non-trivial connection $ {\cal A}^i$ is neither self-dual nor anti-self-dual for any values of the $D=4$ scalars, as nowhere on the scalar manifold (\ref{ScalManN=1}) do (\ref{eq:ConnectionFS}), (\ref{eq:ConnectionDualFS}) obey $* {\cal F}^i = \pm {\cal F}^i$.
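A quick symbolic sanity check of the two expressions above is possible: since the Hodge star squares to $+1$ on two-forms in four Euclidean dimensions, the product of the component ratios of $*{\cal F}^i$ against ${\cal F}^i$ must equal one, whereas (anti)self-duality would require each ratio separately to be $\pm 1$. The sketch below treats $\Delta_1$, $X$, $Z$ as unspecified positive quantities; their explicit forms in (\ref{deltas}) are not needed for the cancellation.

```python
import sympy as sp

ph, X, Z, D1, a = sp.symbols('varphi X Z Delta_1 alpha', positive=True)
s2 = sp.sin(a)**2

# Component functions of F^i (eq:ConnectionFS):
# coefficient of d alpha /\ rho^i and of eps^i_{jk} rho^j /\ rho^k
A1 = -sp.exp(ph) * Z / X * D1**-2 * sp.sin(a) * sp.cos(a)
A2 = sp.Rational(1, 8) * Z * X**-2 * D1**-2 * (2*X*D1 + Z*s2) * s2

# Same components for *F^i (eq:ConnectionDualFS)
B1 = sp.Rational(1, 2) * sp.exp(-ph/2) * Z * X**-2 * D1**sp.Rational(-3, 2) \
     * (2*X*D1 + Z*s2) * sp.sin(a)
B2 = -sp.Rational(1, 4) * sp.exp(3*ph/2) * Z / X * D1**sp.Rational(-5, 2) \
     * s2 * sp.cos(a)

# ** = +1 on two-forms in four Euclidean dimensions, so the product of the
# two component ratios of *F against F must equal one:
print(sp.simplify((B1/A1) * (B2/A2)))  # 1

# (Anti)self-duality *F = +/- F would require each ratio separately to be
# +/-1; its residual alpha- and scalar-dependence shows this fails generically:
print(sp.simplify(B1/A1))
```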
Massive type IIA supergravity admits a class of ${\cal N}=1$ direct product solutions AdS$_4 \times M_6$ where $M_6$ is nearly-K\"ahler \cite{Behrndt:2004km,Behrndt:2004mj} or half-flat \cite{Lust:2004ig,Tomasiello:2007eq,Koerber:2008rx}. For example, $M_6$ can be taken to be the round $S^6$ equipped with its homogeneous nearly-K\"ahler structure, see appendix \ref{subset:S6Geom} for a review in the present context. On topologies different from $S^6$, a natural way to engineer nearly-K\"ahler geometries or half-flat geometries of the required type is via the usual twistor fibration \cite{Tomasiello:2007eq}. For example, $M_6$ can be taken to be $\mathbb{CP}^3$ with metric (\ref{eq:twistormetric}). Our local geometry (\ref{KKSO4sectorinIIA}) restricted to the G$_2$--invariant locus (\ref{eq:SO4toG2}) reduces to the round, homogeneous nearly-K\"ahler structure on $S^6$. Away from the G$_2$ locus, as in the ${\cal N}=3$ solution of section \ref{sec:AdSsolutions}, the geometry can be locally described by the generalised twistor fibration discussed above.
On the G$_2$--invariant locus (\ref{eq:SO4toG2}), the symmetry of the configuration (\ref{KKSO4sectorinIIA}) is enhanced to a homogeneously acting G$_2$. See section \ref{subsec:G2fromSO4} for further details. Away from the G$_2$ locus, the isometry of the internal geometry is the SO(4) subgroup of SO(7) defined in either (\ref{embedding_SO4}) or (\ref{embedding_SO4_2}). The group SO(4) acts by isometries with cohomogeneity one, and is also preserved by the supergravity forms. The $ \textrm{SO}(3)_{\textrm{d}}$ subgroup of SO(4) rotates the $S^2$ fibers, and $ \textrm{SO}(3)_{\textrm{R}}$ acts on the $S^3$ base. The supersymmetry of the ${\cal N}=3$ solution will be discussed in section \ref{sec:N=3susy}.
Some generalisations can be envisaged. When the $D=4$ scalars are restricted to the G$_2$--invariant locus (\ref{eq:SO4toG2}), the type IIA solution (\ref{KKSO4sectorinIIA}) depends only on the homogeneous nearly-K\"ahler structure on $S^6$. In this case, the $S^6$ can be replaced with any other nearly-K\"ahler manifold. This situation was discussed in \cite{Varela:2015uca}. Away from the G$_2$ locus, the solution can also be generalised. Now, the generalisation entails replacing $S^3$ with the cyclic lens space $S^3/\mathbb{Z}_p$, with the identification acting on the Hopf fiber. While $S^3/\mathbb{Z}_p$ is a smooth manifold, the total six-dimensional geometry corresponding to this generalisation displays orbifold singularities.
\section{Further truncations} \label{sec:urthertruncs}
It is useful to obtain particular cases of the uplifting formulae derived above. Here, we will discuss the truncations to the sectors of the $D=4$ supergravity with G$_2$ and $\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime$ symmetry.
\subsection{Truncation to the G$_2$ sector}
\label{subsec:G2fromSO4}
The sector of $D=4$ ISO(7) supergravity that retains singlets under the G$_2$ subgroup of SO(7) was analysed in detail in \cite{Guarino:2015qaa}, and its explicit ten-dimensional embedding worked out in \cite{Guarino:2015vca}. Its consistent IIA embedding was recovered from that of the SU(3)--invariant sector in \cite{Varela:2015uca}. Here, we will recover the embedding of the G$_2$ sector from the SO(4)--invariant consistent truncation formulae of section \ref{subsec:SU3UpliftSubsec}.
The $D=4$ G$_2$--invariant sector is recovered from the SO(4) sector by imposing the identifications (\ref{eq:SO4toG2}). Bringing these relations to the consistent embedding formulae (\ref{KKSO4sectorinIIA}), we find that the connection (\ref{covDerKKGen}) trivialises, ${\cal A}^i =0$, and that the scalar dependence of the internal metric factorises in front of the round Einstein metric $ds^2(S^6)$ on $S^6$ foliated as in (\ref{RoundS6}). The internal $S^6$ dependence drops out from the dilaton. Finally, all the dependence of the IIA potentials on the internal $S^6$ combines into the homogeneous nearly-K\"ahler structure ${\cal J}$, $\Upomega$ on $S^6$, through the expressions (\ref{NKintermsofmus}). More concretely, (\ref{KKSO4sectorinIIA}) reduces to
{\setlength\arraycolsep{0pt}
\begin{eqnarray} \label{G2embeddingGeom}
&& d \hat{s}_{10}^2 = e^{\frac{3}{4} \varphi} \big( 1+e^{2 \varphi} \chi^2 \big)^{\frac{3}{4}} ds^2_4 + g^{-2} e^{-\frac{1}{4} \varphi} \big( 1+e^{2 \varphi} \chi^2 \big)^{-\frac{1}{4}} ds^2(S^6) \; , \nonumber \\[5pt]
&& e^{\hat \phi} = e^{\frac{5}{2} \varphi} \big( 1+e^{2 \varphi} \chi^2 \big)^{-\frac32} \; , \nonumber \\[5pt]
&& \hat A_{\sst{(3)}} = C + g^{-3} \chi \, \textrm{Im} \, \Upomega \; , \qquad
\hat B_{\sst{(2)}} = g^{-2} \, e^{2 \varphi} \chi \big( 1+e^{2 \varphi} \chi^2 \big)^{-1} \, {\cal J} \; , \qquad
\hat A_{\sst{(1)}} = 0 \; ,
\end{eqnarray}
}in agreement with the formulae for the consistent truncation to the G$_2$--invariant sector given in (4.3) of \cite{Guarino:2015vca}. Similarly, the constant-scalar field strengths (\ref{KKfieldstrengths}) reduce to the corresponding contributions in (4.4) of \cite{Guarino:2015vca}.
\subsection{Truncation to the dilaton sector}
\label{sec:dilatons}
According to (\ref{eq:SO4toSO3pSO4p}), the $\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime$ --invariant sector of ${\cal N}=8$ ISO(7) supergravity retains only the dilatons $\phi$, $\varphi$, along with the two- and three-forms in the tensor hierarchy (\ref{fieldContentHierarchy}). From (\ref{KKSO4sectorinIIA}), (\ref{eq:SO4toSO3pSO4p}), it is apparent that the field strengths in this subsector will not contain terms in $d \varphi$ or $d\phi$, prior to imposing the dualisation (\ref{eq:FieldStrengthDuality}). In other words, equation (\ref{F4D=4FS}) for $\hat F_{\sst{(4)}}$ is exact (the dots can be disregarded) and $\hat H_{\sst{(3)}}$ and $\hat F_{\sst{(2)}}$ are zero. Using the dualisation conditions (\ref{eq:FieldStrengthDuality}), the full non-linear embedding of the $D=4$ metric plus dilaton sector into massive type IIA reads, at the level of the field strengths,
{\setlength\arraycolsep{1pt}
\begin{eqnarray} \label{KKSO4sectorinIIADilatons}
d\hat{s}^2_{10} & = & e^{\frac18 \varphi} \Delta_1^{1/8} \Delta_3^{1/4} \Big[ \, ds_4^2 \nonumber \\[6pt]
&& \;\; +g^{-2} \Delta_1 \Delta_3^{-1} \cos^2 \alpha \, d\tilde{s}^2 (S^2) + g^{-2} e^{-\varphi} d\alpha^2+ g^{-2} \Delta_1^{-1} \sin^2 \alpha \, d\tilde{s}^2 (S^3) \Big] , \nonumber \\[12pt]
e^{\hat{\phi}} &=& e^{\frac{11}{4} \varphi} \Delta_1^{3/4} \Delta_3^{-1/2} \; , \nonumber \\[12pt]
\hat F_{\sst{(4)}} & =& \Big[ g \, \big( 4 \, e^{\varphi} + e^{2\varphi - \phi } \big) \cos^2\alpha + \big( 3 e^{\varphi} + 2 e^{ \phi } \big) \sin^2\alpha \Big] \, \textrm{vol}_4 +g^{-1} \sin \alpha \cos \alpha \, d\alpha \wedge * \big( d\phi - d\varphi \big) \; , \nonumber \\[12pt]
\hat H_{\sst{(3)}} & =& \hat F_{\sst{(2)}} = 0 \; ,
\end{eqnarray}
}with $\Delta_1$, $\Delta_3$ given by (\ref{deltas}) with $\chi = \rho=0$. In this sector, the fibration of $S^2$ over $ I \times S^3$ also trivialises, ${\cal A}^i=0$. Accordingly, the symmetry preserved by the configuration (\ref{KKSO4sectorinIIADilatons}) is the $\textrm{SO}(3)^\prime \times \textrm{SO}(4)^\prime$ subgroup of SO(7) defined in (\ref{embedding_SO4}), with $\textrm{SO}(3)^\prime$ and $\textrm{SO}(4)^\prime$ respectively acting on the $S^2$ and the $S^3$. By using the $D=4$ duality hierarchy (\ref{eq:FieldStrengthDuality}), the consistent embedding (\ref{KKSO4sectorinIIADilatons}) is expressed in terms of independent four-dimensional degrees of freedom only: the dilatons and their derivatives, and the metric, explicitly and through the Hodge dual.
\section{${\cal N}=3$ SO(4)--invariant AdS$_4$ solution of massive type IIA}
\label{sec:AdSsolutions}
By the consistency of the embedding, the ten-dimensional metric and dilaton in (\ref{KKSO4sectorinIIA}), along with the field strengths that follow from the potentials given in that equation, satisfy the equations of motion of massive type IIA supergravity provided the equations of motion that follow from the $D=4$ Lagrangian (\ref{L_SO4}), (\ref{VSO4}) are imposed. In particular, (\ref{KKSO4sectorinIIA}), (\ref{KKfieldstrengths}) evaluated on the critical points of the scalar potential (\ref{VSO4}) summarised in table \ref{Table:SO4Points} give rise to AdS$_4$ solutions of massive type IIA. The ${\cal N}=1$ and ${\cal N}=0$ critical points with G$_2$ symmetry uplift to the solutions respectively found in \cite{Behrndt:2004km} and \cite{Lust:2008zd}. The non-supersymmetric SO(7)--invariant critical point gives rise to a solution given in \cite{Romans:1985tz}. See \cite{Varela:2015uca} for these solutions in our conventions. In all these solutions with at least G$_2$ symmetry, the fibration trivialises, ${\cal A}^i = 0$, and the metric becomes the round, SO(7)--symmetric Einstein metric on $S^6$. In the G$_2$--invariant solutions, the symmetry is reduced by the supergravity forms, which take values along the homogeneous nearly-K\"ahler structure on $S^6$.
Here we are interested in the uplift of the ${\cal N}=3$ critical point of ISO(7) supergravity \cite{Gallerati:2014xra}. Bringing the vacuum expectation values of the $D=4$ scalars recorded in table \ref{Table:SO4Points} to the formulae (\ref{KKSO4sectorinIIA}), (\ref{KKfieldstrengths}), and rescaling the external $D=4$ metric and the Freund--Rubin term with the cosmological constant recorded in the table so that AdS$_4$ has unit radius, as in \cite{Varela:2015uca}, we find the massive type IIA uplift of the ${\cal N}=3$ solution. In Einstein frame it reads,
{\setlength{\arraycolsep}{1pt}
\begin{eqnarray} \label{SO4SolN=3}
d\hat{s}^2_{10} & = & L^2 \, \big( 3 + \cos 2\alpha \big)^{1/8} \Big( 3 \cos^4 \alpha + 3 \cos^2 \alpha +2 \Big)^{1/4} \Big[ \, ds^2(\textrm{AdS}_4) \nonumber \\[4pt]
&& \qquad \quad + \frac{2 \big( 3 + \cos 2\alpha \big) \cos^2 \alpha}{ 3 \cos^4 \alpha +3 \cos^2 \alpha +2 } \, \delta_{ij} D \tilde{\mu}^i D \tilde{\mu}^j + 2 \, d\alpha^2+ \frac{8 \sin^2 \alpha }{3 + \cos 2\alpha} \, d\tilde{s}^2 (S^3) \Big] , \nonumber \\[12pt]
\label{DilatonN=3} e^{\hat{\phi}} &=& e^{\phi_0} \, \frac{\big( 3 + \cos 2\alpha \big)^{3/4}}{\big( 3 \cos^4 \alpha +3 \cos^2 \alpha +2 \big)^{1/2}} \; , \nonumber \\[12pt]
L^{-3} e^{\frac{1}{4} \phi_0} \, \hat{F}_{\sst{(4)}} & = & 3\sqrt{2} \, \textrm{vol} (\textrm{AdS}_4 ) \nonumber \\[5pt]
&& -\frac{ 4\sqrt{6} \, \big( 2 \cos^4\alpha + 3 \cos^2 \alpha +3 \big) \sin \alpha \cos^3 \alpha }{ \big( 3 + \cos 2\alpha \big) \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, d\alpha \wedge \epsilon_{ijk} \, D \tilde{\mu}^i \wedge D \tilde{\mu}^j \wedge \rho^k \nonumber \\[5pt]
&& +\frac{ \sqrt{6} \, \big( 5 + 3 \cos 2\alpha \big) \sin^2 \alpha \cos^2 \alpha }{ 2 \, \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, D \tilde{\mu}_i \wedge D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[5pt]
&& -\frac{ 4 \sqrt{6} \, \sin^5 \alpha \cos \alpha }{ \big( 3 + \cos 2\alpha \big)^2 } \, d\alpha \wedge \tilde{\mu}_i \, D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[5pt]
&& -\frac{ 2 \sqrt{2} \, \big( 5 + 3 \cos 2\alpha \big) \sin^3 \alpha \cos \alpha}{ \sqrt{3} \, \big( 3 + \cos 2\alpha \big)^3 } \, d\alpha \wedge \epsilon_{ijk} \, \rho^i \wedge \rho^j \wedge \rho^k \, , \nonumber \\[12pt]
L^{-2} e^{-\frac{1}{2} \phi_0} \, \hat{H}_{\sst{(3)}} & = & -\frac{ 2\sqrt{3} \, \big( 3 \cos^6\alpha+ 8 \cos^4\alpha + 11 \cos^2 \alpha +2 \big) }{ \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big)^2 } \sin \alpha \cos^2 \alpha \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k \nonumber \\[5pt]
&& +\frac{ 8\sqrt{3} \, \big( \cos^4\alpha + \cos^2 \alpha +2 \big) \sin \alpha \cos^2 \alpha }{ \big( 3 + \cos 2\alpha \big) \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, d\alpha \wedge D \tilde{\mu}_i \wedge \rho^i \nonumber \\[5pt]
&& +\frac{ \sqrt{3} \, \big( 3 + \cos 2\alpha \big) \sin^2 \alpha \cos \alpha }{ 2 \, \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, \epsilon_{ijk} \, D \tilde{\mu}^i \wedge \rho^j \wedge \rho^k \nonumber \\[5pt]
&& -\frac{ 2 \sqrt{3} \, \sin^5 \alpha }{ \big( 3 + \cos 2\alpha \big)^2 } \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \; , \nonumber \\[12pt]
L^{-1} e^{\frac{3}{4} \phi_0} \, \hat{F}_{\sst{(2)}} & = & \frac{ \sqrt{2} \, \big( 5 + 3 \cos 2\alpha \big) \cos^3 \alpha }{ 4 \, \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k
+\frac{ 2 \sqrt{2} \, \sin^2 \alpha \cos \alpha}{ 3 + \cos 2\alpha } \, D \tilde{\mu}_i \wedge \rho^i \nonumber \\[5pt]
&& -\frac{ 4 \sqrt{2} \, \sin^3 \alpha }{ \big( 3 + \cos 2\alpha \big)^2 } \, d\alpha \wedge \tilde{\mu}_i \, \rho^i
+\frac{ 3 \sin^4 \alpha \cos \alpha }{ \sqrt{2} \, \big( 3 + \cos 2\alpha \big)^2 } \, \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \; , \nonumber \\[12pt]
L \, e^{\frac{5}{4} \phi_0} \, \hat{F}_{\sst{(0)}} & = & \frac{ \sqrt{3}}{2\sqrt{2}} \;,
\end{eqnarray}
in the IIA conventions of appendix A of \cite{Guarino:2015vca}.
}The covariant derivative of $\tilde{\mu}^i$ and the corresponding connection ${\cal A}^i$ are, from (\ref{covDerKKGen}),
\begin{eqnarray}
D \tilde{\mu}^i = d\tilde{\mu}^i + \epsilon^i{}_{jk} {\cal A}^j \tilde{\mu}^k \; , \qquad \textrm{with} \qquad
{\cal A}^i = \frac{\sin^2 \alpha }{3 + \cos 2\alpha} \, \rho^i \; ,
\end{eqnarray}
and we have defined $L^2 \equiv 2^{-\frac{31}{12}} \, 3^{\frac{3}{8}} \, g^{-\frac{25}{12}} \, m^{\frac{1}{12}}$ and $e^{\phi_0} \equiv 2^{-\frac{1}{6}} \, 3^{\frac{1}{4}} \, g^{\frac{5}{6}} m^{-\frac{5}{6}}$. A set of gauge potentials for the (internal) field strengths in (\ref{SO4SolN=3}) follows from (\ref{KKSO4sectorinIIA})\footnote{This set of gauge potentials and the metric in (\ref{SO4SolN=3}) are related to the expressions given in \cite{Pang:2015vna} by identifying their $S^6$ angle $\xi_{\textrm{PR}}$ with our $\alpha$, $\xi_{\textrm{PR}}=\alpha$, relating their $S^6$ embedding coordinates $\mu_{\textrm{PR}}^{\hat{i}}$, $\nu_{\textrm{PR}}^i$ with our $\mu^{\hat{i}}$, $\mu^i$ through
$\mu_{\textrm{PR}}^{\hat{i}}=\sin\alpha \, \tilde{\mu}^{\hat{i}}$, $\nu_{\textrm{PR}}^1=\cos\alpha \, \tilde{\mu}^3$, $\nu_{\textrm{PR}}^2=\cos\alpha \, \tilde{\mu}^2$, $\nu_{\textrm{PR}}^3=-\cos\alpha \, \tilde{\mu}^1$,
letting $\frac{2^{\frac{7}{8}}}{9\sqrt{2}} L^2_{\textrm{PR}}= L^2$, and rearranging significantly. The explicit expressions (\ref{muS4coords}), (\ref{LIandRIMCforms}) for the $S^3$ embedding coordinates $\tilde{\mu}^{\hat{i}}$ and the right-invariant forms $\rho^i$ are also useful for this comparison. Note, however, that our expressions for the ${\cal N}=3$ solution follow directly from the uplifting formulae (\ref{KKSO4sectorinIIA}) for the dynamical SO(4)--invariant sector of ${\cal N}=8$ ISO(7) supergravity, which were not given in \cite{Pang:2015vna}.}:
{\setlength\arraycolsep{1pt}
\begin{eqnarray} \label{KKformpotentialsN=3}
L^{-3} e^{\frac{1}{4} \phi_0} \, \hat{A}_{\sst{(3)}} &=& \frac{2\sqrt{2}}{\sqrt{3}} \, \sin \alpha \cos \alpha \, d\alpha \wedge \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge \rho^k \nonumber \\[5pt]
&& -\frac{ 2 \sqrt{2} \, \sin^2 \alpha \cos^2 \alpha }{ \sqrt{3} \, \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, \epsilon_{ijk} \, D \tilde{\mu}^i \wedge D \tilde{\mu}^j \wedge \rho^k \nonumber \\[5pt]
&& +\frac{4 \sqrt{2} \, \sin^2 \alpha \cos^2 \alpha}{\sqrt{3} \, \big( 3 + \cos 2\alpha \big) } \,\tilde{\mu}_i D \tilde{\mu}_j \wedge \rho^i \wedge \rho^j \nonumber \\[5pt]
&& -\frac{2 \sqrt{2} \, \big( 2 + \cos 2\alpha \big) \sin^4 \alpha }{3 \sqrt{3} \, \big( 3 + \cos 2\alpha \big)^2 } \, \epsilon_{ijk} \, \rho^i \wedge \rho^j \wedge \rho^k \; , \nonumber \\[12pt]
L^{-2} e^{-\frac{1}{2} \phi_0} \, \hat{B}_{\sst{(2)}} &=& -\frac{2}{\sqrt{3}} \, \sin \alpha \, d\alpha \wedge \tilde{\mu}_i \, \rho^i
+ \frac{ \big( 5 +3 \cos 2\alpha \big) \cos^3 \alpha }{ \sqrt{3} \, \big( 3 \cos^4\alpha + 3\cos^2 \alpha +2 \big) } \, \epsilon_{ijk} \, \tilde{\mu}^i D \tilde{\mu}^j \wedge D \tilde{\mu}^k \nonumber \\[4pt]
&& +\frac{4 \sin^2 \alpha \cos \alpha}{\sqrt{3} \, \big( 3 + \cos 2\alpha \big) } \, D \tilde{\mu}_i \wedge \rho^i
+\frac{\big( 7 + \cos 2\alpha \big) \sin^2 \alpha \cos \alpha}{\sqrt{3} \, \big( 3 + \cos 2\alpha \big)^2 } \, \epsilon_{ijk} \, \tilde{\mu}^i \rho^j \wedge \rho^k \, , \nonumber \\[12pt]
L^{-1} e^{\frac{3}{4} \phi_0} \, \hat{A}_{\sst{(1)}} &=& \sqrt{2} \; \frac{\sin^2 \alpha \cos\alpha }{3 + \cos 2\alpha } \, \tilde{\mu}_i \, \rho^i \; .
\end{eqnarray}
}
All the comments made in section \ref{sec:SO4sectorinIIA} for the generic solution away from the G$_2$-locus apply to the specific ${\cal N}=3$ solution (\ref{SO4SolN=3}). The internal metric and supergravity forms extend smoothly over $S^6$. Locally, the solution can be regarded as a (trivial) $S^2$ bundle over $S^3$ foliated by $\alpha$ or, alternatively, as the warped generalisation of the twistor fibration discussed in section \ref{sec:Regularity}. The angle $\alpha$ has range (\ref{anglealpha}), $\tilde{\mu}^i$ parametrise $S^2$ via (\ref{eq:S2}) and $\rho^i$ are the right-invariant Maurer-Cartan one-forms on $S^3$, subject to (\ref{eq:MC}). The solution displays a cohomogeneity-one isometry group $\textrm{SO}(4) \equiv \textrm{SO}(3)_{\textrm{d}} \times \textrm{SO}(3)_{\textrm{R}}$, where $\textrm{SO}(3)_{\textrm{d}}$ and $\textrm{SO}(3)_{\textrm{R}}$ respectively act on the $S^2$ fibers and the $S^3$ base. The solution can be generalised by replacing $S^3$ with the cyclic lens space $S^3/\mathbb{Z}_p$, a generalisation that introduces orbifold singularities. The ${\cal N}=3$ supersymmetry of the solution is shown in the next section.
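The shrinking pattern just quoted can be confirmed directly from the explicit metric (\ref{SO4SolN=3}). The following check takes the coefficients of the $S^2$ and $S^3$ line elements inside the square bracket (the overall warp factor plays no role) and verifies by elementary limits that the $S^3$ collapses quadratically at $\alpha = 0$ while the $S^2$ collapses quadratically at $\alpha = \tfrac{\pi}{2}$:

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)

# Coefficients of the S^2 and S^3 line elements inside the square bracket of
# the N=3 internal metric, with the overall warp factor stripped off
S2 = 2*(3 + sp.cos(2*a))*sp.cos(a)**2 / (3*sp.cos(a)**4 + 3*sp.cos(a)**2 + 2)
S3 = 8*sp.sin(a)**2 / (3 + sp.cos(2*a))

# Lower endpoint: S^2 stays finite, S^3 collapses as 2 alpha^2
print(sp.limit(S2, a, 0), sp.limit(S3 / a**2, a, 0))  # 1  2

# Upper endpoint: S^3 stays finite, S^2 collapses as 2 (alpha - pi/2)^2
print(sp.limit(S3, a, sp.pi/2),
      sp.limit(S2 / (a - sp.pi/2)**2, a, sp.pi/2))    # 4  2
```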
\section{Supersymmetry of the ${\cal N}=3$ solution} \label{sec:N=3susy}
The gravitini of the $D=4$ ${\cal N}=8$ ISO(7) supergravity lie in the spinor representation of SO(7). Under (\ref{embedding_SO4}), this branches as\footnote{More precisely, here and below we refer to the Spin groups, $\mathrm{SU}(2)^\prime$, $\mathrm{SU}(2)_{\mathrm{L}}$, $\mathrm{SU}(2)_{\mathrm{R}}$ and $\mathrm{SU}(2)_{\mathrm{d}}$.}
\begin{eqnarray}
\label{eq:8ofSO7branching}
%
\mathbf{8}\;\stackrel{\mathrm{SO}(3)^\prime \times \mathrm{SO}(3)_{\mathrm{L}} \times \mathrm{SO}(3)_{\mathrm{R}} }{\longrightarrow} \; ( \mathbf{2} , \mathbf{2}, \mathbf{1}) + ( \mathbf{2} , \mathbf{1}, \mathbf{2})
%
%
\;\stackrel{ \mathrm{SO}(3)_{\mathrm{d}} \times \mathrm{SO}(3)_{\mathrm{R}} }{\longrightarrow} \;
%
( \mathbf{1} , \mathbf{1} ) + ( \mathbf{3} , \mathbf{1} ) + ( \mathbf{2} , \mathbf{2} ) \; .
%
\end{eqnarray}
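The first step of this branching under the diagonal subgroup, $( \mathbf{2} , \mathbf{2}, \mathbf{1}) \rightarrow ( \mathbf{1} , \mathbf{1} ) + ( \mathbf{3} , \mathbf{1} )$, is the Clebsch--Gordan decomposition $\mathbf{2}\otimes\mathbf{2} = \mathbf{1}\oplus\mathbf{3}$. The underlying SU(2) character identity can be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # half the SU(2) rotation angle

# SU(2) character of the spin-j representation (Weyl character formula)
def chi(j):
    return sp.sin((2*j + 1)*x) / sp.sin(x)

# 2 (x) 2 = 1 (+) 3, i.e. chi_{1/2}^2 = chi_0 + chi_1
lhs = chi(sp.Rational(1, 2))**2
rhs = chi(0) + chi(1)
print(sp.simplify(sp.expand_trig(lhs - rhs)))  # 0
```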
At the ${\cal N}=1$ G$_2$--invariant AdS critical point, only the $( \mathbf{1} , \mathbf{1} )$ gravitino remains massless, while all others pick up masses \cite{Guarino:2015qaa}. The full symmetry of this solution within the $D=4$ ${\cal N}=8$ supergravity is $\textrm{OSp}(4|1) \times \textrm{G}_2$. At the ${\cal N}=3$, SO(4)--invariant critical point, it is the $ ( \mathbf{3} , \mathbf{1} )$ gravitini that remain massless \cite{Guarino:2015qaa}. While the ${\cal N}=3$ critical point is invariant under $\mathrm{SO}(4) \equiv \mathrm{SO}(3)_{\mathrm{d}} \times \mathrm{SO}(3)_{\mathrm{R}}$, the massless gravitini are only invariant under the second factor, and transform as a triplet under the first factor. The symmetry of the ${\cal N}=3$ solution within the ${\cal N}=8$ theory is thus $\textrm{OSp}(4|3) \times \mathrm{SO}(3)_{\mathrm{R}}$, with $\mathrm{SO}(3)_{\mathrm{d}} \subset \textrm{OSp}(4|3)$ identified as the R-symmetry group.
These (super)symmetry groups are preserved by the ten-dimensional uplift, so the above considerations should allow us to identify the $G$-structures carried by the family of type IIA configurations (\ref{KKSO4sectorinIIA}). The $\mathbb{R}^7$ that furnishes the fundamental representation of the semisimple part, SO(7), of the $D=4$ gauge group is to be identified with the ambient space of the uplifting $S^6$. In other words, this SO(7) can be regarded as the generic structure group of the ambient $\mathbb{R}^7$, with the internal supersymmetry parameters transforming in the $\mathbf{8}$. On the G$_2$--invariant locus (\ref{eq:SO4toG2}), the type IIA configuration (\ref{KKSO4sectorinIIA}) is ${\cal N}=1$. The G$_2$--invariant supersymmetry parameter corresponds, via (\ref{embedding_SO4_2}), to the $( \mathbf{1} , \mathbf{1} )$ singlet in (\ref{eq:8ofSO7branching}). The structure of $\mathbb{R}^7$ is reduced to G$_2$ (holonomy), which in turn descends into $S^6$ as a nearly-K\"ahler SU(3)-structure.
Away from the G$_2$ locus, as in the solution (\ref{SO4SolN=3}), the IIA configuration (\ref{KKSO4sectorinIIA}) is ${\cal N}=3$. The supersymmetry parameter transforms under $\mathrm{SO}(3)_{\mathrm{d}} \times \mathrm{SO}(3)_{\mathrm{R}}$ as the $( \mathbf{3} , \mathbf{1} )$ in (\ref{eq:8ofSO7branching}). The ambient $\mathbb{R}^7$ is thus equipped with an SU(2)--structure, with $\mathrm{SU}(2) \equiv \mathrm{SO}(3)_{\mathrm{R}}$ and R-symmetry $\mathrm{SO}(3)_{\mathrm{d}}$. Recall that an SU(2)--structure in seven dimensions is characterised by a real one-form and a real two-form, transforming as triplets of the R-symmetry group, see {\it e.g.}~\cite{DallAgata:2003txk}. Denoting the $\mathbb{R}^7$ coordinates by $x^I$, $I=1, \ldots, 7$, and splitting $I= (i, \hat{i})$, $i=1,2,3$, $\hat{i}=0,1,2,3$ as in appendix \ref{subset:S6Geom}, the one- and two-forms of our seven-dimensional $\mathrm{SO}(3)_{\mathrm{R}}$--structure can be identified as $dx^i$ and $\tfrac12 (J^i)_{\hat{i}\hat{j}} \, dx^{\hat{i}} \wedge dx^{\hat{j}} $, with $(J^i)_{\hat{i}\hat{j}}$ defined in (\ref{eq:QK4}). These indeed transform as triplets under the $\mathrm{SO}(3)_{\mathrm{d}}$ R-symmetry. The SU(2)-structure on $\mathbb{R}^7$ descends on $S^6$ as an identity structure. The latter is characterised by an $\mathrm{SO}(3)_{\mathrm{d}}$ triplet of scalars, of one-forms, and of two-forms, that can be constructed as spinor bilinears.
Rather than characterising the identity structure, we will directly construct the $\mathrm{SO}(3)_{\mathrm{d}}$ triplet of Killing spinors, focusing on the ${\cal N}=3$ solution (\ref{SO4SolN=3}) for definiteness. In principle, one would expect that the consistency of the uplift should determine the relevant Killing spinors from combinations of those of the round $S^6$. In practice, however, such formulae have never been worked out (although see {\it e.g.}~\cite{Nicolai:2011cy} for a discussion). It then turns out to be more efficient, though still a rather demanding exercise, to construct the Killing spinors by direct integration of the type IIA Killing spinor equations on the background (\ref{SO4SolN=3}). Here we give the end result and sketch the main steps to derive it. Further details can be found in appendices \ref{app:KillingSpinors} and \ref{app:doublets}.
Let $\hat \epsilon$ be the ten-dimensional Majorana supersymmetry parameter, let $\zeta^i_{\pm}$, $i=1,2,3$, be three of the chiral and antichiral Killing spinors of AdS$_4$, and let $\chi^i$ be an $\mathrm{SO}(3)_{\mathrm{d}}$ triplet of Dirac spinors on the internal six-dimensional geometry corresponding to the solution (\ref{SO4SolN=3}). We take
\begin{equation} \label{eps10D}
\hat{\epsilon} = \zeta_{i+} \otimes \chi^i+\zeta_{i -} \otimes \chi^{ic} \; ,
\end{equation}
with the $\mathrm{SO}(3)_{\mathrm{d}}$ indices contracted, and raised and lowered with $\delta_{ij}$. The superscript $c$ denotes Majorana conjugation. The ten-dimensional spinor $\hat{\epsilon}$ given by (\ref{eps10D}) is manifestly Majorana, by the second relation in (\ref{eq:KSEAdS4}). We require that (\ref{eps10D}) annihilates the supersymmetry variations of the type IIA fermions. Using the AdS$_4$ Killing spinor equations (\ref{eq:KSEAdS4}) obeyed by $\zeta^i_{\pm}$, this turns out to be equivalent to the following set of equations for $\chi^i$ and $\chi^{ic}$, defined on the six-dimensional internal geometry:
\begin{subequations}
\begin{align}
& e^{-\widetilde A} \chi^{ic}+ \bigg[ d \tilde{\slashed{A} }+ \tfrac14 e^{\hat{\phi}} \left( \hat{F}_{\sst{(0)}} + \slashed{\hat{F}}_{\sst{(2)}} \hat\gamma+ \slashed{\hat{G}}_{\sst{(4)}} - i \hat{G}_{\sst{(0)}}\right)\bigg]\chi^i=0 \; ,\label{eq:6dSUSY1}\\[4pt]
&\bigg[ d \hat{\slashed{\phi}} + \tfrac{1}{2} \slashed{\hat{H}}_{\sst{(3)}} \hat \gamma+ \tfrac14 e^{\hat{\phi}} \left(5 \hat{F}_{\sst{(0)}} + 3 \slashed{\hat{F}}_{\sst{(2)}} \hat\gamma+ \slashed{\hat{G}}_{\sst{(4)}} +i \hat{G}_{\sst{(0)}} \right)\bigg]\chi^i=0 \; ,\label{eq:6dSUSY2}\\[4pt]
&\bigg[\nabla_{\underline{M}} +\tfrac{1}{4} \slashed{\hat{H}}_{\underline{M}} \hat\gamma+ \tfrac18 e^{\widetilde A+\hat{\phi}} \left( \hat{F}_{\sst{(0)}} - \slashed{\hat{F}}_{\sst{(2)}} \hat\gamma+ \slashed{\hat{G}}_{\sst{(4)}} + i \hat{G}_{\sst{(0)}} \right)\gamma_{\underline{M}} \bigg]\chi^i=0\label{eq:6dSUSY3} \; .
\end{align}
\end{subequations}
Here,
\begin{equation} \label{eq:stringframeWF}
e^{2\widetilde A} = e^{\frac12 \phi_0} L^2 \, (3 + \cos 2\alpha)^{1/2} \; ,
\end{equation}
is, for convenience, the string-frame warp factor of the solution (\ref{SO4SolN=3}), $\hat{\phi}$ the dilaton therein, $\hat{F}_{\sst{(0)}}$, $\hat{F}_{\sst{(2)}}$, $\hat{H}_{\sst{(3)}}$ the IIA field strengths, $\hat{G}_{\sst{(4)}}$ the internal component of $\hat{F}_{\sst{(4)}}$ and $\hat{G}_{\sst{(0)}} \equiv 3\sqrt{2} \, L^{3} e^{-\frac{1}{4} \phi_0} \, e^{-4\widetilde A}$. Also, $\gamma_{\underline{M}}$ are the six-dimensional gamma matrices, with $\underline{M} = 1, \ldots , 6$ tangent-space indices, $\hat{\gamma}$ is the six-dimensional chirality matrix, $\slashed{\hat{H}}_{\sst{(3)}} \equiv \frac{1}{3!} \hat{H}_{\underline{M} \underline{N} \underline{P}} \, \gamma^{\underline{M} \underline{N} \underline{P}}$, and $\slashed{\hat{H}}_{\underline{M}} \equiv \frac{1}{2!} \hat{H}_{\underline{M} \underline{N} \underline{P}} \, \gamma^{ \underline{N} \underline{P}}$, etc., with $\gamma^{\underline{M_1} \ldots \underline{M_n} } \equiv \gamma^{[\underline{M_1} } \cdots \gamma^{\underline{M_n]} } $.
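The Clifford algebra conventions just listed can be realised explicitly with the standard tensor-product construction out of Pauli matrices. The basis below is purely illustrative (the text does not commit to one, and the phase of $\hat\gamma$ is a convention choice); it verifies $\{\gamma_{\underline M},\gamma_{\underline N}\} = 2\delta_{\underline{M}\underline{N}}$, $\hat\gamma^2 = 1$ and $\{\hat\gamma,\gamma_{\underline M}\}=0$:

```python
import numpy as np

# Identity and Pauli matrices
s = [np.eye(2),
     np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# An illustrative Hermitian basis of 6d Euclidean gamma matrices (8x8)
gammas = [kron3(s[1], s[0], s[0]),
          kron3(s[2], s[0], s[0]),
          kron3(s[3], s[1], s[0]),
          kron3(s[3], s[2], s[0]),
          kron3(s[3], s[3], s[1]),
          kron3(s[3], s[3], s[2])]

# Clifford algebra {gamma_M, gamma_N} = 2 delta_MN
for M in range(6):
    for N in range(6):
        anti = gammas[M] @ gammas[N] + gammas[N] @ gammas[M]
        assert np.allclose(anti, 2*(M == N)*np.eye(8))

# Chirality matrix: proportional to gamma_1 ... gamma_6, with the phase
# fixed (a convention choice) so that hat{gamma}^2 = 1
hat = -1j * np.linalg.multi_dot(gammas)
assert np.allclose(hat @ hat, np.eye(8))
for g in gammas:
    assert np.allclose(hat @ g, -g @ hat)
print("Clifford algebra and chirality checks passed")
```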
As argued above, the spinor $\chi^i$ must transform in the $( \mathbf{3} , \mathbf{1} )$ of the $\mathrm{SO}(4) \equiv \mathrm{SO}(3)_{\mathrm{d}} \times \mathrm{SO}(3)_{\mathrm{R}}$ symmetry group of the solution (\ref{SO4SolN=3}). As shown in appendix \ref{app:KillingSpinors}, the most general such spinor may be written as
\begin{equation}\label{eq:6d triplets}
\chi^i = \tfrac{1}{2} \, e^{\frac{\widetilde A}{2}} \, \bigg[\left(\begin{array}{c} f_{1+}\\ f_{1-} \end{array}\right )\otimes\eta^i_1+\left(\begin{array}{c} f_{2+}\\ f_{2-} \end{array}\right )\otimes\eta^i_2+\left(\begin{array}{c} f_{3+}\\ f_{3-} \end{array}\right )\otimes\eta^i_3+\left(\begin{array}{c} f_{4+}\\ f_{4-} \end{array}\right )\otimes\eta^i_4\bigg] \; .
\end{equation}
The factor of $\tfrac{1}{2} \, e^{\frac{\widetilde A}{2}}$ is chosen for convenience, $f_{1 \pm}$, etc., are functions of $\alpha$, and $\eta^i_1 , \ldots , \eta^i_4$ are independent triplets of spinors on $S^2 \times S^3$ built as tensor products of the Killing spinors of $S^2$ and $S^3$. Specifically, let $\psi^\alpha$, $\alpha = 1,2$, be a doublet of spinors of $S^2$, constructed from the $S^2$ Killing spinors, and $\hat{\psi}^{\alpha}=(\sigma_2)^{\alpha}_{~\beta}\psi^{\beta c}$ for $\psi^{\alpha c}$ the Majorana conjugate of $\psi^{\alpha}$. The index $\alpha$ here labels the doublet of $\textrm{SO}(3)^\prime$ in (\ref{embedding_SO4}) which rotates $S^2$. Let $\xi^\alpha$, $\alpha =1,2$, be the two Killing spinors of $S^3$ that transform as a doublet under the $\mathrm{SO}(3)_{\mathrm{L}}$ in (\ref{embedding_SO4}) and are singlets under $\mathrm{SO}(3)_{\mathrm{R}}$. Then,
\begin{eqnarray} \label{eq:triplets}
\eta^i_1 \equiv (\sigma_2 \sigma_i)_{\alpha\beta} \, \psi^{\alpha}\otimes \xi^{\beta} \; , \qquad
\eta^i_2 \equiv (\sigma_2 \sigma_i)_{\alpha\beta} \, \hat\psi^{ \alpha}\otimes \xi^{\beta} \; , \nonumber \\[4pt]
\eta^i_3 \equiv \tilde{\mu}_i (\sigma_2 )_{\alpha\beta} \, \psi^{\alpha}\otimes \xi^{\beta} \; , \qquad
\eta^i_4 \equiv \tilde{\mu}_i(\sigma_2 )_{\alpha\beta} \, \hat\psi^{ \alpha}\otimes \xi^{\beta} \; ,
\end{eqnarray}
where $\sigma_i$ are the Pauli matrices and $\tilde{\mu}_i$ are defined in (\ref{eq:S2}). The Pauli matrix $\sigma_2$ appears as the $\mathrm{SO}(3)_{\mathrm{d}}$ charge conjugation matrix.
Inserting $\chi^i$ given by (\ref{eq:6d triplets}), (\ref{eq:triplets}) and its Majorana conjugate $\chi^{ic}$ into the Killing spinor equations (\ref{eq:6dSUSY1})--(\ref{eq:6dSUSY3}), an involved calculation produces an (overdetermined) system of algebraic relations among the functions $f_{1 \pm}$, etc., and a differential equation on the interval (\ref{anglealpha}) for a combination of them. The details are summarised in appendix \ref{app:KillingSpinors}. Significant further massaging allows us to bring the solution of this set of algebraic and differential equations into the form
\begin{align}\label{eq:fs}
f_{1+}&=- i f_{2-} = \cos\left(\tfrac{\beta}{2}\right) e^{ i (\Psi_+-\frac{1}{2}\Theta)} \; ,\nonumber\\[4pt]
f_{2+}&= i f_{1-} = \sin\left(\tfrac{\beta}{2}\right) e^{- i (\Psi_-+\frac{1}{2}\Theta)} \; ,\nonumber\\[4pt]
f_{3+}&= i f_{4-} = \frac{\sqrt{2}\cos\alpha}{\sqrt{\cos^2\alpha+1}}\cos\left(\tfrac{\beta}{2}\right) e^{ i (\frac{\pi}{3}+\Psi_+-\frac{1}{2}\Theta)} \; ,\nonumber\\[4pt]
f_{4+}&=- i f_{3-} = \frac{\sqrt{2}\cos\alpha}{\sqrt{\cos^2\alpha+1}}\sin\left(\tfrac{\beta}{2}\right) e^{ i (\frac{\pi}{3}-\Psi_--\frac{1}{2}\Theta)} \; ,
\end{align}
up to an arbitrary overall normalisation. We have defined the following functions of $\alpha$:
\begin{align} \label{eq:auxfuncs}
\tan\Theta& \equiv \sqrt{\frac{2}{3}}\frac{1}{\cos\alpha\sqrt{\cos^2\alpha+1}} \; , \nonumber\\[4pt]
\cos\beta& \equiv \frac{\sqrt{2} \sin\alpha}{\sqrt{\cos^2\alpha+1}\sqrt{3\cos^4\alpha+3 \cos^2\alpha+ 2}} \; ,\nonumber\\[4pt]
\tan\Psi_{\pm}& \equiv \pm\sqrt{\frac{2}{3}}\frac{1}{\cos\alpha\sqrt{\cos^2\alpha+1}}-\frac{\sqrt{3\cos^4\alpha+3 \cos^2\alpha+ 2}\sin\alpha}{\sqrt{3}\cos\alpha \sqrt{\cos^2\alpha+1}(\sqrt{2}\cos\alpha-\sqrt{\cos^2\alpha+1})} \; .
\end{align}
The $\mathrm{SO}(3)_{\mathrm{d}}$ triplet of $\mathrm{SO}(3)_{\mathrm{R}}$--invariant spinors $\chi^i$ given by (\ref{eq:6d triplets}) with (\ref{eq:fs}), (\ref{eq:auxfuncs}) solve the Killing spinor equations (\ref{eq:6dSUSY1})--(\ref{eq:6dSUSY3}) on the ${\cal N}=3$ solution (\ref{SO4SolN=3}) of massive type IIA supergravity. As an additional check, we have also verified that the three independent $\mathcal{N}=1$ pure spinors that follow from (\ref{eq:6d triplets}) with (\ref{eq:fs}), (\ref{eq:auxfuncs}) solve the pure spinor supersymmetry conditions for AdS$_4$ solutions of massive IIA supergravity given in \cite{Grana:2006kf}.
Equipped with the ${\cal N}=3$ Killing spinors, we can proceed to compute the spinor bilinear forms and the torsion classes of the corresponding identity structure. Here we will only give the scalar bilinears. We expect one $\mathrm{SO}(3)_{\mathrm{d}}$ triplet of scalar bilinears, based on the fact that the six-dimensional identity structure is inherited from an $\mathrm{SO}(3)_{\mathrm{R}}$--structure on the ambient $\mathbb{R}^7$. Let us see how this scalar triplet arises from spinor bilinears. In principle, two such real or purely imaginary scalar bilinears can be constructed out of $\chi^i$, namely, $\chi^{i\dag}\chi^{j}$ and $\chi^{i\dag}\hat\gamma\chi^{j}$. Both of these sit in principle in the $\bm{3} \times \bm{3} \rightarrow \bm{1} + \bm{3} + \bm{5}$ of $\mathrm{SO}(3)_{\mathrm{d}}$. Direct computation from (\ref{eq:6d triplets}), (\ref{eq:fs}), (\ref{eq:auxfuncs}) shows that
\begin{equation}\label{eq:norms}
\chi^{i\dag}\hat\gamma\chi^{j}=- i e^{\widetilde A} \, \frac{ \sqrt{2}\cos\alpha}{\sqrt{\cos^2\alpha+1}} \, \epsilon^{ijk} \tilde{\mu}_k \; ,\qquad
\chi^{i\dag}\chi^{j}= e^{\widetilde A} \, \delta^{ij} \; .
\end{equation}
Thus, for both bilinears, the $\bm{5}$ components vanish identically. The first bilinear is the triplet argued above, and the second one is a singlet which, however, is not independent but is algebraically related to the former.
Equation (\ref{eq:norms}) provides a further consistency check on our Killing spinors. It was shown in \cite{Grana:2006kf} that, for ${\cal N}=1$ supersymmetric warped product solutions of massive IIA supergravity containing AdS$_4$, the ${\cal N}=1$ internal Killing spinor $\chi$ must satisfy $\chi^{\dag}\hat\gamma\chi=0$ and $\chi^{\dag}\chi \propto e^{\widetilde A}$, where $e^{2\widetilde A}$ is the string frame warp factor. It is straightforward to see from (\ref{eq:norms}) with $i=j$ that each individual $\chi^i$, $i=1,2,3$, satisfies these ${\cal N}=1$ conditions.
\section{Outlook}
In this paper we have studied an ${\cal N}=3$ solution of massive IIA supergravity first considered in \cite{Pang:2015vna,Pang:2015rwd}. We have described in detail the sector of ISO(7) supergravity with SO(4) invariance, which includes this solution as a point in its moduli space. This has allowed us to better understand its geometry. The solution consists of a fibration over an interval $I$ of a certain $S^2$-bundle $M_5$ over $S^3$, with the $S^2$ shrinking at one endpoint of the interval and the $S^3$ at the other, so that the full topology is that of an $S^6$ (as expected for vacua of the ISO(7) supergravity).
Moreover, we have been able to obtain the spinorial parameters $\chi^i$, $i=1,2,3$ under which it is supersymmetric, thus confirming the expectation that it has ${\cal N}=3$ supersymmetry. This expectation was based on the amount of supersymmetry of the vacuum in the four-dimensional ISO(7) supergravity; but while uplift formulas are available for all physical fields, they are not for the supersymmetry parameters, and thus so far a full proof that the solution is ${\cal N}=3$ was lacking.
Our results open the way to several possible developments. First of all, the structure of the spinorial parameters $\chi^i$ is not completely fixed by the SO(4) invariance. The solution has cohomogeneity one: the SO(4) orbits are copies of the $S^2$ bundle over $S^3$, and thus a priori the isometry group leaves several functions of the coordinate $\alpha$ on $I$ that appear in the $\chi^i$ undetermined. For the present solution these are fixed by the Killing spinor equations, but it is easy to set up a more general Ansatz where both these functions and those in the physical fields are allowed to vary, without breaking the SO(4) invariance and in particular ${\cal N}=3$ supersymmetry (whose R-symmetry is one of the SO(3) factors in the SO(4)).
Several arguments lead one to suspect the existence of more general ${\cal N}=3$ solutions in massive IIA. On $\mathbb{CP}^3$, such solutions are predicted to exist by holography \cite{gaiotto-t} and found \cite{gaiotto-t2} in first approximation in a regime where the Romans mass $\hat{F}_{\sst{(0)}}$ is small. Varying $\hat{F}_{\sst{(0)}}$ beyond this regime suggests the existence of a line of solutions. Since $\mathbb{CP}^3$ can be written as a foliation of copies of $T^{1,1}$, it is plausible that such solutions might be related to the ones we are considering here, and that thus there might be a line of deformations in this case, too. A possible analogy is offered by ${\cal N}=2$ solutions: in that case, a line of solutions exists \cite{Aharony:2010af} that connects the ${\cal N}=6$ massless solution on $\mathbb{CP}^3$ to an analogue of the solution in \cite{Guarino:2015jca} obtained by replacing $\mathbb{CP}^2$ with $\mathbb{CP}^1\times \mathbb{CP}^1$.
\section*{Acknowledgements}
We thank Alberto Zaffaroni for discussions. NTM is funded by the Italian Ministry of Education, Universities and Research under the Prin project ``Non Perturbative Aspects of Gauge Theories and Strings'' (2015MP2CX4) and INFN. AT is supported in part by INFN. OV is supported by NSF grant PHY-1720364 and, partially, by grant FPA2015-65480-P (MINECO/FEDER UE) from the Spanish Government.
\section{Introduction}
Multidimensional continued fractions (MCFs) were introduced by Jacobi \cite{Jac} in order to answer a question posed by Hermite \cite{Her}, namely the existence of an algorithm defined over the real numbers that becomes eventually periodic when it processes algebraic irrationalities. In other words, Hermite asked for a generalization of the classical continued fraction algorithm that produces a periodic expansion if and only if the input is a quadratic irrational. The Jacobi algorithm deals with cubic irrationals and it was generalized to higher dimensions by Perron \cite{Per}. However, the Jacobi--Perron algorithm does not solve the Hermite problem, since it has never been proved that it becomes eventually periodic when processing algebraic irrationals. Many studies have been conducted on MCFs and their modifications, see, e.g., \cite{Ass}, \cite{Bea}, \cite{Ber}, \cite{Gar}, \cite{Ger}, \cite{Hen}, \cite{Kar}, \cite{Mur1}, \cite{Mur2}, \cite{Sch1}, \cite{Sch2}, \cite{Tam}.
In the 1970s, some authors started to study one--dimensional continued fractions over the $p$--adic numbers \cite{Bro1}, \cite{Rub}, \cite{Schn}. From these studies, it appeared difficult to find an algorithm working on the $p$--adic numbers that produces continued fractions having the same properties holding true over the real numbers (regarding approximation, finiteness and periodicity). In particular, no algorithm which provides a periodic expansion for all quadratic irrationalities has been found. Continued fractions over the $p$--adic numbers have also been studied recently in several works, like \cite{Bed1}, \cite{Bed2}, \cite{Bro2}, \cite{Cap}, \cite{Han}, \cite{Lao}, \cite{Mil}, \cite{Oot}, \cite{Poo}, \cite{Til}, \cite{Weg}.
Motivated by the above researches, in \cite{MT}, the authors started the study of MCFs in $\mathbb Q_p$, providing some results about convergence and finiteness. In particular, they gave a sufficient condition on the partial quotients of a MCF that ensures the convergence in $\mathbb Q_p$. Moreover, they presented an algorithm that terminates in a finite number of steps when rational numbers are processed.
The scope of that work was to introduce the subject and provide some general properties; the terminating inputs of this algorithm were not fully characterized and the periodicity properties were not studied at all. This paper represents a continuation of the previous work, extending the investigation in these two directions. In particular, in Section \ref{sec:pre}, we fix the notation and we show some properties that can also be of general interest for MCFs. Section \ref{sec:fin} is devoted to the finiteness of the $p$-adic Jacobi--Perron algorithm, providing some results that improve on the previous work \cite{MT} and also showing some differences with the real case. In $\mathbb R$, it is known that the Jacobi--Perron algorithm of dimension $2$ detects rational dependence, i.e., it terminates in a finite number of steps if and only if it processes $\QQ$-linearly dependent inputs. On the contrary, we show that this is not always true in $\mathbb Q_p$, and we also prove that in this case infinitely many partial quotients of the MCF have $p$--adic valuation equal to $-1$. Moreover, we give a condition that ensures the finiteness of the $p$--adic Jacobi--Perron algorithm in any dimension in terms of the $p$--adic valuation of the partial quotients. In Section \ref{sec:per}, we study the periodicity of MCF in $\mathbb Q_p$.
Specifically, we introduce the characteristic polynomial related to a purely periodic $p$--adic MCF and we see that, as in the real case, it admits a $p$--adic dominant root which generates a field containing the limits of the MCF.
Consequently, we see that a periodic MCF of dimension $m$ converges to algebraic irrationalities of degree less than or equal to $m+1$, as in the real case. A further investigation of the characteristic polynomial allows us to characterize some cases where the degree is maximal.
We conclude our work with a conjecture which, if proved to be true, would give a characterization of the MCFs arising by applying the $p$-adic Jacobi-Perron algorithm to $m$-tuples consisting of $\QQ$-linear dependent numbers.
\section{Preliminaries and notation} \label{sec:pre}
The classical Jacobi--Perron algorithm processes an $m$--tuple of real numbers $\bm{\alpha}_0 = (\alpha_0^{(1)}, \ldots, \alpha_0^{(m)})$ and represents them by means of an $m$--tuple of integer sequences $(\mathbf{a}^{(1)}, \ldots, \mathbf{a}^{(m)}) = ((a_n^{(1)})_{n\geq 0}, \ldots, (a_n^{(m)})_{n\geq 0})$ (finite or infinite) determined by the following iterative equations:
\begin{equation*}
\begin{cases} a_n^{(i)} = [\alpha_n^{(i)}], \quad i = 1, ..., m, \cr
\alpha_{n+1}^{(1)} = \cfrac{1}{\alpha_n^{(m)} - a_n^{(m)}}, \cr
\alpha_{n+1}^{(i)} = \cfrac{\alpha_n^{(i-1)} - a_n^{(i-1)}}{\alpha_n^{(m)} - a_n^{(m)}}, \quad i = 2, ..., m, \end{cases} n = 0, 1, 2, ...
\end{equation*}
The integers $a_n^{(i)}$ and the real numbers $\alpha_n^{(i)}$, for $i = 1, \ldots, m$ and $n = 0, 1, \ldots$, are called \emph{partial quotients} and \emph{complete quotients}, respectively. The sequences of the partial quotients represent the starting vector $\bm{\alpha}_0$ by means of the equations
\begin{equation} \label{eq:MCF}\begin{cases} \alpha_n^{(i-1)} = a_n^{(i-1)} + \cfrac{\alpha_{n+1}^{(i)}}{\alpha_{n+1}^{(1)}}, \quad i = 2, ..., m \cr
\alpha_n^{(m)} = a_n^{(m)} + \cfrac{1}{\alpha_{n+1}^{(1)}} \end{cases} n = 0, 1, 2, ...\end{equation}
which produce objects that generalize the classical continued fractions and are usually called \emph{multidimensional continued fractions} (MCFs).
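As a quick illustration of the iteration above, the classical algorithm can be prototyped in a few lines of Python (our own sketch, not part of the original development; the function name and the use of floating-point arithmetic are our choices):

```python
from math import floor

def jacobi_perron_real(alpha, max_steps):
    """Run the classical Jacobi-Perron algorithm on an m-tuple of reals.

    Returns the list of partial-quotient tuples a_n; stops early when
    alpha_n^(m) - a_n^(m) = 0, i.e. when the expansion is finite.
    """
    alpha = list(alpha)
    m = len(alpha)
    quotients = []
    for _ in range(max_steps):
        a = [floor(x) for x in alpha]
        quotients.append(a)
        frac = alpha[-1] - a[-1]              # alpha_n^(m) - a_n^(m)
        if frac == 0:
            break
        inv = 1 / frac                        # alpha_{n+1}^(1)
        # alpha_{n+1}^(i) = (alpha_n^(i-1) - a_n^(i-1)) / (alpha_n^(m) - a_n^(m))
        alpha = [inv] + [(alpha[i - 1] - a[i - 1]) * inv for i in range(1, m)]
    return quotients
```

For instance, `jacobi_perron_real((0.5, 0.25), 10)` stops after two steps, while irrational inputs such as $(\sqrt 2, \sqrt 3)$ produce an expansion that is here simply truncated at `max_steps`.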
The Jacobi--Perron algorithm has been translated into the $p$--adic field in \cite{MT}, using the function $s$ defined below, which plays the role of the floor function. We define the set
$$\mathcal{Y}=\ZZ\left [\frac 1 p\right ]\cap \left (-\frac p2,\frac p 2\right).$$
\begin{definition}
The \emph{Browkin $s$-function} $s:\QQ_p\longrightarrow \mathcal{Y}$ is defined by
$$s(\alpha)= \sum_{j=k}^0 x_jp^j,$$
for every $\alpha\in\QQ_p$ written as
\(\alpha=\sum_{j=k}^\infty x_jp^j, \hbox{with } k, x_j\in\ZZ \hbox{ and } x_j\in \left (-\frac p 2,\frac p 2\right).\)
\end{definition}
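For rational arguments the $s$-function can be computed with exact arithmetic. The following Python sketch (our own illustration; it assumes $p$ is an odd prime and relies on Python's modular inverse `pow(d, -1, p)`, available from version 3.8) extracts the balanced digits $x_j \in \left(-\frac p2,\frac p2\right)$ for $j = v_p(\alpha), \ldots, 0$:

```python
from fractions import Fraction

def browkin_s(alpha, p):
    """Browkin s-function on a rational p-adic number: the truncation
    sum_{j=k}^{0} x_j p^j with balanced digits x_j in (-p/2, p/2)."""
    alpha = Fraction(alpha)
    if alpha == 0:
        return Fraction(0)
    # p-adic valuation k = v_p(alpha)
    k, num, den = 0, alpha.numerator, alpha.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    s, x = Fraction(0), alpha
    for j in range(min(k, 0), 1):            # j = k, ..., 0 (only j <= 0 matter)
        t = x / Fraction(p) ** j             # p-adically integral at this stage
        d = (t.numerator * pow(t.denominator, -1, p)) % p
        if d > p // 2:                       # balance the digit into (-p/2, p/2)
            d -= p
        s += d * Fraction(p) ** j
        x -= d * Fraction(p) ** j
    return s
```

For example, with $p=5$ one finds $s(1/3)=2$, $s(1/2)=-2$ and $s(2/5)=2/5$, and in every case $v_p(\alpha - s(\alpha)) > 0$, in accordance with the definition.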
Hence, the $p$--adic Jacobi--Perron algorithm processes an $m$--tuple of $p$--adic numbers $\bm{\alpha}_0 = (\alpha_0^{(1)}, \ldots, \alpha_0^{(m)})$ by the following iterative equations
\begin{equation} \label{eq:alg} \begin{cases} a_n^{(i)} = s(\alpha_n^{(i)}) \cr
\alpha_{n+1}^{(1)} = \cfrac{1}{\alpha_n^{(m)} - a_n^{(m)}} \cr
\alpha_{n+1}^{(i)} = \alpha_{n+1}^{(1)}\cdot (\alpha_n^{(i-1)} - a_n^{(i-1)}) = \cfrac{\alpha_n^{(i-1)} - a_n^{(i-1)}}{\alpha_n^{(m)} - a_n^{(m)}}, \quad i = 2, ..., m
\end{cases}
\end{equation}
for $n = 0, 1, 2, \ldots$, which define a $p$--adic MCF $[(a_0^{(1)}, a_1^{(1)}, \ldots), \ldots, (a_0^{(m)}, a_1^{(m)}, \ldots)]$ representing the starting $m$--tuple $\bm{\alpha}_0$ in the following way:
\begin{equation*}
\alpha_n^{(i)} = a_n^{(i)} + \cfrac{\alpha_{n+1}^{(i+1)}}{\alpha_{n+1}^{(1)}}, \quad \alpha_{n}^{(m)}= a_n^{(m)} + \cfrac{1}{\alpha_{n+1}^{(1)}}
\end{equation*}
for $i = 1, \ldots, m-1$ and any $n \geq 0$. The partial quotients satisfy the following conditions:
\begin{equation} \label{eq:conv}
\begin{cases}
\lvert a_n^{(1)} \rvert > 1 \cr
\lvert a_n^{(i)} \rvert < \lvert a_n^{(1)} \rvert, \quad i = 2, \ldots, m
\end{cases}
\end{equation}
for any $n \geq 1$, where in the following $\lvert \cdot \rvert$ will always denote the $p$--adic norm. Moreover, for any $n \geq 1$, we have
\begin{align} & \lvert a_n^{(1)} \rvert = \lvert \alpha_n^{(1)} \rvert,
\hbox{ and for $i = 2, \ldots, m$}\nonumber\\
& \lvert a_n^{(i)} \rvert =
\left\{
\begin{array}{l} \lvert \alpha_n^{(i)}\rvert
\hbox{ if $\lvert \alpha_n^{(i)} \rvert\geq 1$}\\
0 \hbox{ if $\lvert \alpha_n^{(i)} \rvert < 1$}\end{array}
\right .\label{eq:alpha-norme}\\
& \lvert \alpha_n^{(i)} \rvert < \lvert \alpha_n^{(1)} \rvert \quad . \nonumber
\end{align}
\begin{remark}
In \cite{MT}, the authors showed that equations \eqref{eq:conv} ensure the convergence of an MCF in $\mathbb Q_p$, i.e., given a sequence of partial quotients satisfying \eqref{eq:conv} (even if they are not obtained by a specific algorithm), then the corresponding MCF converges to an $m$--tuple of $p$--adic numbers.
\end{remark}
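On rational inputs all complete quotients remain rational, so the iteration \eqref{eq:alg} can be run exactly. The sketch below (our own illustration; the helper names are ours, and the $s$-function is implemented as in the definition above with Python's `Fraction`) stops in finitely many steps on rational inputs, in agreement with the finiteness results recalled in Section \ref{sec:fin}:

```python
from fractions import Fraction

def browkin_s(alpha, p):
    """Truncation sum_{j=v_p(alpha)}^{0} x_j p^j with digits in (-p/2, p/2)."""
    alpha = Fraction(alpha)
    if alpha == 0:
        return Fraction(0)
    k, num, den = 0, alpha.numerator, alpha.denominator
    while num % p == 0:
        num, k = num // p, k + 1
    while den % p == 0:
        den, k = den // p, k - 1
    s, x = Fraction(0), alpha
    for j in range(min(k, 0), 1):
        t = x / Fraction(p) ** j
        d = (t.numerator * pow(t.denominator, -1, p)) % p
        if d > p // 2:
            d -= p
        s += d * Fraction(p) ** j
        x -= d * Fraction(p) ** j
    return s

def padic_jacobi_perron(alpha, p, max_steps=50):
    """p-adic Jacobi-Perron algorithm of (eq:alg) on a rational input.

    Returns (partial_quotients, complete_quotients, terminated)."""
    alpha = [Fraction(a) for a in alpha]
    m = len(alpha)
    quotients, completes = [], [list(alpha)]
    for _ in range(max_steps):
        a = [browkin_s(x, p) for x in alpha]
        quotients.append(a)
        frac = alpha[-1] - a[-1]              # alpha_n^(m) - a_n^(m)
        if frac == 0:                         # finite expansion
            return quotients, completes, True
        inv = 1 / frac                        # alpha_{n+1}^(1)
        alpha = [inv] + [(alpha[i - 1] - a[i - 1]) * inv for i in range(1, m)]
        completes.append(list(alpha))
    return quotients, completes, False
```

For $p=5$ and $\bm{\alpha}_0 = (1/3, 1/2)$ the algorithm stops after three partial quotients, $a_0 = (2,-2)$, $a_1 = (2/5, 1)$, $a_2 = (-3/5, 0)$; note that $|a_n^{(1)}| = p$ for $n \geq 1$, in accordance with \eqref{eq:conv}.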
Similarly to the real case, we have the $n$--\emph{th convergents} of a multidimensional continued fraction defined by
$$
Q^{(i)}_n=\frac {A^{(i)}_{n}}{A^{(m+1)}_{n}},
$$
for $i=1,\ldots, m$ and $n\in\NN$, where
\begin{equation} \label{eq:nd-conv}
A^{(i)}_{-j} = \delta_{ij}, \quad A^{(i)}_{0} = a^{(i)}_0, \quad A^{(i)}_{n} = \sum_{j=1}^{m}a^{(j)}_{n}A^{(i)}_{n-j} + A_{n-m-1}^{(i)}
\end{equation}
for $i = 1, \ldots, m + 1$, $j = 1, \ldots, m$ and any $n \geq 1$, where $\delta_{ij}$ is the Kronecker delta. It can be proved by induction that for every $n \geq 1$ and $i = 1, \ldots, m$, we have
\begin{equation} \label{eq:alpha0}
\alpha_0^{(i)}=\frac {\alpha_n^{(1)}A^{(i)}_{n-1}+ \alpha_n^{(2)}A^{(i)}_{n-2}+\ldots +\alpha_n^{(m+1)}A^{(i)}_{n-m-1} }{\alpha_n^{(1)}A^{(m+1)}_{n-1}+ \alpha_n^{(2)}A^{(m+1)}_{n-2}+\ldots +\alpha_n^{(m+1)}A^{(m+1)}_{n-m-1} }
\end{equation}
We can also use the following matrices for evaluating numerators and denominators of the convergents:
\begin{equation}\label{eq:matriciA}
\mathcal{A}_n = \begin{pmatrix} a_n^{(1)} &1 &0&\ldots &0\\
a_n^{(2)} &0 &1&\ldots &0\\
\vdots& \vdots& \vdots& \vdots& \vdots\\
a_n^{(m)} &0 &0&\ldots &1\\ 1 &0 &0&\ldots &0\end{pmatrix}
\end{equation}
for any $n \geq 0$. Indeed, if we put
\begin{equation*}
\mathcal{B}_n = \begin{pmatrix} {A^{(1)}_{n}} &{A^{(1)}_{n-1}} &\ldots & {A^{(1)}_{n-m}}\\
{A^{(2)}_{n}} &{A^{(2)}_{n-1}} &\ldots & {A^{(2)}_{n-m}}\\
\vdots &\vdots&\vdots& \vdots\\
{A^{(m+1)}_{n}} &{A^{(m+1)}_{n-1}} &\ldots & {A^{(m+1)}_{n-m}}\end{pmatrix}
\end{equation*}
we have
\begin{equation*}
\mathcal{B}_n = \mathcal{B}_{n-1}\mathcal{A}_n = \mathcal{A}_0\mathcal{A}_1\ldots \mathcal{A}_n, \quad \det \mathcal{B}_n = (-1)^{m(n+1)}.
\end{equation*}
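These matrix identities are easy to verify numerically. In the Python sketch below (ours; the partial quotients are arbitrary integers, chosen only to exercise the recurrences) we build $\mathcal{B}_n$ as the product $\mathcal{A}_0 \cdots \mathcal{A}_n$ and check that its columns agree with the recurrence \eqref{eq:nd-conv} and that $\det \mathcal{B}_n = (-1)^{m(n+1)}$:

```python
def mat_A(a):
    """The matrix attached in (eq:matriciA) to a partial-quotient tuple a."""
    m = len(a)
    M = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(m):
        M[i][0] = a[i]            # first column: a^(1), ..., a^(m), 1
        M[i][i + 1] = 1           # shifted identity block
    M[m][0] = 1
    return M

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(M):
    """Laplace expansion along the first row (fine for these small sizes)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([r[:c] + r[c + 1:] for r in M[1:]])
               for c in range(len(M)))

def convergent_numerators(quots, m):
    """A^(i)_n via (eq:nd-conv); A[i][m + n] holds A^(i+1)_n for n >= -m."""
    N = len(quots)
    A = [[0] * (N + m) for _ in range(m + 1)]
    for j in range(1, m + 1):     # initial values A^(i)_{-j} = delta_{ij}
        A[j - 1][m - j] = 1
    for i in range(m + 1):        # A^(i)_0 = a^(i)_0, with a^(m+1)_0 = 1
        A[i][m] = quots[0][i] if i < m else 1
    for n in range(1, N):
        for i in range(m + 1):
            A[i][m + n] = sum(quots[n][j] * A[i][m + n - 1 - j]
                              for j in range(m)) + A[i][n - 1]
    return A

m, quots = 2, [[3, 1], [2, 5], [4, 2], [7, 3]]
A = convergent_numerators(quots, m)
B = mat_A(quots[0])
for n in range(len(quots)):
    if n > 0:
        B = mat_mul(B, mat_A(quots[n]))
    # columns of B_n are A_n, A_{n-1}, ..., A_{n-m}
    assert all(B[i][j] == A[i][m + n - j]
               for i in range(m + 1) for j in range(m + 1))
    assert det(B) == (-1) ** (m * (n + 1))
```

The same check can be run for any dimension $m$ by adjusting the partial-quotient tuples; for $m=2$ the determinant is identically $1$, while for odd $m$ it alternates in sign.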
We also recall some properties proved in \cite{MT}.
\begin{proposition}
With the notation above, we have
\begin{equation*} |A_n^{(m+1)}|= \prod_{h=1}^n |a^{(1)}_h|\end{equation*}
for any $n\geq 1$.
\end{proposition}
\begin{proposition} \label{prop:V}
Given the sequences $(V_n^{(i)})_{n\geq-m}$, $i = 1, \ldots, m$, defined by
$$V_n^{(i)} = A_n^{(i)} - \alpha_0^{(i)} A_n^{(m+1)}$$
we have
\begin{enumerate}
\item $\displaystyle \lim_{n \rightarrow +\infty} \lvert V_n^{(i)} \rvert = 0$
\item $V^{(i)}_n=\sum_{j=1}^{m+1}a_n^{(j)}V^{(i)}_{n-j}$
\item $\sum_{j=1}^{m+1} \alpha^{(j)}_{n}V^{(i)}_{n-j}=0.$
\end{enumerate}
\end{proposition}
Finally, we prove the following propositions that will be useful in the next sections.
\begin{proposition} \label{prop:sum-prod}
For $n\geq 1$, with the convention $\alpha^{(m+1)}_n = 1$, we have
$$\sum_{i=1}^{m+1}\alpha^{(i)}_n A^{(m+1)}_{n-i}=\prod_{j=1}^n \alpha^{(1)}_j.$$
\end{proposition}
\begin{proof} We proceed by induction on $n$. If $n=1$ the left-hand side is equal to $\alpha^{(1)}_1 A^{(m+1)}_{0}= \alpha^{(1)}_1$. For $n>1$ we can use the inductive hypothesis and write
\begin{eqnarray*}
\prod_{j=1}^{n-1} \alpha^{(1)}_j&=& \sum_{i=1}^{m+1}\alpha^{(i)}_{n-1}A^{(m+1)}_{n-1-i}\\
&=&A^{(m+1)}_{n-m-2} + \sum_{i=1}^{m}\alpha^{(i)}_{n-1}A^{(m+1)}_{n-1-i}\\ &=&A^{(m+1)}_{n-m-2} + \sum_{i=1}^{m}\left (a^{(i)}_{n-1}+\frac{\alpha^{(i+1)}_n}{\alpha^{(1)}_n}\right )A^{(m+1)}_{n-1-i} \\ &=& (\sum_{i=1}^{m}a^{(i)}_{n-1}A^{(m+1)}_{n-1-i} + A^{(m+1)}_{n-m-2} ) + \sum_{i=1}^{m}\left (\frac{\alpha^{(i+1)}_n}{\alpha^{(1)}_n}\right )A^{(m+1)}_{n-1-i}\\
&=& A_{n-1}^{(m+1)} + \sum_{i=1}^{m}\left (\frac{\alpha^{(i+1)}_n}{\alpha^{(1)}_n}\right )A^{(m+1)}_{n-1-i}\\
&=& \frac 1 {\alpha^{(1)}_n} \left( \alpha^{(1)}_nA_{n-1}^{(m+1)} + \sum_{i=1}^{m}\alpha^{(i+1)}_nA^{(m+1)}_{n-1-i} \right) \\
&=& \frac 1 {\alpha^{(1)}_n} \sum_{i=1}^{m+1}\alpha^{(i)}_nA^{(m+1)}_{n-i}, \end{eqnarray*}
proving the claim.
\end{proof}
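Proposition \ref{prop:sum-prod} is purely algebraic: it only uses the recursion relating $\bm\alpha_n$ to $\bm\alpha_{n+1}$ and the recurrence for $A^{(m+1)}_n$. A short exact-arithmetic check (our own sketch; the data come from a two-step run of the $p$-adic algorithm with $p=5$ on $(1/3,1/2)$, but any partial quotients avoiding division by zero would do):

```python
from fractions import Fraction as F

def jp_step(alpha, a):
    """One formal Jacobi-Perron step: alpha_{n+1} from alpha_n and a_n."""
    m = len(alpha)
    inv = 1 / (alpha[-1] - a[-1])
    return [inv] + [(alpha[i - 1] - a[i - 1]) * inv for i in range(1, m)]

def denominators(quots, m):
    """A^(m+1)_n for n = -m, ..., len(quots)-1; entry [m + n] holds A^(m+1)_n."""
    A = [F(0)] * m + [F(1)]       # A_{-m} = ... = A_{-1} = 0, A_0 = 1
    for n in range(1, len(quots)):
        A.append(sum(quots[n][j] * A[m + n - 1 - j] for j in range(m)) + A[n - 1])
    return A

# a_0, a_1 produced by the p-adic algorithm with p = 5 on alpha_0 = (1/3, 1/2)
alpha0 = [F(1, 3), F(1, 2)]
quots = [[F(2), F(-2)], [F(2, 5), F(1)]]
m = len(alpha0)

Aden = denominators(quots, m)
alphas = [alpha0]
for a in quots:
    alphas.append(jp_step(alphas[-1], a))

prod = F(1)
for n in range(1, len(alphas)):
    prod *= alphas[n][0]          # running product of the alpha_j^(1)
    # sum_{i=1}^{m+1} alpha_n^(i) A^(m+1)_{n-i}, with alpha_n^(m+1) = 1
    lhs = sum(alphas[n][i] * Aden[m + n - 1 - i] for i in range(m)) + Aden[n - 1]
    assert lhs == prod
```

For this run both sides equal $2/5$ at $n=1$ and $-6/25$ at $n=2$.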
\begin{proposition}\label{prop:boundA}
For $i=1,\ldots, m$ and $n\in\NN$ we have
$$|A^{(i)}_n|_\infty< \frac{p^{n+1}} 2,$$
where $|\cdot|_\infty$ denotes the Euclidean norm.
\end{proposition}
\begin{proof}
We prove the thesis by induction. For $n=0$,
$$ |A^{(i)}_0|_\infty =|a^{(i)}_0|_\infty <\frac p 2;$$
for $1\leq n\leq m$,
$$A^{(i)}_n= a^{(1)}_nA^{(i)}_{n-1}+\ldots + a^{(n)}_nA^{(i)}_0+a^{(n+i)}_n,$$
where we adopt the conventions $a^{(m+1)}_k=1$ and $a^{(j)}_k=0$ for $j>m+1$.
By induction hypothesis, and since $|a^{(i)}_k|_\infty <\frac p 2$ for every $k$,
$$|A^{(i)}_n|_\infty < \frac {p^{n+1}} 4+\frac {p^{n}} 4+\ldots +\frac{p^2} 4 + \frac p 2=\frac {p^2} 4\left (\frac {p^{n}-1}{p-1}\right )+\frac p 2< \frac {p^{n+1}} 2.$$
For $n>m$, we have
$$A^{(i)}_n= a^{(1)}_nA^{(i)}_{n-1}+\ldots + a^{(m)}_nA^{(i)}_{n-m}+A^{(i)}_{n-m-1}.$$
Again by induction hypothesis, and since $|a^{(i)}_k|_\infty <\frac p 2$ for every $k$,
$$|A^{(i)}_n|_\infty < \frac {p^{n+1}} 4+\frac {p^{n}} 4+\ldots +\frac{p^{n-m+2}} 4 + \frac{p^{n-m}} 2=\frac {p^{n-m+2}} 4\left (\frac {p^{m}-1}{p-1}\right )+ \frac{p^{n-m}} 2< \frac {p^{n+1}} 2.$$
\end{proof}
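This bound is easy to illustrate numerically (our own sketch; the partial quotients below are arbitrary elements of $\mathcal{Y}$ for $p=5$, which is all the proof uses):

```python
from fractions import Fraction as F

def convergent_numerators(quots, m):
    """A^(i)_n via (eq:nd-conv); A[i][m + n] holds A^(i+1)_n for n >= -m."""
    N = len(quots)
    A = [[F(0)] * (N + m) for _ in range(m + 1)]
    for j in range(1, m + 1):
        A[j - 1][m - j] = F(1)          # A^(i)_{-j} = delta_{ij}
    for i in range(m + 1):
        A[i][m] = quots[0][i] if i < m else F(1)
    for n in range(1, N):
        for i in range(m + 1):
            A[i][m + n] = sum(quots[n][j] * A[i][m + n - 1 - j]
                              for j in range(m)) + A[i][n - 1]
    return A

p, m = 5, 2
# arbitrary partial quotients in Y = Z[1/5] cap (-5/2, 5/2); the proof only
# uses |a^(i)_k|_infty < p/2
quots = [[F(2), F(-2)], [F(2, 5), F(1)], [F(-3, 5), F(0)],
         [F(7, 5), F(-2)], [F(-2), F(6, 5)]]
A = convergent_numerators(quots, m)
for n in range(len(quots)):
    for i in range(m):
        assert abs(A[i][m + n]) < F(p) ** (n + 1) / 2
```

The bound $|A^{(i)}_n|_\infty < p^{n+1}/2$ holds for every $n$ in the run, as the proposition guarantees.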
\begin{proposition} \label{prop:minors}
Given the MCF $[(a_0^{(1)}, a_1^{(1)}, \ldots), \ldots, (a_0^{(m)}, a_1^{(m)}, \ldots)]$:
\begin{itemize}
\item[a)] every minor of $\mathcal{B}_n$ is a polynomial in $\ZZ[a^{(i)}_j,\ i=1,\ldots , m,\ j=0,\ldots, n]$ and each monomial has the form $$\lambda c_0c_1\ldots c_{n},$$ where $\lambda\in\ZZ$ and $c_j=1$ or $c_j=a^{(i)}_j$ for some $i=1,\ldots, m$.
\item[b)] The summand $\lambda a^{(1)}_0\ldots a^{(1)}_n$ does not appear in any principal minor of $\mathcal{B}_n$ except for the $1\times 1$ minor obtained by removing all rows and columns indexed by $2,\ldots, m+1 $; in this case $\lambda=\pm 1$.
\end{itemize}
\end{proposition}
\begin{proof}
\ \\
$a)$
We prove the thesis by induction on $n$. For $n=0$, we have $\mathcal B_n = \mathcal A_0$ and the thesis immediately follows. Suppose now that the statement holds for $n$ and consider $\mathcal{B}_{n+1}$. Let $M$ be a square submatrix of $\mathcal{B}_{n+1}$. If $M=\mathcal{B}_{n+1}$ then $\det(M)=\pm 1$ and we are done. So we suppose that some rows and columns are missing from $M$. If $M$ does not contain the first column, then $M$ is a square submatrix of $\mathcal{B}_n$ and the result holds by inductive hypothesis. Therefore we suppose that $M$ contains the first column of $\mathcal{B}_{n+1}$, which is
\begin{equation}\label{eq:primacolonna}\begin{pmatrix} A^{(1)}_{n+1}\\ A^{(2)}_{n+1}\\
\vdots \\
A^{(m+1)}_{n+1}\end{pmatrix} =
\begin{pmatrix} a^{(1)}_{n+1}A^{(1)}_n+a^{(2)}_{n+1}A^{(1)}_{n-1}+\ldots + a^{(m)}_{n+1} A^{(1)}_{n-m+1}+A^{(1)}_{n-m}\\
a^{(1)}_{n+1}A^{(2)}_n+a^{(2)}_{n+1}A^{(2)}_{n-1}+\ldots + a^{(m)}_{n+1} A^{(2)}_{n-m+1}+A^{(2)}_{n-m}\\
\vdots\\
a^{(1)}_{n+1}A^{(m+1)}_n+a^{(2)}_{n+1}A^{(m+1)}_{n-1}+\ldots +a^{(m)}_{n+1} A^{(m+1)}_{n-m+1}+A^{(m+1)}_{n-m}\end{pmatrix}
\end{equation}
By the properties of the determinant, $\det(M)$ is the sum for $i=1,\ldots, m+1$ of the determinants of all matrices $M_i$ where $M_i$ is obtained from $M$ by replacing the first column by a subvector of
$$ a^{(i)}_{n+1} \begin{pmatrix} A^{(1)}_{n+1-i}\\
A^{(2)}_{n+1-i}\\
\vdots \\
A^{(m+1)}_{n+1-i}\end{pmatrix}$$
(to get a uniform notation, we put $a^{(m+1)}_k=1$ for every $k\in\NN$). Then we see that either two columns of $M_i$ are proportional, so that $\det(M_i)=0$, or $\det(M_i)=\pm a^{(i)}_{n+1}\det(M'_i)$, where $M'_i$ is a submatrix of $\mathcal{B}_n$. Then the claim holds by inductive hypothesis.\\
$b)$ Let $M$ be the square submatrix obtained from $\mathcal{B}_n$ by removing all rows and columns indexed by $I\subseteq\{1,\ldots, m+1\}$, and suppose that the summand $\lambda a^{(1)}_0\ldots a^{(1)}_n$ appears in $M$. Then by $a)$, $M$ must contain the first column of $\mathcal{B}_n$, so that it must contain also the first row. Moreover, since $\det(\mathcal{B}_n)=\pm 1$, at least one row and the corresponding column are missing. We argue again by induction on $n$. If $n=0$, then the last row must be missing (otherwise $\det(M)\in\{1,0\}$), so that the last column must be missing too; then the row indexed by $m$ has the form $(a_0^{(m)},0,\ldots, 0)$ and this implies that it must be missing, unless $m=1$, so that column $m$ is missing, and so on. It follows that $I=\{2,\ldots, m+1\}$ and $\lambda=1$. Now suppose that the result holds for $\mathcal{B}_n$. The first column of $\mathcal{B}_{n+1}$ being as in \eqref{eq:primacolonna}, we deduce by $a)$ that $\lambda a^{(1)}_0\ldots a^{(1)}_n$ must be a summand of $\det(M_1)$, where $M_1$ is obtained from $M$ by replacing the first column by a subvector of
$$ a^{(1)}_{n+1} \begin{pmatrix} A^{(1)}_{n}\\
A^{(2)}_{n}\\
\vdots \\
A^{(m+1)}_{n}\end{pmatrix}.$$
Then we see that the second column (and the second row) must be missing from $M$ (otherwise $\det(M)=0$). Therefore $\det(M_1)=a^{(1)}_{n+1}\det(M'_1)$, where $M'_1$ is a square submatrix of $\mathcal{B}_n$ giving rise to a principal minor. Since $\lambda a^{(1)}_0\ldots a^{(1)}_n$ is a summand in $\det(M'_1)$, by inductive hypothesis $I=\{2,\ldots,m+1\}$ and
$\lambda=1$.
\end{proof}
\section{On the finiteness of the $p$--adic Jacobi--Perron algorithm} \label{sec:fin}
In \cite{MT}, the authors gave some results about the finiteness of the $p$-adic Jacobi--Perron algorithm. We recall these results below.
\begin{proposition} \label{prop:lin-dip}
If the $p$--adic Jacobi--Perron algorithm stops in a finite number of steps when processing the $m$--tuple $(\alpha^{(1)},\ldots , \alpha^{(m)}) \in \mathbb Q_p^m$, then $1,\alpha^{(1)},\ldots , \alpha^{(m)}$ are $\QQ$-linearly dependent.
\end{proposition}
\begin{proposition} \label{prop:finite}
For an input $(\alpha_0^{(1)},\ldots , \alpha_0^{(m)})\in\QQ^m$, the $p$--adic Jacobi--Perron algorithm terminates in a finite number of steps.
\end{proposition}
Thus, a full characterization of the input vectors which lead to a finite Jacobi--Perron expansion is still missing in the $p$--adic case.
On the other hand, in the real field it is known that the Jacobi--Perron algorithm stops in a finite number of steps if and only if $1,\alpha^{(1)},\ldots , \alpha^{(m)}$ are $\QQ$-linearly dependent for $m=2$, whereas this is not true for $m \geq 3$, see \cite[Theorem 44]{Sch1} and \cite{Dub, DubA}. Counterexamples in the latter case are provided by $m$-tuples of algebraic numbers belonging to a finite extension of $\QQ$ of degree $<m+1$ and giving rise to a periodic MCF. This shows that the finiteness and the periodicity of the Jacobi--Perron algorithm are in some way interrelated.
In this section we shall assume that $1,\alpha^{(1)},\ldots , \alpha^{(m)}$ are linearly dependent over $\QQ$, and associate to every linear dependence relation
a sequence of integers $(S_n)_{n\geq 0}$, which will be useful in the investigation of the finiteness of the $p$--adic Jacobi--Perron algorithm. In particular, in the case $m=2$, we shall provide a condition that must be satisfied by the partial quotients of an infinite MCF obtained by the $p$-adic Jacobi--Perron algorithm processing a pair $(\alpha, \beta)$, where $1, \alpha, \beta$ are $\QQ$-linearly dependent.
We shall show in the next section that, unlike the real case, even for $m=2$ there exist some input vectors $\bm{\alpha}$ such that $1,\alpha^{(1)},\ldots , \alpha^{(m)}$ are $\QQ$-linearly dependent but their $p$-adic Jacobi--Perron expansion is periodic (and hence not finite). \\
Let us consider $\bm{\alpha}_0=(\alpha^{(1)}_0,\ldots , \alpha^{(m)}_0) \in \mathbb Q_p^m $ and assume that there is a linear dependence relation
\begin{equation}\label{eq:lindeprel} x_1 \alpha_0^{(1)} + \ldots + x_m \alpha_0^{(m)} + x_{m+1} = 0\end{equation}
with $x_1,\ldots, x_{m+1} \in \mathbb Z$ coprime. Then we can associate to it the sequence
\begin{equation} \label{eq:s} S_n = x_1 A_{n-1}^{(1)} + \ldots +x_m A_{n-1}^{(m)} + x_{m+1} A^{(m+1)}_{n-1}\end{equation}
for any $n \geq -m$, where $A_n^{(i)}$ are, as usual, the numerators and denominators of the convergents of the MCF of $\bm{\alpha}_0$ defined by \eqref{eq:nd-conv}. It is straightforward to see that the following identities hold:
\begin{align} & S_n \alpha_n^{(1)} + \ldots + S_{n-m+1}\alpha_n^{(m)} + S_{n-m} = 0, \hbox{ for any $n \geq 0$};\label{eq:uno}\\
& S_n = a_{n-1}^{(1)} S_{n-1} + \ldots + a_{n-1}^{(m)} S_{n-m} + S_{n-m-1}, \hbox{ for any $n \geq 1$};\label{eq:due} \\
& S_n = (a_{n-1}^{(1)} - \alpha_{n-1}^{(1)}) S_{n-1} + \ldots + (a_{n-1}^{(m)} - \alpha_{n-1}^{(m)}) S_{n-m}, \hbox{ for any $n \geq 1$};\label{eq:saa}\\
& S_n = x_1 V_{n-1}^{(1)} + \ldots + x_m V_{n-1}^{(m)}, \hbox{ for any $n \geq -m+1$}.\label{eq:quattro}\\
&\label{eq:traspostaB}\begin{pmatrix} S_n\\ S_{n-1}\\
\vdots \\ S_{n-m} \end{pmatrix} = \mathcal{B}_{n-1}^T \begin{pmatrix} x_1\\ x_2\\\vdots \\ x_{m+1} \end{pmatrix}
\end{align}
where the superscript $T$ denotes transposition.
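The identities \eqref{eq:uno} and \eqref{eq:due} are straightforward to check on a small example (our own sketch, in exact arithmetic): take $p=5$, $\bm\alpha_0=(1/3,1/2)$ with the dependence relation $6\alpha_0^{(1)}-2\alpha_0^{(2)}-1=0$, and the partial quotients produced by the first two steps of the $p$-adic algorithm:

```python
from fractions import Fraction as F

m, x = 2, [6, -2, -1]                       # 6*(1/3) - 2*(1/2) - 1 = 0
alpha0 = [F(1, 3), F(1, 2)]
quots = [[F(2), F(-2)], [F(2, 5), F(1)]]    # a_0, a_1 from the algorithm

# A^(i)_n for n = -m, ..., 1, stored with offset m (recurrence (eq:nd-conv))
A = [[F(0)] * (len(quots) + m) for _ in range(m + 1)]
for j in range(1, m + 1):
    A[j - 1][m - j] = F(1)                  # A^(i)_{-j} = delta_{ij}
for i in range(m + 1):
    A[i][m] = quots[0][i] if i < m else F(1)
for n in range(1, len(quots)):
    for i in range(m + 1):
        A[i][m + n] = sum(quots[n][j] * A[i][m + n - 1 - j]
                          for j in range(m)) + A[i][n - 1]

def S(n):
    """S_n = x_1 A^(1)_{n-1} + ... + x_{m+1} A^(m+1)_{n-1}, valid here for n >= -1."""
    return sum(x[i] * A[i][m + n - 1] for i in range(m + 1))

def step(alpha, a):
    inv = 1 / (alpha[-1] - a[-1])
    return [inv] + [(alpha[i - 1] - a[i - 1]) * inv for i in range(1, m)]

alphas = [alpha0]
for a in quots:
    alphas.append(step(alphas[-1], a))

# (eq:uno):  S_n alpha_n^(1) + S_{n-1} alpha_n^(2) + S_{n-2} = 0
for n in (1, 2):
    assert S(n) * alphas[n][0] + S(n - 1) * alphas[n][1] + S(n - 2) == 0
# (eq:due):  S_2 = a_1^(1) S_1 + a_1^(2) S_0 + S_{-1}
assert S(2) == quots[1][0] * S(1) + quots[1][1] * S(0) + S(-1)
# the S_n are integers: 6, 15, 10 (with v_5 >= 1 from n = 1 on)
assert [S(0), S(1), S(2)] == [6, 15, 10]
```

Note that $S_1=15$ and $S_2=10$ are integers even though $a_1 \notin \ZZ^2$, as Proposition \ref{prop:s} predicts.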
\begin{proposition} \label{prop:s}
Given the sequence $(S_n)_{n \geq -m}$ defined by \eqref{eq:s}, we have that $S_n \in \mathbb Z$ for any $n \geq -m$, and the $\gcd$ of $S_n,\ldots, S_{n-m}$ is a power of $p$. Moreover,
\[|S_n| < \max_{1\leq i \leq m} \{|S_{n-i}|\}, \quad \hbox{for any } n \geq 1,\]
so that if the MCF for $(\alpha_0^{(1)},\ldots,\alpha_0^{(m)}) $ is infinite, then \[\lim_{n \rightarrow +\infty} S_n = 0 \hbox{ in } \QQ_p.\]
\end{proposition}
\begin{proof}
By definition $S_n \in \mathbb Z\left[ \cfrac{1}{p} \right]$, for any $n \geq -m$, and $S_{-m+1}, \ldots, S_0 \in \mathbb Z$. Then, using formula \eqref{eq:saa},
and observing that $v_p(a_{n-1}^{(i)} - \alpha_{n-1}^{(i)}) > 0$, for $i = 1, \ldots, m$, where $v_p(\cdot)$ is the $p$-adic valuation, we get $S_n \in \mathbb Z$. The assertion about the $\gcd$ is easily proved by induction, using formula \eqref{eq:due}.
Since $|a_{n-1}^{(i)} - \alpha_{n-1}^{(i)}| < 1$, from \eqref{eq:saa}, we have
\[|S_n| \leq \max_{1 \leq i \leq m} \{|a_{n-1}^{(i)} - \alpha_{n-1}^{(i)}|\, |S_{n-i}|\} < \max_{1 \leq i \leq m} \{|S_{n-i}|\}.\]
Finally, by Proposition \ref{prop:V} and formula \eqref{eq:quattro}
we see that $\lim_{n \rightarrow +\infty} S_n = 0$ in $\mathbb Q_p$.
\end{proof}
An immediate consequence of Proposition \ref{prop:s} is the following
\begin{corollary}\label{cor:ennesuemme} For $n\geq 0$, write $n=qm+r$ with $q,r\in\ZZ$ and $0\leq r <m$; then $v_p(S_n)> q$. In particular $v_p(S_n)> \left[ \frac n m\right]$ for every $n\geq 0$.\end{corollary}
Proposition \ref{prop:s} and Corollary \ref{cor:ennesuemme} describe the behaviour of the sequence $(S_n)$ with respect to the $p$-adic norm. We now study its behaviour with respect to the Euclidean norm. We start with a general result.
\begin{proposition} \label{prop:T}
Let $(T_n)_{n \geq -m}$ be any sequence in $\RR$ satisfying
$$T_n = y_n^{(1)} T_{n-1} + \ldots + y_{n}^{(m)} T_{n-m} + T_{n-m-1}, \quad n \geq 1 $$
where $(y_n^{(1)})_{n \geq 1}, \ldots, (y_n^{(m)})_{n \geq 1}$ are sequences of elements in $\mathcal{Y}$; then
$$\lim_{n \rightarrow + \infty} \cfrac{T_n}{p^n} = 0$$
in $\RR$.
\end{proposition}
\begin{proof}
In the following $\lvert \cdot \rvert_\infty$ stands for the Euclidean norm. We have
\begin{align*}
\left |\frac{T_n}{p^n} \right |_\infty
&<\frac 1 2 \left |\frac{T_{n-1}}{p^{n-1}} \right |_\infty + \frac 1 {2p} \left |\frac{T_{n-2}}{p^{n-2}} \right |_\infty + \ldots + \frac 1 {2p^{m-1}} \left |\frac{T_{n-m}}{p^{n-m}} \right |_\infty + \frac 1 {p^{m+1}} \left |\frac{T_{n-m-1}}{p^{n-m-1}} \right |_\infty\\
&\leq K_p \max\left\{\left|\frac{T_{n-1}}{p^{n-1}} \right |_\infty, \left|\frac{T_{n-2}}{p^{n-2}} \right |_\infty, \ldots, \left|\frac{T_{n-m}}{p^{n-m}} \right |_\infty, \left|\frac{T_{n-m-1}}{p^{n-m-1}} \right |_\infty \right\},
\end{align*}
where $K_p = \cfrac{1}{p^{m+1}} + \cfrac{1}{2} \sum_{k=0}^{m-1} \cfrac{1}{p^k} < 1$. Therefore
$$ \left |\frac{T_n}{p^n} \right |_\infty< K_p^{n-2} \max\left\{\left|\frac{T_{m}}{p^{m}} \right |_\infty,\left|\frac{T_{m-1}}{p^{m-1}} \right |_\infty, \ldots, \left|\frac{T_{1}}{p} \right |_\infty, \left|T_{0} \right |_\infty \right\}$$
and the claim follows.
\end{proof}
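The contraction estimate in this proof can be illustrated numerically. The Python sketch below is purely illustrative: it assumes, as the proof does implicitly, that elements of $\mathcal{Y}$ have Euclidean absolute value smaller than $p/2$, draws the $y_n^{(i)}$ at random among rationals of the form $c/p$, and tracks $|T_n/p^n|_\infty$.

```python
from fractions import Fraction
import random

random.seed(0)
p, m, STEPS = 5, 2, 120

def rand_y():
    # random element of the form c/p with |c/p| < p/2 (a stand-in for Y)
    bound = (p * p - 1) // 2
    return Fraction(random.randint(-bound, bound), p)

# hist holds T_{n-m-1}, ..., T_{n-1}; initially T_{-m}, ..., T_0
hist = [Fraction(random.randint(-3, 3)) for _ in range(m + 1)]
ratios = []
for n in range(1, STEPS + 1):
    ys = [rand_y() for _ in range(m)]
    # T_n = y_n^{(1)} T_{n-1} + ... + y_n^{(m)} T_{n-m} + T_{n-m-1}
    T_n = sum(y * t for y, t in zip(ys, hist[::-1][:m])) + hist[0]
    hist = hist[1:] + [T_n]
    ratios.append(abs(T_n) / Fraction(p) ** n)

print(float(ratios[-1]))  # practically zero, as the proposition predicts
```

The exact rational arithmetic makes the geometric decay governed by $K_p$ visible without any rounding effects.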
\begin{corollary} \label{cor:liminftyS}
For the sequence $(S_n)_{n \geq -m}$, we have
\[\lim_{n \rightarrow + \infty} \cfrac{S_n}{p^n} = 0\]
in $\mathbb R$.
\end{corollary}
\begin{proof}
By formula \eqref{eq:due}, the sequence $(S_n)_{n \geq -m}$ satisfies the hypothesis of Proposition \ref{prop:T}.
\end{proof}
We shall use the properties stated above to establish some partial converse of Proposition \ref{prop:lin-dip}. The case $(\alpha_0^{(1)},\ldots , \alpha_0^{(m)})\in\QQ^m$ is dealt with by Proposition \ref{prop:finite}, so we can assume that $(\alpha_0^{(1)},\ldots , \alpha_0^{(m)})\in\QQ_p^m\setminus \QQ^m$. Notice that in the case $m=2$, under this hypothesis, there can be only one linear dependence relation \eqref{eq:lindeprel}, so that the sequence $(S_n)$ depends only on the pair $(\alpha_0^{(1)},\alpha_0^{(2)})\in\QQ_p^2$.
\begin{proposition}\label{prop:denomlimgen}
Assume that the sequence $\left( \cfrac{S_n}{p^n} \right )$ has bounded denominator, i.e., there exists $k\in \ZZ$ such that $v_p(S_n)\geq n+k$, for every $n$. Then the Jacobi--Perron algorithm stops in finitely many steps when processing the input $(\alpha_0^{(1)},\ldots , \alpha_0^{(m)})$.
\end{proposition}
\begin{proof}
Assume that the Jacobi--Perron algorithm does not stop. Put $z_n=p^{-k} \cfrac{S_n}{p^n}$; then $z_n\in\ZZ $ and the sequence $(z_n)$ tends to $0$ in the Euclidean norm, by Corollary \ref{cor:liminftyS}. It follows that $z_n$ (and hence $S_n$) is 0 for $n\gg 0$, and this is impossible by formula \eqref{eq:uno}.
\end{proof}
The following theorem is the main result of this section.
To get a uniform notation, we shall put $\alpha^{(m+1)}_n=a^{(m+1)}_n=1$ for every $n$.
\begin{theorem}\label{teo:finitenessgen}
Assume that $1, \alpha_0^{(1)},\ldots , \alpha_0^{(m)}$ are $\QQ$-linearly dependent and
\begin{equation}\label{eq:condfin} v_p(a^{(j)}_n)-v_p(a^{(1)}_n)\geq j-1\quad\hbox{for $j=3,\ldots, m+1$ and any $n$ sufficiently large.} \end{equation}
Then the Jacobi--Perron algorithm stops in finitely many steps when processing the input $(\alpha_0^{(1)},\ldots , \alpha_0^{(m)})$.\end{theorem}
Notice that the condition $v_p(a^{(2)}_n)-v_p(a^{(1)}_n)\geq 1$ is always true by conditions \eqref{eq:conv}.
\begin{proof}
By \eqref{eq:uno} we get
$$\frac{S_n}{p^n}=-\frac{S_{n-1}}{p^{n-1}}\gamma_n^{(1)} -\ldots -\frac{S_{n-m}}{p^{n-m}}\gamma_n^{(m)},$$
where for $j=1,\ldots, m$
$$\gamma_n^{(j)}=\frac {\alpha_n^{(j+1)}} {p^j\alpha_n^{(1)}}.$$
By equations \eqref{eq:conv}, \eqref{eq:alpha-norme} and hypotheses \eqref{eq:condfin} we have $v_p(\gamma_n^{(j)})\geq 0$
for $n$ sufficiently large. Therefore $v_p\left (\frac{S_n}{p^n}\right ) \geq \min\left\{v_p\left (\frac{S_{n-1}}{p^{n-1}}\right ),\ldots,v_p\left (\frac{S_{n-m}}{p^{n-m}}\right )\right\}$ for $n$ sufficiently large, so that $v_p\left (\frac{S_n}{p^n}\right )\geq K$ for some $K\in\ZZ$. Then we conclude by Proposition \ref{prop:denomlimgen}.
\end{proof}
In the case $m = 2$, Theorem \ref{teo:finitenessgen} assumes the following simple form.
\begin{corollary}\label{cor:finitenessdimdue} For $m = 2$, if $1, \alpha_0^{(1)}, \alpha_0^{(2)}$ are linearly dependent over $\QQ$ and the $p$-adic Jacobi--Perron algorithm does not stop, then $v_p(a_n^{(1)})=-1$ for infinitely many $n\in\NN$.
\end{corollary}
In the next section we shall present some examples where the hypotheses of Corollary \ref{cor:finitenessdimdue} are satisfied.
\begin{remark}
In the classical real case, for $m=2$, it is possible to prove that the Jacobi--Perron algorithm detects rational dependence because the sequences $(V_n^{(1)})$ and $(V_n^{(2)})$ are bounded with respect to the Euclidean norm. In fact, this implies that the set of triples $(S_n, S_{n-1}, S_{n-2})$ is finite and consequently the corresponding MCF is finite or periodic. Moreover, it is possible to show that a periodic expansion cannot occur, and consequently the Jacobi--Perron algorithm stops when processing two real numbers $\alpha, \beta$ such that $1, \alpha, \beta$ are $\QQ$-linearly dependent; see \cite{Sch1} for details.
In the $p$-adic case, the sequences $(V_n^{(i)})$ are bounded (because they approach zero in $\mathbb Q_p$, see Proposition \ref{prop:V}), but the argument above does not apply, because the $p$-adic norm is non-archimedean. However, considering that $v_p(S_n) > \left[\frac n 2\right]$ by Corollary \ref{cor:ennesuemme}, it could be interesting to focus on the sequence of integers $\left(\cfrac{S_n}{p^{\left[n/2\right]}}\right)$. When this sequence is bounded with respect to the Euclidean norm, it is possible to argue similarly to the real case and deduce the finiteness of the $p$-adic Jacobi--Perron algorithm on the given input.
\end{remark}
\section{On the characteristic polynomial of periodic multidimensional continued fractions} \label{sec:per}
The classical Jacobi--Perron algorithm was introduced over the real numbers with the aim of providing periodic representations for algebraic irrationalities. However, the problem regarding the periodicity of MCFs is still open, since it is not known whether every algebraic irrational of degree $m + 1$ occurs in a real input vector of length $m$ for which the Jacobi--Perron algorithm is eventually periodic. On the contrary, periodic MCFs have been fully studied over the real numbers. Indeed, it is known that a periodic MCF represents real numbers belonging to an algebraic number field of degree less than or equal to $m + 1$; see \cite{Ber2} for a survey on this topic. Moreover, for $m = 2$, Coleman \cite{Col} also gave a criterion for establishing when a periodic MCF converges to cubic irrationalities.
In this section, we start the study of the periodicity of MCFs over $\mathbb Q_p$. In particular, we shall see that, analogously to the real case, a periodic $p$-adic $m$-dimensional MCF represents algebraic irrationalities of degree less than or equal to $m + 1$.
Let us consider a purely periodic MCF of period $N$:
\begin{equation} \label{eq:MCF-period} (\alpha_0^{(1)}, \ldots, \alpha_0^{(m)}) = \left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right],\end{equation}
i.e., $a_{k+N}^{(i)}=a^{(i)}_k$ for every $k\in \NN$ and $i = 1, \ldots, m$. By \eqref{eq:MCF}, we also have $\alpha_{k+N}^{(i)}=\alpha^{(i)}_k$ for every $k\in \NN$ and $i = 1, \ldots, m$, from which it follows that
\begin{equation}\label{eq:periouno} \alpha^{(i)}_0=\frac {\alpha^{(1)}_0A^{(i)}_{N-1}+\ldots + \alpha^{(m)}_0A^{(i)}_{N-m}+A^{(i)}_{N-m-1}}{\alpha^{(1)}_0A^{(m+1)}_{N-1}+\ldots + \alpha^{(m)}_0A^{(m+1)}_{N-m}+A^{(m+1)}_{N-m-1}}\end{equation}
using \eqref{eq:alpha0}.
We define the matrix
$$\mathcal{M} := \mathcal{B}_{N-1}=\prod_{j=0}^{N-1}\mathcal{A}_j=\begin{pmatrix} {A^{(1)}_{N-1}} &{A^{(1)}_{N-2}} &\ldots & {A^{(1)}_{N-m-1}}\\
{A^{(2)}_{N-1}} &{A^{(2)}_{N-2}} &\ldots & {A^{(2)}_{N-m-1}}\\
\vdots &\vdots&\vdots& \vdots\\
{A^{(m+1)}_{N-1}} &{A^{(m+1)}_{N-2}} &\ldots & {A^{(m+1)}_{N-m-1}}\end{pmatrix}$$
whose characteristic polynomial $P(X)$ will be also called the characteristic polynomial of the periodic MCF \eqref{eq:MCF-period}.
From equation \eqref{eq:periouno}, we have
\[\mathcal{M}\begin{pmatrix}\alpha^{(1)}_0\\ \vdots\\ \alpha^{(m)}_0\\ 1 \end{pmatrix} =
\left (
\alpha^{(1)}_0 A^{(m+1)}_{N-1} +\alpha^{(2)}_0 A^{(m+1)}_{N-2} +\ldots + \alpha^{(m)}_0 A^{(m+1)}_{N-m}+A^{(m+1)}_{N-m-1} \right )\begin{pmatrix}\alpha^{(1)}_0\\ \vdots\\ \alpha^{(m)}_0\\ 1 \end{pmatrix}.\]
Moreover, by Proposition \ref{prop:sum-prod} we know that $\sum_{i=1}^{m+1}\alpha^{(i)}_N A^{(m+1)}_{N-i} = \alpha_1^{(1)} \cdots \alpha_N^{(1)}$ and, since $\alpha_0^{(1)} = \alpha_N^{(1)}$, we have
\[\mathcal{M}\begin{pmatrix}\alpha^{(1)}_0\\ \vdots\\ \alpha^{(m)}_0\\ 1 \end{pmatrix} = \alpha^{(1)}_0\ldots \alpha^{(1)}_{N-1} \begin{pmatrix}\alpha^{(1)}_0\\ \vdots\\ \alpha^{(m)}_0\\ 1 \end{pmatrix}.\]
Therefore $\mu :=\alpha^{(1)}_0\ldots \alpha^{(1)}_{N-1}$ is an eigenvalue of $\mathcal{M}$ and a root of the characteristic polynomial $P(X)$.
In the next theorems, we shall see that $\mu$ is the $p$-adic dominant eigenvalue, that is, the root of $P(X)$ with the greatest $p$-adic norm, and that the limits of the periodic MCF \eqref{eq:MCF-period} are closely related to $\mu$. Note that it is not a loss of generality to consider purely periodic MCFs, since the algebraic properties of the complete quotients of a MCF coincide with those of the input vector.
\begin{theorem} \label{thm:main}
Given the purely periodic MCF
\[(\alpha_0^{(1)}, \ldots, \alpha_0^{(m)}) = \left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right]\]
and its characteristic polynomial $P(X)$, the number $\mu = \alpha^{(1)}_0\cdots \alpha^{(1)}_{N-1}$ is the root of $P(X)$ with the greatest $p$-adic norm.
\end{theorem}
\begin{proof}
We consider $a_n^{(1)} = \cfrac{\tilde a_n^{(1)}}{p^{k_n}}$, for any $n \geq 0$, where $k_n \geq 0$ ($k_n > 0$, for $n > 0$). We define the quantity $k = k_0 + \ldots + k_{N-1}$ and the matrix
\[ \mathcal{M}':=p^k\mathcal{M}=\mathcal{A}'_0\ldots \mathcal{A}'_{N-1} \]
where
$$\mathcal{A}'_i=p^{k_i}\mathcal{A}_i=\begin{pmatrix} {\tilde a^{(1)}_{i}} &p^{k_i} &0 &\ldots & 0\\
p^{k_i}{a^{(2)}_{i}} &0 &p^{k_i} &\ldots & 0\\
\vdots &\vdots&\vdots& \vdots&\vdots \\
p^{k_i}a^{(m)}_{i} &0 &0& \ldots & p^{k_i}\\
p^{k_i} &0 &0 &\ldots & 0\end{pmatrix}\equiv \begin{pmatrix} {\tilde a^{(1)}_{i}} &0 &0 &\ldots & 0\\
0 &0 &0 &\ldots & 0\\
\vdots &\vdots&\vdots& \vdots&\vdots \\
0 &0 &0& \ldots & 0\\
0 &0 &0 &\ldots & 0\end{pmatrix}\pmod p.$$
Therefore
\begin{equation}\label{eq:emmeprimo} \mathcal{M}'\equiv \begin{pmatrix} {\tilde a^{(1)}_{0}\ldots \tilde a^{(1)}_{N-1}} &0 &0 &\ldots & 0\\
0 &0 &0 &\ldots & 0\\
\vdots &\vdots&\vdots& \vdots&\vdots \\
0 &0 &0& \ldots & 0\\
0 &0 &0 &\ldots & 0\end{pmatrix}\pmod p.\end{equation}
Let $Q(X)$ be the characteristic polynomial of $\mathcal{M'}$. Then, $\lambda$ is an eigenvalue of $\mathcal{M}$ if and only if $p^k\lambda $ is an eigenvalue of $\mathcal{M'}$. If $\lambda_1, \ldots, \lambda_{m+1}$ are the eigenvalues of $\mathcal M$, then
\begin{equation*} Q(X) = \prod_{i=1}^{m+1}(X-p^k\lambda_i) = p^{k(m+1)}\prod_{i=1}^{m+1} \left (\frac X {p^{k}}-\lambda_i\right ) = p^{k(m+1)} P\left ( \frac X{p^{k}}\right)
\end{equation*}
so that
$$P(X)=\frac 1 {p^{k(m+1)}}Q(p^kX).$$
From \eqref{eq:emmeprimo} we have
$$Q(X) \equiv X^m(X-\tilde a_0^{(1)}\ldots \tilde a_{N-1}^{(1)})\pmod p.$$
Thus
$$Q(X)=X^{m+1}+\delta_mX^m+\ldots +\delta_0$$
with
$$\delta_m\equiv \tilde a_0^{(1)}\ldots \tilde a_{N-1}^{(1)}\pmod p, \quad \delta_i\equiv 0\pmod p\hbox{ for } i=0,\ldots, m-1, \quad \delta_0=\pm p^{k(m+1)}.$$
It follows that
\begin{align*}
P(X) &= \frac 1 {p^{k(m+1)}}Q(p^kX)\\
&= \frac 1 {p^{k(m+1)}}(p^{k(m+1)}X^{m+1}+\delta_mp^{km}X^m+\ldots +\delta_ip^{ki}X^i+\ldots+\delta_0)\\
&= X^{m+1}+\frac {\delta_m}{p^k}X^m+\ldots +\frac {\delta_i}{p^{k(m+1-i)}} X^i+\ldots \pm 1\\
&=X^{m+1}+\gamma_mX^m+\ldots +\gamma_0,
\end{align*}
where
$$\gamma_i=\frac {\delta_i}{ p^{k(m+1-i)}}\quad\hbox{ for } i=0,\ldots, m,\quad (\gamma_0=\pm 1).$$
Now, we put $\mu_i=v_p(\delta_i)$ for $i=0,\ldots, m$ and observe that
$$\mu_m=0,\quad \mu_i>0\hbox{ for } i=1,\ldots, m-1, \quad \mu_0=k(m+1).$$
We can see that
\begin{align*}
v_p(\gamma_m)& =v_p(a_0^{(1)}\ldots a_{N-1}^{(1)}) =\sum_{i=0}^{N-1}v_p(a_{i}^{(1)})=-k\\
v_p(\gamma_i) &=v_p(\delta_i)-k(m+1-i)\\
&=\mu_i+ik-(m+1)k\quad\hbox{for } i=0,\ldots, m.
\end{align*}
Now we study the Newton polygon (see \cite{Gou}) of $P(X)$ to prove that $\mu$ is the root of greatest $p$-adic norm.
The line, in the real plane, passing through the points $(i,v_p(\gamma_i))$ and $(m+1,0)$ has equation
\begin{equation} \label{eq:line} y=\frac {v_p(\gamma_i)}{m+1-i}(-x+m+1), \end{equation}
for any $i = 1, \ldots, m-1$. We denote by $s_i$ the slope of this line.
From the fact that
$$v_p(\gamma_i)=\mu_i-k(m+1-i)\hbox{ and } \mu_i=v_p(\delta_i)>0,$$
we get
$$\frac {v_p(\gamma_i)}{m+1-i}=\frac {\mu_i}{m+1-i}-k>-k,$$
i.e., the point in the real plane with coordinates $(m,v_p(\gamma_m))=(m,-k)$ lies strictly under the line \eqref{eq:line}, for any $i = 1, \ldots, m-1$.
Thus, the Newton polygon associated to the polynomial $P(X)$ has slopes $(s_1, \ldots, s_m)$, which is a strictly increasing sequence, where the last slope $s_m$ is equal to $k$.
Hence, the claim of the theorem follows from \cite[Theorem 6.4.7]{Gou} and the fact that the sequence $(s_1,\ldots, s_m)$ of slopes is strictly increasing.
\end{proof}
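The scaling $Q(X)=p^{k(m+1)}P(X/p^k)$ and the congruence \eqref{eq:emmeprimo} can be tested on concrete data. The following Python sketch is only an illustration: it uses the purely periodic expansion with $p=5$, $N=2$, partial quotients $a^{(1)}=(\overline{4/5,\,11/5})$ and $a^{(2)}=(\overline{1,2})$ (the first example of the next section), for which $k_0=k_1=1$ and hence $k=2$.

```python
from fractions import Fraction as F

p, m = 5, 2
a1s = [F(4, 5), F(11, 5)]   # a_n^{(1)} = ~a_n^{(1)} / p^{k_n}, here k_0 = k_1 = 1
a2s = [F(1), F(2)]          # a_n^{(2)}
k = 2                       # k = k_0 + k_1

def step(an, bn):           # the 3x3 matrix A_n for m = 2
    return [[an, 1, 0], [bn, 0, 1], [1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

M = matmul(step(a1s[0], a2s[0]), step(a1s[1], a2s[1]))   # B_{N-1}

# characteristic polynomial P(X) = X^3 - tr X^2 + s2 X - det
tr = sum(M[i][i] for i in range(3))
s2 = sum(M[i][i] * M[j][j] - M[i][j] * M[j][i]
         for i in range(3) for j in range(3) if i < j)
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
g2, g1, g0 = -tr, s2, -det

# Q(X) = p^{k(m+1)} P(X / p^k): integer coefficients
d2, d1, d0 = g2 * p**k, g1 * p**(2 * k), g0 * p**(3 * k)
print(d2, d1, d0)   # -119 -625 -15625, i.e. Q(X) = X^3 - 119X^2 - 625X - 15625
```

Reducing mod $5$ gives $Q(X)\equiv X^2(X-\tilde a_0^{(1)}\tilde a_1^{(1)})\pmod 5$ with $\tilde a_0^{(1)}\tilde a_1^{(1)}=44$, and $|\delta_0|=p^{k(m+1)}=5^6$, exactly as in the proof.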
\begin{theorem} \label{thm:munoraz}
Given the purely periodic MCF
\[(\alpha_0^{(1)}, \ldots, \alpha_0^{(m)}) = \left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right]\]
and its characteristic polynomial $P(X)$, then
\begin{itemize}
\item[a)] $\QQ(\mu)=\QQ(\alpha^{(1)}_0,\ldots,\alpha^{(m)}_0)$
\item[b)] $\mu\not\in\QQ$
\end{itemize}
where $\mu$ is the root of $P(X)$ with the greatest $p$-adic norm.
\end{theorem}
\begin{proof} \ \\
$a$) Since $\mu=\alpha^{(1)}_0\ldots \alpha^{(1)}_{N-1}$, obviously
$\mu\in \QQ(\alpha^{(1)}_0,\ldots,\alpha^{(m)}_0)$; conversely by Theorem \ref{thm:main} the nullspace of $\mathcal{B}_{N-1}-\mu I_{m+1}\in M_{m+1}(\QQ(\mu))$ is $1$-dimensional (where $I_{m+1}$ is the $(m+1) \times (m+1)$ identity matrix and $M_{m+1}(\QQ(\mu))$ denotes the set of $(m+1) \times (m+1)$ matrices with entries in $\QQ(\mu)$).
Therefore it is generated by a vector $\mathbf{\beta}=(\beta_1,\ldots, \beta_{m+1})$ with entries in $\QQ(\mu)$, which must be proportional to $(\alpha^{(1)}_0,\ldots,\alpha^{(m)}_0,1)$. It follows that $\alpha^{(i)}_0=\frac {\beta_i}{\beta_{m+1}}\in\QQ(\mu)$ for $i=1,\ldots, m$.\\
$b$) Assume that $\mu\in\QQ$, then $\alpha^{(1)}_0,\ldots,\alpha^{(m)}_0 \in\QQ$. But in this case the MCF corresponding to $(\alpha^{(1)}_0,\ldots,\alpha^{(m)}_0)$ is finite (see \cite{MT}), so that it cannot be periodic.
\end{proof}
From the previous theorems, we have that a periodic MCF converges to an $m$-tuple of algebraic irrationalities of degree less than or equal to $m+1$, belonging to the field generated over $\QQ$ by the root of the characteristic polynomial with the greatest $p$-adic norm. If the characteristic polynomial is irreducible, then the algebraic irrationalities have the maximum degree. In the following, we prove some further properties of the roots of the characteristic polynomial and then focus on the case $m=2$ for some specific considerations.
\begin{lemma} \label{lemma}
Let $P(X) = X^{m+1} + \gamma_m X^m + \ldots + \gamma_1 X + (-1)^{m(N+1)+1}$ be the characteristic polynomial of the purely periodic MCF $(\alpha_0^{(1)}, \ldots, \alpha_0^{(m)}) = \left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right]$. We have that
\begin{itemize}
\item[a)] every $\gamma_i$ is a polynomial in $\ZZ[a^{(i)}_j, i=1,\ldots , m,\ j=0,\ldots, N-1]$ and each monomial has the form $\lambda c_0c_1\ldots c_{N-1}$, where $\lambda\in\ZZ$ and $c_j=1$ or $c_j=a^{(i)}_j$ for some $i=1,\ldots, m$;
\item[b)] the monomial $a_0^{(1)} \cdots a_{N-1}^{(1)}$ appears only in $\gamma_m$.
\end{itemize}
\end{lemma}
\begin{proof}
Let us observe that any coefficient $\gamma_i$ is the sum of the principal minors of order $m+1-i$ of the matrix $\mathcal B_{N-1}$, for $i = 1, \ldots, m$. Hence the claim follows from Proposition \ref{prop:minors}.
\end{proof}
\begin{theorem} \label{thm:normm1}
Given the purely periodic MCF
\[(\alpha_0^{(1)}, \ldots, \alpha_0^{(m)}) = \left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right],\]
every root of its characteristic polynomial $P(X) = X^{m+1} + \gamma_m X^m + \ldots + \gamma_1 X + (-1)^{m(N+1)+1}$ has $p$-adic norm less than 1, except for the root of greatest $p$-adic norm, $\mu=\alpha_0^{(1)}\cdots \alpha_{N-1}^{(1)}$.
\end{theorem}
\begin{proof}
By Lemma \ref{lemma} and the inequalities $|a_n^{(1)}|>1$ and $|a_n^{(1)}|>|a_n^{(j)}|$, for $n\in\NN$ and $j=2,\ldots , m+1$, we have $|\gamma_i| \leq |a_0^{(1)} \cdots a_{N-1}^{(1)}|$, for any $i = 1, \ldots, m$; moreover, this inequality is an equality if and only if $i = m$. If $\lambda_1 = \mu, \lambda_2, \ldots, \lambda_k$ are the roots of $P(X)$ with $p$-adic norm $\geq 1$, then $|\gamma_{m+1-k}| \geq |\mu| = |a_0^{(1)}\cdots a_{N-1}^{(1)}|$, since $\gamma_{m+1-k}$ is, up to sign, the $k$-th elementary symmetric function of the roots. This implies $k = 1$ and the claim follows.
\end{proof}
\begin{theorem} \label{thm:gersh}
Let $z$ be a complex root of the characteristic polynomial $P(X)$ of the purely periodic MCF $\left[\left(\overline{a_0^{(1)}, \ldots, a_{N-1}^{(1)}}\right), \ldots, \left(\overline{a_0^{(m)}, \ldots, a_{N-1}^{(m)}}\right)\right]$. Then
$$|z|_\infty < {p^N} .$$
\end{theorem}
\begin{proof}
By the Gershgorin circle theorem \cite{Gers} there exists a row $j\in\{1,\ldots, m+1\}$ of $\mathcal{B}_{N-1}$ such that
$$|z-A^{(j)}_{N-j}|_\infty \leq \sum_{k=1,\ldots, m+1,\ k\not=j }|A^{(j)}_{N-k}|_\infty.$$ In particular
\[|z|_\infty \leq \sum_{k=1}^{m+1}|A^{(j)}_{N-k}|_\infty < \frac 1 2 \sum_{k=1}^{m+1} p^{N-k+1}\]
by Proposition \ref{prop:boundA}. Moreover,
\[\frac 1 2 \sum_{k=1}^{m+1} p^{N-k+1} = \frac 1 2 p^{N-m}\sum_{k=0}^{m} p^{k} = \frac 1 2 p^{N-m}\frac {p^{m+1}-1}{p-1} \leq p^N.\]
\end{proof}
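The two bounds can be seen at work together on concrete data: every eigenvalue lies in a Gershgorin disc of $\mathcal{B}_{N-1}$ and, a fortiori, has Euclidean absolute value smaller than $p^N$. The sketch below is a numerical illustration only; it hard-codes $\mathcal{B}_{N-1}$ and the factored characteristic polynomial for the periodic expansion with $p=5$, $N=2$, $a^{(1)}=(\overline{4/5,\,11/5})$, $a^{(2)}=(\overline{1,2})$ that appears among the examples below.

```python
import cmath

p, N = 5, 2
# B_{N-1} for p = 5, a^{(1)} = (4/5, 11/5), a^{(2)} = (1, 2)
B = [[94 / 25, 4 / 5, 1.0],
     [16 / 5, 1.0, 0.0],
     [11 / 5, 1.0, 0.0]]
gersh = max(sum(abs(x) for x in row) for row in B)  # Gershgorin row bound

# roots of P(X) = (X - 5)(X^2 + (6/25) X + 1/5)
disc = cmath.sqrt((6 / 25) ** 2 - 4 / 5)
roots = [5.0, (-6 / 25 + disc) / 2, (-6 / 25 - disc) / 2]
rho = max(abs(z) for z in roots)
print(rho <= gersh < p ** N)  # True
```

Here the largest root is the rational eigenvalue $5$, safely inside both the Gershgorin bound and $p^N=25$.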
The previous theorems are useful in order to give some further information about the algebraic properties of the limits of a periodic MCF. We first consider the case $N=1$:
\begin{proposition}
The characteristic polynomial of a purely periodic MCF with period $N=1$ does not have any rational root. In particular when $m=2$ the characteristic polynomial is irreducible over $\QQ$, and the limits of the MCF generate a cubic field.
\end{proposition}
\begin{proof}
Let $z$ be a rational root of the characteristic polynomial; applying the rational root theorem to the monic integer polynomial $Q(X)$, we see that $z$ must be (up to sign) a power of $p$. By Theorem \ref{thm:munoraz} $b)$, we know that $z\not=\mu$, and this implies that $v_p(z) \geq 1$ by Theorem \ref{thm:normm1}. But $|z|_\infty <p $ by Theorem \ref{thm:gersh}, a contradiction.\end{proof}
In general, a rational root $z$ of the characteristic polynomial of a MCF with period of length $N$ must satisfy $|z|_\infty < p^N$ and $v_p(z) \geq 1$, so that, by the rational root theorem, it must be of the form $\pm p^k$, with $k \leq N-1$. The next proposition gives a necessary condition for the existence of such a root, in the case
$m=2$ and $N=2$.
\begin{proposition}
Let us consider the purely periodic MCF $\left[ \left(\overline{a_0, a_1}\right), \left(\overline{b_0,b_1}\right) \right]$. Then its characteristic polynomial $P(X)$ is irreducible over $\QQ$ unless the following condition is satisfied, possibly after interchanging the indices 0 and 1:
\begin{equation}\label{eq:condizirr}
\begin{array}{l} \bullet \hbox{ $a_0$ is of the form $\pm \frac 1 p + w$ with $w\in \ZZ, |w|_\infty\leq \frac{p-1} 2, w\not=0$; and}\\
\bullet \hbox{ either $v_p(a_1p+1)=v_p(a_1)+1 $ (which implies $v_p(b_1)=v_p(a_1)+1, v_p(b_0)=0$)} \\ \hbox{or $a_1$ is of the form $\pm \frac 1 p + u$ with $u\in \ZZ, |u|_\infty\leq \frac{p-1} 2, u\not=0$}; \\
\hbox{in the latter case one between $b_0$ and $b_1$ is zero and the other one is equal to $-wu\pm p$.}\end{array}\end{equation}
\end{proposition}
\begin{proof}
Write $$P(X)=X^3+\gamma_2X^2+\gamma_1 X-1,$$
then $$\gamma_2=-(a_0a_1+b_0+b_1),\quad\quad \gamma_1=b_1b_0-a_0-a_1$$
so that
$$P(X)=X(X-b_0)(X-b_1)-(a_0X+1)(a_1X+1).$$
We put $k_0=-v_p(a_0)$, $k_1=-v_p(a_1)$, $k=k_0+k_1$.
By Theorems \ref{thm:normm1} and \ref{thm:gersh} the only possible rational roots of $P(X)$ are $\pm p$. So assume
\begin{equation}
\label{eq:equazionp}
P(\pm p)= \pm p(\pm p-b_0)(\pm p-b_1)-(\pm a_0p+1)(\pm a_1p+1)=0.\end{equation}
Notice that the valuation of the first summand is $\geq -k+3$ and that of the second summand is $\geq -k+2$. Therefore, the valuation of the second summand must be $\geq -k+3$. This implies that at least one between $a_0$ and $a_1$, say $a_0$, must satisfy $v_p(\pm a_0p+1)>-k_0+1$, that is $a_0p\equiv \mp 1\pmod p$. Since $a_0\in\mathcal{Y}$, this implies $a_0=\mp\frac 1 p+w$ with $w\in \ZZ$, $|w|_\infty \leq \frac{p-1} 2$, and \eqref{eq:equazionp} becomes
\begin{equation}
\label{eq:equazionp2}
\pm (\pm p-b_0)(\pm p-b_1)-w(\pm a_1p+1)=0.\end{equation}
We show that $w\not=0$: otherwise one between $b_0$ and $b_1$ should be equal to $\pm p$, which is a contradiction because $b_0,b_1\in\mathcal{Y}$. \\
The second summand of \eqref{eq:equazionp2} has valuation $\geq -k_1+1$; moreover $v_p(\pm p-b_0)\geq 0$ and $v_p(\pm p-b_1)\geq -k_1+1$. If the valuation of the second summand is exactly $-k_1+1$, then it must be $v_p(b_0)=0$, $v_p(b_1)=-k_1+1$. On the other hand, if the valuation of the second summand is $>-k_1+1$, then $a_1p\equiv \mp 1\pmod p$. As above, this implies $a_1=\mp\frac 1 p+u$ with $u\in \ZZ$, $|u|_\infty\leq \frac{p-1} 2$, $u\not=0$, and \eqref{eq:equazionp2} becomes
\begin{equation}
\label{eq:equazionp3}
\pm (\pm p-b_0)(\pm p-b_1)-wup=0.\end{equation} This implies that one between $b_0$ and $b_1$ is $0$, the other one (say $b_i$) has valuation $0$, and satisfies $\pm p-b_i=wu$.
\end{proof}
In order to provide numerical examples, the following proposition will be useful.
\begin{proposition}\label{prop:trovaABC}
Let us consider the purely periodic 2-dimensional MCF $(\alpha, \beta) = \left[ \left(\overline{a_0, \ldots, a_{N-1}}\right), \left(\overline{b_0, \ldots, b_{N-1}}\right) \right]$ and suppose that its characteristic polynomial $P(X)$ is reducible. Let $z=\pm p^k$ be the (unique) rational root of $P(X)$, then the $1$-dimensional eigenspace $\mathcal{L}\subseteq \QQ^3$ of the transpose of $\mathcal{B}_{N-1}$ associated to $z$ coincides with the space $\mathcal{L'}$ of rational vectors $(x,y,z)$ such that $x\alpha+y\beta+z=0$.
\end{proposition}
\begin{proof} Notice firstly that the space $\mathcal{L'}$ is one-dimensional, because Theorem \ref{thm:munoraz} and the reducibility of $P(X)$ imply that $[\QQ(\alpha,\beta):\QQ]=2$. Therefore there is a linear dependence relation
$$x_1\alpha +x_2\beta +x_3=0$$
with coprime $x_1,x_2,x_3\in\ZZ$, and $(x_1,x_2,x_3)$ generates $\mathcal{L'}$.
Since $\alpha_N=\alpha $, $\beta_N=\beta$, by property \eqref{eq:uno} of the sequence $(S_n)$ defined by \eqref{eq:s}, the vector $(S_N, S_{N-1}, S_{N-2})$ must be a rational multiple of $(x_1,x_2,x_3)$; then, by \eqref{eq:traspostaB}, $(x_1,x_2,x_3)$ is an eigenvector of $\mathcal{B}_{N-1}^T$ associated to a rational eigenvalue, so that it belongs to $\mathcal{L}$.
\end{proof}
\begin{example}
Condition \eqref{eq:condizirr} is essential. Consider the following examples.
\begin{itemize}
\item For $p=5$, the periodic MCF $(\alpha, \beta) = \left[\left(\overline{\frac 4 5, \frac {11}5}\right), \left(\overline{1, 2}\right)\right]$ has characteristic polynomial
$$P(X)=X^3-\frac{119}{25} X^2-X-1 $$
and
$$P(X)=(X-5)\left (X^2+\frac 6 {25}X+\frac 1 5\right ).$$
Moreover by using Proposition \ref{prop:trovaABC} we find the linear dependence relation between $\alpha, \beta$ and $1$:
$$20\alpha + 5\beta + 4=0.$$
\item For $p=3$, the periodic MCF $(\alpha, \beta) = \left[\left(\overline{\frac 2 3, \frac {5}3}\right), \left(\overline{1, 0}\right)\right]$ has characteristic polynomial
\begin{equation*} P(X) =X^3-\frac{19}{9} X^2-\frac 7 3 X-1=(X-3)\left (X^2+\frac 8 {9}X+\frac 1 3\right ).\end{equation*}
and
$$6\alpha+3\beta +2=0.$$
\item For $p=3$, the periodic MCF $(\alpha, \beta) = \left[\left(\overline{\frac 2 3, \frac {13}9}\right), \left(\overline{1, \frac{1}{3}}\right)\right]$ has characteristic polynomial
\begin{equation*} P(X) =X^3-\frac{62}{27} X^2-\frac {16}{9} X-1 =(X-3)\left (X^2+\frac {19} {27}X+\frac 1 3\right ).\end{equation*}
The linear dependence relation between $\alpha, \beta, 1$ is the same as in the previous case:
$$6\alpha + 3\beta + 2=0.$$
\end{itemize}
\end{example}
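Each item above can be verified mechanically with exact rational arithmetic. The Python sketch below (illustrative only) checks the first example: it rebuilds $P(X)$ from $\gamma_2=-(a_0a_1+b_0+b_1)$ and $\gamma_1=b_0b_1-a_0-a_1$, confirms the rational root $z=5$ and the factorization, and verifies, in the spirit of Proposition \ref{prop:trovaABC}, that the coefficient vector $(20,5,4)$ of the dependence relation is an eigenvector of $\mathcal{B}_{N-1}^T$ for the eigenvalue $5$.

```python
from fractions import Fraction as F

a0, a1, b0, b1 = F(4, 5), F(11, 5), F(1), F(2)   # first example, p = 5

# characteristic polynomial P(X) = X^3 + g2 X^2 + g1 X - 1
g2 = -(a0 * a1 + b0 + b1)                        # -119/25
g1 = b0 * b1 - a0 - a1                           # -1
P = lambda x: x**3 + g2 * x**2 + g1 * x - 1
print(P(5))                                      # 0: z = 5 is the rational root

# quotient by (X - 5): compare coefficients of (X - 5)(X^2 + q1 X + q0)
q1, q0 = g2 + 5, F(1, 5)
assert q0 - 5 * q1 == g1                         # middle coefficient matches
print(q1, q0)                                    # 6/25 1/5

# (20, 5, 4), from 20*alpha + 5*beta + 4 = 0, spans the eigenspace of B^T for z = 5
A = lambda an, bn: [[an, 1, 0], [bn, 0, 1], [1, 0, 0]]
mm = lambda X, Y: [[sum(X[i][t] * Y[t][j] for t in range(3)) for j in range(3)]
                   for i in range(3)]
B = mm(A(a0, b0), A(a1, b1))
v = [F(20), F(5), F(4)]
Btv = [sum(B[i][j] * v[i] for i in range(3)) for j in range(3)]
assert Btv == [5 * x for x in v]
```

The other two examples can be checked by the same few lines after swapping in their data.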
The above examples also show $\QQ$-linearly dependent numbers having a periodic (hence not finite) expansion by the $p$-adic Jacobi--Perron algorithm.\\
At present, we have not been able to find examples of $m$-tuples of $\QQ$-linearly dependent $p$-adic numbers whose MCF is infinite and not periodic. Therefore, we state the following
\begin{conjecture} Let $\bm{\alpha}= (\alpha^{(1)}, \ldots, \alpha^{(m)})\in \QQ_p^m$ be such that $1, \alpha^{(1)},\ldots, \alpha^{(m)}$ are $\QQ$-linearly dependent. Then the $p$-adic Jacobi-Perron algorithm for $\bm{\alpha}$ is finite or periodic.
\end{conjecture}
\section*{Acknowledgments}
We thank Matteo Semplice for stimulating conversations.
\section*{ABSTRACT}
\begin{quote}
\begin{small}
We have found provably optimal algorithms for full-domain discrete-ordinate
transport sweeps on a class of grids in 2D and 3D Cartesian geometry that
are regular at a coarse level but arbitrary within the coarse blocks. We describe these algorithms and show that they always execute the full eight-octant (or
four-quadrant if 2D) sweep in the minimum possible number of stages for a given $P_x
\times P_y \times P_z$ partitioning. Computational results confirm that our
optimal scheduling algorithms execute sweeps in the minimum possible stage
count. Observed parallel efficiencies agree well with our performance model.
Our PDT transport code
has achieved approximately $68\%$ parallel efficiency with $>1.5M$ parallel threads, relative to 8 threads, on a
simple weak-scaling problem with only three energy groups, 10 direction per octant, and
4096 cells/core. We demonstrate similar efficiencies on a much more realistic set of
nuclear-reactor test problems, with unstructured meshes that resolve fine geometric details.
These results demonstrate that discrete-ordinates transport sweeps can be executed with high
efficiency using more than $10^6$ parallel processes.
\emph{Key Words}: transport sweeps, parallel transport, parallel algorithms, PDT, STAPL, performance
models, unstructured mesh
\end{small}
\end{quote}
\setlength{\baselineskip}{14pt}
\normalsize
\input{s1-introduction}
\input{s2-sweeps}
\input{s3-proofs}
\input{s4-optimal}
\input{s5-results}
\input{s6-conclusions}
\section*{ACKNOWLEDGEMENTS}
Part of this work was funded under a collaborative research contract from
Lawrence Livermore National Security, LLC. Part of this work was performed
under the auspices of the Center for Radiative Shock Hydrodynamics at the
University of Michigan and part under the auspices of the Center for Exascale
Radiation Transport at Texas A\&M University, both of which have been funded
by the DOE NNSA ASC Predictive Science
Academic Alliances Program. Part of this work was funded under a collaborative
research contract from the Center for Exascale Simulation of Advanced Reactors
(CESAR), a DOE ASCR project. Part of this work was performed under the auspices
of the U.S. Department of Energy by Lawrence Livermore National Laboratory under
Contract DE-AC52-07NA27344. This research used resources of the Argonne
Leadership Computing Facility, which is a DOE Office of Science User Facility
supported under Contract DE-AC02-06CH11357.
\input{s7-references}
\newpage
\begin{appendix}
\include{s8-appendix1}
\include{s9-appendix2}
\end{appendix}
\end{document}
\section{INTRODUCTION} \label{sec:intro}
Deterministic particle-transport methods approximate the particle angular flux
(or density or intensity) in a multidimensional phase space as a function of
time. The independent variables that define the solution phase space are
position (3 variables), energy (1), and direction (2). The most widely used
discretizations in energy are \emph{multigroup} methods, in which the solution
is calculated for discrete energy ``groups.'' The most common directional
discretizations are \emph{discrete-ordinates} methods, in which the solution is
calculated only for specific directions. In the most widely used methods, the
solution for a given spatial cell, energy group, and direction depends only on:
1) the total volumetric source within the cell, and 2) the angular flux for that
group and direction that is incident upon the cell surface. Each incident flux
is the outgoing flux from an adjacent ``upstream'' cell or is given by boundary
conditions.
To solve the transport equation for the full spatial domain, for a given collection
of energy groups, and for a single direction, one approach is to start with the cell
(or cells) whose incident fluxes for that direction are all provided by boundary
conditions. (For any direction from a typical quadrature set and a rectangular spatial
domain, this would be one cell at one corner of the domain.) Once the solution
is found for this cell, its outgoing fluxes complete the dependencies for its
downstream neighbors, whose solutions may then be computed. Their outgoing
fluxes satisfy their downstream neighbors' dependencies, etc., so each set of
cells that gets completed readies another set, and the computation ``sweeps''
across the entire domain in the direction being solved. Performing this process
for the full set of cells and directions is called a transport sweep.
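As a concrete and deliberately simplified illustration of these dependencies, the Python sketch below sweeps a single direction with $\Omega_x,\Omega_y>0$ across an $N_x\times N_y$ grid using a basic step (upwind) cell balance. The discretization, variable names, and boundary treatment here are our own illustrative choices, not the scheme used in any production code.

```python
def sweep_one_direction(nx, ny, mu, eta, sigma_t, q, psi_w, psi_n, dx=1.0, dy=1.0):
    """Sweep one direction (mu, eta > 0); west and north incident fluxes given."""
    psi = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):                      # march downwind in x
        for j in range(ny):                  # march downwind in y
            inc_x = psi_w if i == 0 else psi[i - 1][j]   # upstream x neighbor
            inc_y = psi_n if j == 0 else psi[i][j - 1]   # upstream y neighbor
            # step-upwind balance: outgoing flux equals the cell value
            psi[i][j] = ((q + mu * inc_x / dx + eta * inc_y / dy)
                         / (sigma_t + mu / dx + eta / dy))
    return psi

# pure streaming (sigma_t = 0, q = 0) preserves a unit incident flux everywhere
flux = sweep_one_direction(4, 8, 0.5, 0.5, sigma_t=0.0, q=0.0, psi_w=1.0, psi_n=1.0)
print(flux[3][7])   # 1.0
```

Note that the loop order itself encodes the dependency structure: a cell is visited only after both of its upstream neighbors for this direction.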
The full-domain boundary-to-boundary sweep, in which all angular fluxes in a set
of energy groups are calculated given previous-iterate values for the volumetric
fixed-plus-collisional source, forms the foundation for many iterative methods
that have desirable properties \cite{Adams-Larsen}. One such property is that
iteration counts do not change with mesh refinement and thus do not grow as
resolution is increased in a given physical problem---an important consideration
for the high-resolution transport problems that require efficient massively
parallel computing. A transport sweep calculates $\psi^{(l+1/2)}_{m,g}$ via the
numerical solution of:
\begin{equation}
\label{eq:source_iteration}
\vec\Omega_m \cdot \vec \nabla \psi^{(\ell + 1/2)}_{m,g} + \sigma_{t,g}
\psi^{(\ell + 1/2)}_{m,g} = q^{(\ell)}_{tot,m,g} \; \; , \quad \text{all }m, \quad \text{all } g \in \text{ the groupset,}
\end{equation}
where $q^{(\ell)}_{tot,m,g}$ includes the collisional source evaluated using
fluxes from a previous iterate or guess (denoted by superscript $\ell$). We
emphasize that this is a complete boundary-to-boundary sweep of all directions,
respecting all upstream/downstream dependencies, with no iteration on interface
angular fluxes.
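The cell-by-cell dependency ordering described above can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not the production code of any transport package), assuming a 2D rectangular grid, a single direction with both components positive, and simple ``step'' (upwind) differencing:

```python
def sweep_2d_step(nx, ny, mu, eta, sigma_t, q, psi_left, psi_bottom,
                  dx=1.0, dy=1.0):
    """One-direction transport sweep (mu > 0, eta > 0) with step differencing.

    Cells are visited in an order that respects upstream dependencies:
    cell (i, j) needs the outgoing fluxes of cells (i-1, j) and (i, j-1),
    which are boundary values on the left/bottom edges of the domain.
    """
    assert mu > 0 and eta > 0
    psi = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):                # downstream in x
        for j in range(ny):            # downstream in y
            inc_x = psi_left if i == 0 else psi[i - 1][j]
            inc_y = psi_bottom if j == 0 else psi[i][j - 1]
            # step balance: mu*(psi - inc_x)/dx + eta*(psi - inc_y)/dy
            #               + sigma_t*psi = q
            psi[i][j] = (q + mu / dx * inc_x + eta / dy * inc_y) \
                        / (mu / dx + eta / dy + sigma_t)
    return psi
```

With $\sigma_t=0$, no volumetric source, and unit incident fluxes, the swept solution is uniformly one, which gives a quick sanity check of the dependency ordering.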
The parallel execution of a sweep is complicated by the dependencies of cells on
upstream neighbors. A task dependence graph (TDG) for one direction
in a 2D example (Figs.~\ref{fig:tdg}a and b) illustrates the issue: tasks at a
given level of the graph cannot be executed until some tasks finish on the
previous level. This originally led to a widespread perception that parallel
sweeps cannot be efficient beyond a few thousand parallel processes and provided
motivation for researchers to seek iterative methods that do not use full-domain
sweeps \cite{denovo,Zerr}. Such methods offer the possibility of easier
scaling to high process counts---for a single iteration's calculation---but
iteration counts may increase as each process's physical subdomain size
decreases, which tends to happen as resolution and process count both
increase.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.25]{fig_1.jpg}
\caption{(a) Example 2D problem with a spatial grid of $4 \times 8$ cells or
cellsets. (b) Example TDG for a sweep of a single direction (down and right).
(c) KBA partitioning, with a column of cellsets assigned to each of four
processors. Tasks on a given level of the graph can be executed in parallel.}
\label{fig:tdg}
\end{figure}
In this paper we focus on discrete-ordinates transport sweeps and describe new
parallel sweep algorithms. We demonstrate via theory, models, and computational
results that our new {\em provably optimal} sweep algorithms enable efficient parallel sweeps
out to $O(10^6)$ parallel processes, even with modest problem sizes
($O(10^6)$ cell-energy-direction elements per process).
We describe a framework for understanding and exploiting the available concurrency in a sweep, recognizing that fundamental dependencies prevent sweeps from being ``embarrassingly parallel.'' We discuss and integrate algorithmic features from past research efforts, providing a comprehensive view of the trade-space available for sweep optimization \cite{KBA,Dorr,ComptonClouse,Pautz,BaileyFalgout}.
The key components of a sweep algorithm are {\em partitioning} (dividing the domain
among processes), {\em aggregation} (grouping cells, directions, and energy
groups into ``tasks''), and {\em scheduling} (choosing which task to execute if
more than one is available).
The KBA algorithm devised by Koch, Baker, and Alcouffe \cite{KBA} and the algorithm by Compton and Clouse \cite{ComptonClouse} exploit parallel concurrency enabled by particular partitioning and aggregation choices. We generalize this as follows. Given a grid with $N_r$ spatial cells, we aggregate cells into $N^{cs}$ brick-shaped cellsets in an $N_x^{cs} \times N_y^{cs} \times N_z^{cs}$ array. We then distribute these cellsets across processes, possibly assigning more than one cellset to each process. This corresponds to ``blocks'' in KBA and to spatial domain ``overloading'' in other work \cite{ComptonClouse,BaileyFalgout}. It is also possible to distribute energy groups and/or quadrature directions across processes, but in this work we focus on spatial decomposition.
In this paper we
limit our analysis to ``semi-structured'' spatial meshes that can be unstructured at a fine level
but are orthogonal at a coarse level, allowing for aggregation into a regular
grid of $N^{cs}_x \times N^{cs}_y \times N^{cs}_z$ brick-shaped cellsets. Fully
irregular grids introduce complications that we will not address in this paper.
We assume spatial domain decomposition in which each process owns a
contiguous brick-shaped subdomain. In a future communication we expect to
address decompositions in which a process may own non-contiguous portions of the
spatial domain \cite{mc2015-sweep}. In the analysis of sweep optimality presented
below, we assume load-balanced cellsets, with each cellset containing the same
number of cells with the same number of spatial degrees of freedom.
The work presented here is based on a recent conference
paper \cite{opt-sweep} but is augmented to include: 1) an extension of our
optimal-sweep algorithm to reflecting boundaries, 2) an improved performance
model, 3) updated and extended numerical results, and 4) a relaxation of
constraints on spatial meshes.
The KBA algorithm devised by Koch, Baker, and Alcouffe \cite{KBA} is the most
widely known parallel sweep algorithm. KBA {\em partitions} the problem by
assigning a column of cells to each process, indicated by the four diagonal task
groupings in Fig.~\ref{fig:tdg}c. KBA parallelizes over planes logically
perpendicular to the sweep direction---over the breadth of the TDG. Early and
late in a single-direction sweep, some processes are idle, as in stages 1-3 and
9-11 in Fig.~\ref{fig:tdg}. In this example, parallel efficiency for an {\em
isolated} single-direction sweep could be no better than $8/11 \approx 73\%$.
KBA is much better, because when a process finishes its tasks for the first
direction it begins its tasks for the next direction in the octant-pair that has
the same sweep ordering. That is, each process begins a new TDG as soon as it
completes its work on the previous TDG, until all directions in the octant-pair
finish. This is equivalent to concatenating all of an octant-pair's TDGs into a
single much longer TDG. This lengthens the ``pipe'' and increases efficiency.
If there were $n$ directions in the octant pair, then the pipe length is
$n\times8$ in this example, and the efficiency would be
$(n\times8)/(3+n\times8)$ if communication times were negligible.
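The pipelining arithmetic above reduces to a one-line formula. A hypothetical helper, assuming (as in the $(n\times 8)/(3+n\times 8)$ estimate) that communication time is negligible:

```python
def pipelined_efficiency(n_dirs, tasks_per_dir, idle_stages):
    """Idle-stage efficiency of a pipelined sweep with negligible comm cost.

    n_dirs directions share one sweep ordering and are pipelined
    back-to-back, so each process performs n_dirs * tasks_per_dir tasks
    while paying the pipe-fill penalty (idle_stages) only once.
    """
    work = n_dirs * tasks_per_dir
    return work / (idle_stages + work)
```

For the $4 \times 8$ example with one direction, this reproduces the $8/11 \approx 73\%$ bound, and efficiency approaches one as more directions are pipelined.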
The scheduling algorithm described here is valid for any spatial grid of $N_r$
cells that can be aggregated into $N^{cs} = N^{cs}_x \times N^{cs}_y \times
N^{cs}_z$ brick-shaped cellsets. A familiar example of a non-orthogonal grid
with this property is a reactor lattice. As the term ``lattice'' implies, these
grids are regular at a coarse level despite being unstructured at the cell
level. Additionally, an unstructured mesh that is ``cut'' along full-domain
planes can employ the algorithm described here. As we describe below, prismatic
grids that are extrusions of 2D meshes into $N_z$ cell-planes---such as those
commonly found in 3D nuclear-reactor analysis---offer advantages
in optimizing sweeps, but extruded grids are not required by our algorithm.
The coarse regularity of brick-shaped cellsets allows us to {\em partition} the domain into a $P_x
\times P_y \times P_z$ process grid, with $P=$ number of processes $=P_xP_yP_z$.
The work to be performed in the sweep is to calculate the angular intensity for
each of the $N_m$ directions in each of the $N_g$ energy groups in each of the
$N_r$ spatial cells, for a total of $N_mN_gN_r$ fine-grained work units. The
finest-grained work unit is calculation of a single direction and energy group's
unknowns in a single cell; thus, we describe the sweeps that we analyze here
as ``cell-based.'' Methods based on solutions along characteristics permit
finer granularity of the computation; in particular, ``face-based'' sweeps are
possible, and with long-characteristic methods ``track-based'' sweeps are
possible. Face-based and track-based sweeps offer advantages over cell-based
sweeps in terms of potential parallel efficiency, but in this paper we focus on
cell-based sweeps.
We {\em aggregate} fine-grained work units into coarser-grained tasks, with each
task being the solution of the angular fluxes in $A_g$ groups, $A_m$
directions, and $A_r$ spatial cells. (The $A$s are ``aggregation factors.'')
Since our scheduling algorithm is based on brick cellsets, $A_r$ is constrained
by the level of regularity in the grid. We use the term ``cell subset'' to
refer to the smallest orthogonal units of the mesh, which we can combine into
cellsets as we see fit. Thus, if our grid is a lattice of $N^{sub}_x \times
N^{sub}_y \times N^{sub}_z$ brick subsets of $A^{sub}_r$ cells, then $A_r$ will
be an integer multiple of $A^{sub}_r$. Our choice of ``subset aggregation factors''
$A_x$, $A_y$, and $A_z$ determines our cellset layout, with each
$N^{cs}_u=N^{sub}_u/A_u$.
In order to maintain load balance, we require that each process in our
partitioning scheme own the same number of cellsets $\omega_r \equiv N^{cs}/P$.
Here, $\omega_r$ is the spatial ``overload factor'', and if it is greater than
one we say that our partitioning and aggregation scheme is ``overloaded'', since
processes own multiple cellsets. This can be broken down as $\omega_r =
\omega_x \times \omega_y \times \omega_z$, with $\omega_u = N^{cs}_u/(P_uA_u)$.
As will be clear from the efficiency formulas in Sec.~\ref{sec:par_sweeps},
there can be significant benefit from overloading.
With everything partitioned and aggregated, each process is responsible for
$\omega_r$ cellsets, $\omega_g \equiv N_g/A_g$ group-sets, and $\omega_m \equiv
N_m/A_m$ direction-sets, for a total of $\omega_m\omega_g\omega_r$ tasks.
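This bookkeeping is simple integer arithmetic; a hypothetical sketch, assuming all aggregation factors divide their totals evenly:

```python
def sweep_bookkeeping(N_r, N_m, N_g, A_r, A_m, A_g, P):
    """Overload factors and per-process task count for given aggregation.

    Assumes the aggregation factors divide evenly and that the
    N_r / A_r cellsets are spread uniformly over the P processes.
    """
    assert (N_r // A_r) % P == 0, "cellsets must divide evenly over processes"
    omega_r = (N_r // A_r) // P    # cellsets owned per process
    omega_m = N_m // A_m           # direction-sets per process
    omega_g = N_g // A_g           # group-sets per process
    n_tasks = omega_r * omega_m * omega_g
    return omega_r, omega_m, omega_g, n_tasks
```

The returned task count equals $(N_rN_mN_g)/(A_rA_mA_gP)$, the per-process share of the fine-grained work.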
The $A_m$ directions that are aggregated together are required to be within the same
octant. The sweep for directions in a given octant must begin at one of the
eight corners of the spatial domain and proceed to the opposite corner.
If direction-sets from multiple octants are launched at the same time,
there will be ``collisions'' in which a process or set of processes will have
multiple tasks available for execution. A {\em scheduling} algorithm is
required for choosing which task to execute.
Scheduling algorithms are a primary focus of this paper.
Our work builds on heuristics-based scheduling algorithms that previous researchers devised \cite{ComptonClouse,Pautz,BaileyFalgout} to address the schedule conflicts that arise from launching simultaneous sweep fronts from all corners of the spatial domain. In this paper we introduce a family
of scheduling algorithms that execute the complete 8-octant sweep in the minimum
possible number of ``stages,'' where a stage is defined as execution of a single task (cellset/direction-set/groupset) and subsequent communication, by each process that has work available. We outline a proof of optimality for one
member of the family, discuss the others, and present computational results, which demonstrate that our optimal scheduling algorithms do indeed complete their sweeps in the minimum possible number of stages and provide high efficiency even at high process counts.
With an optimal scheduling algorithm in hand we know how many stages a sweep will
require. This is a simple function of the partitioning and aggregation parameters chosen for any given problem. With
stage count known, there is a possibility of predicting execution time via a
performance model, and then using the model to choose partitioning and
aggregation factors that minimize execution time for the given problem on the
given number of processes on the given machine. The result is what we call an ``optimal sweep
algorithm.'' To recap, the ingredients of the optimal sweep algorithm are:
\begin{enumerate}
\item A sweep scheduling algorithm that executes in the minimum possible
number of stages for a given problem with given partitioning and aggregation
parameters;
\item A performance model that estimates execution time for a given problem
as a function of stage count, machine parameters, partitioning, and
aggregation;
\item An optimization algorithm that chooses the partitioning and
aggregation parameters to minimize the model's estimate of execution time.
\end{enumerate}
In the following section we discuss and quantify key characteristics of parallel sweeps,
including: 1) the idle stages that are inevitable if sweep dependencies are
enforced, and 2) a lower bound on stage count. We also develop and
discuss simple performance models. The third section describes our optimal
scheduling algorithms, which achieve the lower-bound stage count found in
Sec.~\ref{sec:par_sweeps}. For one algorithm we prove optimality for three kinds of
partitioning: $P_z=1$ (KBA partitioning), $P_z=2$ (``hybrid''), and $P_z>2$
(``volumetric''). (To simplify the discussion we \emph{define} $x, y, z$
such that $P_x \ge P_y \ge P_z$.) This is the first main contribution of this paper. In the
fourth section we present our {\em optimal sweep algorithm}, which is made
possible by our optimal scheduling algorithm. For optimal sweeps, we automate
the selection of partitioning and aggregation parameters that minimize execution
time, as predicted by our performance model, given the knowledge that sweeps
will complete in the minimum possible number of stages for a given set of
parameters. This is the second main contribution. Section 5 presents results ranging from 8 to approximately
1.5 million parallel processes, with two different optimal-scheduling algorithms and one
non-optimal algorithm. In all cases the optimal algorithms complete the sweeps
in the minimum possible number of stages, and performance agrees reasonably well
with the predictions of our performance model. We offer summary observations,
concluding remarks, and suggestions for future work in the final section.
{Appendices provide graphic illustrations of the behavior of optimally scheduled
sweeps in 2D and 3D.}
\section{PARALLEL SWEEPS}
\label{sec:par_sweeps}
Consider a $P = P_x \times P_y \times P_z$ process layout on a spatial grid of
$N_r$ cells. Suppose there are $N_m/8$ directions per octant and $N_g$ energy
groups that can be swept simultaneously. Then each process must perform
$(N_rN_mN_g)/P$ cell-direction-group calculations. We aggregate these into
tasks, with each task containing $A_r$ cells, $A_m$ directions, and $A_g$
groups. Then each process must perform $N_\mathrm{tasks}\equiv
\omega_r\omega_m\omega_g = (N_rN_mN_g)/(A_rA_mA_gP)$ tasks. At each stage at
least one process computes a task and communicates to downstream neighbors.
The complete sweep requires $N_\mathrm{stages}=N_\mathrm{tasks}+N_\mathrm{idle}$
stages, where $N_\mathrm{idle}$ is the number of idle stages for each process.
Parallel sweep efficiency (serial time per unknown / parallel time per unknown
per process) is therefore approximately
\begin{equation} \label{eqn:pareff}
\epsilon = \frac{T_\mathrm{task}N_\mathrm{tasks}}{\left[N_\mathrm{stages}
\right] \left[T_\mathrm{task}+T_\mathrm{comm}\right] }
=\frac{1}{\left[1+\frac{N_\mathrm{idle}}{N_\mathrm{tasks}} \right]
\left[1+\frac{T_\mathrm{comm}}{T_\mathrm{task}}\right]} \; ,
\end{equation}
where $T_\mathrm{task}$ is the time to compute one task and $T_\mathrm{comm}$ is
the time to communicate after completing a task. In the second form, the first
bracketed factor is $1$ plus the idle-time penalty and the second is $1$ plus
the communication penalty. Aggregating into small tasks ($N_\mathrm{tasks}$ large)
minimizes idle-time penalty but increases comm penalty: latency causes
$T_\mathrm{comm}/T_\mathrm{task}$ to increase as tasks become smaller. This
assumes the most basic comm model, which can be refined to account for
architectural realities (hierarchical networks, random variations, dedicated
comm hardware, latency-hiding techniques, etc.).
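The efficiency model above is easy to evaluate for candidate aggregation choices. A hypothetical helper:

```python
def parallel_sweep_efficiency(n_tasks, n_idle, comm_to_task_ratio):
    """Two-factor efficiency model: idle-stage penalty times comm penalty.

    comm_to_task_ratio is T_comm / T_task for one task's worth of work.
    """
    return 1.0 / ((1.0 + n_idle / n_tasks) * (1.0 + comm_to_task_ratio))
```

As the model indicates, efficiency improves with more tasks per process (smaller idle penalty) but degrades as the communication-to-compute ratio grows, which is the trade-off that aggregation must balance.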
In the terms defined above we describe ``basic'' KBA as having $P_z=1$, $A_m=1$ ($\omega_m=N_m$),
$A_g=N_g$ ($\omega_g=1$), $A_x = N_x/P_x$, $A_y=N_y/P_y$,
and $A_z=$ selectable number of $z$-planes to be aggregated into each task. (A variant
described in the original KBA paper is to aggregate directions by octant, which
means $A_m = N_m/8$ and $\omega_m=8$.)
In
our language, $A_x=N_x/P_x$ and $A_y=N_y/P_y$ translate to
$\omega_x=\omega_y=1$. With $\omega_z=N_z/(P_zA_z)$, $\omega_m=N_m$ or 8, and
$\omega_g=1$, each process performs $N_\mathrm{tasks} = \omega_m\omega_z$ tasks.
With basic KBA, then, $\omega_z \times \omega_m/4$ tasks (two octants) are pipelined from a
given corner of the 2D process layout in a 3D problem. For any octant pair the far-corner
process remains idle for the first $P_x+P_y-2$ stages, so a two-octant sweep
completes in $\omega_z \times \omega_m/4+P_x+P_y-2$ stages. The other three octant-pair
sweeps are similar, so if an octant-pair's sweep does not begin until the
previous pair's finishes, the full sweep requires $\omega_m\omega_z+4(P_x+P_y-2)$
stages. The parallel efficiency of basic KBA is then
\begin{equation}
\label{eqn:kbae}
\epsilon_{KBA} = \frac{1}{\left[1+\frac{4(P_x+P_y-2)}{\omega_m\omega_z} \right]
\left[1+\frac{T_\mathrm{comm}}{T_\mathrm{task}}\right]}
\end{equation}
\vspace{-4mm}
KBA inspires our algorithms, but we do not force $P_z=1$ or force particular
aggregation values (such as $A_m=1$ or $A_m=N_m/8$), and \emph{we simultaneously sweep all octants.}
In contrast to KBA, this requires a scheduling algorithm---rules that determine
the order in which to execute tasks when more than one is available. Scheduling
algorithms profoundly affect parallel performance, as noted in~\cite{BaileyFalgout}.
KBA's choice of $\omega_x = \omega_y = 1$ means that each task completed
satisfies two downstream neighbors' dependencies, which is a substantial
benefit. As will be seen in Eqs.~(\ref{eqn:nfillxyz}-\ref{eqn:nfill}),
$\omega_x$ and $\omega_y$ values $>1$ cause idle time to
increase, so it is usually best to set only $\omega_z > 1$.
With basic KBA, the last process to become active is the one that owns the
far corner cellset for a direction. Since we launch all octants simultaneously,
the last processes to begin computation in our scheme are those at the {\em
center} of the process layout. The value of this ``pipefill penalty'', the
minimum possible number of stages before a sweepfront can reach the center-most
processes, is
\begin{equation} \label{eqn:nfillxyz}
N_\mathrm{fill} = \omega_x\left(\frac{P_x+\delta_x}{2}-1\right)
+ \omega_y\left(\frac{P_y+\delta_y}{2}-1\right) + \omega_z\left(\frac{P_z+\delta_z}{2}-1\right) ,
\end{equation}
where $\delta_u = 0$ or 1 for $P_u$ even or odd, respectively. If we set
$\omega_x=\omega_y=1$, this becomes
\begin{equation} \label{eqn:nfill}
N_\mathrm{fill} = \frac{P_x+\delta_x}{2}-1 + \frac{P_y+\delta_y}{2}-1 +
\omega_z\left(\frac{P_z+\delta_z}{2}-1\right) .
\end{equation}
Since $\omega_x=\omega_y=1$ is the choice used in practice, we will use the
latter equation in our efficiency expressions.
Once the central processes begin working, they must complete $N_\mathrm{tasks}$
tasks, which requires a minimum of $N_\mathrm{tasks}$ stages. Once their last
tasks are completed, there is a pipe emptying penalty with the same value as
$N_\mathrm{fill}$. As long as dependencies are being respected, then, there is
a hard minimum number of idle stages:
\begin{equation} \label{eqn:nidle}
N_\mathrm{idle}^\mathrm{min} = 2 N_\mathrm{fill} = P_x+\delta_x-2 +
P_y+\delta_y-2 + \omega_z(P_z+\delta_z-2) .
\end{equation}
This inevitable idle time then gives us a hard minimum total stage count for a
full-domain sweep:
\begin{eqnarray} \label{eqn:nmin}
N_\mathrm{stages}^\mathrm{min} = N_\mathrm{idle}^\mathrm{min} + N_\mathrm{tasks}
= P_x+\delta_x-2 + P_y+\delta_y-2 + \omega_z(P_z+\delta_z-2)
+ \omega_r\omega_m\omega_g \; .
\end{eqnarray}
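The lower bound reduces to simple integer arithmetic. A hypothetical sketch, assuming $\omega_x=\omega_y=1$ so that $\omega_r=\omega_z$:

```python
def min_stage_count(Px, Py, Pz, omega_z, omega_m, omega_g):
    """Minimum stage count for a full-domain sweep (omega_x = omega_y = 1).

    delta_u is 0 for even P_u and 1 for odd P_u, as in the text.
    """
    dx, dy, dz = Px % 2, Py % 2, Pz % 2
    n_idle_min = (Px + dx - 2) + (Py + dy - 2) + omega_z * (Pz + dz - 2)
    n_tasks = omega_z * omega_m * omega_g    # omega_r = omega_z here
    return n_idle_min + n_tasks
```

For a fixed $P=16$, for example, the sketch reproduces the observation that $P_z=2$ (e.g., $4\times2\times2$) incurs fewer idle stages than the KBA choice $P_z=1$ (e.g., $4\times4\times1$).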
Important observation: for a fixed value of $P$,
$N_\mathrm{idle}^\mathrm{min}$ is lower for $P_z=2$ than for the KBA choice of
$P_z=1$, for a given $P$. In both cases $P_z+\delta_z-2=0$, but with $P_z=2$,
$P_x+P_y$ is lower. We remark that Eqs.~(\ref{eqn:nfill}-\ref{eqn:nmin}) differ
from those of reference \cite{mc2015-sweep} because here we restrict ourselves to simple $P_x \times
P_y \times P_z$ partitioning, with contiguous spatial subdomains assigned to
each process.
If we could achieve the minimum stage count the optimal efficiency would be:
\begin{eqnarray} \label{eqn:opte}
\epsilon_{opt} = \frac{1}{
\left[1 + \frac{P_x + \delta_x + P_y + \delta_y - 4 + \omega_z(P_z + \delta_z - 2)}
{\omega_m\omega_g\omega_z} \right]
\left[1 + \frac{T_\mathrm{comm}}{T_\mathrm{task}}\right]}.
\end{eqnarray}
It is not obvious that any schedule can achieve the lower bound of Eq.\
(\ref{eqn:nmin}), because ``collisions'' of the 8 sweepfronts force processes
to delay some fronts by working on others. Bailey and Falgout described a
``data-driven'' schedule that achieved the minimum stage count in some tests, but
there remained an open question of what conditions would guarantee the minimum
count \cite{BaileyFalgout}.
\section{PROOFS OF OPTIMAL SCHEDULING}
\label{sec:proofs}
Here we describe a family of scheduling algorithms that we have found to be
``optimal'' in the sense that they complete the full eight-octant sweep in the
minimum possible number of stages for a given \{$P_u$\} and \{$A_j$\}. For one such
algorithm---the ``depth-of-graph'' algorithm, which gives priority to the task
that has the longest chain of dependencies awaiting its execution---we sketch
our proof of optimality. For another---the ``push-to-central'' algorithm, which
prioritizes tasks that advance wavefronts to central planes in the process
layout---we describe scheduling rules but do not prove optimality. These two
algorithms are endpoints of a one-parameter family of algorithms, each of which
should execute sweeps with the minimum stage count.
To facilitate the discussion and proofs that follow, let us define
\begin{equation} \label{def_i}
i\in(1,P_x)=\text{the }x\text{ index into the process array},
\end{equation}
with similar definitions for the $y$ and $z$ indices, $j$ and $k$. We will also
use
\begin{equation} \label{eq:def_X}
X=\frac{P_x+\delta_x}{2} \;, \;\;\; Y=\frac{P_y+\delta_y}{2} \;, \;\;\; Z
= \frac{P_z + \delta_z}{2} \;
\end{equation}
to define ``sectors'' of the process array, e.g. ($i\in(1,X), \; j\in(1,Y)$) is a sector.
We will use superscripts to represent octants/quadrants, e.g. $^{++}$ to denote
($\Omega_x>0, \; \Omega_y>0$), $^{-+-}$ to denote ($\Omega_x<0, \; \Omega_y>0,
\; \Omega_z<0$), etc.
The depth-of-graph algorithm is essentially the same as the ``data-driven''
schedule of Bailey and Falgout \cite{BaileyFalgout}, with the exception
of tie-breaking rules, which we find to be important. The behavior of the algorithm
will become clear in the proofs
that follow.
The push-to-central algorithm prioritizes tasks according to the
following rules.
\begin{enumerate}
\item{If $i \le X$, then tasks with $\Omega_x > 0$ have priority over tasks
with $\Omega_x < 0$, while for $i>X$ tasks with $\Omega_x < 0$ have priority.}
\item{If multiple ready tasks have the same sign on $\Omega_x$, then for $j\le
Y$ tasks with $\Omega_y > 0$ have priority, while for $j>Y$ tasks with
$\Omega_y<0$ have priority.}
\item{If multiple ready tasks have the same sign on $\Omega_x$ and $\Omega_y$,
then for $k\le Z$ tasks with $\Omega_z>0$ have priority, while for $k>Z$ tasks
with $\Omega_z<0$ have priority.}
\end{enumerate}
Note that this schedule pushes tasks toward the $i=X$ central process plane
with top priority, followed by pushing toward the $j=Y$ (second priority) and
$k=Z$ (third priority) central planes.
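These three rules form a lexicographic priority key. A hypothetical sketch, representing an octant by the sign triple of its direction components and using 1-based process indices:

```python
def push_to_central_priority(octant, i, j, k, X, Y, Z):
    """Priority key for the push-to-central schedule (larger = higher).

    octant = (sx, sy, sz) with each entry +1 or -1 giving the signs of
    (Omega_x, Omega_y, Omega_z); (i, j, k) is the process index; X, Y, Z
    mark the central planes of the process layout.
    """
    sx, sy, sz = octant
    # Rule 1 dominates rule 2 dominates rule 3 via tuple comparison:
    # a task scores on each rule when its sign pushes toward that plane.
    return (int((sx > 0) == (i <= X)),
            int((sy > 0) == (j <= Y)),
            int((sz > 0) == (k <= Z)))
```

Among a process's ready tasks, the scheduler would execute `max(ready, key=...)` with this key; the $x$-plane rule always dominates, matching the stated ordering.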
The depth-of-graph and push-to-central algorithms differ only in regions of the
process-layout domain in which the ``depth'' priority differs from the
``central'' priority for some octants. In those regions for those octants, one
can view the two algorithms as differing only in the degree to which they allow
the two opposing octants' tasks to interleave with each other. The
push-to-central algorithm maximizes this interleaving while the depth-of-graph
algorithm minimizes it. One can vary the degree of interleaving between these
extremes to create other scheduling algorithms. Our analysis (not
shown here) indicates that each of these algorithms achieves the minimum possible
stage count.
\subsection{Depth-of-Graph Algorithm: General}
The essence of the depth-of-graph scheduling algorithm is that each process
gives priority to tasks with the most downstream dependencies, or the greatest
remaining \emph{depth of graph}. (By ``graph'' we mean the task dependency
graph, as pictured in Fig.~\ref{fig:tdg}.) This quantity, which we will denote
$D(O)$ for an octant $O$, is a simple function of
cellset location and octant direction. The depth-of-graph algorithm
prioritizes tasks according to the following rules.
\begin{enumerate}
\item{Tasks with higher $D$ have higher priority.}
\item{If multiple ready tasks have the same $D$, then tasks with $\Omega_x>0$
have priority.}
\item{If multiple ready tasks have the same $D$ and the same sign on
$\Omega_x$, then tasks with $\Omega_y>0$ have priority.}
\item{If multiple ready tasks have the same $D$ and the same sign on
$\Omega_x$ and $\Omega_y$, then tasks with $\Omega_z>0$ have priority.}
\end{enumerate}
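The quantity $D$ and the tie-breaking rules can also be written as a lexicographic key. A hypothetical sketch (octants as sign triples, 1-based process indices):

```python
def depth_of_graph(octant, i, j, k, Px, Py, Pz):
    """Remaining downstream depth D(O) of the task graph for one octant."""
    sx, sy, sz = octant
    return ((Px - i if sx > 0 else i - 1)
            + (Py - j if sy > 0 else j - 1)
            + (Pz - k if sz > 0 else k - 1))

def depth_priority(octant, i, j, k, Px, Py, Pz):
    """Priority key: depth first, then positive-direction tie-breakers.

    Tuple comparison applies rule 1 (depth), then rules 2-4 (the signs
    of Omega_x, Omega_y, Omega_z in turn).
    """
    return (depth_of_graph(octant, i, j, k, Px, Py, Pz),) + octant
```

For example, on the diagonal processes where two opposing quadrants have equal depth, the key's sign components break the tie in favor of $\Omega_x>0$, then $\Omega_y>0$, then $\Omega_z>0$.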
We will develop our proof with the aid of indexing algebra, but the core concept
stems from Eq.~(\ref{eqn:opte}). The formula for $\epsilon_{opt}$ implies that
three conditions are sufficient for a schedule to be optimal:
\begin{enumerate}
\item The central processes must begin working at the earliest possible
stage.
\item The highest priority task must be available to the central processes
at every stage (i.e., once a central process begins working, it is not
idle until all of its tasks are completed).
\item The final tasks completed by the central processes must propagate
freely to the edge of the problem domain.
\end{enumerate}
If these three criteria are met, a schedule will be optimal as defined by
Eq.~(\ref{eqn:opte}). For $P_z=1$ and $P_y>1$ the four central processes are
defined by $i \in (X,X+1)$ and $j \in (Y,Y+1)$. For $P_z>1$ the eight central
processes are defined by these $i$ and $j$ ranges along with $k \in (Z,Z+1)$.
The ``corner'' processes begin at the first stage. This satisfies the first
condition, because the four or eight sweep fronts (for $P_z=1$ or $P_z>1$,
respectively) proceed unimpeded to the four or eight central processes, with no scheduling
decisions required. The second condition is not obvious, but we will
demonstrate that the depth-of-graph prioritization causes it to be met. Any
algorithm that satisfies the second item will likely achieve the third. We show
that depth-of-graph does.
We will examine the behavior of the depth-of-graph scheduling algorithm within
three separate partitioning schemes. The first, $P_z=1$, uses the same
partitioning as KBA; however, as mentioned, we do not impose the same
restrictions on our aggregation, and we launch tasks for all octant-pairs
simultaneously. The second uses $P_z=2$, which we call the ``hybrid''
decomposition since it shares traits with both the $P_z=1$ case and the $P_z>2$
case. We call the latter ``volumetric'', since it decomposes the domain into
regular, contiguous volumes.
Since the basic scheme of our algorithm sets priorities based on
\emph{downstream} depth of graph, we will use $D(O)$ to represent this quantity
for octant (or octant-pair) $O$:
\begin{equation} \begin{array}{cccc}
D(+-) = & (P_x-i) & + (j-1) & \\
D(-+-) = & (i-1) & + (P_y-j) & + (k-1) \\
\end{array} \; .
\end{equation}
Since much of the algebra for stage counts depends on the depth of a task
\emph{into} the task graph, we define a direction-dependent variable $s$:
\begin{equation}
s^{--+} = (P_x-i) + (P_y-j) + (k-1)
\end{equation}
etc. These are measures of upstream ($s$) and downstream ($D$) dependence chains and
are related by $D(O) + s(O) = $ total depth of graph $ -1= P_x + P_y + P_z - 3$.
We find that $D$ is convenient for discussing priorities, and $s$ is
convenient for quantifying the stage at which a task will be executed.
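Both depths follow mechanically from the octant signs and the process index. A hypothetical sketch that computes the pair $(D, s)$:

```python
def graph_depths(octant, i, j, k, Px, Py, Pz):
    """Downstream depth D(O) and upstream depth s(O) for process (i, j, k).

    octant = (sx, sy, sz), the signs of the direction components.
    """
    sx, sy, sz = octant
    D = ((Px - i if sx > 0 else i - 1)        # downstream extent in x
         + (Py - j if sy > 0 else j - 1)      # ... in y
         + (Pz - k if sz > 0 else k - 1))     # ... in z
    s = ((i - 1 if sx > 0 else Px - i)        # upstream extent in x
         + (j - 1 if sy > 0 else Py - j)      # ... in y
         + (k - 1 if sz > 0 else Pz - k))     # ... in z
    return D, s
```

By construction each coordinate contributes its full extent minus one to the sum $D+s$, so the identity $D(O)+s(O)=P_x+P_y+P_z-3$ holds for every octant and process index.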
Our aggregation factors determine what we cluster together as a single task. A
task is the computation for a single set of $A_r$ cells, for a single set of
$A_g$ energy groups, for a single set of $A_m$ angles. We define $M\equiv$
the number of tasks per process per quadrant for $P_z=1$ and per octant for
$P_z>1$. This is different from $N_\mathrm{task}$ discussed above; it takes 1/4
the value if $P_z=1$ and 1/8 otherwise. We use $m^O\in(1,M)$ to represent a
specific task from the ordered list for octant (or quadrant) $O$, and $\mu$ to
represent the stage at which a task is completed. Thus, $\mu(m^O,i,j)$ is the
stage at which process $(i,j)$ performs task $m$ in octant $O$.
\subsubsection{Sector symmetry}
If $P_x$, $P_y$ and $P_z$ are all even, then the sectors are perfectly symmetric
about the planes $i = X + \tfrac{1}{2}$, $j = Y + \tfrac{1}{2}$, and $k = Z +
\tfrac{1}{2}$ (half-integer indices denote process subdomain boundaries). In
the case of $P_u$ odd, there is an asymmetry: the sector of greater $u$ is one
process narrower.
Equation (\ref{eqn:opte}) shows that the optimum number of stages for an odd
$P_x$ or $P_y$ (or $P_z>2$) equals that for $P_u + 1$. To simplify the analysis
we will convert cases with any odd $P_u$ to even cases with $P_u + 1$ (except
for $P_z=1$) by imagining additional ``ghost processes.'' The ``ghost
processes'' do not change the optimal stage count, and they leave us with
perfectly symmetric sectors. Thus, we assume that $\delta_x = \delta_y = 0$
(and $\delta_z=0$ for $P_z>2$), we focus on the sector with $i \in (1,X)$, $j
\in (1,Y)$ and $k \in (1,Z)$, and we know that the other sectors behave
analogously.
\subsection{$P_z=1$ Decomposition}
\label{sec:2D}
For this partitioning we aggregate such that $\omega_x=\omega_y=1$. In the next
section, we will discuss how $\omega_z$ and $\omega_m$ are optimized based on a
performance model, but for now they are treated as free variables. We have
defined $M = \omega_z\omega_g (\omega_m/4)$, which encapsulates the multiple
cellsets, group-sets, and direction-sets within an octant for a process.
Since the values $\omega_x=\omega_y=1$ ensure that every completed task will
satisfy dependencies downstream, our analysis will not directly involve
aggregation factors.
The foundation of the scheduling algorithm we are analyzing is downstream depth
of graph. $D(O)$ depends on angleset direction and
cellset location, and different regions of the problem domain will have
different priority orderings. These regions can be determined by index
algebra.
\subsubsection{Priority regions}
Since we assign priorities to octant-pairs (quadrants) based on $D(O)$, which is a simple function
of $i$ and $j$, it is a simple matter to determine in advance a process's
priorities. The domain thus divides into
distinct, contiguous regions with definite priorities. For example, process
$(1,1)$ executes tasks by quadrant in the order $++$, $+-$, $-+$, $--$. At times we will
find it convenient to refer to a region by its priority ordering. It is also
convenient to refer to a quadrant as a region's primary, secondary, etc., priority.
The boundaries between regions of different priorities are planes (or lines in
our 2D process layout) defined by the solutions of the equations $D(O_1) =
D(O_2)$ for distinct octants (or quadrants) $O_1$ and $O_2$. For example, let
us examine quadrants $++$ and $+-$.
\begin{align} \label{eq:j_eq_Y}
D(++) & = D(+-) \notag \\
\implies (P_x - i) + (P_y - j) & = (P_x - i) + (j-1) \notag \\
\implies j & = Y + \frac{1}{2}
\end{align}
(We continue to assume that $P_x$ and $P_y$ are even.) The non-integer value
means the plane passes \emph{between} processes. Thus, the even case cleanly
divides the problem domain into two regions: $j<Y$, where $++$ quadrants have
priority, and $j>Y$, where $+-$ quadrants have priority. Thus, there are no
ties to break for any process for these two quadrants.
Note that Eq.~(\ref{eq:j_eq_Y}) is also the solution of $D(-+) = D(--)$. There
is an analogous plane bounding the two quadrant pairs with differing signs in
$\Omega_x$, given by
\begin{equation}
i = X + \frac{1}{2} .
\end{equation}
There are also two quadrant pairs with sign differences in both $\Omega_x$ and
$\Omega_y$. Solving for these boundaries, we find
\begin{equation} \label{eq:i_plus_j}
(i-X) + (j-Y) = 1\;\;\; ; \;\;\;
(i-X) - (j-Y) = 0 .
\end{equation}
We see that the first of Eqs.~(\ref{eq:i_plus_j}) is a line of slope $-1$ through
the center of the domain, and integer values of $i$ and $j$ satisfy the
equation. Thus, with $P_x$ and $P_y$ both even, there are ``diagonal lines'' of processes
for which pairs of octants have the same priority---a tie. We use a simple
tie-breaking scheme here: the first tie-breaker goes to tasks with
$\Omega_x>0$, and the second tie-breaker is for $\Omega_y>0$. Once we apply our
tie-breaker, the diagonal-line processes can be thought of as belonging to the
region that prioritizes the winning quadrant.
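This priority ordering is easy to make concrete. The following function (our own illustrative sketch, not PDT code) ranks the four quadrants for a given process by depth of graph and applies the two tie-breakers described above:

```python
def quadrant_priorities(Px, Py, i, j):
    """Priority ordering of the four quadrants for process (i, j).

    A quadrant's depth of graph at a process is largest when the process
    is closest to the corner where that quadrant's sweep originates, so
    we sort by that distance; ties go first to Omega_x > 0, then to
    Omega_y > 0, as described in the text.
    """
    quads = {'++': (+1, +1), '+-': (+1, -1), '-+': (-1, +1), '--': (-1, -1)}

    def dist(sx, sy):
        # distance from the corner where this quadrant's wave originates
        return (((i - 1) if sx > 0 else (Px - i)) +
                ((j - 1) if sy > 0 else (Py - j)))

    return sorted(quads, key=lambda q: (dist(*quads[q]),
                                        -quads[q][0], -quads[q][1]))
```

Process $(1,1)$ comes out in the order $++$, $+-$, $-+$, $--$, matching the ordering quoted above, and every process on a tie line inherits the tie-broken ordering.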
Figure~\ref{fig:2d_regions} shows the central portion of a problem domain
divided into priority regions. We call attention to ``central'' processes,
shaded in the figure, which determine much of the behavior of our scheduling
algorithm. Note that the process layout in the figure could be either the
entire domain or a small central subset; the lines and all they signify are the
same either way.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.3]{./2d_regions2.png}
\caption{{\bf Priority regions} for $P_x$ and $P_y$ even. Lines $A$ through $D$
represent Eqs.\ (\ref{eq:j_eq_Y}-\ref{eq:i_plus_j}), respectively, with blue
versions after tie-breaker. Central processes are shaded. The figure is
``zoomed in'' on the central portion of the process domain, which may have
arbitrarily large extent.}
\label{fig:2d_regions}
\end{figure}
\subsubsection{Primary quadrant: filling the pipe}
\label{sec:p1_quad}
At the outset of a sweep only the four ``corner'' processes have their incoming
fluxes (from boundary conditions). Each corner process completes its first
task at stage one, which satisfies dependencies for its downstream neighbors.
Thus begin the waves of task-flow called the sweep.
Let us examine the order in which processes complete tasks in their primary
quadrant (e.g., quadrant $++$ for sector $--$). We begin at stage $\mu=1$ with
process $(i,j)=(1,1)$ performing task $m=1$. Once this is completed (i.e., in
stage 2), processes $(1,2)$ and $(2,1)$ can perform task $1$, and process
$(1,1)$ moves on to task 2. In stage 3, processes $(1,3)$, $(2,2)$, and
$(3,1)$ perform task 1, processes $(1,2)$ and $(2,1)$ perform task 2, and
process $(1,1)$ performs task 3. We can generalize this pattern with the
simple expression
\begin{equation}
\label{sector_one_primary}
\mu(m^{++}, i, j) = (i-1) + (j-1) + m^{++} = s^{++} + m^{++}.
\end{equation}
For a given process ($i,j$ pair), this equation describes the task number
incrementing with each successive stage. For a given task (value of $m^{++}$),
it describes a set of processes along a line of slope $-1$ moving up and right
at each stage.
The procession of tasks proceeds in this way from each corner, with the
processes in each sector performing tasks from their primary quadrant as long
as they last. Thus, we find that primary quadrant task execution follows
Eq.~(\ref{sector_one_primary}) as well as the analogous:
\begin{equation}
\label{sector_two_primary}
\mu(m^{+-}, i, j) = (i-1) + (P_y-j) + m^{+-} = s^{+-} + m^{+-},
\end{equation}
\begin{equation}
\label{sector_three_primary}
\mu(m^{-+}, i, j) = (P_x-i) + (j-1) + m^{-+} = s^{-+} + m^{-+},
\end{equation}
and
\begin{equation}
\label{sector_four_primary}
\mu(m^{--}, i, j) = (P_x-i) + (P_y-j) + m^{--} = s^{--} + m^{--}.
\end{equation}
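Equation (\ref{sector_one_primary}) can be checked against the dependency structure directly. The short sketch below (illustrative only) computes the earliest possible stage for each $++$ task from its dependencies alone and recovers the closed form:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def earliest_stage_pp(i, j, m):
    """Earliest stage for task m of the ++ quadrant on process (i, j),
    from the dependencies alone: the same task on upstream neighbors
    (i-1, j) and (i, j-1), plus the previous task on this process
    (one task per stage per process).  Out-of-range arguments are
    satisfied boundary conditions (stage 0)."""
    if i < 1 or j < 1 or m < 1:
        return 0
    return 1 + max(earliest_stage_pp(i - 1, j, m),
                   earliest_stage_pp(i, j - 1, m),
                   earliest_stage_pp(i, j, m - 1))
```

The recursion reproduces $(i-1)+(j-1)+m$ for every $(i,j,m)$, exactly as Eq.~(\ref{sector_one_primary}) asserts; the other three sector formulas follow by symmetry.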
\subsubsection{Starting on the central processes}
As can be seen from Eqs.~(\ref{eq:j_eq_Y})-(\ref{eq:i_plus_j}) or
Fig.~\ref{fig:2d_regions}, each quadrant has top priority for one entire sector,
so even if other tasks are available, these stage counts will hold within the
initial sector. Thus, the central processes are reached in $X+Y-1$ stages,
just as in Eq.~(\ref{eqn:nfill}), which satisfies the first condition for
optimality: the central processes begin work at the first possible stage.
This result also gives us a start on the second condition: the central
processes stay busy until their work is done. It is clear from
Eqs.~(\ref{sector_one_primary}-\ref{sector_four_primary}) that successive tasks
in a given quadrant take place at successive stage counts. If the central
process gets its first task at stage $\mu$, it will receive the second at
$\mu+1$, etc. This guarantees that all of the tasks in a central process's
highest priority quadrant will arrive in sequence, allowing the process to
stay busy as it processes its first quadrant.
Observe the symmetry between sectors: as process $(X,Y)$ computes its $++$
tasks, $(X,Y+1)$ and $(X+1,Y)$ compute their $+-$ and $-+$ tasks, respectively,
and communicate to $(X,Y)$. This satisfies half of $(X,Y)$'s dependencies for
these two quadrants. The other dependencies remain; for example, tasks in the
$+-$ quadrant cannot be executed at $(X,Y)$ until $(X-1,Y)$ has computed them
too. This is addressed next.
\subsubsection{Second-priority tasks}
The second-priority quadrant for process $(X,Y)$ is $+-$. Its tasks began at
$(1,P_y)$ and propagated as a mirror image of the $++$ tasks from $(1,1)$. The
first $+-$ task becomes available to process $(1,Y)$ at stage
\begin{equation}
\mu = (1 - 1) + (P_y - Y ) + 1 = Y + 1 \; ,
\end{equation}
just as the first $++$ task becomes available to $(1,Y+1)$. However, $(1,Y)$
works on $++$ tasks until they are exhausted; only then will it begin the $+-$
tasks, whose dependencies are already satisfied (one
by boundary conditions, one by information from $(1,Y+1)$). This results in a delay on
secondary-quadrant tasks (e.g., quadrant $+-$ for sector $--$ processes), given by
\begin{equation*}
d = \text{delay} = \mu(1+M^{++}, 1, Y) - \mu(1^{+-}, 1, Y) = M - 1 \; .
\end{equation*}
As the tail of the $++$ task wave propagates along the processes at $j = Y$,
the first $+-$ task flows in right behind it. This is illustrated in the fifth frame of Fig.~\ref{fig:2d-sweep} in App.~\ref{sec:appendix1}. The wave-front propagates
as a mirror image of the primary quadrant, starting from $(1,Y)$. The
dependencies from $j = Y+1$ have already been satisfied, those at the boundary
are given, and the final $++$ task has already swept past, so we find that
\begin{equation}
\mu = (i-1) + (P_y-j) + (M-1) + m^{+-}
\end{equation}
for processes that give quadrant $+-$ second priority. This includes the central
process, which thus transitions smoothly from $++$ tasks to $+-$ tasks, staying
busy until tasks from these two quadrants are finished.
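The delay arithmetic above is easy to verify numerically. The snippet below (with arbitrary illustrative values of $P_x$, $P_y$, and $M$) evaluates the two stage formulas at process $(1,Y)$:

```python
# Illustrative even layout; any even Px, Py and any M >= 1 behave the same.
Px = Py = 8
X, Y = Px // 2, Py // 2
M = 5

def mu_pp(i, j, m):          # Eq. (sector_one_primary)
    return (i - 1) + (j - 1) + m

def mu_pm(i, j, m):          # the +- wave with no delay
    return (i - 1) + (Py - j) + m

# Process (1, Y) finishes its last ++ task and frees up at this stage...
first_free = mu_pp(1, Y, M) + 1
# ...while the first +- task had already been available since this stage:
first_avail = mu_pm(1, Y, 1)

delay = first_free - first_avail
assert delay == M - 1        # the d = M - 1 delay derived above
```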
\subsubsection{Third-priority tasks}
Beginning at $(X,1)$, the central process's third-priority quadrant
($-+$) begins its march through the ($--$) sector with a progression symmetric
to that of the central process's second-priority quadrant (discussed in the immediately
preceding subsection), with
\begin{equation}
\mu = (P_x-i) + (j-1) + (M-1) + m^{-+} \;,
\end{equation}
for the processes that give second priority to $-+$---the processes that
own cellsets below the diagonal in the $(--)$ sector. This region stops just
shy of the central process; its boundary is given by Eq.~(\ref{eq:i_plus_j}).
The second- and third-priority task waves arrive at the processes given by the
equal-depth equation at the same stage---the processes that own cellsets on
the diagonal---but the third-priority tasks lose the
tie-breaker. On each side of the boundary, processes continue to execute
their second-priority tasks as the third-priority tasks become available.
The central process (and the others along the diagonal line) finishes the last
second-priority quadrant's task at stage
\begin{align*}
\mu & = (X-1) + (P_y-Y) + (M-1) + M^{+-} \\
& = X + Y + 2M - 2 \; .
\end{align*}
The processes across the boundary finished their second-priority tasks the
stage before. Now that the second-priority tasks are finished, the processes
on both sides begin their third-priority tasks. The central process, $(X,Y)$,
has had the dependency from $(X,Y-1)$ met from the time it began its $+-$ tasks;
it simply prioritized the latter tasks over the available $-+$ work. Recalling our sector
symmetry, the dependency from $(X+1,Y)$ was met as it completed its first $++$
task. Thus, the central processes all stay busy through their third-priority
quadrants.
The two incoming task waves (from the second- and third-priority quadrants) arrived at the (tie-broken jagged-diagonal) region boundary intact. Each process adjacent to the region boundary executes all of its second-priority tasks. When the last second-priority task is complete, the two regions begin their third-priority tasks all along the (jagged-diagonal) region boundary.
Thus, the waves resume their propagation delayed but unbroken. For
$-+$ tasks,
\begin{equation} \label{eq:mu_second}
\mu = (P_x-i) + (j-1) + (2M-1) + m^{-+} \; .
\end{equation}
Since the $-+$ tasks lose the tie-breaker, they are held up one stage longer; the
$+-$ tasks therefore resume their progress a stage earlier:
\begin{equation} \label{eq:mu_third}
\mu = (i-1) + (P_y-j) + (2M-2) + m^{+-} \; .
\end{equation}
Both task waves sweep along unimpeded, as they now have the highest priority in
their current regions. As they go, they are fulfilling (in advance)
dependencies for the adjacent sectors' fourth-priority tasks.
\subsubsection{Fourth-priority tasks}
We have seen that each central process begins its first task at the
earliest possible stage. We have also seen that its supply of tasks is
continuous through its third-priority quadrant. The dependencies for its
fourth-priority quadrant are the two neighboring central processes. Since each
fourth-priority quadrant was a higher priority for the other processes, and
since they have all completed their first three quadrants, they have now
satisfied each others' dependencies. Thus, the second condition for an optimal
schedule has been fulfilled.
The third condition is that the final tasks propagate without delay to the
problem boundaries. Since the third-priority task waves are already retreating
from the central processes, as shown by Eqs.\
(\ref{eq:mu_second}-\ref{eq:mu_third}), we know that there are no competing
tasks remaining. These equations also demonstrate that the fourth-priority
dependencies have already been satisfied. Just as we have seen task waves begin
propagating from $(1,1)$, $(1,Y)$ and $(X,1)$, the fourth-priority wave now
propagates smoothly from $(X,Y)$, with
\begin{equation}
\mu = (P_x-i) + (P_y-j) + (3M-2) + m^{--} \; .
\end{equation}
The final task of the fourth-priority quadrant will thus be executed by
process $(1,1)$ at stage
\begin{align}
\mu & = (P_x-1) + (P_y-1) + (3M-2) + M^{--} \\
& = P_x + P_y + 4M - 4 = P_x + P_y - 4 + N_\mathrm{tasks} \; ,
\end{align}
which is exactly the minimum we established in Eq.~(\ref{eqn:nmin}).
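This result can also be confirmed by brute force. The sketch below (an independent check written for this discussion, not the PDT implementation) simulates a greedy scheduler in which every process always executes the ready task whose quadrant has the highest static priority; for even $P_x$ and $P_y$ it finishes in exactly $P_x + P_y - 4 + 4M$ stages:

```python
from itertools import product

def simulate_sweep(Px, Py, M):
    """Greedy depth-of-graph schedule for the 2D (P_z = 1) sweep.

    Each process owns one cellset and must execute M tasks per quadrant.
    Task m of quadrant q is ready on a process once both upstream
    neighbors (with respect to q's sweep direction) have finished their
    own task m; off-domain neighbors count as boundary conditions.
    Returns the total number of stages needed to finish every task.
    """
    quads = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]   # sweep directions
    procs = list(product(range(1, Px + 1), range(1, Py + 1)))
    done = {(q, p): 0 for q in quads for p in procs}   # tasks finished so far

    def dist(q, i, j):
        # Distance from the corner where quadrant q's wave originates;
        # smallest distance <=> largest remaining depth of graph.
        sx, sy = q
        return ((i - 1) if sx > 0 else (Px - i)) + \
               ((j - 1) if sy > 0 else (Py - j))

    def ready(q, i, j):
        m = done[(q, (i, j))]                           # index of next task
        if m >= M:
            return False
        sx, sy = q
        for ui, uj in ((i - sx, j), (i, j - sy)):       # upstream neighbors
            if 1 <= ui <= Px and 1 <= uj <= Py and done[(q, (ui, uj))] <= m:
                return False
        return True

    stages = 0
    while any(v < M for v in done.values()):
        picks = []
        for i, j in procs:
            avail = [q for q in quads if ready(q, i, j)]
            if avail:
                # Highest priority: smallest distance (deepest remaining
                # graph); ties go to Omega_x > 0, then Omega_y > 0.
                q = min(avail, key=lambda q: (dist(q, i, j), -q[0], -q[1]))
                picks.append((q, (i, j)))
        for q, p in picks:                  # all processes act in parallel
            done[(q, p)] += 1
        stages += 1
    return stages
```

The simulation makes no use of the region analysis; it simply obeys the dependencies and the static priorities, and the stage counts it produces agree with the minimum established here.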
\subsection{$P_z=2$ Decomposition (``Hybrid'')}
\label{sec:hybrid}
Now consider the case of $P_z=2$. Above, we considered task groups in terms of
quadrants, which are actually sets of two octants. We did not specify the
ordering of tasks within a quadrant because the proof holds true regardless of
that order. Thus, we are free to do the entire ``upward'' ($\Omega_z>0$) octant
first, followed by the entire downward octant, and all of the properties we have
established above are unchanged.
The depth-of-graph algorithm schedules tasks for the $k=1$ processes exactly
this way, and the $k=2$ processes mirror the ordering. While the lower
processes solve their upward tasks, the upper processes solve their downward
tasks, so that by the time one is done, the other is waiting. All other
scheduling concerns are handled exactly the same as in $P_z=1$.
This leads to a result that may not be obvious: if we take a $P_z=1$ problem and
double both $N_z$ and $P_z$, then the task flow for the $k=1$ processes
is indistinguishable from the $P_z=1$ case with the original $N_z$. They work their $\Omega_z>0$ octants, as
before, and then their $\Omega_z<0$ octants, just as before. Thus,
using the hybrid decomposition instead of the $P_z=1$ allows for doubling the number
of cells in $z$ and doubling the number of processes \emph{with no increase in
solve time}.
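The stage-count bookkeeping behind this claim can be checked directly. The sketch below uses the idle- and work-stage counts established in this paper, with $\delta_u = 1$ for odd $P_u$ and $0$ for even (illustrative parameter values):

```python
def n_stages_min(Px, Py, Pz, wm, wg, wz):
    """Minimum stage count for the full eight-octant sweep, with
    delta_u = 1 for odd P_u and 0 for even."""
    dx, dy, dz = Px % 2, Py % 2, Pz % 2
    return (Px + dx) + (Py + dy) - 4 + wz * (Pz + dz - 2) + wm * wg * wz

# Doubling N_z and P_z (1 -> 2) at fixed aggregation and cells per process
# leaves the stage count unchanged: wz*(Pz + dz - 2) vanishes either way.
assert n_stages_min(16, 16, 1, 4, 2, 3) == n_stages_min(16, 16, 2, 4, 2, 3)
```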
This benefit rests on the initial step of computation. In $P_z=1$, only four
processes had tasks with no unsatisfied dependencies; now all eight corner
processes launch their primary octants at once. We experience the same
pipe-fill penalty as before, and the number of tasks per process is the same.
We perform twice the work with twice the processors in the same time,
except for a possible communication delay between upper and lower
halves of the problem.
\subsection{$P_z>2$ Decomposition with $\omega_x=\omega_y=\omega_z=1$ (``Volumetric'')}
In this section we examine what we call a volumetric decomposition, which
is the full extension of the decomposition into three dimensions. The
same requirements for optimality apply, and the task flow follows the same
principles. For this analysis, we will assume that $\omega_x=\omega_y=\omega_z=1$, so that
each process owns only a single cellset.
\subsubsection{Priority regions}
Much as before, the domain is divided into regions with different priority
orders based on relative depths of graph for different octants. For $P_z=1$,
there were six distinct pairs of colliding quadrants (as in
Eqs.~\ref{eq:j_eq_Y}-\ref{eq:i_plus_j}) and eight regions of different priority
orderings (as shown in Fig.~\ref{fig:2d_regions}). For $P_z>2$, there are 28
distinct pairs of colliding octants ($7+6+...+1 = 28$), and, as
we will see, 96 different regions with different priority orderings.
The regions are separated by the planes along
which two octants have equal depth of graph.
For $P_z=1$ or $P_z=2$, after the primary quadrant there were two quadrants
advancing in each sector. For $P_z>2$ there are three octants entering each
sector, which we will nickname $R$, $B$ and $G$ (for red, blue and green). We
will call the primary octant $P$. The priority regions $(P,R,...)$, $(P,B,...)$
and $(P,G,...)$ are defined by planes that we will call the $RB$, $RG$ and $BG$
boundaries, given by $D(R)=D(B)$, etc., as illustrated in Fig.~\ref{fig:rgb}.
Because these three planes intersect along a single line (perpendicular to the
plane of the figure), they divide the sector into six regions. (Later we will see that
other octants further divide the six into twelve.) Each octant has
second priority in two of these regions (adjacent) and third priority in
another two (non-adjacent).
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.2]{./dads_rgb.pdf}
\includegraphics[scale=0.2]{./rgb_fill2.pdf}
\includegraphics[scale=0.2]{./rgb_p3b.pdf}
\caption{Left: Planes of equal depth of graph divide a sector into regions of
different priority. Center: Each of the \{R,G,B\} octants has second priority in two adjacent regions
of the six regions. Right: Each of the \{R,G,B\} octants has third priority in two non-adjacent regions.}
\label{fig:rgb}
\end{figure}
Whereas for $P_z=1$ the secondary and tertiary quadrants finished a sector
before the final quadrant moved in, for $P_z>2$ we see six octants at play in a
sector. The three octants with directions opposite $R$, $B$ and $G$, which we
will call $\bar R$, $\bar B$ and $\bar G$, arrive before the first three are
finished. The boundary planes for each of these with the $RBG$ octants that are
not its inverse are the sector boundaries. The boundary planes with their
opposites are perpendicular to the problem domain's diagonals; each of these
carves up two of the six regions where the second octant of $RBG$ was
unchallenged.
Since $D(O) + D(\bar O) = \mathrm{constant}$, the top four octants for a
priority region are reversed for the final four. For example, a region with
$(P, R, G, \bar B,...)$ must in fact have the priorities $(P, R, G, \bar B, B,
\bar G, \bar R, \bar P)$. (We will take this region as our example in
the description that follows.) These divisions give us twelve
distinct priority regions per sector.
\subsubsection{First priority octants}
Things begin much as before, with eight corner processes initiating waves of
tasks in their primary octants. The stage counts are the same:
\begin{equation}
\mu(m, i, j, k) = s + m \; , \text{ where $s$ is the process's distance from the initiating corner and $m$ is the task index.}
\end{equation}
The waves propagate to the central processes in $X+Y+Z-2$ steps, as in Eq.\
(\ref{eqn:nfill}). The central processes receive all primary-octant tasks in
smooth succession, and they begin to satisfy their neighbors' dependencies.
\subsubsection{Second, third and fourth priority octants}
The second-, third- and fourth-priority octants ($R$, $G$ and $B$) collide with the
first-priority octant (P) at three of the sector's corners, and the standing
collision fronts spread across the sector boundaries. Once the tail of the P
wave passes, the $R$, $G$ and $B$ waves begin propagating inward from those points.
They initially propagate smoothly, each with delay of $M-1$ stages.
These three octants collide with each other at lines, and their collision fronts
spread from there. They all reach the central process at the same stage, $X +
Y + Z + M - 2$, and the central process begins its second-priority tasks.
In the $P_z=1$ case, everything holds static while the central process
executes its second-priority tasks, but in the $P_z>2$ case, the $R$, $G$ and $B$ octants
move into their tertiary regions (as in the rightmost image in Fig.~\ref{fig:rgb}) during this phase.
They suffer a
delay of $M$ at these interfaces, and then continue propagating in a sort of
rotation around the central process. Once the central process is ready, it
moves on to its third- and fourth-priority octants, which have been ready for it
since it began its second-priority tasks.
\subsubsection{Fifth, sixth and seventh priority octants}
As can be seen from sector symmetry, the next octants have been ready to enter
the sector since the $R$, $G$ and $B$ waves collided. However, the entry
processes had $2M$ tasks available with higher priority. Once these are
done (e.g., once the final $R$ and $G$ tasks are propagating along the RG
boundary), the next octant (e.g., $\bar B$) enters the sector. It has been
delayed $3M-2$ stages in total, and propagates as $\mu = s + m + 3M - 2$.
Each octant collides with its opposite at an entire plane, where each side
finishes its priority. When they switch sides, they continue with a planar
wavefront at a delay of $M$ (or $M-1$ for the winner of the tie-breaker).
As the waves continue to intersect, they continue to delay but not disrupt each
other, and they all sort of pivot around the central process.
While there are many collisions and priority regions in a sector, the central
process is never in danger of running out of available work. Thus, the second
condition for optimal scheduling is met.
\subsubsection{Final octants}
Symmetry assures that each central process's dependencies for its final octant
are met before it finishes the other octants. The previous octants have already
propagated well past the central process, fulfilling dependencies as they
went. The final octants meet no competition on their way to the problem
boundary.
Throughout this choreography, the fundamental requirements of an optimal scheduling
algorithm are met. The central processes get their work as early as possible,
they stay busy until they are done, and their final tasks propagate freely to the corners
of the problem domain, marking the end of the eight-octant boundary-to-boundary sweep.
\subsection{$P_z>2$ Decomposition with $\omega_z>1$}
The optimal scheduling strategies described for particular cases of partitioning
and aggregation in previous subsections also apply to the remaining case, in which $P_z>2$
and $\omega_z>1$. This is a merger of the ``hybrid'' ($P_z=2, \omega_z \ge 1$)
and ``volumetric'' ($P_z>2, \omega_z=1$) cases.
In the case of $P_z=2$ and $\omega_z \ge 1$, the central process receives its first task after
$X + Y -2$ stages, and there are $\omega_m \omega_z \omega_g$ work stages.
In the case of $P_z>2$ and $\omega_z=1$, the central process receives
its first task after $X+Y+Z-3$ stages, and there are $\omega_m \omega_g$ work stages.
In previous subsections we showed that for these cases, the scheduling
algorithms described herein (such as the ``depth-of-graph'' algorithm) execute
full eight-octant sweeps in the minimum possible number of stages.
For the more general case of $P_z>2$ and $\omega_z>1$, it remains true that the
scheduling algorithms described herein execute the sweep in the minimum number
of stages. Showing this requires the same kinds of arguments used for previous cases.
One might ask whether it ever makes sense to have $\omega_z>1$ when $P_z > 2$.
Sometimes it does. Recall that an important ingredient in parallel efficiency is the ratio
of idle-stage count to working-stage count:
\begin{align}
\text{ratio} & = \frac{P_x + \delta_x + P_y + \delta_y -4+ \omega_z(P_z+\delta_z-2)}
{\omega_m\omega_g\omega_z}
\end{align}
If all $P_u>2$, which is the case under discussion, then we see that increasing $\omega_z$
while holding all other variables constant \emph{decreases} the idle-to-working ratio. Of
course, it also increases the communication-to-working ratio by making more, smaller
tasks. But in practice we find that the optimal partitioning and aggregation for large problems,
taking everything into account, includes $P_z>2$
and $\omega_z>1$.
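The claimed monotonicity is easy to check numerically (illustrative parameter values):

```python
def idle_to_work(Px, Py, Pz, wm, wg, wz, dx=0, dy=0, dz=0):
    """Idle-stage to working-stage ratio from the expression above."""
    return (Px + dx + Py + dy - 4 + wz * (Pz + dz - 2)) / (wm * wg * wz)

# With all P_u > 2, raising omega_z lowers the idle-to-working ratio
# (while raising the communication-to-working ratio, which this
# expression does not capture):
assert idle_to_work(16, 16, 8, 4, 2, 4) < idle_to_work(16, 16, 8, 4, 2, 1)
```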
\subsection{Reflecting Boundaries}
Our decomposition and scheduling algorithms have mirror symmetry across a problem's
$x$-$y$, $x$-$z$, and $y$-$z$ planes. This leads to the desirable property that problems
with reflecting boundaries will execute exactly like their full-domain counterparts.
That is, from the perspective of the processes
assigned to a given portion of the problem, it makes no difference if incoming angular fluxes on a central
mirror plane of a symmetric problem come from processes in the neighboring portion of the full spatial domain or from
reflection of outgoing angular fluxes---in either case the algorithm ensures that those tasks
are available when it is time to execute them. That is, for example, full-domain execution of a symmetric
3D problem with $P$ processes proceeds exactly like execution of one eighth of the problem
with $P/8$ processes and three reflecting boundaries.
\section{OPTIMAL SWEEPS}
\label{sec:opt_sweeps}
Here we describe how we have used our optimal {\em scheduling} algorithm to
generate an optimal {\em sweep} algorithm. Given an optimal schedule we know
exactly how many stages a complete sweep will take, and thus can estimate the
parallel efficiency of a sweep with such a schedule:
\begin{eqnarray} \label{eq:opteb}
\epsilon_{opt} = \frac{1}{
\left[1 + \frac{P_x + \delta_x + P_y + \delta_y -4+ \omega_z(P_z+\delta_z-2)}
{\omega_m\omega_g\omega_z} \right]
\left[1 + \frac{T_\mathrm{comm}}{T_\mathrm{task}}\right]}.
\end{eqnarray}
Given Eq.\ (\ref{eq:opteb}), we can choose the \{$P_x, P_y, P_z, \omega_m,
\omega_g, \omega_z$\} that
maximize efficiency and thus minimize total sweep time. This optimization over the
\{$P_u$\} and \{$\omega_j$\}, coupled with the scheduling algorithm that executes the
sweep in $N_{stages}^{min}$ stages, yields what we call an {\em optimal sweep}
algorithm.
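A direct transcription of Eq.~(\ref{eq:opteb}) makes such optimization searches easy to script (the parameter values in the usage note are placeholders, not measured constants):

```python
def eps_opt(Px, Py, Pz, wm, wg, wz, T_comm, T_task, dx=0, dy=0, dz=0):
    """Parallel efficiency of an optimally scheduled sweep, Eq. (opteb).
    dx, dy, dz are the delta_u terms (1 for odd P_u, 0 for even)."""
    idle = (Px + dx + Py + dy - 4 + wz * (Pz + dz - 2)) / (wm * wg * wz)
    return 1.0 / ((1.0 + idle) * (1.0 + T_comm / T_task))
```

With no idle stages and free communication the efficiency is 1 by construction, e.g.\ eps\_opt(2, 2, 2, 1, 1, 1, 0.0, 1.0); it degrades as the idle-to-working or communication-to-task ratios grow.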
The denominator of the efficiency expression is the product of two terms, and
optimization means minimizing this product. Several observations are in order.
First, aggregation into a larger number of smaller tasks causes the first term to
decrease (because
$\omega_m\omega_g\omega_z$ is the number of tasks) and the second term to
increase (because $T_{task}$ shrinks while the latency portion of $T_{comm}$
remains fixed). Thus, for a given \{$P_u$\} and given problem size there will
be some set of aggregation parameters that minimize the product.
Second, the
term $(P_z+\delta_z-2)$ vanishes when $P_z=1$ {\it or} $2$, leading to the
benefit of our ``hybrid'' partitioning discussed above: If we change from $P_z=1$
to $P_z=2$ and keep processor count and task size constant, the first term decreases
(because $P_x +P_y$
decreases) and the second stays about the same (because
$T_{task}$ stays the same).
Third, if we use $P_x \approx P_y \approx P_z$ and $\omega_z=\omega_y=\omega_x = 1$ (the usual ``volumetric'' decomposition
strategy), then $P_x+P_y+P_z$ grows as $P^{1/3}$ instead of the $P^{1/2}$ that
occurs when $P_z$ is fixed equal to 1 or 2. This hints that for very high processor counts
a volumetric decomposition might be best.
It is interesting to compare $\epsilon_{\text{KBA}}$
(which uses $P_z=1$ and sweeps two octants at a time) to
$\epsilon_{opt,hyb}$ (which uses $P_z=2$ and sweeps all eight octants simultaneously),
especially in the limit of large $P$ (which allows us to ignore the $\delta_u$
and the numbers $2$ and $4$ that appear in the equations). In the large-$P$
limit, with $P_x+P_y \approx P^{1/2}+P^{1/2}$, Eq.\ (\ref{eqn:kbae}) becomes
\begin{eqnarray} \label{eqn:kbae2}
\epsilon_{\text{KBA}} \xrightarrow{\text{large } P}
\frac{1}{\left[1+\frac{4(2P^{1/2})}{\omega_m\omega_z} \right]
\left[1+\frac{T_{comm}}{T_{task}}\right]}
\end{eqnarray}
Now consider $\epsilon_{opt,hyb}$ with $P_z=2$ and $P_x+P_y \approx 2(P/2)^{1/2} = \sqrt{2}P^{1/2}$.
For comparison we aggregate to the same number of tasks as in KBA (which is
likely sub-optimal for hybrid), so $\omega_g=1$ and $\omega_m$ is the same as in KBA. The result is
\begin{eqnarray} \label{eqn:opte2}
\epsilon_{opt,hyb} \xrightarrow{\text{large } P}
\frac{1}{\left[1+\frac{\sqrt{2}P^{1/2}}{\omega_m\omega_z} \right]
\left[1+\frac{T_{comm}}{T_{task}}\right]}.
\end{eqnarray}
An interesting question is how many more processors the hybrid partitioning with optimal scheduling can use
with the same efficiency as what we have called ``basic'' KBA. The answer comes from setting $4(2 P_{KBA}^{1/2}) = \sqrt{2}P_{opt,hyb}^{1/2}$, which yields the result $P_{opt,hyb}/P_{KBA} = 32$.
For example, even without
optimizing the \{$\omega_j$\}, our 8-octant scheduling algorithm with $P_z=2$ yields the same efficiency
on 128k cores as ``basic'' KBA on 4k cores. Optimizing the \{$\omega_j$\} can
improve this even further. The improvement stems from launching all octants
simultaneously, which significantly reduces process idle time, and managing the ``collisions'' of the multiple sweep fronts in a way that does not add extra stages. The cost is that
more storage is required during the sweep, because the angular fluxes on all of the
sweep fronts must be stored at the same time.
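The factor of 32 follows from equating the two idle terms; a short numerical check (with an arbitrary choice of $P_{KBA}$):

```python
import math

# Equal-efficiency condition: 4*(2*sqrt(P_KBA)) = sqrt(2)*sqrt(P_opt,hyb),
# which gives P_opt,hyb / P_KBA = (8 / sqrt(2))**2 = 32.
P_kba = 4096                       # e.g., "basic" KBA on 4k cores
P_hyb = 32 * P_kba                 # hybrid on 128k cores
assert math.isclose(4 * 2 * math.sqrt(P_kba), math.sqrt(2) * math.sqrt(P_hyb))
```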
Our simplest performance model is Eq.\ (\ref{eq:opteb}) with the following definitions:
\begin{equation}\label{eqn:tcomm}
T_{comm} = M_L \times 3 \times T_{latency} + T_{byte} N_{bytes}
\end{equation}
\begin{equation}\label{eqn:ttask}
T_{task} = T_{wu} + A_x A_y A_z \left(T_{cell}
+ A_m \left[ T_m + A_g T_{g} \right] \right)
\end{equation}
where
\begin{align*}
T_{latency} &= \text{message latency time,} \\
T_{byte} & = \text{additional time to send one byte of message,} \\
N_{bytes} & = \text{total bytes a processor must send to downstream neighbors
at each stage,} \\
T_{wu} & = \text{time per task outside of comms and outside loop over cells in
cellset} \\
T_{cell} & = \text{time per task per spatial cell, outside loop over
directions in angleset} \\
T_m & = \text{time per task per spatial cell per direction, outside loop over
groups in groupset} \\
T_{g} & = \text{time in inner (group in groupset) loop to compute a single
cell, direction, and group.}
\end{align*}
$N_{bytes}$ is calculated based on the aggregation
and spatial discretization scheme; the other parameters are obtained through
testing. We use the parameter $M_L$ to explore performance as a function of
increased or decreased latency. The factor of 3 in the latency term is because
processors typically must send three messages at each stage of the sweep.
If we find that a high value of $M_L$ is needed
for our model to match our computational results, then we look for things to
improve in our code implementation.
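Equations (\ref{eqn:tcomm}) and (\ref{eqn:ttask}) transcribe directly into code; the timing values below are made-up placeholders, since the real constants are measured per machine:

```python
def T_comm(M_L, T_latency, T_byte, N_bytes):
    """Per-stage communication time, Eq. (tcomm): three latency-bound
    messages (scaled by the latency multiplier M_L) plus bandwidth cost."""
    return M_L * 3 * T_latency + T_byte * N_bytes

def T_task(T_wu, Ax, Ay, Az, T_cell, Am, T_m, Ag, T_g):
    """Per-task compute time, Eq. (ttask): per-task overhead plus the
    nested cell / direction / group loop costs."""
    return T_wu + Ax * Ay * Az * (T_cell + Am * (T_m + Ag * T_g))

# Placeholder timings in seconds (illustrative only):
t_task = T_task(1e-6, 8, 8, 8, 5e-8, 10, 2e-8, 16, 1e-8)
t_comm = T_comm(1, 1e-6, 1e-9, 4096)
```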
We have implemented in our PDT code an ``auto'' partitioning and aggregation
option. When this option is engaged, the code uses empirically determined
numbers for $T_{latency}$, $T_{byte}$, $T_{wu}$, $T_{cell}$, $T_m$, and $T_{g}$ for the
given machine. Then for the given problem size it searches for the combination
of $\{P_u\}$ and $\{A_j\}$ that minimizes the estimated solution time. This relieves
users of the burden of choosing these parameters and ensures that efficient choices are made. In the numerical
results shown in the following section we did not employ this option, because we
were exploring variations in performance as a function of aggregation parameters
and thus wanted to control them. However, we often use this option when we use
the PDT code to solve practical problems.
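A minimal sketch of such a search (our own simplified stand-in for the auto option, with a reduced timing model; all symbol names and the candidate $\omega_z$ range are illustrative assumptions) could look like:

```python
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def auto_layout(P, M_oct, G, cells_per_proc, T_g, T_lat, T_byte, B_stage):
    """Hypothetical brute-force stand-in for an 'auto' option.

    Searches process layouts (Px, Py, Pz) with Px*Py*Pz = P and
    aggregation factors (wm, wg, wz) for the minimum modeled sweep time.
    Reduced model: N_stages = Px+dx+Py+dy-4 + wz*(Pz+dz-2) + wm*wg*wz,
    and every stage costs one task time plus one communication time.
    """
    best = None
    for Px, Py in product(divisors(P), repeat=2):
        if P % (Px * Py):
            continue
        Pz = P // (Px * Py)
        dx, dy, dz = Px % 2, Py % 2, Pz % 2
        for wm, wg, wz in product(divisors(M_oct), divisors(G), (1, 2, 4)):
            n_work = wm * wg * wz
            stages = Px + dx + Py + dy - 4 + wz * (Pz + dz - 2) + n_work
            t_task = cells_per_proc * M_oct * G * T_g / n_work  # work split
            t_comm = 3 * T_lat + T_byte * B_stage / n_work      # per stage
            time = stages * (t_task + t_comm)
            if best is None or time < best[0]:
                best = (time, (Px, Py, Pz), (wm, wg, wz))
    return best
```

The real option also accounts for the per-cell and per-direction overheads in the full model, but the structure of the search (enumerate layouts, evaluate the model, keep the minimum) is the same idea.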
Angle aggregation carries complexities that group and cell aggregation do not. We
mentioned in Sec.~\ref{sec:intro} that all directions in an angleset must belong
to the same octant, for otherwise they would need to start on different corners of
the spatial domain. In PDT, the directions in an angleset
must all share a sweep ordering, because the loop over directions in an angleset
is inside the loop over cells. If the grid has only brick-shaped cells, then all directions
in a given octant have the same cell-to-cell dependencies.
In a completely unstructured grid, though, the cell-to-cell
sweep ordering (dependence graph) that respects
all upstream dependencies can be different for each quadrature direction.
In the current version of PDT, a sweep that respects all dependencies
would in such a situation
require a different angleset for each quadrature direction. An alternative is to
relax the strict enforcement of dependencies, using previous-iteration information
for angular fluxes from upstream cells that have not yet been calculated during
the sweep. For example, if cell $i$ is calculated before cell $j$ (because $i$ is
the upstream cell for most of the directions in the angleset), but for some directions
in the angleset cell $j$ is upstream of cell $i$, then for those particular directions
the $j$-to-$i$ angular flux would come from the previous iteration. If this happens
extensively, it can increase iteration counts. The ideal approach will be problem-dependent.
We mentioned in
Sec.~\ref{sec:intro} that 3D grids of polygonal-prism cells (polygons in a plane,
extruded into the third dimension) can offer advantages for sweeps, relative
to fully unstructured polyhedral grids. This is most pronounced when prismatic-cell
grids are used with ``product'' quadrature sets, which have multiple directions that have different polar angles
(angles relative to the axis of prismatic extrusion) but the same azimuthal angle (angle
in the plane of the polygons). All directions with the same azimuthal angle in the
same octant have the same cell-to-cell sweep ordering on prismatic grids,
which allows them to be aggregated without resorting to previous-iteration information. We typically take advantage of this in problems that lend themselves to prismatic grids, including the 3D nuclear-reactor problems illustrated in the next section.
It takes much more than a good parallel algorithm to achieve the scaling
results that we present in the following section.
Implementation details are important for any code that attempts to
scale up to and beyond $10^6$ parallel processes. Our PDT results are due in no small part to
the STAPL library, on which the PDT code is built. STAPL provides parallel data
containers, handles all communication, and much more. See
\cite{stapl-1,stapl-2,stapl-3,stapl-4,stapl-14,stapl-15a,stapl-15b,stapl-16,stapl-17} and \cite{stapl-git} for more details. We also present results from LLNL's ARDRA code, which has benefited from LLNL's long experience in efficient utilization of the world's fastest computers.
\section{COMPUTATIONAL RESULTS}
\label{sec:results}
In this section we present results from a series of test problems that demonstrate
how sweep times at high core counts compare to those at low core counts when
our optimal sweep algorithm is used. We begin with
simple brick-cell weak-scaling suites and then turn to a weak-scaling suite with
spatial grids that resolve geometric features in a pressurized-water
nuclear reactor.
\subsection{Regular Brick-Cell Grids, DFEM Spatial Discretization}
We begin with suites of brick-cell test problems in which as $P$ grows,
the size of the problem domain
increases while cell size, cross sections, and number of cells per parallel process
are unchanged. The simplest version has only one energy group, 80 directions
(10 per octant), and 4096 cells per core. We employ the PWLD spatial discretization \cite{pwld-a,pwld-b}, which has 8
unknowns per brick-shaped cell (one for each vertex). We will see later that
problems with more groups and angles exhibit higher
parallel efficiencies.
We studied weak scaling, holding constant the number of unknowns per processing
unit. We ran this series from $P=8$ to $P=384$k $=384\times 1024 = 393,216$
cores, with the
depth-of-graph and push-to-central scheduling algorithms. With an earlier
version of our code we also tested a non-optimal scheduling algorithm that
simply executes tasks in the order
they become ready, from $P=8$ to $P=128$k =131,072 cores. The problems were run
on the Vulcan computer at LLNL, an IBM BG/Q architecture with 16 cores per node.
All results are for $P_z=2$, and efficiencies are based on solve times
normalized to a $P=8$ run. Times do not include setup but {do} include
communications and convergence testing.
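As a concrete check of the suite's bookkeeping, the per-core unknown count and the efficiency metric used in these plots can be written out directly (a sketch; the helper names are ours, not PDT's):

```python
def angular_flux_unknowns_per_core(cells=4096, dofs_per_cell=8,
                                   directions=80, groups=1):
    """Unknowns owned by one core in the simplest brick-cell suite:
    PWLD gives 8 unknowns per brick cell (one per vertex)."""
    return cells * dofs_per_cell * directions * groups

def weak_scaling_efficiency(t_ref, t_p):
    """'Efficiency' as plotted: the P=8 solve time divided by the
    P-core solve time, with unknowns per core held constant."""
    return t_ref / t_p
```

The one-group, 80-direction problem therefore carries about 2.6 million angular-flux unknowns per core.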
Figure~\ref{fig:scaling_pttl} shows results from three different
scheduling algorithms: the depth-of-graph and push-to-central optimal schedules
and the (non-optimal) first-arrival schedule. We have compared the observed stage
counts against the minimum-possible stage counts described previously in this
paper, and in every case they agree exactly for both of the scheduling
algorithms that our theory claims are optimal. The figure indicates that the
non-optimal schedule does not perform as well as the optimal schedules, but it
degrades surprisingly slowly. We see that sweeps executed with optimal
schedules perform very efficiently out to large core counts, even with a
modest-sized problem (only one group and 80 directions).
\begin{figure}[h!]
\centering
\includegraphics[width = 0.75\linewidth]{./scaling_pttl.pdf}
\caption{Weak scaling results from three different
scheduling algorithms on an IBM BG/Q computer. ``Efficiency'' is 8-core execution time divided by the $P$-core time, with unknowns/core held constant. Red squares and black dots show observed results from the PDT code with two different optimal scheduling algorithms, depth-of-graph and push-to-center, respectively. They are identical, as predicted by theory. Black squares represent PDT results from a first-come-first-serve schedule algorithm, which is sub-optimal. Green triangles are predictions of the idealized performance model of Eq.~(\ref{eq:opteb}). Blue diamonds (mostly hidden by PDT optimal-schedule results) are from the same model with the latency multiplier $M_L=11$.}
\label{fig:scaling_pttl}
\end{figure}
Figure~\ref{fig:mira1g} provides results out to 768k cores for a three-group
version of our test problem using the
push-to-center optimal scheduling algorithm. Even though the test problem
had only three energy groups and only 10 quadrature directions per octant,
the code achieved more than 60\% parallel efficiency when scaling up from
8 to 786,432 cores. That is, the optimal sweep algorithm loses less than
40\% efficiency when scaling up by a factor of 96k on this small test problem.
Figure~\ref{fig:mira1g} also shows efficiency predictions of our performance model for
two different overhead burdens. The ``low-overhead" plot used $M_L=1$ in Eq.\
(\ref{eqn:tcomm}), which is what we would hope to achieve in a nearly perfect
implementation of our algorithms in our code. In this case the only overhead
would be actual message-passing latency. The ``high-overhead" plot used
$M_L=11$, and it agrees closely with our observed performance. This suggests
that there is per-task overhead in our code implementation that we should be
able to reduce. We are working on this.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.75\linewidth]{./mira1g.png}
\caption{Weak scaling results from optimal sweep algorithm on IBM BG/Q computer. ``Efficiency'' is 8-core execution time divided by the $P$-core time, with unknowns/core constant and one MPI process per core. Red circles show results from PDT on one-group problems with 10 directions per octant, 4096 brick-shaped spatial cells per core, and one MPI process per core. The line with diamond markers is efficiency predicted by the performance model of Eq.~(\ref{eq:opteb}) with $M_L=1$. The line with $\mathbf{\times}$ is from the model with $M_L=11$. The highest core count shown is the entire Mira machine at Argonne National Lab: 768$\times$1024 cores.}
\label{fig:mira1g}
\end{figure}
Continuing to push to higher parallelism, we executed a weak scaling study out to $\approx 1.6$ million ($3 \times 2^{19}$) parallel threads by ``overloading'' each of the $768 \times 1024$ cores with 2 MPI processes. Our test problem is as before---4096 brick cells per thread, 10 directions per octant, and three energy groups. As Fig.~\ref{fig:mira3g2tpc} shows, the optimal sweep algorithm in PDT achieved 67\% parallel efficiency when scaling from 8 threads to approximately 1.6 million threads---it loses only 33\% efficiency when scaling up by a factor of 192k when there are three energy groups. Scaling improves further with more groups or more directions, because the work-to-communication ratio improves.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.75\linewidth]{./mira3g2tpc.png}
\caption{Weak scaling results from optimal sweep algorithm on IBM BG/Q computer. ``Efficiency'' is the time to execute a sweep on a given number of processes divided by the time to execute a sweep on 8 processes, with work per process held constant. Results are from the PDT code on a series of 3-group problems with 10 directions per octant, 4096 brick-shaped spatial cells per core, and two MPI processes per core. The highest process count shown is 2$\times$768$\times$1024 = 1,572,864 MPI processes.}
\label{fig:mira3g2tpc}
\end{figure}
\subsection{Regular Brick-Cell Grids, Diamond Differencing Spatial Discretization}
ARDRA is a research code developed at LLNL to study parallel discrete ordinates transport. The code applies a general framework to domain decompose the angle, energy and spatial unknowns among available parallel processes. Typically, problems run with ARDRA are decomposed only in space (volumetrically) and energy. Spatial overloading is not currently supported, so one cellset equals one process's subdomain. In addition, ARDRA does not aggregate directions, which means a single direction per angleset. ARDRA's default spatial discretization is diamond differencing, with only one spatial unknown and only a few operations required to solve each cell.
The model of the time to completion for this algorithm is:
\begin{align}
T_{solve} & = G \; N_{stages} \left( T_{task} + T_{comm} \right) + T_{RHS} = N_{stages} \; \tilde T + T_{RHS} \; ,
\end{align}
where $G$ = number of groups and $T_{RHS}$ is the time to calculate the scattering source. Note that this time includes both sweep time and source-building time. With spatial-only decomposition, the source-building operation does not require communication among processes, and thus it is somewhat easier to scale well on total solve time than on sweeps alone. The corresponding efficiency model is:
\begin{align}
\epsilon & = \frac{T_{ref}}{T_p} = \frac{N_{stages}^{ref} \; \tilde T + T_{RHS}}{N_{stages}^p \; \tilde T + T_{RHS}}
= \frac{N_{stages}^{ref} \; + T_{RHS}/\tilde T}{N_{stages}^p + T_{RHS}/ \tilde T}
\label{eq:ardraeff}
\end{align}
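Equation (\ref{eq:ardraeff}) is simple enough to evaluate directly. The sketch below (with a hypothetical function name) makes the limiting behavior explicit: as the communication-free source-build time $T_{RHS}$ grows relative to the per-stage time $\tilde T$, the efficiency approaches unity, which is why total solve time scales better than sweep-only time.

```python
def ardra_efficiency(n_stages_ref, n_stages_p, t_tilde, t_rhs):
    """Eq. (eq:ardraeff): eps = (N_ref + T_RHS/T~)/(N_p + T_RHS/T~),
    with T~ = G*(T_task + T_comm) the per-stage time and T_RHS the
    scattering-source build time (no inter-process communication)."""
    ratio = t_rhs / t_tilde
    return (n_stages_ref + ratio) / (n_stages_p + ratio)
```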
The Ardra scaling results shown here are based on the Jezebel criticality experiment. We ran this problem in 3D with all vacuum boundary conditions, 48 energy groups, and three level-symmetric quadrature sets: S$_8$ (80 directions), S$_{12}$ (168), and S$_{16}$ (288). We performed two weak scaling studies: one with spatial parallelism only, and the second with a mixture of energy and spatial parallelism. We ran standard power iteration for $k$-effective, stopping the run at 11 iterations, which was adequate for collecting timing statistics. Both of our weak scaling studies start with one node of Sequoia (an IBM BG/Q machine), using 16 MPI ranks, with 1 rank per CPU core.
Both studies have an initial $48 \times 24 \times 24$ spatial mesh, but decompose the problem differently across the 16 ranks. In our first weak scaling study we decompose the problem into $12 \times 12 \times 12 = 1728$ cells per rank, with the resulting spatial decomposition on $N_{nodes}$ Sequoia nodes of $P_x = 4 N_{nodes}, P_y = 2 N_{nodes}$, and $P_z = 2 N_{nodes}$. Our second study uses 16-way on-node energy decomposition, with each rank having $48 \times 24 \times 24 = 16 \times 1728$ spatial cells but only 3 energy groups. Weak scaling is achieved by increasing the number of spatial cells in proportion to the processor count.
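The per-rank cell counts follow directly from dividing the global mesh by the rank grid; a quick arithmetic check (the helper name is illustrative):

```python
def cells_per_rank(mesh, rank_grid):
    """Cells owned by each rank in a volumetric decomposition:
    mesh = (Nx, Ny, Nz) global cells, rank_grid = (Px, Py, Pz)."""
    nx, ny, nz = mesh
    px, py, pz = rank_grid
    assert nx % px == 0 and ny % py == 0 and nz % pz == 0
    return (nx // px) * (ny // py) * (nz // pz)
```

For the base $48 \times 24 \times 24$ mesh on one node's 16 ranks arranged $4 \times 2 \times 2$, each rank owns $12 \times 12 \times 12 = 1728$ cells.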
Ardra's largest run was at Sequoia's full scale: 37.5 trillion unknowns on 1,572,864 MPI ranks. It achieved 71\% parallel efficiency for total solution time when using both energy and spatial parallel decomposition and the S$_{16}$ quadrature set, as shown in Fig.~\ref{fig:ardras16}. The figure also shows excellent agreement between observed results and the performance model of Eq.~(\ref{eq:ardraeff}).
\begin{figure}[h!]
\centering
\includegraphics[width= 0.65\linewidth]{./ArdraSolveS16.png}
\caption{Weak scaling results: total solution time (sweep plus other) from Ardra, with combined spatial and energy parallel partitioning and the S$_{16}$ level-symmetric quadrature set, on the Sequoia IBM BG/Q computer. The highest process count shown is 1,572,864 MPI processes (one per core, entire machine).}
\label{fig:ardras16}
\end{figure}
Figures \ref{fig:ardrasolvesp} and \ref{fig:ardrasweepsp} give efficiency results for all three quadrature sets on the test suite that used spatial-only decomposition. We offer several observations. First, the performance model is not perfect but does capture the trends observed in the ARDRA results. Second, total solve time scales much better than sweep-only time. Third, scaling improves substantially with increasing number of quadrature directions. This is easy to understand given that ARDRA is using only a single cellset per process, which means directions are the only means available for pipelining the work and getting the central processes busy. Fourth, comparison of the S$_{16}$ results from the figures shows that for this problem with this code, parallelizing across energy groups is a substantial win, moving parallel efficiency from just under 50\% to just over 70\% at a core count of 1.5M.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.65\linewidth]{./ArdraSolveSpatial.png}
\caption{Weak scaling results: total solution time from Ardra with spatial-only parallel partitioning, for three quadrature sets, on the Sequoia IBM BG/Q computer. The highest process count shown is 1,572,864 MPI processes (one per core, entire machine).}
\label{fig:ardrasolvesp}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width= 0.65\linewidth]{./ArdraSweepSpatial.png}
\caption{Weak scaling results: sweep-only time from Ardra with spatial-only parallel partitioning, for three quadrature sets, on the Sequoia IBM BG/Q computer. The highest process count shown is 1,572,864 MPI processes (one per core, entire machine).}
\label{fig:ardrasweepsp}
\end{figure}
\subsection{Reflecting Boundaries}
Reflecting boundaries introduce \emph{dependencies} among octants of directions, and these dependencies hamper parallel performance. For example, in a problem with two reflecting boundaries that are orthogonal to each other (i.e., not opposing), only two octants of directions (not all eight) can be launched in parallel at the beginning of the sweep. It turns out to be straightforward to quantify the performance of our optimal sweeps with reflecting boundaries in terms of the performance without reflecting boundaries.
Previously we mentioned that in our algorithm, at a reflecting boundary a processor feeds itself incident fluxes (by reflecting them from outgoing fluxes) at the same stages in the sweep at which a neighboring processor would feed them if the full problem domain were being run with twice as many processors. Consider a problem with reflective symmetry at $x=0$ and at $y=0$. If we run this problem with $4P$ processors on the full domain $x \in (-a,a) \times y \in (-b,b)$, we therefore expect essentially the same performance as if we run with $P$ processors on the reflected quarter domain $x \in (0,a) \times y \in (0,b)$. The difference: in the $4P$-processor full-domain case communication is required to feed the angular fluxes, whereas in the $P$-processor quarter-domain case a calculation is done to perform the reflection. In our experience the differences are negligible (a few percent), with the full problem sometimes slightly faster (with 4$P$ processors) and the quarter problem sometimes slightly faster (with $P$ processors).
It follows that the performance of our algorithm with $P$ processors on a problem with two reflecting boundaries is roughly the same as the performance with $4P$ processors on problems without reflecting boundaries, or with $8P$ processors on problems with three (mutually orthogonal) reflecting boundaries. This quantifies the sweep-efficiency penalty introduced by reflecting boundaries: if there are $n$ reflecting boundaries, then efficiency with $P$ processors is only what would be expected from $P \times 2^n$ processors on the full problem. This allows us to demonstrate how our sweep methodology would perform on up to 8 times as many processors as are actually available.
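This equivalence is trivial to encode; the helper below (hypothetical name) gives the "effective" processor count that a reflecting-boundary run emulates:

```python
def effective_process_count(p, n_reflecting_boundaries):
    """With n mutually orthogonal reflecting boundaries, a run on P
    processes performs essentially like a run on P * 2**n processes
    over the full, unreflected domain."""
    if not 0 <= n_reflecting_boundaries <= 3:
        raise ValueError("3D allows at most 3 mutually orthogonal "
                         "reflecting boundaries")
    return p * 2 ** n_reflecting_boundaries
```

For example, a quarter-domain run on 767,584 cores with two reflecting boundaries emulates a 3,070,336-core full-domain run.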
In the following section we test our sweeps on polygonal-prism grids that accurately represent interesting nuclear-reactor problems. In these problems there is often reflective symmetry on two orthogonal boundaries; thus, they present an opportunity to test how our sweeps would perform out to four times as many cores as are actually available to us.
\subsection{Polygonal-Prism Grids}
We turn now to spatial grids that can represent complicated geometric structures with high fidelity. In particular, we consider grids composed of right polygonal prisms, which are well suited to representing structures that have arbitrary complexity in two dimensions but some regularity in the third dimension. Nuclear reactors with cylindrical fuel pins are a good example and are the basis for the test problems we consider next.
Figures~\ref{fig:3dview} and~\ref{fig:2dzoom} illustrate the meshes used for testing our sweep methodology on polygonal-prism grids. Our sweep tests used core counts ranging from 1,632 to 767,584. As discussed previously, when we run one fourth of a problem using two reflecting boundaries, our 767,584-core results are essentially the results we would obtain if we ran the full problem with $4 \times 767,584 = 3,070,336$ cores. To maintain consistency with previous results (which had no reflecting boundaries), we plot our two-reflecting-boundary performance results in this section as a function of ``effective'' core count, which is 4 times the actual core count.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.4\linewidth]{3D_rxmat.pdf} \qquad
\includegraphics[width= 0.45\linewidth]{rx_3slice.pdf}
\caption{Illustration of 3D mesh used in polygonal-prism sweep tests. Left figure is not a CAD drawing, but is a cutout portion of PDT's polygonal-prism mesh with cells colored according to materials (zircaloy cladding and guide tubes, simplified bottom inconel grid spacer, and simplified bottom inlet nozzle). Geometry and material properties, including simplifications, are as specified in CASL VERA benchmark problem 3A \cite{verabenchmarks}. Right figure is a 3-slice view of an axial segment of the mesh for a quarter of the problem 3A fuel assembly, with the xy slice going through a zircaloy grid spacer. Figure~\ref{fig:2dzoom} shows details of the polygonal $xy$ mesh, which is extruded in $z$ to form polygonal prisms. Visualizations are from the VisIt code \cite{visit}.}
\label{fig:3dview}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width= 0.5\linewidth]{2Dzoom.pdf}
\caption{Closeup of radial ($xy$) mesh in reactor test problem, showing quarters of 3 fuel pin cells and one guide tube. The gap between fuel and cladding is resolved, as specified in the benchmark problem statement. The rectangular cells on the pin-cell borders are filled with grid-spacer material in the specified axial intervals.}
\label{fig:2dzoom}
\end{figure}
In our study of sweeping on polygonal-prism grids we kept unknown count per core roughly the same as we scaled up in core count. In all problems we used 65 energy groups, which we divided into three ``groupsets'' of 12, 31, and 22 groups, respectively. This gave us three different sweep data points for each problem, because the sweeps were performed one groupset at a time. In this study we added spatial cells by adding fuel assemblies, beginning with a reflected quarter-assembly and ramping up to a reflected 4$\times$4 array of assemblies (a factor of 64 in number of fuel rods), and also by increasing axial resolution by almost a factor of 2. We also increased directional resolution by allowing quadrature sets to range from 64 directions/octant (low-energy groups, low resolution) to 768 directions/octant (high-energy groups, high resolution). Table~\ref{tab:rxscaling} provides details of the number of assemblies, axial cell count, and quadrature sets for each groupset, for each \textbf{full} problem in our test suite. As discussed previously, we obtained our results using one-fourth of the indicated cores on one-fourth of the indicated full problems, with two reflecting boundaries.
\begin{table}[h!]
\caption{\bf Test Problem Parameters, Polygonal-Prism Grids (full problem---see text)}
\label{tab:rxscaling}
\centering
\def1.0{1.0}
\begin{tabular}{| r | r | r | r | r | r | r | r |}
\hline
{\bf } & {\bf } & {\bf Axial} & {\bf Total} & GS 1 & GS 2 & GS 3 & Total \\
{\bf Cores} & {\bf Assys} & {\bf cells} & {\bf cells} & (12 grps) & (31 grps) & (22 grps) & unknowns \\
{\bf } & {\bf } & {\bf } & {\bf } & directions & directions & directions & /core \\ \hline
6528 & 1$\times$1 & 96 & 3.3 E6 & 8$\times$6$\times$32 & 8$\times$6$\times$16 & 8$\times$8$\times$8 & 2.3 E8 \\ \hline
27,744 & 2$\times$2 & 96 & 13.3 E6 & 8$\times$6$\times$32 & 8$\times$6$\times$16 & 8$\times$12$\times$8 & 2.4 E8 \\ \hline
78,608 & 2$\times$2 & 136 & 1.9 E7 & 8$\times$12$\times$32 & 8$\times$12$\times$16 & 8$\times$12$\times$12 & 2.2 E8 \\ \hline
314,432 & 4$\times$4 & 136 & 7.6 E7 & 8$\times$12$\times$32 & 8$\times$12$\times$16 & 8$\times$12$\times$16 & 2.4 E8 \\ \hline
1,414,944 & 6$\times$6 & 136 & 1.7 E8 & 8$\times$16$\times$48 & 8$\times$12$\times$24 & 8$\times$24$\times$16 & 2.1 E8 \\ \hline
3,070,336 & 8$\times$8 & 166 & 3.7 E8 & 8$\times$16$\times$48 & 8$\times$12$\times$24 & 8$\times$24$\times$16 & 2.1 E8 \\ \hline
\end{tabular}
\end{table}
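As a consistency check on the table, the unknowns-per-core column can be reproduced approximately from the other columns, assuming roughly 8 PWLD degrees of freedom per cell (as for the brick-dominated portions of the mesh; the exact count varies with each polyhedral cell's vertex count). Function and variable names here are illustrative.

```python
def unknowns_per_core_estimate(cells, cores, groupsets, dofs_per_cell=8):
    """Estimate angular-flux unknowns per core for one table row.
    groupsets: list of (n_groups, (a, b, c)) with a*b*c directions."""
    per_cell = sum(g * a * b * c for g, (a, b, c) in groupsets)
    return cells * per_cell * dofs_per_cell / cores

# First row: 6528 cores, 3.3e6 cells, groupsets of 12, 31, and 22 groups.
row1 = unknowns_per_core_estimate(
    cells=3.3e6, cores=6528,
    groupsets=[(12, (8, 6, 32)), (31, (8, 6, 16)), (22, (8, 8, 8))])
```

This yields about $2.2\times10^{8}$, consistent with the tabulated $2.3\times10^{8}$.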
Results are shown in Fig.~\ref{fig:rxscaling}, normalized to the single-assembly problem, which used 6528 cores with one MPI process per core. Results are in terms of ``grind times," which are defined to be time per sweep per unknown per core. Each data point is a grind time at 6528 cores divided by grind time at the indicated core count. Three different sets of points are plotted in the Figure---one for each groupset. In the PDT code, a task is a set of cells (cellset), a set of directions (angleset), and a set of groups (groupset). The work function that executes a task must prepare for the task (reading angular fluxes from upstream cellsets) and loop through the cells in the appropriate order for the given set of directions. For each cell there is a loop over directions in the angleset, and for each direction there is an innermost loop over groups in the groupset. Inside the inner loop an $N \times N$ linear system is solved for the PWLD angular fluxes, where $N$ is the number of spatial degrees of freedom in the particular cell being solved. With PWLD, $N$ is the number of vertices in the polyhedral cell. Because of the nesting of the loops, larger anglesets and groupsets produce lower grind times, if all else is equal, because the work done preparing for the task and pulling in cellwise information is amortized over the calculation of more unknowns. This is why the results differ for the different groupsets---they have different numbers of groups and different numbers of directions per angleset.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.75\linewidth]{3D_rx_scaling.png}
\caption{Performance as a function of process count from optimal sweep algorithm on unstructured grids, using IBM BG/Q computers. ``Efficiency'' is sweep-execution time per unknown per process on a given number of processes divided by the same on 6528 processes, with work per process approximately constant. Different symbols show results for different ``groupsets.'' See text and Table~\ref{tab:rxscaling} for details.}
\label{fig:rxscaling}
\end{figure}
\section{CONCLUSIONS}
Sweeps can be executed efficiently at high core counts. One key to achieving
efficient performance is an optimal
scheduling algorithm that executes simultaneous multi-octant sweeps with the
minimum possible idle time. Another is partitioning and aggregation factors
that minimize total sweep time. An ingredient that helps to attain this is a
performance model that predicts performance with reasonable quantitative
accuracy. Of course, none of this is sufficient to attain excellent parallel
efficiency without great care in implementation. But with all of these
ingredients in place, sweeps can be executed with high efficiency beyond $10^6$
concurrent processes.
Our computational results demonstrate this. They also show that at
least two different sweep scheduling algorithms achieve the minimum possible
stage count, in agreement with our theory and ``proof." The common
perception that sweeps do not scale beyond a few thousand cores is simply
not correct. Even with a relatively small problem (3 energy groups, 80 total
directions, and 4096 cells per core) our PDT/STAPL code has achieved
approximately 67\% efficiency with 1.57 $\times 10^6$ MPI processes,
relative to an 8-process calculation, and the ARDRA code has achieved 71\% efficiency (total solve time) at the same process count on a problem with more energy groups and directions. With additional energy groups and
directions, parallel efficiency improves further. We have reason to believe
that further refinement of some implementation details will increase the efficiencies reported here.
The analysis and results in this summary are for 3D Cartesian grids with ``brick"
cells and for certain grids that are unstructured at a fine scale but structured at a
coarse scale. To illustrate the latter kind of grid we have shown results here from
a series of nuclear-reactor calculations whose grids resolve complicated geometries with
high fidelity. We are also working on sweeps for AMR-type grids, arbitrary
polyhedral-cell grids without a coarse structure, and grids for which it is difficult to achieve load balancing. We plan to present results in a future communication.
In this paper we have restricted our attention to spatial domain decomposition
with $P_x \times P_y \times P_z$ partitioning, in which each processor owns a
brick-shaped contiguous subdomain of the spatial domain. For some grids and
problems there may be efficiency gains if processors are allowed to ``own"
non-contiguous collections of cellsets, an option considered in \cite{mc2015-sweep} and \cite{BaileyFalgout}, with the terminology ``domain overloading." We expect to report on this family of partitioning and aggregation methods in the future.
Reflecting boundaries introduce direction-to-direction dependencies that
decrease available parallelism. We have shown that with our sweep algorithm,
the parallel solution with $P$ processors on a problem with $n$ mutually orthogonal
reflecting boundaries performs with the same efficiency as the parallel solution with
$2^n \times P$ processors on the full domain without reflecting boundaries.
Curvilinear coordinates introduce a different kind of direction-to-direction dependency, again reducing available parallelism and probably making sweeps somewhat less efficient than in Cartesian coordinates. We have not yet devoted much attention to parallel sweeps in curvilinear coordinates, but we expect to address this in the future.
\section{APPENDIX: EXAMPLE $P_z=1$ or 2 SWEEP}
\label{sec:appendix1}
While the stage algebra in Sec.~\ref{sec:proofs} is necessary for our proofs, a
visual illustration makes the actual behavior of our algorithms much more
accessible. We begin with an extremely simple example, a 2D sweep with $M=3$.
Note: This behavior is identical to that for $P_z=1$ or for either half of the
processes for $P_z=2$.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{2d-five-by-four.pdf}
\caption{{\bf Example sweep.} Stages progress left to right and top to bottom. Anglesets in the same quadrant have arrows of the same color. Example shows three anglesets per quadrant.}
\label{fig:2d-sweep}
\end{figure}
For our 2D example, we have illustrated the behavior of the ``depth-of-graph''
algorithm. In the next appendix, we present a 3D sweep using the
``push-to-central'' algorithm. In Fig.~\ref{fig:2d-sweep}, successive stages
are presented first from left to right and then from top to bottom. The
16-process layout presented could be either the full domain or only the center
of a larger domain; the behavior is the same whether there are processes outside
of this range or not. Bold lines depict boundaries between priority regions.
\section{APPENDIX: EXAMPLE 3D VOLUMETRIC SWEEP}
\label{sec:appendix2}
Figures~\ref{fig:stages-1-9}-\ref{fig:stages-46-52} illustrate an example sweep.
In the example, there are four anglesets per octant and 576 processes, with
$P_x=12$, $P_y=8$, and $P_z=6$. We illustrate the ``push-to-central''
algorithm; i.e., this behavior is different from that described in
Sec.~\ref{sec:proofs} in terms of priority regions. Here, the entire sector
shares the same priority ordering, specifically $(A, B, C, D, \bar D, \bar C,
\bar B, \bar A)$.
We show the order of task execution for processes with $P_x \in (1,X)$, $P_y
\in (1,Y)$, and $P_z \in (1,Z)$ using what we call ``open box'' diagrams (see
Fig.~\ref{fig:open-box}). The diagrams show the sets of processes in the
region with $P_x=1$ (top right), $P_y=1$ (bottom right), and $P_z=1$ (top left).
Tasks within a given octant are numbered from $1-4$ and are shown with arrows
representing the directions of dependencies. (The arrows may appear to have
different directions on different panels; this is because each panel has its own
orientation.) They are also color-coded for clarity.
\begin{figure}[!htb]
\centering
\vspace{-4mm}
\includegraphics[scale=0.5]{open-box-guide.pdf}
\vspace{-5mm}
\caption{{\bf Open Box Diagrams.} Each panel represents a planar ``slice'' of
processes. The two axes adjacent to each panel define its orientation.
$(i,j,k)$ indices are shown for each process.}
\label{fig:open-box}
\end{figure}
This being the ``push-to-central'' algorithm, the collisions between task waves
are not static as they are for the ``depth-of-graph'' algorithm. Rather, task
waves of higher priority overtake the lower priority waves, which lie dormant
until they are able to re-emerge from the trailing end of the priority wave.
This happens between nearly every pair of octants. Here, then, the bold lines
represent sweepfront collisions, not priority region boundaries as in the 2D
example sweep.
Stage counts are included in the figure, as well as occasional notes pointing
out salient features in the behavior of the sweep algorithm and their connection
with stage counts. Again, the requirements for optimality are simply that the
central processes begin working at the first possible stage, that they stay
busy until their tasks are finished, and that the final octant's tasks proceed
uninterrupted to the boundary. To that end, the figures note the stages when
process $P(X,Y,Z)$ begins each octant. In this example, the optimal stage
count is $P_x + P_y + P_z - 6 + 8M = 52$, which is indeed achieved by the
``push-to-central'' algorithm seen below.
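The stage count for this example can be reconstructed as follows (a sketch inferred from the worked example rather than from the paper's general derivation): each process executes $8M$ tasks, and the pipe fill/drain delay for even $P_x$, $P_y$, $P_z$ adds $2[(P_x/2-1)+(P_y/2-1)+(P_z/2-1)] = P_x+P_y+P_z-6$ stages.

```python
def optimal_stage_count(px, py, pz, m):
    """Minimum stage count for the example volumetric sweep: 8*M tasks
    per process plus the fill/drain delay before the central processes
    start and after they finish (even Px, Py, Pz assumed)."""
    assert px % 2 == 0 and py % 2 == 0 and pz % 2 == 0
    fill_and_drain = 2 * ((px // 2 - 1) + (py // 2 - 1) + (pz // 2 - 1))
    return 8 * m + fill_and_drain
```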
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-one.pdf}
\vspace{-12mm}
\caption{{\bf Stages 1-9.}}
\label{fig:stages-1-9}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-two.pdf}
\vspace{-12mm}
\caption{{\bf Stages 10-18.}}
\label{fig:stages-10-18}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-three.pdf}
\vspace{-12mm}
\caption{{\bf Stages 19-27.}}
\label{fig:stages-19-27}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-four.pdf}
\vspace{-12mm}
\caption{{\bf Stages 28-36.}}
\label{fig:stages-28-36}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-five.pdf}
\vspace{-12mm}
\caption{{\bf Stages 37-45.}}
\label{fig:stages-37-45}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.385\linewidth,angle=90]{3d-nine-six.pdf}
\vspace{-12mm}
\caption{{\bf Stages 46-52.}}
\label{fig:stages-46-52}
\end{figure}
\section*{Non-equilibrium condensation}
The optical mode in our system is provided by a 2D array of silver nanorods (NRs) in which the LSPs of the individual nanoparticles are coherently coupled with each other through the diffraction orders propagating in the plane of the array, thus sustaining SLRs. The NR array, whose schematic representation and scanning electron micrograph are shown in Figure 1a, has been fabricated using conformal imprint lithography onto a glass substrate (see Methods section). The long and short pitches of the lattice are 380 nm and 200 nm, respectively. The NRs are 200 nm long, 70 nm wide, and 20 nm high. We refer to the y-axis as the direction parallel to the long axis of the NRs, as indicated in Figure 1a. A 260 nm thick layer of PMMA doped with a Rylene dye was spin coated on top of the array. The absorption spectrum of this dye (shown in the Supporting Information) has one main electronic transition at $E_{X_{1}}$=2.24 eV and a vibronic replica at $E_{X_{2}}$=2.41 eV, as indicated in Figure 1b (black dashed lines), where angle-resolved extinction measurements of the sample are shown as a function of the incident photon energy and the in-plane wave vector parallel to the short NR axis ($k_{x}$), with incident light polarized along the long axis of the nanoparticles. The dispersion of the bare SLR in the absence of the molecules is indicated by the blue dashed curve, resulting from the coupling of the (0,$\pm$1) lattice mode with the LSP of the NRs. The LSP resonance of the particles is visible in Fig. 1b as a flat band at E = 2.36 eV. The energies associated with the electronic transition and the vibronic replica of the molecules are shown with horizontal black dashed lines.
The lower polariton (LP) band induced by the strong coupling of these three resonances can be calculated by diagonalizing the three-level Hamiltonian, which includes the SLR and its coupling to the electronic transition and first vibronic replica of the molecules (see Supporting Information). The first two modes obtained by this diagonalization have been plotted in Figure 1b and correspond to the lower and middle PEP bands (solid green lines). The agreement between the model and the extinction measurement is good for the lower polariton. For the middle polariton, however, the quality of the fit is poorer due to the presence of multiple modes in this energy range. The associated Rabi splitting is $\Omega_R \simeq 200$ meV. Note that at slightly higher energies (E = 2.12 eV), another dispersive band appears due to the presence of a guided mode in the molecular layer.
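A minimal numerical sketch of this diagonalization is shown below, assuming a $3\times3$ coupled-oscillator Hamiltonian with illustrative coupling strengths of 100 meV (of the order of half the observed $\Omega_R \simeq 200$ meV); it is not the fitted model of the Supporting Information.

```python
import numpy as np

E_X1, E_X2 = 2.24, 2.41   # electronic transition and vibronic replica (eV)
G1, G2 = 0.10, 0.10       # illustrative couplings (eV), not fitted values

def polariton_bands(e_slr):
    """Eigenenergies at one in-plane wave vector of a three-level
    Hamiltonian: the SLR photon mode coupled to X1 and X2."""
    h = np.array([[e_slr, G1,   G2],
                  [G1,    E_X1, 0.0],
                  [G2,    0.0,  E_X2]])
    return np.linalg.eigvalsh(h)  # ascending: lower, middle, upper

lower, middle, upper = polariton_bands(e_slr=2.20)
```

Repeating this for each $k_x$ along the SLR dispersion traces out the lower and middle PEP bands of Figure 1b.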
A peculiar property associated with the LP band is that under symmetric illumination ($k_{x}$= 0 $\mu m^{-1}$) the sample becomes nearly transparent, i.e., the transmission increases and the extinction approaches zero. The origin of this behavior has been widely investigated and explained by the multipolar nature of LSP modes coupled with the lattice,~\cite{Giannini:2010} whose electromagnetic field pattern does not couple efficiently to far-field radiation. This results in a lower extinction and much longer photon lifetimes.\cite{Lienau:2005prl,SRK:2011prx, Abass:2014acs} The corresponding electromagnetic near-field distribution is shown in Figure 1c, as evaluated from finite difference time domain (FDTD) simulations in a unit cell of the lattice at E = 2.056 eV and $k_{x}$ = 0 $\mu m^{-1}$. Here, the black arrows correspond to the real part of the electric field along the x- and y-directions, evaluated in a plane positioned at half the height of the NR. A coupled multipolar field distribution is apparent from the field mapping, thus confirming the reduced polarizability of the array for an incident wave with this wave vector and frequency.
\begin{figure}
\begin{center}
\includegraphics[width=6.5in]{Fig1.pdf}
\end{center}
\caption{\textbf{PEP formation in a plasmonic lattice.} (a) Schematic representation and SEM image of the array of silver NRs. We indicate as y-axis the direction parallel to the long axis of the NRs. (b) Measured sample extinction and calculated dispersion, as obtained by solving a three-level Hamiltonian, shown as a function of the in-plane wavevector component parallel to the nanorods' short axis, k$_{x}$. The black dashed lines indicate the energy of the electronic transition ($X_{1}$) and first vibronic replica ($X_{2}$) of the dye molecules. The blue dashed curve corresponds to the dispersion of the SLR. Green solid curves are the dispersions of the lower and middle PEPs. (c) Simulation of the total electric field intensity in the xy plane of a unit cell of the array illuminated by a plane wave with E = 2.056 eV and $k_{x}=$ 0 $\mu m^{-1}$. The black arrows correspond to the real part of the electric field vector along the x- and y-directions.}
\label{fig:Fig1}
\end{figure}
To characterize the photoluminescence (PL) properties, we excited the sample using a non-resonant pulsed laser (100 fs pulse duration, $E_{exc}=2.48$ eV, 1 kHz repetition rate) at normal incidence, polarized along the y-direction. The dependence of the photoluminescence peak intensity on the absorbed pump fluence and the change in the emission linewidth are shown in Fig. 2a,b. A clear transition to the nonlinear regime is observed above the threshold fluence of $P_{th}=20$ $\mu$J/cm$^2$, resulting also in an enhanced temporal coherence due to the reduction of the linewidth. The PL dispersion as a function of $k_{x}$ is displayed in Figures 2c,d for pump fluences below ($P= 0.8P_{th}$) and above ($P= 1.1P_{th}$) threshold, respectively. At pump fluences below threshold (Fig. 2c), the PL dispersion follows the dispersion of the LP band shown in Fig. 1b. As the pump fluence is raised above $P_{th}$, the PL dispersion collapses into a sharp peak centered at E = 2.04 eV and $k_{x}=0$ $\mu$m$^{-1}$, as displayed in Fig. 2d ($P=1.1P_{th}$). One of the peculiar properties of organic-based polariton lasing is that the vibrational progression of the individual molecules provides an efficient relaxation channel for exciton-polaritons.~\cite{KenaCohen:2010cy,mazza:2009prb,Ramezani:2017,Mazza2013} The presence of this relaxation channel helps PEPs to condense more efficiently at an energy set by the vibronic quanta of the molecules, $E_{X_{i}}$, shown in Fig. 1 ($\Delta=E_{X_{2}}-E_{X_{1}}=170$ meV in our case).
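The threshold quoted above can be read off the input-output curve of Fig. 2a as the break between two power-law regimes. A minimal sketch, on synthetic data built to mimic the measured $P_{th}=20$ $\mu$J/cm$^2$ (the slopes are assumed for illustration):

```python
import numpy as np

# Synthetic input-output curve: linear below threshold, steeper power law above.
fluence = np.logspace(0, 2, 40)                    # pump fluence, uJ/cm^2
intensity = np.where(fluence < 20, fluence,
                     20.0 * (fluence / 20.0) ** 4) # nonlinear regime above P_th
logP, logI = np.log10(fluence), np.log10(intensity)

def breakpoint_fit(logP, logI):
    """Fit two straight lines in log-log space and return the break index
    that minimizes the total squared residual."""
    best_i, best_err = None, np.inf
    for i in range(3, len(logP) - 3):
        err = 0.0
        for seg in (slice(None, i), slice(i, None)):
            coeffs = np.polyfit(logP[seg], logI[seg], 1)
            err += np.sum((np.polyval(coeffs, logP[seg]) - logI[seg]) ** 2)
        if err < best_err:
            best_i, best_err = i, err
    return best_i

i_th = breakpoint_fit(logP, logI)
print(f"estimated threshold ~ {fluence[i_th]:.1f} uJ/cm^2")
```

On real data the same two-segment fit would be applied to the measured intensities, with the break position giving the condensation threshold.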
The real-space emission pattern also changes radically below and above threshold, as shown in Fig. 2e,f. While the PL is homogeneously distributed over the excited area below threshold (Fig. 2e), a structured stripe-like pattern extended along the y-direction arises at $P>P_{th}$ (Fig. 2f). The random arrangement of the stripes across the emission pattern can be attributed to sample imperfections and inhomogeneities, as inferred from the different emission patterns (not shown) obtained on different regions of the sample.
\begin{figure}
\begin{center}
\includegraphics[width=5in]{Fig2.pdf}
\end{center}
\caption{\textbf{Photoluminescence properties of PEPs below and above threshold.} (a) Photoluminescence peak intensity versus the absorbed pump fluence. (b) Linewidth of the emission peak across the threshold. (c-d) Normalized angle-resolved photoluminescence for pump fluences (c) below ($P= 0.8P_{th}$) and (d) above ($P=1.1 P_{th}$) the condensation threshold. The false-color maps represent the emitted intensity on a linear scale. (e-f) Real-space emission maps showing homogeneous and structured emission, below (e) and above (f) threshold, respectively.}
\label{fig:Fig2}
\end{figure}
\section*{Time-resolved photoluminescence}
To gather fundamental insights into the nature of this coherent emission, we performed time- and energy-resolved PL measurements using an imaging spectrometer coupled to a streak camera with a time resolution of $\approx$1.8 ps (see Supporting Information and Methods). When pumping at fluences below threshold, the PL from the doped polymer layer shows a decay time of about $\tau_B=30$ ps, both on the plasmonic array (red dots in Fig. 3a) and on the bare glass substrate (green dots in Fig. 3a). Differences in the lifetime between the molecules lying on the bare substrate and those coupled to the plasmonic array would be expected as a consequence of the Purcell effect or non-radiative quenching. However, in our experiment the PL emission mainly originates from molecules spread over the whole polymer layer height (260 nm), not all of which are coupled to the SLR, whose intensity is mainly localised around the metallic nanostructures. As the excitation power increases above threshold, the decay time at the energy of the PEP reduces by one order of magnitude, to about $\tau_A =$ 3 ps (blue dots in Fig. 3a). The shortening of the emission lifetime above threshold is the result of the effective scattering from the exciton reservoir to the bottom of the lower polariton branch and of the short cavity photon lifetime compared to the non-radiative polariton decay rate.
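The decay times above come from exponential fits of the TRPL traces. A minimal sketch on synthetic, noise-free decays with the lifetimes quoted in the text (real traces would additionally require deconvolution of the $\approx$1.8 ps instrument response):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    """Single-exponential PL decay model, t in ps."""
    return amplitude * np.exp(-t / tau)

# Synthetic TRPL traces: tau_B = 30 ps below threshold, tau_A = 3 ps above.
t = np.linspace(0.0, 100.0, 200)
trace_below = decay(t, 1.0, 30.0)
trace_above = decay(t, 1.0, 3.0)

p_below, _ = curve_fit(decay, t, trace_below, p0=(1.0, 10.0))
p_above, _ = curve_fit(decay, t, trace_above, p0=(1.0, 10.0))
print(f"tau_B ~ {p_below[1]:.1f} ps, tau_A ~ {p_above[1]:.1f} ps")
```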
We further investigated the time-resolved emission above threshold by fitting the emission spectra of Fig. 3b at each time with a Gaussian peak profile (see Supporting Information and Methods) and extracting the peak energy, as shown in Fig. 3c ($\approx$1 ps time steps are used). The corresponding fitted linewidths are reported in Fig. 3d. An instantaneous blueshift as large as 2.5 meV appears when the system is excited. This blueshift is due to the mutual interactions among PEPs and between PEPs and the exciton reservoir.\cite{Daskalakis:2014ex} This is a clear signature of polariton condensation, which, differently from photon lasing, manifests a density-dependent energy shift: the exciton reservoir depletes in time and the energy shows a continuous redshift towards the vacuum state (Fig. 3c) during the condensate's formation and decay. Indeed, as can be seen from the narrowing of the linewidth (Fig. 3d), the condensate forms a few picoseconds after the excitation pulse; however, the system continues to redshift due to the further reduction of the total population in the reservoir.
This behaviour is fast enough to exclude heating-related effects, which in any case would produce a blueshift rather than a redshift with time, and is similar to what has already been observed in exciton-polariton condensates in inorganic semiconductor microcavities.~\cite{DeGiorgi:2014prl} These temporal dynamics not only demonstrate the presence of reservoir-PEP interactions, but also show that plasmon-exciton coupling still holds while condensation occurs, excluding the photon lasing processes that may appear in the weak coupling limit if the coupling strength saturates.\cite{Yamamoto2010} Moreover, taking into account the absorption coefficient of the dye and the number of photons at the threshold power, we estimated the interaction constant $g = \Delta E / N \approx 2 \times 10^{-23}$ eV cm$^{3}$, where $\Delta E$ is the energy blueshift and $N$ is the density of the initially excited electron-hole pairs. Considering the dilution of the dye, this is in accordance with previous estimates for Frenkel excitons in organic semiconductors\cite{Daskalakis:2014ex, Lerario:2017}.
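A back-of-envelope check of this estimate: with the measured blueshift and the quoted interaction constant, the implied density of initially excited electron-hole pairs follows directly from $g = \Delta E / N$.

```python
# Order-of-magnitude check of the interaction constant quoted in the text:
# g = dE / N with dE = 2.5 meV and g ~ 2e-23 eV cm^3.
delta_E = 2.5e-3        # measured blueshift, eV
g = 2e-23               # quoted interaction constant, eV cm^3
N = delta_E / g         # implied excitation density, cm^-3
print(f"N ~ {N:.2e} cm^-3")   # ~1.25e20 cm^-3
```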
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{Fig3.pdf}
\end{center}
\caption{\textbf{PEP temporal dynamics and interaction-driven blueshift.} (a) Time-resolved PL of the sample, below (red dots) and above (blue dots) the condensation threshold. The green dots show the time-resolved PL intensity of the bare polymer layer doped with molecules, whereas the solid black line shows the temporal profile of the exciting laser pulse. (b) Time- and energy-resolved PL above the PEP condensation threshold, as measured on the streak camera. (c) Emission peak energy as obtained from a Gaussian fit of the TRPL PEP condensate trace at different delay times. (d) Linewidth of the emission, as extracted from the same Gaussian fits, as a function of time.}
\label{fig:Fig3}
\end{figure}
\section*{Spatial and temporal coherence}
One of the most important characteristics of a polariton condensate is its macroscopic spatial coherence. In PEP systems, spatial coherence of the order of a few microns has already been reported in the linear regime,~\cite{BellessaPRL2009,PTormaPRL2014} and was described in terms of plasmon-exciton hybridization induced by strong coupling. On the other hand, in standard plasmonic lasing, larger coherence has been observed, but always within the excitation spot.\cite{Hoang:2017}
In our nonlinear system, the 2D emission image of the sample above threshold contains all the information about the spatio-temporal correlations of the PEP condensate, which can be extracted by using interferometric techniques. In particular, we have employed a Michelson interferometer, schematically shown in Fig. S2 of the Supporting Information, with the sample non-resonantly excited with a 20 $\mu m$ pulsed laser spot (E = 2.48 eV, 100 fs pulse duration, 1 kHz repetition rate, as detailed in the Methods section). The corresponding emission image was collected with an objective lens and separated with a beam splitter along two perpendicular optical paths, which define the two arms of the interferometer. The two images, rotated by 180 degrees with respect to each other around the autocorrelation point (as shown in the sketch of Fig. S2 in the SI), are superimposed and interfere at the entrance slits of a monochromator equipped with a CCD camera.
Since the sample is excited non-resonantly, the excitons relax incoherently into the PEP band, with a phase that is not imposed by the laser pump. At sufficiently high PEP densities, phonon-assisted relaxation from the exciton reservoir rapidly populates the long-lived polariton state at $k_{x}=0$ $\mu m^{-1}$, leading to bosonic stimulated scattering and, finally, to condensation. By measuring the visibility of the interference fringes, we can thus obtain a complete spatial reconstruction of the first-order correlation function $g^{(1)}(r_1,r_2,\Delta t)$ of the condensate (see Supporting Information), at each temporal delay $\Delta t$ between the pulses in the two arms of the interferometer:
\begin{eqnarray}
g^{(1)}(\bf{r_1},\bf{r_2}, \Delta \normalfont{t})=\frac{<\Psi^*(\bf{r_1},\normalfont{0}) \Psi(\bf{r_2},\Delta \normalfont{t})>}{\sqrt{<\Psi^*(\bf{r_1},\normalfont{0}) \Psi(\bf{r_1},\normalfont{0})> <\Psi^*(\bf{r_2},\Delta \normalfont{t}) \Psi(\bf{r_2}, \Delta \normalfont{t})>}}\;,
\end{eqnarray}
where $\Psi^*$ and $\Psi$ are the creation and annihilation operators of the polaritons at the space-time point $(\bf{r},\normalfont{t})$.
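In practice, $|g^{(1)}|$ is read off the interferogram through the fringe visibility $V=(I_{max}-I_{min})/(I_{max}+I_{min})$, rescaled by the single-arm intensities. A minimal sketch on a synthetic fringe pattern with a known coherence value (all numbers illustrative):

```python
import numpy as np

def fringes(I1, I2, g1, phase):
    """Interference pattern of two beams with first-order coherence |g1|."""
    return I1 + I2 + 2.0 * np.sqrt(I1 * I2) * g1 * np.cos(phase)

def g1_from_fringes(pattern, I1, I2):
    """Recover |g1| from the extrema of the interference pattern:
    visibility rescaled by (I1 + I2) / (2 sqrt(I1 I2))."""
    vis = (pattern.max() - pattern.min()) / (pattern.max() + pattern.min())
    return vis * (I1 + I2) / (2.0 * np.sqrt(I1 * I2))

phase = np.linspace(0.0, 4.0 * np.pi, 500)
pattern = fringes(1.0, 1.0, 0.6, phase)   # equal arms, assumed |g1| = 0.6
print(f"recovered |g1| = {g1_from_fringes(pattern, 1.0, 1.0):.2f}")
```

Repeating this extraction pixel by pixel across the interferogram yields the spatial $g^{(1)}$ maps of Fig. 4.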
A typical interference pattern, measured at a pump power $P=1.2P_{th}$ and $\Delta t=0$, is shown in Fig. 4a, with the maximum fringe visibility in the center of the image (\textit{i.e.}, the autocorrelation point, $\bf{r}=\bf{r_0}$). The spatial map of $g^{(1)}$$(\bf{-r},\bf{r},\normalfont{0})$, as calculated from Fig. 4a, is displayed in Fig. 4c, while the profile along the black dashed line starting from the autocorrelation point is displayed in Fig. 4e. By fitting the experimental decay with an exponential function (see Supporting Information for details), a coherence length of $L_x\simeq100~\mu m$ is obtained. It is worth noting that the region of the condensate extends to much longer distances than the pump spot size (up to four to five times, as shown in Fig. S3), mediated by the SLRs spreading in the periodic array.
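The exponential fit used to extract the coherence length can be sketched as follows, on a synthetic $|g^{(1)}|$ profile with the $\sim$100 $\mu m$ decay length quoted above (the amplitude $g_0=0.9$ is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.optimize import curve_fit

def coherence_decay(x, g0, L):
    """Exponential decay of |g1| away from the autocorrelation point (x in um)."""
    return g0 * np.exp(-np.abs(x) / L)

# Synthetic |g1| profile mimicking the measured ~100 um coherence length.
x = np.linspace(0.0, 250.0, 100)
g1_profile = coherence_decay(x, 0.9, 100.0)

popt, _ = curve_fit(coherence_decay, x, g1_profile, p0=(1.0, 50.0))
print(f"coherence length L ~ {popt[1]:.0f} um")
```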
In addition to the presence of 1D long-range spatial correlations along the stripes, one could wonder whether, despite the disorder, the PEP condensate still manifests the fully 2D nature of the SLRs. To verify this property, the $g^{(1)}$ along the direction perpendicular to the emitting stripes was measured at another position by rotating the sample, as displayed in Figs. 4d and 4f. We find that, regardless of the spatial fragmentation of the condensate, there is a high degree of coherence, which is maintained also between different stripes. In particular, by fitting all the $g^{(1)}$ maxima along the stripes, an exponential decay is still obtained, shown as a red curve in Fig. 4f, with a coherence length of $L_y \simeq 120~\mu m$. This value is very similar to the one obtained for the \textit{x}-direction, which demonstrates the 2D nature of the PEP condensate. We clearly observe that by increasing the excitation power, both the intensity and the coherence length increase (black and red dots in Fig. 4b, respectively), manifesting the spontaneous buildup of a global phase at the PEP condensation energy due to the phase-coherent Bose stimulated scattering process. Since the LP branch below threshold has a very low emission intensity, the pump-rate dependence of the coherence length and the emission intensity have been estimated only above threshold (Fig. 4b).
It is worth noting that for 2D systems below a finite temperature $T_{BKT}$ and close to equilibrium, the Berezinskii-Kosterlitz-Thouless (BKT) transition, characterized by quasi-long-range order with an algebraic decay of coherence, should be observed.\cite{Caputo:2017BKT} However, pumping and dissipation in our PEP system play a major role as compared to thermalisation and interactions. Due to the extremely short LSP lifetime in plasmonic-based condensates, which makes them strongly out of equilibrium, it is not surprising that we do not find a BKT transition, but rather an exponential loss of spatial coherence.~\cite{KenaCohen:2015}
\begin{figure}
\begin{center}
\includegraphics[width=6.5in]{Fig4.pdf}
\end{center}
\caption{\textbf{Two-dimensional spatial coherence of the PEP condensate.} (a) Experimental interferogram. (b) Emission intensity and coherence length along the x-direction as a function of the excitation power. (c) Map of the first-order correlation function. The dot at the centre indicates the autocorrelation point $\bf{r_0}$. (e) Profile of the $g^{(1)}(-x,x,0)$ extracted along the black dashed line in (c), parallel to the \textit{x}-axis. (d) Map and (f) profile along the dashed line, parallel to the y-axis, of the first-order correlation function, in the configuration with vertical PEP condensate stripes. The red line shows the exponential fit to the coherence data, giving a decay length of about 100 $\mu m$ for both directions.}
\label{fig:Fig4}
\end{figure}
Finally, we also measured the coherence time of our PEP condensate, as shown in Fig. 5a for three different delays on an individual emitting stripe. The $g^{(1)}$($\bf{r}=0$, $\Delta t$) at the autocorrelation point is then calculated and plotted in Fig. 5b, displaying a loss of coherence with a decay time of $t_c \simeq 1.7$ ps, as obtained by fitting the experimental points with a quasi-Gaussian decay function (red line, see Supporting Information). We expect this coherence time to be underestimated due to the pulsed nature of the excitation. In fact, as shown in the sketch of Fig. 5b, when one of the arms of the interferometer is delayed with respect to the other, only the partial temporal overlap of the emission signals from the two arms can interfere, while an increasing background signal from non-overlapping pulses (gray shaded regions in the sketch) reduces the overall visibility of the fringes.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{Fig5.pdf}
\end{center}
\caption{\textbf{Temporal coherence of the PEP condensate.} (a) Interference pattern of an individual emitting stripe, at different delay times $\Delta t$ (0.07 ps, 0.7 ps and 2.7 ps, respectively) between the two arms of the interferometer. (b) Time decay of the coherence at the autocorrelation point, g$^{(1)}$($\bf{r_{0}}$, $\Delta t$). The red line shows the stretched exponential fit to the data. Inset: sketch of the partial overlap between the signals of the two interferometer arms, which result from the pulsed excitation and are delayed with respect to each other. Only the overlapping part of the signals contributes to the interference.}
\label{fig:Fig5}
\end{figure}
In conclusion, we have demonstrated the formation of a non-equilibrium plasmon-exciton-polariton condensate in a lattice of metal nanoparticles supporting a long-lived plasmonic mode strongly coupled to Frenkel excitons in an organic dye. Time-resolved measurements reveal the ultrafast picosecond dynamics leading to the condensate formation, with a 2.5 meV energy blueshift of the condensate due to PEP interactions. An extended two-dimensional spatial coherence is also observed, showing a high degree of robustness against disorder and inhomogeneities. As a result, our system can be described as a single macroscopic state, with a coherence length longer than $100$ $\mu m$. These findings are very promising for studying the properties of quantum fluids at room temperature with ultrafast dynamics, thus opening the way towards future plasmon-exciton-polariton based condensates and devices.
\section{Methods}
\subsection{Sample fabrication}
The array of silver nanoparticles was fabricated using substrate-conformal imprint lithography onto a glass substrate (n = 1.51). The silver nanoparticles were covered by an 8 nm thick layer of SiO$_{2}$ and a 20 nm thick layer of Si$_{3}$N$_{4}$ to protect the silver from oxidation. A 200 nm thick layer of PMMA doped with the rylene dye [N,N'-Bis(2,6-diisopropylphenyl)-1,7- and -1,6-bis(2,6-diisopropylphenoxy)-perylene-3,4:9,10-tetracarboximide] at 35 wt\% concentration was spin-coated onto the array.
\subsection{Optical measurements}
To measure the optical extinction and the angle-resolved PL, we used a set of rotation stages that rotate the sample to measure the transmission at different angles of incidence or collect the PL at different emission angles. The transmission and PL were measured with an optical fiber and an Ocean Optics spectrometer (USB2000). For the extinction measurements we used a broadband white lamp, while for the PL measurements in Fig. 2 the sample was excited non-resonantly at normal incidence with 100 fs amplified pulses at $E_{exc}$ = 2.48 eV excitation energy and 1 kHz repetition rate.
To study the long-range correlations of the PEP condensate, we used a Michelson interferometer. The sample was non-resonantly excited with a laser at $E_{exc}$ = 2.48 eV and 100 fs pulse width (10 kHz repetition rate, 4.5 mJ pulse energy) focused by a camera objective (3.5 cm working distance, 0.7 N.A.) into a spot of about 20 $\mu$m. To avoid excitation bleaching of the organic molecules, we reduced the average laser power by using a chopper with a 10\% duty cycle. The PL was collected over a large area of the sample with a 40x objective with N.A. = 0.65. The real-space PL maps were measured on the CCD camera by blocking one arm of the Michelson interferometer and filtering out the laser light with a long-pass filter (LWP550). We used the same setup, coupled with a spectrometer equipped with a streak camera, to study the condensate temporal dynamics.
\subsection{Finite-difference time-domain simulations}
The simulation of the near-field distribution shown in Figure 1 was done using a commercial package for finite-difference time-domain simulations. A simulated volume of 380 nm $\times$ 200 nm $\times$ 2000 nm was used, with periodic boundary conditions along the x- and y-directions to reproduce the periodic lattice. For the upper and lower boundaries along the z-direction, semi-infinite boundary conditions were used. The sample was illuminated with a broadband pulse incident at k = 0 $\mu m^{-1}$. We used values reported in the literature for the complex permittivities of Ag and of the SiO$_{2}$ and Si$_{3}$N$_{4}$ passivation layers.\cite{palik} The permittivity of the dye-doped polymer layer (shown in the S.I.) was determined by means of ellipsometry.
\begin{acknowledgement}
We are grateful to Marc A. Verschuuren for the fabrication of the samples. We also thank Femius Koenderink, Ke Guo, Dario Ballarini and Lorenzo Dominici for stimulating discussions. This research was financially supported by the Netherlands Organisation for Scientific Research (NWO) through the Industrial Partnership Program Nanophotonics for Solid State Lighting between Philips and NWO, the ERC project POLAFLOW (Grant No. 308136) and the ERC-2017-PoC project ELECOPTER (Grant No. 780757).
\end{acknowledgement}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{45}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Kasprzak \latin{et~al.}(2006)Kasprzak, Richard, Kundermann, Baas,
Jeambrun, Keeling, Marchetti, Szymanska, Andr{\'e}, Staehli, Savona,
Littlewood, Deveaud, and Dang]{Kasprzak:2006jy}
Kasprzak,~J.; Richard,~M.; Kundermann,~S.; Baas,~A.; Jeambrun,~P.; Keeling,~J.
M.~J.; Marchetti,~F.~M.; Szymanska,~M.~H.; Andr{\'e},~R.; Staehli,~J.~L.;
Savona,~V.; Littlewood,~P.~B.; Deveaud,~B.; Dang,~L.~S. Bose-Einstein
condensation of exciton polaritons. \emph{Nature} \textbf{2006}, \emph{443},
409\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Amo \latin{et~al.}(2009)Amo, Sanvitto, Laussy, Ballarini, Valle,
Martin, Lema{\^i}tre, Bloch, Krizhanovskii, Skolnick, Tejedor, and
Vi{\~n}a]{Amo:2009}
Amo,~A.; Sanvitto,~D.; Laussy,~F.~P.; Ballarini,~D.; Valle,~E.~d.;
Martin,~M.~D.; Lema{\^i}tre,~A.; Bloch,~J.; Krizhanovskii,~D.~N.;
Skolnick,~M.~S.; Tejedor,~C.; Vi{\~n}a,~L. Collective fluid dynamics of a
polariton condensate in a semiconductor microcavity. \emph{Nature}
\textbf{2009}, \emph{457}, 291\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Amo \latin{et~al.}(2009)Amo, Lefr{\`e}re, Pigeon, Adrados, Ciuti,
Carusotto, Houdr{\'e}, Giacobino, and Bramati]{Amo:2009bl}
Amo,~A.; Lefr{\`e}re,~J.; Pigeon,~S.; Adrados,~C.; Ciuti,~C.; Carusotto,~I.;
Houdr{\'e},~R.; Giacobino,~E.; Bramati,~A. Superfluidity of polaritons in
semiconductor microcavities. \emph{Nature Physics} \textbf{2009}, \emph{5},
805\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lagoudakis \latin{et~al.}(2008)Lagoudakis, Wouters, Richard, Baas,
Carusotto, Andr{\'e}, Dang, and Deveaud-Pl{\'e}dran]{Lagoudakis:2008}
Lagoudakis,~K.~G.; Wouters,~M.; Richard,~M.; Baas,~A.; Carusotto,~I.;
Andr{\'e},~R.; Dang,~L.~S.; Deveaud-Pl{\'e}dran,~B. Quantized vortices in an
exciton-polariton condensate. \emph{Nature Physics} \textbf{2008}, \emph{4},
706\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Savvidis \latin{et~al.}(2000)Savvidis, Baumberg, Stevenson, Skolnick,
Whittaker, and Roberts]{Savvidis:2000}
Savvidis,~P.~G.; Baumberg,~J.~J.; Stevenson,~R.~M.; Skolnick,~M.~S.;
Whittaker,~D.~M.; Roberts,~J.~S. Angle-Resonant Stimulated Polariton
Amplifier. \emph{Phys. Rev. Lett.} \textbf{2000}, \emph{84}, 1547--1550\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[K{\'e}na-Cohen and Forrest(2010)K{\'e}na-Cohen, and
Forrest]{KenaCohen:2010cy}
K{\'e}na-Cohen,~S.; Forrest,~S.~R. Room-temperature polariton lasing in an
organic single-crystal microcavity. \emph{Nature Photonics} \textbf{2010},
\emph{4}, 371--375\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Plumhof \latin{et~al.}(2014)Plumhof, St{\"o}ferle, Mai, Scherf, and
Mahrt]{Plumhof:2013bn}
Plumhof,~J.~D.; St{\"o}ferle,~T.; Mai,~L.; Scherf,~U.; Mahrt,~R.~F.
{Room-temperature Bose-Einstein condensation of cavity exciton-polaritons in
a polymer.} \emph{Nature materials} \textbf{2014}, \emph{13}, 247--252\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Daskalakis \latin{et~al.}(2014)Daskalakis, Maier, Murray, and
K{\'e}na-Cohen]{Daskalakis:2014ex}
Daskalakis,~K.~S.; Maier,~S.~A.; Murray,~R.; K{\'e}na-Cohen,~S. {Nonlinear
interactions in an organic polariton condensate}. \emph{Nature materials}
\textbf{2014}, \emph{13}, 271--278\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lerario \latin{et~al.}(2017)Lerario, Fieramosca, Barachati, Ballarini,
Daskalakis, Dominici, De~Giorgi, Maier, Gigli, K{\'e}na-Cohen, and
Sanvitto]{Lerario:2017}
Lerario,~G.; Fieramosca,~A.; Barachati,~F.; Ballarini,~D.; Daskalakis,~K.~S.;
Dominici,~L.; De~Giorgi,~M.; Maier,~S.~A.; Gigli,~G.; K{\'e}na-Cohen,~S.;
Sanvitto,~D. Room-temperature superfluidity in a polariton condensate.
\emph{Nature Physics} \textbf{2017}, \emph{13}, 837\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hakala \latin{et~al.}(2009)Hakala, Toppari, Kuzyk, Pettersson,
Tikkanen, Kunttu, and T\"orm\"a]{Hakala:2009}
Hakala,~T.~K.; Toppari,~J.~J.; Kuzyk,~A.; Pettersson,~M.; Tikkanen,~H.;
Kunttu,~H.; T\"orm\"a,~P. Vacuum Rabi Splitting and Strong-Coupling Dynamics
for Surface-Plasmon Polaritons and Rhodamine 6G Molecules. \emph{Phys. Rev.
Lett.} \textbf{2009}, \emph{103}, 053602\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chikkaraddy \latin{et~al.}(2016)Chikkaraddy, de~Nijs, Benz, Barrow,
Scherman, Rosta, Demetriadou, Fox, Hess, and Baumberg]{Chikkaraddy:2016}
Chikkaraddy,~R.; de~Nijs,~B.; Benz,~F.; Barrow,~S.~J.; Scherman,~O.~A.;
Rosta,~E.; Demetriadou,~A.; Fox,~P.; Hess,~O.; Baumberg,~J.~J.
Single-molecule strong coupling at room temperature in plasmonic
nanocavities. \emph{Nature} \textbf{2016}, \emph{535}, 127--130\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zengin \latin{et~al.}(2015)Zengin, Wers{\"a}ll, Nilsson, Antosiewicz,
K{\"a}ll, and Shegai]{Zengin:2013}
Zengin,~G.; Wers{\"a}ll,~M.; Nilsson,~S.; Antosiewicz,~T.~J.; K{\"a}ll,~M.;
Shegai,~T. {Realizing Strong Light-Matter Interactions between
Single-Nanoparticle Plasmons and Molecular Excitons at Ambient Conditions}.
\emph{Physical Review Letters} \textbf{2015}, \emph{114}, 157401--6\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[T{\"o}rm{\"a} and Barnes(2015)T{\"o}rm{\"a}, and Barnes]{Torma:2015}
T{\"o}rm{\"a},~P.; Barnes,~W.~L. Strong coupling between surface plasmon
polaritons and emitters: a review. \emph{Reports on Progress in Physics}
\textbf{2015}, \emph{78}, 013901\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Todisco \latin{et~al.}(2018)Todisco, De~Giorgi, Esposito, De~Marco,
Zizzari, Bianco, Dominici, Ballarini, Arima, Gigli, and
Sanvitto]{Todisco:2017}
Todisco,~F.; De~Giorgi,~M.; Esposito,~M.; De~Marco,~L.; Zizzari,~A.;
Bianco,~M.; Dominici,~L.; Ballarini,~D.; Arima,~V.; Gigli,~G.; Sanvitto,~D.
Ultrastrong Plasmon--Exciton Coupling by Dynamic Molecular Aggregation.
\emph{ACS Photonics} \textbf{2018}, \emph{5}, 143--150\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2017)Wang, Ramezani, V{\"a}kev{\"a}inen,
T{\"o}rm{\"a}, Rivas, and Odom]{Wang:2017}
Wang,~W.; Ramezani,~M.; V{\"a}kev{\"a}inen,~A.~I.; T{\"o}rm{\"a},~P.;
Rivas,~J.~G.;
Odom,~T.~W. The rich photonic world of plasmonic nanoparticle arrays.
\emph{Materials Today} \textbf{2017}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zou \latin{et~al.}(2004)Zou, Janel, and Schatz]{zou:2004}
Zou,~S.; Janel,~N.; Schatz,~G.~C. Silver nanoparticle array structures that
produce remarkably narrow plasmon lineshapes. \emph{The Journal of Chemical
Physics} \textbf{2004}, \emph{120}, 10871--10875\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kravets \latin{et~al.}(2008)Kravets, Schedin, and
Grigorenko]{Kravets:2008}
Kravets,~V.~G.; Schedin,~F.; Grigorenko,~A.~N. Extremely Narrow Plasmon
Resonances Based on Diffraction Coupling of Localized Plasmons in Arrays of
Metallic Nanoparticles. \emph{Phys. Rev. Lett.} \textbf{2008}, \emph{101},
087403\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rodriguez \latin{et~al.}(2011)Rodriguez, Abass, Maes, Janssen, Vecchi,
and G\'omez~Rivas]{Rodriguez:2011}
Rodriguez,~S. R.~K.; Abass,~A.; Maes,~B.; Janssen,~O. T.~A.; Vecchi,~G.;
G\'omez~Rivas,~J. Coupling Bright and Dark Plasmonic Lattice Resonances.
\emph{Phys. Rev. X} \textbf{2011}, \emph{1}, 021019\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vecchi \latin{et~al.}(2009)Vecchi, Giannini, and
G\'omez~Rivas]{Vecchi:2009prl}
Vecchi,~G.; Giannini,~V.; G\'omez~Rivas,~J. Shaping the Fluorescent Emission by
Lattice Resonances in Plasmonic Crystals of Nanoantennas. \emph{Phys. Rev.
Lett.} \textbf{2009}, \emph{102}, 146807\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Humphrey and Barnes(2014)Humphrey, and Barnes]{Humphrey:2014}
Humphrey,~A.~D.; Barnes,~W.~L. Plasmonic surface lattice resonances on arrays
of different lattice symmetry. \emph{Phys. Rev. B} \textbf{2014}, \emph{90},
075404\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Todisco \latin{et~al.}(2016)Todisco, Esposito, Panaro, De~Giorgi,
Dominici, Ballarini, Fern{\'a}ndez-Dom{\'i}nguez, Tasco, Cuscun{\`a}, Passaseo,
Cirac{\`i}, Gigli, and Sanvitto]{Todisco:2016}
Todisco,~F.; Esposito,~M.; Panaro,~S.; De~Giorgi,~M.; Dominici,~L.;
Ballarini,~D.; Fern{\'a}ndez-Dom{\'i}nguez,~A.~I.; Tasco,~V.; Cuscun{\`a},~M.;
Passaseo,~A.; Cirac{\`i},~C.; Gigli,~G.; Sanvitto,~D. Toward Cavity Quantum
Electrodynamics with Hybrid Photon Gap-Plasmon States. \emph{ACS Nano}
\textbf{2016}, \emph{10}, 11360--11368, PMID: 28024373\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ramezani \latin{et~al.}(2016)Ramezani, Lozano, Verschuuren, and
G\'omez-Rivas]{Ramezani:2016}
Ramezani,~M.; Lozano,~G.; Verschuuren,~M.~A.; G\'omez-Rivas,~J. Modified
emission of extended light emitting layers by selective coupling to
collective lattice resonances. \emph{Phys. Rev. B} \textbf{2016}, \emph{94},
125406\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rodriguez and Rivas(2013)Rodriguez, and Rivas]{Rodriguez:13}
Rodriguez,~S.; Rivas,~J.~G. Surface lattice resonances strongly coupled to
Rhodamine 6G excitons: tuning the plasmon-exciton-polariton mass and
composition. \emph{Opt. Express} \textbf{2013}, \emph{21}, 27411--27421\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rodriguez \latin{et~al.}(2013)Rodriguez, Feist, Verschuuren,
Garcia~Vidal, and G\'omez~Rivas]{Rodriguez:2013prl}
Rodriguez,~S. R.~K.; Feist,~J.; Verschuuren,~M.~A.; Garcia~Vidal,~F.~J.;
G\'omez~Rivas,~J. Thermalization and Cooling of Plasmon-Exciton Polaritons:
Towards Quantum Condensation. \emph{Phys. Rev. Lett.} \textbf{2013},
\emph{111}, 166802\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[V{\"a}kev{\"a}inen \latin{et~al.}(2014)V{\"a}kev{\"a}inen, Moerland,
Rekola, Eskelinen, Martikainen, Kim, and T{\"o}rm{\"a}]{Vakevainen:2014}
V{\"a}kev{\"a}inen,~A.~I.; Moerland,~R.~J.; Rekola,~H.~T.; Eskelinen,~A.~P.;
Martikainen,~J.~P.; Kim,~D.~H.; T{\"o}rm{\"a},~P. {Plasmonic Surface Lattice
Resonances at the Strong Coupling Regime}. \emph{Nano Letters} \textbf{2014},
\emph{14}, 1721--1727\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Todisco \latin{et~al.}(2015)Todisco, D'Agostino, Esposito,
Fern\'andez-Dom\'inguez, De~Giorgi, Ballarini, Dominici, Tarantini, Cuscun\`a,
Della~Sala, Gigli, and Sanvitto]{Todisco:2015}
Todisco,~F.; D'Agostino,~S.; Esposito,~M.; Fern\'andez-Dom\'inguez,~A.~I.;
De~Giorgi,~M.; Ballarini,~D.; Dominici,~L.; Tarantini,~I.; Cuscun\`a,~M.;
Della~Sala,~F.; Gigli,~G.; Sanvitto,~D. Exciton--Plasmon Coupling
Enhancement via Metal Oxidation. \emph{ACS Nano} \textbf{2015}, \emph{9},
9691--9699\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sno(2014)]{Snoke:2012}
Timofeev,~V., Sanvitto,~D., Eds. \emph{Exciton Polaritons in Microcavities: New
Frontiers}; Springer, 2014\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Horikiri \latin{et~al.}(2017)Horikiri, Byrnes, Kusudo, Ishida, Matsuo,
Shikano, L\"offler, H\"ofling, Forchel, and Yamamoto]{Yamamoto:2017}
Horikiri,~T.; Byrnes,~T.; Kusudo,~K.; Ishida,~N.; Matsuo,~Y.; Shikano,~Y.;
L\"offler,~A.; H\"ofling,~S.; Forchel,~A.; Yamamoto,~Y. Highly excited
exciton-polariton condensates. \emph{Phys. Rev. B} \textbf{2017}, \emph{95},
245122\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hakala \latin{et~al.}(2018)Hakala, Moilanen, V{\"a}kev{\"a}inen, Guo,
Martikainen, Daskalakis, Rekola, Julku, and T{\"o}rm{\"a}]{Torma:2018}
Hakala,~T.~K.; Moilanen,~A.~J.; V{\"a}kev{\"a}inen,~A.~I.; Guo,~R.;
Martikainen,~J.-P.; Daskalakis,~K.~S.; Rekola,~H.~T.; Julku,~A.;
T{\"o}rm{\"a},~P. Bose-Einstein Condensation in a Plasmonic Lattice.
\emph{Nature Physics} \textbf{2018}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ramezani \latin{et~al.}(2017)Ramezani, Halpin,
Fern\'{a}ndez-Dom\'{i}nguez, Feist, Rodriguez, Garcia-Vidal, and
Rivas]{Ramezani:2017}
Ramezani,~M.; Halpin,~A.; Fern\'{a}ndez-Dom\'{i}nguez,~A.~I.; Feist,~J.;
Rodriguez,~S. R.-K.; Garcia-Vidal,~F.~J.; Rivas,~J.~G.
Plasmon-exciton-polariton lasing. \emph{Optica} \textbf{2017}, \emph{4},
31--37\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Giannini \latin{et~al.}(2010)Giannini, Vecchi, and
G\'omez~Rivas]{Giannini:2010}
Giannini,~V.; Vecchi,~G.; G\'omez~Rivas,~J. Lighting Up Multipolar Surface
Plasmon Polaritons by Collective Resonances in Arrays of Nanoantennas.
\emph{Phys. Rev. Lett.} \textbf{2010}, \emph{105}, 266801\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ropers \latin{et~al.}(2005)Ropers, Park, Stibenz, Steinmeyer, Kim,
Kim, and Lienau]{Lienau:2005prl}
Ropers,~C.; Park,~D.~J.; Stibenz,~G.; Steinmeyer,~G.; Kim,~J.; Kim,~D.~S.;
Lienau,~C. Femtosecond Light Transmission and Subradiant Damping in Plasmonic
Crystals. \emph{Phys. Rev. Lett.} \textbf{2005}, \emph{94}, 113901\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rodriguez \latin{et~al.}(2011)Rodriguez, Abass, Maes, Janssen, Vecchi,
and G\'omez~Rivas]{SRK:2011prx}
Rodriguez,~S. R.~K.; Abass,~A.; Maes,~B.; Janssen,~O. T.~A.; Vecchi,~G.;
G\'omez~Rivas,~J. Coupling Bright and Dark Plasmonic Lattice Resonances.
\emph{Phys. Rev. X} \textbf{2011}, \emph{1}, 021019\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Abass \latin{et~al.}(2014)Abass, Rodriguez, G\'omez~Rivas, and
Maes]{Abass:2014acs}
Abass,~A.; Rodriguez,~S. R.-K.; G\'omez~Rivas,~J.; Maes,~B. Tailoring Dispersion
and Eigenfield Profiles of Plasmonic Surface Lattice Resonances. \emph{ACS
Photonics} \textbf{2014}, \emph{1}, 61--68\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mazza \latin{et~al.}(2009)Mazza, Fontanesi, and
La~Rocca]{mazza:2009prb}
Mazza,~L.; Fontanesi,~L.; La~Rocca,~G.~C. Organic-based microcavities with
vibronic progressions: Photoluminescence. \emph{Phys. Rev. B} \textbf{2009},
\emph{80}, 235314\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mazza \latin{et~al.}(2013)Mazza, K\'ena-Cohen, Michetti, and
La~Rocca]{Mazza2013}
Mazza,~L.; K\'ena-Cohen,~S.; Michetti,~P.; La~Rocca,~G.~C. Microscopic theory
of polariton lasing via vibronically assisted scattering. \emph{Phys. Rev. B}
\textbf{2013}, \emph{88}, 075321\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[De~Giorgi \latin{et~al.}(2014)De~Giorgi, Ballarini, Cazzato,
Deligeorgis, Tsintzos, Hatzopoulos, Savvidis, Gigli, Laussy, and
Sanvitto]{DeGiorgi:2014prl}
De~Giorgi,~M.; Ballarini,~D.; Cazzato,~P.; Deligeorgis,~G.; Tsintzos,~S.~I.;
Hatzopoulos,~Z.; Savvidis,~P.~G.; Gigli,~G.; Laussy,~F.~P.; Sanvitto,~D.
Relaxation Oscillations in the Formation of a Polariton Condensate.
\emph{Phys. Rev. Lett.} \textbf{2014}, \emph{112}, 113602\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Deng \latin{et~al.}(2010)Deng, Haug, and Yamamoto]{Yamamoto2010}
Deng,~H.; Haug,~H.; Yamamoto,~Y. Exciton-polariton Bose-Einstein condensation.
\emph{Rev. Mod. Phys.} \textbf{2010}, \emph{82}, 1489--1537\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Aberra~Guebrou \latin{et~al.}(2012)Aberra~Guebrou, Symonds, Homeyer,
Plenet, Gartstein, Agranovich, and Bellessa]{BellessaPRL2009}
Aberra~Guebrou,~S.; Symonds,~C.; Homeyer,~E.; Plenet,~J.~C.; Gartstein,~Y.~N.;
Agranovich,~V.~M.; Bellessa,~J. Coherent Emission from a Disordered Organic
Semiconductor Induced by Strong Coupling with Surface Plasmons. \emph{Phys.
Rev. Lett.} \textbf{2012}, \emph{108}, 066401\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shi \latin{et~al.}(2014)Shi, Hakala, Rekola, Martikainen, Moerland,
and T\"orm\"a]{PTormaPRL2014}
Shi,~L.; Hakala,~T.~K.; Rekola,~H.~T.; Martikainen,~J.-P.; Moerland,~R.~J.;
T\"orm\"a,~P. Spatial Coherence Properties of Organic Molecules Coupled to
Plasmonic Surface Lattice Resonances in the Weak and Strong Coupling Regimes.
\emph{Phys. Rev. Lett.} \textbf{2014}, \emph{112}, 153002\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hoang \latin{et~al.}(2017)Hoang, Akselrod, Yang, Odom, and
Mikkelsen]{Hoang:2017}
Hoang,~T.~B.; Akselrod,~G.~M.; Yang,~A.; Odom,~T.~W.; Mikkelsen,~M.~H.
Millimeter-Scale Spatial Coherence from a Plasmon Laser. \emph{Nano Letters}
\textbf{2017}, \emph{17}, 6690--6695, PMID: 28956442\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Caputo \latin{et~al.}(2018)Caputo, Ballarini, Dagvadorj,
S\'anchez~Mu\~noz, De~Giorgi, Dominici, West, N.~Pfeiffer, Gigli, Laussy,
Szymanska, and Sanvitto]{Caputo:2017BKT}
Caputo,~D.; Ballarini,~D.; Dagvadorj,~G.; S\'anchez~Mu\~noz,~C.; De~Giorgi,~M.;
Dominici,~L.; West,~K.; N.~Pfeiffer,~L.; Gigli,~G.; Laussy,~F.;
Szymanska,~M.; Sanvitto,~D. Topological order and equilibrium in a condensate
of exciton-polaritons. \textbf{2018}, \emph{17}, 145--151\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Daskalakis \latin{et~al.}(2015)Daskalakis, Maier, and
K\'ena-Cohen]{KenaCohen:2015}
Daskalakis,~K.~S.; Maier,~S.~A.; K\'ena-Cohen,~S. Spatial Coherence and
Stability in a Disordered Organic Polariton Condensate. \emph{Phys. Rev.
Lett.} \textbf{2015}, \emph{115}, 035301\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[tagkey1985iii(1985)]{palik}
Palik,~E.~D., Ed. \emph{Handbook of Optical Constants of Solids}; Academic
Press: Boston, 1985\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section*{Molecular dye}
As an organic molecule, we used a derivative of rylene dye with the formula [N,N'-Bis(2,6-diisopropylphenyl)-1,7- and -1,6-bis (2,6-diisopropylphenoxy)-perylene-3,4:9,10-tetracarboximide] due to its photostability and the possibility of reaching high molecular concentrations within a PMMA matrix without aggregation. The absorption and emission spectra of the dye are shown in Fig. S1a. The absorption spectrum is characterized by one main peak at E = 2.24 eV and a vibronic replica at E = 2.41 eV. In Fig. S1b and c, the real and imaginary components of the refractive index of the PMMA layer doped with the rylene dye are reported, as obtained by ellipsometric measurements. These data were used to take into account the dye doped polymer dispersion in the numerical simulations.
\begin{figure}[H]
\begin{center}
\includegraphics[width=7in]{FigS1.pdf}
\caption{Normalized absorption and emission spectra (left) and permittivity (right) of the bare rylene dye.}
\end{center}
\label{fig:FigS1}
\end{figure}
\section*{Spatial coherence}
A sketch of the experimental setup used for the coherence measurements is depicted in Fig. S2, showing a classical Michelson interferometer scheme.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5in]{FigS-Interferometer.pdf}
\caption{Sketch of the Michelson interferometer used as experimental setup for the coherence measurements.}
\end{center}
\label{fig:FigS2}
\end{figure}
The two-dimensional spatial map of the first order correlation function $g^{(1)}(\mathbf{r},\mathbf{-r})$ was evaluated using the fast Fourier transform (FFT) to extract the interference pattern modulation from the experimental interferogram. Selecting only the frequencies corresponding to the fringe modulation allows the reconstruction of the fringe visibility. The measured visibility is normalized using the continuous background signal. The normalized visibility ($V$) and the first order correlation function are related by
\begin{equation}
|g^{(1)}(\mathbf{r},\mathbf{-r})|= V I_{\text{ideal}}
\end{equation}
where $I_\text{ideal}=\displaystyle (I_1+I_2)(2\sqrt{I_1I_2})^{-1}$, with $I_1$ and $I_2$ the light intensities measured on the two separated channels of the interferometer, takes into account possible small asymmetries between the two arms.
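As an illustration of this extraction, the sketch below (Python; the synthetic interferogram, carrier frequency, and arm intensities are hypothetical choices, not taken from the experiment) reads the fringe-modulation amplitude off the FFT, normalizes it by the DC background, and applies the relation above:

```python
import numpy as np

def fringe_visibility(interferogram, carrier_band):
    # FFT of the interferogram: the fringe modulation appears as a peak
    # at the carrier frequency, the background as the DC component.
    spectrum = np.fft.rfft(interferogram)
    n = len(interferogram)
    dc = np.abs(spectrum[0]) / n                   # mean (background) intensity
    lo, hi = carrier_band                          # FFT bins containing fringes
    modulation = 2.0 * np.abs(spectrum[lo:hi]).max() / n
    return modulation / dc                         # normalized visibility V

# Synthetic fringes I(x) = I0 * (1 + V * cos(k x)) with a known visibility.
x = np.arange(1024)
V_true = 0.6
interferogram = 1.0 * (1.0 + V_true * np.cos(2.0 * np.pi * 50 * x / 1024))
V_est = fringe_visibility(interferogram, carrier_band=(45, 56))

# |g1| then follows from V and the intensities measured in the two arms.
I1, I2 = 1.0, 0.8
g1 = V_est * (I1 + I2) / (2.0 * np.sqrt(I1 * I2))
```

On the synthetic fringes the estimator recovers $V$ exactly; on a real interferogram the carrier band has to be chosen around the measured fringe spacing.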
The function used for the fitting in Fig. 2d and 2f of the manuscript is an exponential profile of the form
\begin{equation}
|g^{(1)}(\Delta r)|= A e^{-\Delta r/b}
\end{equation}
where $A$ is a renormalization factor taking into account the experimental reduced visibility, e.g. from mechanical vibrations of the setup, and $b$ is the parameter describing the coherence length.
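A minimal version of this fit (Python; the decay data below are hypothetical and noise-free) exploits the fact that for an exponential profile $\log|g^{(1)}|$ is linear in $\Delta r$, so a linear regression directly yields $A$ and the coherence length $b$:

```python
import numpy as np

# Hypothetical decay |g1|(dr) = A * exp(-dr / b); since |g1| > 0 here,
# taking the logarithm turns the fit into a linear regression in dr.
b_true, A_true = 5.0, 0.8
dr = np.linspace(0.5, 20.0, 40)
g1 = A_true * np.exp(-dr / b_true)

slope, intercept = np.polyfit(dr, np.log(g1), 1)
b_fit = -1.0 / slope           # coherence length b
A_fit = np.exp(intercept)      # renormalization factor A
```

With noisy data a weighted or nonlinear least-squares fit would be preferable, since the log transform amplifies noise at small $|g^{(1)}|$.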
In Fig. S3 we report the interference patterns of the laser beam transmitted through the sample (Fig. S3a) and of the emission of the sample (Fig. S3b), filtered with a 10~nm linewidth bandpass filter centered at $\lambda = 605$~nm. A clear difference appears in the overall spatial extent of the coherent emission. While the exciting laser shows a higher degree of coherence, fully concentrated within the 20~$\mu$m diameter laser spot, the coherent emission from the sample has a lower degree of coherence but is more spatially extended, coming from an area much larger than the excitation spot.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5in]{FigS-CoherenceExtention}
\caption{Real space interference pattern from (a) the 20~$\mu$m diameter laser spot transmitted through the sample and (b) the sample emission above threshold. The coherent emission of the sample comes from an area much larger than the excitation spot.}
\end{center}
\label{fig:FigS3}
\end{figure}
\section*{Temporal coherence}
The temporal decay of coherence was evaluated from the coherence in the autocorrelation point ($\mathbf{r_0}$), by lengthening the optical path in one arm of the interferometer by means of a single-axis translation micrometer stage. This, in turn, results in a relative temporal delay between the two arms, $\Delta t$, ranging from $0.07$~ps to $\sim 3$~ps. Each measured frame is analysed by calculating the first order correlation function in the autocorrelation point, as discussed in the previous sections.
The extracted temporal decay is fitted by using a stretched exponential function as in:
\begin{equation}
|g^{(1)}(\mathbf{r_0}, \Delta t)|= A e^{-(\Delta t/l_e)^{\beta}}
\end{equation}
where $A$ is a renormalization factor, while $l_e$ and $\beta$ are the parameters containing the temporal coherence decay length,
given by:
\begin{equation}
\displaystyle
\langle l_t \rangle= \int_0^\infty dx e^{-(x/l_e)^{\beta}} = \frac{l_e}{\beta} \Gamma \left(\frac{1}{\beta} \right)
\end{equation}
where $\langle l_t \rangle$ is the temporal decay length, $l_e$ is the scale factor of the temporal axis,
$\beta$ is the exponent of the stretched exponential and $\Gamma$ is the gamma function. From the fit shown in Fig. 5b of the manuscript, the extracted $\beta$ is $\approx 1.5$.
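The relation above between $\langle l_t \rangle$, $l_e$ and $\beta$ can be checked numerically; the sketch below (pure Python, with hypothetical parameter values and $\beta = 1.5$ as in the fit) compares a midpoint-rule integration of the stretched exponential with the closed form $(l_e/\beta)\,\Gamma(1/\beta)$:

```python
import math

def stretched_exp_area(l_e, beta, n=200000, t_max_factor=50.0):
    # Midpoint-rule integral of exp(-(t/l_e)**beta) on [0, t_max];
    # the tail beyond t_max = 50 * l_e is negligible for beta >= 1.
    t_max = t_max_factor * l_e
    dt = t_max / n
    return dt * sum(math.exp(-(((i + 0.5) * dt) / l_e) ** beta)
                    for i in range(n))

l_e, beta = 2.0, 1.5
numeric = stretched_exp_area(l_e, beta)
closed_form = (l_e / beta) * math.gamma(1.0 / beta)
```

The two values agree to the accuracy of the quadrature, confirming the expression for $\langle l_t \rangle$.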
\section*{Time resolved measurements}
The time resolved experiments have been performed by non-resonantly exciting the sample with a pulsed laser of 100 fs pulse width, and collecting the emission on a streak camera. Fig. S4 shows the temporal profile of the laser pulse (in log scale) measured on the streak camera, which defines the time resolution of our set up.
By fitting the laser profile with a Gaussian function (red line), a FWHM of 1.8 ps is obtained.
The peak energy of the plasmon exciton polariton condensate at different delay times has been extracted by fitting the emission spectra at each time with a Gaussian peak profile. In Fig. S5, we show some of these fitted spectra at t = 0.5 ps (a), t = 7.2 ps (b) and t = 13.6 ps (c).
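The peak extraction can be sketched as follows (Python; the spectrum below is a hypothetical noise-free Gaussian, standing in for the measured streak-camera spectra): for a Gaussian peak the logarithm of the spectrum is a parabola, whose vertex directly gives the peak energy.

```python
import numpy as np

# Hypothetical emission spectrum with a Gaussian peak at E0 (energies in eV).
E = np.linspace(1.80, 2.20, 81)
E0_true, sigma_true = 2.03, 0.02
I = 0.5 * np.exp(-(E - E0_true) ** 2 / (2.0 * sigma_true ** 2))

# For a Gaussian, log I is a parabola; fit it in a centered variable
# (better conditioned) and read the peak energy off the vertex.
Ec = E - 2.0
c2, c1, c0 = np.polyfit(Ec, np.log(I), 2)
E0_fit = 2.0 - c1 / (2.0 * c2)
fwhm_fit = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(-1.0 / (2.0 * c2))
```

On noisy data one would instead fit the Gaussian profile directly by nonlinear least squares, as done for the spectra in Fig. S5.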
\begin{figure}[H]
\begin{center}
\includegraphics[width=3in]{FigS-StreakResolution}
\caption{Temporal profile of the 100 fs exciting laser pulse, measured on the streak camera setup. The red line shows the Gaussian fit of the data with a FWHM of 1.8 ps.}
\end{center}
\label{fig:FigS4}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=4in]{Fig_Fit_SI}
\caption{Plasmon-exciton-polariton spectra at delay times of t = 0.5 ps (a), t = 7.2 ps (b) and t = 13.6 ps (c) as extracted from the time resolved emission spectrum of Fig. 3b of the main text. The red lines show the Gaussian fits of the data.}
\end{center}
\label{fig:FigS5}
\end{figure}
\end{document} |
\section{Introduction}
Arithmetic circuits are the most natural model of computation for a wide variety of algebraic problems such as matrix multiplication, computing fast Fourier transforms, etc. The problem of proving lower bounds for arithmetic circuits is one of the most fundamental and interesting problems in complexity theory. Proving superpolynomial lower bounds for general arithmetic circuits would resolve the $\VP$ versus $\VNP$ conjecture~\cite{Valiant79}, the algebraic analog of the ${\mathbb{P}}$ vs $\NP$ conjecture. This is one of the holy grails of complexity theory and has received a lot of attention, since it is a more structured and potentially easier question to understand and analyse than the ${\mathbb{P}}$ vs $\NP$ problem.
The intimately related problem of polynomial identity testing (PIT) is the problem of testing whether a polynomial, given as an arithmetic circuit, is identically zero. In the setting where the algorithm cannot look inside the circuit, but only has access to evaluations of the circuit, the problem is referred to as blackbox PIT. There is a very simple randomized algorithm for this problem - simply evaluate the polynomial at a random point from a large enough domain. With very high probability, a nonzero polynomial will have a nonzero evaluation~\cite{Schwartz80, Zippel79}. It is a very important and fundamental question to derandomize the above algorithm. In a seminal work, Kabanets and Impagliazzo~\cite{KI04} showed that the problem of proving lower bounds for arithmetic circuits and the problem of derandomizing identity testing are essentially equivalent!
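The randomized test described above can be sketched in a few lines (Python; the field size, number of trials, and example polynomials are illustrative choices, not part of any cited construction):

```python
import random

P_FIELD = 2 ** 61 - 1          # a large prime, so |S| vastly exceeds the degree

def probably_zero(blackbox, num_vars, trials=20, seed=0):
    # Schwartz-Zippel: a nonzero polynomial of total degree d vanishes at a
    # uniformly random point of S^n with probability at most d / |S|.
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(P_FIELD) for _ in range(num_vars)]
        if blackbox(*point) % P_FIELD != 0:
            return False       # witness found: the polynomial is nonzero
    return True                # identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is the zero polynomial; x*y + x - y*x is x.
zero_poly = lambda x, y: (x + y) ** 2 - (x ** 2 + 2 * x * y + y ** 2)
nonzero_poly = lambda x, y: x * y + x - y * x
```

Derandomizing this test means replacing the random points by a small, explicitly constructible hitting set, which is exactly what the hardness-randomness connection of Kabanets and Impagliazzo relates to lower bounds.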
These two problems have occupied a central position in complexity theory and despite much attention, our understanding of general arithmetic circuits is still very limited. Thus there has been a great deal of effort in understanding the complexity of restricted classes of arithmetic circuits in an attempt to obtain a better understanding of the general problem. Low depth arithmetic circuits in particular are one such well studied class.
\paragraph{Lower bounds for homogeneous low depth arithmetic circuits.}
The last few years have seen a tremendous amount of exciting progress on the problems of ``depth reduction" of general arithmetic circuits to low depth arithmetic circuits, and of proving lower bounds for low depth arithmetic circuits. Using depth reduction techniques~\cite{VSBR83, AV08, koiran, Tavenas13} it was shown that $N^{\omega(\sqrt n)}$ lower bounds (for polynomials in $N$ variables and of degree $n$) for just homogeneous depth 4 arithmetic circuits of bottom fan-in $\sqrt n$ would suffice to separate $\VP$ from $\VNP$ and imply superpolynomial lower bounds for general arithmetic circuits. At the same time there was a very exciting line of works proving $N^{\Omega(\sqrt n)}$ lower bounds for the same model of arithmetic circuits (and in fact for even the more general class of homogeneous depth 4 arithmetic circuits with no restriction on bottom fan-in)~\cite{GKKS12, FLMS13, KSS13, KS-formula, KLSS14, KS-full}.
\paragraph{Lower bounds for non-homogeneous low depth arithmetic circuits.} Despite all this remarkable progress, and some very strong lower bounds for homogeneous low depth arithmetic circuits, much less is understood in the non-homogeneous world. Only mild lower bounds are known when we drop the condition of homogeneity, even for very simple classes of low depth arithmetic circuits. For depth 3 circuits over fields of characteristic 0, only quadratic lower bounds are known~\cite{SW01, Shp01}, and there has been no progress on this question in more than a decade.
In a beautiful depth reduction result over fields of characteristic 0, Gupta et al~\cite{GKKS13} showed that $N^{\omega(\sqrt n)}$ lower bounds (for polynomials in $N$ variables and of degree $n$) for the class of non-homogeneous {\it depth 3} circuits would already separate $\VP$ from $\VNP$. It was recently observed by Kayal and Saha~\cite{KayalSaha14}~\footnote{They attribute the observation to Ramprasad Saptharishi.} that in fact it suffices to prove such lower bounds for depth 3 circuits with bottom fan-in $\sqrt n$.
Till recently (in particular till the work of~\cite{KayalSaha14}), the best known lower bounds for depth 3 circuits even with bottom fan-in 2 were still just quadratic. In a very nice recent result, Kayal and Saha~\cite{KayalSaha14} showed an exponential lower bound for depth 3 circuits over fields of characteristic 0, whose bottom fan-in is at most $N^{\mu}$, where $N$ is the number of variables and $0 \leq \mu < 1$ is an arbitrary constant. More precisely, they prove the following.
\begin{thm}[Kayal-Saha~\cite{KayalSaha14}]~\label{thm: KSaha}
Let ${\mathbb{F}}$ be a field of characteristic zero. Then, for every constant $0 \leq \mu < 1$ there is a family $\{P_N\}$ of degree $n$ polynomials in $N = n^{O_{\mu}(1)}$ variables over ${\mathbb{F}}$ in $\VNP$ such that any depth three circuit of bottom fan-in at most $N^{\mu}$ computing $P_N$ has top fan-in at least $N^{\Omega_{\mu}{(\sqrt{n})}}$.
\end{thm}
\paragraph{Our Model:} In this work, we consider the model of sums of products of polynomials in few variables. More formally, we consider representations of polynomials $P$ (degree $n$ in $N = n^{O(1)}$ variables) in the form
\begin{equation}~\label{def:model intro}
P = \sum_{i=1}^T \prod_{j = 1}^d Q_{ij}
\end{equation}
where each $Q_{ij}$ is an arbitrary polynomial (of arbitrarily high degree) in at most $s$ variables. We call this the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits.
Observe that the model is more general than that considered in~\cite{KayalSaha14}. The model in ~\cite{KayalSaha14} corresponds to sums of products of {\it linear forms} in few variables. In our case, the $Q_{ij}$ no longer have to be linear forms, but can be general polynomials of arbitrarily high degree. Prior to this work, even for the case when $s = 2$, there were no nontrivial lower bounds known for this model.
$\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits for $s \geq 2$ can also be seen as a generalization of the model of sums of products of univariate polynomials (which corresponds to $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits with $s=1$), which has been very well studied in the arithmetic circuit complexity literature. Lower bounds for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits follow from works of Nisan~\cite{Nisan91} and Saxena~\cite{S07}. Over the last few years, there have been some very nice results giving quasipolynomial time blackbox identity testers for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits~\cite{ForbesS13, FS13, ASS13}.
$\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits can also be seen as a generalization of the widely studied model of diagonal circuits, since polynomials computable by diagonal circuits can be represented as a $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuit without much blow up in the size of the representation~\cite{S07}.
Although $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits seem fairly well understood from the point of view of lower bounds and derandomization of polynomial identity testing, if one considers the model of sums of products of bivariate polynomials ($\Sigma\Pi\left(\Sigma\Pi\right)^{[2]}$ circuits), then our understanding changes completely. Although only seemingly a mild generalization of $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits, the known proof techniques
for lower bounds for $\Sigma\Pi\left(\Sigma\Pi\right)^{[1]}$ circuits (which were proved using {\it evaluation dimension} techniques of~\cite{Nisan91, Raz06}) seem to completely break down in this setting. Thus, studying this model seems like an interesting next step towards understanding non-homogeneous small depth algebraic computation.
As far as we are aware there are also (not surprisingly) no nontrivial PIT results for the model.
We are now ready to state our results.
\subsection{Our results}
\paragraph{Lower bounds : } We show an exponential lower bound for the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$, when $s$ is at most $N^{\mu}$ for any constant $0 \leq \mu < 1$ ($N$ is the number of variables). More precisely, we show the following.
\begin{thm}~\label{thm:mainthm intro}
Let ${\mathbb{F}}$ be a field of characteristic zero and $\mu$ be any constant such that $0 \leq \mu < 1$. There exists a family $\{P_N\}$ of polynomials over ${\mathbb{F}}$ in $\VNP$, where $P_N$ is of degree $n$ in $N = n^{O_{\mu}(1)}$ variables, such that for any representation of $P_N$ of the form
$$P_N = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij}$$
where each $Q_{ij}$ is a polynomial in at most $s = N^{\mu}$ variables, it must be true that $$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$
\end{thm}
Given the depth reduction results of~\cite{GKKS13} and the observation mentioned earlier from~\cite{KayalSaha14}, it is known that any asymptotic improvement in the exponent of the lower bound (even for $s = O(\sqrt{n})$) would imply $\VNP$ is different from $\VP$.
As discussed in the introduction, even though this model seems a natural generalization of the model of sums of products of univariate polynomials, our lower bound technique is very different from those used in proving lower bounds for sums of products of univariates. Our lower bound proof is based on ideas developed in the course of investigating homogeneous depth four arithmetic circuits~\cite{KLSS14, KS-full}.
\paragraph{Blackbox PIT : } We also consider the problem of PIT for the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. For general sums of products of even bivariate polynomials, this question seems quite difficult, and as of now we are not even able to obtain subexponential time PIT. However, as a consequence of our lower bounds and by suitably adapting hardness randomness tradeoffs for arithmetic circuits developed in~\cite{KI04} and~\cite{DSY09}, we are able to obtain PIT results in the setting where the top fan-in of the circuit is bounded, and when we have the promise that the circuit
computes a polynomial of low individual degree.
Our understanding of blackbox PIT for depth four circuits is very limited, and the results known are in very restricted settings. Saraf and Volkovich~\cite{SarafV11} gave blackbox PIT algorithms for multilinear depth 4 circuits with bounded top fan-in. To the best of our knowledge, the idea in~\cite{SarafV11} does not extend to the case of non-multilinear depth 4 circuits, even when the individual degree of each of the variables is at most $2$. Recently, Oliveira et al.~\cite{OSV14} gave a subexponential time blackbox PIT for all depth four multilinear circuits\footnote{The running time increases with the size of the circuit, and in particular, it is subexponential time for polynomial sized depth four multilinear circuits.}. In the non-multilinear setting, Agrawal et al.~\cite{ASSS12} gave PIT algorithms for constant depth formulas in which the number of {\it occurrences} of each variable is bounded. Without going into the technical details, we remark that the notion of {\it bounded occur} is a generalization of the well studied notion of bounded reads. The most closely related results to those in this paper that we are aware of are the recent papers of Gupta~\cite{Gupta14} and Mukhopadhyay~\cite{Mukhopadhyay15}, which give blackbox PIT results for sums of products of low degree polynomials, where the top sum fan-in is bounded and the circuits satisfy certain algebraic geometric restrictions.
So, the question of getting PIT results for general depth four circuits (even with bounded top and bottom fan-in) remains wide open. For instance, we still do not know any nontrivial PIT results for a sum of constantly many products of degree 2 polynomials. Though we still do not know how to deal with this question, when we replace the polynomials of low degree with polynomials in few variables (but of arbitrarily large degree), we are able to obtain quasipolynomial PIT results. There is one added caveat, however: the final polynomial computed needs to be of low individual degree (as seems necessary for PIT results obtained from the known hardness-randomness tradeoffs for bounded depth circuits~\cite{DSY09}). We now formally state the theorem.
\begin{thm}~\label{thm:mainthm2 intro}
Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$
such that
\begin{enumerate}
\item $T < \log^c N$
\item $k < \log ^c N$
\item $d < N^c$
\item each $Q_{ij}$ depends on at most $N^{\mu}$ variables
\end{enumerate}
Then, there exists a constant $\epsilon < 1$ dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$ which can be constructed in time $\exp(N^{\epsilon})$.
\end{thm}
Moreover, from our proof, it also follows that if each of the polynomials $Q_{ij}$ depends only on $\log^{O(1)} N$ variables, then both the size of the hitting set and the time to construct it are upper bounded by a quasipolynomial function in $N$.
\paragraph{Organisation of the paper:} We provide an overview of the proofs in Section~\ref{sec:overview}. We describe some definitions and preliminaries in Section~\ref{sec:prelims}. We present the proof of the lower bound in Section~\ref{sec:lower bound}. We describe the application to blackbox PIT in Section~\ref{sec:pit} and conclude with some open problems in Section~\ref{sec:open ques}.
\section{Proof overview}~\label{sec:overview}
In this section, we provide an overview of the main ideas in proofs of Theorem~\ref{thm:mainthm intro} and Theorem~\ref{thm:mainthm2 intro}.
\subsection{Overview of proof of Theorem~\ref{thm:mainthm intro}}~\label{sec:overview lower bounds}
We restate Theorem~\ref{thm:mainthm intro} for the sake of clarity. \\
{\bf Theorem~\ref{thm:mainthm intro}}~\label{thm:mainthm intro2}
{\it Let ${\mathbb{F}}$ be a field of characteristic zero and $\mu$ be any constant such that $0 \leq \mu < 1$. There exists a family $\{P_N\}$ of polynomials over ${\mathbb{F}}$ in $\VNP$, where $P_N$ is of degree $n$ in $N = n^{O_{\mu}(1)}$ variables, such that for any representation of $P_N$ of the form
$$P_N = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij}$$
where each $Q_{ij}$ is a polynomial in at most $N^{\mu}$ variables, it must be true that $$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$
}
The key difference between proving the above lower bound and the lower bounds for homogeneous depth four circuits is that the formal degree of the circuit in the above case could be much larger than the degree of the polynomial, which is $n$. In fact, even the fan-in $d$ of the product gates at level two could be much larger than $n$. Therefore, a straightforward application of homogeneous depth four circuit lower bounds does not seem to work. Our proof is in two steps and at a high level follows the strategy of the lower bound for non-homogeneous depth three circuits with bounded bottom fan-in by Kayal and Saha~\cite{KayalSaha14}, with some key differences.
\begin{itemize}
\item In the first step, we obtain another representation of $P_N$, as $$P_N = \sum_{i = 1}^{Td2^{O(\sqrt{n})}}\prod_{j = 1}^{n} Q_{ij}'$$
where every monomial in each of the $Q_{ij}'$ has {\it support}\footnote{A monomial is said to have support $s$ if it depends on at most $s$ distinct variables.} at most $s$, although each $Q_{ij}'$ could now depend on all the variables. The key property that we have gained from this transformation is that the fan-in of the product gates at level two is now bounded by $n$, which is the degree of $P_N$. However, we have no bound on the degree of the $Q_{ij}'$. Moreover, we have blown up the top fan-in a bit, but we will be able to tolerate this loss if $s$ is small.
\item In the second step, the strategy can be seen in two stages. If $\mu$ were very small, say $0.001$, then we could take advantage of the fact that in the representation obtained in the first step above, the product fan-in is at most $n$ and the support of every monomial in each of the $Q_{ij}'$ is small, to prove an upper bound on the dimension of the space of projected shifted partial derivatives of the above representation. Comparing this dimension with that of our hard polynomial gives us our lower bound. For larger values of $\mu$, we use random restrictions to ensure that all the monomials of {\it large support} in the $Q_{ij}'$ are set to zero. At the end of such a procedure, we are back to the low support case. This step of the proof closely follows the proof of homogeneous depth four arithmetic circuit lower bounds in~\cite{KLSS14, KS-full}, although in the present case the formal degree of the circuit could be as large as $n^2$, which is much larger than the degree of the polynomial $P_N$. For such large formal degrees, in general we do not even know lower bounds for non-homogeneous depth three circuits.
\end{itemize}
We would like to point out that the first step of the proof above is similar to the homogenization step in the proof of lower bounds for general depth three circuits with bounded bottom fan-in by Kayal and Saha~\cite{KayalSaha14}. The key difference is that while the circuit they obtain at the end of this step is a strictly homogeneous circuit of formal degree $n$, we are unable to get a similar structure. The complication stems from the fact that when the $Q_{ij}$ are not affine forms, they could contain monomials of varying degrees. In this case, it seems difficult to obtain a strict homogenization with a small blow up in size. We get around this deficiency by a more subtle analysis in the second step, where we show a lower bound for a circuit which has a formal degree much larger than the degree of the polynomial being computed, but has some added structure. This step critically uses the fact that the product fan-in at level two of these circuits is at most $n$, and that the support of every monomial in each of the $Q_{ij}'$ is small.
\subsection{Overview of proof of Theorem~\ref{thm:mainthm2 intro}}
We first restate Theorem~\ref{thm:mainthm2 intro}. \\
{\bf Theorem~\ref{thm:mainthm2 intro}}~\label{thm:mainthm2 intro 2}{\it
Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$
such that
\begin{enumerate}
\item $T < \log^c N$
\item $k < \log ^c N$
\item $d < N^c$
\item each $Q_{ij}$ depends on at most $N^{\mu}$ variables
\end{enumerate}
Then, there exists a constant $\epsilon < 1$ dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$ which can be constructed in time $\exp(N^{\epsilon})$.
}
\vspace{2mm}
The construction of the hitting set is based on the well known idea of using hard functions for derandomization. Our goal is to reduce the number of variables from $N$ to at most $N^{\delta}$ for some constant $\delta < 1$, while maintaining the zeroness/nonzeroness of the polynomial being tested~\cite{KI04, DSY09}. Once we have done this, we take a brute force hitting set of size $\text{(Degree + 1)}^{\text{Number of variables}}$ as given by Lemma~\ref{lem: comb nulls}.
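To make the final brute-force step concrete, the following Python sketch (an illustration only, not the construction used in the paper) represents a polynomial as a dictionary mapping exponent tuples to coefficients, and searches the grid $\{0, 1, \ldots, k\}^n$, which hits every nonzero polynomial of individual degree at most $k$ in $n$ variables:

```python
from itertools import product

def evaluate(poly, point):
    """Evaluate a polynomial given as {exponent tuple: coefficient} at a point."""
    total = 0
    for exps, coeff in poly.items():
        term = coeff
        for x, e in zip(point, exps):
            term *= x ** e
        total += term
    return total

def grid_hitting_set(num_vars, max_deg):
    """The grid {0,...,max_deg}^num_vars hits every nonzero polynomial of
    individual degree at most max_deg; its size is (max_deg+1)^num_vars."""
    return list(product(range(max_deg + 1), repeat=num_vars))

def find_nonzero_point(poly, num_vars, max_deg):
    """Return a grid point where poly is nonzero, or None if poly == 0."""
    for p in grid_hitting_set(num_vars, max_deg):
        if evaluate(poly, p) != 0:
            return p
    return None
```

The hitting-set size $(\text{degree}+1)^{\text{number of variables}}$ is exactly the size of this grid.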
To reduce the number of variables, we use the framework introduced by Kabanets and Impagliazzo~\cite{KI04}.
The key technical step of the proof is to show that for a non-zero polynomial $P$ as defined above, if there exists a polynomial $f \in {\mathbb{F}}[X_1, X_2, \ldots, X_{i-1}, X_{i+1}, X_{i+2}, \ldots, X_{N}]$ such that $X_i-f$ divides $P$, then $f$ can also be expressed as a sum of products of polynomials in few variables of reasonably small size. This step crucially uses a statement about the complexity of roots of polynomials computed by low depth circuits from~\cite{DSY09}. Therefore, if $f$ is a polynomial which does not have a small representation as a sum of products of polynomials in {\it few} variables, then $X_i - f$ does not divide $P$. This observation guarantees that the construction of hitting sets from hard polynomials given by~\cite{KI04} works for this class of circuits.
\section{Notation and Preliminaries}~\label{sec:prelims}
We now introduce some notation and preliminary notions that we use in the rest of the paper.
\paragraph{Computational model : } In this work, we consider the model of sums of products of polynomials in few variables. More formally, we consider representations of polynomials $P$ (degree $n$ in $N = n^{O(1)}$ variables) in the form
\begin{equation}~\label{def:model}
P = \sum_{i=1}^T \alpha_i\cdot \prod_{j = 1}^d Q_{ij}
\end{equation}
where each $Q_{ij}$ is an arbitrary polynomial (of arbitrarily high degree) in at most $s$ variables and each $\alpha_i$ is a field constant. We call this the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. We use the quantity $Td$ as a measure of the size of a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit. Without loss of generality, we can assume that the degree zero term in each of the $Q_{ij}$ is either zero or one. If it is a non-zero constant other than $1$, we can extract it out and absorb it in $\alpha_i$. For each of the product gates, the fan-in could be different, but we can assume without loss of generality that all the product fan-ins are equal to $d$. Observe that $d$ could be much larger than the degree of the polynomial $P$. Throughout this paper, we will be working over a field of characteristic zero.
\paragraph{Some basic notations : }
\begin{enumerate}
\item For an integer $i$, we denote the set $\{1, 2, \ldots, i\}$ by $[i]$.
\item By $\overline{X}$, we mean the set of variables $\{X_1, X_2, \ldots, X_N\}$.
\item For a polynomial $P$ and a positive integer $i$, we represent by $\mathsf{Hom}^i[P]$, the homogeneous component of $P$ of degree equal to $i$. By $\mathsf{Hom}^{\leq i}[P]$ and $\mathsf{Hom}^{\geq i}[P]$, we represent the component of $P$ of degree at most $i$ and at least $i$ respectively.
\item The support of a monomial $\alpha$ is the set of variables which appear with a non-zero exponent in $\alpha$. We denote the size of the support of $\alpha$ by $\text{Supp}(\alpha)$.
\item Throughout the paper, we say that a function $f(N)$ is subexponential in $N$ if there exists a positive real number $\epsilon$, such that $\epsilon < 1$ and for all $N$ sufficiently large, $f(N) < \exp(N^{\epsilon})$.
\item We say that a function $f(N)$ is quasipolynomial in $N$ if there exists a positive absolute constant $c$, such that for all $N$ sufficiently large, $f(N) < \exp(\log^c N)$.
\item In this paper, we only consider layered arithmetic circuits and we will be counting levels from top to bottom, starting with the output gates being at level one.
\item By a $\Sigma\Pi\Sigma\wedge$ circuit, we refer to a depth four circuit with all the product gates at the lowest level being replaced by powering ($\wedge$) gates. Similarly, by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit, we mean a depth six circuit all of whose product gates at level four from the top are powering gates.
\end{enumerate}
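If one represents polynomials concretely as dictionaries mapping exponent tuples to coefficients (a representation chosen purely for illustration, not part of the model), the operators $\mathsf{Hom}^i[\cdot]$ and $\text{Supp}(\cdot)$ defined above are one-liners:

```python
def hom(poly, i):
    """Hom^i[P]: the homogeneous component of degree exactly i of P,
    where P is given as {exponent tuple: coefficient}."""
    return {e: c for e, c in poly.items() if sum(e) == i}

def supp(monomial_exps):
    """Supp(alpha): the number of variables appearing with nonzero exponent."""
    return sum(1 for e in monomial_exps if e > 0)
```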
\paragraph{Hitting set : } Let ${\cal C}$ be a set of polynomials in $N$ variables over a field ${\mathbb{F}}$. Then, a set ${\cal H} \subseteq {\mathbb{F}}^{N}$ is said to be a {\it hitting set} for the class ${\cal C}$, if for every polynomial $P \in {\cal C}$ such that $P$ is not the identically zero polynomial, there exists a $p \in {\cal H}$ such that $P(p) \neq 0$.
\paragraph{Elementary symmetric polynomials : } For variables $\overline{X} = \{X_1, X_2, \ldots, X_N\}$ and any integer $0 \leq l \leq N$, the elementary symmetric polynomial of degree $l$ on variables $\overline{X}$ is defined as $$\mathsf{ESYM}_l(\overline{X}) = \sum_{S \subseteq [N], |S| = l} \prod_{j \in S} X_j$$
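The evaluation of $\mathsf{ESYM}_l$ at given values can be computed by the standard dynamic program that reads off the coefficient of $t^l$ in $\prod_{i} (1 + X_i t)$; a short illustrative Python sketch:

```python
def esym(values, l):
    """ESYM_l evaluated at the given values: the coefficient of t^l in
    prod_i (1 + values[i] * t), computed by dynamic programming."""
    coeffs = [1] + [0] * l          # coeffs[j] = ESYM_j of the prefix seen so far
    for x in values:
        for j in range(l, 0, -1):   # descend so each x is used at most once
            coeffs[j] += x * coeffs[j - 1]
    return coeffs[l]
```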
\paragraph{Projected shifted partial derivatives :}
A key idea behind the recent progress on lower bounds is the notion of {\it shifted partial derivatives} introduced in~\cite{Kayal12}. In this paper, we use a variant of this measure, called {\it projected shifted partial derivatives}, introduced in~\cite{KLSS14} and subsequently used in~\cite{KS-full}. Although we never explicitly do any calculations with the measure in this paper, we provide a brief introduction to it below since our bounds are based on it.
For a polynomial $P$ and a monomial $\gamma$, ${\partial_{\gamma} (P)}$ is the partial derivative of $P$ with respect to $\gamma$. For every polynomial $P$ and a set of monomials ${\cal M}$, $\partial_{\cal M} (P)$ is the set of partial derivatives of $P$ with respect to monomials in ${\cal M}$. The space of $({\cal M}, m)\mhyphen$projected shifted partial derivatives of a polynomial $P$ is defined below.
\begin{define}[$({\cal M}, m)\mhyphen$projected shifted partial derivatives]\label{def:shiftedderivative}
For an $N$ variate polynomial $P \in {\field{F}}[X_1, X_2, \ldots, X_{N}]$, set of monomials ${\cal M}$ and a positive integer $m\geq 0$, the space of $({\cal M}, m)$-projected shifted partial derivatives of $P$ is defined as
\begin{align}
\langle \partial_{\cal M} (P)\rangle_{m} \stackrel{def}{=} \field{F}\mhyphen span\{\sigma(\prod_{i\in S}{X_i}\cdot g) : g \in \partial_{\cal M} (P), S\subseteq [N], |S| = m\}
\end{align}
\end{define}
Here, for a polynomial $P$, $\sigma(P)$ denotes the projection of $P$ onto the multilinear monomials in its support.
The measure of complexity of a polynomial that we use in this paper, is the dimension of projected shifted partial derivative space of $P$ with respect to some set of monomials ${\cal M}$ and a parameter $m$. Formally,
$$\Phi_{{\cal M}, m} (P) = \mathsf{Dim}( \langle \partial_{\cal M} (P)\rangle_{m})$$
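As an illustration of the measure (not used in any proof in the paper), the following Python sketch computes $\Phi_{{\cal M}, m}(P)$ exactly for small instances. Polynomials are represented as dictionaries from exponent tuples to coefficients (a choice made only for this sketch), and the dimension is computed as a matrix rank over the rationals:

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def derivative(poly, gamma):
    """Partial derivative of P (dict: exponent tuple -> coefficient)
    with respect to the monomial with exponent vector gamma."""
    out = {}
    for exps, c in poly.items():
        if all(e >= g for e, g in zip(exps, gamma)):
            # falling factorial e*(e-1)*...*(e-g+1) per variable
            fall = prod(prod(range(e - g + 1, e + 1)) for e, g in zip(exps, gamma))
            new = tuple(e - g for e, g in zip(exps, gamma))
            out[new] = out.get(new, 0) + c * fall
    return out

def shift_and_project(poly, subset, n):
    """Multiply by the multilinear monomial prod_{i in subset} X_i, then apply
    sigma: keep only the multilinear monomials of the result."""
    out = {}
    for exps, c in poly.items():
        e = tuple(exps[i] + (1 if i in subset else 0) for i in range(n))
        if all(x <= 1 for x in e):
            out[e] = out.get(e, 0) + c
    return out

def rank(rows):
    """Rank of a matrix of Fractions, by Gaussian elimination."""
    rows = [r[:] for r in rows if any(r)]
    r = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def psd_dimension(poly, monomials, m, n):
    """Phi_{M,m}(P): dimension of the (M, m)-projected shifted partial
    derivative space of an n-variate polynomial P."""
    vectors = []
    for gamma in monomials:
        d = derivative(poly, gamma)
        for subset in combinations(range(n), m):
            vectors.append(shift_and_project(d, set(subset), n))
    basis = sorted({e for v in vectors for e in v})
    return rank([[Fraction(v.get(e, 0)) for e in basis] for v in vectors])
```

For instance, for $P = X_1X_2 + X_3X_4$ with ${\cal M} = \{X_1\}$ and $m = 1$, the surviving shifted projections of $\partial_{X_1} P = X_2$ span a space of dimension $3$.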
From the definitions, it is straightforward to see that the measure is subadditive.
\begin{lem}[Sub-additivity]~\label{subadditive}
Let $P$ and $Q$ be any two multivariate polynomials in ${\mathbb{F}}[X_1, X_2, \ldots, X_{N}]$. Let ${\cal M}$ be any set of monomials and $m$ be any positive integer. Then, for all scalars $\alpha$ and $\beta$
$$\Phi_{{\cal M}, m} (\alpha\cdot P + \beta\cdot Q) \leq \Phi_{{\cal M}, m} (P) + \Phi_{{\cal M}, m} (Q)$$
\end{lem}
\paragraph{Approximations : }We will refer to the following lemma to approximate expressions during our calculations.
\begin{lem}[\cite{GKKS12}]~\label{lem:approx}
Let $a(n), f(n), g(n) : {\mathbb{Z}}_{>0}\rightarrow {\mathbb{Z}}_{>0}$ be integer valued functions such that $(f+g) = o(a)$. Then,
$$\log \frac{(a+f)!}{(a-g)!} = (f+g)\log a \pm O\left( \frac{(f+g)^2}{a}\right)$$
\end{lem}
In the proofs in this paper, we use Lemma~\ref{lem:approx} only in situations where $(f+g)^2$ is $O(a)$. In this case, the error term is bounded by an absolute constant. So, up to constant factors, $\frac{(a+f)!}{(a-g)!} = a^{(f+g)}$. We use the symbol $\approx$ to indicate equality up to constant factors.
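As a quick numerical sanity check of Lemma~\ref{lem:approx} (illustrative only; the parameter values below are arbitrary choices satisfying $(f+g)^2 = O(a)$), one can compare $\log\frac{(a+f)!}{(a-g)!}$, computed exactly via the log-gamma function, against $(f+g)\log a$:

```python
from math import lgamma, log

def log_factorial_ratio(a, f, g):
    """log((a+f)! / (a-g)!), computed via the log-gamma function:
    log n! = lgamma(n + 1)."""
    return lgamma(a + f + 1) - lgamma(a - g + 1)

# With (f+g)^2 = O(a), the lemma says the ratio is (f+g)*log(a) up to an
# additive constant; here (f+g)^2 / a = 0.04, so the error should be tiny.
a, f, g = 10**6, 100, 100
approx_error = abs(log_factorial_ratio(a, f, g) - (f + g) * log(a))
```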
\paragraph{Complexity of coefficients and homogeneous components :} We now summarise two simple lemmas which are useful for our proof. The first lemma says that given a circuit $C$ for a polynomial $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}, Y]$ of degree at most $d$ in $Y$, for every $0 \leq i \leq d$, the coefficient of $Y^i$ in $P$ (when viewing $P$ as a polynomial in ${\mathbb{F}}[X_1, X_2, \ldots, X_{N}][Y]$) can also be computed by a circuit of size not much larger than the size of $C$.
\begin{lem}~\label{lem:extracting coefficients}
Let $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}, Y]$ be a polynomial of degree at most $d$ in $Y$ over a field ${\mathbb{F}}$ of characteristic zero, such that $P$ is computable by an arithmetic circuit $C$ of size $|C|$.
Let $$P = \sum_{i = 0}^d Q_i(X_1, X_2, \ldots, X_{N})\cdot Y^i$$
for polynomials $Q_i(X_1, X_2, \ldots, X_{N}) \in {\mathbb{F}}[X_1, X_2, \ldots, X_{N}]$.
Then, for every $i$ such that $0 \leq i \leq d$, the polynomial $Q_i$ can be computed by an arithmetic circuit $C'$ of size at most $|C|\cdot (d+1)$. Moreover, if the output gate of $C$ is a $+$ gate, then the depth of $C'$ is equal to the depth of $C$. Else, the depth of $C'$ is at most $1$ more than the depth of $C$.
\end{lem}
\begin{proof}
We can view $P$ as a univariate polynomial of degree at most $d$ in $Y$ with the coefficients coming from ${\mathbb{F}}(\overline{X})$. From the classical Lagrange interpolation, we know that the coefficient of $Y^i$ in $P$ can be written as an ${\mathbb{F}}(\overline{X})$ linear combination of the evaluations of $P$ at $d+1$ distinct values of $Y$ taken from ${\mathbb{F}}(\overline{X})$. In fact, more strongly, we can evaluate $P$ at $d+1$ values of $Y$ all chosen from ${\mathbb{F}}$ itself, in which case the constants in the linear combination are also from ${\mathbb{F}}$.
So, $Q_i$ can be computed by a circuit obtained from taking $d+1$ circuits each obtained from $P$ by substituting $Y$ by a scalar in ${\mathbb{F}}$, and taking their linear combination. Let this circuit be $C'$. Clearly the size of $C'$ is at most $(d+1)$ times the size of $C$. If the output gate of $C$ was an addition gate, then the outer addition for the linear combination can be absorbed into it, and the depth remains the same. Else, the depth increases by one.
\end{proof}
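The interpolation in the proof above can be demonstrated numerically. In the following illustrative Python sketch, the $\overline{X}$-part is specialised to scalars, so recovering the coefficient of $Y^i$ from evaluations at $Y = 0, 1, \ldots, d$ is ordinary Lagrange interpolation (with exact rational arithmetic):

```python
from fractions import Fraction

def coeff_from_evals(evals, points, i):
    """Coefficient of Y^i of a univariate polynomial of degree <= d,
    where d = len(points) - 1, recovered from its values evals[j] at the
    interpolation points points[j] by expanding the Lagrange basis."""
    d = len(points) - 1
    coeffs = [Fraction(0)] * (d + 1)
    for j, yj in enumerate(points):
        num = [Fraction(1)]                # running product prod_{k != j} (Y - y_k)
        denom = Fraction(1)
        for k, yk in enumerate(points):
            if k == j:
                continue
            shifted = [Fraction(0)] + num  # num * Y
            scaled = [-yk * c for c in num] + [Fraction(0)]
            num = [a + b for a, b in zip(shifted, scaled)]
            denom *= yj - yk
        for t, c in enumerate(num):
            coeffs[t] += Fraction(evals[j]) * c / denom
    return coeffs[i]
```

In the lemma the same linear combination is applied with the evaluations being polynomials in ${\mathbb{F}}(\overline{X})$ rather than scalars.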
The second lemma stated below essentially says that the circuit complexity of homogeneous components of a polynomial is not much larger than the circuit complexity of the polynomial itself.
\begin{lem}~\label{lem:interpolation}
Let $P$ be a polynomial of degree at most $d$ in $N$ variables over a field ${\mathbb{F}}$ of characteristic zero, such that $P$ is computable by an arithmetic circuit $C$ of size $|C|$. Then, for every $i$ such that $0 \leq i \leq d$, the homogeneous component of degree $i$ of $P$ can be computed by an arithmetic circuit $C'$ of size at most $|C|\cdot (d+1)$. Moreover, if the output gate of $C$ is a $+$ gate, then the depth of $C'$ is equal to the depth of $C$. Else, the depth of $C'$ is at most $1$ more than the depth of $C$.
\end{lem}
\begin{proof}
Let $P'(t)$ be the polynomial obtained from $P$ by replacing every variable $X$ in $P$ by $X\cdot t$ for a new variable $t$. We can view $P'$ to be a univariate polynomial of degree at most $d$ in $t$ with the coefficients coming from ${\mathbb{F}}(\overline{X})$. Observe that for every $i$ such that $0 \leq i \leq d$, the homogeneous component of $P$ of degree equal to $i$ is equal to the coefficient of $t^i$ in $P'$. The proof now follows from Lemma~\ref{lem:extracting coefficients}.
\end{proof}
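The substitution trick in the proof above can likewise be demonstrated with only black-box access to $P$. In this illustrative sketch, $\mathsf{Hom}^i[P]$ is evaluated at a point $x$ by interpolating the univariate map $t \mapsto P(t\cdot x)$ at $t = 0, 1, \ldots, d$ and reading off the coefficient of $t^i$:

```python
from fractions import Fraction

def hom_component_at(P, x, i, d):
    """Evaluate Hom^i[P] at the point x, given only black-box access to a
    polynomial P of total degree <= d."""
    points = [Fraction(t) for t in range(d + 1)]
    evals = [P([t * xi for xi in x]) for t in points]
    # Lagrange interpolation, expanding each basis polynomial coefficient-wise.
    coeffs = [Fraction(0)] * (d + 1)
    for j, tj in enumerate(points):
        num, denom = [Fraction(1)], Fraction(1)
        for k, tk in enumerate(points):
            if k == j:
                continue
            num = [a + b for a, b in zip([Fraction(0)] + num,
                                         [-tk * c for c in num] + [Fraction(0)])]
            denom *= tj - tk
        for t, c in enumerate(num):
            coeffs[t] += evals[j] * c / denom
    return coeffs[i]
```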
\section{Proof of the lower bound}~\label{sec:lower bound}
In this section, we give the proof of Theorem~\ref{thm:mainthm intro}. We prove the lower bound for a variant of the well known family of Nisan-Wigderson polynomials defined by Kayal and Saha~\cite{KayalSaha14}.
\subsection{Target polynomials for the lower bound}
We now define the family of polynomials of degree $n$ in $N$ variables for which we prove the lower bounds. The family is a variant of the Nisan-Wigderson polynomials, which were introduced by Kayal et al.\ in~\cite{KSS13} in the context of lower bounds for homogeneous depth four circuits. The particular variant we use in this paper is due to Kayal and Saha~\cite{KayalSaha14}.
The tradeoff between the number of variables $N$ and the degree $n$ is parameterized by $\mu$, where $0 \leq \mu < 1$.
First we need some parameters, which we define below.
\begin{enumerate}
\item $\delta = (1-\mu)/2$ is a positive real number such that $\mu + \delta < 1$.
\item $\gamma = \frac{2(\mu + \delta) + 1}{1-\mu-\delta} $.
\item $N$ is chosen such that $N/n$ is a prime number between $n^{1 + \gamma}$ and $2n^{1+\gamma}$. Such a prime number always exists from the Bertrand-Chebychev theorem. Without loss of generality, we pick the smallest one.
\item $\rho = (\mu + \delta)\frac{\log N}{\log n}$
\item $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot n$ , where $D-1$ is the degree of the underlying univariate polynomials in the definition of $NW_{n,{\mu}}$.
\end{enumerate}
Let $\psi$ be the prime number equalling $N/n$. We are now ready to restate the definition of $NW_{n,{\mu}}$ from~\cite{KayalSaha14}.
\begin{define}[Nisan-Wigderson Polynomials~\cite{KayalSaha14}]~\label{defn:NW} Let $\mu$ be a real number such that $0 \leq \mu < 1$. For a given $\mu$ and $n$, let $N$, $D$, $\psi$ be as defined above. For the set of $N$ variables $\{X_{ij} : i\in [n], j \in [\psi]\} $, we define the degree $n$ homogeneous polynomial $NW_{n,{\mu}}$ as
$$NW_{n,{\mu}} = \sum_{\substack{f(z) \in {\mathbb{F}}_{\psi}[z] \\
\deg(f) \leq D-1}} \prod_{i \in [n]} X_{if(i)}$$
\end{define}
From the definition, we can observe the following properties of $NW_{n,{\mu}}$.
\begin{enumerate}
\item The number of monomials in $NW_{n,{\mu}}$ is exactly ${\psi}^{D} = n^{O(D)}$.
\item Each of the monomials in $NW_{n,{\mu}}$ is multilinear.
\item Each monomial corresponds to evaluations of a univariate polynomial of degree at most $D-1$ at all points of ${\mathbb{F}}_{\psi}$. Thus, any two distinct monomials agree in at most $D-1$ variables in their support.
\end{enumerate}
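The three properties above are easy to verify programmatically on a toy instance. The following illustrative sketch enumerates the monomials of $NW_{n,{\mu}}$ for small parameters; rows of the variable matrix are indexed from $0$ here, a choice made only for the code:

```python
from itertools import product

def nw_monomials(psi, n, D):
    """The monomials of the NW polynomial over F_psi (psi prime, n <= psi):
    one monomial per univariate f of degree <= D-1, namely prod_i X_{i, f(i)},
    encoded as the set of cells {(i, f(i))} of the n x psi variable matrix."""
    mons = []
    for cs in product(range(psi), repeat=D):           # f(z) = sum_t cs[t] * z^t
        mons.append(frozenset(
            (i, sum(c * pow(i, t, psi) for t, c in enumerate(cs)) % psi)
            for i in range(n)))
    return mons
```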
We will also need the following lemma in our proof.
\begin{lem}~\label{lem: NW eval}
Let $\mu$ be a non-negative real number less than $1$. Given $q \in {\mathbb{F}}^N$, $\mu$, $n$, we can evaluate the polynomial $NW_{n,{\mu}}$ at $q$ in time $N^{O(n)}$.
\end{lem}
\begin{proof}
Given $n$ and $\mu$, we first find $D$, $\psi$ as given by the choice of parameters.
Once we have $D$, we iterate through every monomial $\alpha$ of degree $n$ in the $\overline{X}$ variables which is supported on all the rows of the variable matrix and check if it is in the polynomial $NW_{n,{\mu}}$ by trying to find a univariate polynomial $f(z) \in {\mathbb{F}}_{\psi}[z]$ such that degree of $f$ is at most $D-1$ and $\prod_{i \in [n]} X_{if(i)} = \alpha$. The interpolation takes only $\text{Poly}(n)$ time, and the total number of monomials to try is at most $N^n$. So, we get the lemma.
\end{proof}
We now proceed with the proof as outlined in Section~\ref{sec:overview lower bounds}.
\subsection{Reducing the product fan-in at level two}
Let $P$ be a homogeneous polynomial in $N$ variables of degree $n$ which has a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit of top fan-in $T$ and product fan-in $d$ at the second level. In other words, there exist polynomials $\{Q_{ij} : i \in [T], j \in [d]\}$ in at most $s$ variables each, such that
\begin{equation}~\label{def:model2}
P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}
\end{equation}
Recall that without loss of generality, we can assume that the constant term in each of the $Q_{ij}$ is either $0$ or $1$. We have the following lemma.
\begin{lem}~\label{lem:homog}
Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a homogeneous polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ as defined above. For each $i$, $1\leq i \leq T$ define the set $$S_i = \{j : 1 \leq j \leq d \text{ and } \mathsf{Hom}^{0}[Q_{ij}] = 1\}$$ Then,
\begin{equation}
P = \sum_{i = 1}^T \alpha_i \cdot \mathsf{Hom}^n\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right]
\end{equation}
\end{lem}
\begin{proof}
To prove the lemma, we will try to extract out the homogeneous part of degree $n$ of each product gate $\prod_{j = 1}^d Q_{ij}$. Together with the fact that the polynomial $P$ is homogeneous of degree $n$, we get the lemma. Every $Q_{ij}$ with a non-zero constant term can be written as $\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1$, since the constant term in each $Q_{ij}$ is either $0$ or $1$. Now,
\begin{equation}~\label{eqn:1}
\prod_{j = 1}^d Q_{ij} = \prod_{j \notin S_i} Q_{ij} \times \prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1)
\end{equation}
Decomposing the product $\prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1)$ further, we have
\begin{equation}~\label{eqn:2}
\prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1) = \sum_{l = 0}^{|S_i|} \sum_{U \subseteq S_i : |U| = l} \prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]
\end{equation}
Now, observe that the degree of every monomial in $\prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is at least as large as the size of $U$. So, for every subset $U$ of size larger than $n$, $\prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is a polynomial of degree strictly larger than $n$. Also, for any fixed $l$, the expression $ \sum_{U \subseteq S_i : |U| = l} \prod_{j \in U} \mathsf{Hom}^{\geq 1}[Q_{ij}]$ is precisely the elementary symmetric polynomial of degree $l$ in the set of variables $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\}$. Therefore,
\begin{equation}~\label{eqn:3}
\mathsf{Hom}^{\leq n}\left[\prod_{j \in S_i} (\mathsf{Hom}^{\geq 1}[Q_{ij}] + 1)\right] = \mathsf{Hom}^{\leq n}\left[\sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i \})\right]
\end{equation}
Therefore,
\begin{equation}~\label{eqn:4}
\mathsf{Hom}^{n}\left[\prod_{j = 1}^d Q_{ij}\right] = \mathsf{Hom}^{n}\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i \})\right]
\end{equation}
Summing up for all $i$, we get the lemma.
\end{proof}
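Lemma~\ref{lem:homog} can be sanity-checked symbolically on a toy instance. The following illustrative sketch takes a single product gate with $Q_j = 1 + q_j$ (so every gate has constant term $1$ and lies in $S$) and verifies that truncating the elementary symmetric sum at $l = n$ preserves $\mathsf{Hom}^n$; the toy polynomials are arbitrary choices:

```python
from itertools import combinations

def pmul(p, q):
    """Product of two polynomials given as {exponent tuple: coefficient}."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def padd(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def hom(p, n):
    """Hom^n[P]: homogeneous component of degree exactly n."""
    return {e: c for e, c in p.items() if sum(e) == n}

ONE = {(0, 0): 1}

def esym_of_polys(polys, l):
    """ESYM_l with the polynomials themselves playing the role of variables."""
    out = {}
    for subset in combinations(range(len(polys)), l):
        term = ONE
        for j in subset:
            term = pmul(term, polys[j])
        out = padd(out, term)
    return out

# Toy instance in variables x, y with target degree n = 2:
# Hom^{>=1}[Q_j] for the three gates are x, y and x*y.
q = [{(1, 0): 1}, {(0, 1): 1}, {(1, 1): 1}]
full = ONE
for qi in q:
    full = pmul(full, padd(ONE, qi))          # prod_j (1 + q_j)
truncated = {}
for l in range(3):                            # sum_{l=0}^{n} ESYM_l(...)
    truncated = padd(truncated, esym_of_polys(q, l))
```

Note that the truncation discards $\mathsf{ESYM}_3 = x\cdot y\cdot xy$, whose degree $4$ exceeds $n = 2$, exactly as argued in the proof.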
The lemma above has, in some sense, helped us locate the monomials of degree $n$ in the circuit, which otherwise has a much higher formal degree. We now combine the above lemma with the well-known fact that the elementary symmetric polynomial of degree $l$ in $k$ variables can be computed by homogeneous $\Sigma\Pi\Sigma\wedge$ circuits of size at most $k2^{O(\sqrt{l})}$ to obtain a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ such that the fan-in of the product gates at level two is at most $n$. We use the following theorem (Theorem 5.2) of Shpilka and Wigderson~\cite{SW01}.
\begin{thm}[Shpilka-Wigderson~\cite{SW01}]~\label{thm : SW}
For every set of variables $\{Y_1, Y_2, \ldots, Y_m\}$ and a positive integer $l$, $\mathsf{ESYM}_l(\{Y_1, Y_2, \ldots, Y_m\})$ can be computed by a homogeneous $\Sigma\Pi\Sigma\wedge$ circuit of size $m2^{O(\sqrt{l})}$.
\end{thm}
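As a sanity check on what $\mathsf{ESYM}_l$ computes (the circuit-size statement itself is not reproduced here), the following sketch evaluates the elementary symmetric polynomial at a point via the standard one-pass dynamic program and compares it against the definition. All function names are our own.

```python
from itertools import combinations
from math import prod

def esym(values, l):
    """Elementary symmetric polynomial e_l evaluated at `values`,
    via the standard one-pass dynamic program over prefixes."""
    # e[j] holds e_j of the values processed so far
    e = [1] + [0] * l
    for v in values:
        # update higher degrees first so each value is used at most once
        for j in range(min(l, len(e) - 1), 0, -1):
            e[j] += v * e[j - 1]
    return e[l]

def esym_bruteforce(values, l):
    """Directly sum the products over all size-l subsets."""
    return sum(prod(c) for c in combinations(values, l))
```

The dynamic program mirrors the recursive structure that underlies small-circuit constructions for $\mathsf{ESYM}_l$, though the $2^{O(\sqrt{l})}$-size depth-reduced circuits of the theorem use a different decomposition.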
We now prove the following lemma.
\begin{lem}~\label{lem:depth6}
Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computable by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ of top fan-in $T$ with the degree of the product gates at level two being $d$. So, $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$
Then, $P$ can be represented as the homogeneous component of degree $n$ of a polynomial computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ with the following properties :
\begin{enumerate}
\item The inputs to the $\wedge$ gates are the polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$
\item The fan-in of the $\times$ gates at the second level from the top is at most $n$
\item The top fan-in of $C''$ is at most $Tdn2^{O(\sqrt{n})}$.
\end{enumerate}
\end{lem}
\begin{proof}
From Lemma~\ref{lem:homog}, we know that for the set $S_i$ defined as $$S_i = \{j : 1 \leq j \leq d \text{ and } \mathsf{Hom}^{0}[Q_{ij}] = 1\}$$ the polynomial $P$ can be written as
$$P = \sum_{i = 1}^T \alpha_i \cdot \mathsf{Hom}^n\left[\prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^n \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right]$$
which is the same as
$$P = \mathsf{Hom}^n\left[ \sum_{i = 1}^T \alpha_i \cdot \prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^n \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right] $$
Observe that the polynomial $\prod_{j \notin S_i} Q_{ij}$ has degree at least $d-|S_i|$. We remark that if $d-|S_i|$ is larger than $n$, then such product gates do not contribute anything to the degree $n$ component of the polynomial and hence can be discarded without loss of generality; so we assume $n-(d-|S_i|) \geq 0$. Therefore, we can confine the inner sum to the range $l = 0$ to $l = n-(d-|S_i|)$ and still preserve the degree $n$ part of the polynomial, which is what we are interested in.
From Theorem~\ref{thm : SW}, we know that for every $0 \leq l \leq n$, we can compute the polynomial $\mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})$ by a $\Sigma\Pi\Sigma\wedge$ circuit of top fan-in at most $d \times 2^{O(\sqrt{l})} $ which takes as input the polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1\leq j \leq d\}$. From the homogeneity of the circuits given by Theorem~\ref{thm : SW}, it follows that the product gates at level two of these circuits have fan-in at most the degree of the polynomial they compute, which is at most $n-(d-|S_i|)$. So, it follows that the polynomial
$$\tilde{P} = \left( \sum_{i = 1}^T \alpha_i \cdot \prod_{j \notin S_i} Q_{ij} \times \sum_{l = 0}^{n-(d-|S_i|)} \mathsf{ESYM}_l(\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : j \in S_i\})\right)$$
can be computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit, with top fan-in at most $Tdn\cdot 2^{O(\sqrt{n})}$, which satisfies the conditions in the lemma.
\iffalse
Finally, observe that the monomials of degree strictly larger than $n$ in the $Q_{ij}$ do not contribute to degree $n$ part of $\tilde{P}$. So, we can drop them, while still preserving the degree $n$ part of $\tilde{P}$. Therefore, the degree of $\tilde{P}$ can be upper bounded by $n^2d$. We can recover the degree $n$ part of $\tilde{P}$ by interpolation which blows up the top fan-in by a factor of at most $n^2d$. In this process, items 1 and 2 are preserved while the top fan-in becomes at most $Td^2n^32^{O(\sqrt{n})}$.
\fi
\end{proof}
Finally, given the circuit $C''$ constructed above, we can construct a circuit which computes the polynomial $P$ as given by Lemma~\ref{lem:interpolation}. For this, observe that the monomials of degree strictly larger than $n$ in any of the $Q_{ij}$ do not contribute to degree $n$ part of $\tilde{P}$. So, we can drop them, while still preserving the degree $n$ part of $\tilde{P}$. Therefore, the degree of $\tilde{P}$ can be upper bounded by $n^2d$. We can recover the degree $n$ part of $\tilde{P}$ by interpolation which blows up the top fan-in by a factor of at most $n^2d$.
In this process, the fan-in of the product gates at level two remains unchanged. Strictly speaking, the inputs to the powering gates $\wedge$ at level four may no longer be the polynomials $\mathsf{Hom}^{\geq 1}[Q_{ij}]$, since in the process of interpolation, we replaced every variable $X_i$ by $X_i\cdot t$ in $\tilde{P}$ and viewed the resulting polynomial $\tilde{P'}$ as a univariate polynomial in $t$ over the function field ${\mathbb{F}}(\overline{X})$. We then evaluated $\tilde{P'}$ at sufficiently many values of $t \in {\mathbb{F}}$ and took their ${\mathbb{F}}$-linear combination.
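The interpolation step just described can be made concrete: substitute $X \mapsto t\cdot X$, evaluate at sufficiently many values of $t$, and take the linear combination that extracts one coefficient of the resulting univariate polynomial in $t$. The following sketch (function names are ours, and exact rational arithmetic stands in for the field ${\mathbb{F}}$) recovers the degree-$n$ homogeneous component of a black-box polynomial in exactly this way.

```python
from fractions import Fraction

def hom_component(f, point, total_degree, n):
    """Evaluate the degree-n homogeneous component of f at `point`,
    given only black-box access to f, by substituting X -> t*X and
    reading off the coefficient of t^n of the univariate g(t) = f(t*X)."""
    ts = [Fraction(i) for i in range(1, total_degree + 2)]
    vals = [f(*[t * x for x in point]) for t in ts]
    coeff = Fraction(0)
    for i, ti in enumerate(ts):
        # Lagrange basis polynomial through ti; track its t^n coefficient.
        basis = [Fraction(1)]          # coefficients, low degree first
        denom = Fraction(1)
        for j, tj in enumerate(ts):
            if j == i:
                continue
            denom *= (ti - tj)
            # multiply basis by (t - tj)
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k] -= tj * c
                new[k + 1] += c
            basis = new
        coeff += vals[i] * basis[n] / denom
    return coeff
```

For example, for $f(x,y) = 1 + x + xy$ the degree-$2$ component is $xy$, and the sketch returns its value $6$ at the point $(2,3)$.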
So, each of the polynomials $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ gives rise to many other polynomials, one each for different values of $t$. We will call them the {\it siblings} of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. The key observation for our proof is that the set of variables in the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ is the same as the set of variables in $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. From the lemma and the discussion above, we have the following corollary.
\begin{cor}~\label{lem:depth6-cor}
Let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computable by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ of top fan-in $T$ with the degree of the product gates at level two being $d$. So, $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$
Then, $P$ can be computed by a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ with the following properties :
\begin{enumerate}
\item The inputs to the $\wedge$ gates are the siblings of polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$
\item The fan-in of the $\times$ gates at the second level from the top is at most $n$
\item The top fan-in of $C''$ is at most $Td^2n^32^{O(\sqrt{n})}$.
\end{enumerate}
\end{cor}
\subsection{Random Restrictions}~\label{sec: random res}
From the definition, it follows that the total number of variables in $NW_{n,\mu}$ is $N$. Let the set of all these variables be $\cal V$. We now define our random restriction procedure by defining a distribution $\cal D$ over subsets $V \subset \cal V$. The random restriction procedure samples $V \gets \cal D$, keeps ``alive" only those variables that come from $V$, and sets the rest to zero. We will denote the polynomial obtained by such a restriction as $NW_{n, \mu}|_V$. Observe that a random restriction also results in a distribution over all circuits computing the polynomial $NW_{n, \mu}$. We denote by $C|_V$ the restriction of a circuit $C$ obtained by setting every input gate in $C$ which is labelled by a variable outside $V$ to $0$.
\vspace{2mm}
\noindent
{\bf The distribution ${\cal D}_p$: } Each variable in $\cal V$ is independently kept alive with a probability $p$. We will choose the value of $p$ based on the parameter $\mu$.
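A minimal sketch of the sampling procedure for ${\cal D}_p$ (helper names are ours): each variable is kept alive independently with probability $p$, and a monomial survives precisely when all of its variables are kept.

```python
import random

def random_restriction(variables, p, rng=None):
    """Sample V <- D_p: keep each variable independently with prob p."""
    rng = rng or random.Random()
    return {v for v in variables if rng.random() < p}

def restrict_monomial(mono, alive):
    """A monomial (represented here by its set of variables) survives
    iff all its variables are kept alive; otherwise it is set to zero."""
    return mono if set(mono) <= alive else None
```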
\subsection{Analysing the circuit under random restrictions}
Let $C$ be a $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuit computing the polynomial $NW_{n,{\mu}}$. Let the top fan-in of $C$ be $T$ and the product fan-in at the second level be $d$. So, we have the following expression.
$$NW_{n,{\mu}} = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$
where each $Q_{ij}$ depends on at most $N^{\mu}$ variables.
Recall that, by our choice of parameters, $\delta = (1-\mu)/2$. Let $s$ be a parameter, which we will later set to $\Theta(\sqrt{n})$. If $T\cdot d \geq N^{\frac{\delta}{4} s}$, then we already have the desired lower bound of $n^{\Omega(\sqrt{n})}$ on the size of $C$ and we are done. Therefore, for the rest of this discussion, we will assume that $T\cdot d \leq N^{\frac{\delta}{4}s}$. We now apply the transformation to $C$ given by Corollary~\ref{lem:depth6-cor} to obtain a $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$, which has the following properties:
\begin{enumerate}
\item The inputs to the $\wedge$ gates are the siblings of polynomials $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$
\item The fan-in of the $\times$ gates at the second level from the top is at most $n$
\item The top fan-in of $C''$ is at most $Td^2n^32^{O(\sqrt{n})}$.
\end{enumerate}
We now analyse the effect of the random restrictions on the circuit $C''$. We will choose a parameter $p = N^{-\mu-\delta}$ and keep every variable alive with a probability $p$. The circuit $C''$ can be represented as $$C'' = \sum_{u}\prod_{v} D_{uv}$$
Here, each $D_{uv}$ is a sum of powers of the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$. Our goal is to argue that under random restrictions, all the monomials in each of the $D_{uv}$ are of small support (support at most $s$).
For any polynomial $P$ in $N^{\mu}$ variables and any integers $t, t_0$ such that $t_0 < t$, observe that $P^t$ can be written as
$$P^t = P_0 + \sum_{\alpha}\alpha\cdot P_{\alpha}$$
where $P_0$ is the part of $P^t$ consisting of monomials of support strictly less than $t_0$. The inner sum is over all multilinear monomials $\alpha$ of support equal to $t_0$. Such a decomposition may not be unique, but for this application, it suffices to work with any one such decomposition. The number of such monomials $\alpha$ is at most ${N^{\mu} \choose t_{0}}$. The probability that one such monomial survives the random restriction procedure is equal to $p^{t_0}$. So, the expected number of such multilinear monomials $\alpha$ surviving the random restriction procedure is at most ${N^{\mu} \choose t_{0}}\cdot p^{t_0}$. The crucial observation is that if no such monomials survive, then only the monomials in $P_0$ survive, all of which have support at most $t_0-1$.
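The expectation bound above can be checked exhaustively on a toy instance (the sizes here are illustrative, not the paper's parameters). Taking the bad set to be all support-$t_0$ multilinear monomials on $M$ variables (with $M$ playing the role of $N^\mu$), at least one survives exactly when at least $t_0$ variables stay alive, and the exact probability is indeed dominated by the expected count ${M \choose t_0}p^{t_0}$, as the union bound promises.

```python
from itertools import product
from math import comb

def pr_some_support_t0_survives(M, t0, p):
    """Exact Pr[at least one support-t0 multilinear monomial on M
    variables survives a p-random restriction], by enumerating all
    2^M keep-patterns.  A support-t0 monomial survives iff all t0 of
    its variables are kept, so at least one survives iff at least t0
    variables are kept in total."""
    total = 0.0
    for pattern in product([0, 1], repeat=M):
        kept = sum(pattern)
        weight = p**kept * (1 - p)**(M - kept)
        if kept >= t0:
            total += weight
    return total
```

For $M = 6$, $t_0 = 2$, $p = 1/2$, the exact probability is $57/64$, which is at most the expected count ${6 \choose 2}\cdot(1/2)^2 = 3.75$.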
Now, observe that each of the $D_{uv}$ is a sum of powers of the siblings of polynomials in the set $\{\mathsf{Hom}^{\geq 1}[Q_{ij}] : 1 \leq i \leq T, 1 \leq j \leq d\}$. Define ${\cal B}$ to be the set of all multilinear monomials of support equal to $s$, supported entirely on the variables of some $Q_{ij}$, for $1 \leq i \leq T$ and $1 \leq j \leq d$. From the discussion in the paragraph above, the following observation follows.
\begin{obs}~\label{obs: random rest 1}
Let the polynomials $D_{uv}$, $Q_{ij}$ and the set ${\cal B}$ be as defined above. Then,
\begin{itemize}
\item $|{\cal B}| \leq T\cdot d\cdot {N^{\mu} \choose s}$
\item If none of the monomials in ${\cal B}$ survive under some random restrictions, then each of the polynomials $D_{uv}'$ obtained as a restriction of $D_{uv}$ has all monomials of support at most $s$.
\end{itemize}
\end{obs}
\begin{proof}
The bound on the size trivially follows from the fact that each of the $Q_{ij}$ depends on at most $N^{\mu}$ variables. For the second item, observe that each of the $D_{uv}$ is a sum of powers of siblings of the $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ and all the siblings are supported on the same set of variables. If all the monomials in the set ${\cal B}$ are set to zero, then the surviving monomials in any power of any of the siblings of $\mathsf{Hom}^{\geq 1}[Q_{ij}]$ have support at most $s$.
\end{proof}
We now estimate the probability that at least one of the monomials in the set ${\cal B}$ survives the random restriction procedure. We have the following lemma.
\begin{lem}~\label{lem: rand res 2}
Let $\delta$ be a positive real number such that $\delta = (1-\mu)/2$ and let $p = N^{-\mu-\delta}$. Then
$$\Pr_{V\leftarrow {\cal D}_p}\left[\left|{\cal B}|_{V}\right| \geq 1\right] \leq N^{-3/4 \cdot\delta \cdot s}$$
\end{lem}
\begin{proof}
We know that $$|{\cal B}| \leq T \cdot d \cdot {N^{\mu} \choose s}$$ and the probability that any fixed monomial in ${\cal B}$ survives the random restriction procedure is at most $p^{s}$. So $${\mathbb E}_{V\leftarrow {\cal D}_p}\left[\left|{\cal B}|_{V}\right|\right] \leq T \cdot d \cdot {N^{\mu} \choose s} \cdot p^s $$
Now, observing that the value of $T\cdot d$ is at most $N^{\frac{\delta}{4}s}$ and $p = N^{-\mu-\delta}$, the expected value is at most $$ N^{\frac{\delta}{4}s} {N^{\mu} \choose s} \cdot N^{-(\mu+\delta)s} \leq N^{-3/4 \cdot\delta \cdot s}$$
The lemma then follows by Markov's inequality.
\end{proof}
As a corollary of Lemma~\ref{lem: rand res 2} and Observation~\ref{obs: random rest 1}, we get the following lemma.
\begin{lem}~\label{lem: rand res main}
Let $\delta$ be a positive real number such that $ \delta = (1-\mu)/2$ and let $p = N^{-\mu-\delta}$. Then with probability at least $1- N^{-3/4 \cdot\delta \cdot s}$ over random restrictions $V \leftarrow {\cal D}_p$, the polynomial computed by the circuit $C''|_{V}$ can be written as $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$, where each of the monomials in each of the polynomials $D_{uv}'$ has support at most $s$.
\end{lem}
\subsection{Upper bound on the complexity of C}
In order to upper bound the dimension of the projected shifted partial derivatives (under random restrictions) of the $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$, it suffices, by Corollary~\ref{lem:depth6-cor}, to upper bound the dimension of the space of projected shifted partial derivatives of the $\Sigma\Pi\Sigma\wedge\Sigma\Pi$ circuit $C''$ given by that corollary. In some sense, $C''$ is more structured than $C$, and this lets us prove a better upper bound.
Recall that we are under the assumption that for the circuit $C$, the product of the top fan-in and the product fan-in at level two is at most $N^{\frac{\delta}{4} \cdot s}$, else we are already done.
From Lemma~\ref{lem: rand res main}, we know that with a high probability, under random restrictions, we are left with a circuit of the form $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$ where each of the monomials in each of the polynomials $D_{uv}'$ has support at most $s$. The upper bound on the complexity of the projected shifted partial derivatives of $\sum_{u = 1}^{T'} \prod_{v = 1}^n D_{uv}'$ then just follows from the upper bound for homogeneous depth four circuits of bounded bottom support proved in~\cite{KLSS14, KS-full}. We restate the bound from~\cite{KS-full}.
\begin{lem}~\label{lem:lowsupbound1}
Let $C$ be a depth 4 circuit with the fan-in of the product gates at level two bounded by $n$, the bottom support bounded by $s$ and computing a polynomial in $N$ variables. Let ${\cal M}$ be a set of monomials of degree equal to $r$ and let $m$ be a positive integer. Then, $$\Phi_{{\cal M}, m}(C) \leq \text{Top fan-in}(C){n + r \choose r}{N \choose m+ rs}$$ for any choice of $m, r, s, N$ satisfying $m+rs \leq N/2$.
\end{lem}
The upper bound for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits follows easily from the above lemma after random restrictions, and we formalize this in the lemma below.
\begin{lem}~\label{lem:complexity ub}
Let $\mu$ be a real number such that $0 \leq \mu < 1$. Let $\delta = (1-\mu)/2$, let $p = N^{-\mu-\delta}$ and let ${\mathbb{F}}$ be a field of characteristic zero. Let $P$ be a polynomial of degree $n$ in $N$ variables over ${\mathbb{F}}$ which is computed by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuit $C$ of top fan-in $T$ and degree of product gates at level two at most $d$, i.e.\ $P$ can be represented as $$P = \sum_{i=1}^T \alpha_i\cdot\prod_{j = 1}^d Q_{ij}$$ where the $\alpha_i$ are field constants.
Let $m$ and $r$ be positive integers satisfying $m+rs \leq N/2$ and ${\cal M}$ be any subset of multilinear monomials of degree equal to $r$.
If $Td \leq N^{\frac{s\cdot \delta}{4}}$, then with probability at least $1- N^{-3/4 \cdot\delta \cdot s}$ over random restrictions $V \leftarrow {\cal D}_p$, $$\Phi_{{\cal M}, m} (C|_V) \leq Td^2n^3 \cdot rs \cdot 2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r} $$
\end{lem}
\begin{proof}
The lemma follows immediately from Corollary~\ref{lem:depth6-cor}, Lemma~\ref{lem: rand res main} and Lemma~\ref{lem:lowsupbound1}.
\end{proof}
\subsection{Nisan-Wigderson polynomial under random restrictions}
To complete the proof of Theorem~\ref{thm:mainthm intro}, we need a lower bound on the dimension of the space of projected shifted partial derivatives of the polynomial $NW_{n,{\mu}}$, under random restrictions.
To this end, we will use the lower bound proved by Kayal and Saha~\cite{KayalSaha14}.
We first enumerate our choice of parameters. Recall that $\delta = (1-\mu)/2$ is a positive real number.
\begin{enumerate}
\item $\gamma = \frac{2(\mu + \delta) + 1}{1-\mu-\delta}$
\item $N$ is such that $N/n$ is set equal to the smallest prime number between $n^{1 + \gamma}$ and $2n^{1+\gamma}$.
\item $\rho = (\mu + \delta)\frac{\log N}{\log n}$
\item $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot n$ , where $D-1$ is the degree of the underlying univariate polynomials in the definition of $NW_{n,{\mu}}$.
\item $r, s$, which are the order of derivatives and the bound on the bottom support of the circuit after random restrictions respectively, are chosen such that $r = \epsilon_1\cdot \sqrt{n}, s = \epsilon_2\cdot \sqrt{n}$. Here, $\epsilon_1$ and $\epsilon_2$ are small enough positive real numbers satisfying $\epsilon_1\cdot\epsilon_2 = 0.001$.
\item $m = \frac{N}{2}(1-r\frac{\ln n}{n})$ is the degree of the shifts.
\item $p = N^{-(\mu + \delta)}$ is the probability with which each variable is independently kept alive.
\item ${\cal M}$ is the set of all multilinear monomials of degree $r$. We take partial derivatives with respect to monomials in this set.
\end{enumerate}
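The parameter choices above can be instantiated mechanically. The sketch below (function names ours; trial-division primality testing is for illustration only) computes $\delta$, $\gamma$, $N$, $\rho$, $D$ and $p$ for a concrete $n$ and $\mu$; Bertrand's postulate guarantees that the prime sought in item 2 exists.

```python
from math import log

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def nw_parameters(n, mu):
    """Instantiate the parameter choices in the text for concrete n, mu
    (illustrative only; constants exactly as in the enumeration above)."""
    delta = (1 - mu) / 2
    gamma = (2 * (mu + delta) + 1) / (1 - mu - delta)
    # N/n is the smallest prime in [n^(1+gamma), 2*n^(1+gamma)];
    # Bertrand's postulate guarantees one exists
    lo = n ** (1 + gamma)
    q = next(m for m in range(int(lo), int(2 * lo) + 2)
             if m >= lo and is_prime(m))
    N = n * q
    rho = (mu + delta) * log(N) / log(n)
    D = (gamma + rho) / (2 * (1 + gamma)) * n
    p = N ** (-(mu + delta))
    return dict(delta=delta, gamma=gamma, q=q, N=N, rho=rho, D=D, p=p)
```

For instance, for $\mu = 1/2$ one gets $\delta = 1/4$ and $\gamma = 10$, and for $n = 2$ the prime sought lies in $[2^{11}, 2^{12}]$.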
We are now ready to state the lower bound on the dimension of projected shifted partial derivatives as in~\cite{KayalSaha14}.
\begin{lem}[Kayal-Saha~\cite{KayalSaha14}]~\label{lem: KS main}
Let $NW_{n,{\mu}}$ be the Nisan-Wigderson polynomial as defined in Definition~\ref{defn:NW}. Let ${\mathbb{F}}$ be any field of characteristic zero. Then, for the choice of parameters defined above,
$$\Phi_{{\cal M}, m}(NW_{n,{\mu}}|_V) \geq \frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right) $$
with probability at least $1 - \frac{1}{n^{\Theta(1)}}$ over random restrictions $V \leftarrow {\cal D}_p$.
\end{lem}
\subsection{Wrapping up the proof of Theorem~\ref{thm:mainthm intro}}
From Lemma~\ref{lem: KS main} and Lemma~\ref{lem: rand res main}, we know that with a non-zero probability over the random restrictions $V$ from the distribution ${\cal D}_p$, the following two conditions hold.
\begin{enumerate}
\item $$\Phi_{{\cal M}, m}(NW_{n,{\mu}}|_V) \geq \frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right) $$
\item $$\Phi_{{\cal M}, m} (C|_V) \leq Td^2n^3 \cdot rs \cdot 2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r}$$
\end{enumerate}
If $C$ computed the polynomial $NW_{n,{\mu}}$, then
$$Td^2n^3 \cdot rs \geq \frac{{\frac{1}{n^{O(1)}}\text{min}\left(\frac{p^r}{4^r} \cdot {N \choose r} \cdot {N \choose m}, {N \choose m + n - r}\right)}}{{2^{O(\sqrt{n})}\cdot {N \choose m+rs} \cdot {n + r \choose r}}} $$
From the calculations in Appendix~\ref{sec:calc}, it follows that for our choice of parameters, the ratio is at least $\exp(\sqrt{n}\log n)$. So, we have the following theorem.
\begin{thm}~\label{thm:mainthm}
Let $\mu$ be an absolute constant such that $0 \leq \mu < 1$ and ${\mathbb{F}}$ be a field of characteristic zero. For $1 \leq i \leq T$ and $1 \leq j \leq d$, if there exist polynomials $Q_{ij}$, each depending on at most $s = N^{\mu}$ variables, such that
$$NW_{n,{\mu}} = \sum_{i = 1}^T\prod_{j = 1}^{d} Q_{ij}$$
Then
$$T\cdot d \geq n^{\Omega_{\mu}(\sqrt{n})}$$
\end{thm}
As a remark, we mention here that the lower bound above also holds for any translation $NW_{n,{\mu}}(\overline{X} + \overline{a})$ of the polynomial $NW_{n,{\mu}}(\overline{X})$. This is because the highest degree term of $NW_{n,{\mu}}(\overline{X} + \overline{a})$ equals the polynomial $NW_{n,{\mu}}(\overline{X})$ and from Lemma~\ref{lem:interpolation}, the homogeneous components of a polynomial computable by small sized $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits also have small sized $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits. We leave the details to the interested reader.
\section{Application to polynomial identity testing}~\label{sec:pit}
In this section, we prove Theorem~\ref{thm:mainthm2 intro}.
We are interested in identity testing for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits, i.e.\ for polynomials in $N$ variables $\{X_1, X_2, \ldots, X_N\}$ which can be expressed in the form
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that
\begin{enumerate}
\item The individual degree in $P$ of every variable is at most $k$
\item Each $Q_{ij}$ depends on at most $s$ variables
\end{enumerate}
For this application, we will think of $k$ and $T$ as being polynomial in $\log N$, and of $s$ as being $N^{1/2-\epsilon}$ for a positive constant $\epsilon$. Observe that the bound on the individual degree lets us upper bound the total degree of the polynomial by $Nk$.
\iffalse
\subsection{Overview of the proof}~\label{sec:pit overview}
At a high level, our goal is to reduce the number of variables, while preserving the zeroness/nonzeroness of the polynomial. We will show that we can do this while not blowing up the degree of the polynomial by too much. Once we have reduced the number of variables to $N'$, we will apply a brute force hitting set of size $\text{(Degree + 1)}^{\text{(Number of variables)}}$ as given by Lemma~\ref{lem: comb nulls}
In order to reduce the number of variables, we use the well known idea of trading hardness for randomness for arithmetic circuits given by Kabanets and Impagliazzo~\cite{KI04} and a version of it given for low depth circuits by Dvir, Shpilka and Yehudayoff~\cite{DSY09}.
\fi
We describe the construction of the hitting set in Section~\ref{sec:hitting set} and prove its correctness in Section~\ref{sec:hitting set correct}. We go over some preliminaries that we need in our proof in the next section.
\subsection{Some preliminaries}
In the following lemma, we prove some properties of the model of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits, which will be useful in the proof of the identity testing result.
\begin{lem}~\label{lem: model props}
Let ${\mathbb{F}}$ be a field of characteristic zero.
Let $P$ be a non-zero polynomial in $N$ variables and of individual degree at most $k$ over ${\mathbb{F}}$, which is computed by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit $C$ of top fan-in $T$ and product fan-in $d$ at level two, i.e.\ $P$ can be expressed as
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$
such that for each $i \in [T]$ and $j \in [d]$, $Q_{ij}$ depends on at most $s$ variables. Then, the following are true.
\begin{enumerate}
\item For every variable $y$ and integer $1 \leq j \leq k$, $\frac{\partial^j P}{\partial y^j}$ can be computed by a circuit of the form $$\frac{\partial^j P}{\partial y^j} = \sum_{i = 1}^{T'} \prod_{l = 1}^d Q_{il}'$$
where $T' \leq T\cdot (k+1)^2$ and each of the polynomials $Q_{il}'$ depends on at most $s$ variables.
\item For any $a \in {\mathbb{F}}^N$, $P(\overline{X} + \overline{a})$ can be computed by a circuit of the form $$P(\overline{X} + \overline{a}) = \sum_{i = 1}^{T} \prod_{j = 1}^d Q_{ij}''$$
where each of the polynomials $Q_{ij}''$ depends on at most $s$ variables.
\end{enumerate}
\end{lem}
\begin{proof}
The proof of the second item is immediate from the definitions. The only thing that changes due to a translation is the number of monomials in the $Q_{ij}$. The number of variables that each $Q_{ij}$ depends on remains unchanged, and so does the fan-in of the top sum gate and the product gates at level two.
We now prove the first item. Let the set of variables in $P$ be $\overline{X} = \overline{X'} \cup \{y\}$ where $X'$ is of size $N-1$. Since the individual degree of $P$ is at most $k$, we can write $P = \sum_{i = 0}^k C_i(\overline{X'})\cdot y^i$. Here, the $C_i(\overline{X'})$ are polynomials only in the $X'$ variables and are the coefficients of $y^i$ when viewing $P$ as an element of ${\mathbb{F}}[\overline{X'}][y]$. Now, for every $0 \leq i \leq k$, we can compute each $C_i$ by a $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuit with top fan-in at most $T\cdot(k+1)$ by interpolation, as given by Lemma~\ref{lem:extracting coefficients}. All the partial derivatives of $P$ with respect to $y$ are linear combinations of terms of the form $C_{j_1}\cdot y^{j_2}$, and so the result follows.
\end{proof}
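The coefficient extraction used in the proof can be sketched as follows: with black-box access to $y \mapsto P(\overline{X'}, y)$ at a fixed point, evaluating at $k+1$ distinct values of $y$ and solving the resulting interpolation problem recovers the values $C_0, \ldots, C_k$. The helper below (our own, via Newton's divided differences over the rationals) is merely a stand-in for Lemma~\ref{lem:extracting coefficients}.

```python
from fractions import Fraction

def coefficients_in_y(P_at, k):
    """Given black-box access y -> P(x', y) for a fixed x', recover the
    values C_0(x'), ..., C_k(x') by evaluating at y = 0..k and
    interpolating: Newton's divided differences, then expansion to the
    monomial basis."""
    ys = [Fraction(i) for i in range(k + 1)]
    vals = [Fraction(P_at(y)) for y in ys]
    dd = vals[:]                      # divided-difference table, in place
    for j in range(1, k + 1):
        for i in range(k, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (ys[i] - ys[i - j])
    coeffs = [Fraction(0)] * (k + 1)
    basis = [Fraction(1)]             # product (y - y_0)...(y - y_{j-1})
    for j in range(k + 1):
        for idx, c in enumerate(basis):
            coeffs[idx] += dd[j] * c
        # multiply basis by (y - ys[j])
        new = [Fraction(0)] * (len(basis) + 1)
        for idx, c in enumerate(basis):
            new[idx] -= ys[j] * c
            new[idx + 1] += c
        basis = new
    return coeffs
```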
We will also need the following simple fact about polynomials.
\begin{lem}~\label{lem:non zero derivative}
Let ${\mathbb{F}}$ be a field of characteristic zero.
Let $R \in {\mathbb{F}}[y]$ be a non-zero polynomial of degree at most $t$ over the field ${\mathbb{F}}$. Then, for every $a \in {\mathbb{F}}$ such that $R(a) = 0$, there exists a $j$ such that $0 \leq j \leq t-1$ and $\frac{\partial^j R}{\partial y^j}(a) = 0$ and $\frac{\partial^{j+1} R}{\partial y^{j+1}}(a) \neq 0$.
\end{lem}
\begin{proof}
Let $t'$ be the degree of $R$ in $y$, so that the coefficient $C_{t'}$ of $y^{t'}$ in $R$ is non-zero, and hence $\frac{\partial^{t'} R}{\partial y^{t'}} = t'!\cdot C_{t'}$ is a non-zero constant. Let $j$ be the largest index such that $\frac{\partial^j R}{\partial y^j}(a) = 0$; such a $j$ exists since $R(a) = 0$, and $j \leq t'-1 \leq t-1$ since the $t'$-th derivative does not vanish. By the maximality of $j$, $\frac{\partial^{j+1} R}{\partial y^{j+1}}(a) \neq 0$, and the lemma follows.
\end{proof}
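A quick numeric check of the lemma (helper names ours): for $R(y) = y^3 - y^2$ and $a = 0$ we have $R(0) = R'(0) = 0$ while $R''(0) = -2 \neq 0$, so the witness order is $j = 1$.

```python
def derivative(coeffs):
    """Coefficients (low to high) of the derivative."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, a):
    return sum(c * a**i for i, c in enumerate(coeffs))

def witness_order(coeffs, a):
    """Largest j with R^(j)(a) = 0, for R with non-zero leading
    coefficient; by maximality R^(j+1)(a) != 0, as in the lemma."""
    j, order, R = None, 0, list(coeffs)
    while R:
        if evaluate(R, a) == 0:
            j = order
        R, order = derivative(R), order + 1
    return j
```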
We will crucially use the following result of Dvir, Shpilka, Yehudayoff~\cite{DSY09} in the analysis of the hitting set constructed in this paper.
\begin{lem}[Dvir, Shpilka, Yehudayoff~\cite{DSY09}]~\label{lem:DSY main}
For a field ${\mathbb{F}}$, let $P \in {\mathbb{F}}[X_1, X_2, \ldots, X_N, Y ]$ be a non-zero polynomial of degree at most $k$ in $Y$. Let $f \in {\mathbb{F}}[X_1, X_2, \ldots, X_N]$ be a polynomial such that $P(X_1, X_2, \ldots, X_N, f) = 0$ and $\frac{\partial P}{\partial Y} (0, 0, \ldots, 0, f(0, 0, \ldots, 0))\neq 0$. Let $$P = \sum_{i = 0}^k C_i(X_1, X_2, \ldots, X_N)\cdot Y^i$$ Then, for every $t \geq 0$, there exists a polynomial $R_t \in {\mathbb{F}}[Z_1, Z_2, \ldots, Z_{k+1} ]$ of degree at most $t$ such that $$\mathsf{Hom}^{\leq t}[f(X_1, X_2, \ldots, X_N)] = \mathsf{Hom}^{\leq t}[R_t(C_0, C_1, \ldots, C_k)] $$
\end{lem}
A key technical idea in the proof will be the notion of Nisan-Wigderson designs introduced in~\cite{NW94}. We will use the following lemma.
\begin{lem}[Nisan-Wigderson~\cite{NW94}]~\label{lem: designs}
For every $a, b \in {\mathbb{N}}$, $b < 2^a$, there exists a family of sets $S_1, S_2, \ldots, S_b \subseteq \{1, 2, \ldots, l\}$ such that
\begin{enumerate}
\item $l \in O(a^2/\log b)$
\item for all $i$, $|S_i| = a$
\item for all $i \neq j$, $|S_i \cap S_j| \leq \log b$
\end{enumerate}
Moreover, such a set family can be constructed in time polynomial in $b$ and $2^l$.
\end{lem}
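The polynomial-based construction behind this lemma can be sketched directly (parameter names ours): take a prime $q$, let the universe be ${\mathbb{F}}_q \times {\mathbb{F}}_q$, and assign to each polynomial $f$ of degree at most $d$ over ${\mathbb{F}}_q$ the set $S_f = \{(x, f(x)) : x \in {\mathbb{F}}_q\}$. Two distinct such polynomials agree on at most $d$ points, which bounds the pairwise intersections.

```python
from itertools import product

def nw_design(q, d, b):
    """Build b subsets of the universe {0, ..., q^2 - 1}, each of size
    q, with pairwise intersections at most d, via graphs of degree-<=d
    polynomials over F_q (assumes q prime and b <= q^(d+1))."""
    assert b <= q ** (d + 1)
    sets = []
    for coeffs in product(range(q), repeat=d + 1):
        if len(sets) == b:
            break
        # S_f = { (x, f(x)) : x in F_q }, flattened to an integer id
        f = lambda x: sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
        sets.append(frozenset(x * q + f(x) for x in range(q)))
    return sets
```

With $d \approx \log b / \log q$ this matches the lemma's trade-off: $b$ sets of size $a = q$ in a universe of size $l = q^2 = O(a^2)$, with intersections at most $\log b$.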
We will also use the following lemma of Alon~\cite{AlonCN} very crucially in our proof.
\begin{lem}[Combinatorial Nullstellensatz~\cite{AlonCN}]~\label{lem: comb nulls}
Let $P$ be a non-zero polynomial of individual degree at most $d$ in $N$ variables over a large enough field ${\mathbb{F}}$. Let $S$ be an arbitrary subset of ${\mathbb{F}}$ of size $d+1$. Then, there exists a point $p$ in $S^{N}$ such that $P(p) \neq 0$.
\end{lem}
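The way the Combinatorial Nullstellensatz is used here can be made concrete: exhaustive search over the grid $S^N$ is a (trivially explicit) hitting set of size $(d+1)^N$ for polynomials of individual degree at most $d$. A sketch, with names of our choosing:

```python
from itertools import product

def grid_hitting_point(P, N, d, S=None):
    """Search the grid S^N (with |S| = d + 1) for a non-zero point of
    P; the Combinatorial Nullstellensatz guarantees one exists whenever
    P is non-zero with individual degree at most d."""
    S = S if S is not None else range(d + 1)
    for point in product(S, repeat=N):
        if P(*point) != 0:
            return point
    return None  # P vanishes on the whole grid => P is the zero polynomial
```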
\subsection{Blackbox PIT for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits}~\label{sec:hitting set}
In this section, we prove the following theorem.
\begin{thm}~\label{thm:mainthm2}
Let $c$ and $\mu$ be arbitrary constants such that $c> 0$ and $0 \leq \mu < 1/2$, and let ${\mathbb{F}}$ be a field of characteristic zero. Let ${\cal C}$ be the set of polynomials $P$ in $N$ variables and individual degree at most $k$ over ${\mathbb{F}}$, with the property that $P$ can be expressed as
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$
such that
\begin{enumerate}
\item $T < \log^c N$
\item $k < \log ^c N$
\item $d < N^c$
\item each $Q_{ij}$ depends on at most $N^{\mu}$ variables
\end{enumerate}
Then, there exists a constant $\epsilon < 1$ dependent only on $c$ and $\mu$, such that there is a hitting set of size $\exp(N^{\epsilon})$ for ${\cal C}$ which can be constructed in time $\exp(N^{\epsilon})$.
\end{thm}
From our proof, it also follows that if each of the polynomials $Q_{ij}$ depends on only $\log^{O(1)} N$ variables, then both the size of the hitting set and the time to construct it are upper bounded by a quasipolynomial function of $N$.
In the rest of the section, we prove Theorem~\ref{thm:mainthm2}. We start by describing the construction of the hitting set $\cal H$.
\subsubsection{Construction of hitting sets for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits for $0 \leq \mu < 1/2$}
Given $\mu$ such that $0 \leq \mu < 1/2$, we pick the parameter $\mu'$ such that $0 < \mu' < 1$ and $\frac{2\mu}{\mu'}$ is a positive constant strictly smaller than $1$.
We construct a family of Nisan-Wigderson designs as described in Lemma~\ref{lem: designs} with the following parameters :
\begin{enumerate}
\item $b$, the number of sets is set equal to $N$
\item $a$, the size of each of the sets $S_i$ is set equal to $N^{\frac{\mu}{\mu'}}\log^{\frac{1}{\mu'}} N$.
\item $l$, the size of the universe is chosen large enough in order to satisfy the hypothesis of Lemma~\ref{lem: designs}. From
Lemma~\ref{lem: designs}, it follows that we can pick $l$ which is not too large ($l \in O(a^2/\log b)$). For the above chosen values of $a, b$, there is a choice of $l$ such that $l$ is at most $N^{\frac{2\mu}{\mu'}}\log^{\frac{2}{\mu'}-1} N$.
\end{enumerate}
Recall that our goal is to construct a hitting set for $\Sigma\Pi\left(\Sigma\Pi\right)^{[N^{{\mu}}]}$ circuits. Observe that the choice of parameters $l, a, b$ satisfy the hypothesis of Lemma~\ref{lem: designs}.
So, we get a collection of $N$ subsets $S_1, S_2, \ldots, S_N$ of $\{1, 2, 3, \ldots, l\}$ satisfying
\begin{enumerate}
\item for all $1\leq i \leq N$, $|S_i| = a$
\item for all $1 \leq i < j \leq N$, $|S_i \cap S_j| \leq \log N$
\end{enumerate}
Moreover, these sets can be constructed in time polynomial in $b$ and $2^l$.
We identify the set $\{1, 2, 3, \ldots, l\}$ with the set of new variables $\overline{Y} = \{Y_1, Y_2, \ldots, Y_l\}$.
Before we proceed further, we need some notation. We will pick $\delta = (1-\mu')/2$, a positive constant. Given $a$, $\mu'$ and $\delta$, we define $\gamma = \frac{2(\mu' + \delta) + 1}{1-(\mu' + \delta)}$. Then, we define $q$ to be the smallest prime number between $({a/2})^{\frac{1+\gamma}{2+\gamma}}$ and $2\cdot ({a/2})^{\frac{1+\gamma}{2+\gamma}}$. Also, we set $a'$ to be equal to $({a/2})^{\frac{1}{2+\gamma}}$.
Observe that $a/2 \leq a'q \leq a$.
For each $i$, such that $1 \leq i \leq N$, let ${S_i}'$ be an arbitrary subset of $S_i$ of size equal to $a'q$. For brevity, we rename the sets $S_i'$ as $S_i$~\footnote{We have replaced the family $\{S_1, S_2, \ldots, S_N\}$ by the set family $\{S_1', S_2', \ldots, S_N'\}$ such that for each $i \in [N]$, $S_i' \subseteq S_i$. Observe that the design based properties of the original system continue to hold. The only thing that changes is that the size of $S_i'$ could be smaller than the size of $S_i$, by at most a factor $2$. }. Let $\rho = (\mu' + \delta)\frac{\log a'q}{\log a'}$ and $D = \frac{\gamma + \rho}{2(1 + \gamma)} \cdot a'$.
Often, for ease of notation, we will identify the subset $S_i$ of $\{1, 2, \ldots, l\}$ with the set of variables $\{Y_j : j \in S_i\}$. We will think of the variables $\{Y_j : j \in S_i\}$ as arranged in an $a'\times q$ matrix $V(i)$, with the variables placed in the matrix in some order.
For every $i\in \{1, 2, 3, \ldots, N\}$, we define $NW_{{a'}, { \mu'}}(S_i)$ as
$$NW_{{a'}, { \mu'}}(S_i) = \sum_{\substack{f(z) \in {\mathbb{F}}_{q}[z] \\
deg(f) \leq D-1}} \prod_{j \in [a']} V(i)_{jf(j)}$$
For a point $p = (p_1, p_2, \ldots, p_l) \in {\mathbb{F}}^l$, we denote by $NW_{{a'}, { \mu'}}(S_i)|p$, the evaluation of $NW_{{a'}, { \mu'}}(S_i)$ when the variable $Y_j$ is set to $p_j$.
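Evaluating $NW_{{a'}, { \mu'}}(S_i)$ at a point is a brute-force sum over the $q^D$ admissible polynomials $f$, which is what makes the evaluation cost invoked later (Lemma~\ref{lem: NW eval}) plausible. A toy sketch, indexing $[a']$ by $\{0, \ldots, a'-1\}$ for convenience (an inessential choice):

```python
from itertools import product

def eval_nw(V, q, D):
    """Brute-force evaluation of NW(S_i) at a point.

    V is the a' x q matrix of values obtained by substituting the point's
    coordinates for the variables {Y_j : j in S_i}.  We return
        sum over f in F_q[z] with deg(f) <= D-1 of  prod_j V[j][f(j)],
    where f ranges over all q**D coefficient vectors; the arithmetic of
    f is mod q, used only to index the columns of V.
    """
    total = 0
    for coeffs in product(range(q), repeat=D):
        f = lambda z, c=coeffs: sum(ci * pow(z, i, q) for i, ci in enumerate(c)) % q
        term = 1
        for j in range(len(V)):
            term *= V[j][f(j)]
        total += term
    return total
```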
Let $G$ be an arbitrary subset of ${\mathbb{F}}$ of size $Nka' + 1$. We define the hitting set ${\cal H}$ as follows.
\begin{define}[Definition of the hitting set ${\cal H}$]~\label{def:hitting set}
$${\cal H} = \left\{ (NW_{{a'}, { \mu'}}(S_1)|p, NW_{{a'}, { \mu'}}(S_2)|p, \ldots, NW_{{a'}, { \mu'}}(S_N)|p) : p \in G^{l} \right\} $$
\end{define}
We now proceed to prove the correctness of the construction. We first prove the following lemma which shows that ${\cal H}$ is explicit and has the correct size as per Theorem~\ref{thm:mainthm2}.
\begin{lem}~\label{lem: hitting set size}
The set ${\cal H}$ as defined in Definition~\ref{def:hitting set} has size at most $(Nka' + 1)^l$ and all its elements can be enumerated in time $a^{a'}\cdot (Nka' + 1)^l\cdot N^{O(1)}$.
\end{lem}
\begin{proof}
The size of the set ${\cal H}$ is equal to $|G|^l = (Nka' + 1)^l$. The set ${\cal H}$ can be enumerated by enumerating through the points $p$ in $G^l$ in some natural order (say lexicographic order) and evaluating the tuple $ (NW_{{a'}, { \mu'}}(S_1)|p, NW_{{a'}, { \mu'}}(S_2)|p, \ldots, NW_{{a'}, { \mu'}}(S_N)|p)$ at each of these points. For every point $p$ and subset $S_i$, the polynomial $NW_{{a'}, { \mu'}}(S_i)$ can be evaluated in time at most $a^{a'}\times \text{Poly}(N)$ from Lemma~\ref{lem: NW eval}. So, the second part of the lemma follows.
\end{proof}
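The enumeration in the proof above is nothing more than iterating over $G^l$ in lexicographic order and mapping each point through the $N$ evaluations. Schematically (the helper below treats each $NW_{{a'}, { \mu'}}(S_i)$ as an abstract callable, so the snippet is independent of the particular design; the names are ours):

```python
from itertools import product

def hitting_set(G, l, nw_evals):
    """Enumerate {(NW(S_1)|p, ..., NW(S_N)|p) : p in G^l}.

    G is a list of field elements and nw_evals a list of N callables,
    the i-th mapping a point p in G^l to NW(S_i)|p.  Points are visited
    in lexicographic order; the resulting set has at most |G|**l tuples.
    """
    return {tuple(ev(p) for ev in nw_evals) for p in product(G, repeat=l)}
```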
Observe that for our choice of parameters, the above bounds on the size and the time of enumeration are bounded by a function which is subexponential in $N$.
We now show that for every non-zero polynomial $P$ in the class ${\cal C}$, as defined in the statement of Theorem~\ref{thm:mainthm2}, there exists a point $p \in {\cal H}$, such that $P(p)$ is non-zero. We show this in Lemma~\ref{lem: hitting set correctness} below. That will complete the proof of Theorem~\ref{thm:mainthm2}.
\subsection{Correctness of the construction}~\label{sec:hitting set correct}
For the rest of this section, we denote $N^{\mu}$ by $s$.
\begin{lem}~\label{lem: hitting set correctness}
Let $P$ be a non-zero polynomial in the set $\cal C$ as defined in the statement of Theorem~\ref{thm:mainthm2}, and let ${\cal H}$ be the set defined in Definition~\ref{def:hitting set}. Then, there is a point $p$ in the set ${\cal H}$ such that $P(p) \neq 0$.
\end{lem}
\begin{proof}
We define $$P_i(\overline{X}, \overline{Y}) := P(NW_{{a'}, { \mu'}}(S_1), NW_{{a'}, { \mu'}}(S_2), \ldots, NW_{{a'}, { \mu'}}(S_i), X_{i+1}, X_{i+2}, \ldots, X_N)$$ to be the polynomial obtained from $P$ by substituting the variables $X_j$ by $NW_{{a'}, { \mu'}}(S_j)$, for every $1 \leq j \leq i$.
From the construction of our hitting set, it follows that it suffices to argue that the polynomial $P_{N}(\overline{X}, \overline{Y})$ is non-zero. If this is true, then the lemma follows from Lemma~\ref{lem: comb nulls}, since the degree of any variable in $P_N(\overline{X}, \overline{Y})$ is at most $Nka'$.
We proceed via contradiction. If possible, let $P_N(\overline{X}, \overline{Y})$ be identically zero. Since $P = P_0(\overline{X}, \overline{Y})$ is non-zero to start with, by a hybrid argument it follows that there is an index $i$, such that $P_i(\overline{X}, \overline{Y})$ is non-zero while $P_{i+1}(\overline{X}, \overline{Y})$ is identically zero. Observe that $P_i$ is a polynomial in the variables $\overline{Y}$ and $X_{i+1}, X_{i+2}, \ldots, X_N$.
In going from $P_{i}$ to $P_{i+1}$, we substituted the variable $X_{i+1}$ by the polynomial $NW_{{a'}, { \mu'}}(S_{i+1})$. Since $P_{i}(\overline{X}, \overline{Y})$ is non-zero by assumption above, there exists a substitution $\overline{c}$ of all variables apart from $\{Y_j : j \in S_{i+1}\}$ and $X_{i+1}$, which keeps the polynomial non-zero. Let the polynomial resulting after this substitution be $P_i'$. From the definitions, it follows that
$$P_i' = P(NW_{{a'}, { \mu'}}(S_1)|{\overline{c}}, NW_{{a'}, { \mu'}}(S_2)|{\overline{c}}, \ldots, NW_{{a'}, { \mu'}}(S_i)|{\overline{c}}, X_{i+1}, X_{i+2}|{\overline{c}}, \ldots, X_N|{\overline{c}}) $$
Observe that each of the polynomials $NW_{{a'}, { \mu'}}(S_j)|{\overline{c}}$ depends only on the variables in the set $S_j \cap S_{i+1}$. From the properties of Nisan-Wigderson designs, and the choice of parameters, the size of this intersection is at most $\log N$. From the definition of $P_i$ and the choice of $\overline{c}$, $P_i'$ is not identically zero. We will think of $P_i'$ as a polynomial in $X_{i+1}$ with the coefficients being polynomials in the variables in the set $\{Y_j : j \in S_{i+1}\}$. Now, we know that the polynomial $P_{i+1}'$ obtained by substituting $X_{i+1}$ by $NW_{{a'}, { \mu'}}(S_{i+1})$ is identically zero. Hence, it must be the case that $X_{i+1} - NW_{{a'}, { \mu'}}(S_{i+1})$ is a factor of $P_i'$.
To proceed further, we need the following claim.
\begin{claim}~\label{clm: p1}
$P_i'$ as defined above can be represented as
$$P_i' = \sum_{r = 1}^T \prod_{j = 1}^d Q_{rj}'$$
such that each of the polynomials $Q_{rj}'$ depends on at most $s\log N$ variables.
\end{claim}
\begin{proof}
Recall that $P$ can be represented as
$$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$
where each $Q_{ij}$ is a polynomial in at most $s = N^{\mu}$ variables.
In going from $P$ to $P_i'$, we have substituted each of the variables outside the set $\{Y_j : j \in S_{i+1}\} \cup \{X_{i+1}\}$ by either a constant or by the polynomial $NW_{{a'}, { \mu'}}(S_{j})|\overline{c}$ (which is a polynomial in at most $|S_j \cap S_{i+1}| \leq \log N$ variables) for some $j$. In either case, after substitution, the polynomial $Q_{rj}'$ obtained from $Q_{rj}$ depends on at most $s\log N$ variables, since $Q_{rj}$ depended on at most $s$ variables. This completes the proof of the claim.
\end{proof}
Moreover, since the individual degree of variables in $P$ is at most $k$, the individual degree of $X_{i+1}$ in $P_i'$ is at most $k$. The goal now is to invoke Lemma~\ref{lem:DSY main}, which would imply that $NW_{{a'}, { \mu'}}(S_{i+1})$ also has a small circuit as a sum of products of polynomials in {\it few} variables; together with the lower bound from Theorem~\ref{thm:mainthm}, this would lead to a contradiction.
We essentially follow this outline. Formally, we use the following claim to complete the proof of Lemma~\ref{lem: hitting set correctness}. We defer the proof of the claim to the end.
\begin{claim}~\label{clm:dsy app}
If $(X_{i+1} - NW_{{a'}, { \mu'}}(S_{i+1}) )$ divides $P_i'$, then $NW_{{a'}, { \mu'}}(S_{i+1})$ can be written as
$$ NW_{{a'}, { \mu'}}(S_{i+1}) = \sum_{r = 1}^{I'} \prod_{j = 1}^{d'} \Gamma_{rj} $$
where
\begin{enumerate}
\item $I' \leq (da'^2 + 1)\cdot {{k+a' + 1} \choose k + 1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$
\item $d' \leq d\cdot a'$
\item Each $\Gamma_{rj}$ depends on at most $s\log N$ variables
\end{enumerate}
\end{claim}
From our choice of parameters, recall that
$$a = N^{\mu/{\mu'}}\cdot \log^{1/{\mu'}} N$$
and $$s = N^{\mu} $$
Therefore, $s\log N \leq N^{\mu}\cdot \log N \leq a^{\mu'}$. To complete the proof, we observe that by Theorem~\ref{thm:mainthm}, we must have $$I'd' \geq (a')^{\Omega(\sqrt{a'})}$$
But, for our choice of parameters,
\begin{enumerate}
\item $I' \leq (da'^2+1)\cdot {{k+a' + 1} \choose k + 1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1} \leq da^{O(Tk^4)} \leq d{a'}^{O(Tk^4)}$ (since $a$ and $a'$ are polynomially related)
\item $d' \leq da'$
\end{enumerate}
This implies that $I'd' \leq d^2a^{O(Tk^4)}$. From our choice of parameters, $s\log N \leq a^{\mu'}$ and $Tk^4 + 2\log d \in o(\sqrt{a'})$. This contradicts the bound $I'd' \geq (a')^{\Omega(\sqrt{a'})}$, and completes the proof of Lemma~\ref{lem: hitting set correctness} assuming Claim~\ref{clm:dsy app}.
\end{proof}
We now give a proof of Claim~\ref{clm:dsy app}.
\begin{proof}[Proof of Claim~\ref{clm:dsy app}]
From Claim~\ref{clm: p1}, we know that $$P_i' = \sum_{r = 1}^T \prod_{j = 1}^d Q_{rj}'$$
such that each $Q_{rj}'$ depends on at most $s\log N$ variables.
Since $P_i'$ is not identically zero and $NW_{{a'}, { \mu'}}(S_{i+1})$ is a root of $P_i'$, it follows from Lemma~\ref{lem:non zero derivative} that there is an integer $\lambda$ such that $0 \leq \lambda \leq k-1$ and, $$\frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}(NW_{{a'}, { \mu'}}(S_{i+1})) = 0$$ and $$\frac{\partial^{\lambda+1} P_i'}{\partial X_{i+1}^{\lambda+1}}(NW_{{a'}, { \mu'}}(S_{i+1})) \neq 0$$
From Lemma~\ref{lem: model props} it follows that $\tilde{P_i'} = \frac{\partial^{\lambda} P_i'}{\partial X_{i+1}^{\lambda}}$ can also be expressed as $$\tilde{P_i'} = \sum_{r = 1}^{T'} \prod_{j = 1}^d \tilde{Q}_{rj}$$
where $T' \leq T\cdot (k+1)^2$ and each of the $\tilde{Q}_{rj}$ depends on at most $s\log N$ variables.
Observe that $\tilde{P_i'}$ vanishes when $NW_{{a'}, { \mu'}}(S_{i+1})$ is substituted for $X_{i+1}$, while its derivative with respect to $X_{i+1}$ does not vanish identically at $X_{i+1} = NW_{{a'}, { \mu'}}(S_{i+1})$. So, in particular, there is a substitution of the $Y$ variables where the derivative $\frac{\partial{\tilde{P_i'}}}{\partial{X_{i+1}}}$ is nonzero. Since the class of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits is closed under translations of variables (from item 2 in Lemma~\ref{lem: model props}), we can assume without loss of generality that the derivative is nonzero when all the variables in $\overline{Y}$ are set to zero. Also observe that by this variable translation, we have actually obtained a polynomial $NW'_{{a'}, { \mu'}}(S_{i+1})$ from $NW_{{a'}, { \mu'}}(S_{i+1})$. Moreover, the degree of $NW'_{{a'}, { \mu'}}(S_{i+1})$ is equal to $a'$ and the homogeneous component of degree $a'$ of $NW'_{{a'}, { \mu'}}(S_{i+1})$ is equal to $NW_{{a'}, { \mu'}}(S_{i+1})$. Denote the polynomial obtained after the variable translation from $\tilde{P_i'}$ by $\tilde{P_i''}$. At this point, the hypothesis of Lemma~\ref{lem:DSY main} is satisfied by $\tilde{P_i''}$.
Let $\tilde{P_i''} = \sum_{j = 0}^k C_j(\overline{Y})\cdot X_{i+1}^j$. Here, $C_j(\overline{Y})$ is a polynomial only in the $Y$ variables and is the coefficient of $X_{i+1}^j$, when viewing $\tilde{P_i''}$ as an element of ${\mathbb{F}}[\overline{Y}][X_{i+1}]$. From Lemma~\ref{lem:extracting coefficients}, we know that each of the polynomials $C_j$ can be expressed as a polynomial of the form
$$C_j = \sum_{r= 1}^{T_j} \prod_{l = 1}^d Q_{rl}''$$
where $T_j \leq T'\cdot(k+1) \leq T\cdot (k+1)^3$ and each $Q_{rl}''$ depends on at most $s\log N$ variables.
Hence, by Lemma~\ref{lem:DSY main}, for every $t \geq 0$, there exists a polynomial $R_t \in {\mathbb{F}}[Z_1, Z_2, \ldots, Z_{k+1} ]$ of degree at most $t$ such that $$\mathsf{Hom}^{\leq t}[NW'_{{a'}, { \mu'}}(S_{i+1})] = \mathsf{Hom}^{\leq t}[R_t(C_0, C_1, \ldots, C_k)] $$
The goal now is to obtain a representation of $NW_{{a'}, { \mu'}}(S_{i+1})$ as a sum of products of polynomials in few variables and show that this contradicts the lower bound in Theorem~\ref{thm:mainthm}.
$NW'_{{a'}, { \mu'}}(S_{i+1})$ is a polynomial of degree at most $a'$. So, there is a polynomial $R_{a'}$ of degree at most $a'$ in $k + 1$ variables such that $$NW'_{{a'}, { \mu'}}(S_{i+1}) = \mathsf{Hom}^{\leq {a'}}[R_{a'}(C_0, C_1, \ldots, C_k)]$$
From the discussion on the relation between $NW'_{{a'}, { \mu'}}(S_{i+1})$ from $NW_{{a'}, { \mu'}}(S_{i+1})$, we also know that
$$NW_{{a'}, { \mu'}}(S_{i+1}) = \mathsf{Hom}^{a'}[NW'_{{a'}, { \mu'}}(S_{i+1})] = \mathsf{Hom}^{a'}[R_{a'}(C_0, C_1, \ldots, C_k)]$$
Since $R_{a'}$ is a polynomial in $k+1$ variables of degree $a'$, the number of monomials in $R_{a'}$ is at most ${a' + k + 1} \choose {k+1}$. Therefore, we can represent $R_{a'}(C_0, C_1, \ldots, C_k)$ as a sum of products of the $C_j$'s, with the sum fan-in at most ${a' + k + 1} \choose {k+1}$ and the product fan-in at most $a'$. Moreover, each of the product gates in this representation takes the polynomials $C_j$'s as inputs. We know that each $C_j$ can be written as
$$C_j = \sum_{r= 1}^{T_j} \prod_{l = 1}^d Q_{rl}''$$
where each $Q_{rl}''$ is a polynomial in at most $s\log N$ variables, and the top sum fan-in $T_j$ is at most $T\cdot (k+1)^3$. For any $t$, the polynomial $C_j^t$ has a similar representation with the top sum fan-in at most ${T\cdot (k+1)^3 + t}\choose t$. Therefore, any product of fan-in at most $a'$ in the $C_j$'s can be written as a sum of products of polynomials in at most $s\log N$ variables, with top fan-in at most $${{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$$
since each $C_j$ is raised to a power of at most $a'$ and there are $k+1$ such $C_j$'s.
Therefore, $R_{a'}(C_0, C_1, \ldots, C_k)$ can be written as $$R_{a'}(C_0, C_1, \ldots, C_k) = \sum_{r = 1}^I \prod_{j = 1}^{d'} \Gamma'_{rj}$$ such that
\begin{enumerate}
\item $I \leq {{k+a' + 1} \choose k+1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$
\item $d' \leq d\cdot a'$
\item Each $\Gamma'_{rj}$ depends on at most $s\log N$ variables
\end{enumerate}
We would now like to extract the homogeneous part of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$, which we know is equal to $NW_{{a'}, { \mu'}}(S_{i+1})$. We do this by a standard application of Lemma~\ref{lem:interpolation}. Since we are interested only in the homogeneous part of degree $a'$, we can assume without loss of generality that each of the polynomials $\Gamma'_{rj}$ is of degree at most $a'$ (we can discard all monomials of degree larger than $a'$ in each of the $\Gamma'_{rj}$, since they do not contribute to the homogeneous component of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$ ). Hence, the degree of $R_{a'}(C_0, C_1, \ldots, C_k)$ is upper bounded by $da'\cdot a'$. So, from Lemma~\ref{lem:interpolation}, we can extract the homogeneous component of degree $a'$ of $R_{a'}(C_0, C_1, \ldots, C_k)$ by blowing up the top fan-in by a factor of at most $da'^2 + 1$. Hence, $NW_{{a'}, { \mu'}}(S_{i+1})$ can be expressed as
$$ NW_{{a'}, { \mu'}}(S_{i+1}) = \sum_{r = 1}^{I'} \prod_{j = 1}^{d'} \Gamma_{rj} $$
where
\begin{enumerate}
\item $I' \leq (da'^2 + 1)\cdot {{k+a' + 1} \choose k + 1} \times {{T\cdot (k+1)^3 + a'}\choose a'}^{k+1}$
\item $d' \leq d\cdot a'$
\item Each $\Gamma_{rj}$ depends on at most $s\log N$ variables
\end{enumerate}
\end{proof}
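The interpolation step used at the end of the proof (via Lemma~\ref{lem:interpolation}) is the classical scaling trick: for a polynomial $P$ of degree at most $d$, the univariate restriction $g(t) = P(t\cdot\overline{Y})$ has $\mathsf{Hom}^{k}[P]$ evaluated at $\overline{Y}$ as its coefficient of $t^k$, so $d+1$ evaluations of $g$ suffice to recover it; this is where the fan-in blow-up of $da'^2 + 1$ comes from. A toy black-box sketch over the rationals (all names below are ours):

```python
from fractions import Fraction

def homogeneous_component(P, y, k, d):
    """Evaluate Hom^k[P] at the point y using only black-box access to P.

    g(t) = P(t*y) has degree <= d, and its coefficient of t^k equals
    Hom^k[P](y).  We evaluate g at t = 0, 1, ..., d and read off that
    coefficient via Lagrange interpolation expanded in the monomial basis.
    """
    ts = [Fraction(t) for t in range(d + 1)]
    gs = [Fraction(P([t * yi for yi in y])) for t in ts]
    coeff = Fraction(0)
    for i, ti in enumerate(ts):
        num = [Fraction(1)]   # coefficients of prod_{j != i} (t - t_j)
        denom = Fraction(1)
        for j, tj in enumerate(ts):
            if j == i:
                continue
            new = [Fraction(0)] * (len(num) + 1)
            for m, c in enumerate(num):
                new[m + 1] += c       # contribution of  c * t^{m+1}
                new[m] -= tj * c      # contribution of -t_j * c * t^m
            num = new
            denom *= ti - tj
        coeff += gs[i] * num[k] / denom
    return coeff
```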
We remark that if the value of $s$ was $\log^{O(1)} N$ to start with, the same proof as above goes through with $l$ and $a$ being set to polynomials of sufficiently high degree in $\log N$. The size of the hitting set and the time to construct it in this case are upper bounded by a quasipolynomial function in $N$.
\section{Open problems}~\label{sec:open ques}
We conclude with some open problems.
\begin{enumerate}
\item An intriguing open question is to obtain PIT for $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits without the restriction on the individual degree. The strategy in this paper relies on hardness randomness tradeoffs for bounded depth circuits~\cite{DSY09}. The tradeoffs in~\cite{DSY09} crucially use the fact that the individual degree is bounded.
\item Another related question would be to get any non-trivial PIT (even subexponential) for the sum of constant many products of degree two polynomials.
\item It would also be interesting to understand if one could obtain any non-trivial PIT for slightly non-multilinear depth four circuits (say individual degree at most 2) with bounded top fan-in. A natural strategy for this question would be to reduce it to the case of $\Sigma\Pi\left(\Sigma\Pi\right)^{[s]}$ circuits by either expanding out the polynomials $Q_{ij}$ which depend on too many variables or using a partial-derivative-like trick, as in~\cite{OSV14}. The immediate challenge in this case is that the top fan-in seems to increase under any of these tricks, and the calculations in this paper do not seem to work out.
\end{enumerate}
\section*{Acknowledgements}
We would like to thank Rafael Oliveira for many helpful discussions regarding hardness-randomness tradeoffs for bounded depth arithmetic circuits at the early stages of this work.
\bibliographystyle{alpha}
\section{Introduction}
\label{intro}
Observations of the Cosmic Microwave Background (CMB) radiation are one of the most powerful tools to study the early Universe, and they also provide precise measurements of the cosmological parameters. Starting with COBE's groundbreaking detection, in the past two decades there has been a major improvement in the measurement of microwave background temperature fluctuations. On the other hand, recent observations of the CMB power spectrum, e.g. the release of \emph{Planck} data \cite{Planckcls13} and the recent claim about the detection of B-modes originating from primordial gravitational waves \cite{BICEP2}, have strengthened the theoretical status of inflationary scenarios among cosmologists.
In the standard (and the simplest) inflationary scenario, the origin of structures in our Universe like galaxies and clusters of galaxies is explained by assuming a stage described by an accelerating (nearly de Sitter) expansion driven by the potential of a single scalar field, and from its quantum fluctuations characterized by a simple vacuum state. In particular, the quantum fluctuations transform into the classical statistical fluctuations that represent the seeds of the current cosmic structure. However, the usual account for the origin of cosmic structure is not fully satisfactory as it lacks a physical mechanism capable of generating the inhomogeneity and anisotropy of our Universe, from an exactly homogeneous and isotropic initial state associated with the early inflationary regime. This issue has been analyzed in previous papers \cite{PSS06,Shortcomings,LLS13} and one key aspect of the problem is that there is no satisfactory solution within the standard physical paradigms of quantum unitary evolution, because this kind of dynamics is not capable of breaking the initial symmetries of the system. To handle this shortcoming, a proposal has been developed by D. Sudarsky and collaborators \cite{PSS06,Sudarsky07,US08,Leon10,Leon11,DT11,LSS12,CPS13,LLS13}. In this scheme, a new ingredient is introduced into the inflationary scenario: \emph{the self-induced collapse hypothesis}. The main assumption is that, at a certain stage in the cosmic evolution, there is an induced jump from the original quantum state characterizing the particular mode of the quantum field; after the jump, the quantum state is inhomogeneous and anisotropic or, more precisely, it is not an eigenstate of the linear and angular momentum operators. This process is similar to the quantum mechanical reduction of the wave function associated with a measurement.
However, in our scheme, there is no external measuring device or observer (as there is nothing in the situation we are considering that could be called upon to play such a role). The hypothesis concerning an observer-independent collapse of the wave function has been proposed and analyzed in the community working on quantum foundations: the continuous spontaneous localization (CSL) model \cite{pearle1989}, representing a continuous version of the Ghirardi-Rimini-Weber model \cite{ghirardi1985}, and the proposals of Penrose \cite{penrose1996} and Di\'osi \cite{diosi1987,diosi1989} addressing gravity as the main agent for triggering the reduction of the wave function, are among the main schemes attempting to model the physical mechanism of a self-induced collapse (for more recent examples see Refs. \cite{weinberg2011,bassi2003}).
Therefore, by considering a self-induced collapse (in each mode) of the inflaton wave function, the inhomogeneities and anisotropies arise at each particular length scale. As a consequence of this modification of the inflationary scenario, the predicted primordial power spectrum is modified, and so is the CMB fluctuation spectrum. Previous works \cite{PSS06,Shortcomings,LLS13,DT11} have extensively discussed both the conceptual and formal aspects of this new proposal, and we refer the reader to the references. However, we would like to comment on an important point, namely the characteristics of the state into which such a jump occurs. As mentioned previously, the quantum state must not be a homogeneous and isotropic state. One could then assume a particular collapse mechanism, which would lead to such a post-collapse state, and then calculate the corresponding observables in that state. The question now would be: which are the appropriate observables for the problem at hand that emerge from the quantum theory?
One possible approach would be to assume that both the metric and the matter perturbations are well characterized by a quantum field theory constructed on a classical unperturbed background; in the context of inflation, this approach corresponds to the quantization of the so-called Mukhanov-Sasaki variable, which is then used to yield predictions for the observational quantities (e.g. the spectrum of the temperature anisotropies). Therefore, if one assumes a particular collapse mechanism, which somehow modifies the standard unitary evolution given by Schroedinger's equation, then the dynamics of the observables, in terms of the Mukhanov-Sasaki variable, would be modified directly; this scheme was developed in Refs. \cite{jmartin,tpsingh} for the inflationary Universe.
Another possible approach to relate the quantum degrees of freedom with the observational quantities is to rely on the semiclassical gravity picture; within this framework, the metric perturbations are always described in a classical way, while the matter degrees of freedom are modeled by a quantum field theory in a curved classical background. Then, by using Einstein's semiclassical equations $G_{ab} = 8 \pi G \langle \hat{T}_{ab} \rangle$, one relates the quantum matter perturbations with the corresponding ones from the classical metric. Nevertheless, assuming a particular collapse mechanism, which once again can be thought of as a modification of standard Schroedinger's equation, would not affect the dynamics of the metric perturbation; indeed, the dynamics of the modes characterizing the quantum field would be modified, but since the metric perturbation is always a classical object, its dynamics is not given by the modified Schroedinger's equation. Assuming a particular collapse mechanism would only modify the initial conditions of the equation of motion for the metric perturbation, which again is always described at the classical level; in the context of inflation, this was analyzed in Ref. \cite{CPS13}.
In this work, we will take the semiclassical gravity approach, since (as will be argued in the paper) it presents a clear picture of how the inhomogeneities and anisotropies are born from the quantum collapse. Moreover, since the consideration of a particular collapse mechanism will not alter the dynamics of the classical quantities, we can characterize the post-collapse state in a generic way. In particular, we will follow the pragmatic approach first proposed in \cite{PSS06}, in which one describes the collapse by characterizing the expectation values of the quantum field variable and its momentum in the post-collapse state. In Refs. \cite{PSS06,US08,LSS12} two schemes were considered; one in which, after the collapse, both expectation values are randomly distributed within their respective ranges of uncertainties in the pre-collapsed state, and another one in which it is only the conjugate momentum that changes its expectation value from zero to a value in its corresponding range as a result of the collapse. In this paper, we will also consider the possibility that only the field variable changes its expectation value after the collapse.
On the other hand, in all previous works \cite{PSS06,US08,DT11,CPS13} the self-induced collapse of the inflaton wave function is restricted to happen at the inflationary stage of the Universe. However, there is no reason for this restriction, apart from the observational limits imposed by the CMB data. As a matter of fact, the idea of generating the primordial curvature perturbation after the inflationary era has ended is not a new proposal; earlier works based on the curvaton scenario deal with such a picture \cite{curvaton1,curvaton2,curvaton3}, and even in recent works \cite{curvaton4} the curvaton model is still, under certain assumptions, considered a viable option for generating the curvature perturbations. Moreover, in a model by R.M. Wald \cite{wald}, the density perturbations can be generated even if there was no inflationary regime at all. The aim of the present paper is to analyze the possibility that the primordial curvature perturbation can be generated by a self-induced collapse of the wave function of the inflaton field, but with the additional hypothesis that such a collapse occurs during the radiation dominated epoch. We analyze three different possibilities for the post-collapse state of the wave function in a radiation dominated background. As we will show, it is possible to obtain a viable model, i.e. a nearly scale invariant power spectrum. Nevertheless, when comparing the model's prediction with recent data from the CMB temperature and temperature-polarization spectra, the predictions of the collapse model are essentially indistinguishable from the ones given by the traditional slow-roll inflationary scenario provided by a single scalar field.
The paper is organized as follows: In Sec. \ref{classical}, we present the action of the model and solve Einstein's semiclassical equations. In Sec. \ref{quantum}, we perform the quantization of the inflaton field in a radiation dominated background. In Sec. \ref{collapseschemes}, we introduce the collapse hypothesis for three different choices of the post-collapse state: i) the collapse affects only the field variable, ii) the collapse affects only the momentum variable, iii) the collapse affects both the field and momentum variables. In Sec. \ref{oquantities}, we relate the CMB observational quantities with the primordial spectrum modified by the collapse hypothesis. In Sec. \ref{analisis}, we analyze, from the theoretical point of view, the viability of the power spectrum obtained from each one of the three proposed collapse schemes. In Sec. \ref{camb}, we present an analysis where recent observational data is used to examine the validity of the predicted power spectrum. Finally, in Sec. \ref{discussion}, we end with a brief discussion of our conclusions. Regarding notation and conventions, we will work with signature $(-,+,+,+)$ for the metric; primes over functions will denote derivatives with respect to the conformal time $\eta$, and we will use units where $c=\hbar=1$ but keep the gravitational constant $G$.
\section{Classical analysis}
\label{classical}
The background space-time will be described by a spatially flat Friedmann-Robertson-Walker (FRW) radiation dominated Universe. The action of the theory is:
\begin{equation}\label{action}
S = S_{\text{rad}} + S_G + S_{\text{inf}},
\end{equation}
where $S_G$ is the standard action describing the gravity sector, $S_{\text{rad}}$ represents the action of the dominant matter component, which in our case is radiation, and $S_{\text{inf}}$ is the action of a single scalar field $\phi$, minimally coupled to gravity and with an appropriate potential, representing the inflaton:
\begin{equation}
S_{\text{inf}} = \int d^4x \sqrt{-g} \bigg[ -\frac{1}{2} \nabla_a \phi \nabla_b \phi g^{ab} - V[\phi] \bigg].
\end{equation}
Varying the action \eqref{action} with respect to the metric yields Einstein's equations
\begin{equation}
G_{ab} = 8 \pi G (T_{ab}^{\text{rad}} + T_{ab}^{\text{inf}}).
\end{equation}
The energy-momentum tensor for the inflaton can be written as:
\begin{equation}
T^{a \text{ inf}}_{b} = g^{ac} \nabla_c \phi \nabla_b \phi + \delta^a_b \left( \frac{1}{2} g^{cd} \nabla_c \phi \nabla_d \phi -V[\phi] \right).
\end{equation}
Since we will work in a radiation dominated Universe, the contribution of $T_{ab}^{\text{inf}}$ to the total energy-momentum tensor should be negligible, i.e. $T_{ab}^{\text{inf}} \ll T_{ab}^{\text{rad}}$. As usual, we separate the fields into a ``background'' part, taken to be homogeneous and isotropic (in this case a FRW radiation dominated Universe instead of a quasi-de Sitter, inflaton-driven, Universe), and the perturbations. In this way, the metric and the energy-momentum tensor are written as $g = g_0 + \delta g$ and $T_{ab} = T_{ab}^{(0)} + \delta T_{ab}$. One can then apply perturbation theory to Einstein's equations. Nevertheless, we will assume that the dominant contribution to the perturbations in the matter sector is due to the inhomogeneities of the inflaton field; in other words, \textbf{$\delta T_{ab}^{\text{rad}}$ should be negligible compared to $\delta T_{ab}^{\text{inf}}$}. We remind the reader that, at this point, we are not claiming that there are inhomogeneities of any definite size in the Universe; we are merely considering what the dynamics of any such small inhomogeneity would be if it existed. The issue of their presence and magnitude is dealt with at the quantum level. As a matter of fact, if there has been no collapse of the wave function at this point, $\delta T_{ab} = \langle 0| \delta \hat{T}_{ab}^{\text{inf}}|0 \rangle + \langle 0|\delta \hat{T}_{ab}^{\text{rad}}|0 \rangle = 0$; consequently $\delta G_{ab} = 0$, and the space-time is perfectly homogeneous and isotropic. It is only after the collapse that generically $\langle \Theta| \delta \hat{T}_{ab}^{\text{inf}}|\Theta \rangle \neq 0$ and $ \langle \Theta |\delta \hat{T}_{ab}^{\text{rad}}|\Theta \rangle \neq 0$, and thus $\delta T_{ab} \neq 0$. This will be made clearer in the next section; for now, we will continue with the classical analysis and deal with the quantum treatment afterwards.
Einstein's equations for the background $G_{00}^{(0)}=8\pi G T_{00}^{(0)}=8\pi G a^2 \rho$ yield Friedmann's equations. Since we are assuming that the Universe is dominated by radiation, the energy contribution of the inflaton to the total energy density $\rho$ will be negligible; therefore, the equation of state is to a good approximation $P=\rho/3$. Given the previous equation of state, one can find the explicit expression for the scale factor:
\begin{equation}\label{aeta}
a(\eta) = C(\eta-\eta_r)+ a_r,
\end{equation}
where $\eta$ is the conformal time, $C$ is a constant, $\eta_r$ is the conformal time at the beginning of the radiation era and $a_r=a(\eta_r)$. Normalizing the scale factor today as $a_0 = 1$ and assuming that inflation ends at an energy scale of $10^{15}$ GeV, one finds the numerical values $\eta_r \simeq -1.2 \times 10^{-22}$ Mpc, $a_r \simeq 2.4 \times 10^{-28}$ and $C \simeq 1.6 \times 10^{-6}$ Mpc$^{-1}$.
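As a quick numerical illustration (a minimal sketch, using the approximate constants quoted above; nothing here is derived beyond Eq. \eqref{aeta} itself), the linear scale factor and the conformal Hubble factor $\mathcal{H} = a'/a = C/a$ during the radiation era can be evaluated as follows:

```python
# Sketch: the radiation-era scale factor a(eta) = C*(eta - eta_r) + a_r of
# Eq. (aeta) and the conformal Hubble factor H = a'/a = C/a(eta). The numerical
# constants are the approximate values quoted in the text (assumptions).

eta_r = -1.2e-22   # Mpc, conformal time at the start of the radiation era
a_r   = 2.4e-28    # scale factor at eta_r (normalization a_0 = 1 today)
C     = 1.6e-6     # Mpc^-1

def a(eta):
    """Scale factor during radiation domination, Eq. (aeta)."""
    return C * (eta - eta_r) + a_r

def conformal_hubble(eta):
    """H = a'(eta)/a(eta) = C / a(eta)."""
    return C / a(eta)

# a(eta_r) recovers a_r, and the comoving Hubble radius 1/H grows with eta,
# which is why modes that were super-horizon eventually re-enter the horizon:
print(a(eta_r))
print(1.0 / conformal_hubble(1e-5) > 1.0 / conformal_hubble(1e-10))
```

The growth of the comoving Hubble radius $\mathcal{H}^{-1}$ during radiation domination is what will later distinguish the super-horizon from the sub-horizon modes.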
Furthermore, we will ignore the reheating era for most of the treatment. In other words, we will assume that the inflationary regime ends at a conformal time $\eta_{ei} \simeq -10^{-22}$ Mpc and that, for all practical purposes, $\eta_{ei} \simeq \eta_r$.
Now we will focus on the perturbations. The perturbed space-time will be represented by the line element
\begin{equation}\label{metric}
ds^2 = a(\eta)^2 [-(1+2\Phi) d\eta^2 + (1-2\Psi) \delta_{ij} dx^idx^j],
\end{equation}
where we have focused only on the scalar perturbations and have chosen to work in the longitudinal gauge.
As we have said, the contribution from $\delta T_{ab}^{\text{rad}}$ to the perturbations of the matter sector is negligible compared to $\delta T_{ab}^{\text{inf}}$. Thus,
\begin{equation}
\delta G_{ab} = 8 \pi G \delta T_{ab}^{\text{inf}}.
\end{equation}
Furthermore, we can write the scalar field as follows: $\phi (\vec{x},\eta) = \phi_0 (\eta) + \delta \phi (\vec{x},\eta)$, where $\delta \phi \ll \phi_0$.
Einstein's equations at first order in the perturbations, $\delta G_0^0 = 8 \pi G \delta T_0^0$, $\delta G_i^0 = 8 \pi G \delta T_i^0$ and $\delta G^i_j = 8 \pi G \delta T^i_j$, are given respectively by
\begin{equation}\label{00inf1}
\nabla^2 \Psi -3\mathcal{H}(\mathcal{H}\Phi + \Psi') = 4 \pi G [-\phi_0'^2 \Phi + \phi_0' \delta \phi' + \partial_\phi V a^2 \delta \phi],
\end{equation}
\begin{equation}\label{0iinf1}
\partial_i (\mathcal{H} \Phi + \Psi') = 4 \pi G \partial_i ( \phi_0' \delta \phi),
\end{equation}
\begin{eqnarray}\label{ijinf1}
[\Psi'' + \mathcal{H}(2\Psi+\Phi)' + (2\mathcal{H}' + \mathcal{H}^2)\Phi &+& \textstyle{\frac{1}{2}} \nabla^2 (\Phi - \Psi)] \delta^i_j - \textstyle{\frac{1}{2}} \partial^i \partial_j (\Phi - \Psi) = \nonumber \\
& & 4 \pi G [\phi_0' \delta \phi' -\phi_0'^2 \Phi - \partial_\phi V a^2 \delta \phi]\delta^i_j.
\end{eqnarray}
It is easy to see that the case $i\not=j$ in Eq. \eqref{ijinf1}, together with appropriate boundary conditions (most easily seen in the Fourier-transformed version), leads to $\Psi = \Phi$; from now on we will use this result.
By combining Eqs. \eqref{00inf1} and \eqref{0iinf1}, one obtains
\begin{equation}\label{master0}
\nabla^2 \Psi + 4\pi G \phi_0'^2 \Psi = 4 \pi G [ \phi_0' \delta \phi' + (a^2 \partial_\phi V + 3 \mathcal{H} \phi_0') \delta \phi].
\end{equation}
After decomposing $\Psi$ and $\phi$ in Fourier modes, the above equation yields
\begin{equation}\label{master1}
\Psi_{\vec{k}} (\eta) = \frac{ 4 \pi G \phi_0' (\eta) }{-k^2 + 4\pi G \phi_0' (\eta)^2} \left[ \delta \phi'_{\vec{k}} (\eta) + \left( 3\mathcal{H} + \frac{a^2 \partial_\phi V }{\phi_0'(\eta)} \right) \delta \phi_{\vec{k}} (\eta)\right].
\end{equation}
The energy density of the scalar field is $\rho_\phi = T_{00}^{\text{inf}}$. Since the Universe is radiation dominated and the inflationary era has ended, the scalar field is now rapidly oscillating around the minimum of its potential, that is, $\partial_\phi V \simeq 0$; therefore, we can approximate the energy density of the inflaton as $\rho_\phi \simeq \phi_0'^2/2a^2 \ll \rho_{\text{rad}}$. Thus, Eq. \eqref{master1} is rewritten as
\begin{equation}\label{master2}
\Psi_{\vec{k}} (\eta) = \frac{\sqrt{\rho_\phi}}{\sqrt{2} M_P^2 \left( -k^2 + \rho_\phi a^2 / M_P^2 \right)} \left[ a \delta \phi'_{\vec{k}} (\eta) + 3 \mathcal{H} a \delta \phi_{\vec{k}} (\eta) \right],
\end{equation}
where we used the definition of the reduced Planck's mass $M_P^2 \equiv (8\pi G)^{-1}$. Equation \eqref{master2} relates the perturbations in the inflaton field with the perturbations of the metric.
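Explicitly, the step from Eq. \eqref{master1} to Eq. \eqref{master2} amounts to substituting $\phi_0' = a\sqrt{2\rho_\phi}$, which follows from $\rho_\phi \simeq \phi_0'^2/2a^2$, and dropping the $a^2 \partial_\phi V / \phi_0'$ term since $\partial_\phi V \simeq 0$:
\begin{equation}
4\pi G \, \phi_0' = \frac{a\sqrt{2\rho_\phi}}{2 M_P^2} = \frac{a\sqrt{\rho_\phi}}{\sqrt{2}\, M_P^2}, \qquad 4\pi G \, \phi_0'^2 = \frac{2 a^2 \rho_\phi}{2 M_P^2} = \frac{\rho_\phi a^2}{M_P^2},
\end{equation}
so that the numerator and the denominator of Eq. \eqref{master1} reduce term by term to those of Eq. \eqref{master2}.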
Moreover, Eq. \eqref{master2} was obtained by combining Eqs. \eqref{00inf1} and \eqref{0iinf1}, which correspond to Einstein's equations with components $\delta G^{0}_0 = 8\pi G \delta T^0_0$ and $\delta G^{0}_i = 8\pi G \delta T^0_i$; it is a well-known result \cite{Wald84} that these particular equations are not actual motion equations but rather constraint equations. The motion equation is the one given by $\delta G^i_j = 8\pi G \delta T^i_j$ [Eq. \eqref{ijinf1}]. From this equation (with $i = j$) one can derive the motion equation for the metric perturbation; for the epoch corresponding to a radiation dominated Universe, the motion equation for the modes $\Psi_{\vec{k}}$ takes the form
\begin{equation}\label{movpsi}
\Psi_{\vec{k}}'' (\eta) + \frac{4}{\eta-\eta_r + a_r/C} \Psi_{\vec{k}}' (\eta) + \frac{k^2}{3} \Psi_{\vec{k}} (\eta) = 0.
\end{equation}
The analytical solution to Eq. \eqref{movpsi} is:
\begin{eqnarray}\label{psisolucion}
\Psi_{\vec{k}} (\eta) &=& \frac{3}{(k\eta-\delta_k)^2} \bigg\{ C_1 (\vec{k}) \left[ \frac{\sqrt{3}}{k\eta-\delta_k} \sin \left( \frac{k\eta-\delta_k}{\sqrt{3}} \right) - \cos \left( \frac{k\eta-\delta_k}{\sqrt{3}} \right) \right] \nonumber \\
&+& C_2 (\vec{k}) \left[ \frac{\sqrt{3}}{k\eta-\delta_k} \cos \left( \frac{k\eta-\delta_k}{\sqrt{3}} \right) + \sin \left( \frac{k\eta-\delta_k}{\sqrt{3}} \right) \right] \bigg\},
\end{eqnarray}
with $\delta_k \equiv k\eta_r - k a_r/C$. Once the collapse has created all the modes $\Psi_{\vec{k}}$ (as will be argued in more detail in Sec. \ref{semiclassical}), we can divide them into two types:
\begin{itemize}
\item Modes with an associated proper wavelength bigger than the Hubble radius, we will call these the super-horizon modes.
\item Modes with an associated proper wavelength smaller than the Hubble radius, we will call these the sub-horizon modes.\footnote{The condition that modes are smaller than the horizon is given by $k \gg aH = \mathcal{H}$, by using the exact expression for $\mathcal{H}$ during the radiation dominated epoch $\mathcal{H} \equiv a'(\eta)/a(\eta) = 1/(\eta-\eta_r + a_r/C)$, one checks that the latter condition is equivalent to $(k\eta-\delta_k) \gg 1$. Alternatively, modes that are super-horizon during radiation satisfy $(k\eta-\delta_k) \ll 1 $.}
\end{itemize}
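The analytic solution \eqref{psisolucion} can be checked numerically. The following is a minimal sketch (with illustrative values, not part of the original analysis): it integrates Eq. \eqref{movpsi} with a fixed-step Runge-Kutta scheme, written in the variable $x \equiv k\eta - \delta_k$ in which the equation becomes $\Psi_{xx} + (4/x)\Psi_x + \Psi/3 = 0$, and compares with the analytic growing mode ($C_1 = 1$, $C_2 = 0$).

```python
import math

def psi_ana(x):
    """Growing mode (C1 = 1, C2 = 0) of Eq. (psisolucion), x = k*eta - delta_k."""
    u = x / math.sqrt(3.0)
    return (3.0 / x**2) * (math.sqrt(3.0) * math.sin(u) / x - math.cos(u))

def dpsi_ana(x, h=1e-6):
    """Central-difference derivative of the analytic mode (illustrative only)."""
    return (psi_ana(x + h) - psi_ana(x - h)) / (2.0 * h)

def rhs(x, psi, dpsi):
    """Eq. (movpsi) in x: psi'' = -(4/x) psi' - psi/3."""
    return -4.0 * dpsi / x - psi / 3.0

def integrate(x0, x1, n=20000):
    """Fixed-step RK4 for the second-order ODE, started on the analytic mode."""
    h = (x1 - x0) / n
    x, y, v = x0, psi_ana(x0), dpsi_ana(x0)
    for _ in range(n):
        k1y, k1v = v, rhs(x, y, v)
        k2y, k2v = v + 0.5*h*k1v, rhs(x + 0.5*h, y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = v + 0.5*h*k2v, rhs(x + 0.5*h, y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = v + h*k3v, rhs(x + h, y + h*k3y, v + h*k3v)
        y += (h / 6.0) * (k1y + 2*k2y + 2*k3y + k4y)
        v += (h / 6.0) * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y

# Super-horizon limit x << 1: the growing mode approaches C1/3 = 1/3, while
# deep inside the horizon the solution is suppressed by the 3/x^2 envelope.
print(abs(psi_ana(1e-3) - 1.0 / 3.0) < 1e-5)
print(abs(integrate(0.01, 10.0) - psi_ana(10.0)) < 1e-5)
```

The first check reproduces the constancy of the super-horizon modes discussed below; the second confirms that \eqref{psisolucion} indeed solves \eqref{movpsi}.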
If $(k\eta-\delta_k) \gg 1$ the general solution, Eq. \eqref{psisolucion}, approaches zero; in other words, for sub-horizon modes $\Psi_{\vec{k}} \to 0$. On the other hand, the dynamics of the super-horizon modes, i.e. those that satisfy $(k\eta-\delta_k) \ll 1$, is given by
\begin{equation}
\Psi_{\vec{k}} (\eta) = \frac{C_1 (\vec{k})}{3} + \frac{3^{3/2} C_2 (\vec{k}) }{(k\eta-\delta_k)^3}.
\end{equation}
The second term is known as the decaying mode, which we shall neglect hereafter. Since sub-horizon modes decay as $1/(k\eta-\delta_k)^2 \propto 1/a(\eta)^2$, they cannot account for the modes of interest in the angular power spectrum; conversely, super-horizon modes remain constant until they enter the horizon. Therefore, we will focus only on the super-horizon modes
\begin{equation}\label{psisuperhor}
\Psi_{\vec{k}} (\eta) \simeq \frac{C_1 (\vec{k})}{3}.
\end{equation}
The constant $C_1(\vec{k})$ can be obtained from Eq. \eqref{master2}, which, as we said, corresponds to a constraint equation, evaluated at some particular time, say $\eta_{\vec{k}}^c$ (later in the paper we will argue in more detail that this corresponds to the time of collapse), before the mode enters the horizon; thus,
\begin{equation}\label{masterx}
\Psi_{\vec{k}} \simeq \frac{\sqrt{\rho_\phi}}{\sqrt{2} M_P^2 \left( -k^2 + \rho_\phi a^2 / M_P^2 \right)} \left[ a \delta \phi'_{\vec{k}} (\eta) + 3 \mathcal{H} a \delta \phi_{\vec{k}} (\eta) \right] \bigg|_{\eta=\eta_{\vec{k}}^c} \qquad \textrm{with} \qquad k\eta_{\vec{k}}^c-\delta_k \ll 1.
\end{equation}
We want to emphasize that, up to this point, the analysis has been carried out in a classical manner; the quantum aspects will be analyzed in the next section. Nevertheless, we have shown that, if $\Psi_{\vec{k}}$ is classical and thus follows the dynamical evolution given by Einstein's (classical) equations, the super-horizon modes of the curvature perturbation are constant during the radiation era.
\section{Quantum analysis of the perturbations}\label{quantum}
In this section we proceed to establish the quantum theory of the inflaton perturbations. The difference with previous works \cite{PSS06,US08,DT11,Leon11} is that, in the present work, the scale factor of the background metric is given by Eq. \eqref{aeta}, which corresponds to a radiation dominated Universe, while in the cited works the scale factor corresponds to a (quasi) de-Sitter type of Universe. Consequently, we will construct the quantum theory of a scalar field in a radiation dominated FRW background.
We start by writing the action:
\begin{equation}\label{actioncol2}
S_{\text{inf}} = \int d^4x \sqrt{-g} \bigg[ -\frac{1}{2} \nabla_a \phi \nabla_b \phi g^{ab} - V[\phi] \bigg].
\end{equation}
Our fundamental quantum variable will be the fluctuation of the inflaton field, $\delta \phi (\vec{x},\eta)$; however, it will be easier to work with the rescaled field variable $y=a\delta \phi$. Next we expand the action \eqref{actioncol2} up to second order in the rescaled variable (i.e. up to second order in the scalar field fluctuations)
\begin{equation}\label{acciony}
\delta S^{(2)}= \int d^4x \delta \mathcal{L}^{(2)} = \int d^4x \frac{1}{2} \left[ y'^2 - (\nabla y)^2 + \left(\frac{a'}{a} \right)^2 y^2 - 2 \left(\frac{a'}{a} \right) y y' \right].
\end{equation}
The canonical momentum conjugated to $y$ is $\pi \equiv \partial \delta \mathcal{L}^{(2)}/\partial y' = y'-(a'/a)y=a\delta \phi'$. The field and momentum variables are promoted to operators satisfying the equal time commutator relations $[\hat{y}(\vec{x},\eta), \hat{\pi}(\vec{x}',\eta)] = i\delta (\vec{x}-\vec{x}')$ and $[\hat{y}(\vec{x},\eta), \hat{y}(\vec{x}',\eta)] = [\hat{\pi}(\vec{x},\eta), \hat{\pi}(\vec{x}',\eta)] = 0$. We expand the momentum and field operators in Fourier modes
\begin{equation}
\hat{y}(\eta,\vec{x}) = \frac{1}{L^3} \sum_{\vec{k}} \hat{y}_{\vec{k}} (\eta) e^{i \vec{k} \cdot \vec{x}} \qquad \hat{\pi}(\eta,\vec{x}) = \frac{1}{L^3} \sum_{\vec{k}} \hat{\pi}_{\vec{k}} (\eta) e^{i \vec{k} \cdot \vec{x}},
\end{equation}
where the sum is over the wave vectors $\vec k$ satisfying $k_i L=2\pi n_i$ for $i=1,2,3$ with $n_i$ integer and $\hat y_{\vec{k}} (\eta) \equiv y_k(\eta) \ann_{\vec{k}} + y_k^*(\eta) \cre_{-\vec{k}}$ and $\hat \pi_{\vec{k}} (\eta) \equiv g_k(\eta) \ann_{\vec{k}} + g_{k}^*(\eta) \cre_{-\vec{k}}$. From the previous expression it is clear that we are performing the quantization in a finite cubic box of length $L$; at the end of the calculations we will take the continuum limit ($L \to \infty$, $k \to $ cont.). The equation of motion for $y_k(\eta)$ derived from action \eqref{acciony} is
\begin{equation}\label{ykmov}
y_k''+\left( k^2 - \frac{a''}{a} \right) y_k = 0.
\end{equation}
It is worthwhile to mention that the scale factor $a$ corresponds to the radiation dominated era. In such a case, the scale factor is given by Eq. \eqref{aeta}; consequently, the motion equation \eqref{ykmov} reduces to
\begin{equation}
y_k'' + k^2 y_k = 0,
\end{equation}
which is the motion equation of a harmonic oscillator. The solutions are thus
\begin{subequations}\label{modosrad}
\begin{equation}
y_k(\eta) = A_k e^{ik\eta} + B_k e^{-ik\eta},
\end{equation}
\begin{equation}
g_k (\eta) = -A_k k \left( \frac{\mathcal{H}}{k} - i \right) e^{ik\eta} - B_k k \left( \frac{\mathcal{H}}{k} + i \right) e^{-ik\eta},
\end{equation}
\end{subequations}
where $A_k$ and $B_k$ are constants fixed by the canonical commutation relations between $\hat y$ and $\hat \pi$, which give $[\hat{a}_{\vec{k}},\hat{a}^\dag_{\vec{k}'}] = L^3 \delta_{\vec{k},\vec{k}'}$; thus $y_k(\eta)$ and $g_k(\eta)$ must satisfy $y_k g_k^* - y_k^* g_k = i$ for all $k$. However, this condition alone does not completely fix the constants $A_k$ and $B_k$: one still needs to select a vacuum state for the field. In order to proceed, we will select a vacuum state in the inflationary era (where $a''/a \simeq 2 \eta^{-2}$), where the quantum fluctuations of the inflaton field originate. There is a variety of choices for the vacuum state during inflation; one of the most common is the so-called Bunch-Davies (BD) vacuum, characterized by
\begin{equation}\label{ykbd}
y_k (\eta) = \frac{1}{\sqrt{2k}} \left(1 - \frac{i}{k\eta} \right) e^{-ik\eta}, \qquad g_k (\eta) = -i \sqrt{\frac{k}{2}} e^{-ik\eta}.
\end{equation}
Consequently, the constants $A_k$ and $B_k$ will be fixed by matching the modes during the inflationary era [Eqs. \eqref{ykbd}] and the modes during the radiation era [Eqs. \eqref{modosrad}] at the time $\eta_r$, which corresponds to the conformal time of the beginning of the radiation era and is essentially of the same order of magnitude as the conformal time that marks the end of inflation. Note that we are neglecting the reheating era, which describes the decay of the inflaton into all the fields characterizing the radiation type of matter. If one takes into account the interaction of the inflaton with the quantum fields representing the radiation matter, the vacuum state could possibly change; however, such a new vacuum state would still be perfectly homogeneous and isotropic. In other words, the reheating period cannot break the symmetry of the original quantum state, because its dynamics is given by the Schr\"odinger equation, which preserves the symmetry. For simplicity, we will not consider the reheating period and will assume that all the fields, before and after inflation, are characterized by the BD vacuum state.
Therefore, with the previous assumptions, the constants $A_k$ and $B_k$ are
\begin{equation}\label{AkBk}
A_k = \frac{e^{-2ik\eta_r}}{2^{3/2}k^{5/2}\eta_r^2}, \qquad B_k = \frac{1}{\sqrt{2k}} \left( 1 - \frac{i}{k\eta_r} \right) - \frac{1}{2^{3/2}k^{5/2}\eta_r^2}.
\end{equation}
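As a consistency check (a sketch with illustrative, order-one values of $k$ and $\eta_r$, not the physical values quoted earlier), one can verify numerically that the coefficients of Eq. \eqref{AkBk} make both $y_k$ and $y_k'$ continuous at the matching time $\eta_r$:

```python
import cmath
import math

def y_bd(k, eta):
    """Bunch-Davies mode during inflation, Eq. (ykbd)."""
    return (1.0 / math.sqrt(2.0 * k)) * (1.0 - 1j / (k * eta)) * cmath.exp(-1j * k * eta)

def dy_bd(k, eta):
    """Conformal-time derivative of the BD mode."""
    return (1.0 / math.sqrt(2.0 * k)) * cmath.exp(-1j * k * eta) * (
        1j / (k * eta**2) - 1j * k - 1.0 / eta)

def AB(k, eta_r):
    """Matching coefficients of Eq. (AkBk)."""
    A = cmath.exp(-2j * k * eta_r) / (2.0**1.5 * k**2.5 * eta_r**2)
    B = (1.0 / math.sqrt(2.0 * k)) * (1.0 - 1j / (k * eta_r)) \
        - 1.0 / (2.0**1.5 * k**2.5 * eta_r**2)
    return A, B

def y_rad(k, eta, A, B):
    """Radiation-era mode, Eqs. (modosrad)."""
    return A * cmath.exp(1j * k * eta) + B * cmath.exp(-1j * k * eta)

def dy_rad(k, eta, A, B):
    return 1j * k * (A * cmath.exp(1j * k * eta) - B * cmath.exp(-1j * k * eta))

k, eta_r = 0.3, -1.2    # illustrative values in arbitrary units (assumptions)
A, B = AB(k, eta_r)
print(abs(y_rad(k, eta_r, A, B) - y_bd(k, eta_r)))    # vanishes to machine precision
print(abs(dy_rad(k, eta_r, A, B) - dy_bd(k, eta_r)))  # vanishes to machine precision
```

Both differences vanish to machine precision, confirming that Eq. \eqref{AkBk} implements the matching of the mode function and its derivative at $\eta_r$.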
To recapitulate, the modes $y_k(\eta)$ originate during the inflationary epoch in the BD vacuum state. After inflation reaches its end at $\eta_r$ (and ignoring the reheating era), the radiation dominated epoch begins and the inflaton oscillates around the minimum of its potential. Its modes continue to evolve according to Eqs. \eqref{modosrad}; nevertheless, the quantum state of the modes is still the BD vacuum state, which is exactly homogeneous and isotropic; consequently, there are no inhomogeneities or anisotropies present at this stage of the evolution. Thus, as discussed in Sec. \ref{intro}, in order to account for the emergence of an anisotropic and inhomogeneous Universe from an exactly isotropic and homogeneous initial state of the primordial perturbations, we must consider a self-induced collapse of the wave function. In the following section, we will describe how to parameterize such a collapse and show how the primordial curvature perturbations are produced by the self-induced collapse in a radiation dominated era.
\section{The collapse model and the curvature perturbation }
\label{collapseschemes}
In this section, we will show how one can generate the primordial curvature perturbation during the radiation dominated era by introducing the collapse hypothesis.
The self-induced collapse hypothesis is based on considering that the collapse acts similarly to a ``measurement'' (clearly, there is no external observer or detector involved). This leads us to consider Hermitian operators, which in ordinary quantum mechanics are the ones susceptible to direct measurement. Therefore, we separate $\hat y_{\vec{k}} (\eta)$ and $\hat \pi_{\vec{k}} (\eta)$ into their real and imaginary parts, $\hat y_{\vec{k}} (\eta)=\hat y_{\vec{k}}{}^R (\eta) +i \hat y_{\vec{k}}{}^I (\eta)$ and $\hat \pi_{\vec{k}} (\eta) =\hat \pi_{\vec{k}}{}^R (\eta) +i \hat \pi_{\vec{k}}{}^I (\eta)$; in this way, the operators $\hat y_{\vec{k}}^{R, I} (\eta)$ and $\hat \pi_{\vec{k}}^{R, I} (\eta)$ are Hermitian. Thus,
\begin{equation}\label{operadoresRI}
\hat{y}_{\vec{k}}^{R,I} (\eta) = \sqrt{2} \mathcal{R}[y_k(\eta) \hat{a}_{\vec{k}}^{R,I}], \qquad \hat{\pi}_{\vec{k}}^{R,I} (\eta) = \sqrt{2} \mathcal{R}[g_k(\eta) \hat{a}_{\vec{k}}^{R,I}],
\end{equation}
where $\hat{a}_{\vec{k}}^R \equiv (\hat{a}_{\vec{k}} + \hat{a}_{-\vec{k}})/\sqrt{2}$, $\hat{a}_{\vec{k}}^I \equiv -i (\hat{a}_{\vec{k}} - \hat{a}_{-\vec{k}})/\sqrt{2}$. The commutation relations for the $\hat{a}_{\vec{k}}^{R,I}$ are non-standard
\begin{equation}\label{creanRI}
[\hat{a}_{\vec{k}}^R,\hat{a}_{\vec{k}'}^{R \dag}] = L^3 (\delta_{\vec{k},\vec{k}'} + \delta_{\vec{k},-\vec{k}'}), \quad [\hat{a}_{\vec{k}}^I,\hat{a}_{\vec{k}'}^{I \dag}] = L^3 (\delta_{\vec{k},\vec{k}'} - \delta_{\vec{k},-\vec{k}'}),
\end{equation}
with all other commutators vanishing.
One natural way to proceed is to assume that the effect of the collapse on a state is analogous to some sort of approximate measurement; in other words, after the collapse, the expectation values of the field and momentum operators in each mode will be related to the uncertainties of the initial state. In the vacuum state, $\hat{y}_{\vec{k}}$ and $\hat{\pi}_{\vec{k}}$ are individually distributed according to Gaussian wave functions centered at 0 with spreads $(\Delta \hat{y}_{\vec{k}})^2_0$ and $(\Delta\hat{\pi}_{\vec{k}})^2_0$, respectively. We consider various possibilities for such relations; we will refer to the different ways of characterizing the expectation values as ``collapse schemes''. So, even though we do not assume a specific collapse mechanism, the different schemes correspond to different ways for the collapse to happen, affecting either the field variable, the momentum variable, or both. The most generic form to characterize such collapse schemes is
\begin{subequations}\label{esquemas}
\begin{equation}
\langle \hat{y}^{R,I}_{\vec{k}}(\eta^c_{\vec{k}})\rangle_{\Theta} = \lambda_1 x_{\vec{k},1}^{R,I}
\sqrt{\left(\Delta \hat{y}^{R,I}_{\vec{k}} (\eta_k^c) \right)^2_0} = \lambda_1 x_{\vec{k},1}^{R,I} \frac{L^{3/2}}{\sqrt{2}} |y_k (\eta_{\vec{k}}^c)|,
\end{equation}
\begin{equation}
\langle \hat{\pi}^{R,I}_{\vec{k}}(\eta^c_k) \rangle_{\Theta} = \lambda_2 x_{\vec{k},2}^{R,I}
\sqrt{\left(\Delta \hat{\pi}^{R,I}_{\vec{k}} (\eta_k^c) \right)^2_0} = \lambda_2 x_{\vec{k},2}^{R,I} \frac{L^{3/2}}{\sqrt{2}} |g_k (\eta_{\vec{k}}^c)|.
\end{equation}
\end{subequations}
The subindex in $\langle \cdot \rangle_{\Theta}$ indicates that we are taking the expectation value in the post-collapse state $|\Theta \rangle$. The random variables $x_{\vec{k},1}^{R,I}, x_{\vec{k},2}^{R,I}$ are distributed according to a Gaussian centered at zero with unit spread, and are statistically uncorrelated; the quantity $\eta_{\vec{k}}^c$ denotes the conformal time of collapse, which in principle might depend on $k$. The parameters $\lambda_1, \lambda_2$ can only take the values 0 or 1; their only purpose is to ``switch on'' or ``switch off'' the variables affected by the collapse. For example, we can choose a scheme in which the momentum operator is affected by the collapse but not the field, i.e. $\langle \hat{\pi}_{\vec{k}} (\eta_{\vec{k}}^c) \rangle_{\Theta} \neq 0$, $\langle \hat{y}_{\vec{k}} (\eta_{\vec{k}}^c) \rangle_{\Theta} = 0$; this situation corresponds to setting $\lambda_2=1$, $\lambda_1=0$. In Sec. \ref{analisis} we will study in detail the primordial spectrum in three different cases: i) only the field variable is affected by the collapse, $\lambda_1=1$, $\lambda_2=0$; ii) only the momentum variable is affected by the collapse, $\lambda_1=0$, $\lambda_2=1$; iii) both variables are affected by the collapse, $\lambda_1=1$, $\lambda_2=1$.
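The sampling prescription of Eqs. \eqref{esquemas} can be sketched in code. The following is a minimal illustration (the mode amplitudes $|y_k(\eta_{\vec{k}}^c)|$, $|g_k(\eta_{\vec{k}}^c)|$ and the box size $L$ are taken as given inputs; the numbers used below are purely illustrative):

```python
import math
import random

def collapse_expectations(abs_yk, abs_gk, lambda1, lambda2, L=1.0, seed=None):
    """Sketch of the collapse schemes of Eqs. (esquemas): the post-collapse
    expectation values of y_k^{R,I} and pi_k^{R,I} are drawn from Gaussians
    of zero mean whose spreads are the pre-collapse vacuum uncertainties,
    L^{3/2}/sqrt(2) * |y_k| and L^{3/2}/sqrt(2) * |g_k|, with the switches
    lambda1 (field) and lambda2 (momentum) selecting the scheme."""
    rng = random.Random(seed)
    spread_y = (L**1.5 / math.sqrt(2.0)) * abs_yk
    spread_g = (L**1.5 / math.sqrt(2.0)) * abs_gk
    out = {}
    for part in ("R", "I"):
        x1 = rng.gauss(0.0, 1.0)   # x_{k,1}^{R,I}: unit spread, uncorrelated
        x2 = rng.gauss(0.0, 1.0)   # x_{k,2}^{R,I}
        out["y_" + part] = lambda1 * x1 * spread_y
        out["pi_" + part] = lambda2 * x2 * spread_g
    return out

# Scheme (ii): only the momentum collapses (lambda1 = 0, lambda2 = 1), so the
# field expectation values remain exactly zero while the momentum ones do not.
vals = collapse_expectations(abs_yk=0.7, abs_gk=1.3, lambda1=0, lambda2=1, seed=1)
print(vals["y_R"] == 0.0 and vals["y_I"] == 0.0)
```

The three cases studied in Sec. \ref{analisis} correspond to the switch settings $(\lambda_1,\lambda_2) = (1,0)$, $(0,1)$ and $(1,1)$.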
The next step is to relate the quantum objects to the observational quantities; but before proceeding in that direction, we would like to introduce the way in which we believe the quantum degrees of freedom (DOF) relate to the classical description of the space-time in terms of the metric.
\subsection{The semiclassical gravity approach and the collapse of the wave function}\label{semiclassical}
We will rely on the so-called ``semiclassical gravity'' approach. This approach is characterized by Einstein's semiclassical equations $G_{ab} = 8 \pi G \langle \hat{T}_{ab} \rangle$, which relate the quantum matter DOF to the classical description of gravity in terms of the metric. The semiclassical approach is a valid approximation at the energy scales of interest here; moreover, it leads us to consider that the Universe can be described by what has been called a \textit{Semiclassical Self-consistent Configuration} (SSC), first introduced in Ref. \cite{DT11}. In the following, we present a brief description of this idea.
The SSC consists of a space-time geometry characterized by a classical metric, a standard quantum field theory constructed on that fixed background space-time, and a particular quantum state in that construction, such that the semiclassical Einstein's equations hold. Specifically, one establishes that the set
\begin{equation}
\left\lbrace g_{\mu\nu}(x),\hat{\varphi}(x),\hat{\pi}(x),\mathscr{H},\vert\xi\rangle\in\mathscr{H}\right\rbrace
\end{equation}
characterizes a SSC if and only if $\hat{\varphi}(x)$, $\hat{\pi}(x)$ and $\mathscr{H}$ correspond to a quantum field theory constructed over a space-time with metric $g_{\mu\nu} (x)$ (as described in, say \cite{Wald94}), and the state $\vert\xi\rangle$ in $\mathscr{H}$ is such that
\begin{equation}\label{Mset-up}
G_{\mu\nu}[g(x)]=8\pi G\langle\xi\vert \hat{T}_{\mu\nu}[g(x),\hat{\varphi}(x),\hat{\pi}(x)]\vert\xi\rangle,
\end{equation}
for all the points in the space-time manifold.
Such a description is thought to be appropriate in the regime of interest, except at those times when a collapse occurs. In particular, if one considers a specific collapse mechanism, then Eq. \eqref{Mset-up} will not hold; this is because the quantum collapse induces sudden changes, or ``state jumps'', in the initial quantum state, so that $\nabla_a \langle \hat{T}^{ab} \rangle \neq 0$, which would imply $\nabla_a G^{ab} \neq 0$. That is evidently a problem, since it is a well-known result of General Relativity that the divergence of Einstein's tensor vanishes identically. Nevertheless, since we will only be interested in states \emph{before} and \emph{after} the collapse, this breakdown of the semiclassical approximation is not important for the present work. During the collapse, the dynamics of the space-time would be affected; but, in the absence of a fully workable theory of quantum gravity, we cannot characterize the metric's dynamical response to the modification of the standard unitary quantum evolution.
The relation between the SSC and the collapse process can be described in a more formal way. First, within the Hilbert space associated to a given SSC-i, one can consider that a transition $\vert\xi^{\textrm{(i)}}\rangle\to\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ ``is about to happen'', with both $\vert\xi^{\textrm{(i)}}\rangle$ and $\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ in $\mathscr{H}^{\textrm{(i)}}$. In general, the set $\{g^{\textrm{(i)}},\hat{\varphi}^{\textrm{(i)}},\hat{\pi}^{\textrm{(i)}}, \mathscr{H}^{\textrm{(i)}},\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}\}$ will not characterize a new SSC. In order to obtain a reasonable picture, as presented in Ref. \cite{DT11}, one needs to relate the state $\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ to another one, $\vert\zeta^{\textrm{(ii)}}\rangle$, ``existing'' in a new Hilbert space $\mathscr{H}^{\textrm{(ii)}}$ for which $\{g^{\textrm{(ii)}},\hat{\varphi}^{\textrm{(ii)}},\hat{\pi}^{\textrm{(ii)}}, \mathscr{H}^{\textrm{(ii)}},\vert\zeta^{\textrm{(ii)}}\rangle\}$ is a valid SSC; this new SSC is denoted by SSC-ii. Consequently, one first needs to determine the ``target'' (non-physical) state in $\mathscr{H}^{\textrm{(i)}}$ to which the initial state is ``tempted'' to jump, so to speak; after that, one can relate such a target state to a corresponding state in the Hilbert space of a new SSC, the SSC-ii. One then considers that the target state is chosen stochastically, guided by the quantum uncertainties of designated field operators, evaluated on the initial state $\vert\xi^{\textrm{(i)}}\rangle$ at the collapsing time; this was the motivation behind the characterization of the collapse schemes presented in Eqs. \eqref{esquemas}.
Regarding the identification between the two different SSC's involved in the collapse, the prescription introduced in Ref. \cite{DT11} is the following: Assume that the collapse takes place along a Cauchy hyper-surface $\Sigma$. A transition from the physical state $\vert\xi^{\textrm{(i)}}\rangle$ in $\mathscr{H}^{\textrm{(i)}}$ to the physical state $\vert\zeta^{\textrm{(ii)}}\rangle$ in $\mathscr{H}^{\textrm{(ii)}}$ (associated to a certain target \textit{non-physical} state $\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ in $\mathscr{H}^{\textrm{(i)}}$) will occur in a way that
\begin{equation}\label{recipe.collapses}
_\textrm{target}\langle\zeta^{\textrm{(i)}}\vert \hat{T}^{\textrm{(i)}}_{\mu\nu}[g^{\textrm{(i)}}, \hat{\varphi}^{\textrm{(i)}},\hat{\pi}^{\textrm{(i)}}]\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}} \big|_{\Sigma}= \langle\zeta^{\textrm{(ii)}}\vert \hat{T}^{\textrm{(ii)}}_{\mu\nu}[g^{\textrm{(ii)}}, \hat{\varphi}^{\textrm{(ii)}},\hat{\pi}^{\textrm{(ii)}}]\vert\zeta^{\textrm{(ii)}}\rangle \big|_{\Sigma}\,
\end{equation}
i.e. in such a way that the expectation values of the energy-momentum tensor, associated to the states $\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ and $\vert\zeta^{\textrm{(ii)}}\rangle$ and evaluated on the Cauchy hyper-surface $\Sigma$, coincide. Note that the left-hand side of the expression above is meant to be constructed from the elements of the SSC-i (although $\vert\zeta^{\textrm{(i)}}\rangle_{\textrm{target}}$ is not really {\it the state} of the SSC-i), while the right-hand side corresponds to quantities evaluated using the SSC-ii.
In the situation of interest for this work, the SSC-i corresponds to a homogeneous and isotropic space-time characterized by $\Psi=0$ with the state of the quantum field corresponding to the Bunch-Davies vacuum. Meanwhile, the SSC-ii corresponds to an excitation of all the modes $k$, characterized by the Newtonian potential $\Psi_{\vec{k}}$. In particular, the post-collapse state $|\zeta^{\textrm{(ii)}} \rangle$ is explicitly
\begin{equation}
|\zeta^{\textrm{(ii)}} \rangle = \ldots |\zeta^{\textrm{(ii)}}_{-\vec{k}_2} \rangle \otimes |\zeta^{\textrm{(ii)}}_{-\vec{k}_1} \rangle \otimes |\zeta^{\textrm{(ii)}}_0 \rangle \otimes |\zeta^{\textrm{(ii)}}_{\vec{k}_1} \rangle \otimes |\zeta^{\textrm{(ii)}}_{\vec{k}_2} \rangle \ldots,
\end{equation}
which means that the collapse process affects all modes of the quantum field. Given the previous prescription for the post-collapse state, and considering the SSC-ii, we can now associate each mode of the post-collapse state to each mode characterized by $\Psi_{\vec{k}}$. In this way the metric perturbations $\Psi(x)$ are born, and thus the SSC-ii corresponds to an inhomogeneous and anisotropic space-time at all scales $k$; in particular, $\Psi_{\vec{k}}$ includes modes that are both super-horizon and sub-horizon.
One advantage of relying on the semiclassical approach is that it allows us to present a clear picture of the physical process (although not exactly known) responsible for the birth of the primordial perturbations from the quantum collapse: the initial state of the Universe is described by both a homogeneous-isotropic vacuum state and an equally homogeneous-isotropic Friedmann-Robertson-Walker space-time. Then, at some point during the radiation epoch, some unknown physical mechanism causes a quantum collapse of the matter field wave function. However, the state resulting from the collapse need not share the same symmetries as the initial state. After the collapse, the gravitational DOF are assumed to be, once more, accurately described by Einstein's semiclassical equation. Nevertheless, $\langle \hat{T}_{ab} \rangle$ evaluated in the new state does not generically possess the symmetries of the pre-collapse state; hence, we are led to a new geometry that is no longer homogeneous and isotropic.
We should note here that we will not be using at this point the full fledged formal treatment developed above. This is because, as can be seen in Ref. \cite{DT11}, the problem becomes extremely cumbersome even in the treatment of a single mode. Thus, even though it is in principle possible to use such a detailed formalism to treat the complete set of relevant modes, when studying the CMB spectrum the task quickly becomes a practical impossibility. We will instead rely on the less formal treatment we have employed in previous works. That is, we can assume that after the collapse has ended, and having constructed a SSC-ii, we can generalize Eq. \eqref{masterx} in the following manner:
\begin{equation}\label{master4}
\Psi_{\vec{k}} (\eta_{\vec{k}}^c) = \frac{\sqrt{\rho_\phi}}{\sqrt{2} M_P^2 \left( -k^2 + \rho_\phi a_c^2 / M_P^2 \right)} \bigg( \langle\hat{\pi}_{\vec{k}} (\eta_{\vec{k}}^c) \rangle + 3 \mathcal{H}_c \langle\hat{y}_{\vec{k}} (\eta_{\vec{k}}^c) \rangle \bigg) ,
\end{equation}
with $a_c \equiv a(\eta_{\vec{k}}^c)$ and $ \mathcal{H}_c \equiv \mathcal{H} (\eta_{\vec{k}}^c) $. The condition that the proper wavelength associated with a mode is larger than the Hubble radius at the time of collapse is given by $k \eta_{\vec{k}}^c - \delta_k \ll 1$; upon using the numerical values for $a_r,\eta_r,C$ one obtains $\delta_k \simeq 10^{-22}$; thus, the time of collapse must satisfy $k\eta_{\vec{k}}^c \ll 1$.
Equation \eqref{master4} is the main result of this section, as it relates the primordial curvature perturbation with the quantum expectation values after the collapse; i.e. it is an expression that relates the metric perturbation with the parameters characterizing the collapse. In this manner, the quantum collapse of the wave function can generate the primordial cosmic seeds in the radiation era. Note that, as discussed above, the collapse affects all modes; therefore we could use Eq. \eqref{masterx}, which corresponds to the super-horizon modes. The sub-horizon modes are present too, but, as shown in Sec. \ref{classical}, they decay as $1/a(\eta)^2$. Furthermore, within the semiclassical approach, the metric is always a classical object; therefore its dynamics during the radiation era is exactly given by the equation of motion \eqref{movpsi}, and, as we have argued, it will not be modified once the collapse mechanism has ended.
It is worth noting that, by relying on the semiclassical approach, we have no issue regarding the ``quantum-to-classical'' transition that is always present in the traditional approach, namely, finding a justification for going from a strictly quantum object $\hat{\Psi}_{\vec{k}}$ to a classical stochastic field $\Psi_{\vec{k}}$. The next task is to obtain an equivalent power spectrum for the primordial perturbations that is consistent with the observational data.
Regarding the tensor modes and the semiclassical gravity approach, we should mention that recent observational data \cite{BICEP2} suggest that the amplitude corresponding to the tensor modes may be non-trivial. Additionally, in our approach, the source of the curvature perturbations lies in the quantum inhomogeneities of the inflaton field (after the collapse). Once the collapse has taken place, the inhomogeneities of the inflaton feed into the gravitational DOF, leading to perturbations in the metric components. However, the metric itself is not a source of the self-induced collapse. Therefore, since the scalar field does not act as a source for the metric tensor modes, at least not at the first order considered here, the analysis concerning the amplitude of the primordial gravitational waves should be done at second order in the perturbations; such an analysis is beyond the scope of this paper and will be the subject of future research. On the other hand, if one takes the view that both metric and matter perturbations should be quantized, say at the level of the Mukhanov-Sasaki variable, then one could still implement a specific collapse mechanism for this variable. Quantizing matter and metric perturbations would yield a non-trivial amplitude for first-order tensor modes (in the same vein as in the standard approach); after putting into effect a mechanism responsible for collapsing the wave function, one can then look for possible modifications to the tensor power spectra and their implications. In the particular case of the CSL mechanism, this type of analysis has been done in Ref. \cite{TPsinghBICEP2}.
\section{Observational quantities}
\label{oquantities}
In this section, we will relate the parameters characterizing the collapse with the observational quantities.
The temperature anisotropies $ \frac{\delta T}{T_0}$ of the CMB are clearly the most direct observational quantity available ($T_0$ is the mean temperature). One can expand such anisotropies with the help of the spherical harmonics $ \frac{\delta T}{T_0} (\theta,\varphi) = \sum_{l,m} a_{lm} Y_{lm} (\theta,\varphi)$; therefore, the coefficients $a_{lm}$ are given by
\begin{equation}\label{alm0}
a_{lm} = \int \Theta (\hat n) Y_{lm}^\star (\theta,\varphi) d\Omega,
\end{equation}
with $\hat n = (\sin \theta \sin \varphi, \sin \theta \cos \varphi, \cos \theta)$ and $\theta,\varphi$ the coordinates on the celestial two-sphere; we have also defined $\Theta (\hat {n}) \equiv \delta T (\hat n)/ T_0$. Assuming instantaneous recombination, the relation between the primordial perturbations and the observed CMB anisotropies is
\begin{equation}\label{mastertemp}
\Theta (\hat n) = [\Psi + \frac{1}{4} \delta_\gamma] (\eta_D) + \hat n \cdot \vec{v}_\gamma (\eta_D) + 2 \int_{\eta_D}^{\eta_0} \Psi'(\eta) d\eta,
\end{equation}
where $\eta_D$ is the time of decoupling; $\delta_\gamma$ and $\vec{v}_\gamma$ are the density perturbation and velocity of the radiation fluid (which are generated after the collapse, i.e. once the curvature perturbation $\Psi$ has originated).
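As a numerical aside (not part of the derivation), the harmonic projection in Eq. \eqref{alm0} can be illustrated with a short Python sketch: for an azimuthally symmetric toy map built from the real $Y_{20}$ plus a monopole, orthonormality of the spherical harmonics returns exactly the coefficient the map was built with. The function names and the toy map are illustrative only.

```python
import numpy as np

def a20_from_map(theta_of_x, n_nodes=64):
    """Quadrature version of Eq. (alm0) for an azimuthally symmetric map:
    a_20 = int Theta(n) Y*_20 dOmega.  Gauss-Legendre nodes in x = cos(theta);
    the phi integral contributes a factor 2*pi."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * x**2 - 1.0)  # real Y_20
    return 2.0 * np.pi * np.sum(w * theta_of_x(x) * y20)

# Toy map: 2.5*Y_20 plus a monopole; orthonormality should return 2.5.
y00 = 1.0 / np.sqrt(4.0 * np.pi)
toy = lambda x: 2.5 * np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * x**2 - 1.0) + y00
print(a20_from_map(toy))  # -> 2.5 (to machine precision)
```

Since the integrand is a degree-four polynomial in $\cos\theta$, Gauss-Legendre quadrature with a handful of nodes already evaluates it exactly.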
It is common practice to decompose the temperature anisotropies in Fourier modes
\begin{equation}
\Theta (\hat n) = \sum_{\vec{k}} \frac{\Theta (\vec{k})}{L^3} e^{i \vec{k} \cdot R_D \hat n},
\end{equation}
with $R_D$ the radius of the last scattering surface. Afterwards, one solves the fluid motion equations with the initial condition $\Psi_{\vec{k}}$, which in our model corresponds to $\Psi_{\vec{k}} (\eta_{\vec{k}}^c)$, i.e. the curvature perturbation at the time of collapse, Eq. \eqref{master4}.
Furthermore, using that $e^{i \vec{k} \cdot R_D \hat n} = 4 \pi \sum_{lm} i^l j_l (kR_D) Y_{lm} (\theta,\varphi) Y_{lm}^\star (\hat k )$, expression \eqref{alm0} can be rewritten as
\begin{equation}\label{alm1}
a_{lm} = \frac{4 \pi i^l}{L^3} \sum_{\vec{k}} j_l (kR_D) Y_{lm}^\star(\hat k) \Theta (\vec{k}),
\end{equation}
with $j_l (kR_D)$ the spherical Bessel function of order $l$.
The linear evolution which relates the initial curvature perturbation $\Psi_{\vec{k}}$ and the temperature anisotropies $\Theta (\vec{k})$ is summarized in the transfer function $T(k)$; in other words, $T(k)$ is the result of solving the fluid motion equations (for one mode) with the initial condition provided by the curvature perturbation $\Psi_{\vec{k}}$ and then making use of Eq. \eqref{mastertemp} to relate it to the temperature anisotropies. Thus, $\Theta (\vec{k}) = T(k) \Psi_{\vec{k}}$.
Consequently, the coefficients $a_{lm}$, in terms of the modes $\Psi_{\vec{k}} (\eta_{\vec{k}}^c)$, are given by
\begin{equation}\label{alm2}
a_{lm} = \frac{4 \pi i^l}{ L^3} \sum_{\vec{k}} j_l (kR_D) Y_{lm}^\star(\hat k) T (k) \Psi_{\vec{k}} (\eta_{\vec{k}}^c).
\end{equation}
We emphasize that $\Psi_{\vec{k}} (\eta_{\vec{k}}^c)$ must correspond to the modes such that $z_k \equiv k\eta_{\vec{k}}^c \ll 1$, because, as explained in Sec. \ref{classical}, only the super-horizon modes are relevant in this context. Substituting Eq. \eqref{master4} and using Eqs. \eqref{esquemas} (i.e. the collapse schemes) in Eq. \eqref{alm2} yields
\begin{equation}\label{almrandom}
a_{lm} = \frac{2 \pi i^l}{ L^{3/2}} \frac{\sqrt{\rho_\phi}}{M_P^2} \sum_{\vec{k}} \frac{j_l (kR_D) Y_{lm}^\star (\hat k) T(k)}{(-k^2 + \rho_\phi a_c^2/M_P^2)} \bigg( \lambda_2 X_{\vec{k},2} |g_k (\eta_{\vec{k}}^c)| + 3\mathcal{H}_c \lambda_1 X_{\vec{k},1} |y_k (\eta_{\vec{k}}^c) | \bigg),
\end{equation}
where $X_{\vec{k},j} \equiv x_{\vec{k},j}^R + i x_{\vec{k},j}^I$ ($j=1,2$).
One key aspect in which our treatment differs from those followed in the standard approaches is the manner in which the results from the formalism are connected to observations. This is most clearly exhibited by our result regarding the quantity $a_{lm}$ in Eq. \eqref{almrandom}. Despite the fact that we have in principle a closed expression for the quantity of interest, we cannot use Eq. \eqref{almrandom} to make a definite prediction, because the expression involves the numbers $X_{\vec{k},j}$ that correspond, as we indicated before, to a random choice ``made by nature'' in the context of the collapse process. The way one makes predictions is by regarding the sum appearing in Eq. \eqref{almrandom} as representing a kind of two-dimensional random walk, i.e., the sum of complex numbers depending on random choices (characterized by the $X_{\vec{k},j}$). As is well known, for a random walk one cannot predict the final displacement (which would correspond to the complex quantity $a_{lm}$), but one might estimate the most likely value of the magnitude of such displacement. Thus, we focus precisely on the most likely value of $|a_{lm}|$, which we denote by $|a_{lm}|_{\text{M.L.}}$. In order to compute that quantity, we make use of a fiducial (imaginary) ensemble of realizations of the random walk and compute the ensemble average of the squared total displacement. Thus we identify:
\begin{equation}\label{ML}
|a_{lm}|_{\text{M.L.}}^2 = \overline{|a_{lm}|^2}.
\end{equation}
The over-line denotes the average over the fiducial ensemble of possible realizations, i.e. of possible outcomes of the random variables, where each outcome corresponds to a single Universe. Thus, we identify the ensemble average over possible realizations with the most likely value, and this most likely value with the one characterizing our Universe.
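The random-walk logic behind Eq. \eqref{ML} can be illustrated with a short Monte Carlo sketch (Python; schematic, with real weights $c_k$ standing in for the deterministic factors multiplying each $X_{\vec{k}}$): averaged over many realizations of the fiducial ensemble, the squared magnitude of the walk converges to twice the sum of the squared weights, which is exactly the structure exploited below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each X_k = x_k^R + i x_k^I with unit-variance Gaussian components, so the
# ensemble average of X_k X_k* is 2 (the normalization used in the text).
n_modes, n_real = 200, 5000
c = rng.standard_normal(n_modes)                      # fixed deterministic weights
X = (rng.standard_normal((n_real, n_modes))
     + 1j * rng.standard_normal((n_real, n_modes)))   # one row per "universe"
walk = X @ c                                          # two-dimensional random walks
mean_sq = np.mean(np.abs(walk) ** 2)                  # fiducial-ensemble average
prediction = 2.0 * np.sum(c ** 2)                     # 2 * sum_k |c_k|^2
print(mean_sq / prediction)                           # close to 1
```

Each individual realization of $|{\rm walk}|^2$ fluctuates wildly; only the ensemble average is predictable, which is precisely why the identification with the most likely value is needed.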
The estimate is done now in the standard way in which one deals with such random walks:
\begin{eqnarray}\label{ML2}
|a_{lm}|^2_{\text{M.L.}} = \overline{ |a_{lm}|^2}&=& \frac{4\pi^2 \rho_\phi}{ L^3 M_P^4}\sum_{\vec{k},\vec{k}'} \frac{j_l(kR_D) j_l(k'R_D) Y_{lm}^\star (\hat k) Y_{lm} (\hat k') T(k) T(k')}{(-k^2+\rho_\phi a_c^2 /M_P^2 )(-k'^2+\rho_\phi a_c^2 /M_P^2 )} \nonumber \\
&\times& \overline{\left( \lambda_2 X_{\vec{k},2} |g_k (\eta_{\vec{k}}^c)| + 3\mathcal{H}_c \lambda_1 X_{\vec{k},1} |y_k (\eta_{\vec{k}}^c) | \right) \left( \lambda_2 X_{\vec{k}',2}^\star |g_{k'} (\eta_{\vec{k}'}^c)| + 3\mathcal{H}_c \lambda_1 X_{\vec{k}',1}^\star |y_{k'} (\eta_{\vec{k}'}^c) | \right)},
\end{eqnarray}
which, upon using the normalized Gaussian assumption for the fiducial ensemble, that is, $ \overline{X_{\vec{k},j} X^\star_{\vec{k}',j'}} = 2 \delta_{j,j'} \delta_{\vec{k}, \vec{k}'} $, leads to
\begin{equation}\label{ML3}
|a_{lm}|^2_{\text{M.L.}} = \frac{8\pi^2 \rho_\phi}{ L^3 M_P^4} \sum_{\vec{k}} \frac{ j_l (kR_D)^2 | Y_{lm} (\hat k)|^2 T(k)^2}{(-k^2 + \rho_\phi a_c^2 /M_P^2)^2} \left( \lambda_2^2 |g_k (\eta_{\vec{k}}^c)|^2 + 9 \mathcal{H}_c^2 \lambda_1^2 |y_k (\eta_{\vec{k}}^c) | ^2 \right).
\end{equation}
Finally, we can remove the fiducial box of side $L$ and pass to the continuum
\begin{equation}\label{ML4}
|a_{lm}|^2_{\text{M.L.}}
= \frac{ \rho_\phi}{ \pi M_P^4} \int d^3k \frac{ j_l (kR_D)^2 | Y_{lm} (\hat k)|^2 T(k)^2}{(-k^2 + \rho_\phi a_c^2 /M_P^2)^2} \left( \lambda_2^2 |g_k (\eta_{\vec{k}}^c)|^2 + 9 \mathcal{H}_c^2 \lambda_1^2 |y_k (\eta_{\vec{k}}^c) | ^2 \right).
\end{equation}
The exact expressions for $|y_k (\eta_{\vec{k}}^c)|$ and $|g_k (\eta_{\vec{k}}^c)|$ can be obtained from Eq. \eqref{ykbd} [with $A_k$ and $B_k$ given in Eqs. \eqref{AkBk}]; these are
\begin{equation}\label{yk}
|y_k (\eta_{\vec{k}}^c)|^2 = \frac{1}{2k} \left[ 1 + \frac{1}{2\sigma_k^4} + \frac{\cos 2D_k}{\sigma_k^2} \left(1 - \frac{1}{2\sigma_k^2} \right) - \frac{\sin 2 D_k}{\sigma_k^3} \right]
\end{equation}
and
\begin{eqnarray}\label{gk}
|g_k (\eta_{\vec{k}}^c)|^2 &=& \frac{k}{2} \bigg\{ \left( \frac{\mathcal{H}_c^2}{k^2} +1 \right) \left( 1 + \frac{1}{2\sigma_k^4 } \right) + \frac{\cos 2D_k}{\sigma_k^2} \left[ \left( \frac{\mathcal{H}_{c}^{2}}{k^2} -1 \right) \left( 1 - \frac{1}{2\sigma_k^2 } \right) +\frac{2 \mathcal{H}_c}{k \sigma_k} \right] \nonumber \\
&-& \frac{\sin 2 D_k}{\sigma_k^2} \left[ - \frac{2 \mathcal{H}_c}{k}\left( 1 - \frac{1}{2\sigma_k^2} \right) + \left( \frac{\mathcal{H}_{c}^{2}}{k^2} -1 \right) \frac{1}{\sigma_k} \right] \bigg\},
\end{eqnarray}
where $\sigma_k \equiv k\eta_r $, $z_k \equiv k\eta_{\vec{k}}^c$ and $D_k \equiv z_k - \sigma_k$.
At this point, one could focus on the quantity that is commonly presented as a direct result from the observational data, namely
\begin{equation}\label{ML6}
C_l \equiv \frac{1}{2l+1} \sum_m |a_{lm}|^2
\end{equation}
for which we would have the estimate
\begin{eqnarray}\label{ML7}
{C_l}^{\text{M.L.}} &\equiv& \frac{1}{2l+1} \sum_m |a_{lm}|_{\text{M.L.}}^2 \nonumber \\
&=& \frac{ \rho_\phi}{ \pi M_P^4} \int_0^\infty \frac{dk}{k} \frac{ j_l (kR_D)^2 T(k)^2 k^3}{(-k^2 + \rho_\phi a_c^2 /M_P^2)^2} \left( \lambda_2^2 |g_k (\eta_{\vec{k}}^c)|^2 + 9 \mathcal{H}_c^2 \lambda_1^2 |y_k (\eta_{\vec{k}}^c) | ^2 \right).
\end{eqnarray}
In the standard inflationary paradigm, a well-known result is that the dimensionless power spectrum $\Delta^2 (k)$ for the curvature perturbation and the $C_l$ are related by
\begin{equation}\label{cl}
C_l = \frac{4 \pi}{9} \int_0^\infty \frac{dk}{k} j_l^2 (kR_D) T(k)^2 \Delta^2 (k).
\end{equation}
Thus, by comparing Eq. \eqref{ML7} with \eqref{cl} we can extract an ``equivalent power spectrum'' for the $\Psi_{\vec{k}}$
\begin{equation}\label{psexacto}
\Delta^2 (k) = \frac{9 \rho_\phi}{4 \pi^2 M_P^4} \frac{k^3}{ (-k^2 + \rho_\phi a_c^2 /M_P^2)^2 } \left( \lambda_2^2 |g_k (\eta_{\vec{k}}^c)|^2 + 9 \mathcal{H}_c^2 \lambda_1^2 |y_k (\eta_{\vec{k}}^c) | ^2 \right).
\end{equation}
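As a quick numerical check of the relation \eqref{cl} (an illustrative Python sketch, not part of the analysis), consider the idealized case of an exactly flat spectrum $\Delta^2(k) = \mathcal{A}$ and a trivial transfer function $T(k)=1$. Using the known integral $\int_0^\infty j_l^2(x)\,dx/x = [2l(l+1)]^{-1}$, one recovers the familiar Sachs-Wolfe plateau $l(l+1)C_l = 2\pi\mathcal{A}/9$, independent of $l$:

```python
import numpy as np
from scipy.special import spherical_jn

def cl_flat(l, amplitude=1.0, x_max=2000.0, dx=0.01):
    """Eq. (cl) with Delta^2 = amplitude and T(k) = 1, in the dimensionless
    variable x = k R_D: C_l = (4 pi/9) A int_0^inf j_l(x)^2 dx/x."""
    x = np.arange(dx, x_max, dx)
    return 4.0 * np.pi / 9.0 * amplitude * np.sum(spherical_jn(l, x) ** 2 / x) * dx

# l(l+1) C_l should be independent of l: the Sachs-Wolfe plateau 2*pi*A/9.
for l in (2, 10):
    print(l * (l + 1) * cl_flat(l))  # both close to 2*pi/9 ~ 0.70
```

The truncation of the integral at $x_{\max}=2000$ is harmless, since the tail of $j_l^2(x)/x$ falls off as $1/(2x^3)$.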
In the next section, we will show that, under certain conditions, the power spectrum given in Eq. \eqref{psexacto} can be approximated to yield a nearly scale invariant spectrum with the correct amplitude.
\section{Analysis of the equivalent power spectrum}
\label{analisis}
In this section, we will study different cases and show that, under specific conditions, our model reproduces a nearly flat power spectrum. In standard inflationary models, the power spectrum has the phenomenological expression $\Delta^2 (k) = \mathcal{A} k^{n_s-1}$, with $n_s$ the scalar spectral index of the perturbations. A perfectly scale-invariant spectrum corresponds to $n_s = 1$. However, the most recent results from the \emph{Planck} mission rule out exact scale invariance (at over $5 \sigma$; the spectral index is $n_s = 0.9603 \pm 0.0073$). Therefore, we will explore the conditions given in our model that lead to a nearly scale-invariant spectrum. Note, however, that the departure from perfect scale invariance will be given by having introduced the collapse hypothesis. Thus, the dependence on $k$ introduced by the collapse proposal will be different from the standard one.
Our first approximation concerns the scale factor at the time of collapse, namely $a_c = C(\eta_{\vec{k}}^c - \eta_r) + a_r$; if we assume that $\eta_{\vec{k}}^c \gg |\eta_r|$, then $a_c \simeq C\eta_{\vec{k}}^c$. Additionally, the conformal Hubble factor at the time of collapse is $\mathcal{H}_c = (\eta_{\vec{k}}^c-\eta_r + a_r/C )^{-1}$, which can be approximated by $\mathcal{H}_c \simeq 1/\eta_{\vec{k}}^c$. Thus, the power spectrum in Eq. \eqref{psexacto} is approximately
\begin{equation}\label{ps1}
\Delta^2 (k) \simeq \frac{9 \rho_\phi}{8 \pi^2 M_P^4} \frac{k^4}{ [-k^2 + \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2]^2 } \left( \lambda_2^2 N(z_k) + 9 \lambda_1^2 M(z_k) \right),
\end{equation}
where
\begin{equation}
M(z_k) \equiv \frac{1}{z_k^2} \left[1 + \frac{1}{2 \sigma_k^4} + \frac{\cos(2z_k - 2\sigma_k)}{\sigma_k^2} \left( 1-\frac{1}{2\sigma_k^2}\right) - \frac{\sin(2z_k-2\sigma_k)}{\sigma_k^3} \right]
\end{equation}
and
\begin{eqnarray}
N(z_k) &\equiv& 1+ \frac{1}{z_k^2} + \frac{1}{2 \sigma_k^4} + \frac{1}{2 \sigma_k^4 z_k^2} \nonumber \\
&+& \cos(2z_k - 2\sigma_k) \left( -\frac{1}{\sigma_k^2} + \frac{1}{z_k^2 \sigma_k^2} + \frac{1}{2 \sigma_k^4} - \frac{1}{2 z_k^2 \sigma_k^4} + \frac{2}{z_k \sigma_k^3} \right) \nonumber \\
&-& \sin(2z_k - 2\sigma_k) \left( - \frac{2}{z_k \sigma_k^2} + \frac{1}{z_k \sigma_k^4} +\frac{1}{z_k^2 \sigma_k^3} - \frac{1}{\sigma_k^3}\right).
\end{eqnarray}
Moreover, we can make another approximation by considering the fact that $\sigma_k \equiv k \eta_r \ll 1$. Hence, one can take the first two terms of the series expansions for $\sin (2\sigma_k)$ and $\cos(2\sigma_k)$ and, after simplifying, retain only the dominant term, which is of order $\mathcal{O}(\sigma_k^{-4}) $. Thus,
\begin{equation}\label{Map1}
M(z_k) \simeq \frac{1}{\sigma_k^4} \frac{\sin^2 z_k}{z_k^2}
\end{equation}
and
\begin{equation}\label{Nap1}
N(z_k) \simeq \frac{1}{\sigma_k^4} \left[ \frac{1}{2} + \frac{1}{2z_k^2} + \cos(2z_k) \left( \frac{1}{2} - \frac{1}{2 z_k^2}\right) -\frac{\sin(2z_k)}{z_k} \right].
\end{equation}
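The quality of this $\mathcal{O}(\sigma_k^{-4})$ truncation is easy to check numerically. The Python sketch below (illustrative values of $z_k$ and $\sigma_k$) compares the exact $M(z_k)$ with Eq. \eqref{Map1}; the check for $N(z_k)$ is entirely analogous.

```python
import numpy as np

def M_exact(z, s):
    """M(z_k) as given in the text, with z = z_k and s = sigma_k."""
    return (1.0 / z**2) * (1.0 + 1.0 / (2.0 * s**4)
                           + np.cos(2.0 * (z - s)) / s**2 * (1.0 - 1.0 / (2.0 * s**2))
                           - np.sin(2.0 * (z - s)) / s**3)

def M_approx(z, s):
    """Dominant O(sigma_k^-4) term, Eq. (Map1)."""
    return np.sin(z) ** 2 / (s**4 * z**2)

# Illustrative values: z_k = 0.1 (super-horizon) and sigma_k = k*eta_r << z_k.
for s in (1e-4, 1e-6):
    print(abs(M_exact(0.1, s) / M_approx(0.1, s) - 1.0))  # shrinks with sigma_k
```

The relative error of the truncation scales roughly as $\sigma_k/z_k$, so for the tiny values of $\sigma_k$ relevant here it is completely negligible.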
There are two limiting cases we can further analyze at this point: $k^2 \ll \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2$ and $k^2 \gg \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2$. Let us focus on the first case.
If $k^2 \ll \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2$ then the power spectrum in Eq. \eqref{ps1} can be further approximated as
\begin{equation}\label{ps2}
\Delta^2 (k) \simeq \frac{9}{8 \pi^2} \frac{k^4}{\rho_\phi (C \eta_{\vec{k}}^c)^4 } \left[ 1+ 2 \beta_k \right] \left[ \lambda_2^2 N(z_k) + 9 \lambda_1^2 M(z_k) \right],
\end{equation}
where we defined
\begin{equation}
\beta_k \equiv \frac{ k^2 M_P^2}{\rho_\phi (C \eta_{\vec{k}}^c)^2},
\end{equation}
with $M(z_k)$ and $N(z_k)$ as expressed in Eqs. \eqref{Map1} and \eqref{Nap1}. Therefore, the condition $k^2 \ll \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2$ implies $\beta_k \ll 1$.
As mentioned earlier, $z_k \ll 1$ must be satisfied in order to ensure that the mode has a proper wavelength larger than the Hubble radius when the collapse is triggered. Therefore, one can perform a series expansion of the functions $N(z_k)$ and $M(z_k)$ for $z_k \ll 1$, that is,
\begin{equation}
M(z_k) \simeq \frac{1}{\sigma_k^4} \left( 1 - \frac{z_k^2}{3} \right) \qquad \textrm{and} \qquad N(z_k) \simeq \frac{1}{\sigma_k^4} \frac{z_k^4}{9}.
\end{equation}
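These small-$z_k$ expansions can be verified directly; the sketch below (Python; illustrative only) compares the $\sigma_k^4$-rescaled functions of Eqs. \eqref{Map1} and \eqref{Nap1} with their leading small-$z_k$ behaviour.

```python
import numpy as np

def m_scaled(z):
    """sigma_k^4 * M(z_k) from Eq. (Map1)."""
    return np.sin(z) ** 2 / z**2

def n_scaled(z):
    """sigma_k^4 * N(z_k) from Eq. (Nap1)."""
    return (0.5 + 0.5 / z**2 + np.cos(2.0 * z) * (0.5 - 0.5 / z**2)
            - np.sin(2.0 * z) / z)

z = 0.05
print(m_scaled(z), 1.0 - z**2 / 3.0)  # agree up to O(z_k^4)
print(n_scaled(z), z**4 / 9.0)        # agree up to O(z_k^6)
```

In particular, the constant, $z_k^2$ and (for $N$) all terms below $z_k^4$ cancel identically, which is why $N(z_k)$ is so strongly suppressed relative to $M(z_k)$ for super-horizon modes.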
Now let us focus on the collapse scheme where the momentum variable collapses but not the field variable, i.e. the scheme where $\lambda_1=0$ and $\lambda_2 = 1$. In such a case, the power spectrum takes the form
\begin{eqnarray}
\Delta^2 (k) &\simeq & \frac{1}{8 \pi^2} \frac{1}{\rho_\phi (\eta_r C )^4 } \left[ 1+ 2 \beta_k \right] {k^4},
\end{eqnarray}
where we used the definition $z_k \equiv k \eta_{\vec{k}}^c$. The power spectrum is of the form $k^4$, and the dominant term does not contain any parameter that can be adjusted to recover a nearly scale-independent spectrum. Thus, in the limit where $\beta_k \ll 1$, with $\lambda_1=0$ and $\lambda_2 = 1$, one cannot recover the standard prediction.
Next, we focus on the scheme $\lambda_1 = 1$, $\lambda_2 = 0$. For this scheme
\begin{equation}
\Delta^2 (k) \simeq \frac{9}{8 \pi^2} \frac{k^4}{\rho_\phi (C \eta_{\vec{k}}^c)^4 } \left[ 1+ 2 \beta_k \right] \frac{9}{\sigma_k^4} \left[ 1 - \frac{z_k^2}{3} \right].
\end{equation}
Substituting $\beta_k$ and $z_k$ in the last expression, the power spectrum is written explicitly as
\begin{equation}\label{psfinal}
\Delta^2 (k) \simeq \frac{81}{8 \pi^2} \frac{1}{\rho_\phi (\eta_r C \eta_{\vec{k}}^c)^4 } \left[ 1 + k^2 \left( \frac{2M_P^2}{\rho_\phi C^2 {\eta_{\vec{k}}^c}^2} - \frac{{\eta_{\vec{k}}^c}^2}{3} \right) \right].
\end{equation}
Hence, if $\eta_{\vec{k}}^c$ is independent of $k$, i.e. the time of collapse does not depend on the mode $k$, one can recover a flat spectrum plus (small) first order corrections of the form $k^2$.
The next step is to check if the amplitude of the spectrum [Eq. \eqref{psfinal}] is consistent with the latest CMB observations \cite{Planckcls13}. This is, the model must satisfy that
\begin{equation}\label{amplitudps}
\frac{81}{8 \pi^2} \frac{1}{\rho_\phi (\eta_r C \eta_{\vec{k}}^c)^4 } \simeq 10^{-9}.
\end{equation}
Using the numerical values for $C$ and $\eta_r$ the last condition is re-expressed as
\begin{equation}\label{condamp}
\rho_\phi^{-1} \simeq 10^{-120} {\eta_{\vec{k}}^c}^4.
\end{equation}
Furthermore, the condition $\beta_k \ll 1$ written explicitly is
\begin{equation}\label{condbeta}
\frac{ k^2 M_P^2}{\rho_\phi (C \eta_{\vec{k}}^c)^2} \ll 1.
\end{equation}
Using once again the numerical values for $C$ and $\eta_r$, and taking the largest relevant value $ k \simeq 10^{-1}$ Mpc$^{-1}$, condition \eqref{condbeta}, together with the condition on the amplitude \eqref{condamp}, establishes an upper bound on the time of collapse, namely
\begin{equation}\label{condtc}
\eta_{\vec{k}}^c \ll 10^{-2} \textrm{Mpc}.
\end{equation}
That is, the time of collapse must occur well before the epoch of nucleosynthesis. Additionally, condition \eqref{condtc} is consistent with the condition $k \eta_{\vec{k}}^c \ll 1$ for the modes of observational interest. One further consistency check is to ensure that $\rho_\phi \ll \rho_{\textrm{rad}} (\eta_{\vec{k}}^c)$, given that $\rho_\phi$ must satisfy Eq. \eqref{condamp}, which ensures that the power spectrum possesses the correct amplitude. Therefore, from Friedmann's equation
\begin{equation}
\rho_{\text{rad}} = \frac{3 M_P^2 \mathcal{H}_c^2}{a_c^2} \simeq \frac{3 M_P^2}{C^2 {\eta_{\vec{k}}^c}^4} \simeq \frac{3 M_P^2 10^{-120} \rho_\phi}{C^2},
\end{equation}
where in the last equality we used Eq. \eqref{condamp}. Inserting the numerical values for $C$ and $M_P$ yields
\begin{equation}
\rho_\phi \simeq 10^{-5} \rho_{\text{rad}} .
\end{equation}
Thus, this is consistent with the requirement that $\rho_{\text{rad}} \gg \rho_\phi$.
For the scheme $\lambda_1 = \lambda_2 = 1$, the power spectrum can be approximated as
\begin{eqnarray}
\label{psfinal2}
\Delta^2 (k) &\simeq& \frac{81}{8 \pi^2} \frac{k^4}{\rho_\phi (C \eta_{\vec{k}}^c)^4 } \left[ 1+ 2 \beta_k \right] \frac{1}{\sigma_k^4} \left[ 1 - \frac{z_k^2}{3} + \frac{z_k^4}{81}\right]. \nonumber \\
\end{eqnarray}
Thus, the dominant term is of the same form as the scheme described by $\lambda_1=1$ and $\lambda_2=0$, therefore, the analysis proceeds in an identical fashion.
Now let us analyze the case $k^2 \gg \rho_\phi (C \eta_{\vec{k}}^c /M_P)^2$, which now implies $\beta_k \gg 1$. Therefore, the power spectrum in Eq. \eqref{ps1} can be approximated by
\begin{equation}
\Delta^2 (k) \simeq \frac{9 \rho_\phi}{8 \pi^2 M_P^4} \left( 1+\frac{2}{\beta_k} \right) \left( \lambda_2^2 N(z_k) + 9 \lambda_1^2 M(z_k) \right).
\end{equation}
We focus first on the collapse scheme $\lambda_1=1$ and $\lambda_2 =0$. In this case, upon using the series expansion Eq. \eqref{Map1}, one obtains
\begin{eqnarray}
\Delta^2 (k) &\simeq& \frac{81 \rho_\phi}{8 \pi^2 M_P^4 \eta_r^4} \left( 1+\frac{2}{\beta_k} \right) k^{-4} \left( 1 - \frac{z_k^2}{3} \right). \nonumber \\
\end{eqnarray}
We see that the dominant term of the approximation is proportional to $k^{-4}$ and does not depend on the time of collapse; hence, one cannot recover the standard spectrum.
The collapse scheme described by $\lambda_1=0$ and $\lambda_2 =1$ yields an approximate power spectrum expressed as
\begin{equation}
\Delta^2 (k) \simeq \frac{9 \rho_\phi}{8 \pi^2 M_P^4} \left( 1+\frac{2}{\beta_k} \right) \frac{1}{\sigma_k^4} \frac{z_k^4}{9}.
\end{equation}
Substituting $\beta_k$ and $z_k$ we have
\begin{equation}
\Delta^2 (k) \simeq \frac{ \rho_\phi}{8 \pi^2 M_P^4} \frac{{\eta_{\vec{k}}^c}^4}{\eta_r^4} \left( 1+\frac{2 \rho_\phi (C \eta_{\vec{k}}^c)^2}{k^2 M_P^2} \right).
\end{equation}
Thus, in this scheme, if the time of collapse is independent of the mode $k$, the model predicts a scale-invariant spectrum plus corrections of the form $k^{-2}$. Additionally, for this scheme, we must check if the predicted amplitude is consistent with the latest CMB observations \cite{Planckcls13}:
\begin{equation}
\frac{ \rho_\phi}{8 \pi^2 M_P^4} \frac{{\eta_{\vec{k}}^c}^4}{\eta_r^4} \simeq 10^{-9}.
\end{equation}
Therefore, by inserting the numerical values the relation between the energy density and the time of collapse is
\begin{equation}\label{cond2}
\rho_\phi^{-1} \simeq 10^{-129} {\eta_{\vec{k}}^c}^4.
\end{equation}
Written explicitly, the condition $\beta_k \gg 1$ is
\begin{equation}
\frac{k^2M_P^2}{\rho_\phi(C \eta_{\vec{k}}^c)^2} \gg 1.
\end{equation}
Using Eq. \eqref{cond2} and the numerical values of $C,\eta_r,M_P$ and the lowest value for the mode of interest $k \simeq 10^{-6}$ Mpc$^{-1}$, one obtains that the time of collapse must satisfy
\begin{equation}
\eta_{\vec{k}}^c \gg 10^8 \textrm{Mpc},
\end{equation}
which is 6 orders of magnitude greater than the time of decoupling; consequently this scheme is also ruled out.
Finally, the approximated power spectrum for the last scheme corresponding to $\lambda_1=\lambda_2=1$, is
\begin{eqnarray}
\Delta^2 (k) &\simeq& \frac{81 \rho_\phi}{8 \pi^2 M_P^4 \eta_r^4} \left( 1+\frac{2}{\beta_k} \right) \left( 1 - \frac{z_k^2}{3} + \frac{z_k^4}{81} \right) k^{-4}. \nonumber \\
\end{eqnarray}
As we see, the dominant term in the expansion is of the form $k^{-4}$ and therefore the scheme is discarded.
We end this section by summarizing the main conditions under which the model can reproduce a nearly scale-independent power spectrum.
The first condition is that the collapse scheme must be such that the field variable is affected by the collapse, i.e. $\langle \hat{y}_{\vec{k}} (\eta_{\vec{k}}^c) \rangle \neq 0$; the momentum variable may or may not be affected by the collapse. The second condition is that the \textbf{time of collapse must be independent of $k$}, i.e., $\eta_{\vec{k}}^c=\eta_c$, the same for all modes, satisfying $\eta_c \ll 10^{-2}$ Mpc; this is a reasonable range for the time of collapse, since it should occur before the nucleosynthesis stage. If those conditions are met, then the power spectrum is explicitly
\begin{equation}\label{psfinal3}
\Delta^2 (k) \simeq \mathcal{A} C(k),
\end{equation}
where
\begin{equation}
\mathcal{A} \equiv \frac{81}{8 \pi^2} \frac{1}{\rho_\phi C^4 \eta_r^4 {\eta_c}^4 },
\end{equation}
\begin{equation}\label{Ck}
C(k) \equiv \left(1 + 2 \beta_k\right)\left\{\frac{ \sin^2 (k\eta_c)}{(k\eta_c)^2} + \frac{\lambda_2^2}{9} \left[ \frac{1}{2} + \frac{1}{2(k\eta_c)^2} + \cos(2k\eta_c) \left( \frac{1}{2} - \frac{1}{2 (k\eta_c)^2}\right) -\frac{\sin(2k\eta_c)}{k\eta_c} \right]\right\},
\end{equation}
with $\lambda_2$ either 1 or 0 and $\rho_\phi$ adjusted to match the amplitude. Therefore, we have apparently constructed a viable model for generating the primordial curvature perturbation. It is viable in the sense that our theoretical prediction, Eq. \eqref{psfinal3}, has a consistent amplitude and is almost independent of $k$.
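The near flatness of the prediction can be made explicit with a short numerical sketch (Python; illustrative only). Here $\beta_k$ is parametrized as $\beta_k = b\,(k\eta_c)^2$, with the coefficient $b$ (which depends on $\rho_\phi$ and $C$) treated as a free parameter, and $C(k)$ of Eq. \eqref{Ck} is evaluated for $\lambda_2=0$ over the observationally relevant range of $k$:

```python
import numpy as np

def C_of_k(k, eta_c=1e-3, lambda2=0.0, b=0.0):
    """Eq. (Ck).  beta_k is written as b*(k*eta_c)^2; the coefficient b
    depends on rho_phi and C and is treated here as a free parameter."""
    z = k * eta_c
    bracket = (0.5 + 0.5 / z**2 + np.cos(2.0 * z) * (0.5 - 0.5 / z**2)
               - np.sin(2.0 * z) / z)
    return (1.0 + 2.0 * b * z**2) * (np.sin(z) ** 2 / z**2
                                     + (lambda2**2 / 9.0) * bracket)

k = np.logspace(-6, -1, 200)   # observationally relevant range, in Mpc^-1
Ck = C_of_k(k)                 # eta_c = 1e-3 Mpc, consistent with Eq. (condtc)
print(Ck.max() / Ck.min() - 1.0)   # ~3e-9: the spectrum is nearly flat
```

Since $k\eta_c \lesssim 10^{-4}$ over the whole range, even a large coefficient $b$ (of the order discussed in the next section) induces only a tiny $k$-dependence, which is the quantitative content of the near scale invariance claimed above.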
Let us remark that the prediction of our model [Eq. \eqref{psfinal3}] is different from the standard one, $\Delta^2 (k) = A_s k^{n_s -1}$; in particular, the dependence on $k$ is not similar. In our model the dependence on $k$ is explicitly contained in the function $C(k)$ [see Eq. \eqref{Ck}], while in the standard case it is given by $k^{n_s-1}$. This difference can be explained in part by noting that we have considered a perfect de Sitter space-time for the inflationary regime. We could instead have performed our calculations in a quasi-de Sitter Universe during inflation, and that would have yielded a collapse power spectrum of the form $\Delta^2 (k) \simeq \mathcal{A} \widetilde C(k) k^{n_s-1}$, i.e., a power spectrum that depends on $k$ in two ways: the first given by having introduced the collapse hypothesis, reflected in the function $\widetilde C(k)$, and the second associated with the quasi-de Sitter background during inflation, hence the factor $k^{n_s-1}$. Nevertheless, the functional dependence on $k$ given by the collapse hypothesis would not have been substantially different from the one obtained in this paper, that is, $\widetilde C(k) \simeq C(k)$. Therefore, by relying on pure de Sitter inflation, we have simplified our calculations while retaining the dependence on $k$ within the power spectrum that is due only to the collapse hypothesis; consequently, the predicted power spectrum, Eq. \eqref{psfinal3}, is not exactly scale-invariant even though pure de Sitter inflation was used for the calculations.
In the next section, we will study the effects of the collapse during the radiation era on the CMB temperature and polarization fluctuation spectrum by considering only the approximately scale-invariant spectrum given by Eq. \eqref{psfinal3}, which relies on the assumption that the time of collapse is independent of $k$, i.e. $\eta_{\vec{k}}^c = A$.
\section{Effects on the CMB fluctuation spectrum and comparison with observational data}\label{camb}
In order to analyze the effects of a collapse of the wave function of the inflaton field during the radiation era on the CMB fluctuations power spectrum, let us first define the fiducial model, which will be taken just as a reference to discuss the results we obtain for the collapse models. The fiducial model is a $\Lambda$CDM model with the following cosmological parameters: baryon density in units of the critical density, $\Omega_B h^2=0.02214$; dark matter density in units of the critical density, $\Omega_{CDM} h^2=0.1187$; Hubble constant in units of ${\rm Mpc^{-1} km \ \ s^{-1}}$, $H_0=67.8$; reionization optical depth, $\tau=0.092$; and the scalar spectral index, $n_s=0.9608$. These are the best-fit values presented by the \emph{Planck} collaboration \cite{Plancklike13} using the CMB temperature data released by \emph{Planck}, the CMB polarization data reported by WMAP \cite{wmap9cls}, CMB temperature data for high values of $l$ reported by ACT \cite{ACT13} and SPT \cite{SPT12} and Baryon Acoustic Oscillations \cite{BAO1,BAO2,BAO3,BAO4}.
In Figure \ref{power} (left), we show the primordial spectrum of models where a collapse of the wave function of the inflaton field during the radiation era has been included, for different values of the collapse time $\eta_{\vec{k}}^c= A $ and $\lambda_2=0$. It follows from Eq. \eqref{Ck} that the main contribution to $C(k)$ comes from the term $(1+2\beta_k) \simeq 1+ 10^5 z_k^2$, and thus setting $\lambda_2\neq 0$ does not change the primordial spectrum significantly. Therefore, we will only analyze the case $\lambda_2=0$, since the same conclusions apply to the case $\lambda_2\neq 0$. Fig. \ref{power} (right) shows the primordial spectrum of the collapse models compared to the fiducial model. The variation between the collapse models due to different values of the collapse time is very small compared to the difference between these models and the fiducial model (see Fig. \ref{power} right). Thus, it follows that the collapse models are very similar to a fiducial model with $n_s=1$ (which is ruled out at 5$\sigma$ by \emph{Planck}'s data), and it will be difficult to fit these models to present data.
This also reflects the fact, that if we would have considered quasi-de Sitter inflation, the shape of the collapse power spectrum during radiation and the one given by the standard single field slow-roll inflationary model, would have been, for all practical purposes, indistinguishable from each other.
The main reason for this is the restriction $\eta_{\vec{k}}^c \ll 10^{-2}$ Mpc that constrains the values of $A$ to be less than one and prevents the primordial power spectrum to move over significantly from the standard power spectrum. This is not the case for the models where the collapse happens during inflation and therefore, we could find good fit to the WMAP data in previous works \cite{LSS12} and also to provide features in the collapse power spectrum that made it distinguishable from the traditional spectrum.
\begin{figure}
\begin{center}
\includegraphics[scale=0.31,angle=-90]{power3.ps}
\includegraphics[scale=0.31,angle=-90]{power2.ps}
\end{center}
\caption{Left: Primordial spectra, with wave function collapse of the inflaton field during the radiation era, for different values of the collapse time $\eta_{\vec{k}}^c=A$ and $\lambda_2=0$; Right: Primordial spectra with wave function collapse of the inflaton field during the radiation era ($\lambda_2=0$) and Primordial Spectra of the Fiducial Model (for these scales the collapse models are indistinguishable among themselves). }
\label{power}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5,angle=-90]{cls.ps}
\end{center}
\caption{The temperature auto-correlation (TT) power spectrum for the fiducial model and for a model where the collapse of the inflaton wave function happens during the radiation era at conformal time $\eta_{\vec{k}}^c=10^{-3} \textrm{Mpc}$. All models are normalized to the maximum of the first peak of the fiducial model. The value of $\chi^2$ is calculated using WMAP9, \emph{Planck}, SPT and ACT release data (both temperature and temperature-polarization power spectrum are included).}
\label{clstt}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.31,angle=-90]{clsEE.ps}
\includegraphics[scale=0.31,angle=-90]{clsTE.ps}
\end{center}
\caption{Left: E polarization auto-correlation (EE) power spectrum; Right: Temperature-polarization cross correlation (TE) power spectra. In both cases we plot the fiducial model and a model where the collapse of the inflaton wave function happens during the radiation era at conformal time $\eta_{\vec{k}}^c=10^{-3} \textrm{Mpc}$. All models are normalized to the maximum of the first peak of the fiducial model. The value of $\chi^2$ for all models is the same as indicated in Fig. \ref{clstt}.}
\label{clsEE}
\end{figure}
Fig. \ref{clstt} shows the temperature auto-correlation power spectrum for the fiducial model and for the model where the collapse occurs during the radiation era. The respective EE and TE polarization power spectrum are shown in Fig. \ref{clsEE}. For all models satisfying the constraint $\eta_{\vec{k}}^c \ll 10^{-2} \textrm{ Mpc}$, the temperature, the E polarization and the TE cross correlation power spectrum are the same as the one shown in Figs. \ref{clstt} and \ref{clsEE}, labeled as ``radiation models.'' The main reason for this, is the tiny difference in the primordial power spectrum for different radiation-collapse models shown in Fig. \ref{power}. The difference between the value of $\chi^2$ for the fiducial and collapse models is significant ($\chi^2$ is calculated using WMAP9 polarization data, \emph{Planck} temperature data, SPT and ACT temperature data) and shows that a good fit to these data would be difficult to find for the collapse-radiation models. This is due to the low errors and accuracy of the present CMB data set. However, and in order to be sure about our conclusions, we intended to perform a statistical analysis to fit the CMB temperature power spectrum reported by the \emph{Planck} \cite{Planckcls13} collaboration and the polarization spectra reported by the WMAP \cite{wmap9cls} collaboration together with the temperature power spectrum for high $l$ from ACT \cite{ACT13} and SPT \cite{SPT12} and Baryon Acoustic Oscillations \cite{BAO1,BAO2,BAO3,BAO4}. We performed our statistical analysis by exploring the parameter space with Monte Carlo Markov chains generated with the publicly available CosmoMC code of Ref. \cite{LB02} that uses the Boltzmann code CAMB \cite{LCL00} to compute the CMB power spectra. We modified the primordial power spectrum according to Eq. \eqref{psfinal3} with $C(k)$ as given in Eq. \eqref{Ck} and with the time of collapse parameterized as $\eta_{\vec{k}}^c = A$. The parameters allowed to
vary are:
\begin{equation}
P=\left(\Omega_B h^2, \Omega_{CDM} h^2, \Theta, \tau, A_s, A\right),
\end{equation}
where $\Theta$ is the ratio of the comoving sound horizon at decoupling to the angular diameter distance to the surface of last scattering, $\tau$ is the reionization optical depth, $A_s$ is the amplitude of the primordial density fluctuations, and $A$ is the model's parameter related to the conformal time of collapse. According to the previous discussion, we could not find a good convergence of the Markov chains, even more, the code got stuck about $200$ steps and/or failed due to the value of the optical depth. This happens, because, in order to get a fit to the data, the code explores other values for the cosmological parameters far from the fiducial model.
Note that in Figs. \ref{power}, \ref{clstt} and \ref{clsEE}, the fiducial model assumed $n_s = 0.9608$, while for the collapse model we set $n_s =1$. If we would have considered a quasi-de Sitter inflation for our model instead of a pure de Sitter one, we should have set $n_s = 0.9608$ for our model too, but, as argued in the previous section, we could have still used the collapse power spectrum given by Eq. \eqref{psfinal3} since it should not be substantially different from the one obtained using quasi-de Sitter inflation. Therefore, as can bee seen in all figures, our model's prediction would have been practically the same as the fiducial one, which corresponds to the conventional inflationary scenario, both with $n_s = 0.9608$.
\section{Summary and Conclusions}\label{discussion}
In this paper we have constructed a plausible model for generating the primordial curvature perturbation during the radiation dominated era, by assuming a self-induced collapse of the wave function associated to each mode of the inflaton field. In Section \ref{analisis}, we showed that there are two major conditions for this model to be considered viable: i) the collapse must affect the perturbation of the inflaton field while the respective momentum can or not be affected; ii) the time of collapse $\eta_{\vec{k}}^c$ must be independent of the mode $k$. If these conditions are met, then our model predicts a nearly scale-invariant power spectrum, which in principle has a different shape from the one given by the conventional single-field slow-roll inflationary model. This difference in the shape of the power spectrum is exclusively provided by having introduced the collapse hypothesis and is reflected in the function $C(k)$ [see Eqs. \eqref{psfinal3}, \eqref{Ck}]. However, in Section \ref{camb} we showed that the changes to the primordial spectrum introduced by the collapse are very small. Moreover, the angular temperature and temperature-polarization CMB power spectrum, within the collapse proposal, are essentially indistinguishable from the standard inflationary model in an exact de Sitter background. The fact that the angular power spectrum cannot be distinguished from the standard inflationary model arises from the requirement that the primordial power spectrum matches the amplitude of scalar fluctuations consistent with the latest CMB observations. This latter requirement implies a constraint on the time of collapse $\eta_{\vec{k}}^c \ll 10^{-2}$ Mpc. On the other hand, this constraint is consistent with the requisite that the energy density of the inflaton field should be negligible compared with the energy density of the radiation field, if the collapse is supposed to take place in the radiation era. 
The restriction on the time of collapse, thus, does not allow the model's predictions to depart too much from the standard ones. Additionally, considering a quasi-de Sitter background for the calculation of the inflaton perturbations during inflation, would have resulted in a primordial power spectrum equal to the fiducial model one with very small corrections due to the collapse of the inflaton's wave function. Therefore, the calculations performed in this paper, let us assure that the predictions of this model (using a quasi-de Sitter background for the calculations during inflation) for the CMB temperature and polarization fluctuation spectrum will not be different from the standard model ones. We would like to emphasize that this case is different from the one in which the collapse takes place during inflation and the changes in the primordial power spectrum due to the collapse hypothesis are important even in a perfect de Sitter background.
\acknowledgments
The authors thank D. Sudarsky for useful discussions. Support for this work was provided by PIP 0152/10 CONICET. GL acknowledges financial support by CONICET postdoctoral grant. We also thank the referee for useful suggestions.
|
3,212,635,537,647 | arxiv | \section{Introduction}
Let $V$ be finite-dimensional $\mathbbm{C}$-vector space,
$W \subset \GL(V)$ be a finite (pseudo-)reflection group
with corresponding hyperplane arrangement $\mathcal{A}$.
We assume that $\mathcal{A}$ is essential, meaning
that $\bigcap \mathcal{A} = \{ 0 \}$ and denote $n = \dim V$ the
rank of $W$. We recall that an arrangement $\mathcal{A}$ is called irreducible
if it cannot be written as $\mathcal{A}_1 \times \mathcal{A}_2$,
and that $W$ is called irreducible if it acts irreducibly on $V$.
A basic result can be written as follows
\medskip
\noindent (0) $\mathcal{A}$ is irreducible iff $W$ is irreducible.
\medskip
Steinberg showed that the exterior powers of $V$ are irreducible.
His proof is based on the encryption of irreducibility
in the connectedness of certain graphs. From
this approach, the following is easily deduced
\medskip
\noindent (1) If $W$ is irreducible, then it contains an \emph{irreducible} parabolic
subgroup.
\medskip
Although this result is probably well-known to experts and easily
checked, it does not seem to appear in print, and is a key tool for
the sequel.
We then consider the permutation $W$-module $\mathbbm{C} \mathcal{A}$. A choice
of linear maps $\alpha_H \in V^*$ with kernel $H \in \mathcal{A}$
defines a linear map $\Phi : \mathbbm{C} \mathcal{A} \to S^2 V^*$ through
$\alpha_H \mapsto \alpha_H^2$. This map can be chosen to be a morphism
of $W$-modules when $W$ is a Coxeter group. We prove
\medskip
\noindent (2) $\Phi$ is onto iff $W$ is irreducible
\medskip
\noindent meaning that each quadratic form on $V$ is a linear combination of
the quadratic forms $\alpha_H^2$, as soon as $W$ is irreducible. As
a corollary, we get
\medskip
\noindent (3) The cardinality of $\mathcal{A}$ is at least $n(n+1)/2$.
\medskip
\noindent This lower bound is better than the usual $|\mathcal{A}| \geq n/2$
of \cite{OT}, cor. 6.98, and is sharp, as $|\mathcal{A}| = n(n+1)/2$
when $W$ is a Coxeter group of type $A_n$.
We denote $d_H$ the order of the (cyclic) fixer in $W$ of $H \in \mathcal{A}$,
and define the distinguished reflection $s \in W$ to be the reflection in
$W$ with $H = \Ker(s-1)$ and additional eigenvalue $\zeta_H = \exp(2 \mathrm{i} \pi/d_H)$.
We let $d : \mathcal{A} \to \mathbbm{Z}$
denote $H \mapsto d_H$. We did not find the following in the
standard textbooks :
\medskip
\noindent (4) The data $(\mathcal{A},d)$ determines $W$.
\medskip
Letting $B$ denote the braid group associated to
$W$, we show that $\mathbbm{C} \mathcal{A}$, considered as a linear representation
of $B$, can be deformed through a path in $\Hom(B,\GL(V))$ which
canonically connects $\mathbbm{C} \mathcal{A}$ to other representations of $W$.
This turns out to provide a natural generalization of the action of Weyl groups
on their positive roots to arbitray reflection groups.
Finally, we prove that this path $h \mapsto R_h$ is periodic,
namely that $R_{h + \kappa(W)} \simeq R_h$ for some integer
$\kappa(W)$, with $\kappa(W) = 2$ when $W$ is a Coxeter group.
Moreover, $\kappa(W) = 2$ if and only if the morphism
$\Phi$ above can be chosen to be a morphism of
$W$-modules. In particular, we get
\medskip
\noindent (5) If $\kappa(W) = 2$ then the $W$-module $S^2 V^*$
is a quotient of $\mathbbm{C} \mathcal{A}$.
\medskip
We emphasize the fact that the proofs presented here are elementary in the
sense that, except for one of the last results,
no use is made either of the Shephard-Todd classification
of pseudo-reflection groups, nor of the invariants theory of these
groups.
\section{Reflection groups and reflection arrangements}
We recall from \cite{OT} the following basic notions about reflection groups and hyperplane
arrangements.
An endomorphism $s \in \GL(V)$ is called a (pseudo-)reflection
if it has finite order and $\Ker(s-1)$ is an hyperplane of
$V$. A finite subgroup $W$ of some $\GL(V)$ which is generated by
reflections is called a (complex) (pseudo-)reflection group. The hyperplane arrangement associated
to it is the collection $\mathcal{A}$ of the reflecting hyperplanes
$\Ker(s-1)$ for $s$ a reflection of $W$. There is a natural
function $d : \mathcal{A} \to \mathbbm{Z}, H \mapsto d_H$ which associates to each
$H \in \mathcal{A}$ the order of the subgroup of $W$ fixing
$H$. We let $\zeta_H = \exp(2 \mathrm{i} \pi/d_H)$, and call
a reflection $s$ distinguished if its nontrivial eigenvalue is
$\zeta_H$, with $\Ker(s-1) = H$.
A nontrivial subgroup $W_0$ of $W$ is called \emph{parabolic} if it
is the fixer of some linear subspace of $V$. By a fundamental result
of Steinberg, this linear supspace lies inside some
intersection of reflecting hyperplanes, and $W_0$ is
also a reflection group in $\GL(V)$.
In general, a (central) hyperplane $\mathcal{A}$ arrangement is a finite
collection of linear hyperplanes in $V$. When $\mathcal{A}$
originates from a reflection group $W$, then $\mathcal{A}$ is called
a reflection arrangement. An arrangement $\mathcal{A}$ is called
essential if $\bigcap \mathcal{A} = \{ 0 \}$ ; for
two arrangements $\mathcal{A}_1, \mathcal{A}_2$ in $V_1,V_2$,
the arrangement $\mathcal{A}$ in $V = V_1 \times V_2$
is defined as $\{ H \oplus V_2 ; H \in \mathcal{A}_1 \} \cup
\{ V_1 \oplus H ; H \in \mathcal{A}_2 \}$ ; two arrangements in $V$ are
isomorphic if they are deduced one from the other by some element
of $\GL(V)$ ; an essential arrangement $\mathcal{A}$ is called irreducible if
it is not isomorphic to some nontrivial $\mathcal{A}_1 \times \mathcal{A}_2$.
The following lemma shows that, when $\mathcal{A}$ is a reflection
arrangement, the arrangement $\mathcal{A}$ together with
the order of the reflections determines the reflection group.
In particular, there is at most one reflection group with
reflections of order 2 admitting a given reflection arrangement.
Notice that $\mathcal{A}$ can be assumed to be essential,
as the action of $W$ on $\bigcap \mathcal{A}$ is necessarily
trivial. Although basic, this fact does not appear in standard textbooks.
The proof given here has been found in common with Fran\c cois Digne
and Jean Michel.
\begin{prop} \label{propAdetermW} Let $\mathcal{A}$ be an essential hyperplane
arrangement in $V$.
\begin{enumerate}
\item If $P \in \GL(V)$ satisfies $P(H) \subset H$ for all $H \in \mathcal{A}$,
then $P$ is semisimple.
\item If $\mathcal{A}$ is a reflection arrangement associated
to a complex reflection group $W \subset \GL(V)$, then $(\mathcal{A},d)$ determines
$W$.
\end{enumerate}
\end{prop}
\begin{proof}
To prove (1), we choose linear forms $\alpha_H \in V^*$ with kernel $H \in \mathcal{A}$. Since
$\mathcal{A}$ is essential, $V^*$ is generated by the $\alpha_H$, hence admits a basis
made out some of them. The assumption then states that the $\alpha_H$ are eigenvectors
for $ ^t P \in \GL(V^*)$, hence $ ^t P$ is semisimple and so is $P$. Now we prove (2),
assuming that $W_1,W_2 \subset \GL(V)$ are two reflection groups with the same data
$(\mathcal{A},d)$. Let $H \in \mathcal{A}$ and $s_i \in W_i$
the distinguished reflection with $\Ker(s_i - 1) = H$. Then $x =s_1 s_2^{-1}$ fixes $H$
and acts by 1 on $V/H$, hence is unipotent. The endomorphism $x \in \GL(V)$
clearly permutes the hyperplanes. Since $\mathcal{A}$ is finite,
some power of $x$ setwise stabilizes every $H \in \mathcal{A}$,
hence is semisimple by (1). Since it is also unipotent
this power of $x$ is the identity, hence $x = \mathrm{Id}$ because
$x$ is unipotent. It follows that $s_1 = s_2$ hence $W_1 = W_2$.
\end{proof}
\section{A consequence of Steinberg lemma}
Let $W \subset \GL(V)$ be a reflection group and $\mathcal{A}$ the
corresponding reflection arrangement.
A basic fact is that the notions of irreducibility
for $W$ and $\mathcal{A}$ coincide and can be checked combinatorially on
some graph.
After recalling a proof of this, we notice a useful consequence.
We endow $V$ with a $W$-invariant hermitian scalar
product. Call $v \in V$ a \emph{root} if it is an eigenvector
of a reflection $s \in V$ such that $s.v \neq v$. For $L$
a finite set of linearly independent roots we let
$V_L$ denote the subspace of $V$ spanned by $L$, and
$\Gamma_L$ the graph on $L$ connecting $v_1$ and $v_2$
if and only if $v_1$ and $v_2$ are not orthogonal.
Notice that, if $s \in W$ is a reflection
with root $v \in V$, the following properties hold : if $v \in V_L$ then $s(V_L) \subset V_L$, because $V_L = (\mathbbm{C} v) \oplus (\Ker(s-1) \cap V_L)$ ;
if $v \in V_L^{\perp}$ then $V_L \subset (\mathbbm{C} v)^{\perp}$ is pointwise stabilized by $s$.
The following proposition is basic. We provide a proof of $(1) \Leftrightarrow (2)$ for
the convenience of the reader, because of a lack of reference. $(1) \Leftrightarrow (3)$ is due to Steinberg.
\begin{prop} The following are equivalent, for an essential reflection arrangement $\mathcal{A}$.
\begin{enumerate}
\item $W$ acts irreducibly on $V$.
\item $\mathcal{A}$ is an irreducible hyperplane arrangement.
\item $V$ admits a basis $L$ of roots such that $\Gamma_L$ is connected.
\end{enumerate}
\end{prop}
\begin{proof}
In the direction $(2) \Rightarrow (1)$, if $V = V_1 \oplus V_2$ with the $V_i$ being
$W$-stable subspaces, then we define $\mathcal{A}_i = \{ H \in \mathcal{A} \ | \ (s_H)_{|V_i} \neq \mathrm{Id} \}$
with $s_H$ the distinguished reflection w.r.t. $H \in \mathcal{A}$, and we have $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2$.
In the direction $(1) \Rightarrow (2)$, we let $V = V_1 \oplus V_2$ be the decomposition of
$V$ corresponding to $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2$.
We choose a collection of roots for $\mathcal{A}$. Let $s_1,s_2$
be two distinguished reflections associated to $H_1 \in \mathcal{A}_1,H_2 \in \mathcal{A}_2$,
respectively, and let $H = H_1 \oplus H_2 \subset V$. Consider some
reflection $s \in W$ such that $\Ker(s-1) \supset H$. If
$\Ker(s-1)$ can be written as $H_0 \oplus V_2$ with
$H_0$ some hyperplane of $V_1$, then $H_0 \oplus V_2 \supset
H_1 \oplus H_2$ implies $H_0 \supset H_1$, hence $H_0 = H_1$
by equality of dimensions, meaning that $s$ is some power of
$s_1$. Similarly, if $\Ker(s-1)$ can be written as $V_1 \oplus H_0$ with
$H_0$ some hyperplane of $V_2$, then $s$ is a power of $s_2$.
Considering the reflection $s_2 s_1 s_2^{-1}$, which fixes $H$
and has reflecting hyperplane $s_2.\Ker(s_1-1)$,
since $s_1 \neq s_2$ it follows that $s_2 s_1 s_2^{-1}$
is a power of $s_1$. Then
$s_2.\Ker(s_1 - 1) = \Ker(s_1-1)$
hence $s_1 ,s_2$ commute and have orthogonal roots.
The subspace $V_1^0$ spanned by all roots aring from $\mathcal{A}_1$
is thus setwise stabilized by all reflections of $W$, hence $V_1^0 = V$.
On the other hand, the hermitian scalar product induces an isomorphism
between $V_1^0$ and $V_1^*$ (because $\mathcal{A}_1$, like
$\mathcal{A}$, is essential), hence $V_2 \neq \{ 0 \} \Rightarrow V_1^0 \neq V$,
a contradiction.
We now prove $(1) \Leftrightarrow (3)$. Let $L_0$ be of maximal size among the sets $L$ of
linearly independent roots with connected $\Gamma_L$. We prove that $|L| = \dim V$ if $W$ is irreducible.
Indeed, since $W$ is irreducible generated by reflections and $V_{L_0} \subset V$,
there would otherwise exist a reflection $s$ such that $s(V_{L_0}) \not\subset V_{L_0}$.
Letting $v \in V$ be a root of $s$, we have $v \not\in V_{L_0}$ and
$v \not\in (V_{L_0})^{\perp}$. This proves that $L = L_0 \sqcup \{ v \}$
is made out linearly independant roots and that $\Gamma_L$ is connected, since
$v \not\in (V_{L_0})^{\perp}$ cannot be orthogonal to all roots spanning $L_0$
and $L_0$ is already connected. From this contradiction it follows that
$L_0$ has cardinality $\dim V$. Conversely, if $V$ admits a basis
$L$ of roots such that $\Gamma_L$ is connected, then $W$
is irreducible, for otherwise $V = V_1 \oplus V_2$ with
$V_1,V_2$ nontrivial orthogonal $W$-stable subspaces, and
$L = L_1 \sqcup L_2$ with $L_i = \{ x \in L \ | \ x \in U_i \}$.
Then $\Gamma_L = \Gamma_{L_1} \sqcup \Gamma_{L_2}$, contradicting
the connectedness of $\Gamma_L$.
\end{proof}
\begin{cor} \label{corparab} If $W \subset \GL(V)$ is an irreducible reflection group
then it admits an \emph{irreducible} parabolic subgroup
of rank $\dim V - 1$.
\end{cor}
\begin{proof}
Considering a set $L$ of linearly independent roots such that
$\Gamma_L$ is connected, as given by the proposition,
there exists $L_0 \subset L$ with $L = L_0 \sqcup \{ v \}$
such that $\Gamma_{L_0}$ is still connected. Then $V_{L_0}$
has dimension $\dim V - 1$, and its orthogonal is spanned
by some $v' \in V$. Letting $W_0$ denote the parabolic
subgroup fixing $v'$, it has rank $\dim V - 1$,
admits for roots all elements of $L_0$, hence is irreducible
since $\Gamma_{L_0}$ is connected.
\end{proof}
\section{Quadratic forms on $V$}
Let $\mathcal{A}$ be an essential hyperplane arrangement in $V$.
The integer $n = \dim V$ is the \emph{rank} $\rk \mathcal{A}$ of $\mathcal{A}$.
For each $H \in \mathcal{A}$ we let $\alpha_H \in V^*$ denote
some linear form with kernel $H$. For a field $\mathbbm{k}$, we let $\mathbbm{k} \mathcal{A}$ denote a vector
space with basis $v_H, H \in \mathcal{A}$, and define
a linear map $\Phi : \mathbbm{C} \mathcal{A} \to S^2 V^*$ by $\Phi(v_H) = \alpha_H^2$.
For $\Phi$ to be onto, it is nessary that $\mathcal{A}$ is
irreducible. Indeed, if $\mathcal{A} = \mathcal{A}_1 \times \mathcal{A}_2$
corresponds to some direct sum decomposition $V = V_1 \oplus V_2$,
then choosing two nonzero linear forms $\varphi_i \in V_i^*$
defines a quadratic form $\varphi_1 \varphi_2 \in S^2 V^*$ which
does not belong to $\mathrm{Im}\, \Phi$. This condition is also sufficient in rank 2.
\begin{prop} If $\mathcal{A}$ is essential of rank 2, then $\Phi$
is onto if and only if $\mathcal{A}$ is irreducible.
\end{prop}
\begin{proof}
Since $\mathcal{A}$ is essential, $\mathcal{A}$ contains at least two
hyperplanes $H_1,H_2$. We denote $\alpha_i = \alpha_{H_i}$ the corresponding
(linearly independant) linear forms. If $\mathcal{A} = \{ H_1, H_2 \}$,
then $\mathcal{A}$ is obviously reducible, so we may assume that
$\mathcal{A}$ contains at least another hyperplane. Let $\beta$ denote
the corresponding linear form. It can be written as $\beta = \lambda_1 \alpha_1
+ \lambda_2 \alpha_2$ with $\lambda_1 \neq 0$, $\lambda_2 \neq 0$. Since
$\beta^2 = \lambda_1^2 \alpha_1 ^2 + 2 \lambda_1 \lambda_2 \alpha_1 \alpha_2 + \lambda_2^2
\alpha_2^2$ and $\alpha_1^2,\alpha_2^2 , \beta^2 \in \mathrm{Im}\, \Phi$
we get $\alpha_1 \alpha_2 \in \mathrm{Im}\, \Phi$. Since $\alpha_1^2, \alpha_2^2 \in
\mathrm{Im}\, \Phi$ and $\alpha_1,\alpha_2$ are linearly independent it
follows that $\mathrm{Im}\, \Phi = S^2 V^*$.
\end{proof}
This condition is not sufficient in rank 3, as shows the following
example. Consider in $\mathbbm{C}^3$ the central arrangement of polynomial
$xyz(x-y)(y-z)$. The morphism $\Phi$ is obviously not surjective,
as $\dim \mathbbm{C} \mathcal{A} = 5$ and $\dim S^2 V^* = 6$. However, $\mathcal{A}$
is irreducible, because its Poincar\'e polynomial is $P_{\mathcal{A}}(t) = (1+t)(1+4t+4t^2)$,
which is not divisible by $(1+t)^2$ --- recall from \cite{OT} that $P_{\mathcal{A}_1 \times
\mathcal{A}_2} = P_{\mathcal{A}_1} P_{\mathcal{A}_2 }$ and that
$P_{\mathcal{A}}(t)$ is divisible by $1+t$ whenever $\mathcal{A}$ is central.
It is however sufficient when $\mathcal{A}$ is a \emph{reflection arrangement}.
\begin{theo}
Let $\mathcal{A}$ be a (essential) reflection arrangement. Then
$\Phi$ is surjective if and only if $\mathcal{A}$ is irreducible.
\end{theo}
\begin{proof}
We assume that $\mathcal{A}$ is irreducible, and prove that $\Phi$
is surjective by induction on $\rk \mathcal{A}$.
If $\rk \mathcal{A} \leq 2$, this is a consequence of the above proposition,
so we can assume $\rk \mathcal{A} \geq 3$. We denote $W$
the corresponding (pseudo-)reflection group, and endow $V$ with a $W$-invariant
hermitian scalar product. By corollary \ref{corparab} there exists an irreducible
maximal parabolic subgroup $W_0 \subset W$, defined by
$W_0 = \{ w \in W \ | \ w.v = v \}$ for some $v \in V \setminus \{ 0 \}$.
We let $H_0 = (\mathbbm{C} v)^{\perp}$. By Steinberg theorem $W_0$ is a reflection
group, whose (pseudo-)reflections are the reflections of $W$ contained
in $W_0$. Let $\mathcal{A}_0 \subset \mathcal{A}$ denote
the arrangement in $V$ corresponding to $W_0$. Since $v \in H$
for all $H \in \mathcal{A}_0$, by the induction hypothesis we have
$Q \subset S^2 H_0^*$, where $Q = \mathrm{Im}\, \Phi$ and
$S^2 H_0^* \subset S^2 V^*$ is induced by $H^* \subset V^*$, letting
$\gamma \in H_0^*$ act on $H_0^{\perp}$ by 0.
Let $\alpha \in V^* \setminus \{ 0 \}$ such that $H_0 = \Ker \alpha$.
We have $S^2 V^* = S^2 H_0^* \oplus \alpha H_0^* \oplus \mathbbm{C} \alpha^2$.
Since $\mathcal{A}$ is irreducible, there exists
$H \in \mathcal{A}$ such that $\alpha_H \not\in \mathbbm{C} \alpha$
and $\alpha_H \not\in S^2 H_0^*$. Such a linear form can be written
$\lambda (\alpha + \beta)$ with $\lambda \in \mathbbm{C} \setminus \{ 0 \}$ and
$\beta \in S^2 H_0^* \setminus \{ 0 \}$. Then $(\alpha + \beta)^2 \in Q$
and $\beta^2 \in Q$, so we have $\alpha^2 + 2 \alpha \beta \in Q$.
We make $W$ act on $V^*$ by $w.\gamma(x) = \gamma(w^{-1}.x)$,
for $x \in V$, $\gamma \in V^*$. Of course this action
can be restricted to a $W_0$-action on $H_0^* \subset V^*$. Then $w.(\alpha + \beta) \in Q$
for all $w \in W$, and since $w. \alpha = \alpha$ whenever
$w \in W_0$, we get $\alpha^2 + 2 \alpha(w.\beta) \in Q$
for all $w \in W_0$. Consider now the subspace $U$ of
$H^*$ spanned by the $w_1.\beta - w_2.\beta$ for $w_1,w_2 \in W_0$.
It is a $W_0$-stable subspace of $H_0^*$. Recall that $H_0$, hence
$H_0^*$, is irreducible under the action of $W_0$. If $U = \{ 0 \}$ then
$w.\beta = \beta$ for all $w \in W_0$, hence $H_0 = \mathbbm{C} \beta$
and $\dim V = 2$, which has been excluded. Thus $U \neq \{ 0 \}$
hence $U = H_0^*$. By $2\alpha(w_1.\beta - w_2.\beta) =
(\alpha^2 + 2 \alpha (w_1.\beta)) - (\alpha^2 + 2 \alpha (w_2.\beta))$
we thus get $\alpha H_0^* \subset Q$. Then $(\alpha + \beta)^2
\in \alpha^2 + \alpha H_0^* + S^2 H_0^* \subset \alpha^2 + Q$
implies $\alpha^2 \in Q$. It follows that $Q \supset S^2 V^*$
which concludes the proof.
\end{proof}
\begin{cor} If $\mathcal{A}$ is an irreducible reflection arrangement of
rank $n$, then $|\mathcal{A} | \geq n(n+1)/2$.
\end{cor}
Notice that the above lower bound is sharp, as it is reached
for Coxeter type $A_n$.
When $\mathcal{A}$ is a reflection arrangement
with corresponding reflection group $W$, both
$\mathbbm{C} \mathcal{A}$ and $S^2 V^*$ can be endowed by natural $W$-actions,
where the action on $\mathbbm{C} \mathcal{A}$ is defined by $w.v_H = v_{w(H)}$.
It is thus natural to ask whether the linear forms
$\alpha_H$ can be chosen such that $\Phi$ is a morphism
of $W$-modules.
\begin{prop} \label{propequivPhiCox}
If $\mathcal{A}$ is a complexified \emph{real} reflection arrangement
(in particular $W$ is a finite Coxeter group), then
the linear forms $\alpha_H$ can be chosen such that $\Phi$
is a morphism of $W$-modules.
\end{prop}
\begin{proof}
We choose a $W$-invariant scalar product on the original real form $V_0$ of
$V$ and extend it to a $W$-invariant hermitian scalar product on $V$.
For every $H \in \mathcal{A}$ we choose $x_H \in V_0$ orthogonal
to $H$ with norm 1, and define $\alpha_H : y \mapsto (x|y)$,
our convention on hermitian scalar products being that they are linear
on the right. Then, for any $w \in W$, $w. x_H \in V_0$ is orthogonal
to $w(H)$ of norm 1, hence $w.x_H = \pm x_{w(H)}$. Since
$w. \alpha_H$ maps $y$ to $(w.x_H | y)$ we have $(w.\alpha_H)^2 =
\alpha_{w(H)}^2$, which shows that $\Phi$ is a morphism
of $W$-modules.
\end{proof}
When $W$ is not a Coxeter group, the $W$-modules $\mathbbm{C} \mathcal{A}$ and $S^2 V^*$ are
generally unrelated. However, this property is not a characterization
of Coxeter groups, as there is at least one example of a (non-Coxeter)
complex reflection group for which $\Phi$ can be a morphism
of $W$-module. This is the group labelled $G_{12}$ in the
Shephard-Todd classification. Notice that, in such a case,
one must have $\sum \alpha_H^2 = 0$, otherwise
this sum would provide a copy of the trivial
representation inside $S^2 V^*$, forcing $W$
to be a real reflection group.
We briefly describe this example. The group
$G_{12}$ can be described in $\GL_2(\mathbbm{C})$
by 3 generators $a,b,c$ of order 2, satisfying the
relation $abca=bcab=cabc$. We choose the following model :
$$
a = \begin{pmatrix} 1 & 1 + \sqrt{-2} \\ 0 & -1 \end{pmatrix}
b = \begin{pmatrix} -1 & 0 \\ 1-\sqrt{-2} & 1 \end{pmatrix}
c = \begin{pmatrix} \sqrt{-2} & -1 + \sqrt{-2} \\ -1 - \sqrt{-2} & -\sqrt{-2} \end{pmatrix}
$$
We define a collection of vectors $e_H \in V$, such that $w.e_H = \pm e_{w(H)}$. Letting
$\alpha_H : x \mapsto (e_H|x)$, the associated $\Phi : \mathbbm{C} \mathcal{A} \to S^2 V^*$
is then a morphism of $W$-modules. A $W$-invariant hermitian scalar product is
given on this matrix model by $(X|Y) = ^t \bar{X} A Y$ with
$$
A = \begin{pmatrix} 2 & 1 + \sqrt{-2} \\ 1- \sqrt{-2}& 2 \end{pmatrix}
$$
We choose for $e_H$ the 12
following vectors, which are fixed by the
corresponding reflection $s$.
$$
\begin{array}{|c||c|c|c|}
\hline
s & babab & a& b \\
e_H & (1+\sqrt{-2},-2) &(1,0)&(0,1) \\
\hline
\hline
s & ababa& bcb& c \\
e_H & (-2,1-\sqrt{-2})&(1,\sqrt{-2}) &(1,-1) \\
\hline
\hline
s & acaca & cbc & aba \\
e_H & (1-\sqrt{-2},1+\sqrt{-2}) & (-1+\sqrt{-2},-\sqrt{-2}) & (-1-\sqrt{-2},1) \\
\hline
\hline
s & bab & cac & aca \\
e_H & (-1,1-\sqrt{-2})&
(-\sqrt{-2},1+\sqrt{-2})&(-\sqrt{-2},1) \\
\hline
\end{array}
$$
It can be checked that the
reflections $a,b,c$ act on these vectors by monomial matrices,
with nonzero entries in $\{ \pm 1 \}$ (hence factors through the
hyperoctahedral group of rank 12). On this example, $S^2 V^*$
is a selfdual $W$-module.
We make the following remark.
\begin{prop} \label{propPhiEqKap2} For $\Phi$ to be a morphism of $W$-modules
it is necessary that $\kappa(W) \leq 2$, where
$$
\kappa(W) = \min \{ n \in \mathbbm{Z}_{>0} \ | \forall w \in W \ \forall H \in \mathcal{A} \ \
w.\alpha_H = \zeta \alpha_H \Rightarrow \zeta^n = 1 \}
$$
\end{prop}
Using the Shephard-Todd classification, we will show in section 6
that this condition is actually sufficient when $W$ is irreducible.
\section{A path between representations}
In this section we define a natural connection between
the action of $W$ on $\mathbbm{C} \mathcal{A}$ and more
surprising representations of $W$. For
this we need to introduce the space $X = V \setminus
\bigcup \mathcal{A}$ of regular vectors, on which
$W$ acts freely, and its quotient (orbit) space
$X/W$. We choose a base point $\underline{z} \in X$. The fundamental groups $B = \pi_1(X/W)$ and $P = \pi_1(X)$
are known as the braid group and pure braid group
associated to $W$, respectively. There is a natural morphism
$\pi : B \to W$ with kernel $P$. We first construct a deformation
of $W \to \GL(\mathbbm{C} \mathcal{A})$ as a linear representation
of the braid group. This deformation should
not be confused with the one described in \cite{KRAMCRG} when
$W$ is a 2-reflection group.
\subsection{A representation of the braid group}
To each $H \in \mathcal{A}$ is canonically associated
a differential form $\omega_H = \frac{\dd \alpha_H}{\alpha_H}$,
using some arbitrary linear form $\alpha_H$ with kernel
$H$. We introduce idempotents
$p_H \in \End(\mathbbm{C} \mathcal{A})$ defined by
$p_{H_1}.v_{H_2} = v_{H_2}$ if $H_1 = H_2$,
$p_{H_1}.v_{H_2} = 0$ otherwise. Choosing $h \in \mathbbm{C}$,
the 1-form
$$
\omega = h \sum_{H \in \mathcal{A}} p_H \omega_H \in \Omega^1(X) \otimes \gl(\mathbbm{C} \mathcal{A})
$$
satisfies $\omega \wedge \omega = 0$, hence defines a flat connection on the trivial
vector bundle $X \times \mathbbm{C} \mathcal{A} \to X$, which is clearly $W$-equivariant
for the diagonal action on $X \times \mathbbm{C} \mathcal{A}$. Dividing out by $W$,
the corresponding flat bundle over $X/W$
thus defines by monodromy a linear representation of $B$ in $\mathbbm{C} \mathcal{A}$.
Letting $\gamma$ denote a representative loop of $\sigma \in B = \pi_1(X/W)$,
we can lift it to a path $\tilde{\gamma}$ in $X$ with endpoints
$\underline{z}$ and $\pi(\sigma).\underline{z}$, where $\underline{z}$
is the chosen basepoint in $X$. The 1-forms $\tilde{\gamma}^* \omega_H$
can be written as $\gamma_H(t) \dd t$ for some function $\gamma_H$
on $[0,1]$, and the differential equation $\dd f = (\gamma^* \omega)f$
to consider is then $f'(t) = h(\sum_{H \in \mathcal{A}} \gamma_H(t) p_H)f(t)$,
with $f(0) = \mathrm{Id} \in \End(\mathbbm{C} \mathcal{A})$. Since the $p_H$ commute with one another, the
solution is easy to compute:
$$
f(t) = \prod_{H \in \mathcal{A}} \exp\left( h p_H \int_0^t \gamma_H(u) \dd u \right)
$$
and the monodromy representation is given by
$$\sigma \mapsto R_h(\sigma) = \pi(\sigma) \prod_{H \in \mathcal{A}} \exp(h p_H \int_{\gamma} \omega_H)$$
where we identified $w \in W$ with $R_0(w) \in \End(\mathbbm{C} \mathcal{A})$. In particular, the image of $P$ is commutative. More precisely, if $\gamma_0$ is a loop in $X$
around a single hyperplane $H$, the class $[\gamma_0] \in P$ is mapped to $\exp(2 \mathrm{i} \pi h p_H)$.
Since $P$ is generated by such classes, it follows that $R_n(P) = \{ \mathrm{Id} \}$ hence
$R_n$ factors through a representation of $W$ whenever $n \in \mathbbm{Z}$.
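The closed-form solution above can be illustrated numerically. In the toy sketch below (our own; two "hyperplanes", arbitrary illustrative functions $\gamma_H$ and an arbitrary value of $h$), a crude Euler integration of the differential equation is compared with the product of exponentials:

```python
# Toy numerical illustration (our own data) of the computation above: since the
# idempotents p_H commute, the solution of
#     f'(t) = h (sum_H gamma_H(t) p_H) f(t),  f(0) = Id,
# is the product of the exponentials exp(h p_H int_0^t gamma_H(u) du).
import numpy as np

h = 0.7 + 0.3j                                   # arbitrary
p = [np.diag([1.0 + 0j, 0.0]), np.diag([0.0, 1.0 + 0j])]
gam = [lambda t: np.cos(t) + 1j * t, lambda t: np.exp(-t)]  # arbitrary choices

n, T = 50_000, 1.0
dt = T / n

# crude Euler integration of the ODE
f = np.eye(2, dtype=complex)
for k in range(n):
    M = h * sum(g(k * dt) * pk for g, pk in zip(gam, p))
    f = f + dt * (M @ f)

# closed-form product, using exp(x p) = Id + (e^x - 1) p for an idempotent p
closed = np.eye(2, dtype=complex)
for g, pk in zip(gam, p):
    integral = sum(g(k * dt) for k in range(n)) * dt   # left Riemann sum
    closed = closed @ (np.eye(2) + (np.exp(h * integral) - 1) * pk)

assert np.allclose(f, closed, atol=1e-3)
```

The identity $\exp(x p) = \mathrm{Id} + (e^x - 1)p$ used in the last loop is what makes the monodromy formulas of this section explicit.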
We recall that $B$ is generated by so-called braided reflections
(`generators-of-the-monodromy' in \cite{BMR}), which are defined
as follows. For a distinguished reflection $s \in W$, an element $\sigma \in B$
with $\pi(\sigma) = s$ is called a braided reflection if it admits
as representative a path $\gamma$ from $\underline{z}$ to $s. \underline{z}$
which is a composite $(s.\gamma_0)^{-1}* \gamma_1 * \gamma_0 $
of paths with the following properties. Here $\gamma_0 : \underline{z} \rightsquigarrow
\underline{z}_0$, $\gamma_1 : \underline{z_0} \rightsquigarrow
s.\underline{z_0}$ and $(s.\gamma_0)^{-1} : s.\underline{z}_0 \rightsquigarrow
s.\underline{z}$ is the reverse path of $s.\gamma_0$,
and $\gamma_1(t) = \varepsilon \exp(2 \mathrm{i} \pi t/d_H) \underline{z_0}^- + \underline{z_0}^+$
where $\underline{z_0}^+$ and $\underline{z_0}^-$ are the orthogonal
projections of $\underline{z_0}$ on $H$ and $H^{\perp}$, respectively, for $\varepsilon>0$ small enough and
$\underline{z_0}$ sufficiently close to $H$ so that the homotopy class of this
path does not vary when $\varepsilon$ decreases and $\underline{z_0}^+ \not\in H'$ for
$H' \in \mathcal{A} \setminus \{ H \}$.
Note that $\int_{s. \gamma_0} \omega_{H'} = \int_{\gamma_0} \omega_{s(H')}$
for all $H' \in \mathcal{A}$, hence $\int_{\gamma} \omega_H =
\int_{\gamma_1} \omega_H = (2 \mathrm{i} \pi)/d_H$. In particular,
for such a braided reflection $\sigma$ we get
$$
R_h(\sigma).v_H = \pi(\sigma) \exp (h p_H \int_{\gamma} \omega_H) v_H = \exp( 2 \mathrm{i} \pi h /d_H) v_H.$$
\noindent Moreover, if $H$ and $H'$ have orthogonal
roots, then again $\int_{\gamma} \omega_{H'} = \int_{\gamma_1} \omega_{H'}$. But
in this case $\alpha_{H'}(\gamma_1(t))$ is constant hence $\int_{\gamma} \omega_{H'} = 0$.
An immediate consequence of this is that we can restrict ourselves to irreducible groups, namely
\begin{prop} \label{propR1deco} If $W = W_1 \times \dots \times W_r$
is a decomposition of $W$ in irreducible components, with corresponding decompositions
$B = B_1 \times \dots \times B_r$ and $\mathcal{A} = \mathcal{A}^1 \sqcup
\dots \sqcup \mathcal{A}^r$,
then $R_h = R_{h}^{(1)} \times \dots \times R_{h}^{(r)}$ with $R_{h}^{(k)} : W_k \to \GL(\mathbbm{C} \mathcal{A}^k)$.
\end{prop}
From the formulas above it follows that, under the action of $R_h$,
$\mathbbm{C} \mathcal{A}$ is the direct sum of the stable
subspaces $\mathbbm{C} \mathcal{A}_k$, where $\mathcal{A} = \mathcal{A}_1
\sqcup \dots \sqcup \mathcal{A}_r$ is the decomposition
of $\mathcal{A}$ in orbits under the action of $W$. We let
$R_h^k : B \to \GL(\mathbbm{C} \mathcal{A}_k)$, so that
$R_h = R_h^1 \oplus \dots \oplus R_h^r$.
\begin{prop} If $h \not\in \mathbbm{Z}$, then $R_h^k$ is irreducible for each $1 \leq k \leq r$.
\end{prop}
\begin{proof}
For each $H \in \mathcal{A}_k$ we choose a loop $\gamma_H$
based at $\underline{z}$ around the hyperplane $H$.
We have $\int_{\gamma_H} \omega_H = 2 \mathrm{i} \pi$ and
$\int_{\gamma_H} \omega_{H'} = 0$ for $H \neq H'$. Letting $Q_H$
denote the class of $\gamma_H$ in $P = \pi_1(X,\underline{z})$
we thus have $R_h^k(Q_H) = \exp(2 \mathrm{i} \pi h p_H)$, hence
$R_h^k(Q_H) - \mathrm{Id}$ is a nonzero multiple of $p_H$ if $h \not\in \mathbbm{Z}$.
It follows that the elements $R_h^k(Q_H)$ generate the commutative
algebra of diagonal matrices in $\End(\mathbbm{C} \mathcal{A}_k)$.
Let $\mathcal{G}_k$ be the oriented graph
on the $v_H, H \in \mathcal{A}_k$ with an edge
$(v_{H_1},v_{H_2})$ if there exists $x \in B$ such that
the matrix $R_h^k(x)$ has nonzero entry at $(v_{H_1},v_{H_2})$.
If $\mathcal{G}_k$ is connected, then $R_h^k$ is irreducible
(see e.g. \cite{IRRED} prop. 3 cor. 2).
Choosing
for each distinguished reflection $s \in W$ a braided reflection $\sigma$,
$R_h^{k}(\sigma)$ has nonzero entries in $(v_H,v_{s(H)})$
and $(v_{s(H)},v_H)$ for each $H \in \mathcal{A}$. Since $\mathcal{A}_k$ is
an orbit under $W$ and $W$ is generated by distinguished reflections, it follows that
$\mathcal{G}_k$ is connected, concluding the proof.
\end{proof}
Since $R_h$ factors through $W$ when $h \in \mathbbm{Z}$, this has the following consequence.
\begin{cor} \label{corsemisimple}
For all $h \in \mathbbm{C}$, the representation $R_h$ of $B$ is semisimple.
\end{cor}
We choose a collection of roots $e_H, H \in \mathcal{A}$.
Notice that, for $w \in W$, $w(H) =H$ implies $w.e_H = e^{\mathrm{i} \theta} e_H$
for some $\theta \in \mathbbm{R}$.
\begin{lemma} \label{lemgammaom} If $\gamma : \underline{z} \rightsquigarrow w.\underline{z}$
is a path in $X$ with $w \in W$ such that $w.e_H = e^{\mathrm{i} \theta} e_H$, then $\int_{\gamma} \omega \in \mathrm{i} \theta + 2 \mathrm{i} \pi \mathbbm{Z}$.
\end{lemma}
\begin{proof} We can assume $-\pi < \theta \leq \pi$. Since $\int_{\gamma} \omega_H$
is independent of the choice of $\alpha_H$, we can choose $\alpha_H : x \mapsto (e_H|x)$
with $(e_H|e_H) = 1$. We have $\alpha_H(w.x) = e^{\mathrm{i} \theta} \alpha_H(x)$. We write
$\gamma(t) = \gamma_H(t) + \gamma_0(t) e_H$ with $\gamma_0 : [0,1] \to \mathbbm{C}$
and $\gamma_H : [0,1] \to H$. Then $\alpha_H(\gamma(t)) = \gamma_0(t)$
and $\int_{\gamma} \omega_H = \int_{\gamma_0} \frac{\dd z}{z}$. Letting $x = \alpha_H(\underline{z}) \in \mathbbm{C}^{\times}$,
we have $\gamma_0 : x \rightsquigarrow e^{\mathrm{i} \theta} x$. If $\gamma_1 : x
\rightsquigarrow e^{\mathrm{i} \theta}x$ is an arbitrary path in $\mathbbm{C}^{\times}$, then $\gamma_0*\gamma_1^{-1}$
is a loop in $\mathbbm{C}^{\times}$, hence $\int_{\gamma_0} \frac{\dd z}{z} - \int_{\gamma_1}
\frac{\dd z}{z}$ is a multiple of $2 \mathrm{i} \pi$. If $e^{\mathrm{i} \theta} = 1$
this concludes the proof. If $e^{\mathrm{i} \theta} = -1$ we consider
$\gamma_1(t) = x e^{\mathrm{i} \pi t}$, for which $\int_{\gamma_1} \frac{\dd z}{z} = \mathrm{i} \pi$.
If $e^{\mathrm{i} \theta} = \zeta \not\in \{1,-1 \}$ we consider $\gamma_1(t) = (1-t)x + te^{\mathrm{i} \theta} x$
and $\int_{\gamma_1} \frac{\dd z}{z} = \left. \log(1 + (e^{\mathrm{i} \theta} -1)t) \right|_0^1$
where $\log$ denotes the natural determination of the logarithm over $\mathbbm{C} \setminus \mathbbm{R}^-$.
It follows that $\int_{\gamma_1} \frac{\dd z}{z} = \log e^{\mathrm{i} \theta} = \mathrm{i} \theta$,
and the conclusion follows.
\end{proof}
We recall from section 4 the definition of $\kappa(W)$.
$$
\kappa = \kappa(W) = \min \{ n \in \mathbbm{Z}_{>0} \ | \forall w \in W \ \forall H \in \mathcal{A} \ \
w.e_H = \zeta e_H \Rightarrow \zeta^n = 1 \}
$$
\begin{theo} \label{theoperiode}
For all $h \in \mathbbm{C}$, $R_{h + \kappa}$ is isomorphic to $R_h$. Moreover,
$\kappa$ is the smallest positive real number such that $R_{\kappa} \simeq
R_0$.
\end{theo}
\begin{proof}
Recall from corollary \ref{corsemisimple} that,
for all $h \in \mathbbm{C}$, $R_h$ is semisimple.
Letting $\chi_h$ denote the character of $R_h$ on $B$, it is thus sufficient
to prove $\chi_h = \chi_{h+\kappa}$ for all $h \in \mathbbm{C}$ in order
to get $R_{h + \kappa} \simeq R_h$.
Let $g \in B$ with $w = \pi(g)$, and $\gamma : \underline{z} \rightsquigarrow w.\underline{z}$
a representing path. By the explicit formulas above, we
have
$$
\chi_h(g) = \sum_{w(H) = H} \exp(h \int_{\gamma} \omega_H)
$$
and $R_{h+ \kappa} \simeq R_h$ follows by lemma \ref{lemgammaom}. We now show that $\kappa$ is minimal
with this property. Assuming otherwise, we let $0 < h < \kappa$
such that $\chi_h = \chi_0$. By definition of $\kappa$
there exists $w \in W$, $H \in \mathcal{A}$ such that $w.e_H = e^{\mathrm{i} \theta} e_H$
with $e^{\mathrm{i} \theta h} \neq 1$. Letting $g \in B$ with $\pi(g) = w$
and $\gamma : \underline{z} \rightsquigarrow w.\underline{z}$ a representing
path, we have $\int_{\gamma} \omega_H \in \mathrm{i} \theta + 2 \mathrm{i} \pi \mathbbm{Z}$,
hence $\exp(h \int_{\gamma} \omega_H) \neq 1$. It follows that $|\chi_h(g)| < \chi_0(g)$
hence a contradiction.
\end{proof}
\begin{prop} For any $H \in \mathcal{A}$ and $h \in \mathbbm{C}$,
if $\sigma$ is a braided reflection around $H$, then
$R_h(\sigma)$ is conjugate to $R_0(\sigma) \exp(h (2\mathrm{i} \pi/d_H) p_H)$.
\end{prop}
\begin{proof}
Let $\sigma$ be a braided reflection with corresponding paths $\gamma, \gamma_0,\gamma_1$
as above. Since $\gamma_0$ and $s.\gamma_0$ represent the same path in $X/W$,
$R_h(\sigma)$ is conjugate to the monodromy along the loop $\gamma_1$ in $X/W$,
so that we can assume $\underline{z} = \underline{z_0}$, $\gamma = \gamma_1$.
In view of the formulas above, we thus only need to show that $\int_{\gamma_1} \omega_{H'} = 0$
for $H' \neq H$. This can be done by direct computation,
as $\alpha_{H'}(\gamma_1(t)) = \varepsilon \exp(2 \mathrm{i} \pi t/d_H) \alpha_{H'}(\underline{z_0}^-) +
\alpha_{H'}(\underline{z_0}^+)$ with $\alpha_{H'}(\underline{z_0}^-) \neq 0$,
and $\int_{\gamma_1} \omega_{H'}$ is constant when $\varepsilon \to 0$. Since $\int_{\gamma_1} \omega_{H'} \to 0$
when $\varepsilon \to 0$ we get $\int_{\gamma} \omega_{H'} = 0$ and the conclusion.
\end{proof}
\subsection{New representations of $W$}
When $n \in \mathbbm{Z}$, the representation $R_n$ of $B$ factorizes through $W$.
In case $W$ is irreducible, the action of the center is easy to describe.
\begin{lemma} \label{lemR1centre} If $w \in W$ acts by $\lambda \in \mathbbm{C}^{\times}$
on $V$, then $R_n(w) = \lambda^n \mathrm{Id}$ if $n \in \mathbbm{Z}$. More generally,
if there exists $v \in X$ such that $w.v = \lambda v$ for some $\lambda \in \mathbbm{C}^{\times}$,
then $R_n(w)$ is conjugate to $\lambda^n R_0(w)$.
\end{lemma}
\begin{proof}
We first assume that $w$ acts on $V$ by $\lambda$. We can write $\lambda = \exp(\mathrm{i} \theta)$ with $0 < \theta \leq 2 \pi$. We consider
the loop $\gamma(t) = e^{\mathrm{i} \theta t} \underline{z}$ in $X/W$, whose image
in $W$ is $w$. By direct
calculation we have $\int_{\gamma} \omega_H = \mathrm{i} \theta$ for all $H \in \mathcal{A}$
and the conclusion follows from the general formula for $R_h$.
Now assume $w.v = \lambda v$ for some $\lambda = \exp(\mathrm{i} \theta)$ with $0 < \theta \leq 2 \pi$.
Up to conjugation, we can assume $v = \underline{z}$, the loop
$\gamma(t) = e^{\mathrm{i} \theta t} \underline{z}$ in $X/W$ has image $w$ in $W$ and we conclude as before.
\end{proof}
More involved tools prove the following.
\begin{prop} \label{proprestrparab} If $W_0$ is a parabolic subgroup of $W$ with hyperplane
arrangement $\mathcal{A}_0$ and $n \in \mathbbm{Z}$, then
the restriction of $R_n$ to $W_0$ is isomorphic to the direct sum
of the representation $R_n$ of $W_0$ and the permutation representation
of $W_0$ on $\mathbbm{C}(\mathcal{A} \setminus \mathcal{A}_0)$.
\end{prop}
\begin{proof}
We let $R_h^0$ denote the representation $R_h$ for $W_0$ acting on
$\mathbbm{C} \mathcal{A}_0$, and $S_h$ the direct sum of $R_h^0$ and
the permutation representation of $W_0$ on $\mathcal{A}\setminus
\mathcal{A}_0$. We can embed the braid group $B_0$ of $W_0$
inside $B$ such that, as representations over
$\mathbbm{C}[[h]]$, the restriction to $B_0$ of $R_h$ is isomorphic to
$S_h$ (see \cite{KRAMCRG}, theorem 2.9). In particular, for all $g \in B_0$,
the traces of $R_h(g)$ and $S_h(g)$ are equal, as formal series in $h$.
Since these traces are holomorphic functions in $h$, it follows that
they are equal for all $h \in \mathbbm{C}$. This means that the semisimple
representations of $B_0$ associated to the restriction of $R_h$ and to $S_h$
are isomorphic. Since the restriction of $R_n$ and $S_n$
are semisimple for all $n \in \mathbbm{Z}$
the conclusion follows.
\end{proof}
The determination of the action of the center
enables us to prove that, contrary to $R_0$, $R_1$ is faithful in general.
\begin{prop} { \ \ }
\begin{enumerate}
\item $R_0$ has kernel $Z(W)$.
\item $R_1$ is faithful on $W$.
\item $\Ker R_n = \{ w \in Z(W) \ | \ w^n = 1 \}$
\end{enumerate}
\end{prop}
\begin{proof}
Without loss of generality (because of proposition \ref{propR1deco}) we may assume that $W$ is irreducible.
Obviously $(3) \Rightarrow (2)$.
Although (1) is also a special case of (3), we prove it separately. If $|\mathcal{A}| = 1$ the statement is obvious, so
we assume $|\mathcal{A}| \geq 2$. Clearly $Z(W) \subset \Ker R_0$,
as $\Ker(wgw^{-1} - 1) = w.\Ker(g-1)$ for all $g,w \in W$. Let
$w \in W$ such that $R_0(w) = \mathrm{Id}$, that is $w(H) = H$ for all $H \in
\mathcal{A}$. Let $s \in W$ be a distinguished reflection with
reflection hyperplane $H$. Then $wsw^{-1}$ is a reflection with
$\Ker(wsw^{-1} -1) = H$ which has the same nontrivial eigenvalue as $s$,
hence $wsw^{-1} = s$.
It follows that $w$ commutes with all distinguished reflections of $W$,
hence $w \in Z(W)$ since $W$ is generated by such elements.
We now prove (3). Let $w \in \Ker R_n$. Since $R_n(w) = R_0(w) D$
for some diagonal matrix $D$, the nonzero entries of $R_n(w)$ determine
the permutation matrix $R_0(w)$, hence $w \in Z(W)$. Since $W$ is irreducible,
$w$ acts on $V$ by some scalar $\lambda \in \mathbbm{C}^{\times}$, hence $R_n(w) = \lambda^n \mathrm{Id}$
by lemma \ref{lemR1centre}; since $R_n(w) = \mathrm{Id}$, this yields $\lambda^n = 1$, hence $w^n = 1$. The converse inclusion is obvious
by lemma \ref{lemR1centre}.
\end{proof}
\begin{cor} The exponent of $Z(W)$ divides $\kappa(W)$.
If $W$ is irreducible then $|Z(W)|$ divides $\kappa(W)$.
\end{cor}
\begin{proof}
By the proposition, the period of the sequence $\Ker R_n$
is the exponent of $Z(W)$. Since $\Ker R_n$
is $\kappa(W)$-periodic the conclusion follows. If
$W$ is irreducible then $Z(W)$ is cyclic hence
its order equals its exponent.
\end{proof}
In the proof of theorem \ref{theoperiode}, we computed the character $\chi_n$
of $R_n$. We recall the result here:
\begin{prop} For any $w \in W$ and $n \in \mathbbm{Z}$ we have
$$
\chi_n(w) = \sum_{w.e_H = \zeta e_H} \zeta^n
$$
\end{prop}
If $\tilde{K} = \mathbbm{Q}(\zeta_d)$ is a cyclotomic field containing all eigenvalues of
$R_1(W)$, then letting $c_n \in \mathrm{Gal}(\tilde{K}|\mathbbm{Q})$ for $n \wedge d = 1$ be defined by
$c_n(\zeta_d) = \zeta_d^n$ we get from this proposition that $\chi_n = c_n \circ \chi_1$
for all $n$ prime to $d$.
\medskip
As an illustration of this section,
we work out the example of $W$ of type $G_4$, generated
by
$$
s = \begin{pmatrix} 1 & 0 \\ 0 & j \end{pmatrix} \ \
t = \frac{1}{3}\begin{pmatrix} 1+2j & j-1 \\ 2j-2 & j+2 \end{pmatrix}.
$$
It is a reflection group of order 24, with two generators
$s,t$ of order 3 satisfying $sts=tst$, and center of order 2. It admits 3 one-dimensional
(irreducible) representations $S_{\alpha} : s,t \mapsto \alpha$, 3 two-dimensional
representations $A_{\alpha}$ with $\tr A_{\alpha}(s) = -\alpha$
for $\alpha \in \{ 1 , j,j^2 \}$ with $j = \exp(2 \mathrm{i} \pi/3)$
and a 3-dimensional one that we denote $U$.
The reflection representation is $A_{j^2}$,
and $\kappa(W) = 6$. From the character table of $W$ one gets
$$\begin{array}{lcllcllcl}
R_0 &= & S_1 + U & R_1 &= & A_1 + A_{j^2} & R_2 & = & S_{j^2} + U \\
R_3 &=& A_j + A_{j^2} & R_4 &=& S_j + U & R_5 & = & A_1 + A_{j^2}
\end{array}
$$
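The stated properties of this example can be checked numerically. The sketch below (our own, using a floating-point closure with rounding) verifies that the two matrices have order 3, satisfy the braid relation, generate a group of order 24, and that the center has order 2:

```python
# Numerical check (our own) of the G_4 example above: the matrices s, t have
# order 3, satisfy sts = tst, generate a group of order 24 whose center has
# order 2.  Group elements are collected by brute-force closure.
import numpy as np

j = np.exp(2j * np.pi / 3)
s = np.array([[1, 0], [0, j]])
t = np.array([[1 + 2*j, j - 1], [2*j - 2, j + 2]]) / 3

assert np.allclose(np.linalg.matrix_power(s, 3), np.eye(2))
assert np.allclose(np.linalg.matrix_power(t, 3), np.eye(2))
assert np.allclose(s @ t @ s, t @ s @ t)       # braid relation

def key(m):
    """Hashable rounded form of a matrix, to collect group elements."""
    return tuple(np.round(m, 6).flatten().tolist())

group = {key(np.eye(2)): np.eye(2, dtype=complex)}
frontier = [np.eye(2, dtype=complex)]
while frontier:                                # closure under right-multiplication
    g = frontier.pop()
    for x in (s, t):
        h = g @ x
        if key(h) not in group:
            group[key(h)] = h
            frontier.append(h)

assert len(group) == 24                        # |W| = 24
center = [g for g in group.values()
          if all(np.allclose(g @ x, x @ g) for x in (s, t))]
assert len(center) == 2                        # center of order 2
```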
\subsection{The case of Coxeter groups}
If $W$ is a Coxeter group, we get a simpler form of this representation.
Recall that, in this case, $\mathcal{A}$ is the
complexification of some real arrangement $\mathcal{A}_0$ in $V_0$,
where $V_0$ is a real form of $V$; moreover, choosing
some connected component $\mathcal{C}$ of $V_0 \setminus \bigcup \mathcal{A}_0$,
called a Weyl chamber, determines $n$ hyperplanes $H_1,\dots,H_n$ called
the \emph{walls} of $\mathcal{C}$, and the corresponding $n$ reflections $s_1,\dots,s_n$
are called the simple reflections associated to $\mathcal{C}$.
If $\underline{z} \in \mathcal{C}$, there is also a special
set of generators for $B$, namely the braided reflections $\sigma_i$ around
$H_i$ such that $\gamma_0$ is a straight (real) segment orthogonal to $H_i$.
These are called the Artin generators of $B$ (associated to a choice
of Weyl chamber).
\begin{prop} If $W$ is a Coxeter group
with simple reflections $s_1,\dots,s_n$,
then $\sigma_i \mapsto R_0(s_i) \exp( \mathrm{i} \pi h p_{H_i})$
defines a representation of $B$ which is equivalent to $R_h$. In particular,
$R_1$ is equivalent to a representation of $W$ on $\mathbbm{C} \mathcal{A}$
for which $s_i.v_H = v_{s_i(H)}$ if $H \neq H_i$ and $s_i.v_{H_i} = -v_{H_i}$,
and $R_{h+2}$ is equivalent to $R_h$ for any $h \in \mathbbm{C}$, while $R_1 \not\simeq R_0$.
\end{prop}
\begin{proof}
We introduce the Weyl chamber $\mathcal{C} \subset V_0$ with respect to the
simple reflections $s_1,\dots,s_n$, with
walls
$H_i = \Ker(s_i - 1)$, $1 \leq i \leq n$.
Up to conjugacy the base point $\underline{z}$ can be chosen inside the Weyl chamber,
and we define roots $e_H \in V_0$ of norm 1 such that
$\mathbbm{C} e_H = \Ker(s-1)^{\perp}$ and $(e_H | \underline{z}) > 0$
for $\underline{z} \in \mathcal{C}$. We choose for $\alpha_H$
the linear form $x \mapsto (e_H|x)$. Let us denote $\log^+$
the complex logarithm on $\mathbbm{C} \setminus \mathrm{i} \mathbbm{R}_-^{\times}$,
and define
$$
D_h = \prod_{H \in \mathcal{A}} \exp(h\, p_H\log^+ (e_H|\underline{z}))
$$
We consider a simple reflection $s_i$ around a wall $H_i$. Then
the path $\gamma$ representing $\sigma_i$ can be chosen with $\varepsilon$
small enough so that $(e_{H}|\gamma(t))$ has positive real part
for each $t \in [0,1]$ and $H \neq H_i$. It follows that $t \mapsto \log^+(e_H|\gamma(t))$
has differential $\gamma^*\omega_H$ and $R_h(\sigma_i)$ equals
$$
R_0(s_i) \prod_{H \in \mathcal{A}} \exp(h p_H \int_{\gamma} \omega_H)
= R_0(s_i) \prod_{H \in \mathcal{A}} \exp\left( h p_H (\log^+ (e_H|s_i . \underline{z}) - \log^+ (e_H|\underline{z}))\right)
$$
(see \cite{KRAMCRG}, lemma 7.10). Moreover, $(e_H|s_i.\underline{z}) = (s_i.e_H | \underline{z})
= (e_{s_i(H)}|\underline{z})$ if $H \neq H_i$ (see e.g. \cite{KRAMCRG}, lemma 7.9)
and $(e_{H_i}|s_i.\underline{z}) = -(e_{H_i}|\underline{z})$. It follows that
$$
R_h(\sigma_i) = s_i \exp( \mathrm{i} \pi h p_{H_i})\prod_{H \in \mathcal{A}\setminus\{ H_i \}}
\exp\left( h p_H (\log^+ (e_{s_i(H)}|\underline{z}) - \log^+ (e_H|\underline{z}))\right)
$$
namely
$$
R_h(\sigma_i) = D_h s_i \exp( \mathrm{i} \pi h p_{H_i}) D_h^{-1}
$$
for all $i \in [1,n]$, which concludes the proof. $R_1 \not\simeq R_0$
because $\tr R_1(s_1) = \tr R_0(s_1) - 2$.
\end{proof}
The representation of $W$ described in this proposition for $h = 1$ is natural in the realm of root systems.
Indeed, if a set $\mathcal{P}$ of roots for $\mathcal{A}_0$ is chosen,
such that $\mathcal{P}$ satisfies the axioms $(SR)_{I}$ and $(SR)_{II}$ of a root system
(see \cite{BOURB}), and $\mathcal{P}$ is subdivided into positive and negative
roots $\mathcal{P}^+, \mathcal{P}^-$ according to the chosen Weyl chamber,
where $\mathcal{P}^+ = \{ e_H, H \in \mathcal{A} \}$, then the representation described here is isomorphic to one on $\mathbbm{C} \mathcal{P}^+$
described by $w.f_{H} = f_{w(H)}$ if $w.e_H \in \mathcal{P}^+$ and
$w.f_H = - f_{w(H)}$ if $w.e_H \in \mathcal{P}^-$, where $f_H$ denotes the basis element
of $\mathbbm{C} \mathcal{P}^+$ corresponding to $e_H \in \mathcal{P}^+$.
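For a concrete instance of this signed action on positive roots, consider $W$ of type $A_2$ (so $W \simeq \mathfrak{S}_3$, with positive roots $a$, $b$, $a+b$). The matrices below are our own encoding of the representation just described; the checks confirm the Coxeter and braid relations, and the trace computation shows again that $R_1 \not\simeq R_0$:

```python
# Our own toy example, W of type A_2: the representation w.f_H = +/- f_{w(H)}
# on positive roots, with a minus sign exactly when w sends e_H to a negative
# root.  Basis: f_0 = f_a, f_1 = f_b, f_2 = f_{a+b}.
# s_a: a -> -a, b -> a+b, a+b -> b ;  s_b: b -> -b, a -> a+b, a+b -> a
import numpy as np

Sa = np.array([[-1, 0, 0],
               [ 0, 0, 1],
               [ 0, 1, 0]])
Sb = np.array([[ 0, 0, 1],
               [ 0,-1, 0],
               [ 1, 0, 0]])

I = np.eye(3, dtype=int)
assert (Sa @ Sa == I).all() and (Sb @ Sb == I).all()      # order 2
assert (Sa @ Sb @ Sa == Sb @ Sa @ Sb).all()               # braid relation of A_2
assert (np.linalg.matrix_power(Sa @ Sb, 3) == I).all()    # (s_a s_b)^3 = 1
assert np.trace(Sa) == -1      # while the permutation character gives tr R_0(s_a) = 1
```

The last line shows $\tr R_1(s_a) = -1 \neq 1 = \tr R_0(s_a)$, so $R_1 \not\simeq R_0$ already for $A_2$.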
Finally, we notice that, when $W$ is a Coxeter group, then the representation $R_h$
for arbitrary $h$ factorizes through the extended Coxeter group $B/(P,P)$ introduced
by J. Tits in \cite{TITS}.
\medskip
We give in the following table the decomposition in irreducibles
of $R_0,R_1$ for the classical Coxeter groups of type $A_n, B_n, D_n$.
We label as usual irreducible representations of $\mathfrak{S}_n$
by partitions of size $n$ (with the convention that $[n]$ is the trivial
representation), of $W$ of type $B_n$ by couples of partitions $(\lambda,\mu)$
of total size $n$, and denote $\{ \lambda ,\mu \}$ the restriction of
$(\lambda,\mu)$ to the usual index-2 subgroup of $W$ of type $D_n$. Recall
that $\{ \lambda, \mu \} = \{ \mu , \lambda \}$ is irreducible
if and only if $\lambda \neq \mu$.
$$
\begin{array}{|l||l|}
\hline
& R_0 \\
\hline
\hline
A_n, n \geq 3& [n-1,2]+[n,1] + [n+1] \\
B_n, n \geq 4 & ([n-2,2],\emptyset) + ([n-2],[2]) + 2([n-1,1],\emptyset) + 2([n],\emptyset) \\
B_3 & ([1],[2]) + 2 ([2,1],\emptyset) + 2([3],\emptyset) \\
D_n, n \geq 4 & \{ [n-2,2],\emptyset\} + \{ [n-2],[2] \} + \{ [n-1,1],\emptyset \} + \{ [n],\emptyset \} \\
\hline
\hline
& R_1 \\
\hline
\hline
A_n, n \geq 3& [n-1,1,1]+[n,1] \\
B_n, n \geq 3 & ([n-2,1],[1]) + 2([n-1],[1]) \\
D_n, n \geq 4 & \{ [n-2,1],[1]\} + \{ [n-1],[1] \} \\
\hline
\end{array}
$$
We sketch a justification of this table. For small values of $n$,
we prove this by using the character table. Then we use induction
with respect to a natural parabolic subgroup $W_0$ in the same
series, for which the branching rule is well-known. Restrictions
of $R_0$ and $R_1$ to this parabolic subgroup are then
isomorphic to the sum of the corresponding
representation $R_0$ or $R_1$ of the subgroup, plus
the permutation action of the reflections in $W$
which do not belong to $W_0$ (this is clear
for $R_0$, and a consequence of proposition \ref{proprestrparab} for $R_1$).
The decomposition in irreducibles of this permutation
representation is easy, namely $[n-1,1]+[n]$ for $A_n$,
$([n-2],[1]) + ([n-2,1],\emptyset) + 2([n-1],\emptyset)$
for $B_n$ and $\{ [n-2],[1] \} + \{[n-2,1],\emptyset\} + \{ [n-1],\emptyset \}$
for $D_n$. This provides the
restrictions of $R_0$ and $R_1$ to $W_0$.
From the combinatorial branching rule it
is easy to check that, for say $n \geq 5$,
only the given decompositions admit these
restrictions.
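The dimension counts in the $A_n$ rows of the table can also be cross-checked directly: both $R_0$ and $R_1$ must have dimension $|\mathcal{A}| = n(n+1)/2$. The sketch below (our own) verifies this for $n = 5$ using the standard hook length formula for dimensions of irreducible representations of symmetric groups:

```python
# Our own sanity check of the A_n rows of the table: the decompositions of R_0
# and R_1 must both have total dimension |A| = n(n+1)/2, the number of
# hyperplanes of A_n (here W = S_{n+1}).
from math import factorial

def dim_irrep(part):
    """Dimension of the S_m irreducible labelled by a partition (hook lengths)."""
    m = sum(part)
    hooks = 1
    for i, row in enumerate(part):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in part[i + 1:] if r > j)
            hooks *= arm + leg + 1
    return factorial(m) // hooks

n = 5                                    # W of type A_5, i.e. S_6, |A| = 15
nA = n * (n + 1) // 2
assert dim_irrep([n-1, 2]) + dim_irrep([n, 1]) + dim_irrep([n+1]) == nA  # R_0
assert dim_irrep([n-1, 1, 1]) + dim_irrep([n, 1]) == nA                  # R_1
```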
\section{Tables for $\kappa(W)$}
We compute here the value of $\kappa(W)$ for all irreducible
reflection groups $W$. More precisely, we compute all $d \in \mathbbm{Z}_{>0}$
such that there exist $w \in W$ and $H \in \mathcal{A}$ with
$w.e_H = \zeta e_H$ and $\zeta$ of order $d$. We call these
integers the $\mathcal{A}$-indices of $W$.
Recall that the group $G(de,e,r)$ for $r \geq 2$ is defined as the
set of $r \times r$ monomial matrices with nonzero entries in $\mu_{de}(\mathbbm{C})$,
such that the product of these nonzero entries lies in $\mu_d(\mathbbm{C})$.
\begin{prop} The $\mathcal{A}$-indices of $W = G(de,e,r)$ are exactly
the divisors of $\kappa(W)$. Moreover, $\kappa(W) = de$
if $d \neq 1$ or $r \geq 3$. If $W = G(e,e,2)$
then $\kappa(W) = 2$.
\end{prop}
\begin{proof}
Since $G(e,e,2)$ is a Coxeter (dihedral) group, we can assume
$d \neq 1$ or $r \geq 3$. First note that the standard
hermitian scalar product on $\mathbbm{C}^r$ is invariant under $W$.
We introduce the hyperplane arrangement
$$
\mathcal{A}^0_{de,r} = \{ z_i - \zeta z_j = 0\ | \ \zeta \in \mu_{de}(\mathbbm{C}), \ 1 \leq i < j \leq r \}
$$
We have $\mathcal{A}^0_{de,r} \subset \mathcal{A}$, and the orthogonal
to $H : z_i - \zeta z_j = 0$ is spanned by $e_H = e_i -\zeta^{-1}e_j$,
if $e_1,\dots,e_r$ denotes the canonical basis of $\mathbbm{C}^r$. Let
$w \in W$. Since $w$ is a monomial matrix, there exist $\lambda_1,\dots,\lambda_r \in
\mu_{de}(\mathbbm{C})$ with $\prod \lambda_i \in \mu_d(\mathbbm{C})$,
and $\sigma \in \mathfrak{S}_r$
such that $w.e_i = \lambda_i e_{\sigma(i)}$.
Then $w. e_H = \mu e_H$ iff $\lambda_i e_{\sigma(i)} - \lambda_j \zeta^{-1} e_{\sigma(j)} =
\mu \lambda_i e_i + \mu \lambda_j e_j$. The two possibilities are
$\mu=1, \zeta = 1$ or $\mu \lambda_j = \lambda_i, \mu \lambda_i = \lambda_j \zeta^{-1}$,
that is $\mu^2 = \zeta^{-1}, \mu = \lambda_i \lambda_j^{-1}$.
It follows that $\mu \in \mu_{de}(\mathbbm{C})$. Conversely, assume we choose $\mu \in
\mu_{de}(\mathbbm{C})$, and let $\zeta = \mu^{-2}$. If $r \geq 3$ we can define
$w \in W$ by $\sigma = (1\ 2)$, $\lambda_2=1$, $\lambda_1 = \mu$, $\lambda_3 = \mu^{-1}$,
$\lambda_k = 1$ for $k \geq 4$, and $w.e_H = \mu e_H$ for $H : z_1 - \zeta z_2 = 0$.
We have $\mathcal{A} = \mathcal{A}^0_{de,r}$ when $d = 1$, so this
settles this case and we can assume $d \neq 1$. In that
case, $\mathcal{A} = \mathcal{A}^0_{de,r} \cup \mathcal{A}^+_r$,
where $\mathcal{A}^+_r$ consists of the hyperplanes $H_i : z_i = 0$,
whose orthogonals are spanned by the $e_i$. If $w.e_i=\mu e_i$
for $w \in W$ we obviously have $\mu \in \mu_{de}(\mathbbm{C})$, and conversely
if $\mu \in \mu_{de}(\mathbbm{C})$ we can define $w \in W$ by $w.e_1 = \mu e_1,
w.e_2 = \mu^{-1} e_2$ and $w.e_i = e_i$ for $i \geq 3$. It follows
that in this case too the set of $\mathcal{A}$-indices is
the set of divisors of $de$.
\end{proof}
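As an elementary illustration of this computation (our own brute-force sketch, for the smallest Coxeter case $W = G(2,1,2)$, the group of type $B_2$ with $\kappa = de = 2$), one can enumerate all monomial matrices and record the order of every scalar $\zeta$ with $w.e_H = \zeta e_H$:

```python
# Our own brute-force check of the A-indices for W = G(2,1,2) (type B_2):
# enumerate all monomial matrices w and all roots e_H, and record the order of
# zeta whenever w.e_H = zeta e_H.  Expected: the divisors of kappa = de = 2.
import cmath
from itertools import product, permutations

de, r = 2, 2                         # G(2,1,2): d = 2, e = 1
om = cmath.exp(2j * cmath.pi / de)

# roots: e_1 - zeta^{-1} e_2 for zeta in mu_2, plus e_1 and e_2 (since d != 1)
roots = [(1, -om**(-k)) for k in range(de)] + [(1, 0), (0, 1)]

def act(lam, sigma, v):
    """w.e_i = om^{lam_i} e_{sigma(i)}, extended linearly to the vector v."""
    out = [0j] * r
    for i in range(r):
        out[sigma[i]] += (om ** lam[i]) * v[i]
    return out

indices = set()
for lam in product(range(de), repeat=r):      # e = 1: no constraint on the lam_i
    for sigma in permutations(range(r)):
        for v in roots:
            u = act(lam, sigma, v)
            if abs(u[0] * v[1] - u[1] * v[0]) < 1e-9:          # u parallel to v
                i = 0 if abs(v[0]) > 0.5 else 1
                zeta = u[i] / v[i]
                order = next(n for n in range(1, 2 * de + 1)
                             if abs(zeta ** n - 1) < 1e-9)
                indices.add(order)

assert indices == {1, 2}             # divisors of kappa(G(2,1,2)) = 2
```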
By noticing that $G(2,1,r)$, $G(2,2,r)$ and $G(e,e,2)$
are Coxeter groups, this gives the following.
\begin{cor} For $W = G(de,e,r)$, we have $\kappa(W) = 2$ iff $W$ is a Coxeter group, if and only if
$de = 2$ or $(d,r) = (1,2)$.
\end{cor}
By checking out the 34 exceptional reflection groups, we prove case
by case the following.
\begin{prop} Let $W$ be an irreducible complex
reflection group. The set of $\mathcal{A}$-indices
is exactly the set of divisors of $\kappa(W)$.
\end{prop}
The following table gives the value of $\kappa(W)$, where $W$ is a complex
reflection group labelled by its Shephard-Todd number (ST).
$$
\begin{array}{||c|c||c|c||c|c||c|c||c|c||c|c||}
\mathrm{ST} & \kappa & \mathrm{ST} & \kappa & \mathrm{ST} & \kappa & \mathrm{ST} & \kappa & \mathrm{ST} & \kappa & \mathrm{ST} & \kappa \\
\hline
4 & 6 & 10&12 & 16 &10 & 22 & 4 & 28 & 2 & 34 & 6 \\
5 & 6& 11 &24 & 17 &20 & 23 & 2 & 29 &4 & 35 & 2 \\
6 & 12& 12 &2 & 18 &30 & 24 & 2 & 30 & 2 & 36 & 2 \\
7 & 12& 13 &8 & 19 &60 & 25 & 6& 31 & 4 & 37 & 2 \\
8 & 4& 14 &6 & 20 & 6 & 26 & 6& 32 & 6 & & \\
9 & 8& 15 &24 & 21 & 12 & 27 & 6& 33 & 6 & & \\
\end{array}
$$
We remark that the only non-Coxeter irreducible reflection groups with $\kappa(W) = 2$
are $G_{12}$ and $G_{24}$. As in the case of $G_{12}$, it is straightforward to check that it
is possible to choose the 21 linear forms $\alpha_H$ such that the linear map $\Phi :
\mathbbm{C} \mathcal{A} \to S^2 V^*$ is a morphism of $W$-modules. This phenomenon is reminiscent
of the special properties of their ``root systems'' in the sense of \cite{COHEN}. We refer to \cite{SHI} \S 2 and
\S 4 for a detailed study of these special root systems of type $G_{12}$ and $G_{24}$. In particular,
convenient linear forms for $G_{24}$ are described in \cite{SHI}, \S 4.1.
As a consequence
of this case-by-case investigation, propositions \ref{propequivPhiCox}
and \ref{propPhiEqKap2} can
be strengthened as follows.
\begin{theo} Let $W$ be an irreducible reflection group. The linear
forms $\alpha_H$ can be chosen such that $\Phi$
is a morphism of $W$-modules if and only if $\kappa(W) = 2$.
This is the case exactly when $W$ is a Coxeter group
or an exceptional reflection group of type $G_{12}$ or $G_{24}$.
\end{theo}
\section{Introduction}
Information in the brain is mainly represented in the form of neural impulses.
All those impulses are roughly identical in their height and width and are called
spikes, see Fig. \ref{spikes}.
The only thing that matters is the time at which such an impulse
has been generated or received.
If neural impulses are recorded with proper biophysical instruments, one obtains a highly
irregular sequence. It is called a spike train.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig1EN.eps}
\caption{\label{spikes}
An example of electrical activity in the environment of neurons.
The recording is made by an electrode, which is placed in the environment. Therefore, the activity of several neurons is registered simultaneously.
When a neuron generates an output impulse, its membrane potential changes drastically.
At these moments, the figure shows short-term jumps of voltage, or spikes.
Different spikes have different heights because they belong to different neurons that are at different distances from the recording electrode. A single neuron generates spikes of the same height.
Modified from: {\tt https://backyardbrains.com/experiments/spikerbox}.
}
\end{figure}
It is difficult to find any rational meaning in these moments
of receiving spikes, or interspike
intervals. The situation is even worse: In most cases, those sequences are not reproduced
when the same stimulus is presented to an experimental animal
several times.
This might be the first reason why neuroscientists are mainly interested
in the statistical properties of spike trains.
Theoretical physicists as well try to predict what kind of statistics it could
be and how it changes with varying stimuli or model parameters.
In this direction, there is a long-standing discussion.
What indeed represents meaningful information in a spike train?
Is it the mean number of spikes per time unit (rate coding), or are their exact temporal
positions (time coding) essential?
There is no clear answer to this question.
Initially, it was supposed that rate coding operates at the periphery of
the nervous system. An example at the actuator periphery of the brain is
the neuro-muscular junction in motoneurons \cite[Sec. 5.01.12]{Werner2014}. Namely,
the only command passed to the muscle from the motoneuron is the
contraction strength. But the contraction strength
is determined by the neurotransmitter concentration, which is released from
neural endings with each spike arrival. The more spikes per time unit,
the higher the neurotransmitter level, and the higher the contraction
strength. So here we have rate coding.
An example at the sensory periphery of the brain is the olfactory receptor neuron \cite{Brookes2011},
where the number of spikes per time unit depends on the odor concentration.
However, even at the sensory periphery, the time coding can be the coding paradigm.
This is observed for echolocation \cite{Denny2004}, where the temporal position of spikes
from two ears should be kept with microsecond precision.
It is also clear that in the time coding mechanism a spike train can bear
more information than in the rate coding one. This could be essential for more
sophisticated intellectual tasks than muscle contraction or odor sniffing.
Unfortunately, most attempts to calculate neuronal firing statistics exclude the
possibility of time coding because they utilize the so-called diffusion
approximation.
In this approach, the neuronal stimulus is modeled as a diffusion stochastic process,
such as the Wiener or Ornstein-Uhlenbeck process, see \cite{Gardiner1985,Sacerdote2013}.
In a diffusion process, any finite
time interval contains infinitely many infinitesimal spikes obtained
from the differentiation of a Wiener process.
Therefore, there is no place for the time coding mechanism.
At the same time, the output activity of a neuron stimulated with a
diffusion process is represented by finite spikes emitted when neuronal
membrane voltage crosses the firing threshold.
The time intervals between those spikes are finite, see Fig. \ref{finite}.
\begin{figure*}
\centering \includegraphics[width=0.7\textwidth]{Fig2EN.eps}
\caption{\label{finite}The time course of the membrane potential, simulated by the diffusion process. At the moment when the membrane potential $ V (t) $ reaches the firing threshold $ \mathbb V_0 $, the output impulse (spike) is generated.
Modified from: A Iolov, S Ditlevsen \& A Longtin, DOI 10.1186/2190-8567-4-4.
}
\end{figure*}
Those spikes constitute not a diffusion but a point process and, therefore, cannot be
fed into another neuron without abandoning the diffusion approximation.
This means that the diffusion approximation approach is both incomplete and
inconsistent.
Therefore, an attempt was made to calculate firing statistics without
diffusion approximation.
In the following sections, we briefly formulate the previously obtained results on which this work is based. These results concern the statistics of the activity of the leaky integrate-and-fire (LIF) neuron with a threshold of 2. In particular, explicit expressions were obtained earlier \cite{Vidybida2014a, Vidybida2016} for the distribution function of the output interspike intervals (ISIs) on the initial segment of ISI durations. The stream of input impulses is considered to be Poissonian.
For larger values of ISI, the distribution function is represented as the sum of multiple integrals.
This enabled us to calculate the first moment of the distribution function (mean ISI).
In the current paper, we have found another representation of the distribution function, which provides the means to calculate the moment-generating function. By differentiating the latter, a distribution moment of any order can be found.
Also, according to the Curtiss theorem \cite{Curtiss1942}, the moment-generating function completely determines the distribution function itself.
\section{Preliminary results}
\subsection{Model description}\label{model}
The leaky integrate-and-fire neuron \cite{Burkitt2006} is characterized by three positive constants:
$\tau$ is the relaxation time;
$\mathbb V_0$ is the firing threshold;
$h$ is the input impulse height.
At any given time $t$ the state of LIF neuron is defined by the non-negative
real number $V(t)$, which is the deviation of the transmembrane potential difference from the rest state towards depolarization, or, in other words, the magnitude of the excitation. Here it is assumed that at rest $ V = 0 $, and the depolarization corresponds to a positive value of $V$.
The presence of the leakage means that in the absence of external stimuli, the value of $ V (t) $ decreases exponentially:
\begin{equation}\label{tau}
V(t+s)=V(t) e^{-s/\tau},
\qquad s>0.
\end{equation}
Input stimuli are input impulses.
An input impulse obtained at time $ t $
increases $ V (t) $ by $ h $:
\begin{equation}
\label{h}
V(t) \rightarrow V(t) + h.
\end{equation}
The neuron is characterized by the firing threshold $ \mathbb V_0 $.
The latter means that as soon as the condition
$
V(t) > \mathbb V_0
$
is met, the LIF neuron generates an output impulse
and resets to the rest state, $ V (t) = 0 $.
Regarding $ h $ and $ \mathbb V_0 $, we make the following assumption:
\begin{equation}\label{cond}
0<h< \mathbb V_0 < 2h.
\end{equation}
From (\ref{tau}) and (\ref{h}), it follows that the LIF neuron can generate an output impulse only at the time of receiving the input one.
The condition (\ref{cond}) means that one input impulse,
applied to the LIF neuron at rest, is not enough to generate an output impulse. However, two input impulses arriving within a sufficiently short time interval can excite the LIF neuron above the threshold and trigger an output impulse. This means that the neuron has a firing threshold of 2.\footnote{See \cite{Kravchuk2016}, where the case of higher thresholds is considered.}
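These dynamics are easy to simulate. The sketch below (an illustration, not the authors' code) feeds input impulses at prescribed times and shows that a single impulse never fires the neuron, whereas two sufficiently close impulses do; the numerical values of $\tau$, $\mathbb V_0$, $h$ are those used later in the numerical verification.

```python
import math

# A minimal sketch of the LIF dynamics with threshold 2.
tau, V0, h = 20.0, 20.0, 11.2   # ms, mV, mV; note h < V0 < 2h

def run(impulse_times):
    """Feed input impulses at the given times; return times of output spikes."""
    V, t_prev, spikes = 0.0, 0.0, []
    for t in impulse_times:
        V *= math.exp(-(t - t_prev) / tau)   # exponential leak, Eq. (tau)
        V += h                               # input impulse, Eq. (h)
        if V > V0:                           # threshold crossing
            spikes.append(t)
            V = 0.0                          # reset to the rest state
        t_prev = t
    return spikes

print(run([10.0]))              # one impulse: no output spike
print(run([10.0, 12.0]))        # two close impulses: fires at t = 12
print(run([10.0, 40.0]))        # two distant impulses: no output spike
```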
\subsection{Distribution function of ISI durations\protect\footnote{In the current section and below, $t$ denotes ISI duration.}}
We assume that the neuron described in Sec.~\ref{model} is stimulated by a stream of input impulses which forms a stochastic Poisson process of intensity $ \lambda $.
The latter means that the probability of obtaining the ISI of duration $ t $ with the precision $ dt $ at the input is given by the following expression:
\begin{equation} e^{-\lambda t}\, \lambda\,dt,
\end{equation}
and input ISIs are statistically independent.
We introduce the following notations:
\begin{multline}\label{T23}
T_2 = \tau\ln(\frac{h}{\mathbb V_0-h}),~ T_3 = \tau\ln(\frac{\mathbb V_0}{\mathbb V_0-h}),~
\\
\Theta_m=T_2 + (m-3)T_3,~ m=3,\dots.
\end{multline}
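The constant $T_2$ has a transparent meaning: starting from rest, a pair of input impulses separated by the interval $s$ triggers firing if and only if $s<T_2$, since $h\,e^{-s/\tau}+h>\mathbb V_0$ is equivalent to $s<\tau\ln\bigl(h/(\mathbb V_0-h)\bigr)$. A quick numerical check (a sketch; the parameter values match those of the numerical section):

```python
import math

tau, V0, h = 20.0, 20.0, 11.2            # parameters as in the numerical section
T2 = tau * math.log(h / (V0 - h))        # firing window after a single impulse
T3 = tau * math.log(V0 / (V0 - h))
def Theta(m):                            # breakpoints of the ISI distribution
    return T2 + (m - 3) * T3

def fires_after(s):
    """Does a second impulse, s ms after the first, push V above V0?"""
    return h * math.exp(-s / tau) + h > V0

print(round(T2, 2), round(T3, 2), round(Theta(4), 2))  # 4.82 16.42 21.24
print(fires_after(0.99 * T2), fires_after(1.01 * T2))  # True False
```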
In papers \cite{Vidybida2014a,Vidybida2016},
the following formula is obtained for distribution function of output ISI:
\begin{multline}\label{Sumiii}
P(t)dt=
\sum\limits_{k=2}^{m-1}
\left(P_k^0(t)\lambda dt - P_k^-(t)\lambda dt\right)
+P_m^0(t)\lambda dt,
\\
t\in\,]\Theta_m;\Theta_{m+1}],\, m\ge2 ,
\end{multline}
where $P(t)dt$ is the probability to obtain output ISI of duration $ t $ with the precision $ dt $.
Functions on the right side of (\ref{Sumiii}) are defined as follows:
\begin{multline}\label{F-la}
P_{k+1}^0(t)\lambda\,dt=
\int\limits_{\Theta_{k+1}}^t P_k^-(s)\lambda\, ds\,e^{-\lambda(t-s)}\lambda\,dt,
\\
t\ge \Theta_{k+1},\quad
k=2,3,\dots ,
\end{multline}
\begin{equation}\label{P23}
P_k^-(t)\lambda dt = e^{-\lambda t}\lambda^k dt
\int\limits_{\lbot1}^{\ltop1} dt_1
\int\limits_{\lbot2}^{\ltop2} dt_2
\dots
\int\limits_{\lbot{k-1}}^{\ltop{k-1}} dt_{k-1},
\end{equation}
where the limits of integration are defined through the following inequalities:
\begin{equation}\label{P23limits}
\begin{cases}
0 \le t_1 \le t-\Theta_{k+1},
\\
T_2 +
\tau\ln\left(\sum\limits_{1\le j\le i} e^{t_j/\tau}\right)
\le
t_{i+1},
\\
t_{i+1}
\le
\tau\ln\left(e^{(t-\Theta_{k+1-i})/\tau}-\sum\limits_{1\le j\le i} e^{t_j/\tau}\right),
\\
i=1,\dots,k-2.
\end{cases}
\end{equation}
Thus, the distribution function of the output ISI is completely determined by the function $P_k^-(t)$ for different $ k = 2,3, \dots $.
Its physical meaning is as follows: if the neuron starts from the rest state, $ V(0) = 0 $, then the expression $ P_k^-(t) \lambda dt $ gives the probability to obtain from the Poisson input process $ k $ consecutive input impulses in such a way that the last of them falls into the interval
$ [t; t + dt [$ and the neuron does not fire (the excitation threshold $ \mathbb V_0 $ has not been exceeded). In turn, $ P_{k}^0 (t) \lambda \, dt $ gives the probability to obtain $ k $ impulses,
the last one within the interval $ [t; t + dt [$, so that there are no firings up to and including the $(k-1)$-th impulse.
Note that in the formula (\ref{P23}) for a fixed $ t $, $ k $ cannot take values greater than $ k_{max} $, where
\begin{equation}\label{kmax}\nonumber
k_{max}=\left[\frac{t-T_2}{T_3}\right]+2,
\end{equation}
and square brackets denote an integer part of a number.
\section{A new representation of the distribution function}
In this section we represent (\ref{P23}), (\ref{P23limits}) in a simpler form, convenient for calculating the moment-generating function. To this end, we introduce new integration variables:
\begin{equation}\label{newz}\nonumber
z_i=e^{-\frac{t-\Theta_{k+2-i}}{\tau}}\sum\limits_{1\le j\le i}e^{\frac{t_j}{\tau}},\quad i=1,\dots,k-1.
\end{equation}
The domain of integration (\ref{P23limits}) in terms of new variables takes the following form:
\begin{equation}\label{P23limitszz}
\begin{cases}
e^{-\frac{t-\Theta_{k+1}}{\tau}} \le z_1 \le 1,
\\
z_i \le z_{i+1}\le 1,\quad i=1,\dots,k-2.
\end{cases}
\end{equation}
The Jacobian determinant of the transition to the new variables has the following form:
\begin{equation}
\left|\det||\frac{\partial z_j}{\partial t_i}||\right|=
\frac{1}{\tau^{k-1}}z_1\prod\limits_{2\le i\le k-1}(z_{i-1} - \beta z_i),
\end{equation}
where $\beta=e^{-\frac{T_3}{\tau}}$.
Taking into account (\ref{P23limitszz}) and the latter, (\ref{P23}) can be expressed in the following form:
\begin{equation}\label{P23new}
P_k^-(t) = e^{-\lambda t}(\lambda\tau)^{k-1}
\int\limits_{B_k(t)}^{1} \frac{dz_1}{z_1}
\int\limits_{z_1}^{1} \frac{dz_2}{z_2-\beta z_1}
\dots
\int\limits_{z_{k-2}}^{1} \frac{dz_{k-1}}{z_{k-1}-\beta z_{k-2}},
\end{equation}
where
\begin{equation}\label{B(t)}\nonumber
B_k(t)=e^{-\frac{t-\Theta_{k+1}}{\tau}}.
\end{equation}\bigskip
If the set of auxiliary functions $f_i(x)$ is introduced by the following
relations:
\begin{equation}\label{fdef}
f_0(x)\equiv 1,~ f_{i+1}(x) = \int\limits_{x}^{1} \frac{dy}{y-\beta x}f_i(y),~
i=0,\dots,
\end{equation}
then (\ref{P23new}) can be written as
\begin{equation}\label{P23neww}
P_k^-(t) = e^{-\lambda t}(\lambda\tau)^{k-1}
\int\limits_{B_k(t)}^{1} \frac{dx}{x}
f_{k-2}(x).
\end{equation}
The latter, together with (\ref{Sumiii}) and (\ref{F-la}), is used below to calculate
the moment-generating function.
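The equivalence of (\ref{P23}) and (\ref{P23new}) can be verified numerically, e.g. for $k=3$, where the inner $z_2$-integral in (\ref{P23new}) evaluates to $\ln\bigl((1-\beta z_1)/(z_1(1-\beta))\bigr)$. A midpoint-quadrature sketch (the values of $t$ and $\lambda$ are chosen arbitrarily, the remaining parameters as in the numerical section):

```python
import math

tau, V0, h, lam = 20.0, 20.0, 11.2, 0.05
T2 = tau * math.log(h / (V0 - h))
T3 = tau * math.log(V0 / (V0 - h))
beta = math.exp(-T3 / tau)
Theta3, Theta4 = T2, T2 + T3
t = 30.0                                  # a point in ]Theta4; Theta5]

def p3_original(n=800):
    # double integral over (t1, t2) with the limits (P23limits), k = 3
    a, b = 0.0, t - Theta4
    step, s = (b - a) / n, 0.0
    for i in range(n):
        t1 = a + (i + 0.5) * step
        lo = T2 + t1
        hi = tau * math.log(math.exp((t - Theta3) / tau) - math.exp(t1 / tau))
        if hi > lo:                       # inner integrand over t2 equals 1
            s += (hi - lo) * step
    return math.exp(-lam * t) * lam ** 2 * s

def p3_new(n=4000):
    # new form: the z2-integral is log((1 - beta*z1) / (z1*(1 - beta)))
    B = math.exp(-(t - Theta4) / tau)
    step, s = (1.0 - B) / n, 0.0
    for i in range(n):
        z1 = B + (i + 0.5) * step
        s += math.log((1.0 - beta * z1) / (z1 * (1.0 - beta))) / z1 * step
    return math.exp(-lam * t) * (lam * tau) ** 2 * s

print(p3_original(), p3_new())            # the two values agree
```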
\section{Moment-generating function}
The moments of the probability distribution $P(t)$ are the quantities $\mu_n$ given by the formula\footnote{In the considered case, $t\le0 \Rightarrow P(t)=0$.}
\begin{equation}\label{momentsDef}
\mu_n= \int_{-\infty}^\infty t^n P(t)dt.
\end{equation}
Here the first moment is the mean of the random variable, in our case the ISI duration. Calculating the moments directly can be difficult given the complexity of the expression for $ P (t) $. The moment-generating function simplifies the task.
According to the definition, the moment-generating function $ M_t (z) $ is determined by the following formula:
\begin{equation}\label{defM}
M_t(z) = \mathbb{E}[e^{tz}] = \int_{-\infty}^\infty e^{tz} P(t)dt.
\end{equation}
To find it, let us represent the distribution function $ P (t) $ (\ref{Sumiii}) through the
auxiliary functions $ f_i (x) $ (\ref{fdef}). To accomplish this, the expression (\ref{F-la}) for $ P_{k}^0 (t) $ should first be rewritten through $ f_i (x) $ (\ref{fdef}) by substituting (\ref{P23neww}) into it:
\begin{equation}\label{p+}
P^0_{k+1}(t)=
\lambda(t-\Theta_{k+1})P^{-}_k (t)+e^{- \lambda t} r^{k}\int_{B_{k+1}(t)}^1 \dfrac{\ln(x)}{x} f_{k-2}(x) dx,
\end{equation}
where $r=\lambda \tau$.
By regrouping the terms in the sum on the right-hand side of (\ref{Sumiii}) and substituting there (\ref{P23neww}) and (\ref{p+}), the following expression for the distribution function $ P (t) $ in terms of the functions $ f_i (x) $ can be obtained:
\begin{multline}\nonumber
P(t)dt =
\lambda t e^{- \lambda t}dt+e^{- \lambda t}dt
\sum_{k=3}^{m}r^{k-2}\int_{B_{k}(t)}^1 \dfrac{dx}{x}f_{k-3}(x) (\lambda(t-\Theta_{k})-1+r \ln(x)),\\
t\in ]\Theta_m;\Theta_{m+1}],\:m\geq 2.
\end{multline}
The latter is used in (\ref{defM}) to find the moment-generating function:
\begin{equation}\label{M}
M_t(z)=
\dfrac{\lambda^2}{(\lambda-z)^2}+\dfrac{\lambda z}{(\lambda-z)^2}\sum_{m=3}^{\infty} r^{m-2} e^{-( \lambda-z)\Theta_m} I_m (z),\: z<\lambda,
\end{equation}
where the following auxiliary functions $I_m(z)$ are introduced:
\begin{equation}\label{Im}\nonumber
I_m(z)=\int_0^1 dx\; f_m(x) x^{r-\tau z-1},\;m=0,1,\ldots.
\end{equation}
To find a recurrence relation for $I_m(z)$, substitute (\ref{fdef}) into the latter:
\begin{equation}
\label{reccurent}
I_m(z)=
\Phi (\beta,1,r-\tau z) I_{m-1}(z),\;m=1,2\ldots;\;
I_0(z)=\dfrac{1}{r-\tau z}.
\end{equation}
Here $\Phi (\beta,1,r-\tau z)$ denotes the Lerch transcendent:
\begin{equation}\nonumber
\Phi (z,s,a) = \dfrac{1}{\Gamma(s)}\int_0^1 \dfrac{dx}{1-z x} (-\ln (x))^{s-1} x^{a-1}.
\end{equation}
It follows from the recurrence relation (\ref{reccurent}) that
\begin{equation}\begin{split}\label{I}\nonumber
I_m(z)= \dfrac{1}{r-\tau z} \left(\Phi \left(\beta,1,r-\tau z\right)\right)^m ,\;m=0,1\ldots.
\end{split}\end{equation}
Substitute the latter into (\ref{M}) and use the definition of $\Theta_m$ (\ref{T23}):
\begin{multline}\label{MwithSum}
M_t(z)
=\dfrac{\lambda^2}{(\lambda-z)^2}+
\dfrac{\lambda z}{(\lambda-z)^2} \dfrac{r}{r-\tau z} e^{-( \lambda-z)T_2}\times\\
\times \sum_{m=0}^{\infty}
\left(r \beta^{r(1-\frac{z}{\lambda})} \Phi \left(\beta,1,r-\tau z\right)\right)^{m}.
\end{multline}
Here the series $\sum_{m=0}^{\infty}
\left(r \beta^{r(1-\frac{z}{\lambda})} \Phi \left(\beta,1,r-\tau z\right)\right)^{m}$ is convergent in some neighbourhood of the point $z=0$, since $
r \beta^{r} \Phi \left(\beta,1,r \right)<1.$ The latter is proved in Theorem 3 of
\cite{Vidybida2016}.
Finally, after summing the geometric series on the right-hand side of (\ref{MwithSum}), the moment-generating function takes, in some neighbourhood of the point $ z = 0 $, the following form:
\begin{equation}\label{tvirna}
M_t(z)=
\dfrac{\lambda^2}{(\lambda-z)^2}+ a^r
\dfrac{\lambda z}{(\lambda-z)^2} \dfrac{r}{r-\tau z}
\dfrac{e^{z T_2}}{1-r \beta^r e^{zT_3} \Phi (\beta,1,r-\tau z)},
\end{equation}
where $a=e^{-\frac{T_2}{\tau}}$.
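Since $0<\beta<1$, the Lerch transcendent can be evaluated directly from its defining series $\Phi(z,s,a)=\sum_{n\ge 0} z^n/(n+a)^s$. The following sketch (hypothetical parameter values) evaluates (\ref{tvirna}) numerically and checks that its derivative at $z=0$ reproduces the known closed-form mean ISI \cite[Eq. (46)]{Vidybida2016}:

```python
import math

tau, V0, h, lam = 20.0, 20.0, 11.2, 0.05
r = lam * tau
T2 = tau * math.log(h / (V0 - h))
T3 = tau * math.log(V0 / (V0 - h))
beta = math.exp(-T3 / tau)
a = math.exp(-T2 / tau)

def lerch(z, s, alpha, terms=200):
    # Lerch transcendent via its defining series, valid for |z| < 1
    return sum(z ** n / (n + alpha) ** s for n in range(terms))

def M(z):
    # moment-generating function, Eq. (tvirna)
    first = lam ** 2 / (lam - z) ** 2
    second = (a ** r * lam * z / (lam - z) ** 2 * r / (r - tau * z)
              * math.exp(z * T2)
              / (1 - r * beta ** r * math.exp(z * T3)
                 * lerch(beta, 1, r - tau * z)))
    return first + second

dz = 1e-5
mu1_numeric = (M(dz) - M(-dz)) / (2 * dz)          # M'(0) = mean ISI
mu1_closed = 2 / lam + a ** r / (lam * (1 - r * beta ** r * lerch(beta, 1, r)))
print(mu1_numeric, mu1_closed)                      # both ~ 77.4 ms
```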
Since the moment-generating function is finite in some neighbourhood of zero, according to the Curtiss theorem \cite{Curtiss1942}, the obtained moment-generating function (\ref{tvirna}) completely determines the distribution function $ P (t) $.
Using the moment-generating function (\ref{tvirna}), the moments of the distribution function can be found:
\begin{multline}\label{mn}
\mu_n = \at{\dfrac{d^n M_t(z)}{d z^n}}{z=0}=
\dfrac{(n+1)!}{\lambda^n}+ \dfrac{n!a^r }{2\lambda^n}\dfrac{1}{1-r \beta^r \Phi (\beta,1,r)}\times \\
\times \sum_{m=0} ^{n-1}\dfrac{(\lambda(T_2-T_3))^{m}}{m!} \sum_{k=0} ^{n-1-m} (n-m-k)(n-m-k+1)\times \\
\times \Bigg( \delta_{k,0}
+\dfrac{1}{k!}\sum_{l=1}^{k}
\dfrac{(-1)^l l!}{(1-r \beta^r \Phi (\beta,1,r))^{l}}B_{k,l} (g_1,g_2,\ldots,g_{k-l+1})\Bigg),\\
g_m=(-\lambda T_3)^m-m!r^{m+1} \beta^r \Phi(\beta, m+1, r);
\end{multline}
where $\mu_n$ denotes the $n$-th moment, $B_{k,l} (g_1,g_2,\ldots,g_{k-l+1})$ are
incomplete exponential Bell polynomials.
Setting $ n = 1 $ in the last expression, we obtain the first moment:
\begin{equation}\nonumber
\begin{split}
\mu_1 =
\dfrac{2}{\lambda}+\dfrac{1}{\lambda} \dfrac{a^r}{1-r\beta^r \Phi(\beta,1,r)},
\end{split}
\end{equation}
which coincides with the expression obtained previously in
\cite[Eq. (46)]{Vidybida2016}. Notice that in the notation of \cite{Vidybida2016}, $ I(a,r)=\beta^r \Phi(\beta,1,r)$.
According to (\ref{mn}), in the case $n=2$ the second moment is the following:
\begin{multline}\label{m2}
\mu_2 =\dfrac{6}{{\lambda}^2}+\dfrac{2}{\lambda^2}\dfrac{a^r}{1-r \beta^r \Phi(\beta,1,r)}\Bigg(3+\lambda T_2 +\\
+ \dfrac{r \beta^r \Phi(\beta,1,r)}{1-r \beta^r \Phi(\beta,1,r)}
\left(\lambda T_3+r \dfrac{\Phi(\beta,2,r)}{\Phi(\beta,1,r)} \right) \Bigg).
\end{multline}
\section{Numerical verification}
To numerically verify the obtained formulas, a program was written that simulates the dynamics of the membrane potential of a neuron stimulated by a stream of input impulses forming a stochastic Poisson process. The behavior of the neuron was simulated until 1\,000\,000 output impulses were obtained, which allowed estimating the probability density $ P (t) $ and its moments, defined in (\ref{momentsDef}). The simulation was repeated for different values of the input stream intensity $ \lambda $. The results of calculating the 2nd and 3rd moments and their comparison with the formulas (\ref{m2}) and (\ref{mn}) for $ n = 3 $ are shown in Fig. \ref{moments}.
\begin{figure*}
\centering \includegraphics[width=\textwidth]{Fig3EN.eps}
\caption{\label{moments} Dependence of the second $ \mu_2 $ and the third $ \mu_3 $ moments of the ISI duration distribution function on the intensity of the input stream $ \lambda $. Diamonds denote results of numerical simulation by the Monte Carlo method, solid line --- calculations according to the formulas (\ref{m2}) and (\ref{mn}) at $ n = 3 $. Here $\mathbb V_0=20$ mV, $h=11.2$ mV, $\tau=20$ ms.
}
\end{figure*}
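A compact version of such a Monte Carlo check can be sketched as follows (an illustrative re-implementation, not the authors' original program). After each firing the neuron returns to rest, and the Poisson input is memoryless, so consecutive ISIs are independent and identically distributed; this allows sampling ISIs one at a time and comparing the empirical second moment with (\ref{m2}):

```python
import math
import random

random.seed(12345)
tau, V0, h, lam = 20.0, 20.0, 11.2, 0.05   # values from Fig. (moments)
r = lam * tau
T2 = tau * math.log(h / (V0 - h))
T3 = tau * math.log(V0 / (V0 - h))
beta, a = math.exp(-T3 / tau), math.exp(-T2 / tau)

def lerch(z, s, alpha, terms=200):
    # Lerch transcendent via its defining series, valid for |z| < 1
    return sum(z ** n / (n + alpha) ** s for n in range(terms))

def sample_isi():
    # event-driven LIF simulation: each ISI starts from the rest state
    V, t = 0.0, 0.0
    while True:
        w = random.expovariate(lam)          # Poisson interarrival time
        t += w
        V = V * math.exp(-w / tau) + h
        if V > V0:
            return t

n = 50000
isis = [sample_isi() for _ in range(n)]
m2_mc = sum(x * x for x in isis) / n

q = r * beta ** r * lerch(beta, 1, r)
mu2 = (6 / lam ** 2 + 2 / lam ** 2 * a ** r / (1 - q)
       * (3 + lam * T2 + q / (1 - q)
          * (lam * T3 + r * lerch(beta, 2, r) / lerch(beta, 1, r))))
print(m2_mc, mu2)   # empirical vs. analytic second moment
```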
\section{Conclusions}
In the current paper, the statistics of the activity of the leaky integrate-and-fire neuron stimulated by input impulses forming a stochastic Poisson process is considered. For the neuron model with a threshold of two, a comprehensive description of the statistics of interspike-interval durations in terms of the moment-generating function is obtained. The latter is found explicitly, Eq. (\ref{tvirna}). The obtained formulas have been verified by numerical modeling of the neuronal dynamics with specific physical parameters.
\bigskip
{\small
Acknowledgments. This work was supported by the Program of Basic Research of the Department of Physics and Astronomy of the National Academy of Sciences of Ukraine ``Noise-induced dynamics and correlations in nonequilibrium systems'',
№ 0120U101347.
}
\bibliographystyle{unsrt}
\section*{Abstract}
We solve the local and global structural identifiability problems for viscoelastic mechanical models represented by networks of springs and dashpots. We propose a very simple characterization of both local and global structural identifiability based on \emph{identifiability tables}, with the purpose of providing a guideline for constructing arbitrarily complex, identifiable spring-dashpot networks. We illustrate how to use our results
in a number of examples and point to some applications in cardiovascular modeling.
\section*{Introduction}
Mathematical modeling is a prominent tool used to better understand complex mechanical or biological systems \cite{Ottesen2011-Micro}.
A common problem that arises when developing a model of a biological or mechanical system is that some of its parameters are unknown.
This is especially important when those parameters have special meaning but cannot be directly measured. Thus a natural question arises: Can all, or at least some, of the model's parameters be estimated indirectly and \emph{uniquely} from observations of the system's input and output? This is the question of \emph{structural identifiability}. Sometimes the uniqueness holds only within a certain range. In this case, we say that a system is only \emph{locally} structurally identifiable.
There are numerous reasons why one would be interested in establishing identifiability. Structural identifiability is a necessary condition for practical or numerical identifiability, which involves parameter estimation with real, and often noisy, data. The unobservable biologically meaningful parameters of a model can only be determined (or approximated) if the model is structurally identifiable. Moreover, optimization schemes cannot be employed reliably, since they encounter difficulties when trying to estimate unidentifiable parameters \cite{Banga2011-comp}. The concept of structural identifiability was introduced for the first time in the work of Bellman and {\AA}str\"om \cite{BelAst70}. Since then, numerous techniques have been developed to
analyze the identifiability of
linear and nonlinear systems with and without controls \cite{WalLec81, VidBla00, VidBlaNoi01, Banga2011-comp, LitHeiLi10}; see also \cite{Miao11} for a review of different approaches.
\smallskip
Viscoelastic mechanical models that utilize springs and dashpots in various configurations have been widely used in numerous areas of research including material sciences \cite{AnaAme06}, computer graphics \cite{TerFle1988}, and biomedical engineering to describe mechanical properties of biological systems \cite{Bland1965, Chris71-VEintro, Ros50, AkyJonWal90, GanCho96, FungMec, BurPLoS, Ack2013}. To achieve a desirable response, networks with different numbers of springs and dashpots in various configurations have been constructed. For example, it is well-known that the simplest models of viscoelastic materials such as Voigt (spring and dashpot in parallel) or Maxwell (spring and dashpot in series) do not offer satisfactory representation of the nature of real materials \cite{DieLekTur1998}. Thus more complicated configurations are usually constructed and analyzed \cite{BurPLoS}.
\smallskip
In this paper we investigate the identifiability problem for viscoelastic models represented by an arbitrarily complex spring-dashpot network. Although there exist numerous methods that can determine the type of identifiability of a system of ordinary differential equations, they are generally difficult to apply. Our results show in a remarkably simple way how to verify whether the studied model is (locally or globally) structurally identifiable. In case it is unidentifiable, our method provides an explanation of why this is the case and how to reformulate the problem. Moreover, the existing methods usually allow one to establish identifiability only \emph{a posteriori}, i.e.~after concrete systems have been established. Thus, we also introduce ``identifiability tables'', which make it possible not only to check but also to construct an arbitrarily complex \emph{identifiable} spring-dashpot network.
\begin{figure}[t!]
\begin{center}
\includegraphics[clip,trim={0 12cm 0 0},width=8.7cm]{Iden_CV_03.png}
\end{center}
\caption{Changing blood pressure (P) causes periodic expansion and contraction of the arterial wall. Spring-dashpot (S-D) networks are often used in order to describe the biomechanical properties of the arterial tissue as well as the strain sensed by various receptors (e.g.~baroreceptors) embedded in the arterial wall. Typically a spring (representing a receptor's nerve ending) is combined in series with a S-D network (representing viscoelastic coupling of the nerves to the wall). Recently, several cardiovascular approaches have used the framework described above, in particular, choosing one of the following S-D networks: (A) Burgers-type model \cite{AlfPhD}; (B) three element Kelvin-Voigt body \cite{BugCowBea10}; (C) generalized Kelvin-Voigt model \cite{MahStuOttOlu13}.}\label{Fig:CV}
\end{figure}
\subsubsection*{Application to cardiovascular modeling}
A particular motivation for this work comes from cardiovascular modeling \cite{BugCowBea10, MahStuOttOlu13}, although the results of this paper can be applied to any viscoelastic modeling approach.
\smallskip
\noindent{\it Arterial wall.}
Changing blood pressure causes periodic expansion and contraction of the arterial wall (see Fig.~\ref{Fig:CV}). It is well-known that the stress-strain curves of the artery walls exhibit hysteresis, which is understood to be a consequence of the fact that the wall is viscoelastic. Another manifestation of the viscoelasticity of the arterial tissue is the stress relaxation experiments under constant stretch (strain). Spring-dashpot (S-D) networks are often used in order to describe the biomechanical properties of the arterial tissue \cite{LeaTay1966, McDonaldsFlow, KalSch08}. Identifiable networks can be determined using the results of this paper (see Theorem \ref{thm:parameqcoeff}).
\smallskip
\noindent{\it Neural activity.} It is common to use the spring-dashpot network to describe the neural firing of various sensors (e.g.~muscle spindle, baroreceptors), see \cite{Houk66, Hasan83,AlfPhD, BugCowBea10}. Typically one assumes that the firing activity is proportional to the strain sensed by a spring connected in series with a spring-dashpot network, which represents a local integration of the nerve endings to the arterial wall (see Fig.~\ref{Fig:CV}). Then the arterial wall and neural activity models are combined. Although separately each model is structurally identifiable, there is no guarantee that the resulting viscoelastic structure is identifiable. Thus, using our results given in Theorem \ref{thm:localMT}, we can establish whether the combined viscoelastic model is identifiable, and if not, what needs to be modified.
\section*{Results and Discussion}
After reviewing basic concepts of viscoelasticity of systems, we present and discuss our main results related to local and global structural identifiability of such systems. Finally, we illustrate our results with a number of examples from the literature.
\medskip
\subsubsection*{Spring-dashpot networks}
The ideal linear elastic material follows Hooke's law $\sigma=E\epsilon$, where $E$ is a Young's modulus (or a spring constant), which describes the relationship between the stress $\sigma$ and the strain $\epsilon$. Analogously, the relation $\sigma= \eta \dot \epsilon$ describes the viscous material, where $\dot\epsilon = d\epsilon/dt$ and $\eta$ is a viscous constant \cite{Flu75}. In the basic linear viscoelasticity theory, the elastic and viscous elements are combined. In this work, we shall be concerned with the problem of identifiability of networks of springs and dashpots that are essentially one-dimensional. The elements can be combined either in series or in parallel. In order to obtain the relationship between the total stress (force) $\sigma$ and the total strain (extension) $\epsilon$ for a given spring-dashpot network, we use two fundamental rules. For two viscoelastic elements connected in series, the stress is the same in both elements, but the total strain is the sum of individual strains on each element. On the other hand, for elements connected in parallel, the strain is the same for both elements, but the total stress is the sum of individual stresses on each element. Now we consider concrete viscoelastic networks, starting with the simplest configurations.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{MaxVoiBur.png}
\end{center}
\caption{Simple linear viscoelastic models. (A) Maxwell element, (B) Voigt element, (C) Burgers model.}\label{Fig:simple}
\end{figure}
\begin{example}[Maxwell element]\upshape\label{ex:M}
The series combination of a spring, denoted by its constant $E$, and a dashpot, denoted by its constant $\eta$, is known as a Maxwell element (see Fig.~\ref{Fig:simple}(A)). Since the elements are connected in series, the stress $\sigma$ is the same on both elements and the total strain $\epsilon$ is the sum of strains $\epsilon_E$ and $\epsilon_\eta$ corresponding to the spring and dashpot, respectively. Now, the relationship between the total strain and stress for this system is
\begin{equation}\label{ce:M}
\dot\epsilon = \dot \sigma/E+\sigma/\eta.
\end{equation}
\end{example}
\begin{example}[Voigt element]\upshape\label{ex:V}
Another simple example is the Voigt element (also known as Kelvin or Kelvin-Voigt) given in Fig.~\ref{Fig:simple}(B). Following the steps outlined in the previous example, we obtain the $\epsilon-\sigma$ relationship
\begin{equation}\label{ce:V}
E\epsilon+\eta \dot \epsilon=\sigma.
\end{equation}
\end{example}
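For illustration (not part of the original text), under a constant stress $\sigma_0$, Eq.~\eqref{ce:V} yields the creep response $\epsilon(t)=(\sigma_0/E)\bigl(1-e^{-Et/\eta}\bigr)$; a short forward-Euler sketch with hypothetical parameter values confirms this:

```python
import math

# Creep of a Voigt element under constant stress (hypothetical values)
E, eta, sigma0 = 2.0, 5.0, 1.0
dt, T = 1e-3, 3.0

eps = 0.0
for _ in range(int(T / dt)):
    # eta * d(eps)/dt = sigma0 - E * eps   (Eq. ce:V rearranged)
    eps += dt * (sigma0 - E * eps) / eta

analytic = sigma0 / E * (1.0 - math.exp(-E * T / eta))
print(eps, analytic)   # ~0.3494 in both cases
```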
\begin{example}[Burgers model]\upshape\label{ex:Burgers}
In our third example we consider a particularly popular four-element model, represented by a Maxwell element combined in series with a Voigt element, and known as the Burgers model (Fig.~\ref{Fig:simple}(C)). Denote by subscripts $m$ and $v$ the spring and viscous constants of the Maxwell and Voigt elements, respectively. Note that the stress $\sigma$ is the same on all three components connected in series (the Voigt element, the spring, and the dashpot). Eliminating the corresponding local strains, we obtain the following relationship
\begin{equation}\label{ce:Burgers}
E_m\ddot\epsilon + \frac{E_m E_v}{\eta_v}\dot\epsilon=\ddot \sigma + \Big[ \frac{E_m}{\eta_m} +\frac{E_m}{\eta_v}+\frac{E_v}{\eta_v}\Big]\dot\sigma+\frac{E_m E_v}{\eta_m\eta_v}\sigma.
\end{equation}
\end{example}
\medskip
\subsubsection*{Identifiability characterization}
First note (cf.~Examples \ref{ex:M}, \ref{ex:V}, and \ref{ex:Burgers}) that for any configuration of springs $\bar{E}=(E_1,...,E_N)$ and dashpots $\bar \eta=(\eta_1,...,\eta_M)$, the total strain--stress relationship can always be written as the following $(n+1)$-st order linear ordinary differential equation
\begin{equation}\label{eq:ODE}
a_{n+1} \epsilon^{(n+1)}+a_n \epsilon^{(n)}+\cdots+a_0 \epsilon = b_n \sigma^{(n)}+\cdots+b_0 \sigma,
\end{equation}
where the coefficients $a_j=a_j(\bar E,\bar \eta)$ and $b_k=b_k(\bar E,\bar \eta)$ are functions of the spring and dashpot constants. The precise value of $n$ and the
forms of $a_j(\bar E,\bar \eta)$ and $b_k(\bar E,\bar \eta)$ will depend
on the particular structure of the spring-dashpot model.
Equation \eqref{eq:ODE} is known as the \textit{constitutive equation}. In the context of spring-dashpot networks, identifiability concerns whether or not it is possible to recover the unknown parameters ($\bar E$ and $\bar \eta$) of the system from the governing equation of the model, given only the total stress $\sigma$ and total strain $\epsilon$. In other words, we assume that we know the stress and the strain at the bounding nodes only and ask if it is possible to determine the unknown parameters ($\bar E$ and $\bar \eta$).
In order to uniquely fix the coefficients of the constitutive equation \eqref{eq:ODE}, we require that \eqref{eq:ODE} be \emph{normalized} so that the leading term (in $\sigma$ or $\epsilon$, depending on the situation) is monic. Thus, letting the $d$ non-monic coefficients of \eqref{eq:ODE} be represented by the vector $\mathbf{c} = (\mathbf{a}(\bar E, \bar \eta), \mathbf{b}(\bar E, \bar \eta))$, we have the following formal definition of identifiability.
\begin{defn} \label{defn:id}
Let $\mathbf{c}$ be a function $\mathbf{c}:\Theta\rightarrow{\mathbb{R}^{d}}$, where
$\Theta \subseteq \mathbb{R}^{N+M}$ is the parameter space.
The model is \emph{globally identifiable} from $\mathbf{c}$ if and only if the map $\mathbf{c}$ is one-to-one. The model is
\emph{locally identifiable} from $\mathbf{c}$ if and only if the map $\mathbf{c}$ is finite-to-one.
The model is
\emph{unidentifiable} from $\mathbf{c}$ if and only if the map $\mathbf{c}$ is
infinite-to-one.
\end{defn}
Note that local identifiability is equivalent to saying that around
each point in parameter space there exists a neighborhood on which the
function $\mathbf{c}$ is one-to-one. For example, for the Burgers model considered in Example \ref{ex:Burgers}, the coefficient function $\mathbf{c}:\mathbb{R}^4\to\mathbb{R}^4$ is defined as
\[
{\bf c}:(E_m,E_v,\eta_m,\eta_v)\to\Big(E_m, \frac{E_m E_v}{\eta_v}, \frac{E_m}{\eta_m} +\frac{E_m}{\eta_v}+\frac{E_v}{\eta_v}, \frac{E_m E_v}{\eta_m\eta_v}\Big).
\]
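As an independent numerical check (not part of the original analysis), local identifiability of the Burgers model can be verified directly: the map $\bf c$ is locally one-to-one near any point where its $4\times 4$ Jacobian is nonsingular. A sketch using central finite differences and Gaussian elimination, evaluated at an arbitrary test point:

```python
def c(p):
    Em, Ev, em, ev = p            # (E_m, E_v, eta_m, eta_v)
    return [Em,
            Em * Ev / ev,
            Em / em + Em / ev + Ev / ev,
            Em * Ev / (em * ev)]

def jacobian(f, p, d=1e-6):
    # central finite-difference Jacobian; rows index coefficients
    n = len(p)
    cols = []
    for i in range(n):
        hi = [x + d if j == i else x for j, x in enumerate(p)]
        lo = [x - d if j == i else x for j, x in enumerate(p)]
        cols.append([(a - b) / (2 * d) for a, b in zip(f(hi), f(lo))])
    return [list(row) for row in zip(*cols)]

def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, result = len(M), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda k: abs(M[k][i]))
        if abs(M[pivot][i]) < 1e-15:
            return 0.0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            result = -result
        result *= M[i][i]
        for k in range(i + 1, n):
            factor = M[k][i] / M[i][i]
            M[k] = [a - factor * b for a, b in zip(M[k], M[i])]
    return result

p0 = (2.0, 3.0, 5.0, 7.0)         # arbitrary test point in parameter space
print(det(jacobian(c, p0)))       # nonzero: the Jacobian has full rank
```

The full-rank Jacobian is consistent with Theorem \ref{thm:parameqcoeff}: the Burgers model has four parameters and four non-monic coefficients.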
Technically speaking, in this paper we will consider the slightly weaker notion
of \emph{generic global identifiability} (or generic local identifiability, or generic unidentifiability), where \textit{generic} means that the property holds almost everywhere.
We will omit the use of the term generic when speaking of identifiability.
Definition \ref{defn:id} implies that if there are more parameters than non-monic coefficients, then the system must be unidentifiable. Thus, a necessary condition for structural identifiability is that the number of parameters $\bar E, \bar \eta$ (elements of the network) is less than or equal to the number of non-monic coefficients in the constitutive equation \eqref{eq:ODE}. We will soon show that the number of non-monic coefficients is bounded by the number of parameters in spring-dashpot networks. Thus, in this case, a necessary condition for structural identifiability is that the number of parameters and non-monic coefficients are equal. We will prove that, remarkably, in the case of viscoelastic models represented by a spring-dashpot network, the converse to this statement holds as well.
\begin{thm} [Local identifiability] \label{thm:parameqcoeff}
A viscoelastic model represented by a spring-dashpot network is locally identifiable if and only if the number of non-monic coefficients of the corresponding constitutive equation \eqref{eq:ODE} equals the total number of its moduli $E_j$ and viscosity parameters $\eta_k$.
\end{thm}
Note that although the constitutive equation \eqref{eq:ODE} is a linear differential equation,
its coefficients considered as functions of spring and viscous constants are not linear
functions of the parameters (see \eqref{ce:Burgers}).
Thus, Theorem \ref{thm:parameqcoeff} allows us to reduce the difficult problem of checking
one-to-one or finite-to-one behavior of nonlinear functions to simply counting the number of parameters (springs and dashpots) and
non-monic coefficients of the constitutive equation and asking whether the two numbers are equal. A positive answer implies local identifiability, whereas a negative answer implies unidentifiability. Consider, for example, the Maxwell and Voigt elements, and the Burgers model. We note that the constitutive equations \eqref{ce:M}, \eqref{ce:V}, and \eqref{ce:Burgers} for all three models are already in the normalized form. Now, simply by counting the number of parameters and the non-monic coefficients of the constitutive equations, we see that the two are equal for each model. Thus, by the above theorem, all three models are locally structurally identifiable.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8cm]{VE.png}
\end{center}
\caption{(A) Multi-parameter linear viscoelastic model considered by Dietrich et al. \cite{DieLekTur1998}. (B) Ten-element viscoelastic model studied in \cite{Ros50}. (C) A viscoelastic model used to describe the baroreceptor nerve ending coupling to the arterial wall (see \cite{BugCowBea10} and \cite{MahOttOlu12, MahStuOttOlu13}).}\label{Fig:turski}
\end{figure}
\medskip
\subsubsection*{Constructing identifiable models}
Now we examine when combining two identifiable models results also in an identifiable model. This will allow us to construct arbitrarily complex and identifiable spring-dashpot networks.
\smallskip
We start with an observation, proved in the following section, about the possible form of any differential equation that describes a spring-dashpot network.
\begin{prop}\label{Prop:types}
Every spring-dashpot network, given by equation \eqref{eq:ODE}, has one of the four possible types
\begin{equation}\label{types}
\begin{aligned}
&{\bf Type\,\, A:}\quad b_{0}b_{n} \ne 0, \quad a_{n+1}=0,\quad a_0a_n\ne 0\\
&{\bf Type\,\, B:}\quad b_{0}b_{n} \ne 0, \quad a_{0}=0,\quad a_1a_{n+1}\ne 0\\
&{\bf Type\,\, C:}\quad b_{0}b_{n} \ne 0, \quad a_0a_{n+1}\ne 0\\
&{\bf Type\,\, D:}\quad b_{0}b_{n} \ne 0, \quad a_{n+1}=a_0=0,\quad a_1a_n\ne 0.
\end{aligned}
\end{equation}
\end{prop}
Recall that $n$ is the highest derivative of the stress component $\sigma^{(n)}$, which appears in the total strain-stress equation \eqref{eq:ODE}. Now we illustrate the different types of networks defined in the above proposition by considering the simplest elements.
\begin{ex}
For a spring, given by $E\epsilon=\sigma$, we have $n=0$ (only $\sigma$ appears in the constitutive equation, but none of its derivatives), $a_1=a_{n+1}=0$, and $a_0=a_n=E\ne0$. Therefore a spring is of type A. Note that for a dashpot, which is given by $\eta \dot \epsilon = \sigma$, we also have $n=0$ but $a_0=0$ and $a_1=a_{n+1}=\eta\ne 0$. Thus, according to notation given in Proposition \ref{Prop:types}, a dashpot is of type B.
For the Voigt element, given by \eqref{ce:V}, we have $n=0$ as well as $a_0=E$ and $a_1=a_{n+1}=\eta$ (that is $a_0a_{n+1}\ne 0$). We conclude that it is of type C. Finally, a Maxwell element is given by \eqref{ce:M}. Note that here $n=1$ (since $\dot \sigma$ appears in \eqref{ce:M}), $a_{0}=0$, $a_{2}=a_{n+1}=0$, and $a_1=a_n=1\ne 0$. Thus a Maxwell element is of type D.
\end{ex}
Once the constitutive equation has been determined for a given spring-dashpot network, it is very easy to establish the type that it belongs to. Unfortunately, $n$ does not always have physical significance. The value of $n$ is determined by the specific network and cannot be easily related to the number of springs and dashpots, as we will illustrate later on.
\begin{thm}[Local identifiability]\label{thm:localMT}
Consider two locally identifiable spring-dashpot systems $N_{1}$ and $N_{2}$ of one of the four types $A$, $B$, $C$, $D$. Then the new model resulting from joining $N_{1}$ and $N_{2}$ either in parallel or in series is of the type indicated by the Identifiability Tables (Table \ref{thetable}).
The letter \emph{u} indicates that the network is unidentifiable, otherwise it is identifiable of the given type.
\end{thm}
There are several ways one could use the above theorem. One way is to establish the local identifiability of a given spring-dashpot network. In contrast to our similar result given in Theorem \ref{thm:parameqcoeff}, this can be done without actually calculating
the constitutive equation. We will show how to apply Theorem \ref{thm:localMT} to establish structural identifiability after first introducing some notation.
Given any two spring-dashpot models $M$ and $N$, we use the following notation
$(M \lor N)$ and $(M \land N)$ to denote respectively the parallel and series combination of $M$ and $N$. Let $\mathcal{F}$ denote the function that takes a spring and dashpot model
$M$ and outputs its type ($A,B,C,D$) if it is locally identifiable, and $u$ if
it is unidentifiable. To apply $\mathcal{F}$ to a complicated model built up from
springs and dashpots using series and parallel connections, we replace any springs and dashpots with their respective types $A$ and $B$ as well as the operations $\lor$ and $\land$ with $\oplus$ and $\odot$, respectively. Then we
apply the operations in the Identifiability Tables (see Table \ref{thetable}).
\begin{table}
\caption{
\bf{Identifiability Tables.}}
\hspace{2cm}
\subtable[Parallel connection]{
\begin{tabular}{c|ccccc}
$ \oplus$&{\bf A}&{\bf B}&{\bf C}&{\bf D}& {\bf u}\\ \hline
{\bf A}&u&C&u&A&u\\
{\bf B}&C&u&u&B&u\\
{\bf C}&u&u&u&C&u\\
{\bf D}&A&B&C&D&u\\
{\bf u}&u&u&u&u&u
\end{tabular}
}
\hspace{2cm}
\subtable[Series connection]{
\begin{tabular}{c|ccccc}
$\odot$&{\bf A}&{\bf B}&{\bf C}&{\bf D}& {\bf u}\\ \hline
{\bf A}&u&D&A&u&u\\
{\bf B}&D&u&B&u&u\\
{\bf C}&A&B&C&D&u\\
{\bf D}&u&u&D&u&u\\
{\bf u}&u&u&u&u&u
\end{tabular}
}
\begin{flushleft}
When connecting two identifiable spring-dashpot networks of one of the types $A$, $B$, $C$, $D$, or an unidentifiable $u$ either in series or in parallel, the above tables establish the type of the resulting identifiable system. If the resulting structure is unidentifiable it is indicated by $u$. For example, a parallel connection of two networks of types $A$ and $D$ gives rise to an identifiable network of type $A$ (see (a)), but the series connection results in an unidentifiable structure (see (b)).
\end{flushleft}\label{thetable}
\end{table}
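The Identifiability Tables lend themselves to direct mechanization. The following Python sketch (our illustration; the helper names \texttt{par} and \texttt{ser} are ours, not from the text) encodes Table \ref{thetable} as symmetric lookup tables and evaluates $\mathcal{F}$ for simple compositions of springs (type $A$) and dashpots (type $B$).

```python
# Identifiability Tables: one entry per unordered pair; both tables are symmetric.
PARALLEL = {
    ('A','A'):'u', ('A','B'):'C', ('A','C'):'u', ('A','D'):'A',
    ('B','B'):'u', ('B','C'):'u', ('B','D'):'B',
    ('C','C'):'u', ('C','D'):'C',
    ('D','D'):'D',
}
SERIES = {
    ('A','A'):'u', ('A','B'):'D', ('A','C'):'A', ('A','D'):'u',
    ('B','B'):'u', ('B','C'):'B', ('B','D'):'u',
    ('C','C'):'C', ('C','D'):'D',
    ('D','D'):'u',
}

def combine(table, s, t):
    # Symmetric lookup; anything combined with 'u' stays unidentifiable.
    if 'u' in (s, t):
        return 'u'
    return table.get((s, t)) or table[(t, s)]

def par(s, t): return combine(PARALLEL, s, t)  # parallel connection (+)
def ser(s, t): return combine(SERIES, s, t)    # series connection (*)

SPRING, DASHPOT = 'A', 'B'
# Maxwell element: spring and dashpot in series
print(ser(SPRING, DASHPOT))                              # D
# Burgers model: Voigt element in series with a Maxwell element
print(ser(par(SPRING, DASHPOT), ser(SPRING, DASHPOT)))   # D
```

These two evaluations reproduce by table lookup the hand computations carried out in the worked examples of this section.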
\begin{example}[Local identifiability of the Maxwell element]\upshape
Note that the Maxwell model shown in Fig. \ref{Fig:simple}(A) can be symbolically written as
\[
M = E\land \eta.
\]
In this formula, we simply replace the spring and the dashpot with $A$ and $B$, respectively, as well as the operations $\lor$ and $\land$ with $\oplus$ and $\odot$, respectively, to obtain
\[
\mathcal{F}(M)=(A\odot B)= D.
\]
Thus we conclude that the Maxwell model is locally identifiable and is of type $D$.
\end{example}
\begin{example}[Local identifiability of the Burgers model]\upshape\label{Ex:burgers}
Similarly, the Burgers model shown in Fig. \ref{Fig:simple}(C) can be symbolically written as
\[
M = (E_v\lor \eta_v)\land(E_m\land \eta_m).
\]
To check the local identifiability, we find $\mathcal{F}(M)$ and use Table \ref{thetable} to obtain
\[
\mathcal{F}(M)=\overbrace{(A\oplus B)}^C\odot\overbrace{(A\odot B)}^D
= C\odot D=D.
\]
We conclude that the Burgers model is locally identifiable and of type $D$.
\end{example}
In the next example we show how we can easily establish local structural identifiability of a more complicated network.
\begin{example}[Dietrich et al. \cite{DieLekTur1998}]\upshape\label{Ex:turski}
Consider a viscoelastic material studied in \cite{DieLekTur1998} and represented by a spring-dashpot network shown in Fig.~\ref{Fig:turski}(A). It can be symbolically represented by
\begin{equation}\label{eq:ex6}
M=\Big[(((((E_1\lor\eta_1)\land E_2)\land\eta_2)\lor\eta_3)\land E_3)\lor\eta_4\Big]\land E_4.
\end{equation}
Again, we can verify the local identifiability of the above model using Table~\ref{thetable} and obtain
\[
\begin{aligned}
\mathcal{F}(M)&=\Big[((((\overbrace{(A\oplus B)}^C\odot A)\odot B)\oplus B)\odot A)\oplus B\Big]\odot A\\
&=\Big[(((\overbrace{(C\odot A)}^A\odot B)\oplus B)\odot A)\oplus B\Big]\odot A\\
&=\Big[((\overbrace{(A\odot B)}^D\oplus B)\odot A)\oplus B\Big]\odot A=\ldots = D\\
\end{aligned}
\]
This simple computation confirms that the model is locally structurally identifiable.
\end{example}
Our method can also verify whether a network is unidentifiable, providing the reason for the lack of identifiability. Consider the following example.
\begin{example}[Unidentifiable model]\upshape
Consider a viscoelastic model used in \cite{Ros50} and shown in Fig.~\ref{Fig:turski}(B). Using the notation previously introduced, it can symbolically be written as
\[
M=\Big[(((E_1\land\eta_1)\lor(E_2\land\eta_2))\land E_3\land\eta_3)\lor (E_4\land\eta_4)\Big]\land E_5\land \eta_5.
\]
Now applying Table \ref{thetable}, we obtain
\[
\begin{aligned}
\mathcal{F}(M)
&=\Big[((\overbrace{(A\odot B)}^D\oplus \overbrace{(A\odot B)}^D)\odot \overbrace{(A\odot B)}^D)\oplus \overbrace{(A\odot B)}^D\Big]\odot \overbrace{A\odot B}^D\\
&=\Big[(\overbrace{(D\oplus D)}^D\odot D)\oplus D\Big]\odot D\\
&=\Big[\overbrace{(D\odot D)}^{u}\oplus D\Big]\odot D=u.
\end{aligned}
\]
Whenever Table \ref{thetable} indicates $u$ (i.e.~the corresponding substructure is unidentifiable), this inevitably leads to the whole model being unidentifiable. Moreover, our method can also explain the reason for the lack of identifiability. In this example the situation is simple: joining in series a Maxwell element (type D) with a generalized Maxwell model leads to an unidentifiable network.
\end{example}
So far we have considered only \emph{local} identifiability of mechanical systems. Now we complete the presentation and discussion of our results by introducing a criterion that establishes when a given network is \emph{globally} structurally identifiable.
\begin{thm}[Global identifiability]\label{Thm:global}
A viscoelastic model represented by a spring-dashpot network is globally identifiable if and only if it is locally identifiable and the network is constructed by adding either in parallel or in series at the bounding nodes exactly one basic element (spring or dashpot) at a time.
\end{thm}
Note that the network given in Fig.~\ref{Fig:turski}(A) and considered in Example \ref{Ex:turski} was deemed locally structurally identifiable. We note that it can be constructed by adding just one element at a time and therefore it is \emph{globally} structurally identifiable. Similarly, all the simple models shown in Fig.~\ref{Fig:simple} can also be constructed by adding only one element at a time, and since they are locally identifiable, we conclude that they are also globally structurally identifiable. Now consider a model which is locally, but not globally, structurally identifiable.
\begin{example}[Local but not global identifiability]\upshape
Consider a generalized Kelvin-Voigt model shown in Fig.~\ref{Fig:turski}(C) and used in \cite{BugCowBea10, MahOttOlu12} in the context of cardiovascular modeling. It can be symbolically represented by
\[
M=E_0\land(E_1\lor\eta_1)\land(E_2\lor\eta_2)\land(E_3\lor\eta_3).
\]
Thus the local identifiability can be checked by computing
\[
\mathcal{F}(M)=A\odot (A\oplus B)\odot (A\oplus B)\odot (A\oplus B)=A\odot C\odot C\odot C = A.
\]
We immediately conclude that the network is locally identifiable. In order to verify whether it is also globally identifiable, note that this network \emph{cannot} be constructed by adding only one element at a time. Thus the system is only locally, but not globally, identifiable. However, in this case the non-global identifiability
arises from merely permuting the parameters among the three Voigt elements.
\end{example}
\section*{Analysis}
In this section, we prove the main results from the previous section.
To do this requires a careful analysis of the structure of the
constitutive equation after combining a pair of systems in series or
in parallel.
Let $N_{1}$ and $N_{2}$ be spring-dashpot models whose respective constitutive equations
are $L_1\epsilon=L_2\sigma$ and $L_3\epsilon=L_4\sigma$, where $L_i$ represent linear differential operators. We can write the differential operators (in general form) as:
\begin{equation} \label{eqn:ldefns}
\begin{aligned}
L_1&=a_{n_1}d^{n_1}/dt^{n_1}+...+a_{m_1}d^{m_1}/dt^{m_1}\\
L_2&=b_{n_2}d^{n_2}/dt^{n_2}+...+b_{m_2}d^{m_2}/dt^{m_2}\\
L_3&=c_{n_3}d^{n_3}/dt^{n_3}+...+c_{m_3}d^{m_3}/dt^{m_3}\\
L_4&=e_{n_4}d^{n_4}/dt^{n_4}+...+e_{m_4}d^{m_4}/dt^{m_4}\\
\end{aligned}
\end{equation}
\begin{rmk}
Table \ref{shapestable} shows that there are restrictions on the values of the $n_i$ and $m_i$, e.g.~the differential order of the lowest order term in $\sigma$ is always zero and the differential order of the lowest order term in $\epsilon$ is zero or one, but we leave the operators in general form for simplicity.
\end{rmk}
We now show the form of the resulting constitutive equation after combining these systems in series or in parallel, in terms of these differential operators. In what follows, we will treat the differential operators $L_i$ as polynomial functions in the variable $d/dt$. For example, $L_1$ can be thought of as a polynomial $a_{n_1}x^{n_1}+...+a_{m_1}x^{m_1}$.
\subsubsection*{Series connection}
Suppose that $M = N_{1} \land N_{2}$ is a series connection of models $N_{1}$ and $N_{2}$,
whose constitutive equations are
$L_1\epsilon_1=L_2\sigma_1$ and $L_3\epsilon_2=L_4\sigma_2$, respectively. Then the stresses ($\sigma$) are the same for the two systems while the strains ($\epsilon$) are added. If $L_1$ and $L_3$ are relatively prime, then the constitutive equation of $M$ is:
\begin{equation} \label{eq:series}
L_1L_3\epsilon=(L_1L_4+L_2L_3)\sigma, \quad \epsilon=\epsilon_1+\epsilon_2,\quad \sigma=\sigma_1=\sigma_2.
\end{equation}
We assume that $a_{n_1}=c_{n_3}=1$, so that the constitutive equation is monic. If $L_1$ and $L_3$ have a common factor, then the constitutive equation of $M$ is obtained by dividing
(\ref{eq:series}) by the greatest common divisor of $L_1$ and $L_3$.
\subsubsection*{Parallel connection}
Suppose that $M = N_{1} \lor N_{2}$ is a parallel connection of models $N_{1}$ and $N_{2}$,
whose constitutive equations are
$L_1\epsilon_1=L_2\sigma_1$ and $L_3\epsilon_2=L_4\sigma_2$, respectively. Then the strains ($\epsilon$) are the same for the two systems while the stresses ($\sigma$) are added. If $L_2$ and $L_4$ are relatively prime, then the constitutive equation is:
\begin{equation}\label{eq:parallel}
(L_1L_4+L_2L_3)\epsilon=L_2L_4\sigma,\quad \epsilon=\epsilon_1=\epsilon_2,\quad \sigma=\sigma_1+\sigma_2.
\end{equation}
We assume that $b_{m_2}=e_{m_4}=1$, so that the constitutive equation is monic. If $L_2$ and $L_4$ have a common factor, then the constitutive equation is obtained by dividing
(\ref{eq:parallel}) by the greatest common divisor of $L_2$ and $L_4$.
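The series and parallel rules \eqref{eq:series} and \eqref{eq:parallel} amount to polynomial arithmetic in the variable $d/dt$. The sketch below (our illustration; gcd reduction and normalization are omitted for brevity) recovers the Maxwell constitutive equation by joining a spring and a dashpot in series, representing each operator as a coefficient list.

```python
import numpy as np

# A differential operator a_n d^n/dt^n + ... + a_0 is a polynomial in d/dt,
# stored highest degree first (numpy's polynomial convention).
def series(L1, L2, L3, L4):
    # Constitutive operators of N1 joined in series with N2, before gcd reduction.
    eps = np.polymul(L1, L3)                                  # acts on epsilon
    sig = np.polyadd(np.polymul(L1, L4), np.polymul(L2, L3))  # acts on sigma
    return eps, sig

def parallel(L1, L2, L3, L4):
    # Constitutive operators of N1 joined in parallel with N2, before gcd reduction.
    eps = np.polyadd(np.polymul(L1, L4), np.polymul(L2, L3))
    sig = np.polymul(L2, L4)
    return eps, sig

E, eta = 2.0, 3.0
spring = ([E], [1.0])          # E * eps = sigma
dashpot = ([eta, 0.0], [1.0])  # eta * d(eps)/dt = sigma

# Maxwell element: spring and dashpot in series
eps_op, sig_op = series(spring[0], spring[1], dashpot[0], dashpot[1])
print(eps_op, sig_op)  # E*eta * d(eps)/dt = eta * d(sigma)/dt + E * sigma
```

Dividing the printed operators by $E\eta$ gives $\dot\epsilon = \dot\sigma/E + \sigma/\eta$, the familiar Maxwell law.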
\subsubsection*{Types of networks}
Now we prove Proposition \ref{Prop:types}, that is, we show that every spring-dashpot network, given by equation \eqref{eq:ODE}, has one of the four possible types displayed in Table \ref{shapestable}, which are defined by the \textit{shapes} of the linear operators $L_i$ acting on $\epsilon$ and $\sigma$. We make this notion precise:
\begin{defn}
The \textit{shape} of a linear operator $L_i$ is a pair of numbers, written $[n_i,m_i]$, where $n_i$ is the highest differential order and $m_i$ is the lowest differential order.
\end{defn}
\begin{table}
\caption{
\bf{Possible types of constitutive equations}}
\begin{center}
\begin{tabular}{ | p{1.5cm} | p{2.3cm} | p{2.3cm} |}
\hline
Type & Shape in $\epsilon$ & Shape in $\sigma$ \\ \hline
A & [$n,0$] & [$n,0$] \\ \hline
B & [$n+1,1$] & [$n,0$] \\ \hline
C & [$n+1,0$] & [$n,0$] \\ \hline
D & [$n,1$] & [$n,0$] \\ \hline
\end{tabular}
\end{center}
\begin{flushleft}
The four possible types of constitutive equations, defined by the shapes of the linear operators acting on $\epsilon$ and $\sigma$, written in brackets.
\end{flushleft}
\label{shapestable}
\end{table}
We note that a spring is of type A and a dashpot is of type B. A Voigt element is formed by a parallel extension of types A and B, which forms type C, and a Maxwell element is formed by a series extension of types A and B, which forms type D. The properties of these four types are displayed in Table \ref{shapestable}. We can now form the $10$ possible combinations of pairing two of these types in series and the $10$ possible combinations of pairing two of these types in parallel. In Tables \ref{seriestable} and \ref{paralleltable}, we show the $20$ total possibilities and demonstrate that each pairing results in a type A, B, C, or D. Since every spring-dashpot network can be written as a combination, in series or in parallel, of springs and dashpots, we have shown by induction that joining any two spring-dashpot networks in series or in parallel results in one of these four types.
\begin{table}
\caption{
\bf{Series connection}}
\begin{center}
\begin{tabular}{ | p{1cm} | p{2.5cm} | p{2.5cm} | p{2cm} | p{2cm} | p{2cm} | p{1cm} |}
\hline
Type & Shape in $\epsilon$ & Shape in $\sigma$ & Non-monic coefficients & Parameters & Identifiable? & Type \\ \hline
(A,A) & [$n_1+n_2,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+2$ & Not Id & A \\ \hline
(A,B) & [$n_1+n_2+1,1$] & [$n_1+n_2+1,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+2$ & Id & D \\ \hline
(A,C) & [$n_1+n_2+1,0$] & [$n_1+n_2+1,0$] & $2n_1+2n_2+3$ & $2n_1+2n_2+3$ & Id & A \\ \hline
(A,D) & [$n_1+n_2,1$] & [$n_1+n_2,0$] & $2n_1+2n_2$ & $2n_1+2n_2+1$ & Not Id & D \\ \hline
(B,B) & [$n_1+n_2+1,1$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+2$ & Not Id & B \\ \hline
(B,C) & [$n_1+n_2+2,1$] & [$n_1+n_2+1,0$] & $2n_1+2n_2+3$ & $2n_1+2n_2+3$ & Id & B \\ \hline
(B,D) & [$n_1+n_2,1$] & [$n_1+n_2,0$] & $2n_1+2n_2$ & $2n_1+2n_2+1$ & Not Id & D \\ \hline
(C,C) & [$n_1+n_2+2,0$] & [$n_1+n_2+1,0$] & $2n_1+2n_2+4$ & $2n_1+2n_2+4$ & Id & C \\ \hline
(C,D) & [$n_1+n_2+1,1$] & [$n_1+n_2+1,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+2$ & Id & D \\ \hline
(D,D) & [$n_1+n_2-1,1$] & [$n_1+n_2-1,0$] & $2n_1+2n_2-2$ & $2n_1+2n_2$ & Not Id & D \\ \hline
\end{tabular}
\end{center}
\begin{flushleft}
Two systems of types A, B, C, or D are combined in series, where in the first system $n=n_1$ and in the second system $n=n_2$.
\end{flushleft}
\label{seriestable}
\end{table}
\begin{table}
\caption{
\bf{Parallel connection}}
\begin{center}
\begin{tabular}{ | p{1cm} | p{2.5cm} | p{2.5cm} | p{2cm} | p{2cm} | p{2cm} | p{1cm} |}
\hline
Type & Shape in $\epsilon$ & Shape in $\sigma$ & Non-monic coefficients & Parameters & Identifiable? & Type \\ \hline
(A,A) & [$n_1+n_2,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+2$ & Not Id & A \\ \hline
(A,B) & [$n_1+n_2+1,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+2$ & Id & C \\ \hline
(A,C) & [$n_1+n_2+1,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+3$ & Not Id & C \\ \hline
(A,D) & [$n_1+n_2,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+1$ & Id & A \\ \hline
(B,B) & [$n_1+n_2+1,1$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+2$ & Not Id & B \\ \hline
(B,C) & [$n_1+n_2+1,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+3$ & Not Id & C \\ \hline
(B,D) & [$n_1+n_2+1,1$] & [$n_1+n_2,0$] & $2n_1+2n_2+1$ & $2n_1+2n_2+1$ & Id & B \\ \hline
(C,C) & [$n_1+n_2+1,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+4$ & Not Id & C \\ \hline
(C,D) & [$n_1+n_2+1,0$] & [$n_1+n_2,0$] & $2n_1+2n_2+2$ & $2n_1+2n_2+2$ & Id & C \\ \hline
(D,D) & [$n_1+n_2,1$] & [$n_1+n_2,0$] & $2n_1+2n_2$ & $2n_1+2n_2$ & Id & D \\ \hline
\end{tabular}
\end{center}
\begin{flushleft}
Two systems of types A, B, C, or D are combined in parallel, where in the first system $n=n_1$ and in the second system $n=n_2$.
\end{flushleft}
\label{paralleltable}
\end{table}
\begin{rmk} We note that if a type B or D is combined in series with a type B or D, then $L_1$ and $L_3$ have a common factor (since both lack a constant term), so the equation $L_1L_3\epsilon=(L_1L_4+L_2L_3)\sigma$ is divided by $\gcd(L_1,L_3) = d/dt$ to arrive at the
shapes listed in the table.
\end{rmk}
In addition to the type of equation that results after combining two equations of types $\left\{A,B,C,D\right\}$, we have in Tables \ref{seriestable} and \ref{paralleltable} the resulting identifiability properties of each equation, which we will obtain in the next section. Note that Definition \ref{defn:id} implies that if there are more parameters than non-monic coefficients, then the system must be unidentifiable. The tables show that the number of non-monic coefficients is bounded by the number of parameters, thus a necessary condition for identifiability is that the number of parameters equals the number of non-monic coefficients in the constitutive equation \eqref{eq:ODE}. In the next section, we show that this is also a sufficient condition.
\subsubsection*{Local identifiability}
Consider a spring-dashpot system $M$
whose final step connection is a series connection of two systems
$N_{1}$ and $N_{2}$, i.e.~$M = N_{1} \land N_{2}$.
Since the number of non-monic coefficients in any spring-dashpot
model is always less than or equal to the number of parameters in
that model, we know that a necessary condition for this system
to be locally identifiable is that $N_{1}$ and $N_{2}$ are both
locally identifiable. Let $L_1\epsilon_1=L_2\sigma_1$ be the constitutive equation for $N_{1}$
and $L_3\epsilon_2=L_4\sigma_2$ be the constitutive equation for $N_{2}$.
Each of the operators $L_{1}$, $L_{2}$, $L_{3}$, and $L_{4}$ will
have a fixed shape determined by the structure of $N_{1}$ and $N_{2}$.
Assuming that $N_{1}$ and $N_{2}$ are locally identifiable,
we can choose parameters in each of the models $N_{1}$ and $N_{2}$
so that the coefficients of these constitutive equations are arbitrary
numbers. Thus, deciding identifiability of this system amounts
to determining whether the map that takes the pair of equations
$(L_1\epsilon_1=L_2\sigma_1, L_3\epsilon_2=L_4\sigma_2)$ to the constitutive
equation $f \epsilon = g \sigma$, where $f=L_1L_3$, $g=L_1L_4+L_2L_3$, $\epsilon=\epsilon_1+\epsilon_2$, and $\sigma=\sigma_1=\sigma_2$ (cf. \eqref{eq:series}), for the system $M$ is finite-to-one or not.
The same reasoning works {\it mutatis mutandis} for
parallel connections, where we now concern ourselves with the map from
the pair of equations $(L_1\epsilon_1=L_2\sigma_1, L_3\epsilon_2=L_4\sigma_2)$
with generic coefficients to the constitutive equation
for $M = N_{1} \lor N_{2}$ given in \eqref{eq:parallel}.
To make the above intuitive statements precise, we introduce the following definition.
\begin{defn}
The \emph{shape factorization problem} for a quadruple of shapes
\[
Q=([n_{1}, m_{1}], [n_{2}, m_{2}], [n_{3}, m_{3}], [n_{4}, m_{4}])
\]
is the following problem: for a generic pair of
polynomials $(f,g)$ with $f$ monic such that
${\rm shape}(f) = [n_{1} + n_{3}, m_{1} + m_{3}]$
and ${\rm shape}(g) = [\max(n_{1} + n_{4}, n_{2} + n_{3}),
\min( m_{1} + m_{4}, m_{2} + m_{3}) ]$,
do there exist finitely many quadruples of polynomials $(L_{1}, L_{2}, L_{3}, L_{4})$
with ${\rm shape}(L_{i}) = [n_{i}, m_{i}]$, with $L_{1}$ and $L_{3}$ monic,
and such that
$f = L_{1}L_{3}$ and $g = L_{1} L_{4} + L_{2} L_{3}$?
A quadruple of shapes $Q$ is said to be \emph{good} if
the shape factorization problem for that quadruple has a positive solution.
\end{defn}
Since the above definition introduces one of the key concepts of the paper, in the following example we shall further illustrate the meaning of the shape factorization problem.
\begin{ex}
Suppose that our quadruple is
\[
([n_{1}, m_{1}], [n_{2}, m_{2}], [n_{3}, m_{3}], [n_{4}, m_{4}]) = ([2,0], [2,0], [3,0],[2,0]),
\] which is a special case
of joining models of types $A$ and $C$ in series. The shape factorization
problem in this case asks the following question:
Let $(f,g)$ be a generic pair of polynomials where $f$ and $g$ are degree $5$ polynomials
with nonzero constant term and $f$ is monic:
\[
\begin{aligned}
&f = x^{5} + f_{4}x^{4} + f_{3}x^{3} + f_{2}x^{2} + f_{1}x + f_{0}\\
&g = g_{5}x^{5} + g_{4}x^{4} + g_{3}x^{3} + g_{2}x^{2} + g_{1}x + g_{0}.
\end{aligned}
\]
Do there exist finitely many polynomials
\[
\begin{aligned}
&L_{1} = x^{2} + a_{1}x + a_{0}, &\quad& L_{2} = b_{2}x^{2} + b_{1}x + b_{0}\\
&L_{3} = x^{3}+ c_{2} x^{2} + c_{1}x + c_{0}, && L_{4} = d_{2}x^{2} + d_{1}x + d_{0}
\end{aligned}
\]
such that $f = L_{1}L_{3}$ and $g = L_{1}L_{4} + L_{2}L_{3}$?
Or to say it another way, for generic values of $f_{4}, \ldots, f_{0}$ and $g_{5}, \ldots, g_{0}$,
does the system of $11$ equations in $11$ unknowns:
\begin{eqnarray*}
f_{4} & = & a_{1} + c_{2} \\
f_{3} & = & a_{0} + a_{1}c_{2} + c_{1} \\
f_{2} & = & a_{0}c_{2} + a_{1}c_{1} + c_{0} \\
f_{1} & = & a_{0}c_{1} + a_{1} c_{0} \\
f_{0} & = & a_{0}c_{0} \\
g_{5} & = & b_{2} \\
g_{4} & = & b_{1} + b_{2} c_{2} + d_{2} \\
g_{3} & = & b_{0} + b_{1}c_{2} + b_{2}c_{1} + a_{1}d_{2} + d_{1} \\
g_{2} & = & b_{0}c_{2} + b_{1}c_{1} + b_{2}c_{0} + a_{0}d_{2} + a_{1}d_{1} + d_{0} \\
g_{1} & = & b_{0}c_{1} + b_{1}c_{0} + a_{0}d_{1} +a_{1}d_{0} \\
g_{0} & = & b_{0}c_{0} + a_{0}d_{0}
\end{eqnarray*}
have only finitely many solutions?
\end{ex}
The language of shape factorization problems and the remarks in the preceding
paragraphs allow us to reduce the local identifiability problem for a spring-dashpot system to determining whether a
certain quadruple is a good quadruple.
\begin{prop}
Let $M = N_{1} \land N_{2}$ be a spring-dashpot model joined in series
from $N_{1}$ and $N_{2}$, where $N_{1}$ has constitutive equation
$ L_{1} \epsilon_1 = L_{2} \sigma_1$ of shapes $[n_{1}, m_{1}]$ and $[n_{2}, m_{2}]$,
respectively, and $N_{2}$ has constitutive equation
$ L_{3} \epsilon_2 = L_{4} \sigma_2$ of shapes $[n_{3}, m_{3}]$ and $[n_{4}, m_{4}]$,
respectively. Then the model $M$ is locally identifiable if and
only if
\begin{enumerate}
\item $N_{1}$ and $N_{2}$ are locally identifiable, and
\item $([n_{1}, m_{1}], [n_{2}, m_{2}],
[n_{3}, m_{3}], [n_{4}, m_{4}])$ is a good quadruple.
\end{enumerate}
Similarly, if $M = N_{1} \lor N_{2}$ is a spring-dashpot model joined in parallel
from $N_{1}$ and $N_{2}$, then $M$ is locally identifiable if and only if
\begin{enumerate}
\item $N_{1}$ and $N_{2}$ are locally identifiable, and
\item $([n_{2}, m_{2}], [n_{1}, m_{1}],
[n_{4}, m_{4}], [n_{3}, m_{3}])$ is a good quadruple.
\end{enumerate}
\end{prop}
What remains to be shown is that, for the shapes that arise
in spring-dashpot models, whether a quadruple of shapes is
a good quadruple depends only on the types ($A, B, C,$ or $D$) of
the systems being combined. The proof of this statement
will occupy the rest of this section.
Let $f$ and $g$ be two polynomials. Note that for given fixed shapes,
$[n_{1}, m_{1}]$ and $[n_{3}, m_{3}]$, there are at most finitely
many factorizations $f = L_{1}L_{3}$, where $L_{1}$ has shape
$[n_{1}, m_{1}]$ and $L_{3}$ has shape $[n_{3}, m_{3}]$ and both are monic.
This is
because there are at most finitely many ways to factorize a monic polynomial into
monic factors.
Once we fix one of these finitely many choices for $L_{1}$ and
$L_{3}$, the equation $ g = L_{1} L_{4} + L_{2} L_{3}$ is
a linear system in the (unknown) coefficients of $L_{2}$ and $L_{4}$.
For a polynomial $f = f_{n}x^{n} + \cdots + f_{m}x^{m}$ of shape $[n,m]$, we can write
the coefficients of $f$ as a vector, which we denote
\[
[f] := \begin{pmatrix}
f_{n} \\
\vdots \\
f_{m} \end{pmatrix}.
\]
Let $L_i$ have shape $[n_i,m_i]$, as defined in Equation \eqref{eqn:ldefns}. The vector of coefficients of $L_1L_4$ can be written
as a matrix-vector product:
\[[L_1L_4]=
\begin{pmatrix}
a_{n_1} & 0 & \cdots & 0 \\
\vdots & a_{n_1} & \cdots & 0 \\
a_{m_1} & \vdots & \cdots & \vdots \\
0 & a_{m_1} & \cdots & 0 \\
\vdots & 0 & \cdots & a_{n_1} \\
\vdots & \vdots & \cdots & \vdots \\
0 & 0 & \cdots & a_{m_1}
\end{pmatrix}
\begin{pmatrix}
e_{n_4} \\
\vdots \\
e_{m_4}
\end{pmatrix}.
\]
We will refer to this product as $H'[L_4]$, where $H'$ is an $n_1+n_4-m_1-m_4+1$ by $n_4-m_4+1$ matrix.
Likewise, the coefficients of $L_2L_3$ can be written as a matrix-vector product:
\[
[L_2L_3]=
\begin{pmatrix}
c_{n_3} & 0 & \cdots & 0 \\
\vdots & c_{n_3} & \cdots & 0 \\
c_{m_3} & \vdots & \cdots & \vdots \\
0 & c_{m_3} & \cdots & 0 \\
\vdots & 0 & \cdots & c_{n_3} \\
\vdots & \vdots & \cdots & \vdots \\
0 & 0 & \cdots & c_{m_3}
\end{pmatrix}
\begin{pmatrix}
b_{n_2} \\
\vdots \\
b_{m_2}
\end{pmatrix}.
\]
We will refer to this product as $G'[L_2]$, where $G'$ is an $n_2+n_3-m_2-m_3+1$ by $n_2-m_2+1$ matrix. Then we call the \textit{matrix factored form} of $[L_1L_4+L_2L_3]$ the expression:
\begin{equation} \label{eqn:factored}
G[L_2]+H[L_4],
\end{equation}
where the matrices $G$ and $H$ are the matrices $G'$ and $H'$ padded with rows of zeros so
that coefficients corresponding to monomials of the
same degree appear in the same row. This makes $(G \ \ H)$ a $\max\left\{n_1+n_4,n_2+n_3\right\}-\min\left\{m_1+m_4,m_2+m_3\right\}+1$ by $n_2-m_2+n_4-m_4+2$ matrix.
We can now state a criterion for determining whether the shape factorization problem has finitely many solutions:
\begin{prop} \label{prop:abinvert}
The quadruple $([n_{1}, m_{1}], [n_{2}, m_{2}],
[n_{3}, m_{3}], [n_{4}, m_{4}])$ is a good quadruple if and only
if the matrix $(G \ \ H)$ is generically invertible.
\end{prop}
\begin{proof}
We can write the shape factorization problem of type $([n_{1}, m_{1}], [n_{2}, m_{2}],
[n_{3}, m_{3}], [n_{4}, m_{4}])$ in matrix factored form as $G[L_2]+H[L_4]=[g]$ (see \eqref{eqn:factored}), so that
\[
\begin{pmatrix}
G & H
\end{pmatrix}
\begin{pmatrix}
L_2 \\
L_4
\end{pmatrix}
= [g].
\]
This system has a unique solution if and only if $(G \ \ H)$ is generically invertible,
i.e.~invertible for a generic choice of parameter values.
\end{proof}
\smallskip
\begin{ex}\label{ex:quadruple}
Suppose that our quadruple $([n_{1}, m_{1}], [n_{2}, m_{2}],
[n_{3}, m_{3}], [n_{4}, m_{4}])$ is
$([2,0], [2,0], [3,0],[2,0])$, which is a special case of joining models of types $A$ and $C$ in series. The resulting
matrix $(G \ \ H)$ is the matrix
$$
\left(
\begin{array}{cccccc}
0 & 0 & 0 & c_{3} & 0 & 0 \\
a_{2} & 0 & 0 & c_{2} & c_{3} & 0 \\
a_{1} & a_{2} & 0 & c_{1} & c_{2} & c_{3} \\
a_{0} & a_{1} & a_{2} & c_{0} & c_{1} & c_{2} \\
0 & a_{0} & a_{1} & 0 & c_{0} & c_{1} \\
0 & 0 & a_{0} & 0 & 0 & c_{0}
\end{array} \right).
$$
\end{ex}
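For generic coefficient values this matrix is invertible, as guaranteed by Proposition \ref{prop:squarefullrank} below; a quick numerical check with random entries (our illustration) confirms that it has full rank almost surely.

```python
import numpy as np

# Random values standing in for generic coefficients of L1 and L3.
rng = np.random.default_rng(1)
a0, a1, a2 = rng.standard_normal(3)
c0, c1, c2, c3 = rng.standard_normal(4)

# The matrix (G H) from the example above.
GH = np.array([
    [0,  0,  0,  c3, 0,  0 ],
    [a2, 0,  0,  c2, c3, 0 ],
    [a1, a2, 0,  c1, c2, c3],
    [a0, a1, a2, c0, c1, c2],
    [0,  a0, a1, 0,  c0, c1],
    [0,  0,  a0, 0,  0,  c0],
])
print(np.linalg.matrix_rank(GH))  # 6 for generic coefficients
```

The determinant of $(G \ \ H)$ is a polynomial in the $a_i$ and $c_j$ that is not identically zero, so a random evaluation is nonzero with probability one.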
\smallskip
We now determine when this matrix $(G \ \ H)$ is generically invertible, i.e.~square and full rank. The \textit{Sylvester matrix} associated to two polynomials $p(z)=p_{0}+p_{1}z+p_{2}z^2+...+p_{m}z^m$ and $q(z)=q_{0}+q_{1}z+q_{2}z^2+...+q_{n}z^n$ is the $n+m$ by $n+m$ matrix that has the coefficients of $p(z)$ repeated $n$ times as columns and the coefficients of $q(z)$ repeated $m$ times as columns in the following way:
\[
\begin{pmatrix}
p_{m} & 0 & \cdots & 0 & q_{n} & 0 & \cdots & 0\\
\vdots & p_{m} & \cdots & 0 & \vdots & q_{n} & \cdots & 0\\
p_{0} & \vdots & \cdots & \vdots & q_{0} & \vdots & \cdots & \vdots\\
0 & p_{0} & \cdots & 0 & 0 & q_{0} & \cdots & 0 \\
\vdots & 0 & \cdots & p_{m} & \vdots & 0 & \cdots & q_{n} \\
\vdots & \vdots & \cdots & \vdots & \vdots & \vdots & \cdots & \vdots\\
0 & 0 & \cdots & p_{0} & 0 & 0 & \cdots & q_{0}
\end{pmatrix}.
\]
$$ \underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{n} \underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }_{m} $$
The determinant of the Sylvester matrix of the two polynomials $p$ and $q$ is the \emph{resultant}, which is zero if and only if the two polynomials have a common root. In particular, for generic polynomials $p$ and $q$, the Sylvester matrix is invertible \cite[Chapter 3]{Cox2005}.
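The construction can be sketched concretely as follows (a minimal illustration; the coefficient-list convention, low-degree coefficient first, and the function name are ours):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p, q given as coefficient lists [p_0, ..., p_m],
    [q_0, ..., q_n] (low degree first): n shifted columns of p's coefficients
    followed by m shifted columns of q's coefficients, as in the text."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for j in range(n):                  # n copies of p's coefficients
        S[j:j + m + 1, j] = p[::-1]     # p_m at the top of each column
    for j in range(m):                  # m copies of q's coefficients
        S[j:j + n + 1, n + j] = q[::-1]
    return S

# p = z - 1 and q = z - 2 share no root: the determinant (resultant) is nonzero
print(np.linalg.det(sylvester([-1, 1], [-2, 1])))   # → -1.0 (up to rounding)
# p = q = z - 1 share the root z = 1: the resultant vanishes
print(np.linalg.det(sylvester([-1, 1], [-1, 1])))   # → 0.0
```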
We will use the Sylvester matrix in the following way. We will show that there are submatrices of $(G \ \ H)$ that correspond to the Sylvester matrix associated to $L_1$ and $L_3$.
\begin{prop} \label{prop:squarefullrank} If the matrix $(G \ \ H)$ is square, then it is generically invertible.
\end{prop}
\begin{proof}
We claim that the columns of $(G \ \ H)$ can be ordered so that the
resulting matrix has the shape
\begin{equation}\label{eq:sylvesterish}
\begin{pmatrix}
S' & 0 & 0 \\
X & S & Y \\
0 & 0 & S''
\end{pmatrix}
\end{equation}
where $S$ is the Sylvester matrix associated to the nonzero coefficients of
$L_{1}$ and $L_{3}$. Note that this means that we might shift the coefficients
down if necessary so there are no extraneous zero terms of low degree (i.e.~if the shape
is $[n_{i},m_{i}]$ with $m_{i} \neq 0$).
The matrix $S'$ is a square lower triangular
matrix with nonzero entries on the diagonal, and $S''$ is a square
upper triangular matrix with nonzero entries on the diagonal.
This will
prove that $(G \ \ H)$ is invertible, since its determinant will be the product
of the determinants of $S$, $S'$ and $S''$, all of which are nonzero.
To prove that claim requires a careful case analysis.
The number of columns of $(G \ \ H)$ is
$n_{4} - m_{4} +n_{2} - m_{2} + 2$ and the number of rows is
$\max( n_{1} + n_{4}, n_{2} + n_{3}) - \min(m_{1} + m_{4}, m_{2} + m_{3}) + 1$.
Without loss of generality, we can assume that the maximum is attained by $n_{1} + n_{4}$.
We need to distinguish between the two cases where the minimum is attained by $m_{1} + m_{4}$ and by $m_{2} + m_{3}$.
\medskip
\noindent{\bf Case 1: $ \min\big[m_{1} + m_{4}, m_{2} + m_{3}\big] = m_{1} + m_{4}$.}
Since $(G \ \ H)$ is a square matrix, this implies that $n_{1} - m_{1} =
n_{2} - m_{2} + 1$. In this case we group the columns of $(G \ \ H)$
in the following order.
\begin{enumerate}
\item The first $n_{1} + n_{4} - n_{2} - n_{3}$ columns of $G$
\item Then the next $n_{3} - m_{3}$ columns of $G$
\item Then all $n_{2} - m_{2} + 1 (= n_{1} - m_{1})$ columns of $H$
\item Then the remaining $ m_{2} + m_{3} - m_{1} - m_{4}$ columns of $G$.
\end{enumerate}
This choice has the property that the middle two blocks of columns together
have the desired form: we begin including columns from
$G$ and $H$ precisely when they both have nonzero entries in the same rows,
and we stop including them as soon as they no longer have nonzero entries in the
same rows. Note that we have used all columns of
$G$ since
\[
n_{1} + n_{4} - n_{2} - n_{3} +n_{3} - m_{3} +
m_{2} + m_{3} - m_{1} - m_{4} =
(n_{1} - m_{1}) + (n_{4} - m_{4}) - (n_{2} - m_{2}) =
n_{4} - m_{4} + 1.
\]
\medskip
\noindent{\bf Case 2: $ \min\big[m_{1} + m_{4}, m_{2} + m_{3}\big] = m_{2} + m_{3}$.}
Note that since $(G \ \ H)$ is square, this implies that
$n_{1} - m_{3} = n_{2} - m_{4} + 1$.
In this case, we do not need to reorder the columns to obtain the desired
form; we describe how to block the columns.
\begin{enumerate}
\item The first $n_{1} + n_{4} - n_{2} - n_{3}$ columns of $G$
\item Then the next $n_{3} - m_{3}$ columns of $G$
\item Then the first $n_{1} - m_{1}$ columns of $H$
\item Then the remaining $ m_{1} + m_{4} - m_{2} - m_{3}$ columns
of $H$.
\end{enumerate}
Note that we have the desired number of columns from the second and third
blocks, and we have chosen them so that those columns have nonzero
entries at exactly the same rows. Furthermore, we have used
all columns of $G$ since
$$n_{1} + n_{4} - n_{2} - n_{3} + n_{3} - m_{3} =
n_{1} + n_{4} - n_{2} - m_{3} = n_{4} - m_{4} + 1$$
and all columns of $H$ since
$$n_{1} - m_{1} + m_{1} + m_{4} - m_{2} - m_{3} = n_{1} + m_{4} - m_{2} - m_{3} =
n_{2} - m_{2} + 1. \qedhere$$
\end{proof}
\begin{ex}
We can rewrite the matrix in Example \ref{ex:quadruple} as
\[
\left(
\begin{array}{c|ccccc}
c_{3} & 0 & 0 & 0 & 0 & 0 \\
\hline
c_{2} & a_{2} & 0 & 0 & c_{3} & 0 \\
c_{1} & a_{1} & a_{2} & 0 & c_{2} & c_{3} \\
c_{0} & a_{0} & a_{1} & a_{2} & c_{1} & c_{2} \\
0 & 0 & a_{0} & a_{1} & c_{0} & c_{1} \\
0 & 0 & 0 & a_{0} & 0 & c_{0}
\end{array} \right).
\]
Here the $5 \times 5$ matrix in the lower righthand corner
is the Sylvester matrix, the matrix $S'$ is the $1 \times 1$ matrix in the upper lefthand corner, and the matrix $S''$ is the
empty matrix.
\end{ex}
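The row and column counts appearing in the preceding proof can be evaluated mechanically; the sketch below (the helper name `gh_shape` is ours) checks them against the example quadruple:

```python
def gh_shape(quadruple):
    """(rows, cols) of (G  H) for shapes [n1,m1], [n2,m2], [n3,m3], [n4,m4],
    using the count formulas from the proof above."""
    (n1, m1), (n2, m2), (n3, m3), (n4, m4) = quadruple
    rows = max(n1 + n4, n2 + n3) - min(m1 + m4, m2 + m3) + 1
    cols = (n4 - m4) + (n2 - m2) + 2
    return rows, cols

print(gh_shape([(2, 0), (2, 0), (3, 0), (2, 0)]))  # → (6, 6): square, hence generically invertible
```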
\begin{proof} [Proof of Theorem \ref{thm:parameqcoeff}]
We will show that if the number of parameters equals the number of non-monic coefficients, then the matrix $(G \ \ H)$ is square. By Propositions \ref{prop:abinvert} and \ref{prop:squarefullrank}, this will imply that the model is locally identifiable.
Let $M = N_{1} \land N_{2}$ be a spring-dashpot model joined in series
from $N_{1}$ and $N_{2}$, where $N_{1}$ has constitutive equation
$ L_{1} \epsilon_1 = L_{2} \sigma_1$ of shapes $[n_{1}, m_{1}]$ and $[n_{2}, m_{2}]$,
respectively, and $N_{2}$ has constitutive equation
$ L_{3} \epsilon_2 = L_{4} \sigma_2$ of shapes $[n_{3}, m_{3}]$ and $[n_{4}, m_{4}]$,
respectively.
By induction, we can
assume that the number of parameters equals the number of non-monic coefficients for the systems $N_{1}$ and $N_{2}$, i.e.~there are $n_1-m_1+n_2-m_2+1$ parameters in the first and $n_3-m_3+n_4-m_4+1$ in the second. Assume the number of parameters equals the number of non-monic coefficients in this full system, i.e.~
\begin{eqnarray*}
\lefteqn{
n_1-m_1+ n_2-m_2+n_3-m_3+n_4-m_4 + 2 =} \\
& &
\max\left\{n_1+n_4,n_2+n_3\right\}-\min\left\{m_1+m_4,m_2+m_3\right\}+1+n_1-m_1+n_3-m_3.
\end{eqnarray*}
Subtracting $n_1-m_1+n_3-m_3$ from both sides, we get that
$$
n_2-m_2+n_4-m_4+2 = \max\left\{n_1+n_4,n_2+n_3\right\}-\min\left\{m_1+m_4,m_2+m_3\right\}+1.
$$
From the definition of $(G \ \ H)$, this means the number of rows equals the number of columns, so that $(G \ \ H)$ is square.
The argument for the parallel extension is identical and is omitted.
\end{proof}
\begin{proof} [Proof of Theorem \ref{thm:localMT}]
Theorem \ref{thm:parameqcoeff} shows that the model is locally identifiable if and only if the number of parameters equals the number of non-monic coefficients. Thus the identifiability properties of the $20$ cases in Tables \ref{seriestable} and \ref{paralleltable} are determined by checking if the numbers in the columns corresponding to the number of parameters and the number of non-monic coefficients are equal.
\end{proof}
\subsubsection*{Global identifiability}
We now determine necessary and sufficient conditions for global identifiability.
\begin{prop}
Let $M = N_{1} \land N_{2}$ be a spring-dashpot model joined in series
from $N_{1}$ and $N_{2}$, where $N_{1}$ has constitutive equation
$ L_{1} \epsilon_1 = L_{2} \sigma_1$ of shapes $[n_{1}, m_{1}]$ and $[n_{2}, m_{2}]$,
respectively, and $N_{2}$ has constitutive equation
$ L_{3} \epsilon_2 = L_{4} \sigma_2$ of shapes $[n_{3}, m_{3}]$ and $[n_{4}, m_{4}]$,
respectively. Then the model $M$ is globally identifiable if and
only if
\begin{enumerate}
\item $N_{1}$ and $N_{2}$ are globally identifiable,
\item The shape factorization problem for the quadruple $([n_{1}, m_{1}], [n_{2}, m_{2}],
[n_{3}, m_{3}], [n_{4}, m_{4}])$ generically has a unique solution.
\end{enumerate}
Similarly, if $M = N_{1} \lor N_{2}$ is a spring-dashpot model joined in parallel
from $N_{1}$ and $N_{2}$, then $M$ is globally identifiable if and only if
\begin{enumerate}
\item $N_{1}$ and $N_{2}$ are globally identifiable, and
\item The shape factorization problem for the quadruple $([n_{2}, m_{2}], [n_{1}, m_{1}],
[n_{4}, m_{4}], [n_{3}, m_{3}])$ generically has a unique solution.
\end{enumerate}
\end{prop}
\begin{proof}
We handle the case of series extensions, parallel extensions being identical.
Let $M = N_{1} \land N_{2}$. Clearly, $N_{1}$ and $N_{2}$ must be
globally identifiable otherwise we could give two sets of parameters
yielding the same constitutive equation for $N_{1}$, which could then
be combined with parameters for $N_{2}$ to get two sets of parameters
for $M$ yielding the same constitutive equation.
Now if the shape factorization problem has a unique solution, there is
a unique way to take the constitutive equation for $M$ and solve for the
constitutive equations for $N_{1}$ and $N_{2}$; since $N_{1}$ and $N_{2}$
are globally identifiable, there is then a unique way to solve for the parameters
of those models, giving a unique solution for the parameters of $M$. Conversely,
if there were multiple solutions to the shape factorization problem, then
by global identifiability of $N_{1}$ and $N_{2}$, we could solve all the
way back to get multiple parameter choices for $M$ yielding the same constitutive equation.
\end{proof}
Note that in our analysis of the shape factorization problem in the
previous section, we saw that once $L_{1}$ and $L_{3}$ are chosen among all their finitely
many values, when the model is locally identifiable there is a unique way to
then construct $L_{2}$ and $L_{4}$. Hence, the shape factorization
problem has a unique solution when there is a unique way to
factor $f = L_{1}L_{3}$. This happens if and only if either $n_{1} = m_{1}$
or $n_{3} = m_{3}$, otherwise, generically, we can exchange roots of $L_{1}$
and $L_{3}$ giving multiple solutions.
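The root-exchange phenomenon is easy to illustrate (a sketch with hypothetical roots $1$ and $2$): when both $L_{1}$ and $L_{3}$ contain a nonconstant factor, the product $f=L_{1}L_{3}$ admits more than one splitting into factors of the required degrees.

```python
import sympy as sp

z = sp.symbols('z')
f = sp.expand((z - 1) * (z - 2))   # f = L1 * L3 with two distinct roots

# Both monic degree-1 splittings of f are valid shape factorizations
splittings = [((z - 1), (z - 2)), ((z - 2), (z - 1))]
for L1, L3 in splittings:
    assert sp.expand(L1 * L3 - f) == 0
print(len(splittings))  # → 2: exchanging the roots gives a second solution
```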
\begin{cor} \label{cor:globidconstzero}
Suppose that $M = N_{1} \land N_{2}$ is globally identifiable.
Then either $N_{1}$ or $N_{2}$ must have been a spring, a dashpot, or a Maxwell model.
Suppose that $M = N_{1} \lor N_{2}$ is globally identifiable.
Then either $N_{1}$ or $N_{2}$ must have been a spring, a dashpot, or a Voigt model.
\end{cor}
\begin{proof}
The four models given by the spring, dashpot, Voigt, and Maxwell elements are
the only four locally identifiable models that have the property that
at least one of the differential operators in its constitutive equation
has exactly one term. This can be seen by analyzing the four types
(A,B,C,D) and looking at all possibilities that arise on combining two
equations. Once neither operator has a single term, no model built from such a model
can have an operator with a single term.
The three choices for the series connection (a spring, a dashpot, or a Maxwell model)
are the three of the four models that put a differential operator with a single
term in the correct place so that there can be a unique solution to the shape factorization
problem. Similarly for the parallel connection.
\end{proof}
\begin{proof} [Proof of Theorem \ref{Thm:global}]
Clearly a globally identifiable model is locally identifiable.
By Corollary \ref{cor:globidconstzero}, we must be able to construct such
a globally identifiable model by adding at each step either a spring, dashpot,
Maxwell or Voigt element, where a Maxwell
element must be added in series
and a Voigt element must be added in parallel. However,
adding a Maxwell element in series can be achieved by adding a spring and then a dashpot, both
in series. Similarly, adding a Voigt element in parallel can be achieved by adding
a spring and then a dashpot, both in parallel. Hence, we can restrict to adding only
springs or dashpots at each step.
\end{proof}
\section*{Acknowledgments}
Adam Mahdi was partially supported by the VPR project under NIH-NIGMS grant \#1P50GM094503-01A0 sub-award to NCSU.
Nicolette Meshkat was partially supported by the David and
Lucille Packard Foundation. Seth Sullivant was partially supported by the David and Lucille Packard Foundation and the US National Science Foundation (DMS 0954865).
\section{Introduction}
There is considerable recent interest in two directions of classical and
quantum gravity and their possible implications in cosmology and astrophysics.
The first one is related to gravity models with anisotropic scaling between
space and time at short distances, which is usually referred to as the Ho\v{r}ava--Lifshitz (HL) theory \cite{horava1,horava2,horava3}. Such theories with
generic anisotropy are usually non--relativistic and ultra--violet complete;
the local Lorentz invariance is violated/broken (LV) at short distances, but
the theory is constructed to reduce to the general relativity (GR) theory in the infrared
limit.\footnote{The problem of reduction is still an open issue: the HL theory with global
Hamiltonian does not reproduce GR in the infrared domain \cite{kobakh}.
Certain further modifications of the theory are necessary because both
projectable and non--projectable versions of the HL models seem to contain
certain inconsistencies \cite{blas,odintsov,carloni}.} One of the main features of this class
of theories is that they can be elaborated in a "power--counting"
renormalizable form (unlike Einstein gravity) at least if the so called
detailed balance condition is respected. This can be understood, for
instance, as a result of stochastic quantization in relation to topological
massive gravity \cite{orlando1,orlando2}. The second direction consists of
a series of models related to quantum gravity (QG) phenomenology, also
including LV effects and general relativistic and non--relativistic
constructions with extra dimensions, generalized symmetries,
compactification or trapping scenarios, etc.\ (see, for instance, the reviews \cite{kost4,xiao,liberati,carroll,burgess,barcello}).
The above mentioned classes of gravitational theories are characterized, in
general, by LV and respective modified dispersion relations (MDR), local
and/or global anisotro\-pi\-es and nonhomogeneities and, for certain models, they
are defined by nonholonomic (non--integrable) constraints on the dynamics of
gravitational and matter fields. For instance, the implications of
violations of Poincar\'e symmetry for kinematic conditions and MDR at the
"threshold" for some particle--creation interactions in HL--type theories
are studied in \cite{mercati}. One of the most important aspects to be
understood is how such theories can be constructed in a general
geometric form, and what their possible applications are.
During the last decade, a series of works has been published relating
quantum phenomenology and, for instance, anisotropy and dark energy/matter
problems to Finsler gravity models with LV, MDR and locally anisotropic
spacetime configurations; see explicit constructions and references in \cite{mavromatos,mignemi,girelli,gibbons,sindoni,
lammer,stavrinos,stavr0,visser1,visser2,yang,calcagni,schuller}. There is a
study of possible links between anisotropic--scaling scenarios and Finsler
spacetimes \cite{sindoni1}. A surprising conclusion which can be drawn from
such approaches is that we have to include certain Finsler type physical
objects into various schemes of quantization of gravity and apply
corresponding geometric methods in order to elaborate in a self--consistent
form relativistic and non--relativistic models of QG.
Different ideas have been proposed and explicit theoretical
constructions elaborated related to Finsler geometry, its generalizations and applications
in modern physics (for particle and mathematical physics research, we cite
Refs. \cite{vrev1,vcosm,vbrane,vsgg,vcrit,ma}\footnote{there are thousands of papers and tens of monographs on Finsler geometry and
applications -- it is not possible to summarize and discuss in this paper all
such ''standard'' and ''nonstandard'' theories in relation to modern gravity
and analogous mechanical models}). For instance, Finsler gravity models can
be derived in low energy limits from string gravity theories \cite{vstr1,vstr2} and brane gravity \cite{vsingl1,vsingl2,vbrane}, and induced by
noncommutative generalizations of Einstein gravity \cite{vnc1,vnc2,vnc3}.
Various classes of commutative and noncommutative Finsler type geometries
and gravity theories are induced via nonholonomically constrained Ricci
flows of (pseudo) Riemannian metrics \cite{vrf1,vrf2,vrf3,vrf4}. Finsler
variables can be introduced in GR and extra dimension generalizations, which
allows us to formulate geometric methods of constructing exact (and very
general classes of) solutions in different gravity theories \cite{vex1,vex2,vex3,vsgg,vnc2}. Re--writing the Einstein equations in the
so--called almost K\"{a}hler--Finsler variables, it was possible to apply
rigorous methods of deformation and A--brane quantization and nonholonomic
gauge methods in order to elaborate quantum models of Einstein gravity and
Lagrange--Finsler--Hamilton generalizations \cite{vqgr1,vqgr2,vqgr3,vqgr4,vqgr5,vqgr6}.
It is our purpose to elaborate a modification of the HL and GR theories which
will include MDR defined, in general, for tangent bundles to Einstein
spacetimes and nonholonomic/anisotropic modifications. Naturally, such
constructions can be performed in the framework of Finsler geometry and its
generalizations. We consider such an approach to be well motivated because any
type of nonlinear dispersion relation is canonically related to a Finsler
generating function (the usual constructions in the special and general relativity
theories are contained as particular (quadratic) cases). In this sense, QG
models of HL and/or other types should be more realistically elaborated in
terms of (pseudo) Finsler fundamental geometric objects; this seems to give
a more realistic quantum theory than the existing quantum versions of (pseudo)
Riemannian geometry.
The paper is organized as follows. In section \ref{sec2}, we provide a brief
summary of the HL and GR theories and consider possible MDR for scaling
anisotropies. We show that a certain class of fundamental Finsler functions
can be derived from the HL theory and various types of QG models with MDR. We
formulate Ho\v{r}ava--Finsler (HF) gravity as a theory generalizing HL models on Finsler
spaces in section \ref{sec3}. A trapping mechanism for Finsler branes
resulting in HL gravity (and, for corresponding nonholonomic constraints, in
the GR theory) is studied in section \ref{sec4}. Finally, conclusions are
provided in section \ref{sec5}. In the Appendix, we summarize some technical
details on diagonal solutions in HF gravity.
\section{Finsler Geometry Induced by MDR in HL Gravity}
\label{sec2} The goal of this section is to show how fundamental Finsler
geometric objects are induced from some general MDR and, in particular, for
HL gravity: We outline in brief the HL gravity theory, analyze possible MDR
and show how the fundamental Finsler generating function can be associated
to such anisotropic configurations and nonlinear dispersions.
\subsection{Preliminaries on the HL and GR theories}
In standard form, the dynamical variables of HL gravity are the lapse
function, $N,$ the shift function, $N^{\widehat{i}},$ and the spacelike
metric, $g_{\widehat{i}\widehat{j}},$ in terms of which the metric is
written as the Arnowitt--Deser--Misner, ADM, (1+3) splitting
\begin{equation}
ds^{2}=g_{ij}dx^{i}dx^{j}=-N^{2}dt^{2}+g_{\widehat{i}\widehat{j}}(dx^{\widehat{i}}+N^{\widehat{i}}dt)(dx^{\widehat{j}}+N^{\widehat{j}}dt).
\label{adm}
\end{equation}
The above metric $g_{ij}=(N^{2},g_{\widehat{i}\widehat{j}})$ (we shall
write in brief, $hg=(N^{2},\widehat{g})$) is supposed to be invariant under
the foliation--preserving diffeomorphisms of the HL theory, $t^{\prime
}=t^{\prime }(t)$ and $x^{\widehat{i}^{\prime }}=x^{\widehat{i}}(t,x^{\widehat{k}}),$ where indices $i,i^{\prime },j,j^{\prime },...=1,2,3,4,$ for
$x^{i}=(x^{1}=t,x^{\widehat{i}})$ and $\widehat{i},\widehat{i}^{\prime },\widehat{j},\widehat{j}^{\prime },...=2,3,4.$\footnote{We have to elaborate a new system of notations which will be compatible with
the 3+1 splitting of the ADM formalism and the 4+4, or 2+2/3+2/4+3, nonholonomic
splittings used in Finsler geometry; see details in \cite{vrev1}.} The theory
is invariant under the anisotropic scaling symmetry
\begin{equation}
t\rightarrow \mathit{l}^{z}t,\ x^{\widehat{i}}\rightarrow \mathit{l}x^{\widehat{i}},\mbox{ when, for }z=3,\ N\rightarrow \mathit{l}^{-2}N,\ N^{\widehat{i}}\rightarrow \mathit{l}^{-2}N^{\widehat{i}},\ g_{\widehat{i}\widehat{j}}\rightarrow g_{\widehat{i}\widehat{j}} \label{scalin}
\end{equation}
(to elaborate a power--counting renormalizable theory of gravity in four
dimensions, 4--d, one considers $z=3$). The projectability condition
requires a homogeneous lapse function $N=N(t)$ but admits general shift and
3--d metric, i.e. $N^{\widehat{i}}(x^{k})=N^{\widehat{i}}(t,x^{\widehat{k}})$
and $\ g_{\widehat{i}\widehat{j}}(x^{k})\rightarrow g_{\widehat{i}\widehat{j}}(t,x^{\widehat{k}}).$\footnote{It is possible to consider a general nonhomogeneous lapse function, but this
may result in problems when attempting to quantize the model; see \cite{horava2,li}.}
The action for HL gravity is postulated as a sum of ''kinetic'', $\ _{K}S,$
and ''potential'' parts, $\ _{V}S,$
\begin{equation}
\ ^{HL}S=\ _{K}S+\ _{V}S, \label{action}
\end{equation}
where
\begin{eqnarray*}
\ _{K}S &=&\frac{2}{\kappa ^{2}}\int dtd^{3}x\sqrt{|\widehat{g}|}N\left( K_{\widehat{i}\widehat{j}}K^{\widehat{i}\widehat{j}}-\lambda K^{2}\right) \\
\ _{V}S &=&\int dtd^{3}x\sqrt{|\widehat{g}|}N[\frac{\kappa ^{2}\mu }{2\varpi ^{2}}\epsilon ^{\widehat{i}\widehat{j}\widehat{k}}R_{\widehat{i}\widehat{l}}\nabla _{\widehat{j}}R_{\ \widehat{k}}^{\widehat{l}}-\frac{\kappa ^{2}\mu }{8}R_{\widehat{i}\widehat{j}}R^{\widehat{i}\widehat{j}} \\
&&+\frac{\kappa ^{2}\mu }{8(1-3\lambda )}\left( \frac{1-4\lambda }{4}R^{2}+\Lambda R-3\Lambda ^{2}\right) -\frac{\kappa ^{2}}{2\varpi ^{2}}C_{\widehat{i}\widehat{j}}C^{\widehat{i}\widehat{j}}],
\end{eqnarray*}
for some constants $\kappa ,\mu ,\varpi ,\Lambda $ and a dynamical constant
$\lambda $ running as the energy scale changes. The general covariance in GR
imposes the condition $\lambda =1.$ It is known that the important
variation-interval of $\lambda $ is between $1/3$ (the ultra--violet, UV,
limit) and $1$ (the infra--red, IR, limit). In the above formulas,
\begin{equation*}
K_{\widehat{i}\widehat{j}}=\frac{1}{2N}\left( \frac{\partial g_{\widehat{i}\widehat{j}}}{\partial t}-\nabla _{\widehat{i}}N_{\widehat{j}}-\nabla _{\widehat{j}}N_{\widehat{i}}\right)
\end{equation*}
is the extrinsic curvature with $K=g^{\widehat{i}\widehat{j}}K_{\widehat{i}\widehat{j}};$ the Cotton tensor is defined as
\begin{equation*}
C^{\widehat{i}\widehat{j}}=\frac{\epsilon ^{\widehat{i}\widehat{k}\widehat{l}}}{\sqrt{|\widehat{g}|}}\nabla _{\widehat{k}}\left( R_{\ \widehat{l}}^{\widehat{j}}-\frac{1}{4}R\delta _{\ \widehat{l}}^{\widehat{j}}\right) ,
\end{equation*}
where such geometric objects are constructed for the Levi--Civita connection
$\nabla _{\widehat{k}}$ and $R$ determined by the 3--d spatial metric $g_{\widehat{i}\widehat{j}},$ for $\delta _{\ \widehat{l}}^{\widehat{j}}$ being
the Kronecker symbol and $|\widehat{g}|$ computed as the determinant of the 3--d
metric. In this work, we shall consider a simple form of the theory with field
equations derived from (\ref{action}) (for instance, we can also consider
the ''detailed balance'' condition which reduces the number of terms in the
potential).\footnote{The most general possible potential is analyzed in \cite{sotir1,sotir2}; we
shall elaborate simpler constructions which do not change our basic
conclusions on the relation of MDR and Finsler geometry.}
In the infrared limit of (\ref{action}) we can obtain the ADM form of the
Einstein--Hilbert action if the speed of light, $c,$ gravitational constant,
$G,$ and cosmological constant, $\ ^{GR}\Lambda ,$ (all in GR) are defined,
respectively,
\begin{equation}
c=\frac{\kappa ^{2}\mu }{4}\sqrt{\frac{\Lambda }{1-3\lambda }},\ 16\pi G=\frac{\kappa ^{4}\mu }{8}\sqrt{\frac{\Lambda }{1-3\lambda }}\mbox{ and }\ ^{GR}\Lambda =\frac{3\kappa ^{4}\mu ^{2}\Lambda ^{2}}{32(1-3\lambda )}.
\label{grcond}
\end{equation}
There is also a coefficient before the $R^{2}$ term, $\kappa ^{2}\mu
^{2}=8(1-3\lambda )c^{3}/16\pi G\Lambda .$ The GR theory can be considered
as a ''homogeneous'' and locally isotropic version of HL gravity.
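The identifications (\ref{grcond}) are plain arithmetic and can be evaluated directly. The sketch below uses illustrative order-one parameter values (our own choice, not physical fits), taken with $1-3\lambda >0$ so that the square roots are real:

```python
import math

# Illustrative (hypothetical) parameter values with 1 - 3*lam > 0
kappa, mu, Lam, lam = 1.0, 1.0, 1.0, 0.0

root = math.sqrt(Lam / (1 - 3 * lam))
c = kappa**2 * mu / 4 * root                    # emergent speed of light
G = kappa**4 * mu / 8 * root / (16 * math.pi)   # from 16*pi*G = (kappa^4 mu / 8) sqrt(...)
Lam_GR = 3 * kappa**4 * mu**2 * Lam**2 / (32 * (1 - 3 * lam))

print(c, Lam_GR)  # → 0.25 0.09375
```

For these sample values the infrared limit reproduces GR with $c=1/4$ and $\ ^{GR}\Lambda =3/32$.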
\subsection{MDR in HL gravity}
\label{ssmdr}A stability analysis of HL gravity is performed, for instance,
in Ref. \cite{bogdanos}. The conclusion of that work is that HL gravity
in its original form suffers from instabilities and fine--tuning which cannot be
overcome by simple tricks such as analytic continuation; see also \cite{kobakh,blas}. We propose that the HL theory should be extended on the
tangent/cotangent bundle (with velocity type coordinates) in order to
include MDR which will put the problem of stability of gravitational field
equations with nonholonomic constraints in a different form.
Let us outline some typical dispersion relations\footnote{which can be computed by perturbing the action (\ref{action}) up to second
order in the metric, preserving the ADM 3+1 foliation, around
a flat background} in HL gravity. Under the so--called ''detailed balance''
conditions, the following variants are possible (with Fourier
transforms of type $\psi (t,x^{\widehat{i}})=\int \frac{d^{3}k}{(2\pi )^{3/2}}\psi _{p}(t)e^{ip_{\widehat{i}}x^{\widehat{i}}}).$
\begin{itemize}
\item For scalar perturbations and considering a low--$p$ behavior, we
obtain
\begin{equation*}
\omega ^{2}=-\frac{9\kappa ^{4}\mu ^{2}\Lambda ^{2}}{32(1-3\lambda )^{2}}<0.
\end{equation*}
Such a MDR induces instabilities in the IR for all values of $\lambda $ and
both signs of $\Lambda .$
\item For high--$p,$ the dispersion relation is
\begin{equation*}
\omega ^{2}=\frac{\kappa ^{4}\mu ^{2}}{16}\left( \frac{1-\lambda }{1-3\lambda }\right) ^{2}p^{4}.
\end{equation*}
\item Similar computations can be performed for tensor perturbations
\begin{equation*}
\omega ^{2}=c^{2}p^{2}+\frac{\kappa ^{4}\mu ^{2}}{16}p^{4}\pm \frac{\kappa
^{4}\mu }{4\varpi ^{2}}p^{5}+\frac{\kappa ^{4}}{4\varpi ^{4}}p^{6}.
\end{equation*}
\end{itemize}
A perturbative analysis can be extended beyond detailed balance. Such
extended relations can be written (using an additional parameter for the
corresponding contribution to the action):
\begin{itemize}
\item For the UV--behavior of scalar perturbations
\begin{equation*}
\omega ^{2}=\frac{\kappa ^{2}(1-\lambda )^{2}}{16(1-3\lambda )^{2}}p^{4}-\frac{3\kappa ^{2}(1-\lambda )}{2(1-3\lambda )}\eta p^{6}.
\end{equation*}
\item Finally, we present the formula for tensor perturbations:
\begin{equation*}
\omega ^{2}=c^{2}p^{2}+\frac{\kappa ^{4}\mu ^{2}}{16}p^{4}\pm \frac{\kappa
^{4}\mu }{4\varpi ^{2}}p^{5}+\left( \frac{\kappa ^{4}}{4\varpi ^{4}}-\frac{\kappa ^{2}\eta }{2}\right) p^{6}.
\end{equation*}
\end{itemize}
We conclude that the HL theory with a Minkowski background is characterized by
corresponding MDR $\omega (p^{i},\kappa ,\mu ,\Lambda ,\varpi ,c,\lambda
,\eta )$ depending nonlinearly on momentum variables and with critical
behavior (up to instabilities, branching of the dispersion relation, etc.)
determined by the values of the fundamental constants of the theory. The
formulas for nonlinear dispersions presented in this section are typical
ones which can be derived in various models of HL gravity or alternative
theories (in different approaches, one can consider only "even powers"
of momenta, parametric deformations, etc.).
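As a numerical sketch (with all constants set to illustrative order-one values of our own choosing, not fits), one can evaluate the tensor-mode relation above and observe the crossover from relativistic $p^{2}$ behavior in the IR to $p^{6}$ domination in the UV:

```python
# Tensor-mode dispersion beyond detailed balance, omega^2(p), schematic evaluation
c, kappa, mu, varpi, eta = 1.0, 1.0, 1.0, 1.0, 0.1   # illustrative values only

def omega2(p, sign=+1.0):
    """omega^2 = c^2 p^2 + (kappa^4 mu^2/16) p^4 +/- (kappa^4 mu/4 varpi^2) p^5
    + (kappa^4/4 varpi^4 - kappa^2 eta/2) p^6 (coefficients as in the text)."""
    return (c**2 * p**2
            + kappa**4 * mu**2 / 16 * p**4
            + sign * kappa**4 * mu / (4 * varpi**2) * p**5
            + (kappa**4 / (4 * varpi**4) - kappa**2 * eta / 2) * p**6)

for p in (1e-3, 1e3):
    print(p, omega2(p) / (c**2 * p**2))  # ratio ≈ 1 in the IR, very large in the UV
```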
\subsection{Fundamental Finsler functions and the HL theory}
In a more general context, we can perform an analysis of the propagation of
light rays in HL and various classes of gravity theories with LV; see
details, for instance, in Refs. \cite{lammer,vcosm,vbrane}. For light rays
propagating on a HL spacetime, the nonlinear dispersion relation\footnote{we can consider such a relation in a fixed point $x^{k}=x_{(0)}^{k},$
when $g_{\widehat{i}\widehat{j}}(x_{0}^{k})=g_{\widehat{i}\widehat{j}}$ and
$q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}=q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}(x_{0}^{k})$} between the frequency
$\omega $ and the wave vector $k_{i},$ can be written in a general abstract
form
\begin{equation}
\omega ^{2}=c^{2}\left[ g_{\widehat{i}\widehat{j}}k^{\widehat{i}}k^{\widehat{j}}\right] ^{2}\left( 1-\frac{1}{r}\frac{q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}y^{\widehat{i}_{1}}...y^{\widehat{i}_{2r}}}{\left[ g_{\widehat{i}\widehat{j}}y^{\widehat{i}}y^{\widehat{j}}\right] ^{2r}}\right) .
\label{disp}
\end{equation}
Depending on explicit parametrizations, with $k_{i}\rightarrow p_{i}\sim
y^{a},$ we can include the above dispersion formulas for scalar and tensor
perturbations, or for light propagation, into a formal expression of type
(\ref{disp}). Such MDR can be derived from very general arguments for a large
class of quantum and classical, commutative and noncommutative, gravity and
particle field theories with LV; see \cite{kost4,xiao,liberati,dimopoulos,anchor,amelino,carroll,burgess,mercati} (the
coefficients $q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}$ are
computed in explicit form for corresponding models).
In a series of works \cite{mavromatos,mavromatos1,mavromatos2,mignemi,girelli,gibbons,sindoni,lammer,stavrinos,stavr0,vcosm,vbrane}, various possibilities were analyzed in which MDR (\ref{disp}), or
certain particular forms\footnote{for instance, in the very special relativity, with corrections from string
and/or noncommutative dynamics, with Higgs type induced Finsler structures,
etc.}, can be naturally associated to nonlinear homogeneous quadratic
elements (with $F(x^{i},\beta y^{j})=\beta F(x^{i},y^{j}),$ for any $\beta
>0),$ when
\begin{eqnarray}
ds^{2} &=&F^{2}(x^{i},y^{j}) \notag \\
&\approx &-(cdt)^{2}+g_{\widehat{i}\widehat{j}}(x^{k})y^{\widehat{i}}y^{\widehat{j}}\left[ 1+\frac{1}{r}\frac{q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}(x^{k})y^{\widehat{i}_{1}}...y^{\widehat{i}_{2r}}}{\left( g_{\widehat{i}\widehat{j}}(x^{k})y^{\widehat{i}}y^{\widehat{j}}\right) ^{r}}\right] +O(q^{2}). \label{fbm}
\end{eqnarray}
Such nonlinear metric elements are usually considered in Finsler geometry. A
value $F$ is considered to be a fundamental (generating) Finsler function
usually satisfying the condition that the Hessian
\begin{equation}
\ ^{F}g_{ij}(x^{i},y^{j})=\frac{1}{2}\frac{\partial F^{2}}{\partial
y^{i}\partial y^{j}} \label{hess}
\end{equation}
is not degenerate.
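The non-degeneracy of the Hessian (\ref{hess}) can be checked symbolically for a toy two-dimensional generating function (our own illustrative choice, with a small quartic anisotropy of the type entering (\ref{fbm}); it is a sketch, not a physical model):

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
q = sp.Rational(1, 10)            # small illustrative anisotropy coefficient
s = y1**2 + y2**2
F2 = s * (1 + q * y1**4 / s**2)   # homogeneous of degree 2 in (y1, y2)

ys = (y1, y2)
# Hessian metric  g_ij = (1/2) d^2 F^2 / dy^i dy^j
g = sp.Matrix(2, 2, lambda i, j: sp.Rational(1, 2) * sp.diff(F2, ys[i], ys[j]))
det = sp.simplify(g.det())
print(det.subs({y1: 1, y2: 1}))   # nonzero: the Hessian metric is non-degenerate here
```

For this $F^{2}$ and small $q$ the determinant stays positive near the sample point, so the induced metric is well defined there.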
For $q_{\widehat{i}_{1}\widehat{i}_{2}...\widehat{i}_{2r}}\rightarrow 0$ and
a corresponding re-definition of frames and coordinates, we can generate
elements of type (\ref{adm}) for GR. The HL theory, with its generic
anisotropy and LV, is characterized by the dispersion relations $\omega
(p^{i},\kappa ,\mu ,\Lambda ,\varpi ,c,\lambda ,\eta )$ considered in
section \ref{ssmdr}. Our idea is to extend the Ho\v{r}ava constructions in a
(pseudo) Finsler form on tangent bundles to Lorentz modified manifolds which
will include nonlinear dispersion relations and parametric dependence of
solutions with various stable and nonstable nonlinear properties. Such
Finsler structures are determined naturally from perturbative properties and
the propagation of light/probing bodies in HL gravity. A Finsler generalization of
HL gravity can be constructed in metric compatible form following principles
very similar to the Einstein and Einstein--Finsler gravity (EFG) \cite{vcosm,vbrane,vrev1}, for metric compatible Finsler connections. The
gravitational field equations for such a theory can be integrated in general
form following the methods of \cite{vex1,vex2,vex3} (with parametric dependence of
solutions which allows us to consider stable and non--stable
configurations). It is also possible to quantize certain classes of Ho\v{r}ava--Finsler (HF) gravity models following methods of deformation
quantization, the A--brane formalism, gauge like methods, etc.; see \cite{vqgr1,vqgr2,vqgr3,vqgr4,vqgr5,vqgr6}.
\section{Ho\v{r}ava--Finsler Gravity}
\label{sec3}In this section, we provide a Finsler generalization of the HL
theory (called the Ho\v{r}ava--Finsler, in brief, HF) which will include as
some ``branch'' configurations the respective MDR on the tangent bundle $T
\mathbf{V},$ where $\mathbf{V}$ is a (pseudo) Riemannian spacetime in GR or
its anisotropic modifications defined by an HL action (\ref{action}).
\subsection{Fundamental geometric objects for HF gravity}
We shall label local coordinates on $T\mathbf{V}$ in the form $u^{\alpha
}=(x^{i},y^{a})$ (in brief $u=(x,y)$), where $x^{i}$ are local coordinates
on $\mathbf{V}$ and $y^{a}$ are fiber (velocity, or momentum type)
coordinates. Indices $\alpha ,\beta ,...$ will run values $1,2,...,8.$
Contrary to the case of (pseudo) Riemannian geometry (which is completely
determined by its metric tensor), a fundamental Finsler metric
(equivalently, generating function) $F^{2}$ (\ref{fbm}) and/or its Hessian
$\ ^{F}g_{ij}$ (\ref{hess}) do not completely define a geometric/physical
model on $T\mathbf{V}.$ We need certain additional assumptions in order to
construct, in a unique form, a triple of fundamental geometric objects (a
nonlinear connection, N--connection, structure; a metric structure on the
total space; and a linear connection which is adapted to a chosen
N--connection structure, called a distinguished connection, in brief, a
d--connection; in the canonical approach all such objects are induced in a
unique way by the fundamental Finsler function $F$), which are necessary for
the definition of a physical generalized spacetime/gravitational model using
the principles of Einstein--Finsler gravity (EFG) \cite{vcosm,vrev1}.
\subsubsection{N--connections induced by MDR and associated Finsler
generating functions}
An N--connection $\mathbf{N}$ is defined as a Whitney sum
\begin{equation}
TT\mathbf{V}=hT\mathbf{V}\oplus vT\mathbf{V}. \label{whitney}
\end{equation}
With respect to a local coordinate base, it is determined by its
coefficients $\mathbf{N}=\{N_{i}^{a}(x,y)\},$ i.e. $\mathbf{N}=
N_{i}^{a}dx^{i}\otimes \partial /\partial y^{a}.$\footnote{
Following our notation conventions \cite{vrev1,vsgg}, we use boldface
symbols for spaces and geometric object on spaces endowed with N--connection
structure. Because there are standard denotations using symbol $N$ both in
ADM model of gravity and in Finsler geometry, we have to use $(N,N^{\widehat{
i}})$ for the lapse and shift functions and $N_{i}^{a}$ for the N--connection
coefficients.} There is a class of local bases associated to the
N--connection, $\mathbf{e}_{\nu }=(\mathbf{e}_{i},e_{a}),$ and cobases, $
\mathbf{e}^{\mu }=(e^{i},\mathbf{e}^{a}),$ when
\begin{eqnarray}
\mathbf{e}_{i}&=&\frac{\partial }{\partial x^{i}}-\ N_{i}^{a}(u)\frac{\partial
}{\partial y^{a}}\mbox{ and
}\ e_{a}=\frac{\partial }{\partial y^{a}}, \label{nader} \\
e^{i}&=&dx^{i}\mbox{ and }\mathbf{e}^{a}=dy^{a}+\ N_{i}^{a}(u)dx^{i}.
\label{nadif}
\end{eqnarray}
Such a structure is, in general, nonholonomic (equivalently, anholonomic/
non--integrable) because, for instance, (\ref{nader}) satisfy nontrivial
nonholonomy relations of type
\begin{equation}
\lbrack \mathbf{e}_{\alpha },\mathbf{e}_{\beta }]=\mathbf{e}_{\alpha }
\mathbf{e}_{\beta }-\mathbf{e}_{\beta }\mathbf{e}_{\alpha }=W_{\alpha \beta
}^{\gamma }\mathbf{e}_{\gamma }, \label{anhrel}
\end{equation}
with (antisymmetric) nontrivial anholonomy coefficients $W_{ia}^{b}=\partial
_{a}N_{i}^{b}$ and $W_{ji}^{a}=\Omega _{ij}^{a}$ determined by the
coefficients of curvature of the N--connection, $\Omega _{ij}^{a}=\mathbf{e}
_{j}\left( N_{i}^{a}\right) -\mathbf{e}_{i}\left( N_{j}^{a}\right) .$ It
should be emphasized here that there is an N--connection structure $\mathbf{
N}=\ ^{c}\mathbf{N}$ which is canonically defined by $F.$\footnote{
Considering $L=F^{2}$ as a regular Lagrangian (i.e. with nondegenerate $\
^{F}g_{ij}$ (\ref{hess})), we can define the action integral $S(\tau
)=\int\limits_{0}^{1}L(x(\tau ),y(\tau ))d\tau $ with $y^{k}(\tau
)=dx^{k}(\tau )/d\tau ,$ for $x(\tau )$ parametrizing smooth curves on $V$
with $\tau \in \lbrack 0,1]$. We can prove \cite{ma} that the
Euler--Lagrange equations for $S(\tau ),$ $\frac{d}{d\tau }\frac{\partial L}{
\partial y^{i}}-\frac{\partial L}{\partial x^{i}}=0,$ are equivalent to the
``nonlinear geodesic'' (equivalently, semi--spray) equations $\frac{
d^{2}x^{k}}{d\tau ^{2}}+2G^{k}(x,y)=0,$ where $G^{k}=\frac{1}{4}g^{kj}\left(
y^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial x^{i}}-\frac{\partial L}{
\partial x^{j}}\right) $ induces the canonical N--connection $\
^{c}\mathbf{N}=\{\ ^{c}N_{j}^{a}=\partial G^{a}/\partial y^{j}\}.$} Under
general (co) frame/coordinate transforms, $\mathbf{e}^{\alpha }\rightarrow
\mathbf{e}^{\alpha ^{\prime }}=e_{\ \alpha }^{\alpha ^{\prime }}\mathbf{e}
^{\alpha }$ and/or $u^{\alpha }\rightarrow u^{\alpha
^{\prime }}(u^{\alpha }),$ preserving the splitting (\ref{whitney}), we get
a corresponding transformation law $\ ^{c}N_{j}^{a}\rightarrow N_{j^{\prime
}}^{a^{\prime }},$ when $\mathbf{N}=N_{i^{\prime }}^{a^{\prime
}}(u)dx^{i^{\prime }}\otimes \frac{\partial }{\partial y^{a^{\prime }}}$ is
given locally by a set of coefficients $\{N_{j}^{a}\}$ (we shall omit
priming, underlying etc of indices if that will not result in ambiguities)
\footnote{
We can use any convenient (for constructing exact solutions of field
equations, or geometric considerations) equivalent sets $\mathbf{N}=
\{N_{j}^{a}\},$ which under corresponding frame/coordinate transform can be
parametrized in a form $\ ^{F}\mathbf{N}=\ ^{c}\mathbf{N}=\{\
^{c}N_{j}^{a}\}.$ Here, we also emphasize that we can define conventionally
a N--connection structure on any manifold (not only on tangent/vector
bundles) by prescribing a fibered structure with conventional horizontal (h)
and vertical (v) splitting, for instance, a nonholonomic 2+2 splitting in GR
as we considered in \cite{vex1,vex3}.}
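The relation (\ref{anhrel}) can be verified directly for any given coefficients $N_{i}^{a}$. A minimal SymPy sketch, with two h--coordinates, one v--coordinate, and sample coefficients $N_{1}=x^{2}y,$ $N_{2}=(x^{1})^{2}$ chosen purely for illustration (not derived from a Finsler function):

```python
import sympy as sp

x1, x2, y = sp.symbols('x1 x2 y')
f = sp.Function('f')(x1, x2, y)

# Hypothetical N-connection coefficients (our sample choice, one v-coordinate)
N1 = x2 * y
N2 = x1**2

def e(i, expr):
    """N-elongated partial derivative e_i = d/dx^i - N_i d/dy, cf. (nader)."""
    xi, Ni = [(x1, N1), (x2, N2)][i]
    return sp.diff(expr, xi) - Ni * sp.diff(expr, y)

# Commutator [e_1, e_2] on a test function and the N-connection curvature
comm = sp.expand(e(0, e(1, f)) - e(1, e(0, f)))
Omega12 = sp.expand(e(1, N1) - e(0, N2))   # Omega_{12} = e_2(N_1) - e_1(N_2)

# The anholonomy relation (anhrel): [e_1, e_2] = Omega_{12} d/dy
assert sp.expand(comm - Omega12 * sp.diff(f, y)) == 0
```

The commutator closes on the vertical derivative alone, with coefficient given exactly by the N--connection curvature, which is the content of the nontrivial anholonomy coefficients $W_{ji}^{a}=\Omega _{ij}^{a}$.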
\subsubsection{Finsler metric structure on total tangent bundle}
We can use the so--called Sasaki lift in order to construct on $T\mathbf{V}$
a metric structure completely determined by a fundamental Finsler function
$F(x,y),$
\begin{eqnarray}
\ ^{F}\mathbf{g} &=&(h\ ^{F}g_{ij},v\ ^{F}g_{ij})
=\ ^{F}g_{ij}(x,y)[\ e^{i}\otimes e^{j}+\left( \ ^{\ast }\mathit{l}
_{P}\right) ^{2}\ ^{F}\mathbf{e}^{i}\otimes \ ^{F}\mathbf{e}^{j}],
\label{slm} \\
e^{i} &=&dx^{i}\mbox{ and }\ ^{F}\mathbf{e}^{a}=dy^{a}+\
^{F}N_{i}^{a}(u)dx^{i}, \label{ddifl}
\end{eqnarray}
where for canonical constructions $\ ^{F}N=\ ^{c}\mathbf{N}.$ In
the above formula we consider a length constant $\ ^{\ast }\mathit{l}_{P},$
which can be just the Planck length $\mathit{l}_{P}$ in QG models for
GR, but a different one for brane models. We have to include such a constant
in front of the v--part of the metric (\ref{slm}) in order to have the same
dimensions for the h-- and v--components of the metric when the coordinates
have the dimensions $[x^{i}]=cm$ and $[y^{i}\sim dx^{i}/ds]=cm/cm.$ In our further
considerations, we shall include such a constant into $h$--coefficients of
metrics if that will not result in ambiguities.
Under general frame transforms $e^{\alpha ^{\prime }}=e_{\ \alpha }^{\alpha
^{\prime }}\mathbf{e}^{\alpha },$ the above Finsler metric can be
represented in a general 4+4 form
\begin{eqnarray}
\ \ ^{H}\mathbf{g} &=&(h\ g_{ij},vg_{ab})=\ \ ^{H}g_{\alpha \beta }(x,y)\
\mathbf{e}^{\alpha }\otimes \ \mathbf{e}^{\beta } \label{dm} \\
&=&\ g_{ij}(x,y)\ e^{i}\otimes e^{j}+\left( \ ^{\ast }\mathit{l}_{P}\right)
^{2}\ h_{ab}(x,y)\ \mathbf{e}^{a}\otimes \ \mathbf{e}^{b}, \notag
\end{eqnarray}
for arbitrary $N_{j^{\prime }}^{a}$ (we put the left label $H$ in order to
emphasize that such a metric is induced by MDR and nonholonomic deformations
from HL gravity). With respect to a coordinate co-basis $du^{\beta
}=(dx^{j},dy^{b}),$ when $\partial _{\alpha }=\partial /\partial u^{\alpha
}=(\partial _{i}=\partial /\partial x^{i},\partial _{a}=\partial /\partial
y^{a}),$ both metrics can be transformed equivalently into
\begin{equation}
\ \ ^{H}\mathbf{g}=\ \ ^{H}\ \underline{g}_{\alpha \beta }\left( u\right)
du^{\alpha }\otimes du^{\beta }, \label{fmetr}
\end{equation}
where
\begin{equation}
\ ^{H}\underline{g}_{\alpha \beta }=\left[
\begin{array}{cc}
\ g_{ij}+\left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\
h_{ab}N_{i}^{a}N_{j}^{b} & \left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\
h_{ae}N_{j}^{e} \\
\ \left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\ h_{be}N_{i}^{e} & \left( \
^{\ast }\mathit{l}_{P}\right) ^{2}\ h_{ab}
\end{array}
\right] . \label{fansatz}
\end{equation}
The values $N_{i}^{a}(u)$ should not be identified with gauge fields
in a Kaluza--Klein theory on the tangent bundle, with potentials depending on
velocities, if we do not consider compactifications on the coordinates $y^{a}.$
In Finsler like theories, a set $\{N_{i}^{a}\}$ defines a N--connection
structure, with elongated partial derivatives (\ref{ddifl}).
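The equivalence of the N--adapted form (\ref{dm}) and the coordinate-basis ansatz (\ref{fansatz}) amounts to the matrix identity $\ ^{H}\underline{g}=E^{T}\,\mathrm{diag}(g,(\ ^{\ast }\mathit{l}_{P})^{2}h)\,E,$ where $E$ is the coframe matrix of (\ref{ddifl}). A 2+2 toy check with generic placeholder symbols (our illustration only):

```python
import sympy as sp

# Generic placeholder components for a 2+2 toy version of (fansatz)
g11, g22, h33, h44, lP = sp.symbols('g11 g22 h33 h44 lP')
N = sp.Matrix(2, 2, sp.symbols('N13 N14 N23 N24'))   # N_i^a: row i, column a
g = sp.diag(g11, g22)       # h-metric g_ij
h = sp.diag(h33, h44)       # v-metric h_ab
D = sp.diag(g, lP**2 * h)   # block-diagonal N-adapted metric

# Coordinate-basis metric assembled by hand, as in (fansatz)
G = sp.BlockMatrix([
    [g + lP**2 * N * h * N.T, lP**2 * N * h],
    [lP**2 * h * N.T,         lP**2 * h]]).as_explicit()

# Coframe matrix E: (e^i, e^a) = E (dx, dy), with e^a = dy^a + N_i^a dx^i
E = sp.BlockMatrix([[sp.eye(2), sp.zeros(2, 2)],
                    [N.T,       sp.eye(2)]]).as_explicit()

# The two expressions of the metric agree: G = E^T D E
assert sp.expand(G - E.T * D * E) == sp.zeros(4, 4)
```

This makes explicit that the off-diagonal blocks of (\ref{fansatz}) are entirely generated by the N--connection coefficients, not by independent metric degrees of freedom.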
We can invert the constructions for an arbitrary (\ref{fmetr}) and/or
(\ref{dm}), introduce Finsler variables, and define the metric (\ref{slm}) by
prescribing an arbitrary generating function $F$ on a manifold or bundle
space.
\subsubsection{The canonical distinguished Finsler connection}
In order to perform self--consistent geometric constructions with h-- and
v--splitting, the concept of a distinguished connection (in brief,
d--connection) was introduced. A d--connection $\mathbf{D}=(h\mathbf{D},v\mathbf{D})$
is defined as a linear connection preserving under parallelism the N--connection
structure on $\mathbf{V}.$ The N--adapted components $\mathbf{\Gamma }_{\
\beta \gamma }^{\alpha }$ of a d--connection $\mathbf{D}$ are computed
following the equations $\mathbf{D}_{\alpha }\mathbf{e}_{\beta }=\mathbf{\Gamma }
_{\ \alpha \beta }^{\gamma }\mathbf{e}_{\gamma }$ and parametrized in the
form $\ \mathbf{\Gamma }_{\ \alpha \beta }^{\gamma }=\left(
L_{jk}^{i},L_{bk}^{a},C_{jc}^{i},C_{bc}^{a}\right) ,$ where $\mathbf{D}
_{\alpha }=(D_{i},D_{a}),$ with $h\mathbf{D}=(L_{jk}^{i},L_{bk}^{a})$ and $v
\mathbf{D}=(C_{jc}^{i},C_{bc}^{a})$ defining, respectively, certain h-- and
v--covariant derivatives.
The simplest way to perform computations with a d--connection $\mathbf{D}$
is to associate it with a N--adapted differential 1--form
\begin{equation}
\mathbf{\Gamma }_{\ \beta }^{\alpha }=\mathbf{\Gamma }_{\ \beta \gamma
}^{\alpha }\mathbf{e}^{\gamma }, \label{dconf}
\end{equation}
and apply on $T\mathbf{V}$ the well known formalism of differential forms as
in GR. For instance, the torsion of $\mathbf{D}$ is defined/computed as
\begin{equation}
\mathcal{T}^{\alpha }\doteqdot \mathbf{De}^{\alpha }=d\mathbf{e}^{\alpha }+
\mathbf{\Gamma }_{\ \beta }^{\alpha }\wedge \mathbf{e}^{\beta }.
\label{tors}
\end{equation}
With respect to a N--adapted basis, this torsion is given by $\mathcal{T}=\{
\mathbf{T}_{\ \alpha \beta }^{\gamma }\equiv \mathbf{\Gamma }_{\ \alpha
\beta }^{\gamma }-\mathbf{\Gamma }_{\ \beta \alpha }^{\gamma };T_{\
jk}^{i},T_{\ ja}^{i},T_{\ ji}^{a},T_{\ bi}^{a},T_{\ bc}^{a}\},$ where the
nontrivial coefficients are
\begin{eqnarray}
T_{\ jk}^{i} &=&L_{jk}^{i}-L_{kj}^{i},T_{\ ja}^{i}=C_{jb}^{i},T_{\
ji}^{a}=-\Omega _{\ ji}^{a}, \label{dtors} \\
T_{aj}^{c} &=&L_{aj}^{c}-e_{a}(N_{j}^{c}),T_{\ bc}^{a}=C_{bc}^{a}-C_{cb}^{a}.
\notag
\end{eqnarray}
There is a canonical d--connection\footnote{
For any type of metric parametrization (\ref{dm}), (\ref{fmetr}) and/or
(\ref{slm}), we can construct the Levi--Civita connection $\nabla =\{\Gamma
_{\ \beta \gamma }^{\alpha }\}$ on $\mathbf{V}$ in a standard form. This
connection is not used in Finsler geometry and generalizations because it is
not compatible with the N--connection splitting; under parallel transports
with $\nabla ,$ the Whitney sum (\ref{whitney}) is not preserved.}, $\
^{H}\widehat{\mathbf{D}}=\{\ ^{H}\widehat{\mathbf{\Gamma }}_{\ \alpha
\beta }^{\gamma }=(\widehat{L}_{jk}^{i},\widehat{L}_{bk}^{a},\widehat{C}
_{jc}^{i},\widehat{C}_{bc}^{a})\},$ which is uniquely and completely
defined by the coefficients of metric $\mathbf{g}$ (\ref{dm}) (equivalently,
(\ref{fmetr}) and/or (\ref{slm})) following the metric compatibility
conditions that $\widehat{\mathbf{D}}\mathbf{g}=0$ and that the ``pure''
horizontal and vertical torsion coefficients are zero, i.e. $\widehat{T}_{\
jk}^{i}=0$ and $\widehat{T}_{\ bc}^{a}=0,$
\begin{eqnarray}
\widehat{L}_{jk}^{i} &=&\frac{1}{2}g^{ir}\left(
e_{k}g_{jr}+e_{j}g_{kr}-e_{r}g_{jk}\right) , \label{candcon} \\
\widehat{L}_{bk}^{a} &=&e_{b}(N_{k}^{a})+\frac{1}{2}h^{ac}\left(
e_{k}h_{bc}-h_{dc}\ e_{b}N_{k}^{d}-h_{db}\ e_{c}N_{k}^{d}\right) , \notag \\
\widehat{C}_{jc}^{i} &=&\frac{1}{2}g^{ik}e_{c}g_{jk},\ \widehat{C}_{bc}^{a}=
\frac{1}{2}h^{ad}\left( e_{c}h_{bd}+e_{b}h_{cd}-e_{d}h_{bc}\right) . \notag
\end{eqnarray}
Such a d--connection contains nontrivial torsion components $\widehat{T}_{\
ja}^{i},\widehat{T}_{\ ji}^{a},\widehat{T}_{aj}^{c},$ i.e., in general, $\
^{H}\widehat{\mathcal{T}}\neq 0.$ It is very different from the various types
of torsion in Einstein--Cartan, gauge, string and other gravity theories
(for which additional field equations are defined) because its N--adapted
components are completely determined by the metric structure, which in its
turn (in our model) is related to MDR in HL gravity; we do not need
additional field equations for this type of torsion, induced nonholonomically
via the N--connection structure.\footnote{
Any geometric/physical construction for $\widehat{\mathbf{D}}$ can be
re--defined equivalently into a similar one with the Levi--Civita connection
following formula
\begin{equation*}
\Gamma _{\ \alpha \beta }^{\gamma }=\widehat{\mathbf{\Gamma }}_{\ \alpha
\beta }^{\gamma }+Z_{\ \alpha \beta }^{\gamma },
\end{equation*}
where the distortion tensor $Z_{\ \alpha \beta }^{\gamma }$ is given by the
nontrivial coefficients
\begin{eqnarray*}
\ Z_{jk}^{a} &=&-\widehat{C}_{jb}^{i}g_{ik}h^{ab}-\frac{1}{2}\Omega
_{jk}^{a},~Z_{bk}^{i}=\frac{1}{2}\Omega _{jk}^{c}h_{cb}g^{ji}-\Xi _{jk}^{ih}
\widehat{C}_{hb}^{j}, \\
Z_{bk}^{a} &=&~^{+}\Xi _{cd}^{ab}~\widehat{T}_{kb}^{c},\ Z_{kb}^{i}=\frac{1}{
2}\Omega _{jk}^{a}h_{cb}g^{ji}+\Xi _{jk}^{ih}~\widehat{C}_{hb}^{j},\
Z_{jk}^{i}=0, \\
\ Z_{jb}^{a} &=&-~^{-}\Xi _{cb}^{ad}~\widehat{T}_{jd}^{c},\ Z_{bc}^{a}=0,\
Z_{ab}^{i}=-\frac{g^{ij}}{2}\left[ \widehat{T}_{ja}^{c}h_{cb}+\widehat{T}
_{jb}^{c}h_{ca}\right] ,
\end{eqnarray*}
for $\ \Xi _{jk}^{ih}=\frac{1}{2}(\delta _{j}^{i}\delta
_{k}^{h}-g_{jk}g^{ih})$ and $~^{\pm }\Xi _{cd}^{ab}=\frac{1}{2}(\delta
_{c}^{a}\delta _{d}^{b}+h_{cd}h^{ab}).$}
Via nonholonomic transforms, we can transform $\ ^{H}\widehat{\mathbf{D}}$
into the Cartan d--con\-nection $\ ^{H}\widetilde{\mathbf{D}}$ in Finsler
geometry which is also metric compatible and completely defined by the same
metric structure. On spaces of even dimensions such connections contain the
same physical information for a given Finsler generating function $F$ on the
spacetime manifold. The metric compatibility plays a crucial role in defining
Finsler generalizations of gravity in an ``almost standard form'' following
principles similar to those in GR (it is a more sophisticated task to
elaborate viable physical models using metric noncompatible connections, for
instance, the so--called Chern connection for Finsler geometry; see critical
remarks and details in Refs. \cite{vcrit,vcosm,vrev1}).
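Metric compatibility $\widehat{\mathbf{D}}\mathbf{g}=0$ can be illustrated in the holonomic limit $N_{i}^{a}=0,$ when the h--coefficients (\ref{candcon}) reduce to the usual Christoffel symbols. The following sketch, with a hypothetical diagonal sample metric, verifies $\widehat{D}_{k}g_{ij}=0$ symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]

# Hypothetical diagonal sample h-metric; with N_i^a = 0 the N-elongated
# derivatives e_k in (candcon) reduce to ordinary partials d/dx^k
g = sp.diag(sp.exp(2 * x1), x1**2 + 1)
ginv = g.inv()

def L(i, j, k):
    """h-coefficient L^i_{jk} = (1/2) g^{ir}(e_k g_{jr} + e_j g_{kr} - e_r g_{jk})."""
    return sp.Rational(1, 2) * sum(
        ginv[i, r] * (sp.diff(g[j, r], X[k]) + sp.diff(g[k, r], X[j])
                      - sp.diff(g[j, k], X[r]))
        for r in range(2))

# Metric compatibility: D_k g_ij = d_k g_ij - L^r_{ik} g_rj - L^r_{jk} g_ir = 0
for i in range(2):
    for j in range(2):
        for k in range(2):
            Dg = sp.diff(g[i, j], X[k]) - sum(
                L(r, i, k) * g[r, j] + L(r, j, k) * g[i, r] for r in range(2))
            assert sp.simplify(Dg) == 0
```

The analogous check for the v--coefficients $\widehat{C}_{bc}^{a}$ runs identically with $h_{ab}$ and $e_{c}=\partial /\partial y^{c}$.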
\subsubsection{Nonholonomic deformations relating HF and HL metrics}
The \textbf{Ho\v{r}ava--Finsler} (HF) gravity theory is a (pseudo) Finsler
geometry model induced canonically on $T\mathbf{V}$ by a Finsler generating
function $F$ associated to MDR in ``standard''\footnote{
The word ``standard'' is an approximation because up till present there are
different versions of HL gravity with, or without, detailed balance
conditions, generalized forms, etc.} HL gravity. Such a theory is determined
by the data $[F:\ ^{F}\mathbf{g}=(h\ ^{F}g_{ij},v\ ^{F}g_{ij}),\
^{F}\mathbf{N},\ ^{F}\mathbf{D}=\ ^{H}\widehat{\mathbf{D}}],$ where (up to
frame transforms)
\begin{equation*}
\ ^{F}g_{ij}(x,y)\sim g_{ij}(x,y)=e_{\ i}^{i^{\prime }}(x,y)e_{\
j}^{j^{\prime }}(x,y)\ ^{HL}g_{ij}(x),
\end{equation*}
for $\ ^{HL}g_{ij}(x)$ being a solution of gravitational field equations in
HL gravity on $\mathbf{V},$ derived from the action (\ref{action}). The values
$e_{\ i}^{i^{\prime }}(x,y)$ and $h_{bc}(x,y)$ have to be defined from
certain solutions of gravitational field equations in HF gravity, see next
section \ref{ssfeq}.
If the conditions (\ref{grcond}) are imposed in HF gravity, we can state
such limits that the model defines an \textbf{Einstein--Finsler gravity}
theory (EFG) \cite{vcosm}. This class of metric compatible Finsler gravity
theories on $TV$ is defined by the data $[F:\ ^{F}\mathbf{g}=(h\ ^{F}g_{ij},v\
^{F}g_{ij}),\ ^{F}\mathbf{N},\ ^{F}\mathbf{D}=\widehat{\mathbf{D}}],$
when $\ ^{F}g_{ij}(x,y)\sim g_{ij}(x,y)=e_{\ i}^{i^{\prime }}(x,y)e_{\
j}^{j^{\prime }}(x,y)\ ^{E}g_{ij}(x),$ for $\ ^{E}g_{ij}(x)$ being a
solution of the Einstein equations in GR. Via nonholonomic frame transforms,
the theory can be equivalently described in standard variables of GR with
$\left[ \mathbf{g},\ ^{g}\nabla \right] .$
In explicit form, we have to elaborate natural trapping/warping mechanisms
defined by explicit solutions of (Finsler type) gravitational field
equations which, in the classical limit $\ ^{\ast }\mathit{l}_{P}\rightarrow
0,$ when HF/EFG $\rightarrow $ HL, or GR, determine QG corrections to
gravitational and matter field interactions at different scales depending on
the class of considered models and solutions; see below, section \ref{sec4}.
\subsection{Field equations in canonical Ho\v{r}ava--Finsler gravity}
\label{ssfeq}A canonical (pseudo) Finsler structure on $T\mathbf{V}$
determined by MDR in HL gravity already contains all the anisotropic
properties of the metrics (\ref{adm}) with scaling properties (\ref{scalin})
included in the $h$--part of the corresponding N--adapted metric (\ref{dm})
and/or (\ref{slm}). We elaborate a Ho\v{r}ava--Finsler gravity theory not
just by formally lifting the geometric objects and action (\ref{action}) to
$T\mathbf{V}$ (geometrically, such a procedure can be defined in a canonical
way). There are no experimental data about matter fields and their
energy--momentum tensors on tangent/vector bundles. A ``simple'' approach is to
develop a Finsler brane gravity model with general assumptions on matter
fields in the bulk and warping/trapping of matter on a 4--d base spacetime
(see details in Refs. \cite{vbrane} and, for some early off--diagonal
constructions with N--connection structure for generalized Randall--Sundrum
scenarios, \cite{vsingl1,vsingl2}, and references therein). A well--defined
trapping mechanism with effective (in general, anisotropically polarized)
cosmological constant and maximal speed of light (as solutions for the bulk
HF gravity) allows us to simplify substantially the constructions related to
possible models of HF gravity which can transform in the quasi--classical
limit into the HL or/and GR theories.
Using the canonical d--connection 1--form of type (\ref{dconf}), with
coefficients $\ \ ^{H}\widehat{\mathbf{\Gamma }}_{\ \alpha \beta }^{\gamma }$
(\ref{candcon}), we can compute the curvature of $\ ^{H}\widehat{\mathbf{D}},$
\begin{equation}
\ ^{H}\widehat{\mathcal{R}}_{~\beta }^{\alpha }:=\ ^{H}\widehat{\mathbf{D}}\
^{H}\widehat{\mathbf{\Gamma }}_{\ \beta }^{\alpha }=d\ ^{H}\widehat{\mathbf{
\Gamma }}_{\ \beta }^{\alpha }-\ ^{H}\widehat{\mathbf{\Gamma }}_{\ \beta
}^{\gamma }\wedge \ ^{H}\widehat{\mathbf{\Gamma }}_{\ \gamma }^{\alpha }=
\widehat{\mathbf{R}}_{\ \beta \gamma \delta }^{\alpha }\mathbf{e}^{\gamma
}\wedge \mathbf{e}^{\delta }, \label{curv}
\end{equation}
see details in \cite{vcosm,vrev1}, where the formulas for all coefficients
are given in explicit form. The Ricci d--tensor $\widehat{R}ic=\{\widehat{
\mathbf{R}}_{\alpha \beta }\}$ is defined by contracting respectively the
components of the curvature tensor, $\widehat{\mathbf{R}}_{\alpha \beta
}\doteqdot \widehat{\mathbf{R}}_{\ \alpha \beta \tau }^{\tau }.$ The h--/
v--components of this d--tensor, $\widehat{\mathbf{R}}_{\alpha \beta }=\{\
\widehat{R}_{ij},\widehat{R}_{ia},\ \widehat{R}_{ai},\ \widehat{R}_{ab}\},$
are
\begin{equation}
\widehat{R}_{ij}:=\widehat{R}_{\ ijk}^{k},\ \ \widehat{R}_{ia}:=-\widehat{R}
_{\ ika}^{k},\ \widehat{R}_{ai}:=\widehat{R}_{\ aib}^{b},\ \widehat{R}_{ab}:=
\widehat{R}_{\ abc}^{c}. \label{dricci}
\end{equation}
The scalar curvature of $\ \ ^{H}\widehat{\mathbf{D}}$ is constructed by
using the inverse to $\mathbf{g}$ (\ref{dm}),
\begin{equation}
\ ^{s}\widehat{\mathbf{R}}:=\mathbf{g}^{\alpha \beta }\widehat{\mathbf{R}}
_{\alpha \beta }=g^{ij}\widehat{R}_{ij}+h^{ab}\widehat{R}_{ab}=\check{R}+
\check{S}, \label{sdccurv}
\end{equation}
where $\check{R}=g^{ij}\widehat{R}_{ij}$ and $\check{S}=h^{ab}\widehat{R}
_{ab}$ are respectively the h-- and v--components of the scalar curvature.
The Einstein tensor for $\ ^{H}\widehat{\mathbf{D}}$ is, by definition,
\begin{equation}
\ \ ^{H}\widehat{\mathbf{E}}_{\alpha \beta }:=\widehat{\mathbf{R}}_{\alpha
\beta }-\frac{1}{2}\mathbf{g}_{\alpha \beta }\ ^{s}\widehat{\mathbf{R}}.
\label{enstdt}
\end{equation}
We can postulate the gravitational field equations for the HF gravity on $T
\mathbf{V}$ in the form
\begin{equation}
\ ^{H}\widehat{\mathbf{E}}_{\alpha \beta }=\widehat{\mathbf{\Upsilon }}
_{\alpha \beta }, \label{ensteqcdc}
\end{equation}
for arbitrary sources $\widehat{\mathbf{\Upsilon }}_{\alpha \beta }$ which
can be, as a matter of principle, defined as certain lifts of
energy--momentum tensors of matter fields in HL, or GR, theory. It should be
emphasized here that the solutions of equations (\ref{ensteqcdc}), for
``projections'' $T\mathbf{V}\rightarrow \mathbf{V},$ in general, do not
transform trivially into solutions of HL gravity with action (\ref{action}).
Certain
warped/trapping scenarios can be constructed in such a form that
nonholonomic deformations of exact solutions in HF brane gravity are, in
general, non--explicitly related to solutions in HL gravity. This is a
consequence of nontrivial nonholonomic structure and generic nonlinear
character of such locally anisotropic gravitational systems.
\subsection{Magic splitting of gravitational HF field equations}
The gravitational field equations in HF gravity can be integrated in very
general forms on $T\mathbf{V}$ following the anholonomic deformation method
summarized in Refs. \cite{vex1,vex2,vex3} (the necessary ``velocity'' type
coordinates should be treated as certain ``extra'' dimensions added to the
two/four dimensional base space ones). The bulk of such solutions does not
have obvious
implications in modern physics. For simplicity, in this work we shall use a
more restricted class of exact solutions in HF gravity which seem to be
related to models of Finsler branes.
We parametrize the metric (\ref{dm}) in a form with a three ``shell''
anisotro\-py (with a nonholonomic splitting 2+2 and 2+2+2),
\begin{eqnarray}
\ \mathbf{g} &=&\ g_{ij}(x)dx^{i}\otimes dx^{j}+h_{\ ^{0}a\ ^{0}b}(x,\
^{0}y)\mathbf{e}^{\ ^{0}a}\otimes \mathbf{e}^{\ ^{0}b} \label{2forman} \\
&&+h_{\ ^{1}a\ ^{1}b}(x,\ ^{0}y,\ ^{1}y)\mathbf{e}^{\ ^{1}a}\otimes
\mathbf{e}^{\ ^{1}b}+h_{\ ^{2}a\ ^{2}b}(x,\ ^{0}y,\ ^{1}y,\ ^{2}y)\mathbf{e}
^{\ ^{2}a}\otimes \mathbf{e}^{\ ^{2}b}, \notag \\
\mathbf{e}^{\ ^{0}a} &=&dy^{\ ^{0}a}+N_{i}^{\ ^{0}a}(\ ^{0}u)dx^{i}, \notag
\\
\mathbf{e}^{\ ^{1}a} &=&dy^{\ ^{1}a}+N_{i}^{\ ^{1}a}(\ ^{1}u)dx^{i}+N_{\
^{0}a}^{\ ^{1}a}(\ ^{1}u)\ \mathbf{e}^{\ ^{0}a}, \notag \\
\mathbf{e}^{\ ^{2}a} &=&dy^{\ ^{2}a}+N_{i}^{\ ^{2}a}(\ ^{2}u)dx^{i}+N_{\
^{0}a}^{\ ^{2}a}(\ ^{2}u)\ \mathbf{e}^{\ ^{0}a}+N_{\ ^{1}a}^{\ ^{2}a}(\
^{2}u)\ \mathbf{e}^{\ ^{1}a}, \notag
\end{eqnarray}
for local $x=\{x^{i}\},\ ^{0}y=\{y^{\ ^{0}a}\},\ ^{1}y=\{y^{\ ^{1}a}\},\
^{2}y=\{y^{\ ^{2}a}\};$ the vertical indices and coordinates split in the
form $y=[\ ^{0}y,\ ^{1}y,\ ^{2}y],$ or $y^{a}=[y^{\ ^{0}a},y^{\ ^{1}a},y^{\
^{2}a}];$ $\ ^{0}u=(x,\ ^{0}y),\ ^{1}u=(\ ^{0}u,\ ^{1}y),\ ^{2}u=(\ ^{1}u,\
^{2}y),$ or $u^{\ \alpha }=u^{\ ^{0}\alpha }=(x^{i},y^{\ ^{0}a}),u^{\
^{1}\alpha }=(u^{\ ^{0}\alpha },y^{\ ^{1}a}),u^{\ ^{2}\alpha }=(u^{\
^{1}\alpha },y^{\ ^{2}a}).$ There is a ``less'' general ansatz of type
(\ref{2forman}) (with Killing symmetry on $y^{8},$ when the metric
coefficients do not depend on the variable $y^{8};$ it is convenient to
write $y^{3}=\ ^{0}v,$ $y^{5}=\ ^{1}v,y^{7}=\ ^{2}v$ and express the
$N$--coefficients via $n$-- and $w$--functions)
\begin{eqnarray}
\ ^{sol}\mathbf{g} &=&g_{i}(x^{k})dx^{i}\otimes dx^{i}+h_{\ ^{0}a}(x^{k},\
^{0}v)\mathbf{e}^{\ ^{0}a}{\otimes }\mathbf{e}^{\ ^{0}a} \label{ansgensol}
\\
&&+h_{\ ^{1}a}(u^{\ ^{0}\alpha },\ ^{1}v)\ \mathbf{e}^{\ ^{1}a}{\otimes }\
\mathbf{e}^{\ ^{1}a}+h_{\ ^{2}a}(u^{\ ^{1}\alpha },\ ^{2}v)\ \mathbf{e}^{\
^{2}a}{\otimes }\ \mathbf{e}^{\ ^{2}a}, \notag
\end{eqnarray}
\begin{eqnarray}
\mathbf{e}^{3} &=&dy^{3}+w_{i}(x^{k},\ ^{0}v)dx^{i},\mathbf{e
^{4}=dy^{4}+n_{i}(x^{k},\ ^{0}v)dx^{i}, \notag \\
\mathbf{e}^{5} &=&dy^{5}+w_{\ ^{0}\beta }(u^{\ ^{0}\alpha },\
^{1}v)du^{\ ^{0}\beta },\mathbf{e}^{6}=dy^{6}+n_{\ ^{0}\beta }(u^{\ ^{0}\alpha },\
^{1}v)du^{\ ^{0}\beta }, \notag \\
\mathbf{e}^{7} &=&dy^{7}+w_{\ ^{1}\beta }(u^{\ ^{1}\alpha },\ ^{2}v)du^{\
^{1}\beta },\mathbf{e}^{8}=dy^{8}+n_{\ ^{1}\beta }(u^{\ ^{1}\alpha },\
^{2}v)du^{\ ^{1}\beta }. \notag
\end{eqnarray}
The HF gravitational field equations (\ref{ensteqcdc}) for the canonical
d--connection $\widehat{\mathbf{D}}$ can be solved in general forms for the
ansatz (\ref{ansgensol}) and sources parametrized with respect to N--adapted
frames in the form
\begin{eqnarray}
\widehat{\mathbf{\Upsilon }}_{\ \delta }^{\beta } &=&diag[\widehat{\mathbf{
\Upsilon }}_{\ 1}^{1}=\widehat{\mathbf{\Upsilon }}_{\ 2}^{2}=\widehat{
\mathbf{\Upsilon }}_{2}(u^{\ ^{2}\alpha }),\widehat{\mathbf{\Upsilon }}_{\
3}^{3}=\widehat{\mathbf{\Upsilon }}_{\ 4}^{4}=\widehat{\mathbf{\Upsilon }}
_{4}(u^{\ ^{2}\alpha }), \notag \\
&&\widehat{\mathbf{\Upsilon }}_{\ 5}^{5}=\widehat{\mathbf{\Upsilon }}_{\
6}^{6}=\widehat{\mathbf{\Upsilon }}_{6}(u^{\ ^{2}\alpha }),\widehat{\mathbf{
\Upsilon }}_{\ 7}^{7}=\widehat{\mathbf{\Upsilon }}_{\ 8}^{8}=\widehat{
\mathbf{\Upsilon }}_{8}(u^{\ ^{2}\alpha })], \label{sourcb}
\end{eqnarray}
when the coefficients are subjected to the algebraic conditions (for vanishing
N--coefficients) containing respectively the functions (\ref{source3})
which determine the sources in the gravitational field equations,
\begin{eqnarray}
\ ^{h}\Lambda (x^{i}) &=&\widehat{\mathbf{\Upsilon }}_{4}+\widehat{\mathbf{
\Upsilon }}_{6}+\widehat{\mathbf{\Upsilon }}_{8},\ ^{v}\Lambda (x^{i},v)=
\widehat{\mathbf{\Upsilon }}_{2}+\widehat{\mathbf{\Upsilon }}_{6}+\widehat{
\mathbf{\Upsilon }}_{8}, \label{sourcb1} \\
\ ^{1}\Lambda \ (u^{\ \alpha },y^{5}) &=&\widehat{\mathbf{\Upsilon }}_{2}+
\widehat{\mathbf{\Upsilon }}_{4}+\widehat{\mathbf{\Upsilon }}_{8},\ \ \
^{2}\Lambda (u^{\ ^{1}\alpha },y^{7})\ =\widehat{\mathbf{\Upsilon }}_{2}+
\widehat{\mathbf{\Upsilon }}_{4}+\widehat{\mathbf{\Upsilon }}_{6}.\ \notag
\end{eqnarray}
Introducing the coefficients of the metric (\ref{ansgensol}) into the
formulas for the d--connection (\ref{candcon}), after tedious calculations
(see details in \cite{vex1,vex2}) we obtain
\begin{eqnarray}
\widehat{R}_{1}^{1} &=&\widehat{R}_{2}^{2} \label{eqe1} \\
&=&-\frac{1}{2g_{1}g_{2}}\left[ g_{2}^{\bullet \bullet }-\frac{
g_{1}^{\bullet }g_{2}^{\bullet }}{2g_{1}}-\frac{\left( g_{2}^{\bullet
}\right) ^{2}}{2g_{2}}+g_{1}^{\prime \prime }-\frac{g_{1}^{\prime
}g_{2}^{\prime }}{2g_{2}}-\frac{\left( g_{1}^{\prime }\right) ^{2}}{2g_{1}}
\right] =-\ ^{h}\Lambda (x^{k}), \notag \\
\widehat{R}_{3}^{3} &=&\widehat{R}_{4}^{4}=-\frac{1}{2h_{3}h_{4}}\left[
h_{4}^{\ast \ast }-\frac{\left( h_{4}^{\ast }\right) ^{2}}{2h_{4}}-\frac{
h_{3}^{\ast }h_{4}^{\ast }}{2h_{3}}\right] =-\ ^{v}\Lambda (x^{k},y^{3}),
\label{eqe2}
\end{eqnarray}
\begin{eqnarray}
\widehat{R}_{3k}=\frac{w_{k}}{2h_{4}}[h_{4}^{\ast \ast }-\frac{(h_{4}^{\ast
})^{2}}{2h_{4}}-\frac{h_{3}^{\ast }h_{4}^{\ast }}{2h_{3}}]+\frac{h_{4}^{\ast
}}{4h_{4}}(\frac{\partial _{k}h_{3}}{h_{3}}+\frac{\partial _{k}h_{4}}{h_{4}}
)-\frac{\partial _{k}h_{4}^{\ast }}{2h_{4}}=0, && \label{eqe3a} \\
\widehat{R}_{4k}=\frac{h_{4}}{2h_{3}}n_{k}^{\ast \ast }+\left( \frac{h_{4}}{
h_{3}}h_{3}^{\ast }-\frac{3}{2}h_{4}^{\ast }\right) \frac{n_{k}^{\ast }}{
2h_{3}}=0, && \label{eqe3}
\end{eqnarray}
where the partial derivatives are denoted in the form $a^{\bullet
}=\partial a/\partial x^{1},$\ $a^{\prime }=\partial a/\partial x^{2},$\ $
a^{\ast }=\partial a/\partial y^{3},$ and (extra to the 4--d ``shell''
equations)
\begin{eqnarray*}
\widehat{R}_{5}^{5} &=&\widehat{R}_{6}^{6}=-\frac{1}{2h_{5}h_{6}}\left[
\partial _{\ ^{1}v\ ^{1}v}^{2}h_{6}-\frac{\left( \partial _{\
^{1}v}h_{6}\right) ^{2}}{2h_{6}}-\frac{(\partial _{\ ^{1}v}h_{5})(\partial
_{\ ^{1}v}h_{6})}{2h_{5}}\right] \\
&&=-\ ^{1}\Lambda \ (u^{\ \alpha },y^{5}), \\
\widehat{R}_{7}^{7} &=&\widehat{R}_{8}^{8}=-\frac{1}{2h_{7}h_{8}}\left[
\partial _{\ ^{2}v\ ^{2}v}^{2}h_{8}-\frac{\left( \partial _{\
^{2}v}h_{8}\right) ^{2}}{2h_{8}}-\frac{(\partial _{\ ^{2}v}h_{7})(\partial
_{\ ^{2}v}h_{8})}{2h_{7}}\right] \\
&&=-\ ^{2}\Lambda (u^{\ ^{1}\alpha },y^{7}),
\end{eqnarray*}
\begin{eqnarray}
\widehat{R}_{5\ ^{0}\alpha } &=&\frac{\ ^{1}w_{\ ^{0}\alpha }}{2h_{6}}\left[
\partial _{\ ^{1}v\ ^{1}v}^{2}h_{6}-\frac{\left( \partial _{\
^{1}v}h_{6}\right) ^{2}}{2h_{6}}-\frac{(\partial _{\ ^{1}v}h_{5})(\partial
_{\ ^{1}v}h_{6})}{2h_{5}}\right] \notag \\
&&+\frac{\partial _{\ ^{1}v}h_{6}}{4h_{6}}\left( \frac{\partial _{\
^{0}\alpha }h_{5}}{h_{5}}+\frac{\partial _{\ ^{0}\alpha }h_{6}}{h_{6}}
\right) -\frac{\partial _{\ ^{0}\alpha }\partial _{\ ^{1}v}h_{6}}{2h_{6}}=0,
\label{eqeho} \\
\widehat{R}_{6\ ^{0}\alpha } &=&\frac{h_{6}}{2h_{5}}\partial _{\ ^{1}v\
^{1}v}^{2}\ ^{1}n_{\ ^{0}\alpha }+\left( \frac{h_{6}}{h_{5}}\partial _{\
^{1}v}h_{5}-\frac{3}{2}\partial _{\ ^{1}v}h_{6}\right) \frac{\partial _{\
^{1}v}\ ^{1}n_{\ ^{0}\alpha }}{2h_{5}}=0, \notag
\end{eqnarray}
\begin{eqnarray*}
\widehat{R}_{7\ ^{1}\alpha } &=&\frac{\ ^{2}w_{\ ^{1}\alpha }}{2h_{8}}\left[
\partial _{\ ^{2}v\ ^{2}v}^{2}h_{8}-\frac{\left( \partial _{\
^{2}v}h_{8}\right) ^{2}}{2h_{8}}-\frac{(\partial _{\ ^{2}v}h_{7})(\partial
_{\ ^{2}v}h_{8})}{2h_{7}}\right] \\
&&+\frac{(\partial _{\ ^{2}v}h_{8})}{4h_{8}}\left( \frac{\partial _{\
^{1}\alpha }h_{7}}{h_{7}}+\frac{\partial _{\ ^{1}\alpha }h_{8}}{h_{8}}
\right) -\frac{\partial _{\ ^{1}\alpha }\partial _{\ ^{2}v}h_{8}}{2h_{8}}=0,
\\
\widehat{R}_{8\ ^{1}\alpha } &=&\frac{h_{8}}{2h_{7}}\partial _{\ ^{2}v\
^{2}v}^{2}\ ^{2}n_{\ ^{1}\alpha }+\left( \frac{h_{8}}{h_{7}}\partial _{\
^{2}v}h_{7}-\frac{3}{2}\partial _{\ ^{2}v}h_{8}\right) \frac{\partial _{\
^{2}v}\ ^{2}n_{\ ^{1}\alpha }}{2h_{8}}=0,
\end{eqnarray*}
where partial derivatives, for instance, are $\partial _{\ ^{1}v}=\partial
/\partial \ ^{1}v=$ $\partial /\partial y^{5},\ $ $\partial _{\
^{2}v}=\partial /\partial \ ^{2}v=$ $\partial /\partial y^{7},$ and $N_{\
^{0}\alpha }^{5}=\ ^{1}w_{\ ^{0}\alpha }(u^{\ ^{0}\alpha },\ ^{1}v),\ N_{\
^{0}\alpha }^{6}=\ ^{1}n_{\ ^{0}\alpha }(u^{\ ^{0}\alpha },\ ^{1}v),$ $N_{\
^{1}\alpha }^{7}=\ ^{2}w_{\ ^{1}\alpha }(u^{\ ^{1}\alpha },\ ^{2}v),\ N_{\
^{1}\alpha }^{8}=\ ^{2}n_{\ ^{1}\alpha }(u^{\ ^{1}\alpha },\ ^{2}v).$
The above system is a generic nonlinear system of partial differential
equations. Surprisingly, the existing separation of equations (which should
not be confused with separation of variables, a different property)
allows us to construct very general classes of exact solutions (depending on
whether certain partial derivatives are zero, or not); see a detailed
analysis and discussions of possible applications in modern gravity and
cosmology in \cite{vex2,vcosm,vrev1}.
Let us explain, using the set of equations (\ref{eqe1})--(\ref{eqe3}), the
property of separation of equations for an ansatz of type (\ref{ansgensol}).
For an HL model with given matter fields on $\mathbf{V,}$ we construct the
energy momentum tensor $T_{ij}$. We can consider a nonholonomic lift on $
\mathbf{V}$ organized in such a way that the resulting source is $\Upsilon
_{j}^{i}=diag[Y_{1}^{1}=Y_{2}^{2}=\Upsilon
_{4}(x^{k},y^{3}),Y_{3}^{3}=Y_{4}^{4}=\Upsilon _{2}(x^{k})]$ (using
corresponding nonholonomic distributions and transforms, various types of
physically motivated energy--momentum tensors can be parametrized in such a
diagonal form with respect to N--adapted frames). Taking the value $
\Upsilon _{2}(x^{k}),$ we can define $g_{1}(x^{k})$ (or, inversely, $
g_{2}(x^{k})$) for a given $g_{2}(x^{k})$ (or, inversely, $g_{1}(x^{k})$) as
an explicit, or non--explicit, solution of (\ref{eqe1}) by integrating two
times on $h$--variables. Similarly, for a given $\Upsilon _{4}(x^{k},y^{3}),
$ we solve (\ref{eqe2}) by integrating one time on $y^{3}$ and defining $
h_{3}(x^{k},y^{3})$ for a given $h_{4}(x^{k},y^{3})$ (or, inversely, by
integrating two times on $y^{3}$ and defining $h_{4}(x^{k},y^{3})$ for a
given $h_{3}(x^{k},y^{3})$). After we have determined the values $g_{i}(x^{k})$
and $h_{\ ^{0}a}(x^{k},y^{3}),$ we can compute the coefficients of the
N--connection: the functions $w_{j}(x^{k},y^{3})$ are solutions of the
algebraic equations (\ref{eqe3a}), and, integrating two times on $y^{3},$ we find
$n_{j}(x^{k},y^{3}).$ The general solutions depend on integration functions
of the coordinates $x^{k}.$ For physical considerations, we have to
impose well defined boundary conditions on such integration functions.
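This step--by--step scheme can be illustrated with a minimal symbolic sketch (a hypothetical Python/SymPy example, not part of the theory): using the coefficient conventions $\alpha _{i}=h_{4}^{\ast }\partial _{i}\widehat{\phi },$ $\beta =h_{4}^{\ast }\widehat{\phi }^{\ast }$ that appear below for the brane system, the algebraic step yields $w_{i}=-\partial _{i}\widehat{\phi }/\widehat{\phi }^{\ast }$ for arbitrary trial profiles:

```python
import sympy as sp

x, v = sp.symbols('x v')
# hypothetical generating data: any h4(x, v) and phi(x, v) with phi^* != 0
h4 = sp.exp(x*v) + v
phi = sp.sin(x) + v**3

alpha = sp.diff(h4, v)*sp.diff(phi, x)   # alpha_i = h4^* d_i(phi)
beta = sp.diff(h4, v)*sp.diff(phi, v)    # beta    = h4^* phi^*
w = -sp.diff(phi, x)/sp.diff(phi, v)     # claimed solution of the algebraic step

# the equation beta*w_i + alpha_i = 0 holds identically
assert sp.simplify(beta*w + alpha) == 0
```

Any smooth choice of $h_{4}$ and $\widehat{\phi }$ with $\widehat{\phi }^{\ast }\neq 0$ passes this check, reflecting that the $w$--equations are purely algebraic.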
\section{Finsler Branes and Trapping to HL and GR}
\label{sec4} In this section, we analyze brane models when the 4--d
Ho\v{r}ava--Lifshitz theory is embedded into 8--d Finsler spaces with
non--factorizable velocity type coordinates (experimentally, the light
velocity is finite). We shall adapt to nonholonomic and/or scale anisotropic
configurations some ideas and methods from Refs. \cite{vbrane,midod,gm1,gm2,gs1,gs2,singlbr} when various trapping/localizing
mechanisms for various spins $\left(0,1/2,1,2\right) $ on the 4--d brane/
observable spacetime were analyzed.
We have to consider warped Finsler geometries and analyze trapping
mechanisms because there are no experimental data for Finsler like metrics
depending on coordinates and velocities. Such dependencies can always be
derived in various isotropic and anisotropic QG models with nonlinear
dispersions. We expect that brane trapping effects may allow us to
detect QG and LV effects experimentally even at scales much larger than the
Planck one. On Finsler branes, we can consider that gravitons are allowed to
propagate in the bulk of a Finsler spacetime with dependence of
geometric/physical objects on velocity/momentum coordinates.
\subsection{An ansatz for generating HF--brane solutions}
For constructing brane solutions in EFG, we use the ansatz for a class of
metrics which via frame transform can be parametrized in the form
\begin{eqnarray}
\mathbf{g} &=&\ \phi ^{2}(y^{5})[g_{1}(x^{k})\ e^{1}\otimes
e^{1}+g_{2}(x^{k})\ e^{2}\otimes e^{2} \notag \\
&&+h_{3}(x^{k},v)\ \mathbf{e}^{3}\otimes \mathbf{e}^{3}+h_{4}(x^{k},v)\
\mathbf{e}^{4}\otimes \mathbf{e}^{4}] \notag \\
&&+\left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\ [h_{5}(x^{k},v,y^{5})\
\mathbf{e}^{5}\otimes \ \mathbf{e}^{5}+h_{6}(x^{k},v,y^{5})\ \mathbf{e}
^{6}\otimes \ \mathbf{e}^{6}] \label{ans8d} \\
&&+\left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\ [h_{7}(x^{k},v,y^{5},y^{7})\
\mathbf{e}^{7}\otimes \ \mathbf{e}^{7}+h_{8}(x^{k},v,y^{5},y^{7})\ \mathbf{e}
^{8}\otimes \ \mathbf{e}^{8}], \notag
\end{eqnarray}
\begin{eqnarray*}
\mbox{ where } \mathbf{e}^{3} &=&dv+w_{i}dx^{i},\ \mathbf{e}^{4}=dy^{4}+n_{i}dx^{i}, \\
\mathbf{e}^{5} &=&dy^{5}+\ ^{1}w_{i}dx^{i}+\ ^{1}w_{3}dv+\ ^{1}w_{4}dy^{4},
\\
\mathbf{e}^{6} &=&dy^{6}+\ ^{1}n_{i}dx^{i}+\ ^{1}n_{3}dv+\ ^{1}n_{4}dy^{4},
\\
\mathbf{e}^{7} &=&dy^{7}+\ ^{2}w_{i}dx^{i}+\ ^{2}w_{3}dv+\ ^{2}w_{4}dy^{4}+\
^{2}w_{5}dy^{5}+\ ^{2}w_{6}dy^{6}, \\
\mathbf{e}^{8} &=&dy^{8}+\ ^{2}n_{i}dx^{i}+\ ^{2}n_{3}dv+\ ^{2}n_{4}dy^{4}+\
^{2}n_{5}dy^{5}+\ ^{2}n_{6}dy^{6},
\end{eqnarray*}
for nontrivial N--connection coefficients
\begin{eqnarray}
N_{i}^{3} &=&w_{i}(x^{k},v),N_{i}^{4}=n_{i}(x^{k},v); \label{ncon8d} \\
N_{i}^{5} &=&\ ^{1}w_{i}(x^{k},v,y^{5}),N_{3}^{5}=\
^{1}w_{3}(x^{k},v,y^{5}),N_{4}^{5}=\ ^{1}w_{4}(x^{k},v,y^{5}); \notag \\
N_{i}^{6} &=&\ ^{1}n_{i}(x^{k},v,y^{5});N_{3}^{6}=\
^{1}n_{3}(x^{k},v,y^{5}),N_{4}^{6}=\ ^{1}n_{4}(x^{k},v,y^{5}); \notag \\
N_{i}^{7} &=&\ ^{2}w_{i}(x^{k},v,y^{7}),N_{3}^{7}=\
^{2}w_{3}(x^{k},v,y^{7}),N_{4}^{7}=\ ^{2}w_{4}(x^{k},v,y^{7}), \notag \\
&&N_{5}^{7}=\ ^{2}w_{5}(x^{k},v,y^{7}),N_{6}^{7}=\ ^{2}w_{6}(x^{k},v,y^{7});
\notag \\
N_{i}^{8} &=&\ ^{2}n_{i}(x^{k},v,y^{7}),N_{3}^{8}=\
^{2}n_{3}(x^{k},v,y^{7}),N_{4}^{8}=\ ^{2}n_{4}(x^{k},v,y^{7}), \notag \\
&&N_{5}^{8}=\ ^{2}n_{5}(x^{k},v,y^{7}),N_{6}^{8}=\ ^{2}n_{6}(x^{k},v,y^{7}).
\notag
\end{eqnarray}
The local coordinates in the above ansatz (\ref{ans8d}) are labelled in the
form $x^{i}=(x^{1},x^{2}),$ for $i,j,...=1,2;$ $y^{3}=v.$
We can include solutions of HL gravity into (\ref{ans8d}) via polarization
$\eta $--functions when
\begin{eqnarray*}
g_{i}(x^{k}) &=&\eta _{i}(x^{k},v)\ ^{\circ }g_{i}(x^{k},v),\
h_{a}(x^{k},v)=\eta _{a}(x^{k},v)\ ^{\circ }h_{a}(x^{k},v), \\
N_{i}^{3}(x^{k},v) &=&\eta _{i}^{3}(x^{k},v)\ ^{\circ }w_{i}(x^{k},v),\
N_{i}^{4}(x^{k},v)=\eta _{i}^{3}(x^{k},v)\ ^{\circ }n_{i}(x^{k},v),
\end{eqnarray*}
where ``primary'' data $\left[ \ ^{\circ }g_{i},\ ^{\circ }h_{a},\ ^{\circ }w_{i},\
^{\circ }n_{i}\right] $ are defined for a solution of gravitational field
equations derived from HL action (\ref{action}), and gravitational
polarizations $\left[ \eta _{i},\ \eta _{a},\ \eta _{i}^{b}\right] $ should
be defined from the condition that the ``target'' data $\left[ g_{i},h_{a},w_{i},n_{i}\right] $
determine solutions of the system (\ref{eqe1})--(\ref{eqe3}); the nontrivial
$[ N_{\alpha }^{5},N_{\alpha }^{6},N_{\ ^{1}\alpha }^{7},N_{\ ^{1}\alpha
}^{8},$ $h_{5},h_{6},h_{7},h_{8}]$ should be constructed as solutions of the system (\ref{eqeho}). For instance, we can take that some values with $\ ^{\circ }$ are correspondingly given by solutions on extremal spherical and rotating black holes of Ho\v rava gravity \cite{ghodsi} and derive generic off--diagonal generalizations, for instance, with ellipsoidal configurations like we considered in Refs. \cite{vnc2,vsingl1,vsingl2}.
The purpose of this section is to construct and analyze the physical
implications of solutions of equations (\ref{ensteqcdc}) and, in particular,
(\ref{eqe1})--(\ref{eqeho}) defined by the ansatz (\ref{ans8d}) with,
respectively, trivial and non--trivial N--connection coefficients (\ref
{ncon8d}). The diagonal scenario from HF to GR is outlined in brief in the
Appendix, in a form very similar to that for the diagonal transition from EFG
to GR in Ref. \cite{vbrane}. In this work, we use the canonical
d--connection instead of the Cartan d--connection.
\subsection{Finsler brane solutions}
One of the main goals of this work is to elaborate trapping scenarios for
``true'' Finsler like configurations with strictly nontrivial
N--connections as solutions of the nonholonomic gravitational equations (\ref
{ensteqcdc}). The advantage of such generic off--diagonal solutions is that
they allow us to distinguish the QG phenomenology and effects with LV of
(pseudo) Finsler type from those described by (pseudo) Riemannian ones
(following the analysis in the Introduction, the last variant is less
natural, requiring very special types of nonlinear dispersions which must
result in vanishing N--connection structures).
\subsubsection{Separation of equations in HF models of brane gravity}
We consider an ansatz (\ref{ans8d}) multiplied by $\phi ^{2}(y^{5})$ and
with non--trivial N--connection coefficients (\ref{ncon8d}) and respective
$\eta $--polarizations. We define the conditions when the coefficients
generate exact solutions of (\ref{ensteqcdc}) for general sources of type
(\ref{sourcb1}) and (\ref{source3}). For such an ansatz, the system of equations
in HF gravity (\ref{eqe1})--(\ref{eqeho}) (we label $g_{1}=g_{2}=\epsilon
_{\pm }e^{\psi (x^{k})},$ for $\epsilon _{\pm }=\pm 1)$ transforms into
\begin{eqnarray}
&&\epsilon _{\pm }\psi ^{\bullet \bullet }(x^{i})+\epsilon _{\pm }\psi
^{\prime \prime }(x^{i})=2\ ^{h}\Lambda (x^{i}), \notag \\
&&h_{4}^{\ast }(x^{i},v)=2h_{3}(x^{i},v)\ h_{4}(x^{i},v)\ ^{v}\Lambda
(x^{i},v)/\widehat{\phi }^{\ast }(x^{i},v), \label{4ep2b} \\
&&\partial _{y^{5}}h_{6}(u^{\alpha },y^{5})\ =2h_{5}(u^{\alpha
},y^{5})h_{6}(u^{\alpha },y^{5})\ ^{1}\Lambda (u^{\alpha },y^{5})\ /\partial
_{y^{5}}\ ^{1}\widehat{\phi }(u^{\alpha },y^{5})\ , \notag \\
&&\partial _{y^{7}}h_{8}(u^{\ ^{1}\alpha },y^{7})=2h_{7}(u^{\ ^{1}\alpha
},y^{7})h_{8}(u^{\ ^{1}\alpha },y^{7})\ ^{2}\Lambda (u^{\ ^{1}\alpha
},y^{7})/\partial _{y^{7}}\ ^{2}\widehat{\phi }(u^{\ ^{1}\alpha },y^{7}),
\notag
\end{eqnarray}
and the solutions for N--connection coefficients
\begin{eqnarray}
\beta (x^{i},v)\ w_{i}(x^{i},v)+\alpha _{i}(x^{i},v) &=&0, \label{4ep3b} \\
\ ^{1}\beta (u^{\alpha },y^{5})\ \ ^{1}w_{\gamma }(u^{\alpha },y^{5})+\
^{1}\alpha _{\gamma }(u^{\alpha },y^{5}) &=&0, \notag \\
\ ^{2}\beta (u^{\ ^{1}\alpha },y^{7})\ \ ^{2}w_{\ ^{1}\gamma }(u^{\
^{1}\alpha },y^{7})+\ ^{2}\alpha _{\ ^{1}\gamma }(u^{\ ^{1}\alpha },y^{7})
&=&0, \notag
\end{eqnarray}
\begin{eqnarray*}
n_{i}^{\ast \ast }(x^{i},v)+\gamma (x^{i},v)\ n_{i}^{\ast }(x^{i},v) &=&0, \\
\partial _{y^{5}}\partial _{y^{5}}\ ^{1}n_{\mu }(u^{\alpha },y^{5})\ +\
^{1}\gamma (u^{\alpha },y^{5})\ \partial _{y^{5}}\ \ ^{1}n_{\mu }(u^{\alpha
},y^{5})\ &=&0, \\
\partial _{y^{7}}\partial _{y^{7}}\ ^{2}n_{\mu }(u^{\ ^{1}\alpha },y^{7})\
+\ ^{2}\gamma (u^{\ ^{1}\alpha },y^{7})\ \partial _{y^{7}}\ \ ^{2}n_{\mu
}(u^{\ ^{1}\alpha },y^{7})\ &=&0,
\end{eqnarray*}
where
\begin{eqnarray*}
\alpha _{i} &=&h_{4}^{\ast }\partial _{i}\widehat{\phi },\ \beta
=h_{4}^{\ast }\widehat{\phi }^{\ast },\ \widehat{\phi }=\ln \left| \frac{
h_{4}^{\ast }}{\sqrt{|h_{3}h_{4}|}}\right| ,\ \gamma =\left( \ln \frac{
|h_{4}|^{3/2}}{|h_{3}|}\right) ^{\ast }, \\
\ ^{1}\alpha _{\mu } &=&(\partial _{y^{5}}h_{6})\partial _{\mu }\ ^{1}
\widehat{\phi },\ \ ^{1}\beta =(\partial _{y^{5}}h_{6})(\partial _{y^{5}}\
^{1}\widehat{\phi }), \\
&&\ ^{1}\widehat{\phi }=\ln \left| \frac{\partial _{y^{5}}h_{6}\ }{\sqrt{
|h_{5}h_{6}|}}\right| ,\ \ ^{1}\gamma =\partial _{y^{5}}\left( \ln \frac{
|h_{6}|^{3/2}}{|h_{5}|}\right) ,
\end{eqnarray*}
\begin{eqnarray*}
\ ^{2}\alpha _{\ ^{1}\mu } &=&(\partial _{y^{7}}h_{8})\partial _{\ ^{1}\mu
}\ ^{2}\widehat{\phi },\ \ ^{2}\beta =(\partial _{y^{7}}h_{8})(\partial
_{y^{7}}\ ^{2}\widehat{\phi }), \\
&&\ ^{2}\widehat{\phi }=\ln \left| \frac{\partial _{y^{7}}h_{8}}{\sqrt{
|h_{7}h_{8}|}}\right| ,\ \ ^{2}\gamma =\partial _{y^{7}}\left( \ln \frac{
|h_{8}|^{3/2}}{|h_{7}|}\right) ,
\end{eqnarray*}
for $h_{3,4}^{\ast }\neq 0,\ \partial _{y^{5}}h_{6}\neq 0,\partial
_{y^{7}}h_{8}\neq 0,\ ^{h}\Lambda \neq 0,\ ^{v}\Lambda \neq 0,\ ^{b}\Lambda
\neq 0.$
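The two--fold integration producing the $n$--coefficients can be checked symbolically (a hypothetical SymPy sketch, with arbitrary trial profiles; here $\gamma $ is read as the logarithmic $v$--derivative $\left( \ln |h_{4}|^{3/2}/|h_{3}|\right) ^{\ast },$ which is what the two--step integral below in (\ref{solf2}) requires):

```python
import sympy as sp

v = sp.symbols('v')
n0, n1 = sp.symbols('n0 n1')
# hypothetical smooth positive profiles for h3(v), h4(v)
h3 = 1 + v**2
h4 = sp.exp(v)

# two v-integrations, as in (solf2): n = n0 + n1 * int h3/h4^{3/2} dv
n = n0 + n1*sp.integrate(h3/h4**sp.Rational(3, 2), v)
# gamma read as the logarithmic v-derivative of |h4|^{3/2}/|h3|
gamma = sp.diff(sp.log(h4**sp.Rational(3, 2)/h3), v)

# n solves the second-order equation n^{**} + gamma n^{*} = 0
assert sp.simplify(sp.diff(n, v, 2) + gamma*sp.diff(n, v)) == 0
```

The check passes for any smooth nonvanishing $h_{3},h_{4},$ since $n^{\ast }\propto h_{3}/|h_{4}|^{3/2}$ is exactly the integrating factor of the linear equation $n^{\ast \ast }+\gamma n^{\ast }=0.$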
\subsubsection{Exact solutions for HF brane models}
The system of partial differential equations (\ref{4ep2b}) and (\ref{4ep3b})
admits a separation of equations. It can be integrated in general form:
\begin{eqnarray}
g_{1} &=&g_{2}=\epsilon _{\pm }e^{\psi (x^{k})}, \label{solf1} \\
h_{4} &=&\ ^{0}h_{4}(x^{k})\pm 2\int \frac{\left( \exp [2\ \widehat{\phi }
(x^{i},v)]\right) ^{\ast }}{\ ^{v}\Lambda (x^{i},v)}dv, \notag \\
h_{3} &=&\pm \frac{1}{4}\left[ \sqrt{|h_{4}^{\ast }(x^{i},v)|}\right]
^{2}\exp \left[ -2\ \widehat{\phi }(x^{i},v)\right] , \notag \end{eqnarray}
\begin{eqnarray*}
h_{6} &=&\ ^{0}h_{6}(u^{\alpha })\pm 2\int \frac{\partial _{y^{5}}\left(
\exp [2\ \ ^{1}\widehat{\phi }(u^{\alpha },y^{5})]\right) }{\ ^{1}\Lambda
(u^{\alpha },y^{5})\ }dy^{5}, \notag \\
h_{5} &=&\pm \frac{1}{4}\left[ \sqrt{|\partial _{y^{5}}h_{6}(u^{\alpha
},y^{5})|}\right] ^{2}\exp \left[ -2\ \ ^{1}\widehat{\phi }(u^{\alpha
},y^{5})\right] , \notag \\
h_{8} &=&\ ^{0}h_{8}(u^{\ ^{1}\alpha })\pm \int \frac{\partial
_{y^{7}}\left( \exp [2\ \ ^{2}\widehat{\phi }(u^{\ ^{1}\alpha
},y^{7})]\right) }{\ ^{2}\Lambda (u^{\ ^{1}\alpha },y^{7})\ }dy^{7}, \notag
\\
h_{7} &=&\pm \frac{1}{4}\left[ \sqrt{|\partial _{y^{7}}h_{8}(u^{\ ^{1}\alpha
},y^{7})|}\right] ^{2}\exp \left[ -2\ \ ^{2}\widehat{\phi }(u^{\ ^{1}\alpha
},y^{7})\right] , \notag
\end{eqnarray*}
and, for N--connection coefficients
\begin{eqnarray}
w_{i} &=&-\partial _{i}\ \widehat{\phi }/\ \widehat{\phi }^{\ast },
\label{solf2} \\
n_{k} &=&\ n_{k}^{[0]}(x^{i})+\ n_{k}^{[1]}(x^{i})\int \left[ h_{3}/\left(
\sqrt{|h_{4}|}\right) ^{3}\right] dv, \notag \\
\ ^{1}w_{\alpha } &=&-\partial _{\alpha }(\ ^{1}\widehat{\phi })/\partial
_{y^{5}}(\ ^{1}\widehat{\phi }), \notag \\
\ ^{1}n_{\beta } &=&\ \ ^{1}n_{\beta }^{[0]}(u^{\alpha })+\ \ ^{1}n_{\beta
}^{[1]}(u^{\alpha })\int \left[ h_{5}/\left( \sqrt{|h_{6}|}\right) ^{3}
\right] dy^{5}, \notag \\
\ ^{2}w_{\ ^{1}\alpha } &=&-\partial _{\ ^{1}\alpha }(\ \ ^{2}\widehat{\phi }
)/\partial _{y^{7}}(\ ^{2}\widehat{\phi }), \notag \\
\ ^{2}n_{\ ^{1}\beta } &=&\ \ ^{2}n_{\ ^{1}\beta }^{[0]}(u^{\ ^{1}\alpha
})+\ \ ^{2}n_{\ ^{1}\beta }^{[1]}(u^{\ ^{1}\alpha })\int \left[ h_{7}/\left(
\sqrt{|h_{8}|}\right) ^{3}\right] dy^{7}. \notag
\end{eqnarray}
The above presented classes of solutions with nonzero $h_{3}^{\ast },$ $
h_{4}^{\ast },\partial _{y^{5}}h_{5},\partial _{y^{5}}h_{6},$ $\partial
_{y^{7}}h_{7},\partial _{y^{7}}h_{8}$ are determined by generating functions
$\widehat{\phi }(x^{i},v),\widehat{\phi }^{\ast }\neq 0;$ \newline
$\ ^{1}\ \widehat{\phi }(x^{i},y^{5}),\partial _{y^{5}}\ ^{1}\ \widehat{\phi
}\neq 0,$ $\ ^{2}\ \widehat{\phi }(x^{i},y^{5},y^{7}),$ $\partial _{y^{7}}\
^{2}\ \widehat{\phi }\neq 0,$ and integration functions $\
n_{k}^{[0]}(x^{i}),$ $\ n_{k}^{[1]}(x^{i}),\ \ \ ^{1}n_{\beta
}^{[0]}(u^{\alpha }),\ ^{1}n_{\beta }^{[1]}(u^{\alpha }),\ \ ^{2}n_{\
^{1}\beta }^{[0]}(u^{\ ^{1}\alpha }),\ \ ^{2}n_{\ ^{1}\beta }^{[1]}(u^{\
^{1}\alpha }).$ In order to construct explicit solutions, we have to choose
and/or fix such functions following additional assumptions on the symmetry of
solutions, boundary conditions, etc.
The coefficients (\ref{solf1}) and (\ref{solf2}) can be additionally
constrained if we want to construct solutions for the Levi--Civita
connection on $T\mathbf{V.}$ By straightforward computations (see details in
\cite{vex1,vex2,vex3}), we can verify that all torsion coefficients (\ref
{dtors}) vanish if
\begin{eqnarray*}
w_{i}^{\ast } &=&\mathbf{e}_{i}\ln |h_{4}|,\ \mathbf{e}_{k}w_{i}=\mathbf{e}
_{i}w_{k},\ n_{i}^{\ast }=0, \partial _{i}n_{k}=\partial _{k}n_{i}, \\
\partial _{y^{5}}(\ ^{1}w_{\alpha }) &=&\ ^{1}\mathbf{e}_{\alpha }\ln
|h_{6}|,\ \ ^{1}\mathbf{e}_{\alpha }\ ^{1}w_{\beta }=\ ^{1}\mathbf{e}_{\beta
}\ ^{1}w_{\alpha }, \\
&& \partial _{y^{5}}(\ ^{1}n_{\alpha })=0, \partial _{\alpha }\ ^{1}n_{\beta
}=\partial _{\beta }\ ^{1}n_{\alpha },
\end{eqnarray*}
\begin{eqnarray*}
\partial _{y^{7}}(\ ^{2}w_{\ ^{1}\alpha }) &=&\ ^{2}\mathbf{e}_{\ ^{1}\alpha
}\ln |h_{8}|,\ \ ^{2}\mathbf{e}_{\ ^{1}\alpha }\ ^{2}w_{\ ^{1}\beta }=\ ^{2}
\mathbf{e}_{\ ^{1}\beta }\ ^{2}w_{\ ^{1}\alpha }, \\
&& \partial _{y^{7}}(\ ^{2}n_{\ ^{1}\alpha })=0,\ \partial _{\ ^{1}\alpha }\
^{2}n_{\ ^{1}\beta }=\partial _{\ ^{1}\beta }\ ^{2}n_{\ ^{1}\alpha }.
\end{eqnarray*}
Such conditions can be satisfied by imposing certain constraints on the
considered classes of generating and integration functions. This class of
generic off--diagonal solutions is important if we want to construct
trapping configurations from the HF brane gravity to GR on $\mathbf{V,}$
when the conditions (\ref{grcond}) are imposed.
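A minimal (hypothetical) family satisfying the first group of these zero--torsion constraints can be checked directly: take $h_{4}=h_{4}(v),$ trivial $w_{i}=0$ and exact $n_{i}=\partial _{i}F$ for some potential $F(x^{k});$ a SymPy sketch:

```python
import sympy as sp

x1, x2, v = sp.symbols('x1 x2 v')
# a hypothetical family meeting the first group of constraints:
h4 = sp.exp(2*v)                      # h4 depends on v only
w1 = w2 = sp.Integer(0)               # trivial w_i
F = sp.sin(x1)*x2                     # n_i = d_i F gives d_1 n_2 = d_2 n_1
n1, n2 = sp.diff(F, x1), sp.diff(F, x2)

# w_i^* = e_i ln|h4|, with the N-adapted e_i = partial_i - w_i partial_v
for w, xi in ((w1, x1), (w2, x2)):
    e_i_lnh4 = sp.diff(sp.log(h4), xi) - w*sp.diff(sp.log(h4), v)
    assert sp.diff(w, v) == e_i_lnh4
# n_i^* = 0 and partial_i n_k = partial_k n_i
assert sp.diff(n1, v) == 0 and sp.diff(n2, v) == 0
assert sp.simplify(sp.diff(n1, x2) - sp.diff(n2, x1)) == 0
```

This is only one simple corner of the solution space; generic generating functions require the full set of nonholonomic constraints listed above.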
\subsubsection{Remarks on (non) diagonal HF brane solutions on $T\mathbf{V}$}
The above solutions in HF gravity are still very general. It is not clear
what physical meaning they may have and we must impose additional
restrictions on some coefficients of metrics and sources in order to
construct in explicit form certain Finsler brane configurations resulting in
HL, or GR, theories and model a trapping mechanism with generic
off--diagonal metrics.
There is a class of sources in HF gravity when for trivial N--connection
coefficients (i.e. for zero values (\ref{ncon8d})) the sources $\widetilde{
\mathbf{\Upsilon }}_{\ \ ^{2}\delta }^{\ ^{2}\beta }$ (\ref{sourcb})
transform into the data $\ _{\shortmid }\Upsilon _{\ \ ^{2}\delta }^{\ ^{2}\beta }$
(\ref{source3}) (from which, in the diagonal limit, we get the sources for the
gravitational equations for the Levi--Civita connection, labeled with a left
low bar), with nontrivial limits for $\ _{\shortmid }\Upsilon _{\ \delta
}^{\beta }=\Lambda -M^{-(m+2)}\overline{K}_{1}(y^{5})$ and $\ _{\shortmid
}\Upsilon _{\ 5}^{5}=\ _{\shortmid }\Upsilon _{\ 6}^{6}=\Lambda -M^{-(m+2)}
\overline{K}_{2}(y^{5}),$ certain conditions of type (\ref
{sourcb1}) being preserved. The generating functions are taken in the form when
\begin{eqnarray*}
h_{5}(x^{i},y^{5}) &=&\ ^{\ast }\mathit{l}_{P}\frac{\overline{h}(y^{5})}{
\phi ^{2}(y^{5})}\ ^{q}h_{5}(x^{i},y^{5}),\\
h_{6}(x^{i},y^{5})&=&\ ^{\ast }
\mathit{l}_{P}\frac{\overline{h}(y^{5})}{\phi ^{2}(y^{5})}\
^{q}h_{6}(x^{i},y^{5}), \\
h_{7}(x^{i},y^{5},y^{7}) &=&\ ^{\ast }\mathit{l}_{P}\frac{\overline{h}(y^{5})
}{\phi ^{2}(y^{5})}\ ^{q}h_{7}(x^{i},y^{5},y^{7}), \\
h_{8}(x^{i},y^{5},y^{7}) &=&\ ^{\ast }\mathit{l}_{P}\frac{\overline{h}(y^{5})
}{\phi ^{2}(y^{5})}\ ^{q}h_{8}(x^{i},y^{5},y^{7}),
\end{eqnarray*}
where the generating functions are parametrized in such a form that $\phi
^{2}(y^{5})$ and $h_{5}(y^{5})$ are those for diagonal metrics and $\
^{q}h_{5},\ ^{q}h_{6},\ ^{q}h_{7},$ $\ ^{q}h_{8}$ are computed following
formulas (\ref{solf1}) and (\ref{solf2}). This class of off--diagonal
metrics is parametrized in the form
\begin{eqnarray}
\mathbf{g} &=&g_{1}dx^{1}\otimes dx^{1}+g_{2}dx^{2}\otimes dx^{2}+h_{3}
\mathbf{e}^{3}{\otimes }\mathbf{e}^{3}\ +h_{4}\mathbf{e}^{4}{\otimes }
\mathbf{e}^{4}\ + \label{fbr} \\
&&\left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\frac{\overline{h}}{\phi ^{2}}
[\ \ ^{q}h_{5}\mathbf{e}^{5}\otimes \ \mathbf{e}^{5}+\ ^{q}h_{6}\mathbf{e}
^{6}\otimes \ \mathbf{e}^{6}+\ ^{q}h_{7}\mathbf{e}^{7}\otimes \ \mathbf{e}
^{7}+\ ^{q}h_{8}\mathbf{e}^{8}\otimes \ \mathbf{e}^{8}], \notag
\end{eqnarray}
where
\begin{eqnarray}
\ \mathbf{e}^{3} &=&dy^{3}+w_{i}dx^{i},\mathbf{e}^{4}=dy^{4}+n_{i}dx^{i},
\label{ncfbr} \\
\ \mathbf{e}^{5} &=&dy^{5}+\ ^{1}w_{i}dx^{i},\mathbf{e}^{6}=dy^{6}+\
^{1}n_{i}dx^{i}, \notag \\
\mathbf{e}^{7} &=&dy^{7}+\ ^{2}w_{i}dx^{i},\mathbf{e}^{8}=dy^{8}+\
^{2}n_{i}dx^{i}. \notag
\end{eqnarray}
Such off--diagonal parameterizations of metrics were considered in \cite
{vbrane}, but the coefficients of the metric and N--connection were computed
there for a different d--connection (the Cartan d--connection).
Any solution of type (\ref{fbr}) describes an off--diagonal canonical
nonholonomic trapping from 8--d (respectively, for corresponding classes of
generating and integration functions, from 5--, 6--, 7--d) to 4--d modifications
of HL and/or GR, with some corrections depending on bulk Finsler
``fluctuations'' and LV effects. There is a class of sources when for
vanishing N--connection coefficients (\ref{ncfbr}) we get diagonal metrics
of type (\ref{ans8d}), considered in the Appendix, but multiplied by a conformal
factor $\phi ^{2}(y^{5})$, when the $h$--coefficients are solutions of
equations of type (\ref{4ep2b}).
With respect to the local coordinate cobase $du^{\ ^{2}\alpha
}=(dx^{i},dy^{a},dy^{\ ^{1}a},dy^{\ ^{2}a}),$ a solution (\ref{fbr}) is
parametrized by an off--diagonal matrix
\begin{equation*}
g_{\ ^{2}\alpha \ ^{2}\beta }=\left[
\begin{array}{cccccccc}
B_{11} & B_{12} & w_{1}h_{3} & n_{1}h_{4} & \ ^{1}w_{1}h_{5} & \
^{1}n_{1}h_{6} & \ ^{2}w_{1}h_{7} & \ ^{2}n_{1}h_{8} \\
B_{21} & B_{22} & w_{2}h_{3} & n_{2}h_{4} & \ ^{1}w_{2}h_{5} & \
^{1}n_{2}h_{6} & \ ^{2}w_{2}h_{7} & \ ^{2}n_{2}h_{8} \\
w_{1}h_{3} & w_{2}h_{3} & h_{3} & 0 & 0 & 0 & 0 & 0 \\
n_{1}h_{4} & n_{2}h_{4} & 0 & h_{4} & 0 & 0 & 0 & 0 \\
\ ^{1}w_{1}h_{5} & ^{1}w_{2}h_{5} & 0 & 0 & h_{5} & 0 & 0 & 0 \\
\ ^{1}n_{1}h_{6} & \ ^{1}n_{2}h_{6} & 0 & 0 & 0 & h_{6} & 0 & 0 \\
\ ^{2}w_{1}h_{7} & \ ^{2}w_{2}h_{7} & 0 & 0 & 0 & 0 & h_{7} & 0 \\
\ ^{2}n_{1}h_{8} & \ ^{2}n_{2}h_{8} & 0 & 0 & 0 & 0 & 0 & h_{8}
\end{array}
\right]
\end{equation*}
where possible observable Finsler brane and LV contributions are
distinguished by terms proportional to $\left( \ ^{\ast }\mathit{l}
_{P}\right) ^{2}$ in
\begin{eqnarray*}
B_{11} &=&g_{1}+w_{1}^{2}h_{3}+n_{1}^{2}h_{4}+\left( \ ^{\ast }\mathit{l}
_{P}\right) ^{2}\frac{\overline{h}}{\phi ^{2}}\times \\
&&\left[ (\ ^{1}w_{1})^{2}\ ^{q}h_{5}+(\ ^{1}n_{1})^{2}\ ^{q}h_{6}+(\
^{2}w_{1})^{2}\ ^{q}h_{7}+(\ ^{2}n_{1})^{2}\ ^{q}h_{8}\right] , \\
B_{12} &=&B_{21}=w_{1}w_{2}h_{3}+n_{1}n_{2}h_{4}+\left( \ ^{\ast }\mathit{l}
_{P}\right) ^{2}\frac{\overline{h}}{\phi ^{2}}\times \\
&&\left[ \ ^{1}w_{1}\ ^{1}w_{2}\ ^{q}h_{5}+\ ^{1}n_{1}\ ^{1}n_{2}\
^{q}h_{6}+\ ^{2}w_{1}\ ^{2}w_{2}\ ^{q}h_{7}+\ ^{2}n_{1}\ ^{2}n_{2}\ ^{q}h_{8}
\right] , \\
B_{22} &=&g_{2}+w_{2}^{2}h_{3}+n_{2}^{2}h_{4}+\left( \ ^{\ast }\mathit{l}
_{P}\right) ^{2}\frac{\overline{h}}{\phi ^{2}}\times \\
&&\left[ (\ ^{1}w_{2})^{2}\ ^{q}h_{5}+(\ ^{1}n_{2})^{2}\ ^{q}h_{6}+(\
^{2}w_{2})^{2}\ ^{q}h_{7}+(\ ^{2}n_{2})^{2}\ ^{q}h_{8}\right] .
\end{eqnarray*}
As a matter of principle, it is possible to distinguish experimentally
some generic off--diagonal metrics in Finsler geometry from certain diagonal
configurations of type (\ref{ansdiag8d}) with the Levi--Civita connection on $
\mathbf{V}.$
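The structure of this matrix is simply the quadratic form of the N--adapted coframe, which can be verified symbolically (a hypothetical SymPy sketch in which the factors $\left( \ ^{\ast }\mathit{l}_{P}\right) ^{2}\overline{h}/\phi ^{2}$ are absorbed into $h_{5},\dots ,h_{8}$ and the capital symbols stand for $\ ^{1}w_{i},\ ^{1}n_{i},\ ^{2}w_{i},\ ^{2}n_{i}$):

```python
import sympy as sp

g1, g2, h3, h4, h5, h6, h7, h8 = sp.symbols('g1 g2 h3 h4 h5 h6 h7 h8')
w1, w2, n1, n2 = sp.symbols('w1 w2 n1 n2')
W1, W2, N1, N2 = sp.symbols('W1 W2 N1 N2')   # ^1w_i, ^1n_i
V1, V2, M1, M2 = sp.symbols('V1 V2 M1 M2')   # ^2w_i, ^2n_i

# N-adapted coframe e^alpha = E du, rows over (dx1, dx2, dv, dy4, ..., dy8)
E = sp.Matrix([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [w1, w2, 1, 0, 0, 0, 0, 0],
    [n1, n2, 0, 1, 0, 0, 0, 0],
    [W1, W2, 0, 0, 1, 0, 0, 0],
    [N1, N2, 0, 0, 0, 1, 0, 0],
    [V1, V2, 0, 0, 0, 0, 1, 0],
    [M1, M2, 0, 0, 0, 0, 0, 1],
])
H = sp.diag(g1, g2, h3, h4, h5, h6, h7, h8)
G = sp.expand(E.T*H*E)   # coordinate-basis components g_{alpha beta}

B11 = g1 + w1**2*h3 + n1**2*h4 + W1**2*h5 + N1**2*h6 + V1**2*h7 + M1**2*h8
B12 = w1*w2*h3 + n1*n2*h4 + W1*W2*h5 + N1*N2*h6 + V1*V2*h7 + M1*M2*h8
assert sp.simplify(G[0, 0] - B11) == 0
assert sp.simplify(G[0, 1] - B12) == 0
assert G[0, 2] == w1*h3 and G[1, 3] == n2*h4 and G[2, 3] == 0
```

In particular, the would--be observable off--diagonal entries appear linearly in the N--coefficients, while $B_{11},B_{12},B_{22}$ collect their quadratic combinations, exactly as displayed above.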
Let us compare the Finsler brane off--diagonal solutions
constructed above with those studied in Ref. \cite{vbrane}. The formulas for
the coefficients (\ref{solf1}) and (\ref{solf2}) computed (in this work) for the
canonical d--connection describe an off--diagonal brane extension to a
Finsler spacetime from the HL gravity with scaling anisotropy (we proved
that there is a trapping mechanism encoded into such solutions relating the HF
and HL gravity models). For Finsler branes induced from QG models with LV,
based on ``nonrenormalizable'' GR (which we studied in the paper just
mentioned), the trapping scenario was modelled by the Cartan d--connection on a
Finsler brane and the resulting configuration was one in the ``locally
isotropic'' GR theory.
\section{Discussion and Conclusions}
\label{sec5} Summarizing the results of this paper (see also a series of partner works \cite{vcosm,vbrane,vnc3,vcrit}), we conclude that there are at least nine substantial arguments for considering that Finsler geometry and related geometric methods are of crucial importance in modern classical and quantum gravity, particle physics, cosmology and their modifications:
\begin{enumerate}
\item Most models of quantum gravity (QG) and related phenomenology
involve Lorentz violation (LV), characterized by corresponding
modified dispersion relations (MDR). In turn, such an MDR naturally
determines a fundamental/generating Finsler function on the (co) tangent space.
This QG--LV--MDR--Finsler geometry scheme works for various models of QG
with limits to, or warped/trapped, configurations derived in (super) string/
brane/ noncommutative/ analogous gravity/ gauge gravity etc. theories. The
Ho\v{r}ava--Lifshitz (HL) theory with scaling anisotropy and MDR can be
included into such a generalized Finsler gravity scheme.
\item Locally anisotropic structures and Finsler geometries are considered in analogous gravity, geometric mechanics and various models of condensed
matter physics; certain important ideas and methods from physics of phase transitions are exploited in modern QG and phenomenology.
\item The ideas on restricted special relativity, scenarios with LV, modified/ generalized Lorentz symmetries have straightforward relations to some special models of Finsler geometry, anisotropic symmetries and corresponding local/global transformation laws.
\item There are certain ideas and explicit constructions suggesting that various problems related to dark energy and dark matter physics, accelerating Universes, anisotropies etc. can be cured by modifying the pseudo--Riemannian/ Lorentzian spacetime paradigm to (pseudo) Finsler spacetimes and generalizations.
\item Finsler like geometries are ``canonically'' generated/ induced as exact solutions of nonholonomic Ricci flows of (pseudo) Riemannian metrics, and
for various evolution scenarios with noncommutative and/or nonsymmetric metrics, gravitational diffusion and stochastic processes, fractional derivatives and/or fractional dimensions, memory and self--organization etc.
\item Noncommutative generalizations of gravity theories can be modelled equivalently as complex Finsler like geometries.
\item Finsler configurations can be derived as exact solutions of gravitational field equations in GR, string, brane, gauge gravity theories.
\item Finsler geometry methods happen to be very efficient in elaborating a new geometric method (the so--called anholonomic deformation method) of
constructing exact solutions in gravity, even for the general relativity (GR) theory. Such an approach allows us to generate very general classes of
exact solutions of Einstein equations and generalizations (with generic off--diagonal metrics, linear and nonlinear connections and nonholonomic frame coefficients depending generally on all coordinates etc). Constraining nonholonomically certain general integral varieties of solutions with generalized connections, we obtain subvarieties for the Levi--Civita connections in GR. Such a method of constructing exact solutions can be
applied in HL gravity.
\item Re--writing the Einstein gravity and/or certain generalizations in canonical Finsler variables, and then using almost K\"{a}hler equivalents,
we can quantize various types of gravity theories using methods of deformation quantization, A--brane approach, nonholonomic canonical quantization etc. It seems that it is possible to renormalize gravity using
the so--called bi--connection formalism and/or HL approach.
\end{enumerate}
Following the above mentioned reasons, we consider that HL gravity should be extended in a form encoding also the physics of MDR, nonholonomic structures and anisotropic configurations. In explicit form, we
elaborated a model of Ho\v{r}ava--Finsler (HF) gravity following generalized relativity principles \cite{vcosm,vcrit,vrev1,vsgg}. We used metric
compatible distinguished connections from Finsler geometry, which allow us to formulate and study classical and quantum models following standard approaches with spinors, Dirac operators and vielbeins, metrics and connections as in GR, but adapted to nonholonomic structures and, in our
approach, to nonlinear connections (N--connections).
The HF gravity theory is canonically formulated on the tangent bundle. From a formal point of view, it is generally integrable and can be quantized/renormalized following standard methods. There are many open issues regarding HL and HF gravity models. Here, we emphasize the following. A sensible problem to be solved is why classical limits do not contain
anisotropies and dependencies on velocity/momentum type coordinates. In explicit form, we can apply certain ideas and methods from brane gravity, which has been studied intensively during the last twelve years, beginning with the Gogberashvili and Randall--Sundrum works. Nevertheless, for Finsler branes
and HF gravity, such constructions cannot be applied in a straightforward form. Possible warping, trapping, compactification etc. scenarios for Finsler
spaces should encode, in general, a nontrivial N--connection structure. Technically, constructing such HF brane exact solutions with generic off--diagonal metrics is a very difficult task. One of the main results of
this work is that we were able to solve and analyze such an off--diagonal locally anisotropic trapping scenario from the HF to the HL and/or GR theories. Such nonholonomic gravitational dynamics also encode, in general form, various
types of MDR, parametric dependencies, possible generalized symmetries, etc.
The length of this paper does not allow us to address the question of stability of Finsler brane solutions. In general, stable configurations can be constructed for diagonal solutions which survive for nonholonomically constrained off--diagonal ones (the proofs are similar to those for extra
dimensional brane solutions; we plan to study the problem in detail in further work). Hopefully, future work will concern various topics from QG
with LV and Finsler geometry methods and possible applications in modern cosmology and astrophysics.
\vskip5pt
\textbf{Acknowledgements: } I'm grateful for support, hospitality and/or important discussions on generalized/modified Finsler gravity to G. Calcagni, E. Elizalde, M. Francaviglia, C. L\"{a}mmerzahl, N. Mavromatos, S. Odintsov, D. Pavlov, V. Perlick, E. Radu, S. Sarkar, F. P. Schuller, L. Sindoni and P. Stavrinos. I thank E. Hatefi, A. Kobakhidze, F. Mercati and D. Orlando for some critical remarks and pointing additional references related to existing problems, present status and further developments of HL models. The research for this paper was partially supported by Romanian Government via Program IDEI, PN-II-ID-PCE-2011-3-0256.
One-dimensional (1D) quantum wires and the junctions of several 1D
quantum wires are expected to be important
for potential applications as components in future nano-electronic devices.
Such 1D quantum wires with interacting electrons are described by
the Tomonaga-Luttinger liquid (TLL) theory
[\onlinecite{Tomonaga, Mattis, Haldane, Luttinger, delft, Giamarchi}],
the low-energy excitations in
which are the collective density oscillations. These density oscillations
or plasmon modes, are markedly different from their counterparts - Landau's
quasi-particle excitations, in higher dimensions described very successfully
by the Fermi liquid (FL) theory [\onlinecite{Giovanni}]. This leads to
unique physics in 1D, such as the spin-charge separation in which the
spin and charge excitations propagate with different velocities
[\onlinecite{Auslaender1, Jompol}], or the phenomena of charge
fractionalization [\onlinecite{Safi, pham, Steinberg}]. Recently charge
fractionalization has also been observed using time resolved measurements
on coupled integer quantum Hall edge channels [\onlinecite{Kamata}].
In the present work, we investigate the time-dependent transport properties
of multi-wire junctions, and a three-wire $Y$-junction in particular. These
have already been realized
experimentally in crossed single-walled carbon nanotubes
[\onlinecite{fuhrer,terrones}]. Such $Y$-junctions with interacting
quantum wires are also
extremely `rich' from a basic physics viewpoint and continue to be explored
very actively in the literature
[\onlinecite{nayak, lal200, chen, chamon1, Meden_prb2005, das2, Giuliano,
Bellazzini, Hou, agarwal_tdos, abhiram, dasrao, Rahmani, Feldman_PRB2011,
wolfe1, Hou_prb2012}].
Earlier theoretical studies of $Y$-junctions have primarily focused on the
fixed points of the junction, their stability analysis, and the associated DC
conductivity. These studies either use the fermionic language and the weak
interaction renormalization group (RG) approach [\onlinecite{lal200}], or
the bosonic and conformal field theory language [\onlinecite{chamon1, das2}],
or other numerical
methods such as the functional RG [\onlinecite{Meden_prb2005}].
A comprehensive study of the fixed points of the $Y$-junction formed from
spin-less interacting electrons, and the DC conductance, was carried out in
Ref.~[\onlinecite{chamon1}]. This was later extended to include spin-ful
electrons giving a much richer phase diagram in the parameter space of charge
and spin interactions [\onlinecite{Hou}], and to account for different
interaction strengths in different wires [\onlinecite{Hou_prb2012}].
\begin{figure}[t]
\begin{center}
\includegraphics[width=.98 \linewidth]{fig0.pdf}
\end{center}
\caption{Schematic of a $Y$-junction composed of TLL wires. Panel (a) shows
a $Y$-junction of finite length TLL wires (red) connected to Fermi liquid
leads at the ends (blue) with different applied voltages. Panel (b) displays
a $Y$-junction of infinite TLL wires.}
\label{fig0}
\end{figure}
Time-dependent transport properties of 1D TLL wires have also been studied
earlier. Quantum noise for an infinite TLL wire with point-like tunneling
impurity, around the `connected' fixed point of a two wire junction was studied in
Ref.~[\onlinecite{chamon_prb1995}].
The AC conductivity of a clean finite length TLL wire was calculated in
Refs.~[\onlinecite{Safi},\onlinecite{Berg}]. This has been recently
generalized to include arbitrary wave packet shapes of the incident
current in Ref.~[\onlinecite{Perfetto2014}]. Comparatively, the
time-dependent transport properties of $Y$-junctions have drawn much less
attention in the literature, and it is the aim of this article to rectify this.
In this article we study the AC conductivity, the tunneling current and
quantum noise (including shot noise and Josephson noise) of a $Y$-junction
tuned to a dissipation-less fixed point with spin-less TLL wires. We consider
both time-reversal symmetry (TRS) preserving and TRS violating junctions and
use the single-parameter description of the dissipation-less fixed points of
the junction given in Refs.~[\onlinecite{das2},\onlinecite{agarwal_tdos}].
Our analysis may be useful for interpreting time-resolved experiments
[\onlinecite{Kamata}] in multi-wire junctions and for designing
nano-electronic quantum circuits [\onlinecite{Jezouin}].
This article is organized as follows. In Sec.~\ref{secII} we discuss the
details of the three-wire $Y$-junction and show that both the Coulomb
interactions in the wire and the `scattering' boundary conditions at the
junction, can be treated using bosonization with delayed evaluation of the
boundary conditions [\onlinecite{chamon1}].
In Sec.~\ref{secIII}, we calculate the AC
conductivity of the $Y$-junction formed from finite
length TLL wires which are connected to Fermi liquid (FL) leads --- see Fig.~\ref{fig0} (a).
We also reproduce the known results for a two-wire junction, and the
DC conductivity as a limiting case of our calculations.
In Sec.~\ref{secIV}, we calculate the tunneling current and quantum noise
at the junction with infinite TLL wires [see Fig.~\ref{fig0} (b)],
in the presence of point-like tunneling impurities at the $Y$-junction
tuned to a dissipation-less fixed point. Finally
we summarize our findings in Sec.~\ref{summary}.
\section{Bosonization of the junction -- delayed evaluation of the boundary
condition}
\label{secII}
In this section we review the technique of bosonization for the wire,
and subsequently the parametrization of the dissipation-less fixed
points at the junction.
\subsection{Bosonization of the wires}
To model a junction of multiple wires, let us assume that $N$ semi-infinite wires meet at a junction. The wires are modeled as spin-less
TLLs on a half-line, parametrized by coordinates $x_i$, $i =1,2,\ldots,N$, with $x_i > 0$.
We use a folded basis to describe the junction, {\it i.e.~}we choose the convention that $x_i = 0$ at the junction for all wires, and that $x_i$ increases as one moves outwards along wire $i$.
We denote the incoming and outgoing single electron wave functions on wire $i$ by $\phi_{iI}$ and $\phi_{iO}$ respectively, which in turn are proportional to plane waves $\exp{[-i k( x_i + v t)]}$ and $\exp{[i k( x_i - v t)]}$ respectively, for a given wavenumber $k > 0$ and velocity $v$.
For simplicity of analysis,
we consider all the semi-infinite spin-less TLL wires to have the same short ranged
electron-electron ({\it e-e}) interaction strength and Fermi velocity.
The spin-less electron field on each wire can be expressed as $\psi (x) =
\psi_{\rm I} (x) + \psi_{\rm O}(x) $ where the incoming/outgoing fermionic
fields $\psi_{\rm I /\rm O} $ can be bosonized [\onlinecite{delft}] as
\begin{eqnarray} \label{BI}
\psi_{\rm I} (x) &=&\frac{1}{\sqrt{2 \pi \alpha}} F_{\rm I}~
e^{2i\pi N_{\rm I} (x + vt)/L}~e^{ - i k_{\rm F} x + i
\phi_{\rm I}(x)} ~, \nonumber \\
\psi_{\rm O} (x) &=& \frac{1}{\sqrt{2 \pi \alpha}} F_{\rm O}~
e^{2i\pi N_{\rm O} (x - vt)/L}~e^{ i k_{\rm F} x + i
\phi_{\rm O}(x)} ~.
\end{eqnarray}
Here $F_{\rm I}$ and $F_{\rm O}$ are Klein factors
for the incoming and outgoing electrons respectively, $k_{\rm F}$ is the
Fermi momentum, and $\alpha$ is the inverse ultraviolet (short distance) cut-off.
$N_{\rm I}$ and $N_{\rm O}$ count the number of incoming and outgoing chiral
particles with respect to the filled Fermi sea. The fields $\phi_{\rm I}(x)$
and $\phi_{\rm O}(x)$ are the incoming (left moving) and the outgoing
(right moving)
chiral bosonic fields in each wire and can be expressed in terms of the
bosonic creation and destruction operators as,
\begin{equation} \phi_{\rm O/I} \equiv \sum_{q>0}\frac{1}{\sqrt{n_q}} (b_{q{\rm O/I}}
e^{\pm iqx} + b^\dagger_{q{\rm O/I}}e^{\mp iqx}) e^{-\alpha|q|/2}~. \end{equation}
The Lagrangian of the system is given by ${\cal L} = {\cal L}_0+
{\cal L}_{\rm int}$ where ${\cal L}_0$ describes free electrons in the wire,
and is given by
\begin{eqnarray} {\cal L}_0 &=& \frac{1}{4\pi} \sum_{i=1}^N \int_0^L dx ~ \Big[ \partial_x
\phi_{i{\rm O}}~ (- \partial_t - v \partial_x) ~\phi_{i{\rm O}} \nonumber \\
& +& \partial_x \phi_{iI} ~(\partial_t - v \partial_x) ~\phi_{iI} \Big]~,
\label{lag11} \end{eqnarray}
where $v$ denotes the Fermi velocity, which we take to be the same in all the
wires, and $i$ is the wire index. The corresponding incoming and outgoing
density and current fields in each wire are given by
\begin{eqnarray} \label{eq:density}
\rho_{iO} = \frac{\partial_x \phi_{iO}}{2\pi}+ \frac{N_{iO}}{L}~, & &
J_{iO} = -\frac{\partial_t \phi_{iO}}{2\pi} - \frac{N_{iO}}{L} ~,\nonumber \\
\rho_{iI} = - \frac{\partial_x \phi_{iI}}{2\pi}+ \frac{N_{iI}}{L}~, & & J_{iI}
= ~\frac{\partial_t \phi_{iI}}{2\pi} - \frac{N_{iI}}{L}~.
\end{eqnarray}
We emphasize here that the second terms in the expressions for the density and current
arise from the excess number of incoming and outgoing fermions with respect to the ground state (the filled Fermi sea), and can be controlled by
applying an external DC voltage to each TLL wire. These terms will be very useful in Sec.~\ref{secIV}, where we apply a different DC bias voltage to
each of the three wires. However, for calculating the AC conductivity in Sec.~\ref{secIII}, only the first term of the
current expression (the temporal derivative of the fluctuating fields) is needed, since the average DC voltage is zero in all the wires, and we will use the notation,
\begin{equation} j_{iO} = -\frac{\partial_t \phi_{iO}}{2\pi}~,~~~ {\rm and }~~~ j_{iI} = ~\frac{\partial_t \phi_{iI}}{2\pi}~,
\label{eq:j}
\end{equation}
in Sec.~\ref{secIII}.
For a short-range {\it e-e} interaction between the two chiral modes in the wire,
the interaction term in the Lagrangian for each wire $i$ is of the form
\begin{equation} {\cal L}_{\rm int}^i = \frac{\lambda}{4\pi} ~ \int_0^L dx ~ \partial_x
\phi_{iI} ~ \partial_x \phi_{iO}~, \label{lag20} \end{equation}
where $\lambda$ is the {\it e-e} interaction strength (positive for repulsive
interactions) with the dimensions of velocity. Note that for each of the wires
described by Eqs.~\eqref{lag11} and \eqref{lag20}, the effective TLL velocity
and the effective TLL interaction strength are given by
\begin{equation} \label{eq:v}
\tilde{v} = \sqrt{v^2 -\lambda^2/4}~, ~~~~{\rm and}~~~~g = \sqrt{
\frac{v - \lambda /2}{v + \lambda/2}}~. \end{equation}
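As a quick numerical sanity check of Eq.~\eqref{eq:v} (ours, not part of the original analysis), the following sketch evaluates $\tilde v$ and $g$ for illustrative parameter values, and verifies the identity $(1-g)/(1+g) = \lambda/[2(v+\tilde v)]$, which reappears as the charge fractionalization factor $r_0$ in Sec.~\ref{secIII}:

```python
import numpy as np

def tll_parameters(v, lam):
    """Effective TLL velocity v_t and interaction parameter g."""
    v_t = np.sqrt(v**2 - lam**2 / 4.0)
    g = np.sqrt((v - lam / 2.0) / (v + lam / 2.0))
    return v_t, g

v, lam = 1.0, 0.5          # illustrative values, in units of the Fermi velocity
v_t, g = tll_parameters(v, lam)

assert 0.0 < g < 1.0 and v_t < v                       # repulsive: g < 1, v_t < v
assert np.allclose(tll_parameters(v, 0.0), (v, 1.0))   # free limit: g -> 1
# identity behind the fractionalization factor used in Sec. III:
# (1 - g)/(1 + g) = lam / [2 (v + v_t)]
assert np.isclose((1.0 - g) / (1.0 + g), lam / (2.0 * (v + v_t)))
```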
\subsection{Bosonization of the junction}
To describe the junction uniquely, we need to impose an appropriate boundary
condition on the fields at the junction, {\it i.e.~}at $x=0$. Following
standard procedure [\onlinecite{chamon1, das2, agarwal_tdos}], the incoming and
outgoing currents, and consequently the bosonic fields, are related at the
junction by a current splitting matrix ${\mathbb{M}}$, {\it i.e.~} $j_{Oi} =
\sum_j {\mathbb{M}}_{ij} ~ j_{Ij}$, which leads to $\phi_{Oi} = \sum_j
{\mathbb{M}}_{ij} ~\phi_{Ij}$. Here we have ignored an integration constant
which plays no (physical) role in the computation of the Green's functions of
the fields, and consequently in
the scaling dimensions of various operators. In order to
ensure that the matrix ${\mathbb{M}}$ represents a fixed point of the
theory, the incoming and outgoing fields must satisfy appropriate bosonic
commutation relations; this restricts the matrix ${\mathbb{M}}$ to be
orthogonal. Scale invariance or conformal invariance imposes the same
constraints of orthogonality [\onlinecite{dasrao}] on ${\mathbb{M}}$.
The constraint
of orthogonality also implies that there is no dissipation in the system
[\onlinecite{agarwal_prb2009}]. In addition, to ensure current conservation
at the junction, its rows (or columns) have to add up to unity.
Since $\phi_{O}$ and $\phi_{I}$ are interacting fields, we need to perform
a Bogoliubov transformation on them,
\begin{equation} \phi_{O/I}=\frac{1}{2{\sqrt g}}\left[(1 +g)\tilde{\phi}_{O/I} +
(1 - g){\tilde {\phi}_{I/O}} \right], \end{equation}
to obtain the corresponding `free' outgoing and incoming ($\tilde\phi_{O/I}$)
chiral fields, which satisfy the `free' field commutation relations:
$[\tilde{\phi}_{O/I}(x,t),\tilde{\phi}_{O/I}(x',t)]=\pm i\pi
{\rm sign} (x-x')$, where the sign function is defined as $\text{sign}(x) =
1,0,-1$ for $x>0, ~x=0$ and $x<0$ respectively.
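As a small side check (a sketch of ours, with arbitrary values of $g$), one can verify numerically that this transformation is a canonical hyperbolic rotation of unit determinant, which is the condition for the `free' fields to satisfy the chiral commutation relations:

```python
import numpy as np

# the 2x2 rotation mixing phi_{O/I} and the free fields has entries
# (1 +/- g)/(2 sqrt(g)); unit determinant <=> canonical transformation
def bogoliubov(g):
    return np.array([[1 + g, 1 - g],
                     [1 - g, 1 + g]]) / (2.0 * np.sqrt(g))

for g in (0.3, 0.6, 0.88, 1.0):
    assert np.isclose(np.linalg.det(bogoliubov(g)), 1.0)

# the non-interacting limit g = 1 reduces the transformation to the identity
assert np.allclose(bogoliubov(1.0), np.eye(2))
```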
However, unlike the usual Bogoliubov transformation in
the bulk, here we also need to consider the effect of the junction
matrix ${\mathbb{M}}$ relating the interacting incoming and outgoing fields
[\onlinecite{das2}], which leads to a `Bogoliubov transformation' of the
matrix:
${\mathbb{M}} \to \widetilde{\mathbb{M}}$. Qualitatively, ${\mathbb{M}}$ is
related to tunnelings between the different wires and tunneling in each wire,
at a dissipation-less junction. The Bogoliubov transformed matrix
$\widetilde{\mathbb{M}}$ which relates the `free' incoming and outgoing fields,
$\tilde{\phi}_{Oi} (x) = \sum_j ~ \widetilde{{\mathbb{M}}}_{ij}
~\tilde{\phi}_{Ij} (-x)$, is given by
\begin{equation} \widetilde{{\mathbb{M}}} = \left[(1+g){\mathbb{I}}-(1-g){\mathbb{M}}
\right]^{-1} \left[(1+g){\mathbb{M}}-(1-g)\mathbb{I}\right]~. \end{equation}
We emphasize that this description is valid for a dissipation-less junction of
any number of interacting one-dimensional wires.
For the case of a two-wire junction, there are only two classes of
orthogonal matrices: a rotation matrix whose determinant is $1$ and a
reflection matrix whose determinant is $-1$. The constraint that the columns
(or rows) add up to one implies that there is only one matrix in each class.
These are given by
\begin{equation} \left(\begin{array}{cc}
1 & 0 \\
0 & 1 \\ \end{array}\right)~, \quad{\rm and} \quad
\left(\begin{array}{cc}
0 & 1 \\
1 & 0 \\ \end{array}\right)~,
\end{equation}
which correspond to the cases of the `dis-connected' and the `connected'
fixed points of a two-wire junction, respectively.
In what follows, we focus on a three-wire $Y$-junction.
A detailed study of the three-wire spin-less TLL junction using bosonization
and boundary conformal field theory can be found
in Refs.~[\onlinecite{chamon1,dasrao}].
In particular, for a three-wire charge-conserving junction, all current splitting orthogonal matrices ${\mathbb M}$ whose rows add up to one can be parametrized by a single continuous parameter $\theta$, and are divided into
two classes according to their determinant:
$\det {\mathbb{M}}_1 = 1$, and $\det {\mathbb{M}}_2=-1$. These two classes
of matrices are explicitly given by
\begin{equation} \label{m1} {\mathbb{M}}_1 = \left(\begin{array}{ccc}
a & b & c \\
c & a & b \\
b & c & a \end{array}\right), \quad {\mathbb{M}}_2 =
\left(\begin{array}{ccc}
b & a & c \\
a & c & b \\
c & b & a \end{array}\right)~,
\end{equation}
where, $a=(1+2\cos\theta)/3$, $b=(1-\cos \theta+ \sqrt{3} \sin \theta)/3$, and
$c=(1-\cos\theta -\sqrt{3} \sin\theta)/ 3$.
This gives us an explicit single parameter characterization of the two
families of fixed points; any fixed point in the
theory can now be identified in terms of $\theta$, with the fixed
points at $\theta=0$ and $\theta=2 \pi$ being identical.
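The defining properties of these two families (orthogonality, current conservation, and $\det{\mathbb M} = \pm 1$) can be verified with a short numerical sketch of ours, which also checks the special points of the ${\mathbb M}_1$ class discussed below:

```python
import numpy as np

def abc(theta):
    """The parameters a, b, c of the two classes of junction matrices."""
    a = (1 + 2 * np.cos(theta)) / 3.0
    b = (1 - np.cos(theta) + np.sqrt(3) * np.sin(theta)) / 3.0
    c = (1 - np.cos(theta) - np.sqrt(3) * np.sin(theta)) / 3.0
    return a, b, c

def M1(theta):
    a, b, c = abc(theta)
    return np.array([[a, b, c], [c, a, b], [b, c, a]])

def M2(theta):
    a, b, c = abc(theta)
    return np.array([[b, a, c], [a, c, b], [c, b, a]])

for theta in np.linspace(0.0, 2 * np.pi, 29):
    for M, det in ((M1(theta), 1.0), (M2(theta), -1.0)):
        assert np.allclose(M @ M.T, np.eye(3))     # orthogonal: dissipation-less
        assert np.allclose(M.sum(axis=1), 1.0)     # current conservation
        assert np.isclose(np.linalg.det(M), det)
    assert np.allclose(M2(theta), M2(theta).T)     # M2 is symmetric (TRS)
    assert np.allclose(M2(theta) @ M2(theta), np.eye(3))   # (M2)^2 = identity

# special points of the M1 class: N, D_P, and the chiral point chi_-
assert np.allclose(abc(0.0), (1, 0, 0))
assert np.allclose(abc(np.pi), (-1 / 3, 2 / 3, 2 / 3))
assert np.allclose(abc(2 * np.pi / 3), (0, 1, 0))
```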
Note that the current splitting matrix ${\mathbb{M}}$ preserves TRS,
only if it is symmetric. Thus the junction current splitting matrices
belonging to the ${\mathbb{M}}_2 $ class represent an asymmetric class
(in wire indices) of fixed points for systems with TRS.
The ${\mathbb{M}}_1$ class represents a $Z_3$ symmetric (in the wire
index) class of fixed points and generally denotes systems with broken
TRS, which can arise, for instance, due to a magnetic field at the
junction (assuming a finite cross-sectional area). In
the ${\mathbb{M}}_1$ class of fixed points, only two points given by
$\theta = 0, \pi$, at which the asymmetry producing $\sin \theta$ term
vanishes, are TRS invariant.
For the ${\mathbb{M}}_1$ class, $\theta = \pi$, or $[a,b,c] = [-1/3, 2/3,
2/3]$, corresponds to the so called Dirichlet fixed point ($D_P$). The
disconnected fixed point ($N$), where there is no tunneling between any
pair of wires, is given by $\theta=0$ ({\it i.e.~}$[a,b,c] = [1,0,0]$).
The cases $\theta = 2\pi/3$ ({\it i.e.~} $[a,b,c] = [0,1,0]$) and $\theta = 4\pi/3$
correspond to the chiral $\chi_{-}$ and $\chi_{+}$ fixed points respectively, following
the notation of Ref.~[\onlinecite{chamon1}].
Note that the ${\mathbb{M}}_2$ class of fixed point matrices has the
interesting property that $({\mathbb{M}}_2)^2=\mathbb{I}$. As a consequence
$\widetilde{{\mathbb{M}}}_2={\mathbb{M}}_2$, which implies that both the
interacting and the free fields satisfy identical boundary
conditions at the junction.
This is not true for the ${\mathbb{M}}_1$ class of fixed point matrices, but
the matrix $\widetilde{{\mathbb{M}}}_1$ still has the same form as the matrix
${{\mathbb{M}}}_1$ with the corresponding parameters given by $\tilde a =
{(3g^2-1 + (3g^2+1)\cos{\theta})}/{\delta}$ and $\tilde b / \tilde c
= {2(1 - \cos{\theta} \pm \sqrt{3} g \sin{\theta} )}/\delta$, where
$\delta={3(1+g^2+ (g^2-1)\cos{\theta})}$. Note that the matrices
$\widetilde{{\mathbb{M}}}_1$ are non-linear functions of the TLL
parameter $g$, while the matrices $\widetilde{{\mathbb{M}}}_2$ are
independent of $g$. This will have non-trivial manifestations for physical
observables ({\it e.g.}~quantum noise, tunneling current etc. --- see
Sec.~\ref{secIV}), when we consider a junction slightly
away from the fixed points, as scaling dimensions of operators switched on
perturbatively around the ${\mathbb{M}}_1$ class will generally be non-linear
functions of $g$.
On the other hand, for the ${\mathbb{M}}_2$ class of fixed points, the
scaling dimensions of operators will always be linear functions of $g$.
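The following sketch (ours, with illustrative values of $g$ and $\theta$) checks the transformation numerically: $\widetilde{\mathbb M}_2 = {\mathbb M}_2$ for the whole TRS preserving class, while $\widetilde{\mathbb M}_1$ retains the cyclic form with the parameters $\tilde a$, $\tilde b$, $\tilde c$ quoted above:

```python
import numpy as np

def M1(theta):
    a = (1 + 2 * np.cos(theta)) / 3.0
    b = (1 - np.cos(theta) + np.sqrt(3) * np.sin(theta)) / 3.0
    c = (1 - np.cos(theta) - np.sqrt(3) * np.sin(theta)) / 3.0
    return np.array([[a, b, c], [c, a, b], [b, c, a]])

def M2(theta):
    a = (1 + 2 * np.cos(theta)) / 3.0
    b = (1 - np.cos(theta) + np.sqrt(3) * np.sin(theta)) / 3.0
    c = (1 - np.cos(theta) - np.sqrt(3) * np.sin(theta)) / 3.0
    return np.array([[b, a, c], [a, c, b], [c, b, a]])

def M_tilde(M, g):
    """Bogoliubov transformed junction matrix relating the free fields."""
    I = np.eye(3)
    return np.linalg.solve((1 + g) * I - (1 - g) * M,
                           (1 + g) * M - (1 - g) * I)

g, theta = 0.7, np.pi / 3

# (M2)^2 = 1 implies M2~ = M2: interacting and free fields satisfy the
# same boundary condition for the whole TRS preserving class
assert np.allclose(M_tilde(M2(theta), g), M2(theta))

# M1~ keeps the cyclic form with the a~, b~, c~ quoted in the text
d = 3 * (1 + g**2 + (g**2 - 1) * np.cos(theta))
at = (3 * g**2 - 1 + (3 * g**2 + 1) * np.cos(theta)) / d
bt = 2 * (1 - np.cos(theta) + np.sqrt(3) * g * np.sin(theta)) / d
ct = 2 * (1 - np.cos(theta) - np.sqrt(3) * g * np.sin(theta)) / d
assert np.allclose(M_tilde(M1(theta), g),
                   np.array([[at, bt, ct], [ct, at, bt], [bt, ct, at]]))

# the free limit g = 1 returns the bare junction matrix
assert np.allclose(M_tilde(M1(theta), 1.0), M1(theta))
```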
Having characterized the junction, we now proceed to study the AC
conductivity of a $Y$-junction formed from finite length TLL wires,
connected to FL leads --- see Fig.~\ref{fig0} (a).
\section{AC conductivity}
\label{secIII}
In this section, we consider an incident charge wave packet originating in
the FL lead connected to one of the TLL wires, say $i$, and its
consequent motion after undergoing charge fractionalization at the FL-TLL
boundaries and at the junction. This will also allow us to calculate the
low frequency AC current splitting matrix $\mathbb{S}$, which relates the
complex amplitudes of the incoming AC currents, to the complex amplitudes of
the outgoing current, in the linear response regime. Such a time resolved
measurement of an incident wave packet in a TLL wire of integer quantum Hall
edge channels, was recently used to identify a single charge fractionalization
event [\onlinecite{Kamata}]. In our language this corresponds to a
two-wire junction tuned to be at the `connected' fixed point
(effectively a single finite length TLL wire connected to FL leads).
In the DC limit, {\it i.e.}~$\omega \to 0$, all signatures of charge
fractionalization are lost and $\mathbb{S} \to \mathbb{M}$, which is the
non-interacting current splitting matrix for a junction with finite TLL wires
connected to the FL leads [Fig.~\ref{fig0} (a)].
For a junction with TLL wires extending to infinity, it simply reduces to
the interacting current splitting matrix at the junction for all
frequencies: $\mathbb{S} \to\widetilde{\mathbb{M}}$, since there is no
FL-TLL interface. However, for finite length wires at finite frequencies,
$\mathbb{S}$ depends on the fixed point,
the strength of the {\it e-e} interaction, and the length of the TLL wires $L$,
and it carries the signature of charge fractionalization events at the
FL-TLL boundary.
In our model of the junction, there is no mechanism of power
dissipation. Thus the average over one oscillation cycle of
the incoming energy must be equal to the average outgoing energy per cycle.
This imposes the constraint of unitarity on the
${\mathbb S}$ matrix, which also serves as a useful check for our
calculations. Also note that we are considering all three wires to
have the same Fermi velocity and {\it e-e} interaction strength, and these are
connected at a junction described by
boundary conditions which are cyclic in nature. Thus we expect to have only
a few independent coefficients in
${\mathbb S}$, which should also appear in a cyclic manner.
Before discussing the solution of the generalized plasmon scattering problem
[\onlinecite{Perfetto2014}] at the junction, we emphasize that this
calculation is valid only in the linear response regime and only for AC
frequencies which do not breach the linearization regime for each TLL wire,
{\it i.e.~}$ \omega < v/\alpha$.
Also note again that we use a folded basis for describing the junction
such that all the wires go from $x = 0$ to $x = \infty$ and the
junction lies at $x = 0$.
The time evolution of the `injected' wave packet is given by the coupled
equation of motion (EOM) for the expectation value of the incoming
($\phi_{iI}$) and outgoing ($\phi_{iO}$) fields in wire $i$, which are
governed by the Lagrangian given in Eqs.~(\ref{lag11}) and~(\ref{lag20}).
The EOM are
\begin{eqnarray} \partial_x\left[\partial_t \phi_{iI} - v \partial_x \phi_{iI} +
\frac{\lambda}{2} \partial_x \phi_{iO}\right] &=& 0~, \label{EOM1}
\\
\partial_x\left[\partial_t \phi_{iO} + v \partial_x \phi_{iO} -
\frac{\lambda}{2} \partial_x \phi_{iI}\right]&=&0~. \label{EOM2} \end{eqnarray}
Let us now consider an electronic wave packet incident on TLL wire $i$ from
the FL lead. The incoming bosonic field $\phi_{iI} (x,t)$ in the FL lead ($x>L$) can
be expressed, in terms of scattering states of energy $\omega = v q $, by the
following relation
\begin{equation} \phi_{iI} (x,t) = \int_{-\infty}^{\infty} \frac{dq}{2\pi} \phi_{i I} (q)
e^{- i( q (x-L) + \omega t)} ~. \label{eq:phiiI} \end{equation}
Here $\phi_{i I } (q)$ is specified by the Fourier transform $\rho_{i I} (q)$ of the
incident charge density in wire $i$, $\rho_{iI}(x,t=0) $, by the relation
$\phi_{iI}(q) = \frac{2 \pi}{i q}\rho_{i I } (q)$ --- see Eq.~\eqref{eq:density}. The
extra factor of $e^{i q L}$ in the above equation just shifts the position of
the origin of the axis in the FL leads, and it simplifies the calculations
below. The outgoing bosonic scattering state in the FL lead of wire $j$, due to an injected
state in wire $i$ only,
is given by
\begin{equation} \phi_{jO}^{(i)} (x,t) = \int_{-\infty}^{\infty} \frac{dq}{2\pi} \phi_{j O }^{(i)} (q)
e^{ i( q(x-L) - \omega t)} ~, \label{eq:phijO}
\end{equation}
where the outgoing amplitude in the momentum space is related to the
incoming amplitude via the elements of the AC current splitting matrix:
\begin{equation}
\phi_{jO}^{(i)}(q) = s_{ji}(q) \phi_{iI} (q)~,
\end{equation}
with $s_{ji}$ denoting the matrix elements of ${\mathbb S}$ and $q = \omega/v$.
We emphasize here that we are considering all the wires to have the same Fermi velocity.
In the case of bosonic states being incident in all the wires, the total outgoing bosonic field
gets contributions from all the incoming fields and is explicitly given by $\phi_{jO} (x,t) = \sum_i \phi_{jO}^{(i)} (x,t) $,
or equivalently,
\begin{equation}
\phi_{jO} (x,t) = \sum_{i} \int_{-\infty}^{\infty} \frac{dq}{2\pi} s_{ji}(q) \phi_{i I } (q)
e^{ i( q(x-L) - \omega t)} ~.\end{equation}
If the elements $s_{ij}$ of $\mathbb{S}$ are known, then the
total time-dependent density ($\rho = \rho_I + \rho_O$) and the total
outgoing current ($j = j_I + j_O$) in the FL of wire $j$, due to an
incoming wave packet in wire $i$, is given by
\begin{equation} \rho_j^{(i)} (x,t) = \int_{-\infty}^{\infty} \frac{dq}{2\pi} \rho_{iI} (q) e^{-i
\omega t} \left[e^{-i q (x-L)} \delta_{ij}+ s_{ji} e^{i q(x-L)}\right], \end{equation}
and
\begin{equation} j_j^{(i)} (x,t) = v \int_{-\infty}^{\infty} \frac{dq}{2\pi} \rho_{i I}(q) e^{-i
\omega t} \left[ -e^{-i q (x-L)} \delta_{ij}+ s_{ji} e^{i q(x-L)}\right]. \end{equation}
In the TLL wire region ($x<L$), the incoming and outgoing fields, corresponding to a situation where there is only an incoming field in wire $i$, are given by
\begin{equation} \label{eq:ansatz}
\phi_{j\frac{I}{O}}^{(i)} (x,t) = \int_{-\infty}^{\infty} \frac{dq}{2\pi} \phi_{i I} (q)
e^{ -i \omega t} \left(a_{j\frac{I}{O}}^{(i)} e^{ -ikx} + b_{j\frac{I}{O}}^{(i)} e^{ i kx}
\right). \end{equation}
Here the AC frequency $\omega = \tilde{v} k$, where $\tilde{v}$ is the
renormalized Fermi velocity in the TLL region and is given by Eq.~\eqref{eq:v}.
Note that in Eq.~\eqref{eq:ansatz} above, $q$ is the wave vector in the non-interacting FL leads, and
$k$ denotes the wave vector in the interacting TLL region for the fixed incoming energy $\omega$; they are related via $k = v q /\tilde{v}$.
We now proceed to solve the `plasmon scattering' problem and obtain the
elements of $\mathbb{S}$.
Let us consider an incoming current (from FL lead) only in wire $1$.
The continuity of the incoming and the outgoing currents at $x=L$, [using Eqs.~\eqref{eq:phiiI}-\eqref{eq:phijO}, and Eq.~\eqref{eq:ansatz}, in Eq.~\eqref{eq:j}]
gives the following equations in each wire (six in all)
\begin{eqnarray} & & a_{iI}^{(1)}~ e^{-i k L} ~+~ b_{iI}^{(1)}~ e^{i k L} ~=~ \delta_{i1}~, \quad
{\rm and} \\
& & a_{iO}^{(1)}~e^{-i k L} ~+~ b_{iO}^{(1)}~ e^{i k L} ~=~ s_{i1} ~, \end{eqnarray}
where $s_{i1}$ are the elements of the
first column of ${\mathbb S}$, and the superscript is used to indicate that the incoming current is in wire $1$. Within the TLL region ($x<L$),
substituting Eq.~\eqref{eq:ansatz} in Eqs. \eqref{EOM1}-\eqref{EOM2},
gives the following set of equations for each wire (six in all):
\begin{eqnarray} & & 2 (\omega - v k )~a_{iI}^{(1)}~+~ k \lambda ~a_{iO}^{(1)}~=~0 ~,~~~{\rm and} \\
& & 2 (\omega + v k )~b_{iI}^{(1)}~-~k \lambda ~b_{iO}^{(1)}~=~0 ~, \label{eq:24}\end{eqnarray}
in addition to the consistency condition, $\omega=\tilde v k$, with
$\tilde{v}=\sqrt{v^2 - \lambda^2/4}$. Besides
these the boundary condition at the junction ($x=0$) is given by the field
(current) splitting matrix ${\mathbb M}$ as
\begin{equation} a_{iO}^{(1)} ~+~ b_{iO}^{(1)} ~=~ \sum_j~{\mathbb{M}}_{ij}~ ( a_{jI}^{(1)} ~+~ b_{jI}^{(1)}) ~.
\end{equation}
Solving these 15 equations simultaneously gives us the three elements of
the first column of the AC current splitting matrix. Repeating this
calculation for the case with an incoming unit current in the other wires
will give us the elements in the other two columns.
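The construction above is straightforward to check numerically. The sketch below (ours; parameter values are illustrative) assembles the $15\times 15$ linear system per injected wire, and verifies the unitarity of $\mathbb S$, the DC limit $\mathbb S \to \mathbb M$, and, for the $\theta = 0$ point of the ${\mathbb M}_2$ class, the multiple-reflection result quoted later in this section:

```python
import numpy as np

def S_matrix(M, v, lam, L, omega):
    """Solve the 15 linear equations of the text, one injected wire at
    a time, for the 3x3 AC current splitting matrix S."""
    vt = np.sqrt(v**2 - lam**2 / 4.0)   # renormalized TLL velocity
    k = omega / vt                      # wave vector in the TLL region
    u, ub = np.exp(-1j * k * L), np.exp(1j * k * L)
    S = np.zeros((3, 3), dtype=complex)
    for inj in range(3):                # wire carrying the incident current
        A = np.zeros((15, 15), dtype=complex)
        rhs = np.zeros(15, dtype=complex)
        # unknown ordering: aI (0-2), bI (3-5), aO (6-8), bO (9-11), s (12-14)
        for i in range(3):
            # continuity of the incoming and outgoing currents at x = L
            A[i, i], A[i, 3 + i] = u, ub
            rhs[i] = 1.0 if i == inj else 0.0
            A[3 + i, 6 + i], A[3 + i, 9 + i], A[3 + i, 12 + i] = u, ub, -1.0
            # bulk relations from substituting the ansatz into the EOM;
            # the e^{+ikx} component carries a factor (omega + v k)
            A[6 + i, i], A[6 + i, 6 + i] = 2 * (omega - v * k), k * lam
            A[9 + i, 3 + i], A[9 + i, 9 + i] = 2 * (omega + v * k), -k * lam
            # current splitting boundary condition at the junction, x = 0
            A[12 + i, 6 + i] = A[12 + i, 9 + i] = 1.0
            A[12 + i, 0:3] -= M[i, :]
            A[12 + i, 3:6] -= M[i, :]
        S[:, inj] = np.linalg.solve(A, rhs)[12:]
    return S

v, lam, L, omega = 1.0, 0.5, 1.0, 0.7
# theta = 0 of the M2 class (wires 1 and 2 connected, wire 3 disconnected)
# and the chiral point chi_- (theta = 2 pi/3) of the M1 class
M2_0 = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, 1]])
M1_chi = np.array([[0.0, 1, 0], [0, 0, 1], [1, 0, 0]])
for M in (M2_0, M1_chi):
    S = S_matrix(M, v, lam, L, omega)
    assert np.allclose(S.conj().T @ S, np.eye(3))            # no dissipation
    assert np.allclose(S_matrix(M, v, lam, L, 1e-6), M, atol=1e-4)  # DC limit

# wire 1 at theta = 0 (M2 class) reproduces the multiple-reflection result
vt = np.sqrt(v**2 - lam**2 / 4.0)
r0 = lam / (2 * (v + vt))
z = np.exp(4j * omega * L / vt)
assert np.isclose(S_matrix(M2_0, v, lam, L, omega)[0, 0],
                  r0 * (1 - z) / (1 - r0**2 * z))
```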
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig1.pdf}
\end{center}
\caption{The amplitudes of the AC current splitting matrix for (a) the
${\mathbb{M}}_1$ class as a function of the frequency $\omega$ at the fixed
point parameterized by $\theta = \pi/3$, (b) the ${\mathbb{M}}_1$ class as
a function of $\theta$ for $\omega L/v=\pi/2$, (c) the
${\mathbb{M}}_2$ class as a function of $\omega$ with $\theta = \pi/3$, and
(d) the ${\mathbb{M}}_2$ class as a function of $\theta$ for $\omega
L/v=\pi/2$.
The curves marked by blue squares, red circles, and magenta diamonds, represent
$|s_{11}|,~|s_{21}|,~|s_{31}|$ respectively in all the panels.
The {\it e-e} interaction strength is chosen to be $\lambda = 0.5 v$~ (which gives
$\tilde{v}=0.97 v$ and $g = 0.88$).
\label{fig1} }
\end{figure}
\subsection{TRS preserving (${\mathbb M}_2$) fixed points}
Let us first consider the TRS preserving systems, {\it i.e.~}
$Y$-junctions with the ${\mathbb{M}}_2$ class of fixed points. Following the
procedure described above, we calculate the AC current splitting matrix
${\mathbb S}$, which has only six independent components. These are
given by
\begin{eqnarray}
s_{11} &= & \frac{1}{\xi} \left[ 2~ \tilde v ~b - i ~\lambda ~\sin (2k L) \right]~, \label{s11_a}\\
s_{22}&=& \frac{1}{\xi} \left[ 2 ~\tilde v ~c - i ~\lambda ~\sin (2k L) \right]~ , \\
s_{33}&=& \frac{1}{\xi} \left[ 2~ \tilde v ~a - i~ \lambda~ \sin (2k L) \right] ~,\\
s_{21}&=& \frac{2}{\xi} ~\tilde{v}~a ~, \label{s21_a} \\
s_{31}&=& \frac{2}{\xi} ~\tilde{v} ~c ~, \\
s_{32}&=& \frac{2}{\xi} ~ \tilde{v}~b ~, \label{s32_a}
\end{eqnarray}
where $a$, $b$ and $c$ are defined below Eq.~\eqref{m1}, and finally
\begin{equation} \xi = 2 \left[\tilde{v}~\cos (2k L) - i v~\sin (2 k L) \right] ~. \label{xi}\end{equation}
We note again that the $\omega$ dependence of $\mathbb{S}$
appears in Eqs.~\eqref{s11_a}-\eqref{xi} via $k$, which is defined after Eq.~\eqref{eq:24}.
The other elements of $\mathbb{S}$ are given by $s_{12}=s_{21}$,
$s_{13}=s_{31}$, and finally $s_{23}=
s_{32}$. Note that the three off-diagonal
elements and the three diagonal elements have a very similar structure and
differ only due to the different corresponding element in the current
splitting matrix ${\mathbb{M}}_2$ at the junction.
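These closed forms can be checked numerically (a sketch of ours, with illustrative parameters): the resulting $\mathbb S$ is symmetric and unitary, and reduces to ${\mathbb M}_2$ in the DC limit:

```python
import numpy as np

def S_closed(theta, v, lam, L, omega):
    """Closed-form AC splitting matrix for the TRS preserving (M2) class."""
    vt = np.sqrt(v**2 - lam**2 / 4.0)
    k = omega / vt
    a = (1 + 2 * np.cos(theta)) / 3.0
    b = (1 - np.cos(theta) + np.sqrt(3) * np.sin(theta)) / 3.0
    c = (1 - np.cos(theta) - np.sqrt(3) * np.sin(theta)) / 3.0
    xi = 2 * (vt * np.cos(2 * k * L) - 1j * v * np.sin(2 * k * L))
    ls = 1j * lam * np.sin(2 * k * L)
    s11, s22, s33 = (2 * vt * b - ls) / xi, (2 * vt * c - ls) / xi, (2 * vt * a - ls) / xi
    s21, s31, s32 = 2 * vt * a / xi, 2 * vt * c / xi, 2 * vt * b / xi
    return np.array([[s11, s21, s31],
                     [s21, s22, s32],
                     [s31, s32, s33]])

v, lam, L, theta = 1.0, 0.5, 1.0, np.pi / 3
S = S_closed(theta, v, lam, L, omega=0.7)
assert np.allclose(S.conj().T @ S, np.eye(3))      # unitary: no dissipation

# DC limit: S -> M2, the non-interacting splitting matrix
a = (1 + 2 * np.cos(theta)) / 3.0
b = (1 - np.cos(theta) + np.sqrt(3) * np.sin(theta)) / 3.0
c = (1 - np.cos(theta) - np.sqrt(3) * np.sin(theta)) / 3.0
M2 = np.array([[b, a, c], [a, c, b], [c, b, a]])
assert np.allclose(S_closed(theta, v, lam, L, 1e-8), M2, atol=1e-6)
```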
To get some physical insight into the form of ${\mathbb S}$, let us consider
the specific case of a junction with $\theta = 0$ in the ${\mathbb{M}}_2$
class, {\it i.e.}
$[a, b, c] = [1,0,0]$, which parameterizes the case of wires $1$ and $2$ being
directly connected, effectively becoming one wire of length $2L$, with wire
$3$ completely disconnected from the other two.
For this case, Eq.~\eqref{s11_a} and Eq.~\eqref{s21_a} reduce to,
\begin{eqnarray} \label{chk1} s_{11} &=& \frac{ -i (\lambda/2)
\sin{2 k L}}{\tilde{v}~\cos (2k L) - i v~\sin (2 k L) }~, \\
\label{chk2} s_{21} &=& \frac{ \tilde v}{\tilde{v}~\cos (2k L) -
i v~\sin (2 k L)}~. \end{eqnarray}
As a check of our calculations we note that Eqs.~\eqref{chk1} and \eqref{chk2}
are identical to the set of equations given in Eq.~(17) of
Ref.~[\onlinecite{agarwal_ac}], which were derived
for counter-propagating quantum Hall edge states which interact with each
other. Furthermore this simpler case can also be
derived by considering a step-like variation of the interaction strength,
{\it i.e.}~$g(x) = g$ for $-L<x<L$, and $g(x) =1$ otherwise, in an interacting
1D wire [\onlinecite{Berg, Safi}]. Consider an electronic wave packet incident
on the interacting region from the non-interacting region. Fractionalization
of charge [\onlinecite{pham}]
in the interacting region, implies the reflection of fractional
charge $q^* = r_0 e$, where $r_0 = (1-g)/(1+g) $ and transmission of a
fractional charge $q^* = t_0 e$ into the interacting region, where $t_0 =
2 g/(1+g)$. Other reflection and transmission coefficients for a single
impact are given by $r_0' = -r_0$, and $t_0' = 2/(1+g)$. The overall
reflection and transmission probability in this case can be obtained by
considering the infinite sequence of reflection and transmission from the
two boundaries of the finite length interacting region, and are given by,
\begin{equation} r(\omega) = r_0 + t_0 t_0'\, r_0'\, e^{4i \omega L/\tilde{v}} \sum_{n=0}^{\infty} (r_0' e^{2i \omega L/
\tilde{v}})^{2n}~ = r_0 \frac{ 1- e^{4i \omega L/\tilde{v}}}{1 - r_0^2
e^{4i \omega L/\tilde{v}}}~, \end{equation}
which is identical to Eq.~\eqref{chk1}. Note that $r_0 = \lambda/[2(v
+\tilde{v})]$. The overall transmission coefficient
is given by the sum of the following infinite series,
\begin{equation} t(\omega) = t_0 t_0' e^{2i \omega L/\tilde{v}}\sum_{n=0}^{\infty}
(r_0' e^{2i\omega L/\tilde{v}})^{2n}~ = \frac{ t_0 t_0' e^{2i \omega L/
\tilde{v}}}{1 - r_0^2 e^{4i \omega L/\tilde{v}}}~, \end{equation}
and is identical to Eq.~\eqref{chk2}.
We thus see that the AC scattering coefficients encode the full history
of the trajectory of the electron including multiple charge fractionalization
events at the FL-TLL interfaces, and at the junction.
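The resummations above can be verified by enumerating the bounce paths explicitly (a sketch of ours, with illustrative parameter values); each reflected path carries an odd number of internal reflections $r_0'$, and each transmitted path an even number:

```python
import numpy as np

v, lam, L, omega = 1.0, 0.5, 1.0, 0.9
vt = np.sqrt(v**2 - lam**2 / 4.0)
g = np.sqrt((v - lam / 2.0) / (v + lam / 2.0))
r0, r0p = (1 - g) / (1 + g), -(1 - g) / (1 + g)
t0, t0p = 2 * g / (1 + g), 2 / (1 + g)
p = np.exp(2j * omega * L / vt)      # phase of a single traversal
z = p**2                             # phase of a full round trip

# a reflected path transmits in (t0), makes 2n + 1 internal reflections
# (r0p), and transmits out (t0p)
r_sum = r0 + sum(t0 * t0p * r0p ** (2 * n + 1) * z ** (n + 1) for n in range(100))
# a transmitted path makes 2n internal reflections
t_sum = sum(t0 * t0p * p * (r0p**2 * z) ** n for n in range(100))

assert np.isclose(r_sum, r0 * (1 - z) / (1 - r0**2 * z))     # closed form for r
assert np.isclose(t_sum, t0 * t0p * p / (1 - r0**2 * z))     # closed form for t
assert np.isclose(abs(r_sum)**2 + abs(t_sum)**2, 1.0)        # unitarity
```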
\subsection{TRS violating (${\mathbb M}_1$) fixed points}
We now consider the case of $Y$-junctions which do not preserve TRS,
{\it i.e.~} the ${\mathbb{M}}_1$ class of fixed points. In this
case the ${\mathbb S}$ matrix has the same cyclic form as the
${\mathbb{M}}_{1}$ class of matrices, and it has only three independent
elements, since all the diagonal elements of ${\mathbb{M}}_{1}$ are identical.
Following the same procedure as in the previous case, we calculate the elements of $\mathbb{S}$ to be
\begin{widetext}
\begin{eqnarray}
s_{11}& = & \lambda^{-1} \eta^{-1} \Big[\tilde{v} \Big\{8
\tilde{v} \Big(2 \tilde{v} e^{3 i k L} \cos (k L) (2 \lambda \cos \theta +3
\cos (2 k L) (-2 \lambda \cos \theta -\lambda +4v) + \lambda -12 v) -3
\tilde{v}^2 \left(-1+e^{2 i k L}\right) \nonumber \\
& \times & \left(1+e^{2 i k L}\right)^2 + i e^{3 i k L} \sin (k L) (3 \lambda
(2 \cos \theta +1) (2v - \lambda) \cos (2 k L)-2 \lambda \cos \theta (2 v+
\lambda )+(6 v+\lambda ) (4v-\lambda ))\Big) \nonumber \\
&-& 3 \left(-1+e^{2 i k L}\right)^2 \left(1+e^{2 i k L}\right)
\left(\lambda ^3+16 v^3-8 \lambda v^2 \cos \theta -4 \lambda v^2\right)\Big\}
\nonumber \\
&+& 3 v \left(-1+e^{2 i k L}\right)^3 (2v-\lambda) \left(\lambda ^2+4 v^2-4
\lambda v \cos \theta \right)\Big] ~,
\end{eqnarray}
where
\begin{equation}
\eta ~=~ 12 ~e^{3 i k L} ~[2\zeta - \lambda \sin(kL) ] \times \left( [2\zeta
- \lambda \sin(kL) ]^2 + 4 i \lambda \sin (kL) \zeta (1-\cos \theta) \right)~,
\end{equation}
\end{widetext}
and
\begin{equation}
\zeta = \tilde v \cos (kL) - i v \sin(kL)~.
\end{equation}
The other elements of $\mathbb{S}$ are given by
\begin{eqnarray}
s_{21}&=& -48 ~\eta^{-1}~ \tilde{v}^2 e^{3 i k L} \left[ 2 \zeta
c \right. -\left. i
\lambda \sin(kL) b ~
\right]~, \\
s_{31}&=& 48 ~\eta^{-1}~ \tilde{v}^2 e^{3 i k L} \left[ 2 \zeta^*
c + i \lambda \sin(kL)
b \right]~,
\end{eqnarray}
along with $s_{22}=s_{33}= s_{11},~ s_{12}=s_{23}= s_{31}$, and
finally $s_{13}=s_{32} =s_{21}$.
In Fig.~\ref{fig1}, we plot the absolute values of some of the elements of
$\mathbb{S}$, as
a function of the incoming energy ($\omega=\tilde{v} k$) and the parameter
$\theta $ describing the fixed points of the junction. Note that unlike the
DC conductivity, the AC current amplitudes carry signatures of the
{\it e-e} interactions, {\it i.e.~}they depend on the {\it e-e} interaction strength
$\lambda$, and the finite length $L$ of the TLL wires. The amplitudes
oscillate as a function of the frequency of the incident AC current with
a period of $ 2 \pi \tilde{v}/L$ for the ${\mathbb{M}}_1$ class of fixed
points and with a period of $ \pi \tilde{v}/L$ for the ${\mathbb{M}}_2$ class
of fixed points, as can be seen from Figs.~\ref{fig1}(a) and \ref{fig1}(c)
respectively.
Experimentally such measurements of oscillations of the AC current amplitude,
as a function of the frequency may be used to classify the $Y$-junctions,
whose fixed point may not be known {\it a priori}.
Motivated by recent time-resolved experiments on 1D TLL wires
[\onlinecite{Kamata, Perfetto2014}], we study the propagation of a
wave packet incident from the FL lead of wire $1$ in Fig.~\ref{fig2}.
Note that the results depicted in Fig.~\ref{fig2} (a) are similar to the
results for reflected current in type-I geometry for a 1D TLL wire,
reported in Ref.~[\onlinecite{Kamata}]. Our results generalize the recent
results of Ref.~[\onlinecite{Perfetto2014}], for arbitrary shaped wave packet
propagation in a single TLL wire to the case of multi-wire junctions.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig2.pdf}
\end{center}
\caption{Pulse propagation in a $Y$-junction as a function of time, for an
incoming current only in wire $1$. For the junction tuned to $\theta = 0$ fixed
point of class ${\mathbb{M}}_2$, panel (a) shows the outgoing reflected
current in wire $1$, and panel (b) shows the outgoing transmitted current in
wire $2$.
In this case, wires $1$ and $2$ are completely connected effectively becoming
one wire, and wire $3$ is completely disconnected. For the junction tuned to
the chiral fixed point $\chi_{+}$, {\it i.e.~}$\theta = 4 \pi/3$ of the
${\mathbb{M}}_1$ class of fixed points, panel (c) shows the outgoing
(reflected) current in wire $1$, and panel (d) shows the outgoing
(transmitted) current in wire $3$.
\label{fig2} }
\end{figure}
\subsection{ The DC limit}
In the linear response regime,
the DC conductivity of the $Y$-junction differs depending on whether the TLL
wires are connected to FL leads or extend to infinity. This is
well known for the case of a single TLL wire, whose linear DC
conductance is $e^2/h$ when connected to FL leads and is
$ge^2/h$ for an infinite TLL wire [\onlinecite{Stone}].
To obtain the DC conductivity for a $Y$-junction connected to FL leads,
from our AC results, we note that in the DC limit, {\it i.e.~}as
$\omega ~\to~ 0$ (or as $k \to 0$), for both classes of fixed points we have
\begin{equation}
\lim_{\omega \to 0}{\mathbb S} = {\mathbb{M}}~.
\end{equation}
This implies $ I^{\rm out}_i = \sum_j {\mathbb{M}}_{ij} I^{\rm in}_j$, where we have defined $I^{{\rm in}({\rm out})}_i$ to be the current flowing towards
(away from) the junction in the TLL wire $i$.
Further if $V_j$ is the voltage applied in the FL lead connected to wire $j$,
then the incoming current (per spin) is
related to it by $I^{\rm in}_i = \sum_j (e^2/h)~{\delta}_{ij} V_j$. Now
using the definition of the junction conductance $\mathbb{G}$, which relates the net current flowing towards the junction to the external voltages, {\it i.e.~}$I_i \equiv I^{\rm in}_i - I^{\rm out}_i= \sum_j \mathbb{G}_{ij} V_j$,
we obtain the DC conductance matrix (per spin orientation) to be
\begin{equation} {\mathbb{G}} =(e^2/h) ({\mathbb{I}}~-~{\mathbb{M}})~. \label{GF} \end{equation}
For a $Y$-junction with TLL leads extending to infinity, the voltage applied
in the LL lead of wire $j$ is related to the incoming current by
$I^{\rm in}_i = \sum_j (g e^2/h)~{\delta}_{ij} V_j$, and the current splitting
matrix at the junction is $\tilde{\mathbb{M}}$. Thus the conductance matrix
(per spin) is given by
\begin{equation} {\mathbb{G}} \label{GL}~=~
(g e^2/h) ({\mathbb{I}}~-~\widetilde{\mathbb{M}})~.
\end{equation}
As a check of Eqs.~\eqref{GF} and \eqref{GL}, we note that they are consistent
with the conductance of several fixed points reported in
Ref.~[\onlinecite{chamon1}] using the Kubo formula and other methods.
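To make Eq.~\eqref{GF} concrete, the sketch below evaluates the conductance matrix for an assumed current-splitting matrix: we take the perfectly chiral case in which the incoming current in each wire is fully transmitted to the next wire (a cyclic permutation; this particular $\mathbb{M}$ is our illustrative choice), and verifies that current conservation, i.e.~each column of $\mathbb{M}$ summing to one, makes each column of $\mathbb{G}$ sum to zero.

```python
# Hypothetical current-splitting matrix M for a chiral fixed point:
# incoming current in wire j is fully passed to wire j+1 (illustrative choice).
M = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]

e2_h = 1.0  # conductance quantum e^2/h, set to 1 for illustration

# G = (e^2/h)(I - M), the FL-lead DC conductance matrix of Eq. (GF)
G = [[e2_h * ((i == j) - M[i][j]) for j in range(3)] for i in range(3)]

# Current conservation: each column of M sums to 1, so each column of G sums to 0.
column_sums = [sum(G[i][j] for i in range(3)) for j in range(3)]
```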
We emphasize here that the DC conductivity for a junction of finite length TLL wires
connected to FL leads does not carry any
signature of interactions and charge fractionalization events in the system.
In contrast, the AC conductivity depends on the {\it e-e} interactions as well
as the length of each wire.
In the next section, we consider a $Y$-junction of infinite length TLL
wires --- [see Fig.~\ref{fig0} (b)], with point-like tunneling impurities at
the junction, and calculate the `tunneling' current
and quantum noise at the junction.
\section{Tunneling current and tunneling noise at the junction}
\label{secIV}
We now consider the effect of point-like
charge conserving tunneling operators between infinite TLL wires, at the
junction ($x=0$), and study the tunneling current and low frequency quantum
noise arising due to
these. Note that each boundary condition at the junction, characterized by $\theta$, corresponds
to a scale-invariant boundary condition, or an RG fixed point, of the bosonic field theory.
The knowledge of $\mathbb{M}$ at each $\theta$ thus completely specifies
all the reflection and transmission amplitudes at the level of the Hamiltonian for the Y-junction --- see Ref.~[\onlinecite{chamon1}].
Additional small tunneling (boundary operators) between wires may be treated as a small variation
of the amplitudes.
If all of the tunneling operators at the junction are irrelevant in the RG sense, then the fixed point is stable; otherwise, switching on relevant tunneling operators around a fixed point makes the junction `flow' to another fixed point as the length and energy scales in the system are changed. In our case, a tunneling operator is relevant (irrelevant) if the boundary scaling dimension of
the tunneling operator is less than (greater than) unity, {\it i.e.~}$d_0 < 1$ ($d_0 > 1$). We emphasize, however, that as long as the wire length is not very large (compared to the other length scales set by the temperature or the external voltages), so that the RG flow does not take one far away from the fixed point (either stable or unstable), the calculations described in this section remain valid for all fixed points.
A similar set-up was used in
Ref.~[\onlinecite{Feldman_PRB2011}]
to study rectification in a $Y$-junction of TLL wires, which was found to be
strongest for junctions violating TRS, and for strongly coupled junctions.
We consider very narrow (point-like) tunneling barriers, so that the time
duration of a tunneling event is much smaller than the time duration between
two successive tunneling events. Such discrete tunneling events lead to the so
called `shot noise', whose spectrum carries the signature of correlations
between different tunneling events. In addition, we also have different
voltages in different leads. This leads to the so called `Josephson noise',
which arises from the quantum interference of the wave-functions, on different
sides of the tunneling impurity (different wires in our case), and it may
lead to a divergence
in the noise spectrum at frequency $\omega = q V_{\rm eff}/h$, where
$V_{\rm eff}$ is the effective voltage difference that the tunneling operator is
subjected to [\onlinecite{chamon_prb1995}]. In what follows, we derive
the tunneling noise at the junction from a perturbative calculation, which
gives both the shot noise and Josephson noise contributions.
In a clean junction (no tunneling `impurity'), the current in wire $i$ is
given by $I_i = {\mathbb G}_{ij} V_j$, where ${\mathbb G}$ is given by
Eq.~\eqref{GL} for a $Y$-junction with infinite TLL wires. The switching on of
tunneling operators ($\psi_{iO}^\dagger \psi_{jI}$) in the vicinity
($x_i \le 1/k_{\rm F}$) of the $Y$-junction which is
tuned to be at a particular fixed point, leads to an additional tunneling
current $\delta I_i$, such that $I_i^{\rm total} =
I_i + \delta I_{i}$. If the tunneling Hamiltonian is expressed as,
\begin{equation} \label{eq:Htunn} H_{\rm tunn}~=~ \gamma ~\psi_{iO}(0,t)^\dagger ~
\psi_{jI}(0,t) +h.c~, \end{equation}
then the tunneling current operator ($\delta \hat{I}_{\rm i}$) is defined by
\begin{eqnarray} \delta \hat I_{i} (t) &=& q \frac{d\hat \rho_{iO}}{dt} = -i q \hbar^{-1}
\left[\hat \rho_{iO} , \hat H_{\rm tunn}\right] \\
&=& i q \gamma \hbar^{-1} [\psi^\dagger_{iO} \psi_{jI}
- h.c.~]~. \nonumber \end{eqnarray}
It can be calculated at any time $t$ from the following expression,
\begin{equation} \langle \delta \hat I_{i} \rangle = \langle 0| ~S(-\infty;t) ~\delta
\hat I_{i} (t)~ S(t;-\infty) ~|0 \rangle~, \end{equation}
where $|0 \rangle$ denotes the ground state of the unperturbed system,
{\it i.e.}~the initial state at $t \to -\infty$. Here $S$ is the scattering
matrix arising due to the tunneling impurities, and it is given by
\begin{equation}
S(t;-\infty) = S^\dag (-\infty;t) ={\cal T} e^{-i \hbar^{-1}
\int_{-\infty}^{t} \hat H_{\rm tunn}(t') dt' }~, \end{equation}
where ${\cal T}$ denotes the time ordering operator.
Using the notation: $\hat B_{ij}(x,t) \equiv \psi_{iO}^\dagger
\psi_{jI}$ for the tunneling operator, and expressing the fermionic operators
in terms of bosonic fields using Eq.~\eqref{BI}, we get
\begin{eqnarray} \hat B_{ij}(x,t) &=& \frac{1}{2 \pi \alpha} F_{iO}^\dagger F_{jI}~ e^{i
(2 \pi/L)(N_{iO}-N_{jI})vt} \nonumber \\
& \times & e^{-i\phi_{jI}(x,t)}~ e^{-i\phi_{iO}(x,t)}~. \end{eqnarray}
The tunneling current, in terms of the tunneling operator is,
$\delta \hat I_{i} (t) = i q \gamma \hbar^{-1} [\hat B_{ij}(t) - h.c.]$,
while the scattering
matrix is given by $S (t,-\infty) = 1 - i\hbar^{-1} \gamma \int_{-\infty}^t
dt' [\hat B_{ij}(t') + h.c.]$, up to first order in the tunneling amplitude
$\gamma$. Thus the expectation value of the tunneling current operator,
up to second order in $\gamma$, is given by
\begin{eqnarray} \label{eq:Ibs}
\delta I_{i} &=&\frac{q\gamma^2}{ \hbar^2 } \int_{-\infty}^t dt' \times \\
&& \langle 0| ( \hat B_{ij}^\dagger (t) \hat B_{ij}(t') - \hat B_{ij}(t') \hat
B_{ij}^\dagger (t) ) + h.c. ~|0 \rangle ~. \nonumber \end{eqnarray}
The symmetrized noise is given by the Fourier transform of the
current-current correlator:
\begin{equation} S(\omega)= \int_{-\infty}^{\infty} ~dt~ e^{-i\omega t} \langle \delta
\hat I_{i}(t)\delta \hat I_{i}(0) +\delta \hat I_{i}(0)\delta \hat I_{i}(t)
\rangle~, \end{equation}
and up to second order in $\gamma$, we obtain,
\begin{eqnarray} S(\omega) &=& \frac{q^2\gamma^2}{\hbar^2 } \int_{-\infty}^{\infty} ~dt~
e^{-i\omega t} \\
&\times & \langle0| (\hat B_{ij} (t) \hat B_{ij}^\dagger(0) + \hat
B_{ij}^\dagger (0) \hat B_{ij} (t))+ h.c. |0\rangle. \nonumber \end{eqnarray}
\begin{table}[t!]
\begin{center}
\begin{tabular}{l r } \hline \hline \\
Operators (${\mathbb{M}}_1$ class) & Scaling dimension ($d_0$) \\ \hline
$\psi_{iO}^\dagger \psi_{iI}$ & $\frac{4 g (1 - \cos \theta )}{3 \left(g^2+
\left(g^2-1\right) \cos \theta +1\right)}$ \\
$\psi_{2O}^\dagger \psi_{1I},\psi_{3O}^\dagger \psi_{2I},\psi_{1O}^\dagger
\psi_{3I}$ & $ \frac{2 g \left(\cos \theta +\sqrt{3} \sin \theta
+2\right)}{3 \left(g^2+\left(g^2-1\right) \cos \theta +1\right)}$ \\
$\psi_{1O}^\dagger \psi_{2I},\psi_{2O}^\dagger \psi_{3I},\psi_{3O}^\dagger
\psi_{1I}$ & $\frac{2 g \left(\cos \theta -\sqrt{3} \sin \theta
+2\right)}{3 \left(g^2+\left(g^2-1\right) \cos \theta +1\right)} $\\
\\ \hline \hline \\
Operators (${\mathbb{M}}_2$ class) & Scaling dimension ($d_0$)\\ \hline
$\psi_{1O}^\dagger \psi_{1I}$ & {\footnotesize $\frac{1}{3} g (2+\cos \theta -
\sqrt{3} \sin \theta)$ }\\
$\psi_{2O}^\dagger \psi_{2I}$ & {\footnotesize$\frac{1}{3} g (2+\cos \theta +
\sqrt{3} \sin \theta)$} \\
$\psi_{3O}^\dagger \psi_{3I}$ & {\footnotesize$\frac{2}{3} g (1-\cos
\theta )$} \\
$\psi_{1O}^\dagger \psi_{2I}$,$~\psi_{2O}^\dagger \psi_{1I}$ &
{\footnotesize$\frac{3+g^2}{6g} (1-\cos \theta )$} \\
$\psi_{2O}^\dagger \psi_{3I}$,$~\psi_{3O}^\dagger \psi_{2I}$ &
{\footnotesize$\frac{3+g^2}{12 g}(2+ \cos \theta -\sqrt{3} \sin \theta )$ }\\
$\psi_{3O}^\dagger \psi_{1I}$,$~\psi_{1O}^\dagger \psi_{3I}$ &
{\footnotesize$ \frac{3+g^2}{12g} (2+\cos \theta+\sqrt{3} \sin \theta )$} \\
\\ \hline \hline
\end{tabular}
\end{center}
\caption {Scaling dimensions of various tunneling operators for both
${\mathbb{M}}_1$ and ${\mathbb{M}}_2$ classes of fixed points.}
\label{T1}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0 \linewidth]{fig3.pdf}
\end{center}
\caption{Color plot of the scaling dimension for various tunneling operators.
Panels (a), (b) and (c) represent the operators \textcolor{blue}{$\psi_{iO}^\dagger
\psi_{iI}$, $\psi_{1O}^\dagger \psi_{2I}$ and $\psi_{2O}^\dagger \psi_{1I}$ }
respectively, for the ${\mathbb{M}}_1$ class of fixed points (as they appear
in Table \ref{T1}). Panels (d) - (i) display the scaling dimensions of various
tunneling operators as they appear in Table \ref{T1}, for the ${\mathbb{M}}_2$
class of fixed points. The purple region in all the panels has $d_0 <1/2$, the
green region represents $1/2 < d_0< 1$ and the blue region in all the panels
has $d_0 > 1$. Both the backscattering current and the quantum noise show
diverging behavior for $d_0 < 1/2$, {\it i.e.}~in the $(\theta, g)$ parameter
space represented in purple. Also note that the tunneling operators become
relevant in the region $d_0 <1$, {\it i.e.}~purple and green regions, and
will make the junction `flow' to another fixed point.
\label{fig3}}
\end{figure}
To obtain the final expressions for the tunneling current and for the
symmetrized quantum noise, we need the ground state expectation values of
operators, such as ${\cal O} \equiv \hat B_{ij}(x,t)
\hat B_{ij}^\dagger(x',t')$. Following a standard procedure
[\onlinecite{delft}], at zero temperature ($T$), these are given by
\begin{equation} \label{eq:expectation}
\langle 0|{ \cal O} |0\rangle = \frac{\alpha^{2d_0}}{4 \pi^2 \alpha^2}
\frac{e^{i \frac{2 \pi}{L} \langle 0| N_{iO}-N_{jI}|0\rangle v (t-t') }}{[
(x-x')^2 - (v(t-t') - i \alpha)^2]^{d_0}} ~, \end{equation}
where $d_0$ is the boundary scaling dimension of the tunneling operator
involved, {\it i.e.}~$\hat B_{ij} = \psi_{iO}^\dagger \psi_{jI}$. For all
possible tunneling operators, $d_0$ is tabulated in Table \ref{T1} for both
the ${\mathbb{M}}_1$ and ${\mathbb{M}}_2$ class of fixed points,
[\onlinecite{agarwal_tdos}].
In addition we also have terms like $N_{iO}-N_{jI}$ in the exponential whose
expectation values depend on the external chemical potentials $\mu_i$ (or
voltages $V_i$) applied on each wire in the grand canonical ensemble picture.
The outgoing $N_{iO}$ are related by the current splitting matrix
${\mathbb{M}}$ to the incoming $N_{iI}$ which are in turn
related to the external reservoir voltages. Thus we have
\begin{equation} N_{iO}=\sum_p {\mathbb{M}}_{ip} N_{pI}~, \quad {\rm and} \quad
\frac{h v }{L} \langle N_{iI}\rangle = \mu_{i} = q V_i ~. \end{equation}
The expectation value of $\langle N_{iO}-N_{jI} \rangle$ now defines a
new frequency scale which is related to external voltages by
\begin{equation} \label{eq:omega} \omega_0 \equiv \frac{2\pi v}{L} \langle N_{iO}-N_{jI}
\rangle = h^{-1}q \left(\sum_p( {\mathbb{M}}_{ip} V_p) - V_j\right)~, \end{equation}
where $j$ and $p$ are wire indices. Physically $\hbar \omega_0 $ is the
effective voltage difference that the tunneling operator `feels' (is subjected
to) for an electron incoming in lead $j$ and finally outgoing in lead $i$.
We now proceed to calculate the tunneling current by substituting
Eqs.~\eqref{eq:omega} and \eqref{eq:expectation}, in Eq.~ \eqref{eq:Ibs}.
A straightforward calculation, using the integral,
\begin{equation}
I_{\pm} = \int_{-\infty}^{\infty} dt' \frac{e^{\pm i \omega_0 t'}}{(
\frac{\alpha}{v} - i t')^{2 d_0} } = \frac{2 \pi~|\omega_0|^{2d_0-1}}{\Gamma
(2 d_0)} e^{-\frac{\alpha |\omega_0|}{v}} \theta(\mp \omega_0)~, \end{equation}
gives,
\begin{equation} \label{dI}
\delta I_{i}=q \frac{2\pi\gamma^2}{h^2 \alpha^2}~\frac{1}{\Gamma(2d_0)}~
\left(\frac{\alpha}{v}\right)^{2d_0}|\omega_0|^{2d_0-1} {\rm sign}(\omega_0).
\end{equation}
Here $\Gamma(2d_0)$ appearing in the denominator is the Gamma function.
The
scaling dimension $d_0$ in general depends on the strength of interactions
and the fixed point ($\theta$) that the junction is tuned to. It is tabulated
in Table \ref{T1}, and a contour plot of $d_0$ in the ($\theta, g$) parameter
space is presented in Fig.~\ref{fig3}. For the case of a `non-interacting'
junction, {\it i.e.~}$d_0 \to 1$ (which is equivalent to the case of
$g \to 1 $ in a single wire scenario), $ \delta I_i |_{d_0 \to 1}= q
\frac{2\pi\gamma^2}{h^2 v^2} \omega_0$. In the limiting case of $d_0 \to 1/2$
(which is equivalent to the case of $g \to 1/2 $ in the single wire scenario),
we have $\delta I_i|_{d_0 \to \frac{1}{2}} = q \frac{2\pi\gamma^2}{h^2
\alpha v} {\rm sign}(\omega_0)$.
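The scaling of Eq.~\eqref{dI} with $\omega_0$ is easy to explore numerically. In the sketch below all dimensionful prefactors ($q$, $\gamma$, $\alpha$, $v$, $h$) are set to one, which is our simplification for illustration and not a physical choice.

```python
import math

def delta_I(omega0, d0):
    """Zero-temperature tunneling current of Eq. (dI), with q, gamma,
    alpha, v and h all set to 1 for illustration."""
    prefactor = 2.0 * math.pi / math.gamma(2.0 * d0)
    return prefactor * abs(omega0) ** (2.0 * d0 - 1.0) * math.copysign(1.0, omega0)

# d0 = 1: ohmic response, delta_I linear in omega0.
# d0 < 1/2: delta_I grows as omega0 -> 0, the divergence discussed in the text.
```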
To relate this to earlier work, let us consider the fixed point $\theta = 0$
of the ${\mathbb{M}}_2$ class, {\it i.e.}, $[a, b, c] = [1,0,0]$, which
represents the specific case of wires $1$ and $2$ being directly connected
and the wire $3$ being completely disconnected. Now consider a tunneling
operator $\psi_{2O}^\dagger \psi_{2I}$, for which $\omega_0 = h^{-1}q (V_1
- V_2)$ and $d_0 = g$. The tunneling current in this case is given by
\begin{equation} \delta I_{i}=q \frac{2\pi\gamma^2}{h^2 \alpha^2}~\frac{1}{\Gamma(2g)}~
\left(\frac{\alpha}{v}\right)^{2g}|h^{-1} q (V_1-V_2)|^{2g-1} ~, \end{equation}
which has earlier been reported in the context of current enhancement by a
tunneling impurity in Ref.~[\onlinecite{feldman_prb2003}], and as a
limiting case of two or more impurity scattering in TLL wires in
Refs.~[\onlinecite{makogon, agarwal_prb2007}].
Equation~\eqref{dI} can be generalized to finite temperatures by using the
following transformation [\onlinecite{chamon_prb1995}]:
\begin{equation} \label{cm}
I_{\pm} = \int_{-\infty}^{\infty} dt' \frac{e^{\pm i \omega_0 t'}}{\left(
\frac{\alpha}{v} - i t'\right)^{2 d_0} } \to e^{i \pi d_0}
\int_{-\infty}^\infty dt' \frac{e^{\pm i \omega_0 t'}}
{\left|\frac{\sinh(\pi T t')}{ \pi T}\right|^{2 d_0} }~, \end{equation}
which gives,
\begin{equation}
I_{\pm} (T)= 2 (\pi T)^{2 d_0-1} B\left(d_0 + \frac{i \omega_0}{2 \pi T},
d_0 - \frac{i \omega_0}{2 \pi T}\right)~ e^{\pm \frac{ \omega_0}{2T}}~, \end{equation}
where $T$ denotes the temperature in units of $k_{\rm B}/\hbar$, with
$k_{\rm B}$ being the Boltzmann constant, and $B(x,y) = B(y,x)$ is the
$\beta$-function. The $\beta$-function can also be written in terms of the
Gamma
function: $B(x,y) = \Gamma(x) \Gamma(y)/\Gamma(x+y)$. The tunneling current
at finite $T \le \hbar v / \alpha$ is now given by,
\begin{eqnarray}
\delta I_i (T) &=& q \frac{4\gamma^2}{h^2 \alpha^2} \left(\frac{\alpha}{v}
\right)^{2d_0} (\pi T)^{2 d_0-1} \\
&\times & B\left(d_0 + \frac{i \omega_0}{2 \pi T}, d_0 - \frac{i \omega_0}{2
\pi T}\right) \sinh\left(\frac{ \omega_0}{2T}\right) ~ \nonumber. \end{eqnarray}
In the limiting case of $d_0 \to 1$, we can use the identity $\Gamma(1+ix)
\Gamma(1-ix) = \pi x /\sinh(\pi x)$, to obtain $\delta I_i (T)|_{ d_0 \to 1}
= q \frac{2\pi\gamma^2}{h^2 v^2} \omega_0$, independent of the temperature.
For the case of $d_0 \to 1/2$, one can use the identity $\Gamma(1/2+ix)
\Gamma(1/2-ix) = \pi /\cosh(\pi x)$, to get $\delta I_i (T)|_{d_0 \to
\frac{1}{2}} = q \frac{4\pi\gamma^2 \alpha}{h^2 v} \tanh[\omega_0/(2T)] $.
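The two Gamma-function identities used in these limiting cases can be checked numerically. Since the Python standard library provides a real Gamma function only, the sketch below uses a standard Lanczos approximation for complex arguments (the widely used $g=7$ coefficient set); this implementation is ours, added purely for verification.

```python
import cmath
import math

LANCZOS_G = 7
LANCZOS_COEF = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
                771.32342877765313, -176.61502916214059, 12.507343278686905,
                -0.13857109526572012, 9.9843695780195716e-6,
                1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex z via the Lanczos approximation (g = 7)."""
    z = complex(z)
    if z.real < 0.5:  # reflection formula for the left half-plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = LANCZOS_COEF[0]
    for i in range(1, LANCZOS_G + 2):
        x += LANCZOS_COEF[i] / (z + i)
    t = z + LANCZOS_G + 0.5
    return cmath.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

x = 0.7
id1 = cgamma(1 + 1j * x) * cgamma(1 - 1j * x)      # should equal pi*x/sinh(pi*x)
id2 = cgamma(0.5 + 1j * x) * cgamma(0.5 - 1j * x)  # should equal pi/cosh(pi*x)
```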
The symmetrized quantum noise, up to second order in the tunneling strength
$\gamma$, can also be calculated in a similar fashion and is given by
\begin{eqnarray}
S(\omega)&=& q^2 \frac{2\pi\gamma^2}{h^2 \alpha^2}~\frac{1}{\Gamma(2d_0)}~
\left(\frac{\alpha}{v}\right)^{2d_0} \nonumber \\
&\times & \left(|\omega-\omega_0|^{2d_0-1}+ |\omega +\omega_0|^{2d_0-1}\right)
\label{noise1} ~.
\end{eqnarray}
It can be expressed in terms of the tunneling current as
\begin{equation} S(\omega) = q\delta I_{i}\left(|1-\omega/\omega_0|^{2d_0-1}+|1+\omega/
\omega_0|^{2d_0-1}\right) ~. \end{equation}
As a check of our calculations, we note that for the specific case of
$\theta =0$, discussed in the previous paragraph, Eq.~\eqref{noise1} of our
manuscript, reproduces Eq.~(17) of Ref.~[\onlinecite{chamon_prb1995}] in which
the authors studied the perturbative noise for a small point impurity in an
otherwise clean TLL. In the limit $|\omega/\omega_0| \to 0$, or at small
frequencies, $S(\omega) \approx 2q \delta I_{i}$ independent of
the interaction parameter, which is the typical Schottky's shot noise result.
It corresponds to the uncorrelated arrival of particles at the tunnel barrier, whereby the time
interval between arrival times is described by a Poissonian distribution.
In the opposite limit of $|\omega_0/\omega| \to 0$,
we get $S(\omega) \approx 2q \delta I_{i} |\omega/\omega_0|$, consistent with
results for non-interacting electrons [\onlinecite{chamon_prb1995}]. In the
limiting case of $d_0 \to 1$, for low frequencies ($\omega < \omega_0$) we
have $S(\omega)|_{d_0 \to 1} = 2 q \delta I_i$, while for high frequencies
($\omega > \omega_0$), we have $S(\omega) = 2 q \delta I_i \frac{\omega}{\omega_0}$
giving a linear dependence on the frequency. Note that the high frequency
limit of the noise spectrum for $d_0 \to 1$ is primarily determined by
zero point fluctuations and is independent of the applied voltages, as
expected [\onlinecite{Buttiker}]. In the limit $d_0 \to 1/2$, we obtain
$S(\omega)|_{d_0 \to \frac{1}{2}} = 2 q \delta I_i$.
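The frequency dependence of the ratio $S(\omega)/(q\,\delta I_i)$ discussed above can be summarized in a few lines (all prefactors drop out of the ratio; the function is our condensation of the expressions in the text):

```python
def noise_ratio(w, w0, d0):
    """S(omega)/(q * delta_I_i) from the expression for the symmetrized
    noise in terms of the tunneling current (w != +-w0 assumed)."""
    return abs(1 - w / w0) ** (2 * d0 - 1) + abs(1 + w / w0) ** (2 * d0 - 1)

# Schottky limit: w -> 0 gives 2 for any d0 (Poissonian shot noise).
# d0 = 1 and w > w0: ratio = 2*w/w0, the linear high-frequency behavior.
# d0 < 1/2: the ratio diverges as w -> +-w0.
```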
\begin{figure}[t]
\begin{center}
\includegraphics[width=.98 \linewidth]{fig4.pdf}
\end{center}
\caption{ Plot of the tunneling current in panel (a) and the quantum
noise in panel (b) in wire $1$, as a function of $\omega_0$, for different
values of the interaction strength. The junction is tuned to the `chiral'
fixed point ($\chi_+$), {\it i.e.~} $\theta = 4 \pi/3$ in the $\mathbb{M}_1$
class, and the tunneling operator is chosen to be the backscattering operator
$\psi_{1O}^\dagger \psi_{1I}$. The scaling dimension of $\psi_{1O}^\dagger
\psi_{1I}$ at the $\chi_+$ fixed point for $g = (0.3, 0.7, 1, 1.5)$, which
corresponds to the lines represented by the blue square, red circle, black
solid, and magenta diamond markers, respectively in both the panels, is given
by $d_0 = (0.39, 0.84, 1, 1.14)$. We have chosen $\omega = 0.1 \alpha /v$ for
panel (b).
\label{fig4} }
\end{figure}
The noise power spectrum in Eq.~\eqref{noise1} can also be generalized to
include finite temperature effects. Using the mapping of
Eq.~\eqref{cm}, we obtain the finite temperature symmetrized noise to be
\begin{equation} \label{SwT}
S(\omega) = q^2 \frac{4\gamma^2}{h^2 \alpha^2} \left(\frac{\alpha}{v}
\right)^{2d_0} (\pi T)^{2 d_0-1} \left[ f(\omega + \omega_0) + f(\omega -
\omega_0) \right]~, \end{equation}
where
\begin{equation}
f(x) = \cosh \left[\frac{x}{2 T}\right] B\left(d_0 + \frac{i x}{2 \pi T},
d_0 - \frac{i x}{2 \pi T}\right)~. \end{equation}
Note that finite temperature smears the singularities of the noise power
spectrum. The zero frequency limit of Eq.~\eqref{SwT} gives
\begin{equation}
S(\omega \to 0) = 2q\delta I_{i} (T)\coth\left[\frac{\omega_0}{2T}\right]~,
\end{equation}
which is the equivalent of the equilibrium Johnson-Nyquist noise for a
$Y$-junction. For the case of $d_0 \to 1$, we get $S(\omega, T)|_{d_0 \to 1}
= q \frac{2 \pi \gamma^2}{h^2 v^2} \left[h(\omega + \omega_0) + h(\omega -
\omega_0)\right]$ where $h(x) = x \coth[x/(2T)]$. For $d_0 \to 1/2$, we
have, $S(\omega, T)|_{d_0 \to 1/2} = 2q \delta I_{i} (T)\coth[\omega_0/(2T)]$.
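The crossover contained in the zero-frequency result can be sketched as follows; here `delta_I` stands for the (temperature-dependent) tunneling current and $q$ is set to one, both our illustrative simplifications.

```python
import math

def s_zero(delta_I, omega0, T):
    """Zero-frequency noise S(0) = 2*q*delta_I*coth(omega0/(2T)), with q = 1."""
    return 2.0 * delta_I / math.tanh(omega0 / (2.0 * T))

# T << omega0: coth -> 1, recovering the Poissonian shot-noise value 2*q*delta_I.
# T >> omega0: S(0) ~ 4*delta_I*T/omega0, the Johnson-Nyquist thermal form.
```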
We plot the ratio $\delta I_1/\delta I_1 (d_0 =1) $ versus $\omega_0$ in
Fig.~\ref{fig4} (a), for the backscattering operator $\psi^\dagger_{1O}
\psi_{1I}$ when the junction is tuned to be at the $\chi_+$ fixed point.
The divergence of this ratio whenever $2d_0 -2 <0$ is evident. The ratio
$S(\omega)/2 q \delta I_1$, is plotted in panel (b) of Fig.~\ref{fig4}.
This ratio diverges whenever $2 d_0 -1 <0 $.
An important difference in the three-wire case compared to the two-wire case
is that both $\omega_0$ and $d_0$, {\it i.e.}~the frequency of divergence in
$S(\omega)$ as well as the power law of divergence, are in general complicated
functions of the boundary conditions at the junction (${\mathbb{M}}$), and
the {\it e-e} interaction
strength. Note that the noise diverges as $\omega \to \pm \omega_0$ when
$d_0 < 1/2$. We believe that this divergence is not a limitation of our
$\gamma^2$ perturbation theory and it will persist even if we go to higher
orders in $\gamma$, as in the case of a `tunneling' impurity in a single TLL
wire [\onlinecite{chamon_prb1995}]. However, this divergence is a limitation
of our low-energy theory, and in a realistic experimental scenario it should be
regularized by the highest relevant energy scale ({\it e.g.}~the temperature or
the maximum external voltage). This is usually achieved by replacing the
ultraviolet energy cut-off $\hbar v/\alpha$ by $k_{\rm B} T$ or
$q\max[V_1, V_2, V_3]$.
Finally we note that the results of this section are valid for an electronic
$Y$-junction as well as for a quasi-particle junction formed from quantum
Hall (QH) edge states. The substitution, $q \to e$ accounts for electron
tunneling and $q \to \nu e$ takes care of quasi-particle tunneling in QH edge
states, where $e$ denotes the electron charge and $\nu $ is the QH filling
fraction.
\section{Summary and conclusions}
\label{summary}
In this article we investigated the AC conductivity of a $Y$-junction formed
from finite length TLL wires connected to FL reservoirs, based on the plasmon
scattering approach, for injected charge pulses of
arbitrary shapes. This formalism, gives the full spatiotemporal profile of
the charge wave packet in all the wires, and is therefore very useful for
analyzing time resolved transport experiments in TLL wires
[\onlinecite{Perfetto2014, Kamata}] and their junctions.
We find that unlike the DC conductivity of a `clean' junction, the AC
conductivity depends on the strength of the {\it e-e} interactions and the length
of the wire. Consequently it carries signatures of charge fractionalization at the
TLL-FL interface as well as at the junction.
The AC conductivity also displays an oscillatory behavior as a function of
the frequency of the incoming pulse, with the
periodicity of $\pi \tilde{v}/L$ for the time-reversal symmetric junctions,
{\it i.e.~} junctions characterized by ${\mathbb{M}}_2$ class of fixed points,
and with a period of $ 2 \pi \tilde{v}/L$ for junctions which break
time-reversal symmetry, {\it i.e.~}those characterized by the
${\mathbb{M}}_1$ class of fixed points. The limitation of our calculation is
that it is valid only for low AC frequencies which do not breach the
linearization regime of each TLL wire, {\it i.e.}~$\omega < v/\alpha$.
Additionally, we consider point-like tunneling impurities at the junction of
infinite
TLL wires, and find the corresponding tunneling current and
quantum noise spectrum. We explicitly show that the correlations arising from strong
{\it e-e} interactions in TLL wires, give rise to singularities in the noise
spectrum (calculated up to second order in $\gamma$), as a function of the frequency
or the applied voltage. The divergence in the noise spectrum for some specific
frequencies will possibly persist to even higher orders in
$\gamma$, and is an artefact of the effective low-energy TLL Hamiltonian
that we are using. In any realistic experimental scenario, the high energy
or ultraviolet cut-off $\alpha^{-1}$ will get replaced by the other energy scales
such as the temperature or the maximum applied voltage,
which would cut off the divergences. Another important aspect to consider
is that these calculations are valid only in the `tunneling' limit, as long
as $\gamma$ does not flow (in an RG sense) beyond the TLL bandwidth
[\onlinecite{Kane_prl1992}], {\it i.e.}~$\gamma^2
< (\alpha |\omega \pm \omega_0|/v)^{1-2d_0}$.
Note that similar effects have been reported in a tunneling scenario in a
two-wire junction [\onlinecite{chamon_prb1995,feldman_prb2003}], where such
divergences occur at very strong {\it e-e} interaction
strength of $g< 1/2$, which is a difficult regime to probe experimentally.
However the three-wire junction offers the possibility of being tuned (by
means of nano-gates applied in the vicinity of the junction) to various fixed
points, where this enhancement of the tunneling current and divergence
of the quantum noise can also occur in a very wide regime of $g$, including
attractive {\it e-e} interaction strengths --- see Fig.~\ref{fig3}.
We firmly believe that both of these studies, {\it i.e.}~the effects of
pulse propagation in a $Y$-junction and `backscattering' by tunneling
impurities at the junction, will be
very useful for interpreting time resolved experiments
[\onlinecite{Kamata, Perfetto2014}] in multi-wire junctions of interacting
electrons, and in the design and fabrication of quantum circuitry in the
future.
Experimentally, TLL wire $Y$-junctions may be fabricated using carefully
patterned 1D wires in a 2DEG, and tuned to various fixed points by means of
nano-gates applied near the junction. Another possibility is an `island'
set-up proposed in Ref.~[\onlinecite{das2}], formed from quantum-Hall edge
states, which may be more feasible. In this case, the tunneling operators
can be controlled by means of gate voltage operated constrictions in the
central region of the `island'.
\section*{Acknowledgment} We thank Diptiman Sen for stimulating discussions
and for carefully reading the manuscript. We gratefully acknowledge funding
from the INSPIRE Faculty Award by DST (Govt. of India), and from the
Faculty Initiation Grant by IIT Kanpur, India.
\section{Introduction}
The two-term Machin-like formula for pi is given by
\begin{equation}\label{eq_1}
\frac{\pi }{4} = {\alpha _1}\arctan \left( {\frac{1}{{{\beta _1}}}} \right) + {\alpha _2}\arctan \left( {\frac{1}{{{\beta _2}}}} \right),
\end{equation}
where ${\alpha _1}$, ${\alpha _2}$, ${\beta _1}$ and ${\beta _2}$ are integers or rational numbers. The Maclaurin series expansion of the arctangent function, also known as Gregory's series \cite{Lehmer1938, Borwein1989}, can be represented as
$$
\arctan \left( x \right) = \sum\limits_{m = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{m + 1}}}}{{2m - 1}}{x^{2m - 1}}} = x - \frac{{{x^3}}}{3} + \frac{{{x^5}}}{5} - \frac{{{x^7}}}{7} + \ldots.
$$
Since from this series expansion it follows that $\arctan \left( x \right) = x + O\left( {{x^3}} \right)$, it is reasonable to look for a two-term Machin-like formula for pi with smaller arguments (by absolute value) of the arctangent function in order to improve the convergence rate in computation \cite{Chien-Lih2004, Abrarov2017a, Abrarov2017b}.
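The practical effect of the argument size on the convergence of Gregory's series can be seen with a short numerical experiment (our own illustration): with the same number of terms, the partial sum at $x = 1/5$ is already accurate to machine precision, while the partial sum at $x = 1$ (the Leibniz series for $\pi/4$) is still far off.

```python
import math

def arctan_series(x, n_terms):
    """Partial sum of the Maclaurin (Gregory) series for arctan(x)."""
    return sum((-1) ** (m + 1) * x ** (2 * m - 1) / (2 * m - 1)
               for m in range(1, n_terms + 1))

# Truncation error after 10 terms, for a small and a large argument.
err_small = abs(arctan_series(0.2, 10) - math.atan(0.2))
err_large = abs(arctan_series(1.0, 10) - math.atan(1.0))
```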
Using an identity relating the arctangent function to the natural logarithm
$$
\arctan \left( {\frac{1}{x}} \right) = \frac{1}{{2i}}\ln \left( {\frac{{x + i}}{{x - i}}} \right),
$$
after some trivial rearrangements from equation \eqref{eq_1} it follows that \cite{Abrarov2017a}
\begin{equation}\label{eq_2}
i = {\left( {\frac{{{\beta _1} + i}}{{{\beta _1} - i}}} \right)^{{\alpha_1}}}{\left( {\frac{{{\beta _2} + i}}{{{\beta _2} - i}}} \right)^{{\alpha_2}}}.
\end{equation}
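Equation \eqref{eq_2} can be checked directly on a known example. For Machin's classic formula $\pi/4 = 4\arctan(1/5) - \arctan(1/239)$, i.e.~$\alpha_1 = 4$, $\beta_1 = 5$, $\alpha_2 = -1$ and $\beta_2 = 239$, a few lines of complex floating-point arithmetic confirm the identity (this illustration is ours):

```python
# Machin's formula: alpha_1 = 4, beta_1 = 5, alpha_2 = -1, beta_2 = 239
lhs = ((5 + 1j) / (5 - 1j)) ** 4 * ((239 + 1j) / (239 - 1j)) ** (-1)
# lhs equals i up to floating-point rounding
```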
Equations \eqref{eq_1} and \eqref{eq_2} can be significantly simplified for theoretical analysis by taking ${\alpha _2} = 1$. This leads to
\begin{equation}\label{eq_3}
\frac{\pi }{4} = {\alpha _1}\arctan \left( {\frac{1}{{{\beta _1}}}} \right) + \arctan \left( {\frac{1}{{{\beta _2}}}} \right)
\end{equation}
and
$$
i = {\left( {\frac{{{\beta _1} + i}}{{{\beta _1} - i}}} \right)^{{\alpha _1}}}\frac{{{\beta _2} + i}}{{{\beta _2} - i}},
$$
respectively. The solution with respect to ${\beta _2}$ in the last equation is given by
\begin{equation}\label{eq_4}
{\beta _2} = \frac{2}{{{{\left[ {\left( {{\beta _1} + i} \right)/\left( {{\beta _1} - i} \right)} \right]}^{{\alpha _1}}} - i}} - i.
\end{equation}
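Equation \eqref{eq_4} can be evaluated exactly in rational arithmetic. The sketch below (our own) represents Gaussian rationals by a pair of `Fraction`s and recovers two classic companion values: for $\alpha_1 = 4$, $\beta_1 = 5$ it returns $\beta_2 = -239$, i.e.~$\pi/4 = 4\arctan(1/5) - \arctan(1/239)$, and for $\alpha_1 = 2$, $\beta_1 = 2$ it returns $\beta_2 = -7$.

```python
from fractions import Fraction

def beta_2(alpha_1, beta_1):
    """Exact evaluation of Eq. (4) for integer alpha_1 and rational beta_1."""
    b = Fraction(beta_1)
    # (b + i)^alpha_1 = p + q*i, built up by repeated Gaussian multiplication
    p, q = Fraction(1), Fraction(0)
    for _ in range(alpha_1):
        p, q = p * b - q, p + q * b
    # ((b+i)/(b-i))^alpha_1 = (p+qi)/(p-qi) = (p+qi)^2/(p^2+q^2) = x + y*i
    d = p * p + q * q
    x, y = (p * p - q * q) / d, 2 * p * q / d
    # beta_2 = 2/(x + (y-1)i) - i
    den = x * x + (y - 1) ** 2
    re, im = 2 * x / den, -2 * (y - 1) / den - 1
    assert im == 0  # beta_2 is rational, as noted in the text
    return re
```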
It is not difficult to see that if ${\alpha _1}$ is an integer and ${\beta _1}$ is an integer or a rational number, then ${\beta _2}$ is a rational number \cite{Abrarov2017a, Abrarov2017b}.
Equation \eqref{eq_3} becomes particularly interesting by considering the second term as a small remainder defined as \cite{Abrarov2017a}
$$
\Delta = \arctan \left( {\frac{1}{{{\beta _2}}}} \right).
$$
Specifically, if the condition $\left|\Delta\right| \ll 1$ is satisfied, we can write
\begin{equation}\label{eq_5}
\frac{\pi }{4} \approx {\alpha _1}\arctan \left( {\frac{1}{{{\beta _1}}}} \right).
\end{equation}
Consequently, if there is an identity of kind
\begin{equation}\label{eq_6}
\frac{\pi }{4} = {\alpha _1}\arctan \left( {\frac{1}{\gamma }} \right),
\end{equation}
then ${\beta _1}$ in equation \eqref{eq_5} can be chosen such that ${\beta _1} \approx \gamma $ in order to make the term ${\alpha _1}\arctan \left( {1/{\beta _1}} \right)$ sufficiently close to $\pi /4$. This enables us to reduce the argument of the second arctangent function in equation \eqref{eq_3} such that $1/\left|\beta _2\right| \ll 1/\beta _1$.
It should be noted that an elegant method showing how to reduce the arguments of the arctangent function in the two-term Machin-like formula for pi was initially suggested by Chien-Lih \cite{Chien-Lih2004}. However, in contrast to Chien-Lih's method, our approach does not require multiple step-by-step algebraic manipulations and can be developed by a relatively simple iteration procedure (see \cite{Abrarov2017b} for details).
A random choice of the values ${\alpha _1}$ and ${\beta _1}$ is inefficient. For example, substituting ${\alpha _1} = 7$ and ${\beta _1} = {10^9}$ into equation \eqref{eq_4} leads to ${\beta _2}$ that is very close to unity and, therefore, not small enough for rapid convergence (see \cite{Abrarov2017a} for specific details). However, using the following equation \cite{Abrarov2017a, Abrarov2017b}
$$
\frac{\pi }{4} = {2^{k - 1}}\arctan \left( {\frac{{\sqrt {2 - {c_{k - 1}}} }}{{{c_k}}}} \right),
$$
where the nested radicals are ${c_{k + 1}} = \sqrt {2 + {c_k}} $ and ${c_1} = \sqrt 2 $, we can construct a two-term Machin-like formula for pi providing rapid convergence. Specifically, taking an integer ${\alpha _1} = {2^{k - 1}}$ and an integer or a rational number ${\beta _1}$ such that
$$
{\beta _1} = \frac{{{c_k}}}{{\sqrt {2 - {c_{k - 1}}} }} + \varepsilon, \qquad\qquad \left| \varepsilon \right| \ll \beta_1,
$$
where $\varepsilon $ is the error term, we obtain
\begin{equation}\label{eq_7a}
\frac{\pi }{4} = {2^{k - 1}}\arctan \left( {\frac{1}{{{\beta _1}}}} \right) + \arctan \left( {\frac{2}{{{{\left[ {\left( {{\beta _1} + i} \right)/\left( {{\beta _1} - i} \right)} \right]}^{{2^{k - 1}}}} + i}} + i} \right),
\end{equation}
where
\[
\frac{2}{{{{\left[ {\left( {{\beta _1} + i} \right)/\left( {{\beta _1} - i} \right)} \right]}^{{2^{k - 1}}}} + i}} + i = \frac{1}{{{\beta _2}}},
\]
which can result in a very rapid convergence rate in computing pi (see \cite{Abrarov2017a, Abrarov2017b} for more information).
Although this implementation can provide small arguments (by absolute value) of the arctangent functions in equation \eqref{eq_3}, it leads to a value ${\beta _2}$ with a large number of the digits in its numerator and denominator that may cause some complexities in computation \cite{Guillera2017}. In this work we show how the Newton--Raphson iteration method may be applied to resolve effectively such a problem.
Several efficient algorithms for computing pi have been developed by using the Newton--Raphson iteration method (see for example \cite{Gallagher2015}). However, its application to the two-term Machin-like formula for pi may be more promising since the ratios $1/\beta_1$ and $1/\beta_2$ can be chosen arbitrarily small by absolute value in order to achieve faster convergence in computation (see \cite{Abrarov2017a} and \cite{Abrarov2017b} describing how the ratios $1/\beta_1$ and $1/\beta_2$ can be reduced by absolute value). To the best of our knowledge this approach has never been reported in the scientific literature.
\section{Methodology}
There are several efficient approximations for the arctangent function \cite{Abrarov2017b, Chien-Lih2005, Milgram2005}. Recently we have derived a new series expansion of the arctangent function \cite{Abrarov2017b}
\begin{equation}\label{eq_8}
\begin{aligned}
\arctan \left( x \right) &= i\sum\limits_{m=1}^{\infty}{\frac{1}{2m-1}}\left(\frac{1}{\left(1+2i/x\right)^{2m-1}}-\frac{1}{\left(1-2i/x\right)^{2m-1}}\right) \\
&=2\sum\limits_{m = 1}^\infty {\frac{1}{{2m - 1}}\frac{{{g_m}\left( x \right)}}{{g_m^2\left( x \right) + h_m^2\left( x \right)}}},
\end{aligned}
\end{equation}
where
$$
{g_1}\left( x \right) = 2/x, \,\, {h_1}\left( x \right) = 1,
$$
$$
{g_m}\left( x \right) = {g_{m - 1}}\left( x \right)\left( {1 - 4/{x^2}} \right) + 4{h_{m - 1}}\left( x \right)/x
$$
and
$$
{h_m}\left( x \right) = {h_{m - 1}}\left( x \right)\left( {1 - 4/{x^2}} \right) - 4{g_{m - 1}}\left( x \right)/x.
$$
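For illustration, the recurrence can be coded directly; the following Python sketch (the function name is ours) sums the series in double precision:

```python
import math

def arctan_series(x, M):
    """Arctangent via the g_m, h_m recurrence of the series expansion (8)."""
    g, h = 2.0 / x, 1.0
    a = 1.0 - 4.0 / (x * x)        # common factor (1 - 4/x^2)
    b = 4.0 / x                    # common factor 4/x
    s = g / (g * g + h * h)        # m = 1 term
    for m in range(2, M + 1):
        g, h = g * a + b * h, h * a - b * g
        s += g / ((2 * m - 1) * (g * g + h * h))
    return 2.0 * s
```

Since consecutive terms decrease roughly by the factor $1/\left(1 + 4/x^2\right)$, only a few terms are needed for small arguments.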
Although this series expansion \eqref{eq_8} converges very rapidly, especially at $x \ll 1$, the large numbers in the numerator and denominator of the rational number ${\beta _2}$ decelerate the computation \cite{Guillera2017}. In order to resolve this problem we suggest the application of the Newton--Raphson iteration method.
Let
\begin{equation}\label{eq_9}
\arctan \left( x \right) = y.
\end{equation}
Then from this equation it follows that
$$
x = \tan \left( y \right)
$$
or
\begin{equation}\label{eq_10}
\tan \left( y \right) - x = 0.
\end{equation}
The Newton--Raphson iteration method is based on the formula \cite{Householder1970, Alefeld1981, Scavo1995, Recktenwald2000}
\begin{equation}\label{eq_11}
{y_{n + 1}} = {y_n} - \frac{{f\left( {{y_n}} \right)}}{{f'\left( {{y_n}} \right)}}.
\end{equation}
Therefore, since, in accordance with relation \eqref{eq_10},
$$
f\left( y \right) = \tan \left( y \right) - x \Rightarrow f'\left( y \right) = \frac{d}{{dy}}\left( {\tan \left( y \right) - x} \right) = {\sec ^2}\left( y \right),
$$
equation \eqref{eq_11} yields a very efficient iteration formula for the arctangent function
\begin{equation}\label{eq_12}
{y_{n + 1}} = {y_n} - {\cos ^2}\left( {{y_n}} \right)\left( {\tan \left( {{y_n}} \right) - x} \right),
\end{equation}
such that (see equation \eqref{eq_9})
$$
\mathop {\lim }\limits_{n \to \infty } {y_n} = \arctan \left( x \right).
$$
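A minimal Python illustration of this iteration (double precision only; the function name is ours):

```python
import math

def arctan_newton(x, y0=0.0, steps=8):
    """Newton-Raphson iteration (12) for arctan(x); convergence is quadratic."""
    y = y0
    for _ in range(steps):
        y -= math.cos(y) ** 2 * (math.tan(y) - x)
    return y
```

Starting even from ${y_0} = 0$, a handful of iterations reproduces $\arctan(x)$ to machine precision.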
\section{Implementation}
Computational tests show that an approximation of the function $1 - {\sin ^2}\left( {{y_n}} \right)$ converges faster than an approximation of the function ${\cos ^2}\left( {{y_n}} \right)$. Therefore, taking this into account and considering that in our case $x = 1/{\beta _2}$, it is convenient to rewrite formula \eqref{eq_12} in the form
\begin{equation}\label{eq_13}
{y_{n + 1}} = {y_n} - \left( {1 - {{\sin }^2}\left( {{y_n}} \right)} \right)\left( {\tan \left( {{y_n}} \right) - \frac{1}{{{\beta _2}}}} \right).
\end{equation}
There is a series expansion for the tangent function
$$
\tan \left( \theta \right) = \sum\limits_{m = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{m - 1}}{2^{2m}}\left( {{2^{2m}} - 1} \right){B_{2m}}{\theta ^{2m - 1}}}}{{\left( {2m} \right)!}}}, \qquad\qquad - \frac{\pi }{2} < \theta < \frac{\pi }{2}.
$$
However, this series expansion contains the Bernoulli numbers ${B_{2m}}$, which require intense computation as the index $m$ increases. Therefore, application of this series expansion may not be optimal. Instead, we can significantly simplify the computation by rewriting equation \eqref{eq_13} in the form
\begin{equation}\label{eq_14}
{y_{n + 1}} = {y_n} - \left( {1 - {{\sin }^2}\left( {{y_n}} \right)} \right)\left( {\frac{{\sin \left( {{y_n}} \right)}}{{\cos \left( {{y_n}} \right)}} - \frac{1}{{{\beta _2}}}} \right),
\end{equation}
where the sine and cosine functions can be approximated, for example, by truncating their Maclaurin series expansions as given by
$$
\sin \left( {{y_n}} \right) = \sum\limits_{m = 0}^\infty {\frac{{{{\left( { - 1} \right)}^m}y_n^{2m + 1}}}{{\left( {2m + 1} \right)!}}}
$$
and
$$
\cos \left( {{y_n}} \right) = \sum\limits_{m = 0}^\infty {\frac{{{{\left( { - 1} \right)}^m}y_n^{2m}}}{{\left( {2m} \right)!}}},
$$
respectively.
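These truncated expansions are straightforward to implement; the Python sketch below mirrors the {\ttfamily sinF} and {\ttfamily cosF} functions of the Mathematica code given in the next section, and keeps the arithmetic exact when the argument is rational:

```python
from fractions import Fraction
import math

def sinF(y, M):
    """Maclaurin series for sin truncated at m = M (exact for Fraction input)."""
    y = Fraction(y)
    s, term = Fraction(0), y
    for m in range(M + 1):
        s += term
        term *= -y * y / ((2 * m + 2) * (2 * m + 3))
    return s

def cosF(y, M):
    """Maclaurin series for cos truncated at m = M (exact for Fraction input)."""
    y = Fraction(y)
    s, term = Fraction(0), Fraction(1)
    for m in range(M + 1):
        s += term
        term *= -y * y / ((2 * m + 1) * (2 * m + 2))
    return s
```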
\section{Sample computation}
In our work \cite{Abrarov2016} we have derived a simple formula for pi
$$
\frac{\pi }{4} = {2^{k - 1}}\arctan \left( {\frac{{\sqrt {2 - {c_{k - 1}}} }}{{{c_k}}}} \right).
$$
By comparing this equation with equation \eqref{eq_6} we can see that for this specific case ${\alpha _1} = {2^{k - 1}}$ while $\gamma = c_k/\sqrt {2 - {c_{k - 1}}}$. Consequently, we can construct the two-term Machin-like formula for pi with small arguments of the arctangent function by choosing the integer ${\beta _1}$ such that
$$
{\beta _1} \approx \frac{{{c_k}}}{{\sqrt {2 - {c_{k - 1}}} }}.
$$
Let $\left\lfloor {\,\,\,} \right\rfloor $ denote the floor function. Then the error term $\varepsilon $ can be taken as
$$
\varepsilon = \left\lfloor {\frac{{{c_k}}}{{\sqrt {2 - {c_{k - 1}}} }}} \right\rfloor - \frac{{{c_k}}}{{\sqrt {2 - {c_{k - 1}}} }}, \qquad\Rightarrow\varepsilon < 0.
$$
Therefore, it is convenient to apply a simple equation in order to determine the integer ${\beta _1}$ as follows \cite{Abrarov2017b}
$$
{\beta _1} = \left\lfloor {\frac{{{c_k}}}{{\sqrt {2 - {c_{k - 1}}} }}} \right\rfloor.
$$
We can assign, for example, $k = 6$. Consequently, we have
$$
{\beta _1} = \left\lfloor {\frac{{\sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt 2 } } } } } }}{{\sqrt {2 - \sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt {2 + \sqrt 2 } } } } } }}} \right\rfloor = 40.
$$
Substituting these values of $k$ and ${\beta _1}$ into equation \eqref{eq_7a} yields
\begin{equation}\label{eq_15}
\frac{\pi }{4} = 32\arctan \left( {\frac{1}{{40}}} \right) + \arctan \left( x \right),
\end{equation}
where
$$
x = \frac{1}{{{\beta _2}}} = - \frac{{{\text{38035138859000075702655846657186322249216830232319}}}}{{{\text{2634699316100146880926635665506082395762836079845121}}}}.
$$
The following is the Mathematica code that validates the Machin-like formula \eqref{eq_15} for pi by returning the output '{\bfseries{{\ttfamily{True}}}}':
\small
\begin{verbatim}
Pi/4 ==
32*ArcTan[1/40] +
ArcTan[-(38035138859000075702655846657186322249216830232319/
2634699316100146880926635665506082395762836079845121)]
\end{verbatim}
\normalsize
It is relatively easy to compute $\arctan \left( {1/\beta_1} \right)$ by using, for example, equation \eqref{eq_8}, since ${\beta _1}$ is just an integer. Even though it is advantageous to have the arguments of the arctangent functions in the Machin-like formula \eqref{eq_3} for pi small by absolute value, the computation may be challenging since the rational number ${\beta _2}$ consists of a numerator and denominator with a large number of digits \cite{Guillera2017}. This complexity occurs because each additional term in equation \eqref{eq_8} tremendously increases the number of digits due to the exponentiations in the determination of the intermediate values ${g_m}\left( x \right)$ and ${h_m}\left( x \right)$.
Using an example based on equation \eqref{eq_15} we can show how to overcome this problem. Suppose that the value of $\arctan \left( {1/40} \right)$ is computed with the required accuracy (by equation \eqref{eq_8} or by any other approximation or numerical method) and suppose that at the beginning we know only five decimal digits of the constant pi. Then the initial value ${y_1}$ can be computed by substituting $\pi \approx 3.14159$ into equation \eqref{eq_15}. This gives
\[
{y_1} = \frac{{3.14159}}{4} - 32\arctan \left( {\frac{1}{{40}}} \right) = - 0.0144358958054451040550 \ldots
\]
Since we employ the approximated value of the constant pi with only five decimal digits, there is no specific reason to compute all digits of the value ${y_1}$. Therefore, the value ${y_1}$ can be computed with the same (or slightly better) accuracy as the initial approximation $\pi \approx 3.14159$. Thus, with only the first six decimal digits
$$
{y_1} \approx - 0.\underbrace {014435}_{6\,\,{\text{digits}}}
$$
we get
$$
\pi \approx 4\left( {32\arctan \left( {\frac{1}{{40}}} \right) + {y_1}} \right) = 3.\underbrace{{14159}}_{5\,\,{\text{digits}}}358322178041622\ldots,
$$
i.e.\ $5$ correct decimal digits of pi, as expected. Substituting this approximated value ${y_1}$ into the Newton--Raphson iteration formula \eqref{eq_14} results in
\[
{y_2} = - 0.014435232407997574182 \ldots
\]
Experimental observation shows that each iteration step doubles the number of correct digits of pi \footnote{The convergence rate can be accelerated even further by using a higher order iteration such as Halley's method \cite{Alefeld1981, Scavo1995}, Householder's method \cite{Householder1970} and so on. However, the Newton--Raphson iteration method is the simplest in implementation (see for example \cite{Recktenwald2000}).}. Therefore, we can double the number of decimal digits (from $6$ to $12$) in the approximation of the value ${y_2}$ as follows
\[
{y_2} \approx - 0.\underbrace {014435232407}_{12\,\,{\text{digits}}}.
\]
This provides
$$
\pi \approx 4\left( {32\arctan \left( {\frac{1}{{40}}} \right) + {y_2}} \right) = 3.\underbrace {1415926535}_{10\,\,{\text{digits}}}8979323846\ldots,
$$
i.e.\ $10$ correct decimal digits of pi. Substituting the approximated value ${y_2}$ into the Newton--Raphson iteration formula \eqref{eq_14} leads to
$$
{y_3} = - 0.01443523240799679443929512531345 \ldots.
$$
Increasing again the number of decimal digits by a factor of two (from $12$ to $24$) and approximating $y_3$ in the form
$$
{y_3} \approx - 0.\underbrace {014435232407996794439295}_{24\,\,{\text{digits}}},
$$
we obtain
$$
\pi \approx 4\left( {32\arctan \left( {\frac{1}{{40}}} \right) + {y_3}} \right) = 3.\underbrace {141592653589793238462643}_{24\,\,{\text{digits}}}383279 \ldots,
$$
i.e.\ $24$ correct decimal digits of pi. This procedure can be repeated over and over again in order to achieve the required accuracy for pi.
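In double precision the whole procedure collapses to a few lines; the following Python sketch (which, unlike the arbitrary-precision Mathematica code below, saturates at about $16$ digits) reproduces the iterations above:

```python
import math

# 1/beta_2 from equation (15), converted from the exact integer ratio to a float.
x = (-38035138859000075702655846657186322249216830232319
     / 2634699316100146880926635665506082395762836079845121)
a = 32 * math.atan(1 / 40)   # 32*arctan(1/40), assumed precomputed accurately

y = 3.14159 / 4 - a          # initial value y_1 from only 5 digits of pi
for _ in range(3):           # each Newton-Raphson step doubles the correct digits
    s, c = math.sin(y), math.cos(y)
    y -= (1 - s * s) * (s / c - x)

pi_approx = 4 * (a + y)      # pi = 4*(32*arctan(1/40) + arctan(x))
```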
The following is the Mathematica code showing the computation of pi according to the described iteration procedure:
\bigskip
\small
\begin{verbatim}
(* M is the truncating integer *)
M := 20
(* Series expansion for sine function *)
sinF[y_, M_] := Sum[((-1)^m*y^(2*m + 1))/(2*m + 1)!, {m, 0, M}]
(* Series expansion for cosine function *)
cosF[y_, M_] := Sum[((-1)^m*y^(2*m))/(2*m)!, {m, 0, M}]
(* x is argument of the arctangent function *)
x := -(38035138859000075702655846657186322249216830232319/
2634699316100146880926635665506082395762836079845121)
Print["--------------------------------------------"]
y1 := 3.14159`200./4 - 32*ArcTan[1/40]
Print["The value of y1 is ", N[y1, 21], "..."]
y1 := -0.014435`200.
Print["Approximated value of y1 = ", N[y1, 5]]
Print["Actual value of pi is ", N[Pi, 21], "..."]
Print["Approximated value of pi is ",
N[4*(32*ArcTan[1/40] + y1), 21], "..."]
Print["--------------------------------------------"]
y2 := y1 - (1 - sinF[y1, M]^2)*(sinF[y1, M]/cosF[y1, M] - x)
Print["The value of y2 is ", N[y2, 20], "..."]
y2 := -0.014435232407`200.
Print["Approximated value of y2 = ", N[y2, 11]]
Print["Actual value of pi is ", N[Pi, 21], "..."]
Print["Approximated value of pi is ",
N[4*(32*ArcTan[1/40] + y2), 21], "..."]
Print["--------------------------------------------"]
y3 := y2 - (1 - sinF[y2, M]^2)*(sinF[y2, M]/cosF[y2, M] - x)
Print["The value of y3 is ", N[y3, 31], "..."]
y3 := -0.014435232407996794439295`200.
Print["Approximated value of y3 = ", N[y3, 23]]
Print["Actual value of pi is ", N[Pi, 31], "..."]
Print["Approximated value of pi is ",
N[4*(32*ArcTan[1/40] + y3), 31]]
\end{verbatim}
\normalsize
In our publication \cite{Abrarov2017b} we have presented the two-term Machin-like formula \eqref{eq_3} for pi, where $\alpha_1 = 2^{26}$, $\beta_1 = 85445659$ and
\[
\begin{aligned}
{{\beta}_{2}}&=-\frac{\overbrace{\text{2368557598}\ldots \text{9903554561}}^{\text{522,185,816}\,\,\text{digits}}}{\underbrace{\text{9732933578}\ldots \text{4975692799}}_{\text{522,185,807}\,\,\text{digits}}} \\
& =-\text{2}\text{.43354953523904089818}\ldots \times {{10}^{8}}\,\,\left( \text{rational} \right),
\end{aligned}
\]
providing $16$ decimal digits per term increment while the series expansion \eqref{eq_8} is applied \footnote{The interested reader can download all digits of the computed rational number $\beta_2$ here: \href{https://yorkspace.library.yorku.ca/xmlui/handle/10315/33173}{https://yorkspace.library.yorku.ca/xmlui/handle/10315/33173}}. The following Mathematica code shows this convergence rate:
\bigskip
\small
\begin{verbatim}
(* Define integer k *)
k := 27
(* Define rational value beta1 *)
beta1 := 85445659
(* This is an alternative representation of equation (4), see [5] *)
beta2 := (Cos[2^(k - 1)*ArcTan[(2*beta1)/(beta1^2 - 1)]])/(1 -
Sin[2^(k - 1)*ArcTan[(2*beta1)/(beta1^2 - 1)]])
(* Approximation of pi based on equations (7) and (8) *)
piApprox[M_] :=
N[4*I*Sum[(1/(2*m - 1))*(2^(k - 1)*(1/(1 + 2*I*beta1)^(2*m - 1) -
1/(1 - 2*I*beta1)^(2*m - 1)) + 1/(1 + 2*I*beta2)^(2*m - 1) -
1/(1 - 2*I*beta2)^(2*m - 1)), {m, 1, M}], 10000] // Re
Print["Number of correct digits of pi and convergence rate:"]
Print["----------------------------------------------------"]
piDigits[M_] := Abs[MantissaExponent[Pi - piApprox[M]]][[2]]
M = 10;
While[M <= 20, {Print["At M = ", M, " number of correct digits is ",
piDigits[M]], Print["The convergence rate is ",
piDigits[M] - piDigits[M - 1], " per term increment"]}; M++]
Print["----------------------------------------------------"]
Print["Actual value of pi is"]
N[Pi, 100]
Print["At M = 5 the approximated value of pi is"]
N[piApprox[5], 100]
\end{verbatim}
\normalsize
Since the intermediate values $y_n$ do not require all decimal digits during the computation, the rational number $\beta_2$ can also be approximated accordingly at each iteration. Consequently, the described application of the Newton--Raphson iteration method can effectively resolve this problem.
\section{Conclusion}
The Newton--Raphson iteration method is applied to a two-term Machin-like formula for the high-accuracy approximation of the constant pi. The accuracy of the approximated value of pi is doubled at each iteration step. Consequently, the accuracy of the intermediate values $y_n$ can also be doubled stepwise at each iteration (for example by rounding or by a gradual increase of the truncating integers for the relevant functions involved in the computation). This methodology significantly simplifies the computation of $\arctan \left( {1/{\beta _2}} \right)$ in the two-term Machin-like formula \eqref{eq_3} for pi and effectively resolves the problem related to the large number of digits in the numerator and denominator of the rational value ${\beta _2}$.
\section*{Acknowledgments}
This work is supported by National Research Council Canada, Thoth Technology Inc. and York University. The authors wish to thank Dr. Jes\'us
Guillera for constructive discussions.
\bigskip
\section{Introduction}
It has been a longstanding challenge to predict accurately the
equation of state and in particular the phase diagrams of fluids
and fluid mixtures from atomistic models via computer
simulation.\cite{1,2,3,4,5} Such applications have required a widespread
development of computer simulation methodology: significant
advances were possible through the invention of Gibbs ensemble
\cite{6,7,8} and configurational bias \cite{9,10,11}
methodologies, grand canonical Monte Carlo simulations combined
with histogram reweighting methods \cite{12,13,14} and finite size
scaling \cite{15,16,17,18} including field mixing,
\cite{19,20,21,22} umbrella sampling \cite{23,24} and other
expanded ensemble methods.\cite{25,26,27} A lot of
effort has also been spent towards developing more and more accurate
effective potentials from quantum chemistry methods (e.g.
Refs.\ \onlinecite{28,29,30,31,32,33}). However, for simple
and industrially relevant
fluids such as carbon dioxide \cite{34,35} it is still difficult
to predict the equation of state with high accuracy, such that
experimental data in the critical region and for temperatures $\pm
30 \%$ around it are reproduced to an accuracy of a few percent.
\cite{36,37}
Extending such calculations to mixtures
(in particular, solutions of polymers with supercritical carbon
dioxide as a solvent)
is even more
of a problem, due to the less complete knowledge of effective
potentials, and due to the extensive numerical effort required. A
three-dimensional parameter space involving the variables
temperature $T$, pressure $p$ and mole fraction $x$ needs to be
scanned for a binary system, and the phase diagrams are typically
very complicated, because vapor-liquid and fluid-fluid phase
equilibria compete with each other.\cite{38,39,40,D_98}
If polymers are chosen as a solute, their molecular weight enters as
an additional fourth variable. Moreover, the coarse-grained representation
of the solvent (e.g.\ carbon dioxide) and the solute have to be
compatible, i.e., one cannot combine an atomistic description of
the solvent with a much coarser representation of a macromolecular solute.
There is clearly a need to
devise models that are simple enough to allow extensive simulation
studies with an affordable effort and nevertheless accurate enough
to be interesting for applications to experiment and in the
context of industrial processing. Such validated coarse-grained models that
accurately reproduce thermodynamic bulk properties are also a starting
point for investigating the kinetics of phase separation or spatially
inhomogeneous systems (e.g.\ wetting and catalysis).
In the present work, we wish to make a step towards this goal,
extending our previous study of a selected sample of simple pure
fluids, in particular carbon dioxide \cite{36,37} to various
binary mixtures.
We want to stress that our aim is not to reach the most accurate
prediction of the phase diagram of a specific system. Indeed,
motivated by the excellent results obtained for the pure carbon
dioxide and for simple quadrupolar molecules in general, \cite{36}
we want to investigate how this model performs for mixtures,
especially solutions of various alkanes. In particular
we will show that the new coarse grained (CG) model
avoids the need for a large violation
of the Lorentz-Berthelot (LB) combining rules (that was required in previous work \cite{44}).
This violation
destroys the predictivity of the model because
extensive experimental data for the mixture would be required to determine
a parameter describing the violation of the LB combining rule.
Due to the generality of the approach and the level of accuracy
for the pure components,
\footnote{
Indeed, using the results of Ref.\ \onlinecite{36}, the CG parameters for given
substances (with a reasonable quadrupolar moment) can be computed
in a straightforward way without additional simulations.
}
the present investigation is relevant both for practical
purposes and for a general understanding of coarse graining procedures.
\cite{Voth,Yip-2005,KT-2004}
We will also present results of an analytical Equation of State (EOS)
which (apart from some region of the phase diagram near critical points) is
able to yield
rather satisfactory predictions in agreement with Monte Carlo results.
It is very important to note that this EOS uses the same model parameters
as the Monte Carlo simulation. This implies that in principle we
are in a position to attempt to predict the phase diagram of a binary
mixture (which is very complex \cite{38,39,40,D_98}) with comparatively small
computational effort. In this view the reader should also interpret
our choice to use LB combining rules: of course there are no reasons
to believe
that such approximations should be exact,
and certainly there will be cases where more complicated combining
rules are preferable. However, the simple LB combining rules used here
suffice for a wide class of systems with quite acceptable errors.
Due to the generality of the scheme presented in this work we expect
discrepancies, and some regions of the phase diagram might not be
predicted properly. This is related to several limitations of the present
procedure like {\em i}) the large $T$ expansion involved in the building
of the CG model for quadrupolar solvents \cite{36,37} and {\em ii}) limitations
related to our simple modeling approach like the simple potentials involved (Lennard-Jones),
the neglect of atomistic details, and the use of the LB combining
rules for which discrepancies are
\footnote{
However, it is sometimes far from clear whether deviations from the LB combining
rules are in fact a compensation for some bias of the model used. For instance, in
the case of polar substances the present work shows very clearly that the
strong violations of the rules in Ref.\ \onlinecite{44} were more properly
due to a poor modelling of the solvents.
}
known to arise (see e.g.\ \onlinecite{Hicks77} for some systems
also investigated in
this work). In order to disentangle point {\em i}) from point {\em ii}) we also
present investigations of similar apolar mixtures for which the new
CG model \cite{36,63} does not result in any improvement. The results
show similar discrepancies from experiment as the polar phase diagrams,
confirming the quality of the choice in Refs.\ \onlinecite{36,63}.
We want to stress that in order to test the goodness of
our CG model, the only reliable method is a Monte Carlo
investigation.
Indeed, without MC simulation it is impossible to distinguish
the bias related to the approximations involved in the EOS from
the bias involved in the CG model [point {\em ii}) above]. For instance,
we will present results for the mixture of methane and carbon dioxide
for which EOS results will be in better agreement with experiments
than MC results: this is clearly a fortuitous cancellation!
It is important to report that other interesting and significant attempts
to build a systematic description of mixture phase diagrams are present in
the literature. For instance in \onlinecite{Stoll2003,Vrab2005}
mixtures are treated
with models previously investigated in \onlinecite{Vrab2001}.
For some of the molecules studied, these models allow for an additional
parameter that can be adjusted and consequently a more accurate fit
of experimental data is possible.
On the other hand, there is a loss in predictivity because
the full phase diagrams of the pure substances are required
in order to determine
the simulation parameters (computed in a $\chi^2$ fit which minimizes
discrepancies with experiment\cite{Vrab2001}) plus mixture data
\cite{Stoll2003,Vrab2005} to determine the mixing parameters.
\footnote{
It is interesting to observe how atomistic models that are not improved
to describe all the pure substances phase diagram but take as
experimental input only the critical temperature and density (like in our case)
are less accurate than our simpler (see for instance the discussion of
EPM2 model \cite{HY-95} in \onlinecite{36}).
}
So the strategy of the present work is to deal with relatively simple models,
where (in the framework of Monte Carlo simulations) the statistical mechanics
can be dealt with at a very good level of accuracy (e.g.\ long runs employing
advanced Monte Carlo techniques are possible to minimise statistical
errors and systematic errors
due to finite size effects which are avoided by finite size scaling
analysis). These
models are suitable for analytic EOS models as well, and can serve as a
starting point for the coarse grained modeling of polymer solutions. Of
course, we do not imply that a complementary simulation strategy (making
models as detailed as possible, to account for the packing of molecules in
the liquid as accurately as possible, including polarizability, etc.) is
not worth pursuing in its own right, but it is outside of the scope of the
present work.
\section{Computational details and outline
}
It is well established
\cite{5,19,20,21,22,41,42,43,44,44a,44b} that the most reliable approach to
study the phase behavior of fluids is based on grand canonical
Monte Carlo simulations together with histogram reweighting and finite size
scaling techniques, especially if one wishes to include the critical region. In this
study, we follow this approach, and amend it by successive
umbrella sampling \cite{24} to obtain coexistence curves far from
criticality. This method has the additional advantage that the
interfacial free energy between the coexisting phases can be
extracted as well.\cite{47,48,49,50} As we are interested in a very
fast simulation code, we omit any potentials including effective
charges, and restrict our attention to short range effective
potentials. Three-body (nonbonded)
forces are avoided as well. Electrostatic quadrupole-quadrupole interactions
are treated as a perturbation (which is practically justifiable
\cite{37}), such that an effective angular-independent (but
temperature-dependent \cite{36,37,51,52}) interaction decaying
proportionally to the power $r^{-10}$ of the interparticle distance
$r$ results. The dispersion forces are modeled by Lennard-Jones
(LJ) potentials. For the sake of computational efficiency, all
potentials are cut at the distance $r=r_c=2(2^{1/6})\sigma$ and
shifted to zero at r$_c$ ($\sigma $ is the range parameter of the LJ
potential). When we deal with alkane chains, we disregard any
torsional forces and bond-angle potentials and integrate a few
successive chemical monomers into one effective monomeric unit
(cf.\ fig.~\ref{fig1}). This is done in the way that one such unit contains
three carbon-carbon bonds between successive carbon atoms, and we
do not distinguish between interior CH$_2$ monomers and the CH$_3$
groups at the chain ends. Thus, for example, hexadecane
(C$_{16}$H$_{34}$) is represented by a chain molecule containing
five effective monomers (see fig.~\ref{fig1}).\cite{44,51}
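For reference, the cut-and-shifted LJ interaction described above reads, in a minimal Python sketch (reduced units; the function name is ours):

```python
def lj_cut_shifted(r, sigma=1.0, eps=1.0):
    """Lennard-Jones potential truncated at r_c = 2*2**(1/6)*sigma and
    shifted so that it goes to zero continuously at the cutoff."""
    rc = 2.0 * 2.0 ** (1.0 / 6.0) * sigma
    if r >= rc:
        return 0.0
    def lj(rr):
        s6 = (sigma / rr) ** 6
        return 4.0 * eps * (s6 * s6 - s6)
    return lj(r) - lj(rc)
```

Note that the shift raises the potential minimum from $-\epsilon$ to about $-0.969\,\epsilon$.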
The procedure of coarsening
three carbon atoms into one bead has proven to be optimal
in several theoretical investigations \cite{54}
(see Sec.\ 4.3.2). We stress that the particular choice of coarsening
three carbon units into one bead has nothing to do with the physical lengths
of the chain (like for instance the Kuhn length), but is a choice
that depends more on the potentials used. Indeed as
neighboring beads along a chain interact with a bonding potential
(see Sec.\ IIIC for definitions) in addition to the Lennard Jones potential, the
coarse grained model of the chain exhibits a degree of local stiffness, although neither
bond-angle nor torsional potentials are included explicitly.
This implies that the Kuhn length is longer
than the diameter of our beads.
Of course, the suitable choice of parameters is crucial for such coarse-grained models:
we choose the strength of the quadrupole moment $Q$
(if there is one) such that it is compatible with experimental data, and adjust
the range $\sigma$ and strength $\epsilon$ of the LJ potential
such that the experimental critical density $\rho_c$ and critical
temperature $T_c$ are reproduced precisely in the simulation. In
Sec.\ III, we will briefly discuss the accuracy of this procedure
for a variety of pure systems (noble gases, CO$_2$, CH$_4$,
C$_6$H$_6$, short alkanes) while Sec.\ IV contains the central
part of our work, in which we present a variety of results for binary
mixtures. The additional interactions needed for the mixtures are
chosen by the simple Lorentz-Berthelot combining rules.\cite{52}
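These combining rules are elementary, namely the arithmetic mean of the diameters and the geometric mean of the well depths (a Python sketch; the function name is ours):

```python
import math

def lorentz_berthelot(sig_a, eps_a, sig_b, eps_b):
    """Unlike-pair LJ parameters from the Lorentz-Berthelot combining rules:
    arithmetic mean of the diameters, geometric mean of the well depths."""
    return 0.5 * (sig_a + sig_b), math.sqrt(eps_a * eps_b)
```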
Technical aspects of our simulations are similar to previous
studies.\cite{36,37,44} Far from the critical point coexistence
densities are computed using the successive sampling algorithm of
Virnau and M\"uller \cite{24} in which high free energy barriers
are overcome constraining the algorithm --at a certain time of the
simulation-- to sample configurations of a system where the
number of particles is $n$ or $n+1$.
Varying $n$ from $n=0$ to $n=N_\mathrm{MAX}$ one is able to reconstruct (after
proper reweighting) the free energy profile $F(n)$ at coexistence in the range
of densities of interest.
At phase coexistence, we expect a distribution $F(n)$ with two
peaks (corresponding to the two coexisting phases that differ in
particle number) which have equal weight.
In a few very fast runs (using a small cubic box
$L\approx 7 \sigma_M$, where $\sigma_M$ is the biggest LJ length parameter of
the model), invoking the equal weight rule for $F(n)$, we
are able to tune the chemical potential(s) to their coexistence values, with
a reasonable error ($\approx$1--5\%), which in some cases is already sufficient.
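The equal weight rule itself is a simple histogram-reweighting computation. The following self-contained Python sketch (with a synthetic free energy profile, not our simulation data) recovers the chemical potential shift that equalizes the weights of the two peaks:

```python
import math

# Toy free-energy profile F(n) at a trial chemical potential: a symmetric
# double well (gas peak near n = 25, liquid peak near n = 75) plus a linear
# tilt mimicking a chemical potential slightly off coexistence.
N_MAX = 100
TILT = 0.05  # in units of k_B*T per particle

def F0(n):
    return (n - 25.0) ** 2 * (n - 75.0) ** 2 / 1.0e5 + TILT * n

def peak_weights(dmu):
    """Reweight P(n) ~ exp(-F0(n) + dmu*n) and sum over the two peaks."""
    p = [math.exp(-F0(n) + dmu * n) for n in range(N_MAX + 1)]
    return sum(p[:50]), sum(p[51:])  # gas (n < 50) and liquid (n > 50) weights

# Bisection on dmu until the equal weight rule is satisfied.
lo, hi = 0.0, 0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    gas, liq = peak_weights(mid)
    if liq < gas:
        lo = mid
    else:
        hi = mid
dmu_coex = 0.5 * (lo + hi)  # recovers the imposed tilt of 0.05
```

Bisection recovers the imposed tilt, i.e.\ the coexistence value of the chemical potential shift.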
Then, we start a second long simulation for a larger elongated box (to
enhance the formation of the liquid-gas interface) $V=2\cdot L^3$ with $L=9
\sigma_M$ in which every window is sampled with 2-10$\cdot 10^4$ MC
steps. Every MC step includes: 100 grand canonical moves in which we try to
insert/delete solvent (and chain) particles, 1 local move in which a number
of monomers equal to the total number of monomers are rearranged, and
10$\cdot$N$_\mathrm{chain}$ reptation moves, where N$_\mathrm{chain}$ is the
number of the chains in the box.
Such a run requires on average 10 h of cpu time on 32 nodes of an IBM Power4
cluster. The precision of the measured coexistence densities (for
instance) is roughly 1\%. Using a spherically averaged potential allows us to speed up
computations by a factor of $\approx$5 \cite{37} in comparison with the full quadrupolar model. A number of chains
$N_\mathrm{MAX}\approx 1100$ usually allows a complete sampling of
the liquid peak, while the number of solvent particles is
typically of the same order of magnitude. We emphasize that --unlike simulations
in the Gibbs-ensemble-- in addition to the densities and compositions of
the coexisting phases and their compressibilities, the simulation technique
also provides information about the interface tension.
At the critical point,
we use the same kind of simulation described above, but without the
constraint: at all times the number of particles is free to fluctuate
over the whole range [0,$N_\mathrm{MAX}$]. For more details on the finite
size analysis used we refer the reader to Sec.\ IVC.
Even with all these simplifying approximations, establishing the
phase behavior and thermodynamic properties of binary mixtures
comprehensively still requires a substantial Monte Carlo effort.
Far away from critical points, such an effort is not
needed, and one can instead use an analytical equation of
state. We use a previously developed theory based on
Wertheim thermodynamic perturbation theory
\cite{52'} (TPT). We strictly follow Refs.\ \onlinecite{53,McD_2002}.
In particular, the free energy of the system $A$ is decomposed into
a contribution due to a mixture of unbonded monomers (the reference system)
plus a contribution due to chain associativity $A_\mathrm{chain}$,
\cite{McD_2002} $A=A_\mathrm{ref}+A_\mathrm{chain}$. Wertheim's
theory allows us to compute $A_\mathrm{chain}$ perturbatively
using quantities of the reference system (like pair correlation functions)
and the known bonding potential. We use a first order perturbation
theory (TPT1), which at this level reduces the problem to the
computation of pair correlation functions and the free energy
($A_\mathrm{ref}$)
of a binary mixture of non-bonded monomers interacting
with LJ potentials (chain-chain monomers and solvent-chain monomers) and the LJ
+ quadrupolar interaction (see Sec.\ IIIB) for solvent-solvent monomers.
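The structure of the TPT1 decomposition can be illustrated for the textbook case of tangent hard-sphere chains (a generic sketch, not the LJ reference system actually used here): the association contribution per chain of $m$ beads is $\beta A_\mathrm{chain}/N_\mathrm{chain} = -(m-1)\ln y(\sigma)$, with the cavity function approximated by the reference contact value, for which the Carnahan-Starling expression can be used.

```python
import math

def g_contact_hs(eta):
    """Carnahan-Starling contact value of the hard-sphere pair
    correlation function at packing fraction eta."""
    return (1.0 - 0.5 * eta) / (1.0 - eta)**3

def beta_a_chain(m, eta):
    """TPT1 chain (association) free energy per chain, in units of k_B T,
    for chains of m tangent hard spheres."""
    return -(m - 1) * math.log(g_contact_hs(eta))

# At liquid-like packing the contact value exceeds 1, so chain formation
# lowers the free energy relative to the fluid of unbonded monomers.
```

In the approach used here the same structure applies, with the hard-sphere contact value replaced by the pair correlation function of the LJ reference mixture.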
$A_\mathrm{ref}$ is computed using standard perturbation theory: the
Ornstein-Zernike equation is solved using a Mean Spherical (MSA)
closure.\cite{HM-86}
In particular, one chooses as reference system a mixture of
hard spheres with diameters computed using the repulsive part of the
monomer-monomer potential in a Barker Henderson approximation, \cite{BH-67}
while the attractive part of the potential is treated as a perturbation.
A MSA solution is then obtained using the analytical implementation of
Tang and Lu, \cite{53a,53b} in which the repulsive part of the LJ
potential is fitted by a pair of Yukawa tails, which allows one to
obtain an analytical result.\cite{McD_2002} In our present
modeling approach we need to consider LJ potentials plus quadrupolar
interactions. This problem has been solved in \onlinecite{36} (see App.\ A
of \onlinecite{36}) by applying a second pair of Yukawa tails to fit the
quadrupolar interaction.
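The Barker-Henderson prescription used for the reference system can be evaluated numerically; a minimal sketch (the positive, $r<\sigma$ part of the LJ potential stands in for the repulsive reference here, and $T$ is in units of $\epsilon/k_B$):

```python
import math

def u_lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones potential (positive for r < sigma)."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def bh_diameter(T, eps=1.0, sigma=1.0, n=2000):
    """Barker-Henderson effective hard-sphere diameter
    d(T) = int_0^sigma [1 - exp(-u(r)/k_B T)] dr, midpoint rule."""
    h = sigma / n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        acc += (1.0 - math.exp(-u_lj(r, eps, sigma) / T)) * h
    return acc

# The effective diameter is smaller than sigma and shrinks with rising
# temperature, since hotter particles penetrate deeper into the soft core.
```

This temperature-dependent diameter then defines the hard-sphere reference mixture for the MSA treatment described above.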
In our MSA scheme we also use a ``one fluid approximation''.\cite{HM-86}
Our results will show that this simple theory is able to reproduce
the MC data rather well away from the critical point.
On the other hand, large discrepancies occur near the critical points, due
to the mean field nature of the MSA, whereas experimental results exhibit
critical behavior characteristic of the Ising universality class.
We are aware of significant efforts
to design proper EOS which include Ising fluctuation near the
critical point.\cite{Parola_2008,Parola_1984} However, such
investigations are beyond the scope of the present paper.
Another popular method based on TPT1 is known as
``statistical associating fluid theory'' (SAFT). \cite{saft}
We want to stress that Monte Carlo simulations
remain an indispensable tool in investigations of the phase behavior
of polymer solutions and mixtures.
Indeed,
in the present study the model parameters
($\epsilon$, $\sigma$ and $q_c$) have been determined\cite{36}
using the simulation critical points which were obtained by Monte
Carlo simulation in \onlinecite{36}. (Any mean field approximation
has difficulties in reproducing the critical line with sufficient accuracy).
Note that supercritical fluids are interesting
and useful for practical applications, mainly
due to
their high compressibility and the concomitant large variations
of density upon small changes of pressure,
which are the origin of the breakdown of a mean field
approximation like TPT.
This means that in some very interesting regions of the phase
diagram Monte Carlo simulations are indeed a very valuable tool.
\section{Phase behavior of selected pure systems}
When we discuss the extent to which the Lorentz-Berthelot combining
rule can account for the phase behavior of mixtures, we need to
distinguish between inaccuracies arising from an imperfect
description of the pure components and those arising from the
Lorentz-Berthelot rule.
Therefore it is
necessary to give an overview of our modeling of the pure
components at the outset. Note that a possible additional source of error
is entropic packing effects of non-spherical molecules, which may
show up differently in a mixture of two molecules having different
shapes than in a pure system, where all molecules have the
same shape. Such effects are lost in our coarse-grained
models. However, this latter criticism cannot be applied when we
consider mixtures of noble gases, since in the framework of
classical statistical mechanics the description of noble gas atoms
as point particles, where two such atoms interact with a potential
depending on the absolute value of their distance only,
is certainly appropriate. (Disregarding the case of He, quantum effects
are indeed negligible at the temperatures of interest \cite{55}.)
For that reason, noble gases are also included in our discussion,
because they will bring out the possible limitations of our
modeling in terms of pair-wise effective potentials between
point-like particles most clearly. Thereafter, we shall deal with
CO$_2$, C$_6$H$_6$, CH$_4$, and selected short alkanes.
\subsection{Noble Gases}
The interaction between neutral point-like particles in our work
is always described by the Lennard-Jones (LJ) potential,
\begin{equation}\label{eq1}
U_{ij}^{LJ} = 4 \epsilon [(\frac {\sigma}{r_{ij}})^{12} -
(\frac{\sigma}{r_{ij}})^6]\quad .
\end{equation}
Rather than working with the full LJ potential as written in
Eq.~(\ref{eq1}), we find it computationally more convenient and
efficient to cut off this potential at $r=r_c=2^{7/6} \sigma$ and
shift it to zero there, such that
\begin{equation}\label{eq2}
U_{ij}(r) = U_{ij}^{LJ}(r)+ 4 \epsilon S\quad , \quad U_{ij}(r\geq r_c)=0\quad ,
\end{equation}
where $S = 127/16384$ for our choice of $r_c$, so that the potential
is continuous everywhere. When we require that Eqs.~(\ref{eq1},
\ref{eq2}) yield a vapor-liquid phase diagram such that the
critical temperature $T_c$ coincides with the experimental
critical temperature $T^{\textrm{exp}}_c$ of a particular system,
the strength ($\epsilon$) of the LJ potential is fixed once
$T_c^* = k_BT_c/\epsilon$ has been determined for the model.
Likewise, requiring that the critical density $\rho_c$ of the
model coincides with the experimental critical density
$\rho_c^{\textrm{exp}}$ of that system the range $(\sigma)$ of the
LJ potential is fixed once $\rho_c^*=\rho_c \sigma ^3$ is known for the
model. Here, $T^*=k_BT/\epsilon $ and $\rho^*=\rho\sigma
^3$ are dimensionless temperature and density, respectively.
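Eqs.~(\ref{eq1}), (\ref{eq2}) translate directly into code; a minimal sketch (the value of $S$ follows from requiring $U(r_c)=0$ at $r_c = 2^{7/6}\sigma$, where $(\sigma/r_c)^{12} = 1/16384$ and $(\sigma/r_c)^6 = 1/128$):

```python
R_C_FACTOR = 2.0**(7.0 / 6.0)    # r_c = 2^(7/6) sigma
S = 127.0 / 16384.0              # shift constant making U(r_c) = 0

def u_lj_ts(r, eps, sigma):
    """Truncated-and-shifted Lennard-Jones potential, Eqs. (1)-(2)."""
    rc = R_C_FACTOR * sigma
    if r >= rc:
        return 0.0
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6 * sr6 - sr6 + S)

# Check of the shift: at r_c the bracket in Eq. (2) reads
# 1/16384 - 128/16384 + 127/16384 = 0, so the potential vanishes
# continuously at the cutoff.
```

The potential minimum sits at $r = 2^{1/6}\sigma$ with depth $-\epsilon + 4\epsilon S \approx -0.969\,\epsilon$, slightly shallower than the full LJ well.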
Actually, the phase diagrams of both the full (untruncated) LJ
potential and of its truncated version Eqs.~(\ref{eq1}, \ref{eq2})
have been estimated with high precision.\cite{44,49}
Fig. 11 of Ref.\ \onlinecite{36} compares these phase diagrams
with each other and
with experimental data for the noble gases Ne, Ar, Kr and Xe.
\cite{56} One can see that in this scaled representation the
differences between the phase diagrams based on full and truncated
LJ models are quite minor. Although noble gases are thought to be
the best possible experimental realization of LJ fluids, the
agreement is not perfect either: while Ne and Ar are very close to
the LJ prediction, the data for the fluid branch of Kr and Xe are
somewhat off. This implies that even noble gases do not strictly
satisfy the ``law of corresponding states'', and hence a
description in terms of classical point particles interacting with
purely pairwise potentials of the same functional form,
$U_{ij}(r)=\epsilon \; f(r/\sigma)$, with one parameter for the
strength $(\epsilon$) and another for the range $(\sigma)$ of the
potential cannot be strictly true, irrespective of the form of the
function $f(r/\sigma)$: either a more complicated form of the
pairwise interaction, involving a third system-specific parameter
is needed, or (what is usually assumed) some effect of three-body
interactions \cite{57,58,59} are present.
An even more pronounced deviation from the simple LJ model shows
up, however, when additional quantities are analyzed, such as the
vapor pressure $p_{\textrm{coex}}(T)$ at liquid-vapor coexistence
and the interfacial tension $\gamma(T)$ between the coexisting
vapor and liquid phases of the fluid (see figs.~\ref{fig3},
\ref{fig4}). It is clear that adjusting $\sigma$ from
$\rho_c^{\textrm{exp}}$ implies that the whole curve for the
coexistence pressure $p_{\textrm{coex}}(T)$ in the $(p,T)$ plane
is underestimated for both Kr and Xe. This is a serious drawback
for the description of binary mixtures, of course, where one
wishes to work in the $(T,p,x)$ ensemble, $x$ being the molar
fraction of the solute. Therefore, we have tried an alternative,
namely adjusting $\sigma$ such that the experimental critical
pressure $p_c^{\exp} =p_{\textrm{coex}}(T_c)$ is correctly
reproduced. For Kr the critical temperature $T_c^{\exp} = 209.46~K$
\cite{56} implies $\epsilon = 2.8971 \cdot 10^{-21}~J$. If one
uses $\rho_c=11.0~\mathrm{mol}/\ell$ \cite{56} to fit $\sigma $ one
obtains $\sigma = 3.6524~$\AA , while using $p_c^{\exp}= 55.20$ bar
\cite{56} instead would yield $\sigma = 3.58782~$\AA.
(For a discussion of the accuracy of our estimation of $\epsilon$ and
$\sigma$, we refer to table I. In order to guarantee the
reproducibility of our results we always present $\epsilon$ and
$\sigma$ with all the digits that have been used in our programs.)
Fig.~\ref{fig4} shows that a somewhat better description of
the vapor pressure $p_{\textrm{coex}}(T)$ is obtained over the
full temperature regime $140~K < T <T_c^{\exp}$.
The deviation from the data for the surface tension $\gamma$ has also
become smaller (fig.~\ref{fig3}b), but now there is a strong
deviation between the data for the liquid branch of the
coexistence curve and the model (fig.~\ref{fig3}a).
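The $\rho_c$-based fitting route for Kr can be retraced numerically; a sketch, where $T_c^* \approx 0.998$ and $\rho_c^* \approx 0.32$ are rough assumed values for the truncated-and-shifted LJ fluid (the precise model values, taken from the Monte Carlo estimates cited above, produce the digits quoted in the text):

```python
k_B = 1.380649e-23        # J/K
N_A = 6.02214076e23       # 1/mol

# Assumed (approximate) critical parameters of the truncated LJ model:
T_C_STAR = 0.998          # k_B T_c / eps
RHO_C_STAR = 0.32         # rho_c sigma^3

def fit_eps_sigma(T_c_exp, rho_c_exp_mol_per_l):
    """Fix eps from T_c and sigma from rho_c for a monatomic fluid
    (one particle per molecule)."""
    eps = k_B * T_c_exp / T_C_STAR                 # J
    n_c = rho_c_exp_mol_per_l * 1.0e3 * N_A        # particles / m^3
    sigma = (RHO_C_STAR / n_c)**(1.0 / 3.0)        # m
    return eps, sigma

# Kr: T_c = 209.46 K, rho_c = 11.0 mol/l (experimental input).
eps_kr, sigma_kr = fit_eps_sigma(209.46, 11.0)
```

With these rough model constants one recovers $\epsilon \approx 2.90 \cdot 10^{-21}~J$ and $\sigma \approx 3.64~$\AA, consistent with the values quoted above to within the uncertainty of the assumed $T_c^*$ and $\rho_c^*$.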
Similar problems are observed for
Xe, where $T_c^{\exp} = 289.74~K$ yields $\epsilon =
4.00747 \cdot 10^{-21}~J$, while the use of $\rho_c^{\exp}=8.371~\mathrm{mol}/\ell$
yields $\sigma = 4.00053~\mathrm{\AA}$ and use of $p_c^{\exp}= 58.41$ bar
yields $\sigma = 3.92326~\mathrm{\AA}$. Figs.~\ref{fig3}, \ref{fig4} show
that for these noble gases the description of the coexistence
curve, vapor pressure at coexistence and surface tension
is clearly not as good as for the model of CO$_2$ and C$_6$H$_6$ proposed in
Ref.\ \onlinecite{36}. These problems carry over to our modelling of
binary rare gas mixtures (see Sec.~III A), as the comparison with
experimental data shows.\cite{59'}
At this point we recall that as outlined in the introduction,
the investigation of such a system has been undertaken in order
to get an order of magnitude estimate of the errors inherent to
our very simple models: the goal is not the derivation of a very
elaborate description of noble gas mixtures.
Figure \ref{fig4} clearly shows that this model allows for a fairly good
description of the mixture phase diagram (compared to the other
mixtures presented in this work). In the present
context it was not necessary to include the more complex potentials
that have been available in the literature for a long
time (e.g.\ \onlinecite{ARD-79,KH-70,HK-72}).
\subsection{Small Molecules: Methane, Carbon Dioxide, Benzene}
Methane (CH$_4$) is also described as a point particle, and again we
take Eqs.~(\ref{eq1}), (\ref{eq2}) as a coarse-grained
description of the interaction between methane molecules. Using
$T_c^{\exp}= 190.6~K$ \cite{56} and $\rho_c^{\exp} = 10.1~\mathrm{mol}/\ell$
\cite{56} as experimental input to determine
$\epsilon$ and $\sigma$, we obtain $\epsilon =
2.63624\cdot 10^{-21}~J$ and $\sigma = 3.75792~\mathrm{\AA}$. Fig.~\ref{fig5}
compares the resulting model prediction for the coexistence curve
in the temperature-density plane, the vapor pressure at
coexistence and the surface tensions with the corresponding
experimental data.\cite{56} It is remarkable that in this case
the simple potential model \{Eqs.~(\ref{eq1}), (\ref{eq2})\} works
better than in the case of the noble gases.
For molecules such as carbon dioxide (CO$_2$) and benzene
(C$_6$H$_6$) the situation is more complicated: while CH$_4$ is a
molecule of approximately spherical shape and does not have a
quadrupole moment, both CO$_2$ and C$_6$H$_6$ have quadrupole
moments. Note that (at least to a very good approximation
\cite{60,61}) CO$_2$ is a linear molecule while C$_6$H$_6$ is
disk-like. In \onlinecite{36} we have shown that a very good description
for both molecules is obtained when Eqs.~(\ref{eq1}),
(\ref{eq2}) are augmented by a quadrupole-quadrupole interaction term. As the
latter is only a relatively small perturbation of the
Lennard-Jones-type interaction, it suffices to treat the
(angular-dependent) quadrupolar interactions via thermodynamic
perturbation theory. To leading order this yields the
following effective potential \cite{36,62,63}
\begin{equation}\label{eq3}
U_{ij}^{IQQ} = - \frac 7 5 \frac {1}{k_BT} Q^4/r_{ij}^{10} .
\end{equation}
Here, $Q$ is the strength of the quadrupole moment of the considered
molecule. Note that the interaction is isotropic and inversely
proportional to temperature. We also cut off this part of the
interaction at the same radius $r_c$ as the LJ interaction, and
shift it to zero at $r_c$ as well, which yields the following total
pairwise interaction for these molecules
\begin{eqnarray}\label{eq4}
U(r_{ij})= \left\{ \begin{array}{r@{\; , \;}l} 4 \epsilon
[(\sigma/r_{ij})^{12}-(\sigma/r_{ij})^6- \frac{7}{20}
q(\sigma/r_{ij})^{10} + S] & r \leq r_c \\ 0 & r \geq r_c
\end{array} \right.
\end{eqnarray}
where
\begin{equation}\label{eq5}
S = \frac {127}{16384} + \frac 7 5 \frac {q}{256}
\end{equation}
and $q$ is the reduced quadrupolar interaction parameter,
\begin{equation}\label{eq6}
q = Q^4/[\epsilon \sigma ^{10} k_BT] = q_cT_c/T\quad , \quad
q_c\equiv q(T_c) \quad .
\end{equation}
Note that Eq.~(\ref{eq6}) is given in CGS units; in SI units, there
would be an additional factor $(4\pi \epsilon_0)^{-2}$.
Using Eqs.~(\ref{eq4})-(\ref{eq6}), one can fix $\epsilon$
and $\sigma$ such that the critical temperature $T_c^{\exp}$ and
critical density $\rho _c^{\exp}$ are reproduced. (For $Q$,
the experimental value is taken as a first guess.) As discussed in
Ref.~\onlinecite{36}, this leads to a self-consistency problem, since
Eq.~(\ref{eq6}) must hold together with
\begin{equation}\label{eq7}
\epsilon(q_c) = k_BT_c^{\exp} /T_c^* (q_c), \quad \sigma ^3 (q_c)
= [ \frac {\rho_c^* (q_c)M_{\textrm{Mol}}}{\rho_c^{\exp}N_A}] \;,
\end{equation}
where $M_{\textrm{Mol}}$ is the molar mass of the molecule and
$N_A$ is Avogadro's number. This problem was solved in \onlinecite{36}
by determining the functions $T_c^*(q_c)/T_c^* (0)$, and $\rho
_c^*(q_c)/\rho_c(0)$ by extensive Monte Carlo simulations for a
broad range of values for $q_c$. It turns out that for CO$_2$ the
experimental value $Q=4.3 \pm 0.2 D \mathrm{\AA}$ yields
\begin{equation}\label{eq8}
q_c = 0.387,\; \epsilon = 3.491 \times 10^{-21}~J, \quad \sigma =
3.785~\mathrm{\AA} \;,
\end{equation}
while for the case of benzene the value $Q=12~\mathrm{D\,\AA}$ would imply
\begin{equation}\label{eq9}
q_c = 0.247,\quad \epsilon = 6.910 \times 10^{-21}~J, \quad \sigma = 5.241~\mathrm{\AA} \quad .
\end{equation}
The corresponding results for the vapor-liquid coexistence curves
in the $(T,\rho)$ and $(p,T)$ planes as well as the temperature
dependence of the interfacial tension for both CO$_2$ and
C$_6$H$_6$ were already presented in \onlinecite{36} and shown to give a
rather good agreement with experiments. \cite{56}
Of course, the disregard of the angular dependence of the
quadrupolar part of the interactions is a matter of concern. This
point was investigated by us in \onlinecite{37}, where detailed
comparisons of Monte Carlo results for the full angular-dependent
quadrupole-quadrupole interaction and the isotropic approximation
\{Eqs.~(\ref{eq3})-(\ref{eq6})\} were performed for the case of
CO$_2$. It was shown \cite{37} that the model with LJ + full
quadrupolar interactions (which is still a crude coarse-grained
model, in comparison with all-atom models including partial
charges etc.) does not provide a better account of the
experimental data than the spherically averaged one.
Another point of concern is the possible sensitivity of the
results of such models to the precise value of $q_c$. Note that
$q$ is proportional to $Q^4$ \{Eq.~(\ref{eq6})\}. Consequently, a small
experimental error in $Q$ is magnified considerably. There may also
be systematic effects, since $Q$ is often determined in the dilute
gas phase, whereas here we are interested in densities around
the critical density, where $Q$ could be slightly renormalized.
Packing effects should also be taken into account.
Indeed, CO$_2$ is not a spherical
molecule, and at high density a local orientational order could arise.
Such packing could enhance favorable angular correlations that
give rise to a higher effective quadrupole moment.
One might also argue that the high temperature perturbation theory
\{see Eq.\ (\ref{eq3})\} is not very accurate and that higher order
terms could be important; however, our previous investigation,
in which the full (angular dependent) quadrupolar interaction was
considered,\cite{37} proves that this is not the case.
In addition, one may argue that the model of
Eqs.~(\ref{eq3})-(\ref{eq6}) is an effective model, intended for
a good representation of equation of state data, particularly for
vapor-liquid equilibria (VLE). Therefore, $q_c$ should be treated as
an effective parameter which can be used to optimize the description of such VLE
data. In this spirit, we have also tried different choices of
$q_c$ and found that a slightly better description of CO$_2$ is obtained for
\begin{equation}\label{eq10}
q_c = 0.47\;, \quad \epsilon = 3.349 \cdot 10^{-21}~J, \quad \sigma
= 3.803~\mathrm{\AA}.
\end{equation}
This choice was already included in our previous work.
\cite{36,37} For benzene, a very good agreement with experiments can be achieved for
\begin{equation}\label{eq11}
q_c=0.38\; , \quad \epsilon = 6.472\cdot 10^{-21}~J,\quad \sigma =5.284~\mathrm{\AA}.
\end{equation}
Fig.~\ref{fig6} presents the coexistence curve of benzene in the $\rho - T$
and $T-p$ planes as well as the interfacial tension.
Results based on Eq.~(\ref{eq11}) are compared with results based on
the previous choice \{Eq.~(\ref{eq9})\} \cite{36} and with
experimental data.\cite{56} The description of
the experimental data is remarkably good over a wide range of
temperatures. It turns out (see below) that these ``optimized''
choices of parameters \{Eqs.~(\ref{eq10}), (\ref{eq11})\} also yield
a much better description when we consider mixing behavior (e.g.
C$_6$H$_6$ + CH$_4$).
\subsection{Short Alkanes}
In this section we briefly discuss the extension of our
methodology to systems such as propane (C$_3$H$_8$), pentane
(C$_5$H$_{12}$) and hexadecane
(C$_{16}$H$_{34}$). These short alkanes are just
treated as test systems for our methodology and will be used in
Sec. III as components in binary mixtures. Our methodology can be
used, in principle, for any alkanes, provided information on the
vapor-liquid critical point ($T_c^{\exp},\rho_c^{\exp}$) is
available. (Unfortunately, this is not the case for much
longer chains).
As it was already emphasized (fig.~\ref{fig1}) we do not attempt an
all-atom description of alkanes. We also do not use a united atom
model where CH$_2$ (or CH$_3$) groups are described as one
spherical pseudo-atom.\cite{64,65} Such a model requires
torsional and bond angle potentials and is still rather demanding to
simulate. As indicated in fig.~\ref{fig1}, we reduce the
description to a coarse-grained bead-spring model, where a
small number of successive CH$_2$ or CH$_3$ groups are combined
into a single effective monomeric unit.
For C$_{16}$H$_{34}$ we choose 5 effective units, so
each unit contains about 3 C-C bonds. For pentane and hexane we
choose a dimer (but the effective
LJ parameters $\epsilon$ and $\sigma$ are different, of
course). Such a model is perhaps most questionable in the case
of C$_3$H$_8$, which we treat as a single effective unit (i.e.,
such molecules are treated like almost spherically symmetric
molecules such as methane).
We keep the (truncated and shifted) LJ potential
\{Eqs.~(\ref{eq1}), (\ref{eq2})\} between all pairs of effective
units, bonded and non-bonded ones. In addition we use the well-known FENE potential for the bonded ones \cite{66}
\begin{equation}\label{eq12}
U_{\textrm{FENE}}(r) = -33.75\epsilon \ln [1-(r/1.5\sigma )^2] \quad .
\end{equation}
We note that in Eq.\ (\ref{eq12}) $\epsilon$ and $\sigma$ are
the same parameters as in the LJ potential between
the monomers. The parameters of the FENE potential
have been chosen to prevent the
crossing of macromolecules in the course of their motion.
We note that this choice does not reproduce the characteristic ratio of alkanes accurately.
Since the FENE potential is thus fully constrained, the model
remains a two-parameter model, with $\epsilon$ and $\sigma$ chosen to match the
critical temperature and density.
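The combined bond potential of Eqs.~(\ref{eq1}), (\ref{eq2}) and (\ref{eq12}) can be sketched as follows; the bond-length minimum near $0.96\,\sigma$ is a property of this standard parameter choice, located here by a simple grid search:

```python
import math

EPS, SIGMA = 1.0, 1.0
S = 127.0 / 16384.0

def u_lj_ts(r):
    """Truncated-and-shifted LJ, Eqs. (1)-(2), valid for r < 2^(7/6) sigma."""
    sr6 = (SIGMA / r)**6
    return 4.0 * EPS * (sr6 * sr6 - sr6 + S)

def u_fene(r):
    """Eq. (12): diverges as r -> 1.5 sigma, preventing chain crossing."""
    return -33.75 * EPS * math.log(1.0 - (r / (1.5 * SIGMA))**2)

# Locate the minimum of the total bond potential on a fine grid.
rs = [0.80 + i * 1e-4 for i in range(4001)]      # r/sigma in [0.80, 1.20]
r_min = min(rs, key=lambda r: u_lj_ts(r) + u_fene(r))
```

Because the bond minimum ($\approx 0.96\,\sigma$) differs from the LJ minimum ($2^{1/6}\sigma \approx 1.12\,\sigma$), dense chain systems cannot crystallize easily, which is convenient for sampling the fluid phases.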
On this coarse-grained level both torsional potentials and
bond-angle potentials between effective beads are ignored.
Hence, it is worthwhile to test whether such crude models are still
able to reproduce the phase diagram and other thermodynamic
properties of the real system correctly. Thus,
figs.~\ref{fig7}-\ref{fig9} show results for the phase diagrams of
several members of the alkane series (including C$_3$H$_8$,
C$_5$H$_{12}$ and C$_{16}$H$_{34}$) in the T-$\rho$ plane, as well
as the corresponding coexistence pressures and interfacial
tensions between the coexisting vapor and liquid phases. The
agreement between the model results and the corresponding
experimental data \cite{56} is again remarkable, although it is not as
convincing as for methane (which we have included for comparison).
In particular, for C$_5$H$_{12}$ deviations
clearly occur. Table I collects the experimental critical
temperatures, densities, and pressures,\cite{56} as well as our
choices for $\epsilon$ and $\sigma$ for the materials studied, and
the prediction for the critical pressure that results from our
model.
In all cases the critical pressure is predicted with an accuracy of a few
percent, and a glance at Fig.\ \ref{fig8} shows that the slope of the
vapor pressure versus temperature curve is close to the slope derived from experiments, too. For
temperatures away from the critical region (say, 20$\%$ below
$T_c$), deviations
between experiment and the model predictions become visible in the
coexistence curve, the coexistence pressure, and the interface tension (Fig.\
\ref{fig9}), in particular for propane and pentane. Of course, the accuracy of
the modeling could be enhanced by allowing for additional adjustable parameters
like in many models in the literature, e.g.\ by introducing a
bond-angle potential, or more interaction sites (see e.g.\
\onlinecite{KT-2004}). Then, quantities such
as the acentric factor (referring to the shape of the coexistence curve 30$\%$
below $T_c$ \cite{Pitzer_1955}) can presumably be fitted nicely. However, the
simplicity of the coarse-grained model is lost.
Experience with
such somewhat more complicated models shows that these models still require
correction parameters $\xi$ to the LB combining rules that deviate from unity
by about 10$\%$ (see e.g.\ \onlinecite{Vrab2005}). Without these additional
parameters (note that it is not at all straightforward to find optimal values
for these parameters) the gain in accuracy that such models
yield for the description of mixtures is rather modest. Note that an important
motivation for the present work is to develop simple models suitable for the
simulation of polymer solutions (the case of hexadecane in CO$_2$ being just a
prototype case). We are not focusing on pushing the accuracy of modeling of
pure short alkanes to its limit.
\section{Phase behavior of selected binary mixtures}\label{secmixture}
Extending our treatment to binary systems (A,B) one wishes to
describe the interactions between unlike particles by a potential
of the same functional form as it is used for the interactions
between particles of the same type, i.e. the Lennard-Jones
potential in our case. The simplest choice, most often used in the
literature, is the Lorentz-Berthelot combining rule \cite{52}
\begin{equation}\label{eq13}
\sigma_{AB} = (\sigma_{AA} + \sigma_{BB})/2,\quad \epsilon _{AB} =
\sqrt{\epsilon_{AA}\epsilon_{BB}}
\end{equation}
As is well-known, there is really no convincing derivation of
Eq.~(\ref{eq13}), so there is no reason to believe that Eq.~(\ref{eq13})
is exact. At best it is a practically useful approximation. As a
matter of fact, several alternatives to Eq.~(\ref{eq13}) have been
proposed in the literature.\cite{45,52,67,68,69} Although it has
been demonstrated that there are some cases where some of these
alternative combining rules work better, in general none of these
alternative combining rules has a really clear advantage.\cite{41}
Since we wish to explore a very simple and general approach, we do
not implement any alternatives to the simple Lorentz-Berthelot
rule in our paper, even when one has to pay the price of
sacrificing a small improvement in the accuracy of our modeling.
We also note that the Lorentz-Berthelot rule works very well
for the prediction of virial coefficients for the mixture of argon
plus CO$_2$, a mixture of an apolar and a quadrupolar fluid.\cite{69'}
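Eq.~(\ref{eq13}) in code form, applied to the Kr and Xe parameters of Sec.~II A (the $\rho_c$-fitted values quoted there):

```python
import math

def lorentz_berthelot(eps_a, sigma_a, eps_b, sigma_b):
    """Lorentz-Berthelot combining rule, Eq. (13): arithmetic mean for
    the range, geometric mean for the strength of the cross interaction."""
    sigma_ab = 0.5 * (sigma_a + sigma_b)
    eps_ab = math.sqrt(eps_a * eps_b)
    return eps_ab, sigma_ab

# Kr and Xe parameters from Sec. II A (rho_c-fitted sigma):
eps_kr, sigma_kr = 2.8971e-21, 3.6524     # J, Angstrom
eps_xe, sigma_xe = 4.00747e-21, 4.00053   # J, Angstrom
eps_mix, sigma_mix = lorentz_berthelot(eps_kr, sigma_kr, eps_xe, sigma_xe)
```

No further mixture-specific input enters: the cross interaction is fully determined by the pure-component parameters.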
We stress that, proceeding in this way, no experimental input
from the mixture phase diagram is required:
the model for the mixture is fully predictive. This also holds for
the TPT1 computations, which require only
$\epsilon$ and $\sigma$, which can be obtained using Monte Carlo results
for the pure component critical line.\cite{36}
Coexistence densities and pressures have been computed as in
pure component systems.\cite{24} On the other hand, the computation
of the critical points is more complicated. Indeed,
in a binary mixture close to criticality the
proper identification of the order parameter is a subtle problem.
\cite{70} In principle, complete scaling \cite{71,72,73} in
the case of binary mixtures implies that three scaling fields
occur, which are linear combinations of four independent intensive
variables: the deviations of two chemical potentials, temperature,
and pressure from their values at the critical point.
Consequently, the order parameter density becomes a function of
the appropriate conjugate variable, and the relevant physical
densities (particle number densities, entropy density) become
nonlinear functions of the proper scaling fields.\cite{70}
Since this formalism is somewhat cumbersome for the case of
compressible binary fluid mixtures,
we simplify the problem by applying ``field-mixing''-procedures
analogous to the method of Wilding \cite{19,20,21} which is
rather successful for most one-component fluids.
Details of this procedure are reported in Appendix
\ref{appendix}, where we present the analysis for a critical point
of the krypton-xenon mixture. In order to estimate systematic
errors of this procedure, we also present there
results of a full finite size analysis based on cumulant
crossings \cite{16} for a highly asymmetric mixture,
carbon dioxide in hexadecane.
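The spirit of the field-mixing idea can be illustrated on synthetic data (a toy construction for illustration only; the variables below are not the actual simulation observables): a skewed "energy-like" variable contaminates the sampled density, and subtracting $s\,e$ with the properly chosen mixing parameter $s$ removes the asymmetry of the order-parameter distribution. In this linear toy model $s$ can be recovered directly from the covariance ratio.

```python
import random

def skewness(xs):
    """Sample skewness m3 / m2^(3/2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean)**2 for x in xs) / n
    m3 = sum((x - mean)**3 for x in xs) / n
    return m3 / m2**1.5

random.seed(7)
S_TRUE = 0.5
# Synthetic samples: skewed energy-like variable e, density n coupled to it.
es = [random.expovariate(1.0) for _ in range(200000)]
ns = [random.gauss(0.0, 0.3) + S_TRUE * e for e in es]

# Estimate the mixing parameter from the linear coupling.
me = sum(es) / len(es)
mn = sum(ns) / len(ns)
cov = sum((n - me_n) * (e - me) for n, e, me_n in zip(ns, es, [mn] * len(ns))) / len(es)
var = sum((e - me)**2 for e in es) / len(es)
s_hat = cov / var

# Mixed order parameter: the skewness inherited from e is removed.
mixed = [n - s_hat * e for n, e in zip(ns, es)]
```

The raw "density" samples are strongly skewed, while the mixed combination is symmetric to within sampling noise, which is the criterion exploited (in a more sophisticated form) by the actual field-mixing analysis.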
\subsection{Mixtures of Small Apolar Molecules}
As a first example of an apolar mixture we present results for
krypton plus xenon.
As it has been discussed in Sec.~II, the noble gases already exhibit rather
large deviations between the experimental data and the model
calculations based on the Lennard-Jones potential. Thus, it is
interesting to see whether these problems get even worse when
mixtures are considered.
The resulting critical line in the ($p,T$) plane
is shown in fig.~\ref{fig4} for both choices of $\epsilon$
and $\sigma$ as discussed in Sec.~II A.
If we fit $\rho_c$ and $T_c$ for the pure systems, the
predicted critical points for the mixture
deviate from the experimental curve about as much as for the pure
systems. If we adjust $\epsilon, \sigma$ such that $p_c,T_c$ is
reproduced, the data \cite{59'} for the two mixed systems are
almost perfectly reproduced.
The variation
of the critical concentration with temperature is also rather well
reproduced (fig.~\ref{fig11}) by both models where $\rho _c$ and $T_c$
or $p_c$ and $T_c$ are fitted to experimental values.
As a second case we now consider methane in propane.
In Secs.~II B, C we showed that the simple LJ model gives
a fairly accurate account of the equation of state of both CH$_4$
and C$_3$H$_8$. Therefore, it is natural to consider a mixture of those two
molecules as a next step. Of course, a comprehensive study of the
phase behavior of such mixtures in the space of all three
variables $(T,p,x)$ is a nontrivial effort. Therefore we limit
ourselves to consider only isothermal slices through the phase
diagram, following a standard practice in the literature.
\cite{41,43} As an example, fig.~\ref{fig12} shows two such
slices at $T=327K$ (a) and $T=277 K $ (b), and compares
experimental data \cite{74} with selected Monte Carlo data and results
from our implementation of the TPT1-MSA (which is described in
appendix B of \cite{36}).
We emphasize that the various parameters characterizing
the interactions among the various molecules are those obtained
from Monte Carlo simulations of the pure materials (Sec.\ II),
together with the Lorentz-Berthelot rule.
These parameters also serve as input for
TPT1-MSA: there are no additional parameters that
enter the latter approach.
Thus we present comparisons between experiments,
simulations and theory in which no adjustable parameters for the
mixture have been used.
It should be noted that both chosen temperatures in fig.\
\ref{fig12} fall
below the critical temperature of C$_3$H$_8$ but exceed the
critical temperature of CH$_4$. Therefore, the characteristic
bubble-shaped liquid-vapor coexistence curve results, starting out
at the ordinate axis at the vapor-liquid coexistence point of pure
C$_3$H$_8$, but not extending to CH$_4$ concentrations close to
$x=1$. The critical point occurs at the maximum of this closed
loop. (The liquid phase is located on the upper part of the loop to the
left of the critical point, the remaining part of the curve
describes the vapor). For $T=327K $ and $x \leq
0.35$ both experiment, TPT1 and Monte Carlo agree nicely. For
larger $x$, however, a systematic discrepancy between Monte Carlo data
and experiment shows up. The TPT1-MSA approximation overestimates
the critical pressure substantially. This problem already
occurs in the pure systems, as is well-known, and is an inevitable
consequence of simple mean-field-like approximations.
\cite{36,53,54} Fig.~\ref{fig5} shows that the critical
temperature and pressure of pure CH$_4$ are both overestimated.
The same holds for pure C$_3$H$_8$, and hence for the whole line of
critical points $T_c(x)$ connecting $T_c(0)$ and $T_c(1)$, if
we project it into the $(p,T)$ plane as we did for the Kr-Xe
mixture (fig.~\ref{fig4}). As in the latter case, the mixture of
CH$_4$ and C$_3$H$_8$ has a simple ``type I'' phase diagram in the
classification scheme of fluid binary mixtures \cite{38,39,40}
(type 1$^P$ in the modern classification \cite{D_98}). As
a consequence, we expect that TPT1-MSA predicts too large
vapor-liquid coexistence loops in the $(p,x)$ plane at all
temperatures that are supercritical for CH$_4$ but subcritical for
C$_3$H$_8$.
A more disturbing discrepancy seems to occur between the data
\cite{74} and the theoretical results at the lower temperature
$(T=277K)$, where at small $x$ the vapor pressure at coexistence
falls slightly but systematically below the experimental data. For
molar concentrations well below criticality, Monte Carlo results
and TPT1-MSA agree very well, and our numerical
procedures are accurate for our model. Hence, assuming that
the experimental data are accurate enough so that the discrepancy
is meaningful, this result indicates that some limitations of our
model become apparent. This is not really a surprise, of course,
because in the data for pure propane at this temperature
discrepancies of the order of a few percent do occur as well
(figs.~\ref{fig7}-\ref{fig9}).
As a third case we now consider the mixture of CH$_4$ and
C$_5$H$_{12}$, because for pentane slightly larger deviations
between the predicted and observed coexistence vapor pressure do
occur over a much broader temperature range (fig.~\ref{fig8}).
Indeed, the corresponding isothermal slices through the phase
diagram of that mixture (fig.~\ref{fig13}), which still is a
type-I phase diagram, show that slight but systematic
discrepancies are now seen at the higher temperature as well. At
the low temperature, the phase diagram can only be reproduced in a
rather qualitative manner. Note, however, that $T=237K$ is less
than 50\% of the critical temperature of pentane, where the
effective interactions of pentane were adjusted: of course, the
coarse-grained modelling used in our work should not be pushed to
too low temperatures. Keeping this limitation in mind, we conclude
that a rather satisfactory description of mixing behavior of these
systems is in fact reached by our models. Hoping for
perfect agreement would have been premature, in view of the
simplicity of our models. But the phase diagram predictions should
allow a useful first orientation at temperatures not too far below
the higher of the two critical temperatures of the components in such a binary
mixture.
\subsection{Mixtures of small molecules, one of which has a quadrupole moment}
We begin with a mixture of CH$_4$ and CO$_2$,
because for both pure molecules a particularly accurate
description of the equation of state was
obtained (see Sec.\ II). Again we note that the CH$_4$ + CO$_2$
system belongs to the category of ``type I'' phase diagram in the
classification scheme of Scott and van Konynenburg \cite{38,39,40}
(1$^P$ in the modern classification \cite{D_98})
and the temperature regime of interest for our modeling is the
regime in between the critical temperatures of the two
constituents of this mixture. Note that Eq.~(\ref{eq13}) only applies to
the LJ part of the interactions of CO$_2$, since CH$_4$ has
no quadrupole moment.
In fig.~\ref{fig14} we present isothermal slices through the
phase diagram in the space of variables $(T,p,x)$. If one uses TPT1-MSA,
the model for CO$_2$ based on Eq.~(\ref{eq10}) can describe the
mixing behavior with CH$_4$ very accurately at molar
concentrations $x$ of CH$_4$ and pressures that are not close to
criticality. As emphasized above,
mean-field theories such as TPT1-MSA are not expected to be
accurate near critical points. Hence the fact that
TPT1-MSA predicts too large a loop inside which two-phase
coexistence occurs is inevitable and expected. But for the model
Eq.~(\ref{eq10}) the part of the loop at not too large $x$ is
significantly more accurate (full curves) than a simple LJ model
for CO$_2$ would be (broken curves). As expected, at low
temperatures (such as $T=230K$) the quadrupolar model for CO$_2$
\{Eq.~(\ref{eq8})\} also starts to show slight but systematic deviations
from the experiment at the vapor branch of the vapor-liquid
coexistence curve. This is similar to our finding for the apolar
mixtures (Sec.~III B).
In order to verify that the good agreement between experiment and
theory for the quadrupolar model of CO$_2$ in
the CH$_4$+ CO$_2$ mixture is not just fortuitous, we
show in fig.~\ref{fig15} corresponding results for the mixture of
benzene (C$_6$H$_6$) and methane (CH$_4$). This is a more
stringent test, since the critical temperatures of the two
constituents are rather far apart from each other (cf.\
figs.~\ref{fig5}, \ref{fig6}). Nevertheless, the conclusions are
the same as in the case of CH$_4$+CO$_2$: using interaction
parameters that were optimized for the pure systems, namely those
of Eq.~(\ref{eq11}) in the case of C$_6$H$_6$, and adjusting them to
Monte Carlo results as described in Sec.~II, we can proceed to
the description of the mixture data \cite{78} and estimate the
missing mixed interaction parameters from the Lorentz-Berthelot
rule, Eq.~(\ref{eq13}). The use of these interaction parameters in a
simple and fast analytical theory for the EOS such as TPT1-MSA
then provides a satisfactory description of the phase behavior of
the mixture, apart from the vicinity of critical points (this
drawback can be rectified by carrying out MC work for the mixture
as well, of course) and for not too low temperatures. (For
temperatures of the order of 50\% of the critical temperature
$T_c$ of the constituent with the higher $T_c$ systematic
deviations start to appear rather generally.)
The last example of this section deals with a slightly more
complicated case, namely the CO$_2$ + C$_5$H$_{12}$ system
(fig.~\ref{fig16}): while CO$_2$ is still represented as a point
particle with a quadrupole moment, as in the previous examples,
the other partner of this mixture (C$_5$H$_{12}$) should not be
coarse-grained into a point particle any more, but rather needs to
be represented as a dimer (i.e., a dumbbell-like effective
molecule). In this case the TPT1-MSA theory predicts
unmixing over a far too large range of molar CO$_2$
concentrations, and the improvement provided by the inclusion
of the quadrupolar moment at small $x$ is only qualitative, but
not quantitative. On the other hand, the Monte Carlo results for
this model are in rather good agreement with the corresponding
experimental data.\cite{80m} Since MC and TPT1-MSA are using
precisely the same interaction parameters, we conclude that
for this particular case TPT1-MSA is somewhat inaccurate for the vapor branch
of the mixture, far away from criticality. A related discrepancy
was already noted for the CH$_4$ + C$_5$H$_{12}$ system at $T =
378K$ (fig.~\ref{fig13}a). Perhaps this indicates that TPT1-MSA
does not capture the statistical mechanics of flexible dimers well
enough.
\subsection{Polymer solutions: The CO$_2$+C$_{16}$H$_{34}$
system revisited}
Virnau et al.~\cite{44,44a,44b} already attempted to model this system,
describing CO$_2$ as a point particle with no quadrupole
moment. They found that using the Lorentz-Berthelot rule
\{Eq.~(\ref{eq13})\} the phase diagram predicted by the model
belongs to type I, while experiments suggest
\cite{78,79} that this system belongs to the type III class
(1$^C$1$^Z$, according to \onlinecite{D_98}, where 1$^C$ means that
the critical line emanating from the critical point of pure
hexadecane goes to high-pressure regions instead of joining the solvent critical
point as in diagrams starting with 1$^P$).
Virnau et al. \cite{44} proposed that one can improve the
description by using an empirical factor $\xi$ to modify
Eq.~(\ref{eq13}), assuming that $\epsilon_{AB} = \xi
\sqrt{\epsilon_{AA}\epsilon_{BB}}$ instead of
$\epsilon_{AB}=\sqrt{\epsilon_{AA}\epsilon_{BB}}$.
In the literature, the value of $\xi$ depends on the specific
mixture and typically is written in the form $\xi_{AB}=1-k_{AB}$,
with $k_{AB}\ge 0$. Of course,
there is not really a theoretical justification for doing so, and
$\xi$ simply plays the role of a fitting parameter. By trial and
error it was found that $\xi = 0.886$ provides a description
compatible with the experimental data.
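In code, this one-parameter modification of the Berthelot rule reads as follows (the pure-component well depths are hypothetical; only the factor $\xi=0.886$ is taken from the text):

```python
def mixed_epsilon(eps_aa, eps_bb, xi=1.0):
    """Berthelot rule with an empirical correction factor xi = 1 - k_AB;
    xi = 1 recovers the plain geometric-mean rule."""
    return xi * (eps_aa * eps_bb) ** 0.5

# Hypothetical pure-component well depths (in K):
plain = mixed_epsilon(290.0, 420.0)               # Lorentz-Berthelot value
corrected = mixed_epsilon(290.0, 420.0, xi=0.886)
k_ab = 1.0 - corrected / plain                    # the k_AB of the text
```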
In the present subsection of our paper, we show that the main
source of the problems encountered in \onlinecite{44} was the neglect of
the quadrupole moment. Thus, we have repeated the study of the
CO$_2$+C$_{16}$H$_{34}$ system, insisting on the Lorentz-Berthelot
rule, Eq.~(\ref{eq13}), but using Eq.~(\ref{eq10}) as an improved
model for CO$_2$, as in the previous subsection. Again, the
Lorentz-Berthelot rule is only applied to the Lennard-Jones part of the
interactions, since C$_{16}$H$_{34}$ does not have a
quadrupole moment.
Following the strategy of the previous subsection, we have
computed an isothermal slice through the phase diagram at $T=486
K$, where data from the previous simulation \cite{44} were
available both for $\xi=1$ and for $\xi = 0.886$. Indeed it is
found that the data of the present model ($\xi=1$, but optimized
quadrupolar interaction $q_c=0.47$ for pure CO$_2$) are well
compatible with the experimental data \cite{80} and almost fall on
top of the results of the previous calculation with $q_c=0$ and
$\xi=0.886$.\cite{44}
Of course, we have already seen in the previous subsections that
very good agreement with our description, based on a
very simplified model, often occurs only at high enough temperatures. In order
to test to what extent this problem arises for the present
system, we have followed the strategy of \onlinecite{44} to compute the
full critical line $T_c(x),p_c(x)$ for the full range of molar
concentrations $x$ of CO$_2$. Fig.~\ref{fig18} shows the resulting
projection into the $p^*,T^*$ plane. (Here $p,T$ are given in LJ
units, with the LJ parameters of the effective monomers used to
rescale the variables). One sees that the simulations with nonzero
quadrupole moment included in the figure are close to those for
$\xi=0.9$, $q_c=0$, for $T^*\leq 1.3$. As a consequence, the model
that we have developed for CO$_2$, Eq.~(\ref{eq10}), is still
not able to yield the correct phase diagram topology.
(For $\xi=0.9$, transition type IV was observed in Ref.~\onlinecite{44}, as opposed to type III which was observed experimentally.)
For $T^* <0.8$ the model does not
yet describe the properties of hexadecane + carbon
dioxide mixtures accurately, although for $T^*\geq 1.3 \; (T \geq 545 K)$ the
properties of the system are predicted rather satisfactorily. Of
course, this result is not unexpected.
For $T\leq
0.5T_c^{hex} \approx 360 K$ the model based on fitting the
critical parameters of hexadecane to fix its interaction
parameters starts to become inaccurate.
On the other hand, the proper prediction of the phase diagram type is
a very stringent test. Indeed, variation of the interaction parameters
by a few percent could drastically change the type of
the phase diagram. \cite{44}
\section{Conclusions}
In this paper, we have studied the phase diagrams of a variety
of fluid binary mixtures, with particular emphasis on mixtures
of alkanes in supercritical carbon dioxide and benzene. In order to
better understand the performance of our modeling for these systems,
we have also investigated mixtures with apolar solvents including
noble gases and methane.
We have investigated the accuracy of the use of the Lorentz-Berthelot
rules for describing the mixing behavior, based on interaction parameters
for the pure systems that are tuned such that the critical point
(critical temperature, critical density or pressure) of the pure systems
are well reproduced.
Using a simple Lennard-Jones model for interaction parameters of
pure apolar fluids, Monte Carlo calculations in the grand-canonical
ensemble, analyzed by appropriate finite size scaling methods, readily yield
the desired accuracy for this procedure.
For the polar molecules we use a spherically averaged point-like quadrupolar
interaction, \cite{62,63,36} which was shown to produce very good
phase diagrams, \cite{36} also when compared to more realistic atomistic
models. Our model takes as experimental input the critical temperatures
and densities of the pure components (as in previous coarse-grained schemes
\cite{44}) plus the experimental quadrupolar moments.
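The mapping from experimental critical data to LJ parameters can be sketched as follows; the reduced critical values `tc_star` and `rhoc_star` are model-dependent placeholders (they must come from a simulation of the model), and the CH$_4$-like inputs are for illustration only.

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro number, 1/mol

def lj_parameters_from_critical_point(tc_exp, rhoc_exp, molar_mass,
                                      tc_star, rhoc_star):
    """Fix eps and sigma of a one-bead LJ model so that its reduced critical
    point (tc_star = k_B T_c / eps, rhoc_star = n_c sigma^3) matches the
    experimental critical temperature (K) and mass density (kg/m^3)."""
    eps = K_B * tc_exp / tc_star                 # J
    n_c = rhoc_exp / molar_mass * N_A            # molecules per m^3
    sigma = (rhoc_star / n_c) ** (1.0 / 3.0)     # m
    return eps, sigma

# Illustrative CH4-like input with placeholder reduced critical values:
eps, sigma = lj_parameters_from_critical_point(
    tc_exp=190.6, rhoc_exp=162.7, molar_mass=16.04e-3,
    tc_star=1.31, rhoc_star=0.32)
```

The resulting $\sigma$ comes out in the few-Angstrom range, as expected for a single-bead representation of a small molecule.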
For pure CO$_2$ and C$_6$H$_6$ this choice leads to a
significant improvement in comparison with a simple LJ model
without explicitly accounting for the polar interactions. In
Ref.\ \onlinecite{36} and Fig.\ \ref{fig6}, as a second option, we
have treated the quadrupole moment as an effective parameter
in an attempt to optimize agreement with experiments.
We tune this parameter such that the liquid branch of the
vapor-liquid coexistence curve of pure CO$_2$ or pure C$_6$H$_6$
is optimally represented. In the case of benzene, for which
the optimization procedure seems to work very well,
the agreement with the coexistence pressure is also improved. (This is not
the case for CO$_2$, which is, however, better described than benzene
if the experimental values of the quadrupole moments are used.)
The physical reason why an effective quadrupole moment is needed is
presumably that actual molecules are not point-like particles:
CO$_2$ is a rather elongated molecule,
while C$_6$H$_6$ is disk-like. So packing effects should occur, i.e.\
local orientational correlations, which are underestimated by the
quadrupolar interaction.
Thus, it is gratifying to note that
a remarkable improvement of accuracy in the prediction of
the phase behavior of mixtures is achieved if this effective quadrupole moment is used.
These energy parameters, which we fixed from the description of
the pure systems, together with the Lorentz-Berthelot rules, allow us
to predict phase diagrams of mixtures, with no ambiguity whatsoever,
since no further adjustable parameters occur. Two methods of prediction
are used: (i) Monte Carlo simulations (ii) TPT1-MSA calculations. The Monte Carlo approach
has the substantial advantage that it is also accurate near critical
points of the mixture. In principle, we obtain the exact statistical
mechanics of the model system. Any discrepancy between experiment and prediction
is entirely due to a shortcoming of the (simplified) model.
The TPT1-MSA approach has the merit that relatively little computational
effort is necessary to implement it. However, it clearly involves
various approximations and hence the interpretation of discrepancies
between TPT1-MSA and experiment is less clear: part of them
stems from inadequacies of the model, part from the inaccuracy of the
approximations. For instance, TPT1-MSA, like all mean-field theories,
overestimates the critical temperature and pressure, so the
isothermal slices through the phase diagram of the mixture always
involve two-phase regions which are too large.
We note that fluids like CO$_2$ have an important application as
supercritical solvents. If one aims at describing the behavior in
the critical region of the pure solvent and of the mixtures
correctly, Monte Carlo methods have a clear advantage.
Now, one could try to readjust parameters in the TPT1-MSA approach
to improve agreement with experiment (as done, for instance, in
\onlinecite{McD_2002}, where $\epsilon$ and $\sigma$ for the EOS have
been rescaled in comparison
to the model used in MC simulations in order to properly reproduce
the critical points of the pure compounds), but this would be just an
attempt to provide a partial cancellation of errors, and in other regions
of the phase diagram the description would necessarily get worse.
\cite{McD_2002} Since we feel that relatively little physical insight
is gained by such fitting procedures, they have not been implemented in our
paper.
Our overall conclusion is that in the framework of the modeling
as defined above the Lorentz-Berthelot rules work very well, in
the sense that an ad-hoc change of mixed binary interactions by
at most a few percent (typically one or two percent) would lead
to almost perfect agreement with experiment.
As a piece of evidence for this claim, we note that in the study
of the CO$_2$+C$_{16}$H$_{34}$ system by Virnau et al., \cite{44}
where the CO$_2$ molecule was modeled as a point particle with
LJ interactions with no account of the quadrupole moment, a
correction factor $\xi=0.886$ to the Lorentz-Berthelot rule
was required to produce good agreement with experiment.
However, the present model (with a quadrupolar interaction and
no correction factor) yields results that are almost identical to
those of Virnau et al.\ \cite{44} when $\xi=0.900$ is chosen.
As a consequence, we conclude that in the present model a
correction factor $\xi\approx0.985$ would suffice to reproduce the results.
Noting that the Lorentz-Berthelot rule assumes that the
mixed correlation functions in the fluid described
atomistically behave in the same way as in the coarse-grained
descriptions, deviations of such a fitting parameter $\xi$
from unity in the range from 1 to 2\% are no surprise at all.
Thus we feel that the present level of accuracy cannot easily
be improved in the framework of our model. As it has been emphasized
already in the introduction, many more complicated models for fluids
are discussed in the literature (see \onlinecite{28,29,30,31,32,33,81}).
Optimizing parameters in those models such that the critical properties
of the pure systems and their vapor-liquid coexistence curves are
very well reproduced, might be an alternative starting point
to test the validity of combining rules.
\cite{Vrab2001,Stoll2003,Vrab2005} However, even for our very
simple model the Monte Carlo runs require substantial computer
resources, and hence we have not attempted to generalize our
approach to other models.
The strength of our method, compared to more detailed
models with many parameters whose optimization would require
massive computation, is its generality. The potentials for a given
mixture can be obtained from the results for the pure components
without any extra computational effort. \cite{36}
It is also important to mention that the present work
validates the use of the spherically averaged quadrupolar potential. \cite{62,63,36}
Finally, it is important to observe that our way of coarse-graining solvent
molecules into single beads has the advantage, with respect to atomistic models
such as multi-center Lennard-Jones, of being accessible to advanced
equation-of-state machinery. In this paper we have shown how significant
results can be obtained with rather small effort. We are also aware of the fact that
several improvements could be done (e.g.\ using some integral equation scheme
which should improve the MSA solution near the critical point). However, most
of the advanced methods in equation-of-state modeling apply only to
reference systems that are mixtures of monomers (i.e.\ beads with point-like
interactions), the associating part being taken into account by TPT1. On the
other hand, TPT1 gives a reasonable description only if the bead diameters
in the ``associated molecule'' do not overlap. This is of course the case
in our CG model for alkanes in which the experimental distance between three
carbon units (d=4.59 \AA) is bigger than the typical $\sigma$ used
($\sigma\approx$4 \AA).
(This is another reason why the FENE potential uses the same simulation
parameters as the LJ interaction.) The condition d$>\sigma$ guarantees that the reference
system (a mixture of monomers) is a good starting point for a perturbation
theory. On the other hand if d$<\sigma$ association is too strong and cannot
be properly taken into account by TPT1 (i.e.\ the monomer reference system is
not the adequate starting point for a perturbation expansion). In models which
describe simple solvent molecules with several interacting points (see e.g.\
Tab.\ 1 of \onlinecite{Vrab2001} for typical parameters of two center LJ) we have
d$<\sigma$: this implies that these models cannot be investigated with
associating theories. In conclusion, our modeling approach might enable
the application of modern EOS methods, which is another important motivation of our work.
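The applicability criterion discussed above amounts to a one-line check; the two-center LJ numbers in the second example below are hypothetical, chosen only to illustrate the d$<\sigma$ regime.

```python
def tpt1_reference_ok(d, sigma):
    """TPT1 builds chains from a monomer reference fluid, which is a sensible
    starting point for the perturbation expansion only if successive beads
    do not overlap, i.e. bond length d > sigma."""
    return d > sigma

# Values quoted in the text for the alkane coarse-graining (Angstrom):
cg_alkane_ok = tpt1_reference_ok(d=4.59, sigma=4.0)
# Hypothetical two-center LJ model of a small molecule (d < sigma):
two_center_ok = tpt1_reference_ok(d=1.0, sigma=3.5)
```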
We do feel that the approach
based on the present models is able to make nontrivial and
practically useful predictions for a large class of
systems. Hopefully more experimental data on mixtures will become available to
allow for more stringent tests.
\\
\\
{\bf ACKNOWLEDGEMENTS}
CPU time was provided by the NIC J\"ulich and the ZDV Mainz.
We would like to thank M.\ Oettel (University of Mainz), F.\ Heilmann and H.\ Weiss (BASF AG, Ludwigshafen) for fruitful discussions.
BMM would also like to acknowledge BASF AG (Ludwigshafen)
for financial support.
LGM wishes to acknowledge support from Ministerio de Educacion y
Ciencia
(project FIS2007-66079-C02-01) and Comunidad Autonoma de Madrid
(project MOSSNOHO-S0505/ESP/0299).
\section{Introduction}\label{sec1}
In the study of population genetics models, it is of great importance
to identify their stationary distributions. Such identifications
provide us with basic information of possible equilibria of the models
and are needed prior to quantitative discussions on statistical
inference. Since \cite{DK99,Hiraba} and \cite{BL}, theory of
generalized Fleming--Viot processes has served as a new area to be
cultivated and has been developed considerably. (See \cite{BB} for an
exposition.) In view of such progress, it seems that we are in a
position to explore the aforementioned problems for some appropriate
subclass of those models. In this respect, it would be natural to think
of the one-dimensional Wright--Fisher diffusion with mutation as a
prototype. This celebrated process is prescribed by its generator
\begin{equation}\label{1.1}
A:=\frac{1}{2}x(1-x)\frac{d^2}{dx^2} +\frac{1}{2}
\bigl[c_1(1-x)-c_2x \bigr]\frac{d}{dx},\qquad x \in[0,1],
\end{equation}
where $c_1$ and $c_2$ are positive constants interpreted as mutation
rates. The stationary distribution is a beta distribution
\begin{equation}\label{1.2}
{B}_{c_1,c_2}(dx):= \frac{\Gamma(c_1+c_2)}{\Gamma(c_1)\Gamma(c_2)}
x^{c_1-1}(1-x)^{c_2-1}\,dx,
\end{equation}
where $\Gamma(\cdot)$ is the gamma function. In addition, the process
associated with (\ref{1.1}) admits an infinite-dimensional
generalization known as the Fleming--Viot process with
parent-independent mutation, whose stationary distribution is
identified with the law of a Dirichlet random measure.
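As a quick numerical sanity check of this classical fact (a sketch, not part of any proof below), one can verify that the beta law (\ref{1.2}) annihilates the generator (\ref{1.1}) on monomials $x^n$, using the moments of the beta distribution:

```python
from math import gamma

def beta_moment(k, c1, c2):
    """E[x^k] under Beta(c1, c2)."""
    return gamma(c1 + k) * gamma(c1 + c2) / (gamma(c1) * gamma(c1 + c2 + k))

def integrated_generator(n, c1, c2):
    """Integral of A x^n against Beta(c1, c2), where A is the generator (1.1):
    A x^n = (n(n-1)/2)(x^{n-1} - x^n) + (n/2)(c1 x^{n-1} - (c1 + c2) x^n)."""
    m1 = beta_moment(n - 1, c1, c2)
    m2 = beta_moment(n, c1, c2)
    return 0.5 * n * (n - 1) * (m1 - m2) + 0.5 * n * (c1 * m1 - (c1 + c2) * m2)

# All residuals vanish (to rounding) iff Beta(c1, c2) is stationary:
residuals = [integrated_generator(n, 0.7, 1.9) for n in range(1, 8)]
```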
In the present paper, we consider a problem of finding a class of
generalized Fleming--Viot processes whose stationary distributions can
be identified. As far as the first term on the right-hand side of
(\ref{1.1}) is concerned, its jump-type version has been discussed in
population genetics as the generator of a model with ``occasional
extreme reproduction''. (See Section~1.2 of \cite{BB} for a
comprehensive account.) We additionally need to look for an appropriate
modification of the second term, which should correspond to a
generalization of the mutation mechanism. With these situations in
mind, our problems can be described as follows.
\begin{longlist}[(II)]
\item[(I)] By modifying both mechanisms of reproduction and mutation,
find a jump process on $[0,1]$ whose generator extends (\ref{1.1})
and whose stationary distribution can be identified.
\item[(II)] Establish an analogous generalization for the Fleming--Viot
process with parent-independent mutation.
\end{longlist}
Since these problems are rather vague, it may be worth showing now the
generator we will believe to give an ``answer'' to~(I). For each
$\alpha\in(0,1)$, define an operator $A_{\alpha}$ by
\begin{eqnarray}\label{1.3}
\qquad A_{\alpha}G(x)&=& \int_0^1\frac{B_{1-\alpha,1+\alpha}(du)}{u^2} \bigl[xG
\bigl((1-u)x+u \bigr)\nonumber
\\
&&\hspace*{82pt}{}+(1-x)G \bigl((1-u)x \bigr)-G(x) \bigr]
\nonumber\\[-8pt]\\[-8pt]
&&{} +\int_0^1\frac{B_{1-\alpha,\alpha}(du)}{(\alpha+1)u}\bigl[c_1G \bigl((1-u)x+u \bigr)\nonumber
\\
&&\hspace*{85pt}{} +c_2G \bigl((1-u)x\bigr)-(c_1+c_2)G(x) \bigr],\nonumber
\end{eqnarray}
where $G$ are smooth functions on $[0,1]$. Observe that
$A_{\alpha}G(x)\to AG(x)$ as $\alpha\uparrow1$. It should be noted that
$A_{\alpha}$ is a one-dimensional version of the generator of the
process studied in \cite{BBC} if $c_1=c_2=0$. See also \cite{F} and
\cite{FH}. The reader, however, is cautioned that our notation $\alpha$
is in conflict with that of these papers, in which $\alpha$ plays the
same role as $\alpha+1$ in our notation. (We~adopt such notation in
order for the formulae below to be simpler.) The constant $c_1$ (resp.,
$c_2$) in (\ref{1.3}) can be interpreted as the rate of ``simultaneous
mutation'' from one type to the other; a proportion $u$ of the
individuals of that type, which are supposed to have frequency
$1-x$ (resp.,~$x$) in the population, are involved in this ``mutation''
event with\vadjust{\goodbreak} intensity $B_{1-\alpha,\alpha}(du)/((\alpha+1)u)$. [Note
that $(1-u)x+u=x+u(1-x)$.] As will be seen in Proposition~\ref{3.1}
domain generates a Feller semigroup on $C([0,1])$, and our main concern
is the equilibrium state of the associated Markov process. It will be
shown in the forthcoming section that a unique stationary distribution
of the process governed by (\ref{1.3}) is identified with
\begin{eqnarray}\label{1.4}
\qquad && P_{\alpha,(c_1,c_2)}(dx)
\nonumber\\[-6pt]\\[-12pt]
&&\qquad:=\Gamma(\alpha+1) \int_0^1{B}_{c_1,c_2}(dy)
E_{\alpha,y} \biggl[(Y_1+Y_2)^{-\alpha};
\frac{Y_1}{Y_1+Y_2}\in dx \biggr],\nonumber
\end{eqnarray}
where $E_{\alpha,y}$ denotes the expectation with respect to
$(Y_1,Y_2)$ with law determined by $\log E_{\alpha,y}[e^{-\lambda_1
Y_1-\lambda_2 Y_2}] =-y\lambda_1^{\alpha}-(1-y)\lambda_2^{\alpha}$
($\lambda_1,\lambda_2\ge0$). Again we see that~(\ref{1.4}) with
$\alpha=1$ reduces to (\ref{1.2}).
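For $\alpha=1/2$ the law (\ref{1.4}) can be sampled directly, since a positive stable variable $S$ with $E[e^{-\lambda S}]=e^{-\lambda^{1/2}}$ is realized as $S=1/(2Z^2)$ with $Z$ standard normal, so that $Y_1=y^2S_1$, $Y_2=(1-y)^2S_2$. A short calculation not carried out in the text (apply the stationarity condition (\ref{e2.1}) below with $n=1$) gives $E_P[x]=c_1/(c_1+c_2)$, which the following importance-weighted Monte Carlo sketch reproduces:

```python
import random

def sample_P_half(c1, c2, n, rng):
    """Weighted samples approximating P_{1/2,(c1,c2)} of Eq. (1.4): draw
    y ~ Beta(c1, c2), build Y_i from one-sided 1/2-stable variables
    S_i = 1/(2 Z^2), and attach the density weight (Y_1 + Y_2)^{-1/2};
    the constant Gamma(3/2) cancels upon normalization of the weights."""
    xs, ws = [], []
    for _ in range(n):
        y = rng.betavariate(c1, c2)
        s1 = 1.0 / (2.0 * rng.gauss(0.0, 1.0) ** 2)
        s2 = 1.0 / (2.0 * rng.gauss(0.0, 1.0) ** 2)
        y1, y2 = y * y * s1, (1.0 - y) * (1.0 - y) * s2
        xs.append(y1 / (y1 + y2))
        ws.append(1.0 / (y1 + y2) ** 0.5)
    return xs, ws

rng = random.Random(12345)
xs, ws = sample_P_half(0.8, 1.2, 200_000, rng)
mean_P = sum(w * x for w, x in zip(ws, xs)) / sum(ws)  # near c1/(c1+c2) = 0.4
```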
One might think that (\ref{1.3}) is one of many possible
generalizations of (\ref{1.1}). In fact it arises naturally in the
following manner. It is well-known \cite{Shiga} that the Fleming--Viot
process with parent-independent mutation can be obtained by way of a
normalization and a random time change from a measure-valued branching
diffusion with immigration. (See also \cite{EM} and~\cite{P}.) An
extension of this significant result was shown in \cite{BBC} for a
class of generalized Fleming--Viot processes, which in the
one-dimensional setting corresponds to (\ref{1.3}) with $c_1=c_2=0$.
Moreover, \cite{BBC} proved that such a jump mechanism is necessary for
a generalized Fleming--Viot process to have the above mentioned link to
a measure-valued branching process with immigration (henceforth
MBI-process). Recently, \cite{FH} showed essentially that the second
term of (\ref{1.3}) is required when we additionally take a
generalization of the mutation mechanism into account. Our argument
will be crucially based on this kind of relationship between the
generalized Fleming--Viot process associated with a natural
generalization of (\ref{1.3}) and a certain ergodic MBI-process. That
relationship can be reformulated as a factorization result on the level
of generators and hence is expected to yield also an explicit
connection between stationary distributions. In principle, the problems
(I)~and~(II) can be considered in a unified way. Nevertheless, we shall
discuss (I)~and~(II) separately. This is mainly because the
factorization identity will turn out to yield a correct answer only for
certain restricted cases and in one dimension one can avoid its use by
taking an analytic approach instead (although this does not clearly
reveal the underlying mathematical structure).\looseness=1
The organization of this paper is as follows. Section~\ref{sec2} is
devoted to derivation of~(\ref{1.4}) by purely analytic argument.
Exploiting the relationship to MBI-processes, we show in
Section~\ref{sec3} that the above mentioned answer to (I) has a natural
generalization which settles (II). The irreversibility of the processes
we consider is discussed in Section~\ref{sec4}.\newpage
\section{The one-dimensional model}\label{sec2}
Let $0<\alpha<1$, $c_1>0$ and $c_2>0$ be given. The purpose of this
section is to show that (\ref{1.4}) is a unique stationary distribution
of the process with generator (\ref{1.3}). Analytically, we shall prove
that a probability measure $P$ on $[0,1]$ satisfying
\begin{eqnarray}\label{e2.1}
\int_0^1 A_{\alpha}G(x)P(dx)=0
\nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{for all }G(x)=\varphi_n(x):=x^n\mbox{ with }n=1,2,\ldots}
\end{eqnarray}
is uniquely identified with (\ref{1.4}).
The actual starting point of the calculations below is
\begin{eqnarray}\label{e2.2}
\int_0^1 A_{\alpha}G(x)P(dx)=0
\nonumber\\[-8pt]\\[-8pt]
\eqntext{\mbox{for all }G(x)=G_t(x):=(1+tx)^{-1}\mbox{ with }t>0.}
\end{eqnarray}
The equivalence of (\ref{e2.1}) and (\ref{e2.2}) is
a consequence of uniform estimates
\[
\bigl|A_{\alpha}\varphi_n(x)\bigr|\le\biggl(1+\frac{c_1+c_2}{\alpha+1}
\biggr)2^n,\qquad n=1,2,\ldots,
\]
which can be shown by observing that
\begin{eqnarray}\label{2.3}
&& c_1 \bigl((1-u)x+u \bigr)^n+c_2
\bigl((1-u)x \bigr)^n-(c_1+c_2)x^n
\nonumber
\\
&&\qquad = c_1 \bigl[ \bigl((1-u)x+u \bigr)^n-
\bigl((1-u)x+ux \bigr)^n \bigr]\nonumber
\\
&&\quad\qquad{} +c_2x^n\bigl[(1-u)^n- \bigl((1-u)+u \bigr)^n \bigr]\nonumber
\\
&&\qquad = c_1\sum_{k=1}^n \pmatrix{n
\cr k}(1-u)^{n-k}x^{n-k}u^k
\bigl(1-x^k \bigr)
\nonumber\\\\[-16pt]
&&\quad\qquad{} -c_2x^n\sum _{k=1}^n \pmatrix{n\cr k}(1-u)^{n-k}u^k\nonumber
\\
&&\qquad = \sum_{k=1}^n \pmatrix{n\cr k}(1-u)^{n-k}u^k \bigl[c_1x^{n-k}-(c_1+c_2)x^n\bigr]\nonumber
\\
&&\qquad = u\sum_{k=1}^n \pmatrix{n\cr k}(1-u)^{n-k}u^{k-1} \bigl[c_1x^{n-k}-(c_1+c_2)x^n\bigr]\nonumber
\end{eqnarray}
and in particular
\begin{eqnarray*}
&& x \bigl((1-u)x+u \bigr)^n+(1-x) \bigl((1-u)x\bigr)^n-x^n
\\
&&\qquad = \sum_{k=2}^n \pmatrix{n\cr
k}(1-u)^{n-k}u^k \bigl(x^{n-k+1}-x^n\bigr)
\\
&&\qquad = u^2\sum_{k=2}^n \pmatrix{n
\cr k}(1-u)^{n-k}u^{k-2} \bigl(x^{n-k+1}-x^n\bigr).
\end{eqnarray*}
Indeed, these bounds ensure that the function
\[
t \mapsto\int_0^1A_{\alpha}G_t(x)P(dx)
=\sum_{n=1}^{\infty}(-t)^n\int
_0^1A_{\alpha}\varphi_n(x)
P(dx)
\]
is real analytic at least for $-1/2<t<1/2$. We prepare a simple lemma
in order to calculate $A_{\alpha}G_{t}$.
\begin{lemma}\label{le2.1}
Assume that $b>0$ and $a+b>0$.
\begin{longlist}[(ii)]
\item[(i)] It holds that for any $\theta_1>0$ and $\theta_2>0$
\begin{equation}\label{2.4}
\int_0^1\frac{B_{\theta_1,\theta_2}(du)} {
(au+b)^{\theta_1+\theta_2}}
=(a+b)^{-\theta_1}b^{-\theta_2}.
\end{equation}
\item[(ii)] In addition, suppose that $a'\ne a$ and $a'+b>0$. Then
\begin{equation}\label{2.5}
\int_0^1\frac{B_{1-\alpha,1+\alpha}(du)}{(au+b)(a'u+b)} =
\frac{1}{\alpha(a-a')b^{1+\alpha}} \bigl[(a+b)^{\alpha}- \bigl(a'+b
\bigr)^{\alpha} \bigr].
\end{equation}
\end{longlist}
\end{lemma}
Equation (\ref{2.4}) is a one-dimensional version of the formula due to
\cite{CR}, which is sometimes referred to as the Markov--Krein
identity. (See, e.g., \cite{VYT04} or (\ref{3.5}) below.) We will give
a self-contained proof based essentially on the well-known relationship
between beta and gamma laws.
\begin{pf*}{Proof of Lemma~\ref{le2.1}}
The proof of (\ref{2.4}) is simply
done by noting that
\[
(a+b)^{-\theta_1}b^{-\theta_2}= \int_0^{\infty}
\frac{dz_1}{\Gamma(\theta_1)} z_1^{\theta_1-1}e^{-(a+b)z_1} \int
_0^{\infty}\frac{dz_2}{\Gamma(\theta_2)} z_2^{\theta_2-1}e^{-b z_2}
\]
and then by change of variables to $u:=z_1/(z_1+z_2)$, $v:=z_1+z_2$.
The proof of~(\ref{2.5}) can be deduced from (\ref{2.4}) with
$\theta_1=1-\alpha$ and $\theta_2=\alpha$ since
$B_{1-\alpha,1+\alpha}(du) =B_{1-\alpha,\alpha}(du)(1-u)/\alpha$ and
\[
\frac{1-u}{(au+b)(a'u+b)} =\frac{1}{(a-a')b} \biggl(\frac{a+b}{au+b}-
\frac{a'+b}{a'u+b} \biggr).
\]\upqed
\end{pf*}
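Identity (\ref{2.4}) is also easy to confirm numerically; the sketch below takes $\theta_1=\theta_2=1/2$ (the case relevant for $\alpha=1/2$), where the substitution $u=\sin^2\theta$ turns the beta measure into the flat measure $(2/\pi)\,d\theta$ and removes both endpoint singularities:

```python
from math import pi, sin, sqrt

def beta_half_average(a, b, n=200_000):
    """Midpoint-rule evaluation of the integral of 1/(a u + b) against
    B_{1/2,1/2}(du), using u = sin^2(theta), under which the arcsine
    measure becomes (2/pi) d(theta) on [0, pi/2]."""
    h = (pi / 2.0) / n
    total = 0.0
    for k in range(n):
        u = sin((k + 0.5) * h) ** 2
        total += 1.0 / (a * u + b)
    return (2.0 / pi) * total * h

a, b = 2.0, 3.0
lhs = beta_half_average(a, b)
rhs = 1.0 / sqrt((a + b) * b)   # Eq. (2.4) with theta_1 = theta_2 = 1/2
```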
We proceed to calculate $A_{\alpha}G_t$.
\begin{lemma}\label{le2.2}
For any $t>0$ and $x\in[0,1]$,
\begin{eqnarray}\label{2.6}
A_{\alpha}G_t(x) &=& t\cdot\frac{(1+t)^{\alpha}-1}{\alpha}\cdot
\frac{x(1-x)}{(1+tx)^{2+\alpha}}
\nonumber\\[-8pt]\\[-8pt]
&&{} -\frac{t}{\alpha+1}\cdot\frac{c_1(1-x)(1+t)^{\alpha-1}-c_2x}{(1+tx)^{1+\alpha}}.\nonumber
\end{eqnarray}
\end{lemma}
\begin{pf}
By straightforward calculations
\begin{eqnarray*}
&& c_1G_t \bigl((1-u)x+u \bigr)+c_2G_t
\bigl((1-u)x \bigr)-(c_1+c_2)G_t(x)
\\
&&\qquad = -\frac{tu}{1+tx} \biggl[\frac{c_1(1-x)}{1+t(1-u)x+tu} -\frac{c_2x}{1+t(1-u)x} \biggr].
\end{eqnarray*}
Replacing $c_1$ and $c_2$ by $x$ and $1-x$, respectively, we get
\begin{eqnarray*}
&& xG_t \bigl((1-u)x+u \bigr)+(1-x)G_t\bigl((1-u)x \bigr)-G_t(x)
\\
&&\qquad = \frac{t^2u^2x(1-x)}{1+tx}\cdot\frac{1}{(1+t(1-u)x+tu)(1+t(1-u)x)}.
\end{eqnarray*}
Plugging these equalities into (\ref{1.3}) with $G=G_t$ and then
applying Lemma~\ref{le2.1} yield
\begin{eqnarray*}
A_{\alpha}G_t(x) & = & \frac{t^2x(1-x)}{1+tx} \int
_0^1\frac{B_{1-\alpha,1+\alpha}(du)} {
(1+t(1-u)x+tu)(1+t(1-u)x)}
\\
&&{} -\frac{t}{(\alpha+1)(1+tx)} \cdot c_1(1-x)\int_0^1
\frac{B_{1-\alpha,\alpha}(du)}{1+t(1-u)x+tu}
\\
& &{} +\frac{t}{(\alpha+1)(1+tx)} \cdot c_2x\int_0^1
\frac{B_{1-\alpha,\alpha}(du)}{1+t(1-u)x}
\\
& = & \frac{t^2x(1-x)}{1+tx}\cdot\frac{1}{\alpha t (1+tx)^{1+\alpha
}}\cdot\bigl[(1+t)^{\alpha}-1
\bigr]
\\
& &{} -\frac{t}{(\alpha+1)(1+tx)} \biggl[\frac{c_1(1-x)}{(1+t)^{1-\alpha
}(1+tx)^{\alpha}} -\frac{c_2x}{(1+tx)^{\alpha}} \biggr],
\end{eqnarray*}
which equals the right-hand side of (\ref{2.6}).
\end{pf}
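The two ``straightforward calculations'' in the proof are purely algebraic identities in $t$, $u$, $x$, $c_1$, $c_2$ and can be spot-checked at random points. The sketch below assumes $G_t(x)=(1+tx)^{-1}$, which is the form consistent with the displays in the proof.

```python
import random

def G(t, x):
    # G_t(x) = 1/(1 + t*x), consistent with the displays above (assumption)
    return 1.0 / (1.0 + t * x)

rng = random.Random(0)
for _ in range(1000):
    t = rng.uniform(0.1, 5.0)
    x, u = rng.uniform(0.01, 0.99), rng.uniform(0.01, 0.99)
    c1, c2 = rng.uniform(0.1, 3.0), rng.uniform(0.1, 3.0)
    D1 = 1 + t * (1 - u) * x + t * u      # denominator at (1-u)x + u
    D2 = 1 + t * (1 - u) * x              # denominator at (1-u)x
    # first identity in the proof
    lhs = c1 * G(t, (1 - u) * x + u) + c2 * G(t, (1 - u) * x) - (c1 + c2) * G(t, x)
    rhs = -(t * u / (1 + t * x)) * (c1 * (1 - x) / D1 - c2 * x / D2)
    assert abs(lhs - rhs) < 1e-10
    # second identity (c1, c2 replaced by x and 1-x)
    lhs = x * G(t, (1 - u) * x + u) + (1 - x) * G(t, (1 - u) * x) - G(t, x)
    rhs = t * t * u * u * x * (1 - x) / ((1 + t * x) * D1 * D2)
    assert abs(lhs - rhs) < 1e-10
```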
Next, we characterize stationary distributions $P$
in terms of
\begin{equation}\label{2.7}
S_{\alpha}(t):=\int_0^1
\frac{P(dx)}{(1+tx)^{\alpha}},\qquad t\ge0,
\end{equation}
which is a variant of the generalized Stieltjes transform of
order $\alpha$.
\begin{prop}
A probability measure $P$ on $[0,1]$ is a stationary distribution of
the process associated with (\ref{1.3}) if and only if $S_{\alpha}$
defined by (\ref{2.7}) satisfies for all $t>0$
\begin{eqnarray}\label{2.8}
&&\frac{(1+t)^{\alpha}-1}{\alpha}(1+t)S_{\alpha}''(t)\nonumber
\\
&&\quad{} + \biggl[ \biggl(c_1+1+\frac{1}{\alpha} \biggr)
\bigl((1+t)^{\alpha}-1 \bigr) +c_1+c_2
\biggr]S_{\alpha}'(t)
\\
&&\quad{} +\alpha c_1(1+t)^{\alpha-1}S_{\alpha}(t)=0.\nonumber
\end{eqnarray}
\end{prop}
\begin{pf}
By virtue of Theorem 9.17 in Chapter~4 of \cite{EK}, $P$ is a
stationary distribution of the process associated with $A_{\alpha}$ if
and only if (\ref{e2.1}) [or (\ref{e2.2})] holds. By Lemma~\ref{le2.2},
(\ref{e2.2}) now reads for all $t>0$
\begin{eqnarray*}
&& -\frac{(1+t)^{\alpha}-1}{\alpha} \int_0^1
\frac{x(1-x)}{(1+tx)^{2+\alpha}}P(dx)
\\
&&\quad{} +\frac{c_1}{\alpha+1}(1+t)^{\alpha-1} \int_0^1
\frac{1-x}{(1+tx)^{1+\alpha}}P(dx)
\\
&&\quad{} -\frac{c_2}{\alpha+1}\int_0^1
\frac{x}{(1+tx)^{1+\alpha}}P(dx)=0.
\end{eqnarray*}
This equation becomes (\ref{2.8}) by substituting the equalities
\begin{eqnarray*}
-\int_0^1\frac{x(1-x)}{(1+tx)^{2+\alpha}}P(dx) &=&
\frac{1+t}{\alpha(\alpha+1)}S_{\alpha}''(t) +\frac{1}{\alpha}S_{\alpha}'(t),
\\
\int_0^1\frac{1-x}{(1+tx)^{1+\alpha}}P(dx) &=&
\frac{1+t}{\alpha}S_{\alpha}'(t)+S_{\alpha}(t)
\end{eqnarray*}
and
\[
\int_0^1\frac{x}{(1+tx)^{1+\alpha}}P(dx) =-
\frac{1}{\alpha}S_{\alpha}'(t),
\]
all of which are verified easily.
\end{pf}
We now show that (\ref{1.4}) is the unique stationary
distribution we are looking for.
Recall that for each $y\in(0,1)$ we denote by $E_{\alpha,y}$
the expectation with respect to the two-dimensional random
variable $(Y_1,Y_2)$ with joint law determined by
\[
E_{\alpha,y} \bigl[e^{-\lambda_1 Y_1-\lambda_2 Y_2} \bigr] =e^{-y\lambda
_1^{\alpha}-(1-y)\lambda_2^{\alpha}}, \qquad
\lambda_1,\lambda_2\ge0.
\]
By using
$t^{-\alpha}=\Gamma(\alpha)^{-1}\int_0^{\infty}\,dvv^{\alpha-1}e^{-vt}$
$(t>0)$ and Fubini's theorem, observe that
\begin{eqnarray}\label{2.9}
&& E_{\alpha,y} \bigl[(tY_1+Y_2)^{-\alpha} \bigr]\nonumber
\\
&&\qquad = \Gamma(\alpha)^{-1}\int_0^{\infty}\,dvv^{\alpha-1}
\exp\bigl[-y(vt)^{\alpha}-(1-y)v^{\alpha} \bigr]
\\
&&\qquad = \frac{1}{\Gamma(\alpha+1)}\cdot\frac{1}{1+(t^{\alpha}-1)y}\nonumber
\end{eqnarray}
for $t\ge0$. In particular,
$E_{\alpha,y} [(Y_1+Y_2)^{-\alpha} ]=1/\Gamma(\alpha+1)$
and hence
\begin{eqnarray}\label{2.10}
&& P_{\alpha,(c_1,c_2)}(dx)
\nonumber\\[-8pt]\\[-8pt]
&&\qquad =\Gamma(\alpha+1)\int_0^1{B}_{c_1,c_2}(dy)
E_{\alpha,y} \biggl[(Y_1+Y_2)^{-\alpha};
\frac{Y_1}{Y_1+Y_2}\in dx \biggr]\nonumber
\end{eqnarray}
defines a probability measure on $[0,1]$.
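Identity (\ref{2.9}) can be checked by Monte Carlo. The sketch below samples the one-sided stable variables through Kanter's representation, a standard fact used here as an outside assumption; all parameter values are arbitrary.

```python
import math, random

def stable_oneside(alpha, rng):
    # Kanter's representation (standard fact, not from the text) of the
    # positive alpha-stable law with Laplace transform exp(-lam**alpha).
    U = rng.uniform(0.0, math.pi)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * U) / math.sin(U) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

alpha, y, t = 0.5, 0.3, 2.0       # arbitrary choices
rng = random.Random(12345)
N = 200_000
acc = 0.0
for _ in range(N):
    # (Y1, Y2) with E[exp(-l1*Y1 - l2*Y2)] = exp(-y*l1**a - (1-y)*l2**a)
    Y1 = y ** (1.0 / alpha) * stable_oneside(alpha, rng)
    Y2 = (1.0 - y) ** (1.0 / alpha) * stable_oneside(alpha, rng)
    acc += (t * Y1 + Y2) ** (-alpha)
mc = acc / N
closed = 1.0 / (math.gamma(alpha + 1.0) * (1.0 + (t ** alpha - 1.0) * y))
assert abs(mc - closed) / closed < 0.03
```

The scaling $Y_1=y^{1/\alpha}S_1$, $Y_2=(1-y)^{1/\alpha}S_2$ with independent standard one-sided stables $S_1,S_2$ reproduces the joint Laplace transform defining $E_{\alpha,y}$.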
Although for each $y\in(0,1)$
an expression of the distribution function
\[
[0,1]\ni x\mapsto\Gamma(\alpha+1)E_{\alpha,y} \biggl[(Y_1+Y_2)^{-\alpha};
\frac{Y_1}{Y_1+Y_2}\le x \biggr]
\]
is given as the formula (3.2) in \cite{Y}, that is,
\[
\frac{\sin\alpha\pi}{\pi} \int_0^x
\frac{(1-y)(x-u)^{\alpha-1}u^{\alpha}\,du} {
(1-y)^{2}u^{2\alpha}+y^{2}(1-u)^{2\alpha}
+2y(1-y)u^{\alpha}(1-u)^{\alpha}\cos\alpha\pi},
\]
we do not have an explicit form for $P_{\alpha,(c_1,c_2)}$
except in the case $c_1+c_2=1$. [See Remark (ii) at the end of this
section.]
The main result of this section is the following.
\begin{theorem}\label{th2.4}
The process associated with (\ref{1.3}) has
a unique stationary distribution, which
coincides with $P_{\alpha,(c_1,c_2)}$.
\end{theorem}
\begin{pf}
Notice that the existence of a stationary distribution
follows from compactness of the state space $[0,1]$.
(See, e.g., Remark 9.4 in Chapter~4 of \cite{EK}.)
Let $P$ be an arbitrary stationary distribution of
the process associated with (\ref{1.3})
and $S_{\alpha}$ be defined by (\ref{2.7}).
Put
\[
T_{\alpha}(u)=S_{\alpha} \bigl((1+u)^{1/\alpha}-1 \bigr)
\]
for $u\ge0$.
Setting $t=(1+u)^{1/\alpha}-1$ or $u=(1+t)^{\alpha}-1$,
observe that for $u>0$
\[
T_{\alpha}'(u) =\frac{1}{\alpha}(1+u)^{(1/\alpha)-1}
S_{\alpha}'(t)
\]
and
\begin{eqnarray*}
T_{\alpha}''(u) & = & \frac{1}{\alpha}
\biggl(\frac{1}{\alpha}-1 \biggr) (1+u)^{(1/\alpha)-2}S_{\alpha}'(t)
+ \biggl[\frac{1}{\alpha}(1+u)^{(1/\alpha)-1} \biggr]^2
S_{\alpha}''(t)
\\
& = & \biggl(\frac{1}{\alpha}-1 \biggr) (1+u)^{-1}T_{\alpha}'(u)
+\frac{1}{\alpha^2}(1+u)^{(2/\alpha)-2}S_{\alpha}''(t).
\end{eqnarray*}
Hence, $S_{\alpha}'(t)=\alpha(1+u)^{1-(1/\alpha)}T_{\alpha}'(u)$ and
\[
S_{\alpha}''(t) =\alpha^2(1+u)^{2-(2/\alpha)}
\biggl[T_{\alpha}''(u) - \biggl(
\frac{1}{\alpha}-1 \biggr) (1+u)^{-1}T_{\alpha}'(u)
\biggr].
\]
Also, (\ref{2.8}) can be rewritten as
\begin{eqnarray*}
&& \frac{u}{\alpha}(1+u)^{1/\alpha}S_{\alpha}''(t)
+ \biggl[ \biggl(c_1+1+\frac{1}{\alpha} \biggr)u
+c_1+c_2 \biggr]S_{\alpha}'(t)
\\
&&\quad{} + \alpha c_1(1+u)^{1-(1/\alpha)}S_{\alpha}(t)=0.
\end{eqnarray*}
From these preliminary observations, it is straightforward to see that
equation (\ref{2.8}) transforms into a hypergeometric equation of
the form
\begin{eqnarray}\label{2.11}
\qquad u(1+u)T_{\alpha}''(u) + \bigl[
(c_1+c_2 ) + (c_1+2 )u \bigr]T_{\alpha}'(u)
+c_1T_{\alpha}(u)=0,
\nonumber\\[-10pt]\\[-10pt]
\eqntext{u>0.}
\end{eqnarray}
Clearly, $T_{\alpha}(0)=S_{\alpha}(0)=1$.
In addition,
\[
T_{\alpha}'(0)=S_{\alpha}'(0)/\alpha=-
\int_0^1P(dx)x=-c_1/(c_1+c_2),
\]
where the last equality follows from (\ref{e2.1}) with $n=1$.
These facts together imply that
\[
T_{\alpha}(u) =\int_0^1
\frac{{B}_{c_1,c_2}(dy)}{1+uy}, \qquad u\ge0
\]
or
\[
S_{\alpha}(t) =\int_0^1
\frac{{B}_{c_1,c_2}(dy)} {
1+ \{(1+t)^{\alpha}-1 \}y}, \qquad t\ge0.
\]
(See, e.g., Sections~7.2 and 9.1 in \cite{Lebedev}.)
Combining this with
\begin{eqnarray*}
&& \frac{1}{1+ \{(1+t)^{\alpha}-1 \}y}
\\
&&\qquad =\Gamma(\alpha+1)\int_0^1
\frac{1}{(1+tx)^{\alpha}} E_{\alpha,y} \biggl[(Y_1+Y_2)^{-\alpha};
\frac{Y_1}{Y_1+Y_2}\in dx \biggr],
\end{eqnarray*}
which is immediate from (\ref{2.9}), we arrive at
\begin{equation}\label{2.12}
S_{\alpha}(t) =\int_0^1
\frac{P_{\alpha,(c_1,c_2)}(dx)}{(1+tx)^{\alpha}}, \qquad t\ge0
\end{equation}
in view of (\ref{2.10}). Therefore, we conclude that
$P=P_{\alpha,(c_1,c_2)}$ and the proof of Theorem~\ref{th2.4} is
complete.
\end{pf}
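The key step of the proof, that $T_{\alpha}(u)=\int_0^1 B_{c_1,c_2}(dy)/(1+uy)$ solves the hypergeometric equation (\ref{2.11}), can be confirmed numerically by differentiating under the integral sign; the sketch below uses arbitrary smooth parameters $c_1=2$, $c_2=3$.

```python
import math

c1, c2, u = 2.0, 3.0, 0.8     # arbitrary choices with smooth beta density
B = math.gamma(c1) * math.gamma(c2) / math.gamma(c1 + c2)

def moment(k, n=200_000):
    # integral of y**k / (1 + u*y)**(k+1) against the Beta(c1, c2) law,
    # computed by the composite midpoint rule
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        s += y ** (c1 - 1) * (1 - y) ** (c2 - 1) / B * y ** k / (1 + u * y) ** (k + 1)
    return s * h

T = moment(0)            # T_alpha(u)
Tp = -moment(1)          # T_alpha'(u), differentiating under the integral
Tpp = 2 * moment(2)      # T_alpha''(u)
residual = u * (1 + u) * Tpp + ((c1 + c2) + (c1 + 2) * u) * Tp + c1 * T
assert abs(residual) < 1e-6   # equation (2.11) holds
```

This is consistent with $T_{\alpha}$ being the Euler integral representation of ${}_2F_1(1,c_1;c_1+c_2;-u)$, as the citation of \cite{Lebedev} in the proof indicates.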
\begin{rems*} (i) In the case where $c_1+c_2>1$, an
alternative expression for $P_{\alpha,(c_1,c_2)}$ exists:
\begin{eqnarray}\label{2.13}
&& P_{\alpha,(c_1,c_2)}(dx)\nonumber
\\
&&\qquad = \Gamma(\alpha+1) (c_1+c_2-1) E
\biggl[(Z_1+Z_2)^{-\alpha}; \frac{Z_1}{Z_1+Z_2}\in dx\biggr]
\\
&&\qquad =: \widetilde{P}_{\alpha,(c_1,c_2)}(dx),\nonumber
\end{eqnarray}
where $Z_1$ and $Z_2$ are independent random variables with
Laplace transforms
\begin{equation}\label{2.14}
E \bigl[e^{-\lambda Z_i} \bigr] =\exp\bigl[-c_i\log\bigl(1+
\lambda^{\alpha} \bigr) \bigr], \qquad\lambda\ge0.
\end{equation}
This reflects the fact that the solution to
(\ref{2.11}) with the same initial conditions
$T_{\alpha}(0)=1$ and $T_{\alpha}'(0)=-c_1/(c_1+c_2)$
admits another integral expression of the form
\[
T_{\alpha}(u) =\int_0^1
\frac{{B}_{1,c_1+c_2-1}(dy)} {(1+uy)^{c_1}}, \qquad u\ge0
\]
and accordingly by (\ref{2.12})
\begin{equation}
\int_0^1\frac{P_{\alpha,(c_1,c_2)}(dx)}{(1+tx)^{\alpha}} =\int
_0^1\frac{{B}_{1,c_1+c_2-1}(dy)} {
[1+ \{(1+t)^{\alpha}-1 \}y ]^{c_1}}, \qquad t\ge0.
\label{2.15}
\end{equation}
On the other hand, it is not difficult to show that (\ref{2.15}) with
$\widetilde{P}_{\alpha,(c_1,c_2)}$ in place of ${P}_{\alpha,(c_1,c_2)}$
holds, too. In fact, we prove in Lemma~\ref{le3.5} below a
generalization of the coincidence (\ref{2.13}) in the setting of random
measures. Also, the role of $Z_1$~and~$Z_2$ will be made clear in
connection with branching processes with immigration related closely to
the process generated by (\ref{1.3}). [Compare (\ref{2.14}) with
(\ref{3.8}) below.]
\begin{longlist}[(iii)]
\item[(ii)] It will be shown in the Remark after Lemma~\ref{le3.5}
below that $P_{\alpha,(c_1,c_2)}=B_{\alpha c_1,\alpha c_2}$ holds
whenever $c_1+c_2=1$. At least at a formal level, this would be
seen by letting $c_1+c_2\downarrow1$ in (\ref{2.15}) and then by
making use of (\ref{2.4}).
\item[(iii)] In contrast with the case of the Wright--Fisher diffusion
mentioned in the \hyperref[sec1]{Introduction}, $P_{\alpha,(c_1,c_2)}$ with
$0<\alpha<1$ is not a reversible distribution for the generator
(\ref{1.3}) at least in case $c_1\ne c_2$. This will be seen in
Section~\ref{sec4}.
\end{longlist}
\end{rems*}
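Remark (ii) can be probed by Monte Carlo: sample from (\ref{2.10}) by importance weighting and compare the first two moments with those of $B_{\alpha c_1,\alpha c_2}$. Kanter's representation of the one-sided stable law is again used as a standard outside fact, and the parameter values are arbitrary.

```python
import math, random

def stable_oneside(alpha, rng):
    # Kanter's representation (standard fact, not from the text)
    U = rng.uniform(0.0, math.pi)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * U) / math.sin(U) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

alpha, c1 = 0.5, 0.3
c2 = 1.0 - c1                     # the boundary case c1 + c2 = 1
rng = random.Random(99)
N = 200_000
w_sum = wx_sum = wx2_sum = 0.0
for _ in range(N):
    y = rng.betavariate(c1, c2)
    Y1 = y ** (1.0 / alpha) * stable_oneside(alpha, rng)
    Y2 = (1.0 - y) ** (1.0 / alpha) * stable_oneside(alpha, rng)
    w = (Y1 + Y2) ** (-alpha)     # size-biasing weight from (2.10)
    x = Y1 / (Y1 + Y2)
    w_sum += w; wx_sum += w * x; wx2_sum += w * x * x
g = math.gamma(alpha + 1)
assert abs(g * w_sum / N - 1.0) < 0.02        # (2.10) is a probability measure
m1 = g * wx_sum / N
m2 = g * wx2_sum / N
# Beta(alpha*c1, alpha*c2) moments, using p + q = alpha when c1 + c2 = 1:
assert abs(m1 - c1) < 0.02
assert abs(m2 - c1 * (alpha * c1 + 1) / (alpha + 1)) < 0.02
```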
\section{The measure-valued process case}\label{sec3}
The main subject of this section is an extension of Theorem~\ref{th2.4}
to a class of generalized Fleming--Viot processes. But the strategy
will be different from that in the previous section, and so an
alternative proof of Theorem~\ref{th2.4} will be given as a by-product.
To discuss in the setting of measure-valued processes, we need new
notation. Let $E$ be a compact metric space having at least two
distinct points and $C(E)$ [resp., $B_+(E)$] the set of continuous
(resp., nonnegative, bounded Borel) functions on $E$. Define
$\mathcal{M}(E)$ to be the totality of finite Borel measures on $E$,
and we equip $\mathcal{M}(E)$ with the weak topology. Denote by
$\mathcal{M}(E)^{\circ}$ the set of nonnull elements of
$\mathcal{M}(E)$. The set $\mathcal{M}_1(E)$ of Borel probability
measures on $E$ is regarded as a subspace of $\mathcal{M}(E)$. We also
use the notation $\langle\eta, f\rangle$ to stand for the integral of a
function $f$ with respect to a measure~$\eta$.
For each $r\in E$, let $\delta_r$ denote
the delta distribution at $r$. Given a probability measure~$Q$,
we write also $E^Q[\cdot]$ for the expectation with
respect to $Q$.
Let $0<\alpha<1$ and $m\in\mathcal{M}(E)$ be given.
We shall discuss in this section
an $\mathcal{M}_1(E)$-valued Markov process associated with
\begin{eqnarray}\label{3.1}
&& \mathcal{A}_{\alpha,m}\Phi(\mu)\nonumber
\\[-1pt]
&&\qquad:= \int_0^1 \frac{B_{1-\alpha,1+\alpha}(du)}{u^2}\int
_E\mu(dr) \bigl[\Phi\bigl((1-u)\mu+u\delta_r
\bigr)-\Phi(\mu) \bigr]
\nonumber\\[-8pt]\\[-8pt]
&&\quad\qquad{} +\int_0^1\frac{B_{1-\alpha,\alpha}(du)}{(\alpha+1)u} \int
_E m(dr) \bigl[\Phi\bigl((1-u)\mu+u\delta_r
\bigr)-\Phi(\mu) \bigr],\nonumber
\\[-3pt]
\eqntext{\mu\in\mathcal{M}_1(E),}
\end{eqnarray}
where $\Phi$ belongs to the class $\mathcal{F}_1$ of functions of the
form $\Phi_f(\mu):=\langle\mu^{\otimes n},f \rangle$ for some positive
integer $n$ and $f\in C(E^n)$. Equation~(\ref{3.1}) shows clearly that
$\mathcal{A}_{\alpha,m}$ satisfies the positive maximum principle and
hence is dissipative. (See Lemma~2.1 in Chapter~4 of \cite{EK}.) We
begin by seeing that $\mathcal{A}_{\alpha,m}$ defines a Markov process
on $\mathcal{M}_1(E)$ in an appropriate sense. For this purpose, we
need an expression for $\mathcal{A}_{\alpha,m}\Phi_{f}$ with $f\in
C(E^n)$. Set $(a)_b=\Gamma(a+b)/\Gamma(a)$ for $a>0$ and $b\ge0$, and
let $|\cdot|$ stand for the cardinality. It holds that for any
$\theta\ge0$ and $\nu\in\mathcal{M}_1(E)$
\begin{eqnarray}\label{3.1a}
&& \mathcal{A}_{\alpha,\theta\nu}\Phi_f(\mu)\nonumber
\\[-2pt]
&&\qquad = \sum_{k=2}^n \frac{(1-\alpha)_{k-2} (\alpha+1)_{n-k}}{\Gamma(n)} \sum
_{I\dvtx |I|=k} \bigl( \bigl\langle\mu^{\otimes n},
\Theta^{(n)}_If \bigr\rangle-\Phi_f(\mu)
\bigr)
\\
&&\quad\qquad{} + \theta\sum_{k=1}^n
\frac{(1-\alpha)_{k-1} (\alpha)_{n-k}}{(\alpha
+1)\Gamma(n)} \sum_{I\dvtx |I|=k} \bigl( \bigl\langle
\mu^{\otimes n},\Xi^{(n)}_{I,\nu}f \bigr\rangle-
\Phi_f(\mu) \bigr),\nonumber
\end{eqnarray}
where $I$ ranges over nonempty subsets of $\{1,\ldots,n\}$,
$\Theta^{(n)}_I\dvtx C(E^n)\to C(E^n)$ is defined by letting
$\Theta^{(n)}_If$ be the function obtained from $f$ by replacing all
the variables~$r_i$ with $i\in I$ by $r_{\min I}$ and
$\Xi^{(n)}_{I,\nu}\dvtx C(E^n)\to C(E^n)$ is defined by letting
$\Xi^{(n)}_{I,\nu}f$ be the function obtained from $f$ by replacing all
the variables $r_i$ with $i\in I$ by $r$ and then by integrating with
respect to $\nu(dr)$. (For a degenerate $\nu$, (\ref{3.1a}) is a
special case of the corresponding expression found in the proof of
Lemma 11 in \cite{F}.) Equation~(\ref{3.1a}) can be deduced from the following
identities [cf.~(\ref{2.3})] among signed measures on~$E^n$:
\begin{eqnarray*}
&& \bigotimes_{i=1}^n \bigl((1-u)
\mu(dr_i)+u\delta_r(dr_i) \bigr) -
\bigotimes_{i=1}^n\mu(dr_i)
\\
&&\qquad = \bigotimes_{i=1}^n \bigl((1-u)
\mu(dr_i)+u\delta_r(dr_i) \bigr) -
\bigotimes_{i=1}^n \bigl((1-u)
\mu(dr_i)+u\mu(dr_i) \bigr)
\\
&&\qquad = \sum_{I\neq\varnothing} \bigotimes
_{j\notin I} \bigl((1-u)\mu(dr_j) \bigr) \biggl[
\bigotimes_{i\in I} \bigl(u\delta_r(dr_i)
\bigr) -\bigotimes_{i\in I} \bigl(u
\mu(dr_i) \bigr) \biggr]
\\
&&\qquad = \sum_{I\neq\varnothing} u^{|I|}(1-u)^{n-|I|}
\bigotimes_{j\notin I}\mu(dr_j) \biggl[
\bigotimes_{i\in I}\delta_r(dr_i)
-\bigotimes_{i\in I}\mu(dr_i) \biggr].
\end{eqnarray*}
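The last expansion is purely multilinear, so it can be sanity-checked by replacing the measures $\mu(dr_i)$ and $\delta_r(dr_i)$ with arbitrary positive scalars (the values below are hypothetical):

```python
import itertools, math, random

random.seed(7)
n = 4
for _ in range(50):
    u = random.uniform(0.0, 1.0)
    m = [random.uniform(0.1, 2.0) for _ in range(n)]   # plays the role of mu(dr_i)
    d = [random.uniform(0.1, 2.0) for _ in range(n)]   # plays the role of delta_r(dr_i)
    lhs = math.prod((1 - u) * m[i] + u * d[i] for i in range(n)) - math.prod(m)
    rhs = 0.0
    for k in range(1, n + 1):
        for I in itertools.combinations(range(n), k):
            rest = math.prod(m[j] for j in range(n) if j not in I)
            rhs += (u ** k * (1 - u) ** (n - k) * rest
                    * (math.prod(d[i] for i in I) - math.prod(m[i] for i in I)))
    assert abs(lhs - rhs) < 1e-10
```

The $I=\varnothing$ terms cancel between the two products, which is why the sum runs over nonempty $I$ only.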
As for the Fleming--Viot process with parent-independent mutation, the
result corresponding to the next proposition is a special case of
Theorem~3.4 in \cite{EK93}.
\begin{prop}\label{pr3.1}
For each $m\in\mathcal{M}(E)$
the closure of $\mathcal{A}_{\alpha,m}$ defined on $\mathcal{F}_1$
generates a Feller semigroup on $C(\mathcal{M}_1(E))$.
\end{prop}
\begin{pf}
Let $\theta\ge0$ and $\nu\in\mathcal{M}_1(E)$ be such that
$m=\theta\nu$. We simply mimic the proof of Theorem~3.4 in \cite{EK93}.
In particular, the Hille--Yosida theorem (Theorem~2.2 in Chapter~4 of
\cite{EK}) will be applied. Let $n$ be an arbitrary positive integer.
Rewrite~(\ref{3.1a}) as
\[
\mathcal{A}_{\alpha,\theta\nu}\Phi_f(\mu) = \bigl\langle
\mu^{\otimes n},\Theta^{(n)}f \bigr\rangle+\theta\bigl\langle
\mu^{\otimes n},\Xi^{(n)}_{\nu}f \bigr\rangle
-c_n(\alpha,\theta)\Phi_f(\mu),
\]
where $\Theta^{(n)}$, $\Xi^{(n)}_{\nu}\dvtx C(E^n)\to C(E^n)$ and
$c_n(\alpha,\theta)$ are, respectively, the nonnegative operators and
the positive constant defined implicitly by the above equation combined
with (\ref{3.1a}). Let $\lambda>0$ be arbitrary. Given $g\in C(E^n)$,
define
\[
h= \bigl(\lambda+c_n(\alpha,\theta) \bigr)^{-1}\sum
_{k=0}^{\infty} \bigl[ \bigl(
\lambda+c_n(\alpha,\theta) \bigr)^{-1} \bigl(
\Theta^{(n)}+\theta\Xi^{(n)}_{\nu} \bigr)
\bigr]^k g.
\]
Then $h\in C(E^n)$ since the operator norm of
$\Theta^{(n)}+\theta\Xi^{(n)}_{\nu}$
equals $c_n(\alpha,\theta)$. Moreover,
\[
\bigl(\lambda+c_n(\alpha,\theta) \bigr)h- \bigl(
\Theta^{(n)} +\theta\Xi^{(n)}_{\nu} \bigr)h=g,
\]
so $(\lambda-\mathcal{A}_{\alpha,\theta\nu})\Phi_h=\Phi_g$. This
implies that the range of $\lambda-\mathcal{A}_{\alpha,\theta\nu}$
contains $\mathcal{F}_1$, which is dense in $C(\mathcal{M}_1(E))$. The
rest of the proof is the same as that of Theorem~3.4 in~\cite{EK93}.
\end{pf}
For simplicity, we call the $\mathcal{A}_{\alpha,m}$-process the Markov
process governed by $\mathcal{A}_{\alpha,m}$ in the sense of
Proposition~\ref{pr3.1}. This process is a natural generalization of
the process generated by (\ref{1.3}) in the following sense. Suppose
that $E$ consists of two points, say $r_1$ and $r_2$, set
$m=c_1\delta_{r_1}+c_2\delta_{r_2}$, and let $\{X(t)\dvtx t\ge0\}$ be
the process generated by (\ref{1.3}). Then, verifying the identity
$\mathcal{A}_{\alpha,m}\Phi(\mu)=A_{\alpha}G(x)$ for
$\mu=x\delta_{r_1}+(1-x)\delta_{r_2}$ and $\Phi(\mu)=G(x)$, we see that
the process $\{X(t)\delta_{r_1}+(1-X(t))\delta_{r_2}\dvtx t\ge0\}$
defines an $\mathcal{A}_{\alpha,m}$-process. We note that \cite{FH}
discusses the case where $E=[0,1]$ and $m=c\delta_0$ for some $c>0$.
We could also establish the well-posedness of the martingale problem
for $\mathcal{A}_{\alpha,m}$ by modifying some existing arguments. More
precisely, the existence could be shown through a limit theorem for
suitably generalized Moran particle systems by modifying those
considered in the proof of Theorem~2.1 [especially (2.2)]
of~\cite{Hiraba}, which took account of the jump mechanism describing
simultaneous reproduction (sampling) only, so that simultaneous
movement (mutation) of particles to a random location (type)
distributed according to $m(dr)/m(E)$ is allowed. The uniqueness would
follow by the duality argument employing a function-valued process as
in the proof of Theorem~2.1 of \cite{Hiraba}. Its possible transitions
and the associated transition rates are found in (\ref{3.1a}). The
duality would be useful in discussing (weak) ergodicity of the
$\mathcal{A}_{\alpha,m}$-process. (See, e.g., Theorem~5.2 in
\cite{EK93} for such a result in the Fleming--Viot process case.)
The following argument is based primarily on the relationship between
the $\mathcal{A}_{\alpha,m}$-process and a suitable MBI-process, which
takes values in $\mathcal{M}(E)$. More precisely, the generator, say
$\mathcal{L}_{\alpha,m}$, of the latter will be chosen so that for some
constant $C>0$
\begin{equation}\label{3.2}
\mathcal{L}_{\alpha,m}\Psi(\eta) =C{\eta(E)^{-\alpha}}
\mathcal{A}_{\alpha,m} \Phi\bigl(\eta(E)^{-1}\eta\bigr), \qquad
\eta\in\mathcal{M}(E)^{\circ},
\end{equation}
where $\Psi(\eta)=\Phi(\eta(E)^{-1}\eta)$ and $\Phi$ is in the linear
span $\mathcal{F}_0$ of functions of the form $\mu\mapsto\langle\mu,
f_1\rangle\cdots\langle\mu, f_n\rangle$ with $f_i\in C(E)$,
$i=1,\ldots,n$ and $n$ being a positive integer. In the case of the
Fleming--Viot process (which corresponds to $\alpha=1$ formally), such
a relation is well known. For instance, it played a key role in
\cite{Shiga}. As for the generalized Fleming--Viot process,
factorizations of the form (\ref{3.2}) have been shown in \cite{BBC}
for $m=0$ (the null measure) and in \cite{FH} for degenerate measures
$m$. From now on, suppose that $m\in\mathcal{M}(E)^{\circ}$. To exploit
(\ref{3.2}) in the study of stationary distributions, we further
require the MBI-process associated with $\mathcal {L}_{\alpha,m}$ to be
ergodic, that is, to have a unique stationary distribution, say
$\widetilde{Q}_{\alpha,m}$, supported on $\mathcal{M}(E)^{\circ}$. Once
these requirements are fulfilled, (\ref{3.2}) suggests that
\begin{equation}\label{3.3}
\widetilde{P}_{\alpha,m}(\cdot):= E^{\widetilde{Q}_{\alpha,m}} \bigl[
\eta(E)^{-\alpha}; \eta(E)^{-1}\eta\in\cdot\bigr]
/E^{\widetilde{Q}_{\alpha,m}} \bigl[\eta(E)^{-\alpha} \bigr]
\end{equation}
would give a stationary distribution of the
$\mathcal{A}_{\alpha,m}$-process provided that $\eta(E)^{-\alpha}$ is
integrable with respect to $\widetilde{Q}_{\alpha,m}$. This conditional
answer may be modified to be a general one, which must be consistent
with the one-dimensional result (\ref{1.4}).
To describe the answer, we need both the $\alpha$-stable random measure
with parameter measure $m$ and the Dirichlet random measure with
parameter measure $m$, whose laws on $\mathcal{M}(E)^{\circ}$ and
$\mathcal{M}_1(E)$ are denoted by $Q_{\alpha,m}$ and $\mathcal{D}_{m}$,
respectively. These infinite-dimensional laws are determined uniquely
by the identities
\begin{equation}\label{3.4}
\int_{\mathcal{M}(E)^{\circ}}Q_{\alpha,m}(d\eta) e^{-\langle\eta,f
\rangle}
=e^{-\langle m,f^{\alpha} \rangle}
\end{equation}
and
\begin{equation}\label{3.5}
\int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d\mu) {\langle\mu,1+f
\rangle}^{-m(E)} =e^{-\langle m,\log(1+f)\rangle},
\end{equation}
where $f\in B_+(E)$ is arbitrary. A random measure with law
$Q_{\alpha,m}$ is constructed from a Poisson random measure on
$(0,\infty)\times E$. (See also Definition 6 in \cite{VYT04}.) Observe
from (\ref{3.4}) that ${E}^{Q_{\alpha,m}}[\eta(E)^{-\alpha}]
=1/(m(E)\Gamma(\alpha+1))$. As in \cite{Ferguson}, $\mathcal{D}_{m}$~is
originally defined to be the law of a random measure whose arbitrary
finite-dimensional distributions are Dirichlet distributions with
parameters specified by~$m$. The useful identity (\ref{3.5}) is due to
\cite{CR} and reduces to (\ref{2.4}) in one dimension. We now state the
main result of this paper.
\begin{theorem}\label{th3.2}
For any $m\in\mathcal{M}(E)^{\circ}$, the
$\mathcal{A}_{\alpha,m}$-process has a unique stationary distribution,
which is identified with
\begin{equation}\label{3.6}
P_{\alpha,m}(\cdot):= \Gamma(\alpha+1)\int_{\mathcal{M}_1(E)}
\mathcal{D}_{m}(d\mu) {E}^{Q_{\alpha,\mu}} \bigl[\eta(E)^{-\alpha};
\eta(E)^{-1}\eta\in\cdot\bigr].
\end{equation}
\end{theorem}
To illustrate, consider the trivial case where $m=\theta\delta_r$ for
some $\theta>0$ and $r\in E$. Then it is verified easily that
$P_{\alpha,m}$ concentrates at $\delta_r\in\mathcal{M}_1(E)$, and this
is consistent with the equality
$\mathcal{A}_{\alpha,m}\Phi(\delta_r)=0$ in that case. Also, for every
$m\in\mathcal{M}(E)^{\circ}$, we note that
$P_{\alpha,m}\to\mathcal{D}_m$ as $\alpha\uparrow1$ since by
(\ref{3.4}) $Q_{\alpha,\mu}$ converges weakly to the delta distribution
at $\mu$ for each $\mu\in\mathcal{M}_1(E)$.
The proof of Theorem~\ref{th3.2} will be divided into three steps. As
mentioned earlier, we first find an ergodic MBI-process whose generator
satisfies (\ref{3.2}) and show, under a necessary integrability
condition, that $\widetilde{P}_{\alpha,m}$ in (\ref{3.3}) gives a
stationary distribution of the $\mathcal{A}_{\alpha,m}$-process. [In
fact, the condition will turn out to be that $m(E)>1$. This motivates
us to make a reparametrization $m=:\theta\nu$ with $\theta>0$ and~$\nu\in\mathcal{M}_1(E)$.] Second, for each $\nu\in\mathcal{M}_1(E)$,
we prove that $\widetilde{P}_{\alpha,\theta\nu}=P_{\alpha,\theta\nu}$
for any $\theta>1$. As the last step, we extend stationarity of
$P_{\alpha,\theta\nu}$ with respect to $\mathcal{A}_{\alpha,\theta\nu}$
to all $\theta>0$ by interpreting the condition of stationarity as
certain recursion equations among moment measures which are seen to be
real analytic in $\theta>0$. Also, the recursion equations will be
shown to yield uniqueness of the stationary distribution.
For the first step, we prove in the next proposition
that the MBI-process with the following generator
is the desired one:
\begin{eqnarray}\label{3.7}
\qquad&& \mathcal{L}_{\alpha,m}\Psi(\eta)\nonumber
\\
&&\qquad:= \frac{\alpha+1}{\Gamma(1-\alpha)}\int_0^{\infty}
\frac{dz}{z^{2+\alpha}}\int_E\eta(dr) \biggl[\Psi(\eta+z
\delta_r)-\Psi(\eta)-z\frac{\delta\Psi}{\delta\eta}(r) \biggr]
\nonumber\\[-4pt]\\[-12pt]
&&\quad\qquad{} -\frac{1}{\alpha} \biggl\langle\eta,\frac{\delta\Psi}{\delta\eta} \biggr\rangle\nonumber
\\
&&\quad\qquad{} +\frac{\alpha}{\Gamma(1-\alpha)}\int_0^{\infty}
\frac{dz}{z^{1+\alpha}}\int_Em(dr) \bigl[\Psi(\eta+z
\delta_r)-\Psi(\eta) \bigr],\nonumber
\end{eqnarray}
where $\Psi$ is in the class $\mathcal{F}$ of functions of the form
$\eta\mapsto F(\langle\eta,f_1\rangle,\ldots,\langle\eta,f_n\rangle )$
for some $F\in C_b^2(\mathbf{R}^n)$, $f_i\in C(E)$ and a positive
integer $n$, and $\frac{\delta\Psi}{\delta\eta}(r)
=\frac{d}{d\varepsilon}\Psi(\eta+\varepsilon\delta_r)\rrvert_{\varepsilon=0}$.
Up to this first order differential term, the operator (\ref{3.7}) for
$E=[0,1]$ and $m=c\delta_0$ with $c>0$ is the same as the one discussed
in Lemma 5.5 of \cite{FH}, in which the factorization (\ref{3.2}) has
been proved. Thus, our main observation in the next proposition is
that, keeping the validity of (\ref{3.2}), such an extra term yields
the ergodicity. Note that the generator (\ref{3.7}) is a special case
of the one discussed in Chapter~9 of \cite{Li}. [See (9.25) combined
with (7.12) there for an expression of the generator.] In particular, a
unique solution to the martingale problem for $\mathcal{L}_{\alpha,m}$
defines an $\mathcal{M}(E)$-valued Markov process, which henceforth we
call the \mbox{$\mathcal{L}_{\alpha,m}$-}process. Intuitively, because of
the absence of a ``motion process'', the law of this process can be
regarded as a continuum convolution of the continuous-state branching
process with immigration (CBI-process) studied in \cite{KW}. [See
(\ref{3.10}) below.] In addition, Example 1.1 and Theorem 2.3 in
\cite{KW} concern the one-dimensional version of the
$\mathcal{L}_{\alpha,m}$-process without the drift. The latter proved
that the offspring distribution and the distribution associated with
immigration of the approximating branching processes may have
probability generating functions of the form $s+c(1-s)^{\alpha+1}$ and
$1-d(1-s)^{\alpha}$, respectively.
\begin{prop}\label{pr3.3}
Let $m\in\mathcal{M}(E)^{\circ}$. Then $\mathcal{L}_{\alpha,m}$ in
(\ref{3.7}) and $\mathcal{A}_{\alpha,m}$ in (\ref{3.1}) together
satisfy (\ref{3.2}) with $C=\Gamma(\alpha+2)$ and
$\Psi(\eta)=\Phi(\eta(E)^{-1}\eta)$ for any \mbox{$\Phi\in\mathcal{F}_0$}.
Moreover, the $\mathcal{L}_{\alpha,m}$-process has a unique stationary
distribution $\widetilde{Q}_{\alpha,m}$ with Laplace functional
\begin{equation}\label{3.8}
\int_{\mathcal{M}(E)^{\circ}}\widetilde{Q}_{\alpha,m}(d\eta)
e^{-\langle\eta,f \rangle} =e^{-\langle m,\log(1+f^{\alpha}) \rangle},\qquad f\in B_+(E).
\end{equation}
\end{prop}
A random measure with law $\widetilde{Q}_{\alpha,m}$ may be called a
Linnik random measure since it is an infinite-dimensional analogue of
the random variable with law sometimes referred to as a (nonsymmetric)
Linnik distribution, whose Laplace transform appeared already in
(\ref{2.14}). It is obtained by subordinating an $\alpha$-stable
subordinator by a gamma process. (See, e.g., Example 30.8 in
\cite{Sato}.) Namely, letting $\{Y_{\alpha}(t)\dvtx t\ge0\}$ and
$\{\gamma(t)\dvtx t\ge0\}$ be independent L\'evy processes such that
\[
E \bigl[e^{-\lambda Y_{\alpha}(t)} \bigr]=e^{-t\lambda^{\alpha}}\quad\mbox{and}\quad E
\bigl[e^{-\lambda\gamma(t)} \bigr]=e^{-t\log(1+\lambda)},\qquad t,\lambda\ge0,
\]
we have for each $c>0$
\[
E \bigl[e^{-\lambda Y_{\alpha}(\gamma(c))} \bigr] =E \bigl[e^{-\gamma
(c)\lambda^{\alpha}} \bigr]=e^{-c\log(1+\lambda^{\alpha})},
\qquad\lambda\ge0.
\]
The first equality implies that
\[
P \bigl(Y_{\alpha} \bigl(\gamma(c) \bigr)\in\cdot\bigr) =\int
_0^{\infty}P \bigl(\gamma(c)\in dt \bigr)P
\bigl(Y_{\alpha}(t)\in\cdot\bigr).
\]
Equation (\ref{3.8}) clearly exhibits an analogous underlying structure,
namely
\[
\widetilde{Q}_{\alpha,m}(\cdot) =\int_{\mathcal{M}(E)^{\circ}}
\mathcal{G}_m(d\eta)Q_{\alpha,\eta
}(\cdot),
\]
where $\mathcal{G}_m$ is the law of the standard gamma process on
$(E,m)$. (See Definition~5 in \cite{VYT04}.) It is also obvious from
(\ref{3.8}) that, as $\alpha\uparrow1$, $\widetilde{Q}_{\alpha,m}$
converges to $\mathcal{G}_m$. In addition, one can see that
\[
\lim_{\alpha\uparrow1}\mathcal{L}_{\alpha,m}\Psi(\eta) = \biggl
\langle\eta,\frac{\delta^2\Psi}{\delta\eta^2} \biggr\rangle- \biggl
\langle\eta,
\frac{\delta\Psi}{\delta\eta} \biggr\rangle+ \biggl\langle m,\frac
{\delta\Psi}{\delta\eta} \biggr
\rangle=:\mathcal{L}_{m}\Psi(\eta)
\]
for ``nice'' functions $\Psi$, where
$\frac{\delta^2\Psi}{\delta\eta^2}(r) =\frac{d^2}{d\varepsilon^2}
\Psi(\eta+\varepsilon\delta_r)\rrvert_{\varepsilon=0}$. This is a
special case of the generator of MBI-processes discussed in Section~3
of \cite{Stannat}. It has been proved there that $\mathcal{G}_m$ is a
reversible stationary distribution of the process associated with
$\mathcal{L}_{m}$.
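The subordination picture just described, together with the negative-moment formula of Proposition~\ref{pr3.4} below in the one-point case $m(E)=c$, can be spot-checked by Monte Carlo. Kanter's representation of the one-sided stable law is used as a standard outside fact; all parameter values are arbitrary.

```python
import math, random

def stable_oneside(alpha, rng):
    # Kanter's representation (standard fact): Laplace transform exp(-lam**alpha)
    U = rng.uniform(0.0, math.pi)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * U) / math.sin(U) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

alpha, c, lam = 0.5, 2.5, 1.0     # arbitrary choices, c > 2 keeps variances finite
rng = random.Random(2024)
N = 200_000
lap = mom = 0.0
for _ in range(N):
    # Y = Y_alpha(gamma(c)) via the self-similarity Y_alpha(t) =d t**(1/alpha) * Y_alpha(1)
    Y = rng.gammavariate(c, 1.0) ** (1.0 / alpha) * stable_oneside(alpha, rng)
    lap += math.exp(-lam * Y)
    mom += Y ** (-alpha)
lap /= N
mom /= N
# Laplace transform of the Linnik law, cf. (2.14)/(3.8) with m(E) = c
assert abs(lap - (1 + lam ** alpha) ** (-c)) < 0.02
# negative moment, matching Proposition 3.4 with m(E) = c > 1
assert abs(mom - 1.0 / ((c - 1) * math.gamma(alpha + 1))) < 0.05
```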
\begin{pf*}{Proof of Proposition~\ref{pr3.3}}
As already remarked, if the term\break
$-\alpha^{-1}\langle\eta,\frac{\delta\Psi}{\delta\eta}\rangle$ in
(\ref{3.7}) were absent, (\ref{3.2}) could be shown by essentially the
same calculations as in the proof of Lemma 17 in \cite{FH}. [In fact,
the change of variable $z=:\eta(E)u/(1-u)$ in the integrals with
respect to $dz$ in (\ref{3.7}) almost suffices for our purpose.] So,
for the proof of (\ref{3.2}), we only need to observe\vspace*{-2pt} that $\langle
\eta,\frac{\delta\Psi}{\delta\eta}\rangle=0$ for $\Psi$ of the form
$\Psi(\eta)=\Phi(\eta(E)^{-1}\eta)$ with $\Phi\in\mathcal{F}_0$. But
this is readily done by giving a specific form of $\Phi$. Indeed, for
$\Phi(\mu)=\langle\mu, f_1\rangle\cdots\langle\mu, f_n\rangle$ the
function $\Psi$ takes the form $\Psi(\eta)=\langle\eta,
f_1\rangle\cdots\langle\eta, f_n\rangle\langle\eta, 1\rangle^{-n}$,
from which it follows that
\[
\frac{\delta\Psi}{\delta\eta}(r) =\sum_{i=1}^n
\frac{f_i(r)\langle\eta, 1\rangle-\langle\eta,
f_i\rangle} {
\langle\eta, 1\rangle^{n+1}}\prod_{j\ne i}\langle\eta,
f_j\rangle.
\]
After integrating with respect to $\eta(dr)$,
the numerator on the right-hand side vanishes.
The argument regarding ergodicity is based on a well-known formula for
Laplace functionals of transition functions. (See (9.18) in \cite{Li}
for a much more general case than ours.) To write it down, we need only
the auxiliary functions, called the $\Psi$-semigroup in \cite{KW}, because
there is no ``motion process''. These functions form a one-parameter family
$\{\psi(t,\cdot)\}_{t\ge0}$ of nonnegative functions on $[0,\infty)$
and are determined by the equation
\begin{equation}\label{3.9}
\frac{\partial\psi}{\partial t}(t,\lambda) =-\frac{1}{\alpha}\psi
(t,\lambda)^{1+\alpha}
-\frac{1}{\alpha}\psi(t,\lambda), \qquad\psi(0,\lambda)=\lambda
\end{equation}
with $\lambda\ge0$ being arbitrary. An explicit
expression is found in Example 3.1 of \cite{Li}:
\[
\psi(t,\lambda) =\frac{e^{-t/\alpha}\lambda} {
[1+(1-e^{-t})\lambda^{\alpha} ]^{1/\alpha}}.
\]
Let $\{\eta_t\dvtx t\ge0\}$ be an $\mathcal{L}_{\alpha,m}$-process, and
for each $\eta\in\mathcal{M}(E)$ denote by $E_{\eta}$ the expectation
with respect to $\{\eta_t\dvtx t\ge0\}$ starting at $\eta$. Then for
any $f\in B_+(E)$ and $t\ge0$
\begin{equation}\label{3.10}
E_{\eta} \bigl[e^{-\langle\eta_t, f\rangle} \bigr] =\exp\biggl
[-\langle\eta,
V_tf\rangle-\int_0^t \bigl
\langle m, (V_sf)^{\alpha} \bigr\rangle \,ds \biggr],
\end{equation}
where $V_tf(r)=\psi(t,f(r))$. As $t\to\infty$
the right-hand side converges to
\[
\exp\biggl[-\int_0^{\infty} \bigl\langle m,
(V_tf)^{\alpha} \bigr\rangle \,dt \biggr] = \exp\bigl[- \bigl
\langle m, \log\bigl(1+f^{\alpha} \bigr) \bigr\rangle\bigr]
\]
since by (\ref{3.9})
\[
\frac{d}{dt}\log\bigl(1+ \bigl(V_tf(r) \bigr)^{\alpha}
\bigr) =- \bigl(V_tf(r) \bigr)^{\alpha}.
\]
This shows the required ergodicity and completes the proof.
\end{pf*}
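The explicit formula for $\psi(t,\lambda)$ quoted from Example 3.1 of \cite{Li} can be checked against (\ref{3.9}) by a central finite difference; the parameter values below are arbitrary.

```python
import math

def psi(t, lam, alpha):
    # explicit Psi-semigroup quoted in the proof above
    return (math.exp(-t / alpha) * lam
            / (1 + (1 - math.exp(-t)) * lam ** alpha) ** (1.0 / alpha))

alpha, lam = 0.6, 1.3             # arbitrary choices
assert abs(psi(0.0, lam, alpha) - lam) < 1e-12   # initial condition in (3.9)
for t in (0.2, 0.7, 2.0):
    h = 1e-5
    dpsi = (psi(t + h, lam, alpha) - psi(t - h, lam, alpha)) / (2 * h)
    p = psi(t, lam, alpha)
    rhs = -(p ** (1 + alpha) + p) / alpha        # right-hand side of (3.9)
    assert abs(dpsi - rhs) < 1e-7
```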
\begin{prop}\label{pr3.4}
Suppose that $m(E)>1$ and let $\widetilde{Q}_{\alpha,m}$ be as in
Proposition~\ref{pr3.3}. Then
\[
E^{\widetilde{Q}_{\alpha,m}} \bigl[\eta(E)^{-\alpha} \bigr] = \bigl
(\Gamma(\alpha+1)
\bigl(m(E)-1 \bigr) \bigr)^{-1}.
\]
Moreover,
\begin{equation}
\widetilde{P}_{\alpha,m}(\cdot) = \Gamma(\alpha+1) \bigl(m(E)-1 \bigr)
E^{\widetilde{Q}_{\alpha,m}} \bigl[\eta(E)^{-\alpha}; \eta(E)^{-1}\eta
\in\cdot
\bigr] \label{3.11}
\end{equation}
is a stationary distribution of the $\mathcal{A}_{\alpha,m}$-process.
\end{prop}
\begin{pf}
The first assertion is shown by using
$t^{-\alpha}=\Gamma(\alpha)^{-1}\int_0^{\infty}\,dvv^{\alpha-1}e^{-vt}$
$(t>0)$ and (\ref{3.8}) with $f\equiv v$. Indeed, these equalities
together with Fubini's theorem yield
\begin{eqnarray*}
E^{\widetilde{Q}_{\alpha,m}} \bigl[\eta(E)^{-\alpha} \bigr] &=& \Gamma(
\alpha)^{-1}\int_0^{\infty}\,dvv^{\alpha-1}
\exp\bigl[-m(E)\log\bigl(1+v^{\alpha} \bigr) \bigr]
\\
&=& \Gamma(\alpha+1)^{-1}\int_0^{\infty}\,dz
\exp\bigl[-m(E)\log(1+z) \bigr]
\\
&=& \Gamma(\alpha+1)^{-1} \bigl(m(E)-1 \bigr)^{-1}.
\end{eqnarray*}
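The chain of equalities above can be cross-checked numerically (a sketch of a sanity check only, not part of the proof): after substituting $v=e^s$, the first integral becomes a smooth, rapidly decaying function of $s$, so a plain trapezoidal sum reproduces $(\Gamma(\alpha+1)(m(E)-1))^{-1}$. The values $\alpha=0.5$ and $m(E)=3$ are arbitrary.

```python
import math

alpha, mE = 0.5, 3.0  # illustrative parameters with mE = m(E) > 1

# Gamma(alpha)^{-1} * int_0^infty v^{alpha-1} (1+v^alpha)^{-mE} dv; after v = exp(s)
# the integrand becomes exp(alpha*s) * (1 + exp(alpha*s))**(-mE), smooth and decaying.
h = 0.01
integral = h * sum(
    math.exp(alpha * s) / (1.0 + math.exp(alpha * s)) ** mE
    for s in (k * h for k in range(-6000, 6001))
)
lhs = integral / math.gamma(alpha)
rhs = 1.0 / (math.gamma(alpha + 1.0) * (mE - 1.0))
```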
As in the one-dimensional case, Theorem 9.17 in Chapter~4 of \cite{EK}
reduces the proof of stationarity of (\ref{3.11}) with respect to
$\mathcal{A}_{\alpha,m}$ to showing that
\begin{equation}\label{3.12}
\int_{\mathcal{M}_1(E)} \widetilde{P}_{\alpha,m}(d\mu)
\mathcal{A}_{\alpha,m}\Phi(\mu)=0
\end{equation}
for any $\Phi$ of the form
$\Phi(\mu)=\langle\mu,f_1\rangle\cdots\langle\mu,f_n\rangle$ with
$f_i\in C(E)$ and $n$ being a positive integer. Without any loss of
generality, we can assume that $0\le f_i(x) \le1$ for any $x\in E$ and
$i=1,\ldots,n$. Furthermore, we only have to consider the case where
$f_1=\cdots=f_n=:f$ because the coefficient of the monomial $t_1\cdots
t_n$ in $\langle\mu,t_1f_1+\cdots+t_nf_n\rangle^n$ equals
$n!\langle\mu,f_1\rangle\cdots\langle\mu,f_n\rangle$. Thus, we let
$\Phi(\mu)=\langle\mu,f\rangle^n$ with $0\le f(x) \le1$ for any $x\in
E$. Because of the basic relation (\ref{3.2}) and (\ref{3.11})
together, (\ref{3.12}) can be rewritten as
\begin{equation}\label{3.13}
\int_{\mathcal{M}(E)^{\circ}} \widetilde{Q}_{\alpha,m}(d\eta)
\mathcal{L}_{\alpha,m}\Psi(\eta)=0,
\end{equation}
where $\Psi(\eta)=\langle\eta,f\rangle^n\langle\eta,1\rangle^{-n}$. The
main difficulty comes from the fact that $\Psi$ does not belong to
$\mathcal{F}$. For each $\varepsilon>0$, introduce
$\Psi_{\varepsilon}(\eta):= \langle\eta,f\rangle^n
(\langle\eta,1\rangle+\varepsilon)^{-n}$ and observe that
$\Psi_{\varepsilon}\in\mathcal{F}$. Thanks to Proposition~\ref{pr3.3},
we then have (\ref{3.13}) with $\Psi_{\varepsilon}$ in place of $\Psi$
provided that $\mathcal{L}_{\alpha,m}\Psi_{\varepsilon}$ is bounded.
Thus, the proof of (\ref{3.13}) reduces to showing the following two
assertions:
\begin{longlist}[(ii)]
\item[(i)] For every $\varepsilon>0$,
$\mathcal{L}_{\alpha,m}^{(1)}\Psi_{\varepsilon}$,
$\mathcal{L}_{\alpha,m}^{(2)}\Psi_{\varepsilon}$ and
$\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}$ are bounded
functions on~$\mathcal{M}(E)$.
\item[(ii)] It holds that for each $k\in\{1,2,3\}$
\begin{equation}
\quad \lim_{\varepsilon\downarrow0}\int_{\mathcal{M}(E)^{\circ}}
\widetilde{Q}_{\alpha,m}(d\eta)\mathcal{L}_{\alpha,m}^{(k)}\Psi
_{\varepsilon}(\eta) = \int_{\mathcal{M}(E)^{\circ}} \widetilde{Q}_{\alpha,m}(d
\eta)\mathcal{L}_{\alpha,m}^{(k)}\Psi(\eta). \label{3.14}
\end{equation}
\end{longlist}
Here, $\mathcal{L}_{\alpha,m}
=\mathcal{L}_{\alpha,m}^{(1)}+\mathcal{L}_{\alpha,m}^{(2)}+\mathcal
{L}_{\alpha,m}^{(3)}$,
and the operators $\mathcal{L}_{\alpha,m}^{(1)}$,
$\mathcal{L}_{\alpha,m}^{(2)}$ and $\mathcal{L}_{\alpha,m}^{(3)}$
correspond, respectively, to the first, second and last term on the
right-hand side of~(\ref{3.7}).
First, we consider $\mathcal{L}_{\alpha,m}^{(2)}$.
Observe that
\begin{eqnarray}\label{3.15}
\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r) &=& \frac{nf(r)\langle\eta,f\rangle^{n-1}}{
(\langle\eta,1\rangle+\varepsilon)^n} -\frac{n\langle\eta,f\rangle^n} {(\langle\eta,1\rangle+\varepsilon)^{n+1}}
\nonumber\\[-8pt]\\[-8pt]
&=& \frac{n(f(r)\langle\eta,1\rangle-\langle\eta,f\rangle+\varepsilon
f(r) )\langle\eta,f\rangle^{n-1}} {
(\langle\eta,1\rangle+\varepsilon)^{n+1}},\nonumber
\end{eqnarray}
from which it follows that
\begin{eqnarray*}
\alpha\mathcal{L}_{\alpha,m}^{(2)}\Psi_{\varepsilon}(\eta) &=&
- \biggl\langle\eta,\frac{\delta\Psi_{\varepsilon}}{\delta\eta} \biggr
\rangle
\\
&=& -\frac{n (\langle\eta,f\rangle\langle\eta,1\rangle -\langle\eta,f\rangle\langle\eta,1\rangle+\varepsilon\langle
\eta,f\rangle ) \langle\eta,f\rangle^{n-1}} {(\langle\eta,1\rangle+\varepsilon)^{n+1}}
\\
&=& -n\varepsilon\frac{\Psi_{\varepsilon}(\eta)}{\langle\eta,1\rangle
+\varepsilon}.
\end{eqnarray*}
Hence, $\mathcal{L}_{\alpha,m}^{(2)}\Psi_{\varepsilon}$ is a bounded
function on $\mathcal{M}(E)$ and
$\mathcal{L}_{\alpha,m}^{(2)}\Psi_{\varepsilon}(\eta)\to
0=\mathcal{L}_{\alpha,m}^{(2)}\Psi(\eta)$ boundedly as
$\varepsilon\downarrow0$. This proves that (i)~and~(ii) hold true for~$\mathcal{L}_{\alpha,m}^{(2)}$.
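The closed form just obtained for $\alpha\mathcal{L}_{\alpha,m}^{(2)}\Psi_{\varepsilon}$ can be checked numerically on a finitely supported $\eta$, approximating the variational derivative by a central difference in the atom weights. The atoms, weights and values of $f$ below are arbitrary; the snippet is only an illustration, not part of the proof.

```python
# Check: -<eta, dPsi_eps/d eta> = -n*eps*Psi_eps(eta)/(<eta,1>+eps)
n, eps = 3, 0.2
weights = [0.7, 1.1, 0.4]   # eta = sum_i weights[i] * delta_{r_i}
fvals   = [0.3, 0.9, 0.5]   # f(r_i), with 0 <= f <= 1

def Psi_eps(w):
    m1 = sum(w)                                      # <eta, 1>
    mf = sum(wi * fi for wi, fi in zip(w, fvals))    # <eta, f>
    return mf ** n / (m1 + eps) ** n

h = 1e-6
def var_deriv(i):
    # (delta Psi_eps / delta eta)(r_i) by central difference
    up = list(weights); up[i] += h
    dn = list(weights); dn[i] -= h
    return (Psi_eps(up) - Psi_eps(dn)) / (2.0 * h)

lhs = -sum(wi * var_deriv(i) for i, wi in enumerate(weights))
rhs = -n * eps * Psi_eps(weights) / (sum(weights) + eps)
```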
In calculating $\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}$,
(\ref{3.15}) is useful since
$\frac{d}{dz}\Psi_{\varepsilon}(\eta+z\delta_r)=
\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+z\delta_r)}(r)$. Indeed, by
Fubini's theorem
\begin{eqnarray}\label{3.16}
&& \int_0^{\infty} \frac{dz}{z^{1+\alpha}} \bigl[
\Psi_{\varepsilon}(\eta+z\delta_r)-\Psi_{\varepsilon}(\eta)
\bigr]\nonumber
\\
&&\qquad = \int_0^{\infty} \frac{dz}{z^{1+\alpha}} \int
_0^z\,dw\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+w\delta_r)}(r)
\\
&&\qquad = \frac{1}{\alpha}\int_0^{\infty}
w^{-\alpha}\,dw\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+w\delta_r)}(r)\nonumber
\end{eqnarray}
and combining with (\ref{3.15}) yields
\begin{eqnarray}\label{3.17}
&&{\fontsize{10pt}{12pt}\selectfont{\mbox{$\displaystyle{\biggl\llvert\int_0^{\infty}
\frac{dz}{z^{1+\alpha}} \bigl[\Psi_{\varepsilon}(\eta+z\delta_r)-
\Psi_{\varepsilon}(\eta) \bigr] \biggr\rrvert}$}}}\nonumber
\\
&&\hspace*{-3pt}{\fontsize{10pt}{12pt}\selectfont{\mbox{$\displaystyle{\qquad \le \frac{1}{\alpha}\int_0^{\infty}w^{-\alpha}\,dw
\frac{n \llvert f(r)\langle\eta+w\delta_r,1\rangle
-\langle\eta+w\delta_r,f\rangle+\varepsilon f(r)\rrvert
\langle\eta+w\delta_r,f\rangle^{n-1}} {(\langle\eta+w\delta_r,1\rangle+\varepsilon)^{n+1}}}$}}}
\nonumber\hspace*{-21pt}
\\
&&\hspace*{-3pt}{\fontsize{10pt}{12pt}\selectfont{\mbox{$\displaystyle{\qquad \le \frac{n}{\alpha}\int_0^{\infty}w^{-\alpha}\,dw
\frac{1}{\langle\eta,1\rangle+w+\varepsilon}}$}}}
\\
&&\hspace*{-3pt}{\fontsize{10pt}{12pt}\selectfont{\mbox{$\displaystyle{\qquad = \frac{n}{\alpha}\int_0^{\infty}w^{-\alpha}\,dw
\int_0^{\infty}\,dv e^{-v(\langle\eta,1\rangle+w+\varepsilon)}}$}}}
\nonumber
\\
&&\hspace*{-3pt}{\fontsize{10pt}{12pt}\selectfont{\mbox{$\displaystyle{\qquad = n\frac{\Gamma(\alpha)\Gamma(1-\alpha)}{\alpha} \bigl(\langle\eta,1\rangle+\varepsilon
\bigr)^{-\alpha}.}$}}}\nonumber
\end{eqnarray}
This shows not only that
$\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}$ is bounded but also
\[
\bigl|\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}(\eta)\bigr| \le n
\Gamma(\alpha) \cdot\frac{\langle m,1\rangle}{\langle\eta,1\rangle
^{\alpha}},
\]
which is integrable with respect to $\widetilde{Q}_{\alpha,m}$ as
proved already. It can be seen also from (\ref{3.15}) and (\ref{3.16})
that $\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}$ converges
pointwise to $\mathcal{L}_{\alpha,m}^{(3)}\Psi$ as
$\varepsilon\downarrow0$. By Lebesgue's dominated convergence theorem
we have proved (\ref{3.14}) for $\mathcal{L}_{\alpha,m}^{(3)}$.
The final task is to deal with $\mathcal{L}_{\alpha,m}^{(1)}\Psi
_{\varepsilon}$. Similar to (\ref{3.16})
\begin{eqnarray*}
I_{\varepsilon}(\eta,r) &:= & \int_0^{\infty}
\frac{dz}{z^{2+\alpha}} \biggl[\Psi_{\varepsilon}(\eta+z\delta_r)-
\Psi_{\varepsilon}(\eta) -z\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r)
\biggr]
\\
&=& \int_0^{\infty} \frac{dz}{z^{2+\alpha}} \int
_0^z\,dw \biggl[\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+w\delta_r)}(r) -
\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r) \biggr]
\\
&=& \frac{1}{1+\alpha}\int_0^{\infty}
\frac{dw}{w^{1+\alpha}} \biggl[\frac{\delta\Psi_{\varepsilon}}{\delta(\eta
+w\delta_r)}(r) -\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r) \biggr].
\end{eqnarray*}
By (\ref{3.15})
$\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+w\delta_r)}(r)
-\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r)$ equals
{\fontsize{10.3pt}{12pt}\selectfont\begin{eqnarray*}
\hspace*{-4pt}&& \frac{(\langle\eta,1\rangle+\varepsilon)^{n+1}n
(f(r)\langle\eta,1\rangle-\langle\eta,f\rangle+\varepsilon
f(r) )[\langle\eta+w\delta_r,f\rangle^{n-1}
-\langle\eta,f\rangle^{n-1} ]} {
(\langle\eta,1\rangle+w+\varepsilon)^{n+1}
(\langle\eta,1\rangle+\varepsilon)^{n+1}}
\\
\hspace*{-4pt}&&\quad{} + \frac{[(\langle\eta,1\rangle+\varepsilon)^{n+1}
-(\langle\eta,1\rangle+w+\varepsilon)^{n+1} ]
n (f(r)\langle\eta,1\rangle-\langle\eta,f\rangle
+\varepsilon f(r) )\langle\eta,f\rangle^{n-1}} {
(\langle\eta,1\rangle+w+\varepsilon)^{n+1}
(\langle\eta,1\rangle+\varepsilon)^{n+1}}.
\end{eqnarray*}}
Moreover, we have bounds
\begin{eqnarray*}
\bigl\llvert\langle\eta+w\delta_r,f\rangle^{n-1} -
\langle\eta,f\rangle^{n-1} \bigr\rrvert &=& \biggl\llvert\int
_0^w\,dv(n-1)f(r) \langle\eta+v \delta_r,f\rangle^{n-2} \biggr\rrvert
\\
&\le& w(n-1) \bigl(\langle\eta,1\rangle+w \bigr)^{n-2}
\end{eqnarray*}
and
\begin{eqnarray*}
\bigl\llvert\bigl(\langle\eta,1\rangle+\varepsilon\bigr)^{n+1} -
\bigl( \langle\eta,1\rangle+w+\varepsilon\bigr)^{n+1} \bigr\rrvert
&=& (n+1)\int_0^w\,dv \bigl(\langle\eta,1\rangle
+v+\varepsilon\bigr)^{n}
\\
&\le& w(n+1) \bigl(\langle\eta,1\rangle+w+\varepsilon\bigr)^{n}.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
&& \biggl\llvert\frac{\delta\Psi_{\varepsilon}}{\delta(\eta+w\delta_r)}(r)
-\frac{\delta\Psi_{\varepsilon}}{\delta\eta}(r) \biggr\rrvert
\\
&&\qquad \le w \frac{n(\langle\eta,1\rangle+\varepsilon)^{n+2}
(n-1)(\langle\eta,1\rangle+w)^{n-2}} {
(\langle\eta,1\rangle+w+\varepsilon)^{n+1}
(\langle\eta,1\rangle+\varepsilon)^{n+1}}
\\
&&\quad\qquad{} +w\frac{(n+1)(\langle\eta,1\rangle+w+\varepsilon)^{n}
n(\langle\eta,1\rangle+\varepsilon)\langle\eta,1\rangle^{n-1}} {
(\langle\eta,1\rangle+w+\varepsilon)^{n+1}
(\langle\eta,1\rangle+\varepsilon)^{n+1}}
\\
&&\qquad \le w\frac{2n^2}{(\langle\eta,1\rangle+w+\varepsilon)
(\langle\eta,1\rangle+\varepsilon)}.
\end{eqnarray*}
Therefore, analogous calculations to those in (\ref{3.17}) lead to
\begin{eqnarray*}
\bigl\llvert\mathcal{L}_{\alpha,m}^{(1)}\Psi_{\varepsilon}(
\eta) \bigr\rrvert &=& \biggl\llvert\frac{\alpha+1}{\Gamma(1-\alpha
)} \int _E I_{\varepsilon}(\eta,r)\eta(dr) \biggr\rrvert
\\
&\le& 2n^2\Gamma(\alpha) \bigl(\langle\eta,1\rangle+\varepsilon
\bigr)^{-\alpha} \cdot\frac{\langle\eta,1\rangle}{\langle\eta,1\rangle +\varepsilon}.
\end{eqnarray*}
This makes it possible to argue as in the case of
$\mathcal{L}_{\alpha,m}^{(3)}\Psi_{\varepsilon}$ to verify (i) and (ii)
for~$\mathcal{L}_{\alpha,m}^{(1)}$. We complete the proof of
Proposition~\ref{pr3.4}.
\end{pf}
Next, we show the coincidence of two distributions
(\ref{3.3}) [or (\ref{3.11})] and (\ref{3.6}).
Before going to the proof, it is worth noting that
\begin{equation}\label{3.18}
{P}_{\alpha,m}(\cdot) =\int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d
\mu)\mathcal{D}^{(\alpha,\alpha)}_{\mu}(\cdot),
\end{equation}
where in general, for $\theta>-\alpha$ and $m\in\mathcal{M}(E)$,
$\mathcal{D}^{(\alpha,\theta)}_{m}$ is the law of the two-parameter
generalization of the Dirichlet random measure with parameter
$(\alpha,\theta)$ and parameter measure $m$ defined by
\[
\mathcal{D}^{(\alpha,\theta)}_{m}(\cdot) = \frac{\Gamma(\theta
+1)}{\Gamma((\theta/\alpha)+1)}
{E}^{Q_{\alpha,m}} \bigl[\eta(E)^{-\theta}; \eta(E)^{-1}\eta\in
\cdot\bigr].
\]
(See, e.g., Section~5 of \cite{VYT04}.)
We will make use of the identity
\begin{equation}\label{3.19}
\qquad\qquad \int_{\mathcal{M}_1(E)}\mathcal{D}^{(\alpha,\alpha)}_{m}(d\mu)
\langle\mu, 1+f\rangle^{-\alpha} = \bigl\langle m, (1+f)^{\alpha}
\bigr\rangle^{-1},\qquad f\in B_+(E).
\end{equation}
This is a special case of Theorem 4 in \cite{VYT04} and
can be shown as follows:
\begin{eqnarray*}
&& \int_{\mathcal{M}_1(E)}\mathcal{D}^{(\alpha,\alpha)}_{m}(d\mu)
\langle\mu, 1+f\rangle^{-\alpha}
\\
&&\qquad = \Gamma(\alpha+1)E^{{Q}_{\alpha,m}}
\bigl[\langle\eta,1\rangle^{-\alpha} \bigl(1+\langle\eta,1\rangle
^{-1}\langle\eta,f\rangle\bigr)^{-\alpha} \bigr]
\\
&&\qquad = \Gamma(\alpha+1)E^{{Q}_{\alpha,m}} \bigl[\langle\eta,1+f\rangle
^{-\alpha} \bigr]
\\
&&\qquad = \alpha\int_0^{\infty}\,dvv^{\alpha-1} \exp
\bigl[-v^{\alpha} \bigl\langle m,(1+f)^{\alpha} \bigr\rangle\bigr]
\\
&&\qquad = \bigl\langle m, (1+f)^{\alpha} \bigr\rangle^{-1}.
\end{eqnarray*}
\begin{lemma}\label{le3.5}
If $m(E)>1$, then $\widetilde{P}_{\alpha,m}$ in (\ref{3.11})
coincides with ${P}_{\alpha,m}$ in~(\ref{3.6}).
\end{lemma}
\begin{pf}
It suffices to show that for any $f\in B_+(E)$
\begin{eqnarray*}
\widetilde{I}(f)&:=&\int_{\mathcal{M}_1(E)}\widetilde{P}_{\alpha,m}(d
\mu) \langle\mu,1+f\rangle^{-\alpha}
\\
&=& \int_{\mathcal{M}_1(E)}{P}_{\alpha,m}(d\mu) \langle\mu,1+f\rangle^{-\alpha}=:I(f).
\end{eqnarray*}
In view of (\ref{3.11}), calculations similar to the proof of
(\ref{3.19}) show that
\begin{eqnarray*}
&& \bigl(\Gamma(\alpha+1) \bigl(m(E)-1 \bigr)
\bigr)^{-1}\widetilde{I}(f)
\\
&&\qquad = E^{\widetilde{Q}_{\alpha,m}} \bigl [\langle\eta,1+f \rangle^{-\alpha}
\bigr]
\\
&&\qquad = \Gamma(\alpha)^{-1}\int_0^{\infty}\,dvv^{\alpha-1}
\exp\bigl[- \bigl\langle m,\log\bigl(1+v^{\alpha}(1+f)^{\alpha}
\bigr) \bigr\rangle\bigr]
\\
&&\qquad = \Gamma(\alpha+1)^{-1}\int_0^{\infty}\,dz
\exp\bigl[- \bigl\langle m,\log\bigl(1+z(1+f)^{\alpha} \bigr) \bigr
\rangle\bigr]
\\
&&\qquad = \frac{1}{\Gamma(\alpha+1)}\int_0^{1}\,du(1-u)^{-2}
\exp\biggl[- \biggl\langle m,\log\biggl(1+\frac{u}{1-u}(1+f)^{\alpha}
\biggr) \biggr\rangle\biggr]
\\
&&\qquad = \frac{1}{\Gamma(\alpha+1)}\int_0^{1}\,du(1-u)^{m(E)-2}
\exp\bigl[- \bigl\langle m,\log\bigl(1+u \bigl((1+f)^{\alpha}-1 \bigr)
\bigr) \bigr\rangle\bigr]
\\
&&\qquad = \frac{1}{\Gamma(\alpha+1)}\int_0^{1}\,du(1-u)^{m(E)-2}
\\
&&\quad\qquad{}\times \int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d\mu) \bigl\langle\mu,
1+u \bigl((1+f)^{\alpha}-1 \bigr)\bigr\rangle^{-m(E)},
\end{eqnarray*}
where the last equality follows from (\ref{3.5}).
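The passage between the fourth and fifth lines of the display above rests on the pointwise identity $1+\frac{u}{1-u}c=\frac{1+u(c-1)}{1-u}$ with $c=(1+f(r))^{\alpha}$. Its scalar consequence, with $\theta$ playing the role of $m(E)$, can be verified directly (a sanity check only; the test points are arbitrary).

```python
# (1-u)^{-2} * (1 + (u/(1-u))*c)^{-theta} = (1-u)^{theta-2} * (1 + u*(c-1))^{-theta}
def side_l(u, c, theta):
    return (1.0 - u) ** (-2) * (1.0 + (u / (1.0 - u)) * c) ** (-theta)

def side_r(u, c, theta):
    return (1.0 - u) ** (theta - 2.0) * (1.0 + u * (c - 1.0)) ** (-theta)

checks = [(0.25, 1.7, 2.5), (0.6, 0.4, 3.0), (0.9, 5.0, 1.2)]
max_gap = max(abs(side_l(u, c, t) - side_r(u, c, t)) for u, c, t in checks)
```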
Hence, by applying Fubini's theorem and (\ref{2.4})
\begin{eqnarray*}
\widetilde{I}(f) &=& \int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d\mu) \int
_0^1\frac{B_{1,m(E)-1}(du)} {
\langle\mu, 1+u((1+f)^{\alpha}-1)\rangle^{m(E)}}
\\
&=& \int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d\mu) \bigl
\langle\mu, (1+f)^{\alpha} \bigr\rangle^{-1}.
\end{eqnarray*}
On the other hand, combining (\ref{3.18}) with (\ref{3.19}),
we get
\begin{equation}\label{3.20}
I(f)=\int_{\mathcal{M}_1(E)}\mathcal{D}_{m}(d\mu) \bigl
\langle\mu, (1+f)^{\alpha} \bigr\rangle^{-1}
\end{equation}
and therefore $I(f)=\widetilde{I}(f)$ as desired.
\end{pf}
\begin{rem*}
The ``semi-explicit'' form (\ref{3.18}) becomes explicit if $m$ is a
probability measure. More precisely, we have
$P_{\alpha,\nu}=\mathcal{D}_{\alpha\nu}$ for any $\nu\in
\mathcal{M}_1(E)$. Indeed, observe that by (\ref{3.20}) with $m=\nu$
\begin{eqnarray*}
\int_{\mathcal{M}_1(E)}{P}_{\alpha,\nu}(d\mu)\langle\mu, 1+f\rangle
^{-\alpha} &=& \int_{\mathcal{M}_1(E)}\mathcal{D}_{\nu}(d
\mu) \bigl\langle\mu, (1+f)^{\alpha} \bigr\rangle^{-1}
\\
&=& \exp\bigl[- \bigl\langle\nu, \log\bigl\{(1+f)^{\alpha} \bigr\}
\bigr\rangle\bigr]
\\
&=& \exp\bigl[- \bigl\langle\alpha\nu, \log(1+f) \bigr\rangle\bigr]
\\
&=& \int_{\mathcal{M}_1(E)}{\mathcal{D}}_{\alpha\nu}(d\mu)\langle
\mu, 1+f\rangle^{-\alpha},
\end{eqnarray*}
where (\ref{3.5}) has been applied twice. [A one-dimensional version of
the identity $P_{\alpha,\nu}=\mathcal {D}_{\alpha\nu}$ is mentioned in
Remark (ii) at the end of Section~\ref{sec2}.] By (\ref{3.18}) what we
have just seen is rewritten as
\[
\int_{\mathcal{M}_1(E)}\mathcal{D}_{\nu}(d\mu)\mathcal
{D}^{(\alpha,\alpha)}_{\mu}(\cdot) = \mathcal{D}_{\alpha\nu}(\cdot),
\]
which is a special case of
\[
\int_{\mathcal{M}_1(E)}\mathcal{D}^{(\beta,\theta/\alpha)}_{\nu
}(d\mu)
\mathcal{D}^{(\alpha,\theta)}_{\mu}(\cdot) = \mathcal{D}^{(\alpha\beta,\theta)}_{\nu}(\cdot), \qquad\beta\in[0,1), \theta>-\alpha\beta.
\]
Here notice that, in case $\beta=0$,
$\mathcal{D}^{(0,\theta)}_{\nu}=\mathcal{D}_{\theta\nu}$ by definition.
This generalization can be proved analogously by virtue of the
two-parameter generalization of (\ref{3.5}) and (\ref{3.19}). (See,
e.g., Theorem 4 in \cite{VYT04}.)
\end{rem*}
We can now prove our main result, Theorem~\ref{th3.2}. In the proof, we
write $\theta\nu$ [$\theta>0$, $\nu\in\mathcal{M}_1(E)$] for the
parameter measure $m$.
\begin{pf*}{Proof of Theorem~\ref{th3.2}}
Let $\nu\in\mathcal{M}_1(E)$ be
given. We first show that, for arbitrary $\theta>0$,
$P_{\alpha,\theta\nu}$ is a stationary distribution of the
$\mathcal{A}_{\alpha,\theta\nu}$-process. For the same reason as in the
proof of Proposition~\ref{pr3.4} [cf.~(\ref{3.12})], it is sufficient
to prove that
\begin{equation}\label{3.21}
\int_{\mathcal{M}_1(E)} {P}_{\alpha,\theta\nu}(d\mu)\mathcal{A}_{\alpha,\theta\nu}
\Phi(\mu)=0
\end{equation}
for $\Phi$ of the form $\Phi(\mu)=\langle\mu,f\rangle^n$ with $f\in
C(E)$ and $n$ being a positive integer. Since Proposition~\ref{pr3.4}
and Lemma~\ref{le3.5} together imply that (\ref{3.21}) holds true for
any $\theta>1$, it is enough to show that the left-hand side of
(\ref{3.21}) defines a real analytic function of $\theta>0$. We claim
that
\begin{eqnarray}\label{3.22}
\hspace*{27pt}&& \mathcal{A}_{\alpha,\theta\nu}\Phi(\mu)\nonumber
\\
&&\qquad = \frac{1}{\Gamma(n)} \sum
_{k=2}^n \pmatrix{n\cr k} (1-\alpha)_{k-2} (\alpha+1)_{n-k} \bigl( \bigl\langle
\mu,f^k \bigr\rangle\langle\mu,f\rangle^{n-k}-\langle\mu,f\rangle^{n} \bigr)\nonumber
\\
&&\quad\qquad{} + \frac{\theta}{(\alpha+1)\Gamma(n)}\nonumber
\\
&&\hphantom{\quad\qquad{} +}{}\times \sum_{k=1}^n \pmatrix{n
\cr k} (1-\alpha)_{k-1} (\alpha)_{n-k} \bigl( \bigl\langle
\nu,f^k \bigr\rangle\langle\mu,f\rangle^{n-k}-\langle\mu,f\rangle
^{n} \bigr)
\\
&&\qquad = \frac{1}{\Gamma(n)} \sum_{k=2}^n \pmatrix{n
\cr k} (1-\alpha)_{k-2} (\alpha+1)_{n-k} \bigl\langle
\mu,f^k \bigr\rangle\langle\mu,f\rangle^{n-k}\nonumber
\\
&&\quad\qquad{} + \frac{\theta}{(\alpha+1)\Gamma(n)} \sum_{k=1}^n \pmatrix{n
\cr k} (1-\alpha)_{k-1} (\alpha)_{n-k} \bigl\langle
\nu,f^k \bigr\rangle\langle\mu,f\rangle^{n-k}\nonumber
\\
&&\quad\qquad{} -\frac{(\alpha+1)_{n-1}}{(\alpha+1)\Gamma(n)} (\theta+n-1)\langle
\mu,f\rangle^{n}.\nonumber
\end{eqnarray}
The first equality is a special case of (\ref{3.1a}),
and the second one can be shown with the help of Leibniz's formula
\[
(\phi_1\phi_2)^{(n)}(0)=\sum
_{k=0}^n \pmatrix{n\cr k} \phi_1^{(n-k)}(0)
\phi_2^{(k)}(0)
\]
for $\phi_1(t)=(1-t)^{-a}$ and $\phi_2(t)=(1-t)^{-b}$ with
$(a,b)=(\alpha+1,-\alpha-1)$ or $(a,b)=(\alpha,-\alpha)$. In view of
(\ref{3.22}), it is clear that the proof reduces to verifying real
analyticity of $\int
{P}_{\alpha,\theta\nu}(d\mu)\langle\mu,f_1\rangle\cdots
\langle\mu,f_n\rangle$ in $\theta$ for arbitrary $f_1,\ldots,f_n\in
C(E)$.
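Since $\phi(t)=(1-t)^{-a}$ has $\phi^{(m)}(0)=(a)_m$ (the rising factorial) and $\phi_1\phi_2\equiv1$ when $b=-a$, Leibniz's formula yields the vanishing sums $\sum_{k=0}^n\binom{n}{k}(a)_{n-k}(-a)_k=0$ for every $n\ge1$, which is the mechanism exploited in the computation above. A numerical confirmation (illustrative only, with $\alpha=0.35$ arbitrary):

```python
from math import comb

def poch(x, m):
    # rising factorial (x)_m = x(x+1)...(x+m-1), with (x)_0 = 1
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

alpha = 0.35
gaps = []
for a in (alpha + 1.0, alpha):   # the cases (a,b) = (alpha+1,-alpha-1) and (alpha,-alpha)
    for n in range(1, 7):
        s = sum(comb(n, k) * poch(a, n - k) * poch(-a, k) for k in range(n + 1))
        gaps.append(abs(s))
max_gap = max(gaps)
```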
To this end, we shall exploit the following identity
which is equivalent to~(\ref{3.20}):
\begin{equation}\label{3.23}
\qquad\quad\int_{\mathcal{M}_1(E)} {P}_{\alpha,\theta\nu}(d\mu)\langle\mu,1+f\rangle
^{-\alpha} = \int_{\mathcal{M}_1(E)}\mathcal{D}_{\theta\nu}(d\mu)
\bigl\langle\mu, (1+f)^{\alpha} \bigr\rangle^{-1},
\end{equation}
where $f\in B_+(E)$ is arbitrary. Clearly, this remains true for all
bounded Borel functions $f$ on $E$ such that $\inf_{r\in E}f(r)>-1$.
Therefore, for any $t_1,\ldots,t_n\in\mathbf{R}$ with
$|t_1|+\cdots+|t_n|$ being sufficiently small, (\ref{3.23}) for
$f=-\sum_{i=1}^nt_if_i$ is valid, that is,
$I(t_1,\ldots,t_n)=J(t_1,\ldots,t_n)$, where
\begin{equation}\label{3.24}
I(t_1,\ldots,t_n) =\int_{\mathcal{M}_1(E)}
{P}_{\alpha,\theta\nu}(d\mu) \Biggl(1- \Biggl\langle\mu,\sum
_{i=1}^nt_if_i \Biggr\rangle\Biggr)^{-\alpha}
\end{equation}
and
\begin{equation}\label{3.25}
J(t_1,\ldots,t_n) = \int_{\mathcal{M}_1(E)}
\mathcal{D}_{\theta\nu}(d\mu) \Biggl\langle\mu, \Biggl(1-\sum
_{i=1}^nt_if_i
\Biggr)^{\alpha} \Biggr\rangle^{-1}.
\end{equation}
Noting that $(1-t)^{-\alpha}=1+\sum_{k=1}^{\infty}(\alpha)_kt^k/k!$ as
long as $|t|$ is small enough, we see from (\ref{3.24}) that the
coefficient of the monomial $t_1\cdots t_n$ in the expansion of
$I(t_1,\ldots,t_n)$ is given by
\begin{equation}\label{3.26}
(\alpha)_n\int_{\mathcal{M}_1(E)} {P}_{\alpha,\theta\nu}(d\mu)
\langle\mu,f_1\rangle\cdots\langle\mu,f_n\rangle.
\end{equation}
To find the corresponding coefficient for $J(t_1,\ldots,t_n)$,
define
\[
h_{\alpha}(t)=1-(1-t)^{\alpha}= \alpha\sum
_{l=1}^{\infty}(1-\alpha)_{l-1}t^l/l!
\]
and observe from (\ref{3.25}) that
$J(t_1,\ldots,t_n)$ equals
\begin{eqnarray*}
&& \int_{\mathcal{M}_1(E)}\mathcal{D}_{\theta\nu}(d\mu) \Biggl
\langle\mu, 1-h_{\alpha} \Biggl(\sum_{i=1}^nt_if_i
\Biggr) \Biggr\rangle^{-1}
\\
&&\quad = 1+ \sum_{k=1}^{\infty} \int
_{\mathcal{M}_1(E)}\mathcal{D}_{\theta\nu}(d\mu) \Biggl\langle\mu,
h_{\alpha} \Biggl(\sum_{i=1}^nt_if_i
\Biggr) \Biggr\rangle^{k}
\\
&&\quad = 1+ \sum_{k=1}^{\infty}
\alpha^k \int_{\mathcal{M}_1(E)}\mathcal{D}_{\theta\nu}(d
\mu) \sum_{l_1,\ldots,l_k=1}^{\infty} \prod
_{j=1}^k \Biggl\{\frac{(1-\alpha)_{l_j-1}}{l_j!} \Biggl\langle
\mu, \Biggl(\sum_{i=1}^nt_if_i
\Biggr)^{l_j} \Biggr\rangle\Biggr\}.
\end{eqnarray*}
One can see that
the coefficient of the monomial $t_1\cdots t_n$
in the expansion of $J(t_1,\ldots,t_n)$ can be expressed as
\begin{equation}\label{3.27}
\qquad\sum_{k=1}^n\alpha^k k!\sum
_{\gamma\in\pi(n,k)} \int_{\mathcal{M}_1(E)}
\mathcal{D}_{\theta\nu}(d\mu) \prod_{j=1}^k
\biggl\{\frac{(1-\alpha)_{|\gamma_j|-1}}{|\gamma_j|!} \biggl\langle\mu,\prod_{i\in\gamma_j}f_i
\biggr\rangle\biggr\},
\end{equation}
where $\pi(n,k)$ is the set of partitions $\gamma$ of $\{1,\ldots,n\}$
into $k$ unordered nonempty subsets $\gamma_1,\ldots,\gamma_k$. By
Lemma~2.2 of \cite{E} (or equivalently by Lemma 2.4 of \cite{EG}), each
integral in the above sum is a real analytic function of $\theta>0$.
Hence, so is the integral in (\ref{3.26}) and the stationarity of
$P_{\alpha,\theta\nu}$ with respect to $\mathcal{A}_{\alpha,\theta\nu}$
follows.
It remains to prove the uniqueness of the stationary distribution $P$ of
the \mbox{$\mathcal{A}_{\alpha,\theta\nu}$-}pro\-cess for each $\theta>0$. But
this is an immediate consequence of (\ref{3.21}) with $P$ in place of
$P_{\alpha,\theta\nu}$ and (\ref{3.22}), which together determine
uniquely $\int P(d\mu)\langle\mu,f\rangle^n$ and hence the $n$th moment
measure
\[
M_n(dr_1\cdots dr_n): =\int
_{\mathcal{M}_1(E)} P(d\mu)\mu(dr_1)\cdots\mu(dr_n)
\]
for any $n=1,2,\ldots.$ This completes the proof of
Theorem~\ref{th3.2}.
\end{pf*}
It is not clear whether we can derive from (\ref{3.27}) an extension of
the Ewens sampling formula in some explicit and informative form. (See
Remarks after the proof of Lemma~2.2 in \cite{E}.) In view of
(\ref{3.18}), one might think that Pitman's sampling formula would be
applicable. But this is not the case, since $\mathcal{D}_m(\mu\mbox{ is
discrete})=1$. The expression (\ref{3.11}) might instead be useful for
such a purpose.
\section{Irreversibility}\label{sec4}
In this section, we discuss the reversibility of our processes. In contrast
with the Fleming--Viot diffusion case, we conjecture that for any
$0<\alpha<1$ and nondegenerate $m$ the $\mathcal{A}_{\alpha,m}$-process
is irreversible.
Unfortunately, the following result does not give
an affirmative answer in all cases.
However, this does not suggest any possibility of reversibility
in the exceptional case, which we expect can be
handled by a different choice of test functions.
\begin{theorem}\label{th4.1}
Let $m\in\mathcal{M}(E)^{\circ}$ be given.
Assume that either of the following two conditions holds.
\begin{longlist}[(ii)]
\item[(i)] The support of $m$ has at least three distinct points.
\item[(ii)] The support of $m$ has exactly two points, say $r_1$ and
$r_2$, and $m(\{r_1\})\ne m(\{r_2\})$.
\end{longlist}
Then the stationary distribution ${P}_{\alpha,m}$ of the
$\mathcal{A}_{\alpha,m}$-process is not a reversible distribution for
this process.
\end{theorem}
\begin{pf}
As in the proof of Theorem~\ref{th3.2}, we write $\theta\nu$ instead of
$m$. Thus, $\theta>0$ and $\nu\in\mathcal{M}_1(E)$. Recall that an
equivalent condition to the reversibility of $P_{\alpha,\theta\nu}$
with respect to $\mathcal{A}_{\alpha,\theta\nu}$ is the symmetry
\[
E \bigl[\Phi\mathcal{A}_{\alpha,\theta\nu}\Phi' \bigr] = E \bigl[
\Phi'\mathcal{A}_{\alpha,\theta\nu}\Phi\bigr],\qquad\Phi,
\Phi'\in\mathcal{F}_0,
\]
in which $E[\cdot]$ stands for the expectation with respect to
$P_{\alpha,\theta\nu}$. (See the proof of Theorem 2.3 in \cite{E}.) In
the rest of the proof, we suppress the suffix ``$\alpha,\theta\nu$''
for simplicity. Let $f\in C(E)$ be given and define
$\Phi_n(\mu)=\langle\mu,f\rangle^n$ for each positive integer $n$. We
are going to calculate
\begin{equation}\label{4.1}
\Delta:= E [\Phi_2\mathcal{A}\Phi_1 ]-E [
\Phi_1\mathcal{A}\Phi_2 ].
\end{equation}
For this purpose, observe from (\ref{3.22}) that
\begin{equation} \label{4.2}
\mathcal{A}\Phi_1(\mu) =\frac{\theta}{\alpha+1} \bigl(\langle\nu,f
\rangle-\langle\mu,f\rangle\bigr),
\end{equation}
\begin{eqnarray}\label{4.3}
\mathcal{A}\Phi_2(\mu)
&=& \bigl\langle\mu,f^2 \bigr\rangle+\frac{2\alpha\theta}{\alpha+1}\langle\nu,f\rangle\langle\mu,f\rangle
\nonumber\\[-8pt]\\[-8pt]
&&{} + \frac{(1-\alpha)\theta}{\alpha+1} \bigl\langle\nu,f^2 \bigr\rangle-(\theta+1)
\langle\mu,f\rangle^2\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{4.4}
&& \Gamma(3)\mathcal{A}\Phi_3(\mu)\nonumber
\\
&&\qquad = 3(\alpha+1) \bigl\langle
\mu,f^2 \bigr\rangle\langle\mu,f\rangle+(1-\alpha) \bigl\langle
\mu,f^3 \bigr\rangle\nonumber
\\
&&\quad\qquad{} +\frac{\theta}{\alpha+1}\cdot3\alpha(\alpha+1) \langle\nu,f\rangle
\langle\mu,f\rangle^2
\nonumber\\[-12pt]\\[-6pt]
&&\quad\qquad{} +\frac{\theta}{\alpha+1}\cdot3(1-\alpha)\alpha\bigl
\langle\nu,f^2 \bigr\rangle\langle\mu,f\rangle\nonumber
\\
&&\quad\qquad{} +\frac{\theta}{\alpha+1}\cdot(1-\alpha) (2-\alpha) \bigl\langle\nu,f^3 \bigr\rangle-(\alpha+2) (\theta+2)\langle\mu,f\rangle
^3.\nonumber
\end{eqnarray}
Combining (\ref{4.2}) with the stationarity $E[\mathcal{A}\Phi _1]=0$,
we get $E[\langle\mu,f\rangle]=\langle\nu,f\rangle$. Therefore, it is
possible to deduce from (\ref{4.3}) and $E[\mathcal{A}\Phi_2]=0$
\[
(\theta+1)E \bigl[\langle\mu,f\rangle^2 \bigr] =
\frac{2\alpha\theta}{\alpha+1} \langle\nu,f\rangle^2 + \biggl(1+
\frac{(1-\alpha)}{\alpha+1}\theta\biggr) \bigl\langle\nu,f^2 \bigr
\rangle.
\]
Moreover, this equality between quadratic forms is enough to imply
the one between symmetric bilinear forms:
\begin{eqnarray}\label{4.5}
&& (\theta+1)E \bigl[\langle\mu,f\rangle\langle\mu,g\rangle\bigr]
\nonumber\\[-8pt]\\[-8pt]
&&\qquad = \frac{2\alpha\theta}{\alpha+1}\langle\nu,f\rangle\langle\nu,g\rangle
+ \biggl(1+\frac{(1-\alpha)}{\alpha+1}\theta\biggr)\langle\nu,fg\rangle,\nonumber
\end{eqnarray}
where $g\in C(E)$ is also arbitrary.
In the rest of the proof, we assume that \mbox{$\langle\nu,f\rangle=0$}.
This makes the calculations below considerably simpler.
By (\ref{4.5})
\begin{equation}\label{4.6}
M_{1,2}:=E \bigl[\langle\mu,f\rangle\bigl\langle\mu,f^2
\bigr\rangle\bigr] = \frac{(\alpha+1)+(1-\alpha)\theta}{(\alpha
+1)(\theta+1)} \bigl\langle\nu,f^3 \bigr
\rangle.
\end{equation}
The equality $E[\mathcal{A}\Phi_3]=0$ together with (\ref{4.4})
implies that
\begin{eqnarray}\label{4.7}
&& (\alpha+2) (\theta+2)E \bigl[\langle\mu,f\rangle^3 \bigr]
\nonumber\\[-9pt]\\[-9pt]
&&\qquad =3(\alpha+1)M_{1,2} +(1-\alpha) \biggl(1+\frac{2-\alpha}{\alpha+1} \theta
\biggr) \bigl\langle\nu,f^3 \bigr\rangle.\nonumber
\end{eqnarray}
These preliminaries help us calculate $\Delta$ in (\ref{4.1}) as follows.
By (\ref{4.3}) and (\ref{4.4})
\begin{eqnarray*}
\Delta &=& E \biggl[\langle\mu,f\rangle^2 \biggl(-
\frac{\theta}{\alpha+1}\langle\mu,f\rangle\biggr) \biggr] -E \bigl
[\langle\mu,f\rangle\bigl( \bigl\langle\mu,f^2 \bigr\rangle-(\theta+1) \langle
\mu,f\rangle^2\bigr) \bigr]
\\
&=& \frac{(\alpha+1)+\alpha\theta}{\alpha+1}E \bigl[\langle\mu,f\rangle^3 \bigr]-M_{1,2}
\end{eqnarray*}
and hence (\ref{4.7}) yields
\begin{eqnarray*}
&& (\alpha+1) (\alpha+2) (\theta+2)\Delta
\\
&& \qquad = \bigl[(\alpha+1)+\alpha\theta\bigr] \biggl[3(\alpha+1)M_{1,2}
+(1-\alpha) \biggl(1+\frac{2-\alpha}{\alpha+1}\theta\biggr) \bigl
\langle\nu,f^3 \bigr\rangle\biggr]
\\
&&\quad\qquad{} -(\alpha+1) (\alpha+2) (\theta+2)M_{1,2}
\\
&&\qquad = (\alpha+1) (\alpha-1) (2\theta+1)M_{1,2}
\\
&&\quad\qquad{}+ \bigl[(\alpha+1)+
\alpha\theta\bigr](1-\alpha) \biggl(1+\frac{2-\alpha}{\alpha+1}\theta
\biggr) \bigl\langle\nu,f^3 \bigr\rangle.
\end{eqnarray*}
Plugging (\ref{4.6}) into this expression, we obtain
\[
(\alpha+1) (\alpha+2) (\theta+2)\Delta= \frac{1-\alpha}{(\alpha
+1)(\theta+1)} U(\alpha,\theta)
\bigl\langle\nu,f^3 \bigr\rangle,
\]
where
\begin{eqnarray*}
U(\alpha,\theta) &=& -(\alpha+1) (2\theta+1) \bigl[(\alpha
+1)+(1-\alpha)\theta\bigr]
\\
&&{} + \bigl[(\alpha+1)+\alpha\theta\bigr](\theta+1) \bigl[(\alpha
+1)+(2-\alpha)
\theta\bigr]
\\
&=& \alpha\theta^2 \bigl[(\alpha+4)+(2-\alpha)\theta\bigr] =: V(
\alpha,\theta).
\end{eqnarray*}
[The second equality between quadratic functions of $\alpha$ is
verified by checking that
$U(-1,\theta)=-3\theta^2(\theta+1)=V(-1,\theta)$,
$U(0,\theta)=0=V(0,\theta)$ and
$U(1,\theta)=\theta^2(\theta+5)=V(1,\theta)$.] Consequently, whenever
$\langle\nu,f\rangle=0$, we have
\[
\Delta=\frac{\alpha(1-\alpha)
\theta^2 [(\alpha+4)+(2-\alpha)\theta]} {
(\alpha+1)^2(\alpha+2)(\theta+1)(\theta+2)} \bigl\langle\nu,f^3 \bigr
\rangle.
\]
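The polynomial identity $U(\alpha,\theta)=V(\alpha,\theta)$ invoked above can also be confirmed numerically on arbitrary test points (a sanity check only, supplementing the three-point verification in the bracketed remark):

```python
def U(a, t):
    return (-(a + 1.0) * (2.0 * t + 1.0) * ((a + 1.0) + (1.0 - a) * t)
            + ((a + 1.0) + a * t) * (t + 1.0) * ((a + 1.0) + (2.0 - a) * t))

def V(a, t):
    # alpha * theta^2 * ((alpha+4) + (2-alpha)*theta)
    return a * t * t * ((a + 4.0) + (2.0 - a) * t)

max_gap = max(abs(U(a, t) - V(a, t))
              for a in (0.1, 0.5, 0.9)
              for t in (0.5, 1.0, 2.0, 10.0))
```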
Thus, all that remains is to construct an $f\in C(E)$ such that
$\langle\nu,f\rangle=0$ and $\langle\nu,f^3\rangle>0$. Because of the
assumption, we can choose a closed subset $E_0$ of $E$ such that
$0<\nu(E_0)<1/2$. Indeed, in the case (ii) this is trivial while in the
case (i) there exist disjoint closed subsets $E_1,E_2$ and $E_3$ of $E$
such that $\nu(E_1)\nu(E_2)\nu(E_3)>0$ and so $0<\nu(E_i)<1/2$ for some
$i\in\{1,2,3\}$. Letting $g$ denote the indicator function of $E_0$, we
observe that
\begin{eqnarray*}
\bigl\langle\nu, \bigl(g-\langle\nu,g\rangle\bigr)^3 \bigr
\rangle &=& \bigl\langle\nu,g^3 \bigr\rangle-3 \bigl\langle
\nu,g^2 \bigr\rangle\langle\nu,g\rangle+3\langle\nu,g\rangle
\langle\nu,g\rangle^2-\langle\nu,g\rangle^3
\\
&=& \nu(E_0)-3\nu(E_0)^2+2
\nu(E_0)^3
\\
&=& \nu(E_0) \bigl(1-\nu(E_0) \bigr) \bigl(1-2
\nu(E_0) \bigr)>0.
\end{eqnarray*}
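The factorization used above, $p-3p^2+2p^3=p(1-p)(1-2p)$ with $p=\nu(E_0)$, and its strict positivity for $0<p<1/2$, can be checked on a few sample values (illustrative only):

```python
def third_moment(p):
    # <nu,(g - <nu,g>)^3> = p - 3p^2 + 2p^3 for an indicator g with nu(E_0) = p
    return p - 3.0 * p ** 2 + 2.0 * p ** 3

samples = [0.05, 0.2, 0.45]
direct = [third_moment(p) for p in samples]
factored = [p * (1.0 - p) * (1.0 - 2.0 * p) for p in samples]
```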
Finally, the required $f$ exists since $g$ can be approximated
boundedly and pointwise by a sequence of functions in $C(E)$. The proof
of the theorem is complete.
\end{pf}
It is worth noting that the exceptional case of Theorem~\ref{th4.1}
corresponds to a subclass of the one-dimensional case discussed in
Section~\ref{sec1}, more specifically, the process generated by
(\ref{1.3}) with $c_1=c_2$. There is no reason why this class should be
so special with respect to reversibility, and it seems that such a
``spatial symmetry'' makes it more subtle to detect the asymmetry in time.
The actual difficulty in showing the irreversibility for these
processes along similar lines to the above proof is
that expressions of $E[\Phi_{n_1}\mathcal{A}\Phi_{n_2}]$
with $n_1+n_2\ge4$ as functions of $\alpha$~and~$\theta$
are too complicated to handle.
\section*{Acknowledgment}
The author would like to thank the referees for pointing out some
mistakes and helpful comments on the earlier version of the manuscript.
He is indebted to Professor S. Mano for the reference to \cite{BB}.
Part of the work was carried out under
the ISM Cooperative Research Program
(2012${}\cdot{}$ISM${}\cdot{}$CRP-5008).
\section{Introduction}
High-$P_T$ processes taking place in the background of the medium produced in ultrarelativistic heavy-ion (A-A) collisions are a cornerstone of the experimental A-A program at the LHC. The aim is to use these processes to do
``jet tomography'', i.e. to study both the short-distance physics of
the bulk medium (its relevant degrees of freedom) and
the geometry of its expansion.
However, attempts to extract even solid qualitative statements about the nature of the parton-medium interaction have so far not been successful. Two main reasons can be identified: 1)
For steeply falling primary parton spectra, medium-induced shifts in parton energy cannot be distinguished from parton absorption, hence observables lose sensitivity to model
details; 2) Computed observable quantities depend on assumptions made about both the bulk medium evolution and the parton-medium interaction. In this work, we propose to resolve the second ambiguity by a systematic investigation of multiple observables, while demonstrating that the first is significantly lessened for the harder LHC parton kinematics.
\section{Dependence on medium modelling}
We test different combinations of medium evolution and parton-medium interaction models against a large body of high $P_T$ observables. In particular, for the medium evolution we use a 3+1d ideal \cite{hyd3d}, a 2+1d ideal \cite{hyd2d,hydEbyE}
and a 2+1d viscous hydro code \cite{vhyd} with both CGC and Glauber initial conditions. On the parton-medium interaction side, we test a radiative energy loss model \cite{ASW}, a parametrized \cite{Elastic} and a Monte-Carlo (MC) model \cite{ElasticMC} for incoherent energy loss, a strong-coupling phenomenological model based on AdS/CFT ideas \cite{AdS} and the MC in-medium shower code YaJEM \cite{YaJEM1,YaJEM2} with its variant YaJEM-D \cite{YaJEM-D} which introduces an explicit pathlength/energy dependence into the minimum virtuality scale down to which the shower is evolved.
\begin{figure}[htb]
\epsfig{file=RAA-hydcomp-b7.49_f.eps, width=7.8cm}\epsfig{file=ang_b8.87v2_f.eps, width=7.8cm}
\caption{\label{F-1}Left panel: $R_{AA}(P_T)$ for 30--40\% central 200 AGeV Au-Au collisions for in-plane (solid) and out-of-plane (dashed) emission computed for the same energy loss model (ASW) with different hydrodynamical backgrounds, compared with PHENIX data \cite{PHENIX_RAA_phi}. Right: $R_{AA}(\phi)$ at $P_T=10$ GeV for smooth, initial-state averaged and event-by-event fluctuating hydrodynamics for different fluctuation size scales.}
\end{figure}
In a first run, we pose the question to what degree the underlying medium model is able to influence high $P_T$ observables. In Fig.~\ref{F-1} we present an example of results from a systematic investigation of both the influence of smooth hydrodynamical models \cite{JetHydSys} and event-by-event hydrodynamics \cite{hydEbyE} with initial state density fluctuations \cite{JetFluct}. Summarizing the result, we find that the medium model has a considerable (factor $\sim 2$) influence on observables such as $v_2$ at high $P_T$ or extracted parameters such as the transport coefficient $\hat{q}$. Chiefly responsible is the location of the freeze-out hypersurface --- the agreement with data in general improves if the freeze-out hypersurface is large, but noticeable effects are also caused by the initialization time or the presence/absence of viscosity. Fluctuations in the hydrodynamical initial state play a minor role for extracted parameters ($\sim$ 20\%), due to a cancellation of competing effects (see \cite{JetFluct}), but for non-central collisions the cancellation is incomplete, leading to a decrease in suppression for small fluctuation size scale. If hard probes are used to constrain the fluctuation size, a scale of $\sim 0.8$ fm is preferred.
\section{Pathlength dependence}
In a second run, we test the pathlength dependence of the available parton-medium interaction models against the data \cite{YaJEM-D,JetHydSys}. In a static medium with constant density, we expect incoherent (elastic) processes to scale with pathlength $L$, radiative energy loss (ASW) to scale with $L^2$ due to coherence effects, and the strong coupling model (AdS) to scale with $L^3$ due to the drag-force like interaction of the virtual gluon cloud with the medium. The shower code YaJEM is known to have in principle an $L^2$ dependence due to coherence, which however effectively reduces to $L$ through finite-energy corrections, whereas YaJEM-D has a complicated non-linear pathlength dependence. In an evolving hydrodynamical medium, the pathlength dependence is effectively much more complicated due to effects like spatial inhomogeneities in the medium, longitudinal and transverse flow and viscous entropy production.
We find that the data allow us to unambiguously rule out a linear pathlength dependence, leading to the conclusions that a large component ($>10$\%) of elastic energy loss is not favoured by the data and that finite-energy
corrections to coherence arguments need to be taken seriously. The other models we tested (ASW, AdS and YaJEM-D) remain compatible with the data,
although in each case only in combination with a particular hydrodynamical evolution model.
\section{Extrapolation to large $\sqrt{s}$}
Using the EKRT saturation model, we can extrapolate one of our default hydro runs to LHC energies \cite{RAA_LHC} and thus
significantly reduce the uncertainty in the hydrodynamical modelling, as the well-defined extrapolation procedure allows us to compare results for ``the same'' hydrodynamics at different $\sqrt{s}$.
\begin{figure}[htb]
\vspace*{-0.3cm}
\epsfig{file=RAA-LHC_f.eps, width=7.7cm}\epsfig{file=RAA-LHC2.76-RP_f.eps, width=7.7cm}
\caption{\label{F-LHC}Left: The nuclear suppression factor $R_{AA}$ extrapolated from best fits at RHIC to central Pb-Pb collisions at LHC for various models (see text) compared with ALICE data \cite{ALICE}. Right: $R_{AA}$ for non-central collisions in-plane and out of plane based on a best fit to central LHC data.}
\end{figure}
Fig.~\ref{F-LHC} shows results for $R_{AA}$ in central and non-central collisions.
As expected, the sensitivity to model details in the $P_T$ dependence of the
nuclear suppression factor is found to be much larger at LHC than at RHIC.
Combined with pathlength-dependent observables such as $R_{AA}(\phi)$ or the dihadron suppression factor $I_{AA}$, precision LHC data thus
have a high potential for distinguishing between different models.
We can quantify the quality of the extrapolation in $\sqrt{s}$ by a factor $R$ which quantifies the difference between the parton-medium interaction parameters needed for a best fit to RHIC and to LHC
data, using the same hydrodynamical models. $R=1$ indicates an extrapolation without any tuning.
We find that the shower codes
YaJEM-D ($R=0.92$) and YaJEM ($R=0.61$)
extrapolate reasonably well, whereas the radiative energy loss scenario
ASW ($R=0.47$) is not favoured by the data, and a strongly coupled scenario AdS is strongly disfavoured with $R=0.31$. The latter effect can be readily understood by a dimensional analysis --- if the pathlength dependence is $L^3$, then for dimensional reasons the model must probe the medium temperature as $T^4$, i.e. the model
responds much more strongly than all others to the higher initial medium density at the LHC, which leads to overquenching. Given this finding, there is currently no reason to assume that the data would prefer a strongly coupled over a perturbative scenario of parton-medium interaction.
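As a back-of-the-envelope illustration of this dimensional argument (a toy sketch only --- the quoted $R$ values come from the full model-vs-data fits, and the LHC/RHIC temperature ratio used here is an assumed round number):

```python
# Toy illustration of the dimensional-analysis argument: if the mean energy
# loss scales as Delta E ~ T^m * L^n, then in natural units m = n + 1, so a
# hotter LHC medium is felt much more strongly by models with a steeper
# pathlength dependence.
T_ratio = 1.3   # assumed LHC/RHIC initial-temperature ratio (illustrative)

models = {"elastic-like (L^1)": 1, "ASW radiative (L^2)": 2, "AdS drag (L^3)": 3}
for name, n in models.items():
    m = n + 1  # temperature power fixed by dimensional analysis
    print(f"{name}: Delta E grows by a factor {T_ratio**m:.2f} at fixed L")
```

The $L^3$ model responds the most strongly to the hotter medium, which is the qualitative origin of the overquenching described above.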
Combining the constraints from pathlength dependence and $\sqrt{s}$
extrapolation, assuming no systematic uncertainty on the published LHC
data \cite{ALICE}, out of the models tested here only YaJEM-D together
with a hydrodynamics similar to the 3+1d code remains a viable description.
We take this as a strong indication that systematic studies along the
lines discussed here are indeed a suitable tool to do jet tomography.
\vspace*{0.3cm}
{\bf Acknowledgments:}
This work was supported by the Finnish Academy (projects 130472 and 133005),
the Finnish Graduate School of Particle and Nuclear Physics, the Jenny and Antti Wihuri Foundation, the Magnus
Ehrnrooth Foundation and the U.S. Department of Energy (grants
\rm{DE-SC004286} and (within the framework of the JET Collaboration)
\rm{DE-SC0004104}).
\section*{References}
\section{INTRODUCTION}
In the recent paper~\cite{anos},
Adler, Nemenman, Overduin and
Santiago criticize a limit on the measurability
of distances which was originally
derived by Salecker and Wigner in the 1950s~\cite{wign}.
If correct, this criticism would have implications for all the recent
papers which have used in one way or another the
celebrated Salecker-Wigner study.
In particular, some of quantum-gravity ideas
that can be tested using the interferometry-based experiments I
proposed in Refs.~\cite{gacgwi,bignapap} are motivated by
the Salecker-Wigner limit; moreover,
the Salecker-Wigner limit is the common ingredient
(even though this ingredient was used in very different ways and
from very different viewpoints~\cite{bignapap,gacmpla})
of several recent studies
concerning possible limitations on the measurability of
distances~\cite{gacmpla,ng,gacgrf98}
or limitations in the ``tightness'' achievable~\cite{diosi}
in the operative definition of a network of geodesics.
I show here that the analysis reported in Ref.~\cite{anos}
is incorrect. It relies
on assumptions which cannot be justified in the
framework set up by Salecker and Wigner (while the same assumptions
would be reasonable in the context of certain measurements using
rudimentary experimental setups).
In particular, contrary to the claim made in Ref.~\cite{anos},
the source of $\sqrt{T_{obs}}$ uncertainty
(with $T_{obs}$ denoting the time of
observation in a sense which will be clarified in the following)
that was considered by Salecker and Wigner cannot be
truly eliminated;
unsurprisingly, it can only be traded for another
source of $\sqrt{T_{obs}}$ uncertainty.
The analysis reported in Ref.~\cite{anos} also handles
inadequately the idealized concept of ``clock'' relevant for
the type of ``in-principle analysis'' discussed
by Salecker and Wigner.
In addition to this incorrect criticism
of the limit derived by Salecker and Wigner,
Ref.~\cite{anos} also misrepresented the role
of the Salecker-Wigner
limit in providing motivation
for the mentioned
proposal~\cite{gacgwi,bignapap}
of interferometry-based space-time-foam studies.
The reader unfamiliar with the relevant literature
would come out of reading Ref.~\cite{anos}
with the impression that such interferometry-based tests
could only be sensitive to quantum-gravity ideas
motivated by the Salecker-Wigner limit;
instead\footnote{This was already discussed in
detail in Ref.~\cite{bignapap}
which appeared six months before Ref.~\cite{anos}
but was not mentioned (or taken into account in any other way)
in Ref.~\cite{anos}.}
only some of the quantum-gravity ideas that can
be probed with modern interferometers are motivated by the
Salecker-Wigner limit. The bulk of the insight we can expect from
such interferometric studies concerns the stochastic properties
of "foamy" models of space-time, which are intrinsically interesting
independently of the Salecker-Wigner limit.
I shall articulate this Letter in sections,
each making one conceptually-independent and simple point.
I start in the next Section~2 by reviewing which type of ideas
concerning stochastic properties
of "foamy" models of space-time
can be tested with modern interferometers.
From the discussion it will be clear that
interest in these ``foamy" models of space-time
is justified quite independently of
the Salecker-Wigner limit (in fact, this limit
will not even be mentioned in Section~2).
Section~2 is perhaps the most
important part of this Letter,
since its primary objective is to ensure that experimentalists
do not lose interest in the proposed interferometry-based tests
as a result of the confusion generated by Ref.~\cite{anos}.
The remaining sections do concern the Salecker-Wigner limit,
reviewing some relevant results and clarifying various
incorrect statements provided in Ref.~\cite{anos}.
Section~3 briefly reviews the argument
put forward by Salecker and Wigner.
Section~4 emphasizes that the Salecker-Wigner limit is obtained
in ordinary quantum mechanics, but it can provide
motivation for a certain type of ideas concerning
quantum properties of space-time.
The nature of the idealized clock relevant for the type of
analysis performed by Salecker and Wigner
is discussed in Section~5, also clarifying in which sense
some comments on this clock that were made
in Ref.~\cite{anos} are incorrect.
Section~6 clarifies how the potential well
considered in Ref.~\cite{anos} would simply trade one source
of $\sqrt{T_{obs}}$ uncertainty
for another source of $\sqrt{T_{obs}}$ uncertainty.
In Section~7 I clarify that the comments on
decoherence of the clock presented in Ref.~\cite{anos}
would not apply to the Salecker-Wigner setup.
Section~8 is devoted to some closing remarks.
\section{FOAMY SPACE-TIME AND MODERN INTERFEROMETERS}
A prediction of nearly
all approaches to the unification
of gravitation and quantum mechanics is that
at very short distances the sharp
classical concept of space-time should give way
to a somewhat ``fuzzy'' (or ``foamy'') picture,
possibly involving virulent
geometry fluctuations (sometimes intuitively/heuristically
associated with virtual black holes and wormholes).
This subject originates from observations
made by Wheeler~\cite{wheely}
and Hawking~\cite{hawk}
and has developed into a rather vast literature.
Examples of recent proposals in this area
(and good starting points for a literature search)
can be found in
Refs.~\cite{arsarea,peri,fotinilee,gacgrb,gampul,fordlightcone},
which explored possible implementations/consequences of space-time
foam ideas in various versions of quantum gravity, and in
Refs.~\cite{emn,aemn1,adrian}, which performed similar studies
in an approach based on non-critical strings.
Although the idea of space-time foam appears to have significantly
different incarnations in different quantum-gravity approaches,
a general expectation that emerges from this framework is that
the distance between two bodies ``immersed'' in the space-time foam
would be affected by (quantum-gravity-induced) fluctuations.
A phenomenological model of fluctuations affecting
a quantum-gravity distance must describe
the underlying stochastic processes.
As explained in detail in Refs.~\cite{bignapap,polonpap},
from the point of view of comparison with data
obtainable with modern interferometers
the best way to characterize such models is through
the associated amplitude spectral density of
distance fluctuations~\cite{amplspectdef,saulson}.
A natural starting point for the parametrization
of this amplitude spectral density
is given by\footnote{Of course,
a parametrization such as the one in
Eq.~(\ref{gacspectrbeta})
could only be valid for frequencies $f$ significantly
smaller than the Planck frequency $c/L_{p}$
and significantly larger than the inverse of the time scale
over which the classical geometry
of the space-time region where
the experiment is performed manifests
significant curvature effects.}
\begin{eqnarray}
S(f)=f^{-\beta} \, ({\cal L}_{\beta})^{{3 \over 2}-\beta}
\, c^{\beta-{1 \over 2}} ~,
\label{gacspectrbeta}
\end{eqnarray}
where $c$ is the speed-of-light constant, the
dimensionless parameter $\beta$
carries the information on the nature of the underlying stochastic
processes and the dimensionful
(length) parameter ${\cal L}_{\beta}$ carries
the information on the magnitude and rate of the
fluctuations\footnote{I am assigning an
index $\beta$ to ${\cal L}_{\beta}$ just in order to facilitate
a concise description of experimental bounds.
For example,
if data were to rule out
the fluctuations scenario with, say, $\beta = 0.6$
for all values of the effective length
scale down to, say, $10^{-27}m$
one could simply write the
formula ${\cal L}_{\beta= 0.6} < 10^{-27}m$.}.
A detailed discussion of the definition and applications
of this type of amplitude spectral density
can be found in Ref.~\cite{amplspectdef,saulson}.
For the readers unfamiliar with the use of amplitude spectral
densities some useful intuition can be obtained
from the fact that~\cite{amplspectdef,rwold}
the standard deviation of the fluctuations is
formally related to $S(f)$ by
\begin{eqnarray}
\sigma^2 = \int_{1/T_{obs}}^{f_{max}}
[S(f)]^2 \, df ~,
\label{gacspectrule}
\end{eqnarray}
where $T_{obs}$ is the time over which the distance is kept
under observation.
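As a consistency check, this relation can be integrated numerically for the parametrization (\ref{gacspectrbeta}); the sketch below (plain Python; the upper cutoff $f_{max}$ and the Planck-scale inputs are illustrative choices) recovers the random-walk result $\sigma \simeq \sqrt{{\cal L}\, c\, T_{obs}}$ for $\beta = 1$:

```python
import math

C = 3.0e8  # speed of light, m/s

def sigma(beta, L, T_obs, f_max, n=50_000):
    """Numerically evaluate sigma^2 = int_{1/T_obs}^{f_max} S(f)^2 df,
    with S(f) = f^(-beta) * L^(3/2 - beta) * c^(beta - 1/2)."""
    f_lo = 1.0 / T_obs
    # trapezoidal rule on a log-spaced grid (the integrand spans many decades)
    fs = [f_lo * (f_max / f_lo) ** (i / n) for i in range(n + 1)]
    S2 = [(f ** (-beta) * L ** (1.5 - beta) * C ** (beta - 0.5)) ** 2 for f in fs]
    return math.sqrt(sum(0.5 * (S2[i] + S2[i + 1]) * (fs[i + 1] - fs[i])
                         for i in range(n)))

# beta = 1: sigma should approach the random-walk value sqrt(L * c * T_obs)
L, T = 1.6e-35, 1.0e-3                       # Planck-length scale, ~ms observation
print(sigma(1.0, L, T, f_max=1e44) / math.sqrt(L * C * T))   # -> close to 1
```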
In Eq.~(\ref{gacspectrbeta})
the parameter $\beta$ could in principle take any value,
and it is even quite plausible that
in reality the stochastic processes
(if at all present) would have a more
complex structure than the simple power law
codified in Eq.~(\ref{gacspectrbeta}). Still,
Eq.~(\ref{gacspectrbeta}) appears to be the natural starting
point for a phenomenological programme of exploration
of the possibility of ``distance-fuzziness'' effects induced
by quantum properties of space-time.
In particular, it seems natural to devote special attention
to values of $\beta$ in the range $1/2 \le \beta \le 1$;
in fact, as explained in greater detail in Refs.~\cite{bignapap},
$\beta = 1/2$ is the type of behaviour one would expect~\cite{jare}
in fuzzy space-times without quantum decoherence
(without ``information loss''),
while the case $\beta = 1$
provides the simplest model
of stochastic (quantum) fluctuations
of distances,
in which a distance is affected by completely random minute
(possibly Planck-length size) fluctuations which
can be modeled as stochastic processes of random-walk type.
Values of $\beta$ somewhere in between the
cases $\beta = 1/2$ and $\beta = 1$ could provide a
rough model of space-times with decoherence
effects somewhat milder than the $\beta = 1$ random-walk case.
In other words, in light of the realization~\cite{bignapap,jare}
that foamy space-times without decoherence would only be consistent
with distance fluctuations of type
$\beta = 1/2$ the popular arguments that support
quantum-gravity-induced deviations from quantum coherence
motivate interest in values of $\beta$ somewhat different from $1/2$.
Readers unfamiliar with the subject can get an intuitive picture
of the relation between the value of $\beta$ and decoherence
by resorting again to Eq.~(\ref{gacspectrule}).
For example, as discussed in greater detail in
Ref.~\cite{bignapap,rwold},
the case $\beta = 1$ corresponds to $\sigma \sim \sqrt{T_{obs}}$,
the standard deviation characteristic of random-walk processes,
and this type of $T_{obs}$-dependence would be consistent
with decoherence in the sense that the information stored
in a network of distances would degrade over
time\footnote{For example,
an observer could store ``information''
in a network of bodies by adjusting their
distances to given values at a given initial time.
If space-time did involve distance fluctuations with standard
deviation that grows with the time of observation,
there would be an intrinsic mechanism for
this information to degrade over time.
Other intuitive descriptions of the relation between
certain fuzzy space-times and decoherence
can be found in Ref.~\cite{ng}.
Depending on the reader's background
it might also be useful to adopt
the language of the ``memory effect'',
as done, for example, in Ref.~\cite{memory}.}.
Similar observations, but with weaker power-law dependence
on $T_{obs}$, hold for values of $\beta$
in the range $1/2 < \beta < 1$.
In the limiting case $\beta = 1/2$ the $T_{obs}$-dependence
turns from power-law to logarithmic,
and this is of course the closest one can get to modeling
space-times without intrinsic decoherence
({\it i.e.} such that the associated standard deviation
is $T_{obs}$-independent) within the
parametrization set up
in Eq.~(\ref{gacspectrbeta})\footnote{As explained in
Refs.~\cite{gacgwi,bignapap} and reviewed here below,
we are still very far from being able to test the type of
fuzziness one might expect for space-times without decoherence.
It is therefore at present quite sufficient to model
this type of fuzziness by taking $\beta = 1/2$
in (\ref{gacspectrbeta}).
Readers with an academic interest in seeing a more complete
description of stochastic processes
plausible for a space-time without decoherence
can consult Ref.~\cite{jare}.}.
As observed in Ref.~\cite{bignapap}, independent support
for a fuzzy picture of space-time of the type here being considered
comes from recent
studies~\cite{aemn1,gacgrb,gampul,fordlightcone,adrian}
suggesting that space-time foam
might induce a deformation of the dispersion relation that
characterizes the propagation of the massless particles
used as space-time probes in the operative definition
of distances.
Such a deformation of the dispersion relation would
affect~\cite{aemn1,gacgrb,bignapap}
the measurability of distances just in the way
expected for a fuzzy picture of space-time
of the type here being considered.
In general the connection between loss of quantum coherence
and a foamy/fuzzy picture of space-time is very deep and
has been discussed in numerous publications
(a sample of recent ideas in this area can be found
in Refs.~\cite{ng,peri,elmn,hpcpt,hawkdeconew}).
However, while a substantial amount
of work has been devoted to the ``physics case''
for quantum-gravity induced decoherence,
enormous difficulties have
been encountered in developing a satisfactory
formalism for this type of quantum gravity.
The primary obstruction for the search of the
correct decoherence-encoding formalism
is the fact that a new mechanics would be needed
(ordinary quantum mechanics evolves pure states into pure states)
and the identification of such a new mechanics
in the absence of any guidance from experiments
is extremely hard.
It is in this context that a phenomenology
based on the parametrization (\ref{gacspectrbeta})
finds its motivation.
When a satisfactory workable formalism implementing the
intuition on quantum-gravity-induced
decoherence becomes available, we will be
in a position to extract from it
a specific form of the stochastic processes characterizing
the associated foamy space-time,
with a definite prediction for $S(f)$.
While waiting for these developments on the theoretical-physics side
we might get some help from experiments; in fact,
as observed in Refs.~\cite{gacgwi,bignapap},
the remarkable sensitivity of modern interferometers
(the ones whose primary objective is the detection
of the classical-gravity phenomenon of gravity
waves~\cite{saulson})
allows us to put significant bounds on the parameters
of Eq.~(\ref{gacspectrbeta}).
While it is remarkable that some candidate
quantum-gravity phenomena
are within reach of doable experiments,
it is instead quite obvious that interferometers
would be the natural in-principle
tools for the study of distance fluctuations.
In fact, the operation of interferometers
is based on the detection of
minute changes in the positions of some test masses
(relative to the position of a beam splitter),
and, if these positions were affected by
quantum fluctuations of the type discussed
above, the operation of interferometers
would effectively involve an additional
source of noise due to quantum gravity~\cite{gacgwi,bignapap}.
The data obtained at
the {\it Caltech 40-meter interferometer}, which
in particular achieved~\cite{ligoprototype}
displacement noise levels with amplitude spectral density
of about $3 \cdot 10^{-19} m/\sqrt{H\!z}$
in the neighborhood of $450$ $H\!z$,
allow us to set the bound~\cite{gacgwi,bignapap,polonpap}
\begin{eqnarray}
[{\cal L}_{\beta}]_{Caltech}
< \left[ {3 \cdot 10^{-19} m \over \sqrt{H\!z}}
\, (450 H\!z)^\beta \,
c^{(1-2\beta)/2} \right]^{2/(3 - 2 \beta)} ~.
\label{boundcalty}
\end{eqnarray}
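For concreteness, the right-hand side of this bound is easily evaluated numerically for the limiting cases considered in this Letter (a quick sketch):

```python
def caltech_bound(beta, noise=3.0e-19, f=450.0, c=3.0e8):
    """Right-hand side of the Caltech bound as a function of beta:
    [noise * f^beta * c^((1-2 beta)/2)]^(2/(3 - 2 beta)), in metres."""
    return (noise * f ** beta * c ** ((1.0 - 2.0 * beta) / 2.0)) ** (2.0 / (3.0 - 2.0 * beta))

print(caltech_bound(1.0))   # ~6e-41 m: the beta = 1 bound quoted in the text
print(caltech_bound(0.5))   # ~6e-18 m: consistent with the beta = 1/2 bound < 1e-17 m
```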
In order to get some intuition
for the significance of this bound
let us consider the case $\beta = 1$.
For $\beta =1$
the bound in Eq.~(\ref{boundcalty})
takes the form ${\cal L}_{\beta = 1} < 10^{-40}m$.
This is quite impressive
since $\beta = 1$, ${\cal L}_{\beta=1} \sim 10^{-35}m$
corresponds to fluctuations in the 40-meter arms of
the Caltech interferometer
that are of Planck-length magnitude ($L_p \sim 10^{-35}m$)
and occur at a rate of one per each
Planck-time interval ($t_p = L_p/c \sim 10^{-44} s$).
The data obtained at
the {\it Caltech 40-meter interferometer}
therefore rule out this simple model in spite of the
minuteness (Planck-length!!)
of the fluctuations involved.
Another intuition-building
observation concerning the significance of this result
is obtained by considering the standard
deviation $\sigma \sim \sqrt{L_p c T_{obs}}$ which
would correspond to such Planck-length
fluctuations occurring at $1/t_p$
rate. From $\sigma \sim \sqrt{L_p c T_{obs}}$
one predicts fluctuations with standard deviation
of order $10^{-5}m$ even on a time of observation as large
as $10^{10}$ years (the size of the whole observable universe
is about $10^{10}$ light years!!) but
in spite of their minuteness these can
be ruled out exploiting the remarkable sensitivity of modern
interferometers.
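This order-of-magnitude estimate is easily reproduced (rough round numbers for $L_p$ and $c$):

```python
import math

L_p, c = 1.6e-35, 3.0e8            # Planck length (m), speed of light (m/s)
T_obs = 1.0e10 * 3.15e7            # ~10^10 years in seconds

# random-walk standard deviation accumulated over the age of the universe
sigma = math.sqrt(L_p * c * T_obs)
print(sigma)                       # -> of order 10^-5 m
```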
Additional comments on values of $\beta$
in the range
$1/2 < \beta < 1$ can be found in Refs.~\cite{bignapap,polonpap}
(in Ref.~\cite{bignapap} the reader will find a detailed
discussion of the case $\beta = 5/6$).
In the present Letter it suffices to observe that
the bound encoded in Eq.~(\ref{boundcalty})
becomes less stringent as the value of $\beta$
decreases. In particular, in the limit $\beta = 1/2$,
the case providing an effective model for space-times
without intrinsic decoherence,
Eq.~(\ref{boundcalty}) only implies
${\cal L}_{\beta = 1/2} < 10^{-17}m$,
which is still very comfortably consistent with the
natural expectation~\cite{jare} that within that framework
one would have ${\cal L}_{\beta = 1/2} \sim L_p \sim 10^{-35}m$.
In this section I have in no way considered
the statements on the Salecker-Wigner limit
reported in Ref.~\cite{anos}.
As anticipated in the Introduction, I have opened the paper
with this section briefly summarizing
the status of interferometry-based studies of
distance fuzziness. The fact that the Salecker-Wigner limit
was not even
mentioned in this section should however clarify that, contrary to
the impression one gets from reading Ref.~\cite{anos},
these interferometric studies are intrinsically interesting,
quite independently of any consideration concerning
the Salecker-Wigner limit.
This is already clear at least to a portion of the community;
for example, in recent work~\cite{adrian} on foamy space-times
(without any reference to the Salecker-Wigner related literature)
the type of modern-interferometer sensitivity exposed in
Refs.~\cite{gacgwi,bignapap} was used
to constrain certain novel candidate light-cone-broadening effects.
The brief review provided in this section should also clarify
in which sense another statement provided in Ref.~\cite{anos}
is misleading. It was in fact stated in Ref.~\cite{anos}
that, since the sensitivity of modern interferometers is at the
level\footnote{For example,
planned interferometers~\cite{ligo,virgo}
with arm lengths of a few $Km$
expect to detect gravity waves of amplitude as
low as $3 \cdot 10^{-22}$ (at frequencies of about $100 Hz$).
This roughly means that these modern gravity-wave interferometers
should monitor the (relative) positions of their test masses
(the beam splitter and the mirrors)
with an accuracy of order $10^{-18} m$.}
of $10^{-18}m$,
any quantum-gravity model tested by such interferometers
should predict a break down of the classical space-time
picture on distance scales of order $10^{-18}m$.
Let me illustrate in which sense this statement misses the
substance of the proposed tests
by taking again as an example the one
with $\beta = 1$, which allows an intuitive discussion in terms of
simple random-walk processes.
We have seen that this can describe fluctuations of
Planck-length magnitude occurring at $1/t_p$ rate.
All the scales involved in the stochastic picture are at
the $10^{-35}m$ scale, but we can rule out
this scenario using a ``$10^{-18}m$ machine''
because this machine operates at frequencies
of order a few hundred $Hz$ (which correspond to
time scales of order a few milliseconds)
and therefore is effectively sensitive to the collective effect
of a very large number of minute Planck-scale effects
({\it e.g.}, in the simple random-walk case,
during a time of a few milliseconds as many as $10^{41}$
Planck-length fluctuations would affect the arms of the
interferometer).
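The quoted count is simply the number of Planck-time intervals contained in the relevant observation time (rough numbers):

```python
L_p, c = 1.6e-35, 3.0e8      # Planck length (m), speed of light (m/s)
t_p = L_p / c                # Planck time, ~5e-44 s
print(3.0e-3 / t_p)          # Planck-time steps in ~3 ms: ~6e40, i.e. of order 10^41
```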
This is not different from other similar experiments
probing fundamental physics.
For example, proton-decay
experiments use protons at rest (objects of size $10^{-16}m$)
to probe physics on distance scales of order $10^{-32}m$
(the conjectured size of gauge bosons mediating proton decay),
and this is done by monitoring a very large number of protons
so that the apparatus is sensitive to a collective effect
which is much larger than the decay probability of each
individual proton.
A similar idea has already been exploited in ``quantum-gravity
phenomenology''~\cite{polonpap}; in fact, the experiment
proposed in Ref.~\cite{gacgrb} is possible only because
the photons that reach us from distant astrophysical sources
have traveled for such a long
time that they are in principle
sensitive to the collective effect
of a very large number of interactions with the
space-time foam.
\section{THE SALECKER-WIGNER LIMIT IN ORDINARY QUANTUM MECHANICS}
Having clarified what part of the motivation for interferometric
studies is completely independent of the Salecker-Wigner limit,
I have two remaining tasks: to provide a brief review of
the Salecker-Wigner limit
and to correct the incorrect statements on
the Salecker-Wigner limit which were given in Ref.~\cite{anos}.
Let me start by considering the original Salecker-Wigner limit
within ordinary quantum mechanics.
The analysis reported by Salecker and Wigner in Ref.~\cite{wign}
concerns the measurability of distances.
In particular, they considered
the measurement of the distances defined by the
network of free-falling
bodies that might compose an idealized ``material reference
system''~\cite{rovellimrs}. Those who have been developing
the research line started by Salecker and Wigner
have also considered more general distance measurements,
but the emphasis has remained on measurement analyses
that might provide intuition on the way
in which distances could be in principle operatively defined
in quantum gravity.
The essence
of the Salecker-Wigner argument can be summarized
as follows. They ``measured'' (in the ``{\it gedanken}'' sense)
the distance $D$ between two bodies
by exchanging a light signal between them.
The measurement procedure requires {\it attaching}\footnote{Of
course, for consistency with causality,
in such contexts one assumes devices to be ``attached non-rigidly,''
and, in particular, the relative position
and velocity of their centers of mass continue to satisfy the
standard uncertainty relations of quantum mechanics.}
a light-gun ({\it i.e.} a device
capable of sending
a light signal when triggered), a detector
and a clock to
one of the two bodies
and {\it attaching} a mirror to the other body.
By measuring the time $T_{obs}$ (time of observation)
needed by the light signal
for a two-way journey between the bodies one
also obtains a measurement of
the distance $D$.
For example, in flat space
and neglecting quantum effects
one simply finds that $D = c {T_{obs} / 2}$.
Unlike most conventional measurement analyses,
Salecker and Wigner were concerned with the quantum
properties of the devices involved in the measurement
procedure. In particular, since they were considering
a distance measurement, it was clear that
quantum uncertainties in the position (relative to, say,
the center of mass of the two
bodies whose distance is being measured) of some of the
devices involved in the measurement procedure would translate
into uncertainties in the overall measurement of $D$.
Importantly, the analysis of these device-induced
uncertainties leads to
a lower bound on the measurability of $D$.
To see this it is sufficient to consider the
contribution to $\delta D$ coming from
only one of the quantum uncertainties that affect
the motion of the devices.
In Ref.~\cite{wign} (and in the more recent studies
reported in Refs.~\cite{ng,diosi})
the analysis focused on the uncertainty in the position
of the Salecker-Wigner clock, while in some of my related
studies~\cite{gacmpla,gacgrf98} the analysis focused on
the uncertainties that affect the motion
of the center of mass of the system
composed by the light-gun, the detector and the clock.
These approaches are actually identical, since (as I shall
discuss in greater detail later) the Salecker-Wigner clock
is conceptualized~\cite{wign} as a device not only capable of
keeping track of time but also
capable of sending and receiving signals; it is therefore
a composite device including at least
a clock, a transmitter and a receiver.
Moreover, the substance of the argument
does not depend very sensitively on which position
is considered, as long as it is associated with a
device whose position must be known over the whole
time required by the measurement procedure.
For definiteness, let me here proceed
denoting with $x^*$ and $v^*$
the position and the velocity of an idealized Salecker-Wigner
clock. Assuming that the experimentalists prepare this device
in a state characterised by
uncertainties $\delta x^*$ and $\delta v^*$,
one easily finds~\cite{wign,gacmpla,ng,gacgrf98}
\begin{eqnarray}
\delta D \geq
\delta x^* + T_{obs} \delta v^*
\geq
\delta x^*
+ \left( {1 \over M_b} + {1 \over M_d} \right)
{ \hbar T_{obs} \over 2 \, \delta x^* }
~,
\label{deltawignOLDprologo}
\end{eqnarray}
where $M_b$ is the sum of the masses of the two bodies
whose distance is being measured, $M_d$ is
the mass of the device
being considered ({\it e.g.}, the mass of the clock)
and
I also used the fact
that Heisenberg's {\it Uncertainty Principle}
implies $\delta x^* \delta v^* \ge (1/M_b + 1/M_d) \hbar/2$.
[The {\it reduced mass} $(1/M_b+1/M_d)^{-1}$
is relevant for the relative motion of the clock
with respect to the position of
the center of mass of the system composed by the two
bodies whose distance is being measured.]
Evidently, from (\ref{deltawignOLDprologo})
it follows that for given $M_b$ and $M_d$ there is a lower bound
on the measurability of $D$
\begin{eqnarray}
\delta D \geq \sqrt{ {\hbar T_{obs} \over 2}
\left( {1 \over M_b} + {1 \over M_d} \right) }
~.
\label{deltawignOLD}
\end{eqnarray}
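For completeness, the step from (\ref{deltawignOLDprologo})
to (\ref{deltawignOLD}) is just a minimization of the
right-hand side over the preparation uncertainty $\delta x^*$;
writing it out explicitly (and dropping, as customary
in these order-of-magnitude estimates, an overall
factor of order one),
\begin{eqnarray}
\min_{\delta x^*} \left[ \delta x^*
+ \left( {1 \over M_b} + {1 \over M_d} \right)
{ \hbar T_{obs} \over 2 \, \delta x^* } \right]
= 2 \sqrt{ {\hbar T_{obs} \over 2}
\left( {1 \over M_b} + {1 \over M_d} \right) }
~, \nonumber
\end{eqnarray}
with the minimum attained when the device is prepared
with $\delta x^* = \sqrt{ (\hbar T_{obs}/2)(1/M_b + 1/M_d) }$.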
The result (\ref{deltawignOLD})
may at first appear somewhat puzzling, since
ordinary quantum mechanics should not
limit the measurability of any given observable.
[It only limits the combined measurability
of pairs of conjugate observables.]
However, from a physical/phenomenological and conceptual
viewpoint it is well understood that the
proper framework for the application of the formalism
of quantum mechanics is the
description of the results of measurements performed
by classical devices (devices that can be treated
as approximately classical within the level of
accuracy required by the measurement).
It is therefore not surprising
that the infinite-mass (classical-device\footnote{A rigorous
definition of a ``classical device'' is
beyond the scope of this Letter. However, it should be emphasized
that the experimental setups being here considered require
the devices to be accurately positioned during the time
needed for the measurement, and therefore an ideal/classical
device should be infinitely massive so
that the experimentalists can prepare it in a state
with $\delta x \, \delta v \sim \hbar/M \sim 0$.})
limit turns
out to be required
in order to bridge the gap between (\ref{deltawignOLD})
and the prediction $min \delta D = 0$
of the formalism
of ordinary quantum mechanics.\footnote{Perhaps
more troubling is the fact that $min \delta D = 0$
appears to require not only an infinitely large $M_d$
but also an infinitely large $M_b$.
One feels somewhat uncomfortable
treating the mass of the bodies whose distance is being
measured as a parameter of the apparatus. This might be
another pointer to the fact that quantum measurement
of gravitational/geometric observables requires
a novel conceptualization of quantum mechanics. I postpone
the consideration of this point to future work.}
So far, in reviewing the Salecker-Wigner limit,
I have not taken into account the
gravitational properties of the devices:
the discussion has been strictly confined within
ordinary (non-gravitational) quantum mechanics.
Actually, one can interpret the Salecker-Wigner limit
as one way to render manifest the true nature of the
physical applications of the quantum-mechanics formalism
and its relation with a certain class of experiments
(the ones performed by classical devices).
The picture emerging from the analysis
of Salecker and Wigner fits well
within a general picture
emerging from other similar studies.
In particular,
the celebrated Bohr-Rosenfeld analysis~\cite{rose}
of the measurability of the electromagnetic field
found that the accuracy allowed by the formalism
of ordinary quantum mechanics could only be achieved
using a very special type of device:~idealized
test particles with vanishing ratio between
electric charge and inertial mass.
\section{FROM \space THE \space SALECKER-WIGNER \space
LIMIT \space TO \space QUANTUM \space GRAVITY}
Let me now take the Salecker-Wigner limit as starting point
for a quantum-gravity argument. I will therefore now not only
consider the quantum properties of the devices, but also their
gravitational properties.
It is well understood (see, {\it e.g.},
Refs.~\cite{gacmpla,gacgrf98,diosi,bergstac,dharam94grf,dharam3QG})
that the combination of the gravitational properties
and the quantum properties of devices can have an important role
in the analysis of the operative definition of gravitational
observables.
Actually, by ignoring the way in which the gravitational properties
and the quantum properties of devices combine in measurements
of geometry-related physical properties of a system
one misses some of the fundamental elements
of novelty we should expect for the interplay of gravitation
and quantum mechanics; in fact, one would be missing an
element of novelty which is deeply associated with the Equivalence
Principle.
For example,
attempts to generalize the mentioned Bohr-Rosenfeld analysis
to the study of gravitational fields
(see, {\it e.g.}, Ref.~\cite{bergstac})
are of course confronted with the fact that the ratio between
gravitational ``charge'' (mass) and inertial mass
is fixed by the Equivalence Principle.
While ideal devices with vanishing ratio between
electric charge and inertial mass can
be considered at least in principle,
devices with vanishing ratio between
gravitational mass and inertial mass
are not admissible in any (however formal) limit
of the laws of gravitation.
This observation provides one of the strongest elements
in support of the idea~\cite{gacgrf98}
that the mechanics on which quantum
gravity is based must not be exactly
the one of ordinary quantum mechanics.
In turn this contributes to the whole spectrum of arguments
that support the expectation that the loss of quantum coherence
might be intrinsic in quantum gravity.
Similar support for quantum-gravity-induced
decoherence emerges from taking into account both
gravitational and quantum properties of devices in
the analysis of the Salecker-Wigner measurement procedure.
The conflict with ordinary quantum mechanics
immediately arises because the
infinite-mass limit
is in principle inadmissible for measurements
concerning gravitational effects.
As the devices get more and more massive they increasingly
disturb the gravitational/geometrical observables, and
well before reaching the infinite-mass limit the procedures
for the measurement of gravitational observables cannot
be meaningfully performed~\cite{gacmpla,ng,gacgrf98}.
These observations, which render inaccessible
the limit of vanishingly small right-hand side
of Eq.~(\ref{deltawignOLD}),
provide motivation for the possibility~\cite{gacmpla,gacgrf98}
that in quantum gravity
there be a $T_{obs}$-dependent intrinsic uncertainty
in any measurement that monitors
a distance $D$ for a time $T_{obs}$.
Gravitation forces us to renounce
the idealization of infinitely-massive devices
and this in turn forces us to deal with the element of
decoherence encoded in the fact that measurements
requiring longer times of observation are
intrinsically/fundamentally
affected by larger quantum uncertainty.
It is important to realize that this
element of decoherence found in the analysis of
the measurability of distances
comes simply from combining elements of quantum mechanics
with elements of classical gravity.
As it stands it is not to be interpreted as a
genuine quantum-gravity effect, but of course
this argument based on the Salecker-Wigner limit
provides motivation for the exploration of the possibility
that quantum gravity might accommodate this type of
decoherence mechanism at the fundamental level.
In the analysis of the Salecker-Wigner setup
the $T_{obs}$ dependence is not introduced at the fundamental
level; it is a derived property emerging from
the postulates of gravitation and quantum mechanics.
However, it is plausible that quantum gravity,
as a fundamental theory of space-time, might
accommodate this type of bound at the fundamental
level ({\it e.g.}, among its postulates or as a straightforward
consequence of the correct short-distance picture of space-time).
It is through this
(plausible, but, of course, not self-evident)
argument that the Salecker-Wigner
limit provides additional motivation for the interferometric studies
discussed in Section~2.
The element of decoherence encoded in the stochastic models of
fuzzy space-time is quite consistent with the type of decoherence
mechanism suggested by the analysis of the Salecker-Wigner
measurement procedure.
One could see the Wheeler-Hawking picture of an ``active''
quantum-gravity vacuum and the measurability bound
suggested by the analysis of the Salecker-Wigner
measurement procedure
as independent arguments in support of distance fuzziness
of the type here reviewed in Section~2.
Of course, the intuition associated with the arguments
of Wheeler, Hawking and followers is more fundamental
and has wider significance, but the
analysis of the Salecker-Wigner
measurement procedure has the advantage of allowing one
to develop (however heuristic) arguments in support
of one or another form of fuzziness, whereas the lack of
explicit models providing a satisfactory implementation of the
Wheeler-Hawking intuition forces one to adopt parametrizations
as general as the one in Eq.~(\ref{gacspectrbeta}).
From this point of view, arguments based on the
Salecker-Wigner measurement procedure can play a role similar
to the one played by the arguments based on
quantum-gravity-induced deformations of dispersion relations,
which, as already mentioned in Section~2,
can also be used~\cite{bignapap} to support specific
corresponding models of fuzziness (values of $\beta$)
within the class of
models parametrized in Eq.~(\ref{gacspectrbeta}).
Let me devote the rest of this section to some of the
arguments based on analyses of the Salecker-Wigner
measurement procedure that provide support
for one or another form of distance fuzziness.
As observed in Refs.~\cite{gacgwi,bignapap}
a particular value of $\beta$ can be motivated by
arguing in favour of a corresponding
explicit form of the $T_{obs}$ dependence
of the bound on the measurability of distances.
Let me here emphasize that
the robust part of the quantum-gravity argument
based on the analysis of the Salecker-Wigner
measurement procedure only allows one to conclude
that the $T_{obs}$ dependence cannot be eliminated,
and this is not sufficient for obtaining an explicit
prediction for the $T_{obs}$-dependent measurability
bound. A robust derivation of such an explicit formula
would require one to have available the correct quantum
gravity and derive from it whatever quantity turns out
to play effectively the role of the minimum
quantum-gravity-allowed value of $M_b^{-1}+M_d^{-1}$.
Since quantum gravity is not available to us,
we can only attempt intuitive/heuristic
answers to questions such as:
should quantum gravity host such an effective
minimum value of $M_b^{-1}+M_d^{-1}$?
how small could this effective
minimum value of $M_b^{-1}+M_d^{-1}$ be?
could this minimum value depend on $T_{obs}$?
could it depend on the distance scales being probed?
These questions are discussed in detail in
Refs.~\cite{bignapap,gacmpla,gacgrf98,polonpap}.
For the objectives of the present Letter
it is important to discuss explicitly in which sense
one is seeking answers to these questions.
In seeking these answers one is trying to
get an intuition for the fundamental
conceptual structure of quantum gravity, and therefore
one considers the measurement
procedure from a viewpoint that would
seem appropriate for the definition of distances
possibly as short as the Planck length.
[Some authors (quite reasonably)
would also expect quantum gravity
to accommodate some sort of operative definition of
space-time based on a network of material-particle
worldlines (the particles possibly being minute clocks).]
It is from these viewpoints that one must approach
the questions raised by analyses
of the Salecker-Wigner setup.
As will be discussed in the next three sections,
one is led to very naive conclusions by adopting instead
a conventional viewpoint based on the intuition
that comes from present-day rudimentary (from a Planck-length
perspective) experimental setups.
The logic of the line of research started by the work
of Salecker and Wigner is the one of applying
the language/structures we ordinarily use in physical contexts
we do understand to contexts that instead seem to lie in
the realm of quantum gravity, hoping that this might guide us
toward some features of the correct quantum gravity.
We already know the answers to the above questions within
ordinary gravitation and quantum mechanics, and therefore an
exercise such as the one reported in Ref.~\cite{anos}
could not possibly teach us anything. It is instead at least
plausible that we get a glimpse of a true property of quantum
gravity by exploring the consequences of removing one of
the elements of the ordinary conceptual structure of quantum
mechanics. The Salecker-Wigner study
(just like the Bohr-Rosenfeld analysis)
suggests that among these conceptual elements of
quantum mechanics the one that is most likely
(although there are of course no guarantees)
to succumb to the unification of gravitation and quantum
mechanics is the requirement for devices to be treated
as classical. Removal of this requirement appears to guide
us toward some candidate properties of quantum gravity (not
of the ordinary laws of gravitation and quantum mechanics!),
which we can then hope to test directly in the laboratory
(as in some cases is actually possible~\cite{gacgwi,bignapap}).
I shall go back to these important points in the next three
sections, but before I do that let me just
briefly summarize the outcome of two simple
attempts to extract quantum-gravity intuition from
the conceptual framework set up by Salecker and Wigner.
One of these approaches I have developed in
Refs.~\cite{bignapap,gacmpla,gacgrf98}.
It is based on the simple observation
that if in quantum gravity the effective
minimum value of $M_b^{-1}+M_d^{-1}$
was $T_{obs}$-independent and $\delta D$-independent,
say $min (M_b^{-1}+M_d^{-1})
= [max(M^*)]^{-1} \equiv c L_{QG}/\hbar$,
we would then get
a bound on the measurability of distances
which goes like $\sqrt{T_{obs}}$
\begin{eqnarray}
\delta D \geq \sqrt{ {\hbar T_{obs} \over 2 \, max(M^*)}}
\equiv \sqrt{ {c T_{obs} L_{QG} \over 2}}
~,
\label{deltawignGACm}
\end{eqnarray}
and would therefore be
suggestive~\cite{gacgwi,bignapap,polonpap}
of random-walk stochastic processes.
I also observed that, if this effective $max(M^*)$
of quantum gravity could still be interpreted as some
maximum mass of the devices used in the measurement
procedure, the value of $max(M^*)$ could be bound by the
observation that in order to allow the measurement procedure
to be performed these devices should at least be light
enough not to turn into black holes.
This allows one to trade~\cite{bignapap,gacmpla,gacgrf98}
the effective mass scale $max(M^*)$
for an effective length scale $s^*$
which would be the maximum effective size\footnote{From
the viewpoint clarified above it is natural to envision
that this length scale $s^*$ would be a fundamental
scale of quantum gravity. Instead of introducing a dedicated scale
for it one could be tempted to consider the possibility that $s^*$
be identified with the only known quantum-gravity scale $L_p$,
even though this would render somewhat daring
the possible interpretation of $s^*$
as maximum size of the devices involved in the measurement.
In a sense more precisely discussed
in Refs.~\cite{gacgwi,bignapap,polonpap}, this identification
$s^* \equiv L_p$ is already ruled out by the same Caltech data
mentioned above~\cite{ligoprototype}.}
allowed in quantum gravity for the individual devices
participating in the measurement procedure:
\begin{eqnarray}
\delta D \geq \sqrt{ {L_p^2 \, c \, T_{obs} \over s^*}}
~.
\label{deltawignGACs}
\end{eqnarray}
[Of course, this whole exercise of trading $max(M^*)$
for $s^*$ only serves the purpose of giving an
alternative intuition for the new length scale $L_{QG}$,
which can now be seen as related to some effective maximum size
of devices $s^*$ by the equation $L_{QG} \equiv L_p^2/s^*$.]
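The logic of this trade can be made explicit with a rough
estimate: in order for the measurement procedure to be
performable, a device of size $s^*$ must not be a black hole,
which requires its mass to satisfy $M \leq s^* c^2 / (2 G)$,
and therefore (again up to factors of order one)
\begin{eqnarray}
L_{QG} \equiv {\hbar \over c \, max(M^*)}
\geq {2 \hbar G \over c^3 \, s^*}
\sim {L_p^2 \over s^*}
~, \nonumber
\end{eqnarray}
which, inserted in (\ref{deltawignGACm}),
reproduces (\ref{deltawignGACs}).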
Another approach to the derivation of a candidate quantum-gravity
bound on the measurability of distances
from an analysis of the Salecker-Wigner
measurement procedure has been developed by Ng
and Van Dam~\cite{ng}.
These authors took a somewhat different definition
of measurability bound~\cite{bignapap,gacmpla,gacgrf98}
and they also advocated a certain classical-gravity
approach to the estimate of $max(M^*)$.
The end result was
\begin{eqnarray}
\delta D \geq (L_p^2 \, c \, T_{obs})^{1/3}
~.
\label{deltawignNG}
\end{eqnarray}
In Refs.~\cite{gacgwi,bignapap}
it was observed that a $T_{obs}$-dependence of
the type in Eq.~(\ref{deltawignNG}) would be suggestive
of the stochastic space-time model
with $\beta = 5/6$.
It is interesting to observe~\cite{ng,gacmpla}
that relations such as (\ref{deltawignGACs})
and (\ref{deltawignNG}) can take the form of $D$-dependent
bounds on the measurability of $D$ by observing
that $D \sim c \, T_{obs}$ in typical measurement setups.
The bounds would
be $\delta D \geq \sqrt{D L_{QG}} \equiv \sqrt{D L_p^2/s^*}$
and $\delta D \geq (D L_p^2)^{1/3}$
respectively for (\ref{deltawignGACs})
and (\ref{deltawignNG}).
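For orientation, the two $D$-dependent bounds can be compared numerically. The following sketch (the 40-meter distance and the choice $s^* \equiv L_p$, i.e. $L_{QG} \equiv L_p$, are purely illustrative inputs of mine, not values singled out by the analysis) shows how different in magnitude the two scalings are at interferometer-like distance scales:

```python
import math

L_P = 1.616e-35  # Planck length in meters

def random_walk_bound(distance_m, l_qg_m):
    # delta D >= sqrt(D * L_QG): Eq. (deltawignGACs) rewritten
    # as a D-dependent bound via D ~ c T_obs
    return math.sqrt(distance_m * l_qg_m)

def ng_van_dam_bound(distance_m):
    # delta D >= (D * L_p^2)^(1/3): Eq. (deltawignNG) rewritten
    # as a D-dependent bound via D ~ c T_obs
    return (distance_m * L_P**2) ** (1.0 / 3.0)

D = 40.0  # meters; an interferometer-like distance scale (illustrative)
print(f"sqrt(D L_QG), L_QG = L_p : {random_walk_bound(D, L_P):.1e} m")
print(f"(D L_p^2)^(1/3)          : {ng_van_dam_bound(D):.1e} m")
```

The several orders of magnitude separating the two estimates illustrate why experimental data can discriminate between candidate values of $\beta$.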
\section{ON THE SALECKER-WIGNER CLOCK}
As manifest in the brief review provided in the previous two sections,
the Salecker-Wigner limit and the associated intuition concerning
quantum properties of space-time is based on an in-principle analysis
of the measurement of distances, with emphasis on the nature of the
devices used in the measurement procedure.
Accordingly, the measurement procedure is only schematically
described and only from a conceptual point of view.
The devices used in the measurement procedure are also only
considered from the point of
view of the role that they play in the conceptual
structure of the measurement procedure.
For example (an example which is relevant for some of the incorrect
conclusions drawn in Ref.~\cite{anos}), the
Salecker-Wigner ``clock'' is not simply a
timing device, but is to be understood as the network of
instruments needed for the ``clock'' to play its role
in the measurement
procedure ({\it e.g.} instruments needed to trigger the
transfer of information from the clock to the rest of the network of
devices that form the apparatus or instruments needed to affect the
position of the clock in ways needed by the measurement procedure).
This was already very clearly explained in the early works~\cite{wign}
by Salecker and Wigner, which in various points state
that the relevant idealized clocks are, for example,
capable of sending and receiving signals
(they are therefore composite devices including at least
a clock, a transmitter and a receiver).
It is in this sense that Salecker
and Wigner~\cite{wign} consider the clock.
As mentioned, they also had in mind a rough picture
in which space-time could be in principle operatively
defined by a network of such free-falling clocks,
providing a material reference system~\cite{rovellimrs}.
If this (as it might well be)
was the proper way to obtain an operative definition of
space-time, one would obviously be led to consider each of the
clocks in the network to be extremely small and light.
In general a rather natural intuition is that the
ideal clocks to be used in the measurement of
a gravitational observable should be very light,
in order not to disturb the observed quantity.
The same of course holds for all other devices
used in a gravitational measurement.
How light all these devices should be might depend on the
intended scale/sensitivity at which the measurement is
performed; for the operative definition of Planckian
distances one would expect that, since even tiny disturbances
would spoil the measurement, these ideal devices should be very
light, but the correct
quantum gravity would be needed for a definite answer.
The criticism of the Salecker-Wigner limit expressed
in Ref.~\cite{anos} was essentially based on two observations.
One of the observations, which I will address in the next two
sections, was based on the idea that
it might be possible to avoid the $\sqrt{T_{obs}}$ dependence
characteristic of the Salecker-Wigner limit.
The other observation, which I want to address in this section,
was based on the fact that
the data already available from
the {\it Caltech 40-meter interferometer}
(the same here used in Section~2 to
set bounds on simple models of fuzziness)
imply that the effective clock mass to be used in the
Salecker-Wigner formula would have to be larger than 3 grams,
which the authors of Ref.~\cite{anos} felt to be too high a mass
to be believable as a candidate mass of
fundamental clocks in Nature.
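A rough numerical illustration of the scales involved is given by the following sketch (the identification of $T_{obs}$ with a single round-trip light time and the $\sim 10^{-19}$-meter displacement-sensitivity figure are simplifying assumptions of mine for illustration, not the actual noise analysis of the Caltech data):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
C = 2.99792458e8        # speed of light, m/s

def salecker_wigner_bound(mass_kg, t_obs_s):
    # Minimum delta D from Eq. (deltawignOLD) in the limit in which
    # only the device mass matters (M_b -> infinity)
    return math.sqrt(HBAR * t_obs_s / (2.0 * mass_kg))

arm_length_m = 40.0             # Caltech-prototype-like arm length
t_obs = 2.0 * arm_length_m / C  # one round-trip light time (assumption)
delta_d = salecker_wigner_bound(3.0e-3, t_obs)  # M* = 3 grams
print(f"delta D for M* = 3 g: {delta_d:.1e} m")
# An experimental displacement sensitivity of very roughly 1e-19 m
# would therefore start probing effective masses of order a few grams;
# this is the sense in which such data translate into a bound of a
# few grams on the effective mass scale.
```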
As underlined by the observation~\cite{anos} that the 3-gram bound
is comparable to the masses of wristwatch components,
this criticism comes from taking the Salecker-Wigner clock
literally, as a somewhat ordinary timing device.
This misses completely the point emphasized in the brief
review I have given above, {\it i.e.} that
the role of the effective Salecker-Wigner clock mass
cannot be taken literally as the mass of an ordinary
timing device: it is a more fundamental effective mass scale
characterizing the devices being used
(as clearly indicated by the fact that Salecker and Wigner
attribute to their conceptualization of
a ``clock'' the capability to transmit, receive
and process signals).
One must also consider that this idealized clock
was conceived as a device needed for a proper
operative definition of Planck-scale distances, and
therefore there is little to be gained from the intuition
of wristwatches and other ordinary timing devices.
The comment on the 3-gram bound given in Ref.~\cite{anos}
also fails to take into account the arguments,
which had already appeared in the
literature~\cite{bignapap,gacmpla,gacgrf98}
and have been here reviewed in Section~4,
concerning the need to interpret
the effective mass of the idealized Salecker-Wigner clock
as a fundamental but not necessarily
universal property of quantum gravity,
possibly depending on the type of length scales
involved/probed in
the experiment (as argued above for the
associated effective scale $max(M^*)$).
For experiments involving distance scales as large
as 40 meters, the result $max (M^*) > 3$~grams seems perfectly
consistent\footnote{Perhaps a bound of the
type $max (M^*) > 3$~grams would instead
be surprising if we had found it in experiments
defining Planckian distances in the spirit of the
type of networks of worldlines
considered by Salecker and Wigner (experiments which of course
are extremely far in the future if not impossible in principle).
Actually, it is quite daring to trust our feeling
of ``surprise'' when venturing so far from our present-day
intuition:~along the way to
the Planck scale we might be forced to change completely
our intuition about the natural world. For example,
on the subject of timing devices here of relevance
the interplay between gravitation and quantum mechanics
might even provide us new types of timing devices.
(One attempt to construct such new tools is discussed in
Ref.~\cite{dharam3QG} and some of the references therein.)}
with the idea that there should be some absolute bound
on $max(M^*)$ in any given quantum-gravity experimental
setup.
If experiments had given
a positive result (say, $max(M^*) \sim 2$~grams)
it would not have upset anything else we know about the physical
world (only the most sensitive interferometers
would be sensitive to the effects of a Salecker-Wigner limit
with $max(M^*) \sim 2$~grams), but at the same time the fact that
it was instead found that $max (M^*) > 3$~grams
in experiments involving distance scales as large
as 40 meters should not surprise us
nor is it inconsistent with the arguments put forward by
Salecker and Wigner and followers.
Because quantum gravity is at such an early stage of development,
we are at the same time
looking for the value (if any!)
of $max (M^*)$ and looking for an understanding
of the correct interpretation and the true physical
origin of such a bound on $max(M^*)$ in a quantum gravity
that would accommodate it at some fundamental level.
The points I discussed in this section
also clarify, within an explicit example,
the sense
in which the logic adopted in Ref.~\cite{anos} is inadequate
for the analysis of the conceptual framework set up by
Salecker and Wigner.
In Ref.~\cite{anos} the whole discussion of
the ``Salecker-Wigner clock'' remained
strictly within the confines
of the intuition and the logic
of ordinary gravitation
and quantum mechanics, where we have nothing to learn.
The conceptual framework set up by
Salecker and Wigner instead treats the clock
in a way which, in as much
as it renounces the idealization of a classical clock,
encodes one plausible
departure from the ordinary laws of
quantum mechanics that could be induced by the process
of unification of gravitation with quantum mechanics.
\section{ON \space THE \space USE \space OF \space A \space
POTENTIAL \space WELL \space TO \space
REDUCE \space CLOCK-INDUCED \space UNCERTAINTY}
In Ref.~\cite{anos} the work of Salecker and Wigner
was also criticized by arguing that
it would be inappropriate to treat
the clock as freely moving, as effectively done in the derivation
of the Salecker-Wigner limit.
We were reminded in Ref.~\cite{anos} of the fact that
for a clock appropriately bound (say, by some ideal springs)
to another object in its vicinity
the uncertainty in the position of the clock
with respect to that object would
not increase with time, unlike the case of a free clock.
This observation completely misses the point of the
Salecker-Wigner limit. The uncertainty responsible for
the Salecker-Wigner limit comes from the uncertainty in
the relative position between the clock and the two bodies
whose distance is being measured (say, the distance between
the clock and the center of mass of the system composed of
the two bodies whose distance is being measured).
By binding in a harmonic potential the clock and an external
body one would not affect the nature of the Salecker-Wigner
analysis.
The position of the clock
(or, say, the center of mass of the system composed of
the clock and the external body)
relative to the two bodies
whose distance is being measured (or, say, relative to
the center of mass of the system composed of
the two bodies whose distance is being measured)
is still a free coordinate whose uncertainty contributes
directly to the uncertainty in our measurement of distance.
The uncertainty in this free coordinate
will spread according to the formula
\begin{eqnarray}
\delta x \geq \sqrt{ {\hbar T_{obs} \over 2}
\left( {1 \over M_b}
+ {1 \over M_c + M_{extra}} \right) }
~,
\label{deltawignANOS}
\end{eqnarray}
where $M_{extra}$ is the mass of the mentioned external body.
The $T_{obs}$ dependence necessary for
all the significant implications of the Salecker-Wigner analysis
is still with us.
Contrary to the claim made in Ref.~\cite{anos},
by binding in a harmonic potential the clock and an external
body, one does not truly eliminate the $T_{obs}$-dependent
uncertainty: one simply trades one source of $T_{obs}$-dependent
uncertainty for another essentially equivalent source.
This simply provides one more piece of intuition for the $max (M^*)$
discussed in the preceding sections
(and in Refs.~\cite{bignapap,gacmpla,gacgrf98}), which in this context
would be identified with the inverse
of $min \{ 1/M_b + [1/(M_c + M_{extra})] \}$.
[In any case, as explained above, $M^*$ would plausibly not only
reflect the properties of the devices used for timing
but of the whole set of devices needed for the measurement of
distances.]
Whether or not there is a spring binding the clock and an external
body, as a result of the analysis of the Salecker-Wigner
measurement procedure we are still left with the intuition
that some fundamental (although perhaps dependent on the distance
scale which is to be measured~\cite{bignapap}) value for $max(M^*)$
might
be a prediction of quantum gravity and we are still left wondering
how large this $max(M^*)$ could be.
Perhaps when measuring large distances
with relatively low accuracy quantum gravity might allow us
to take rather large $M^*$ (which, if so desired,
one might effectively describe
in the language of Ref.~\cite{anos}
as the possibility to introduce
a rather heavy external body to be ``attached''
to the clock), but as shorter distances are probed
the disturbance of a large $M^*$ (or the introduction of heavy
bodies to which the clock would be attached) must eventually
become unacceptable. This is certainly plausible, but what could be
the value of $max (M^*)$ for measurements at a given distance scale?
The correct answer of course requires full quantum gravity
(because it must reflect the way in which the operative
definition of distances is codified in quantum gravity),
but we can try to gain some insight
by pushing further the experimental bounds on $max(M^*)$.
Even more complicated at the conceptual level
is the search for an analog of $M^*$
in attempts to operatively define a tight (perhaps Planck-length
tight) network of geodesic (world) lines,
in the spirit of ``material reference systems''~\cite{rovellimrs}
and of some of the comments
found in the work of Salecker and Wigner~\cite{wign}.
Is such a task to be required of quantum gravity?
How large/heavy could the clocks suitable for this task be?
Wouldn't it be paradoxical to consider the possibility of
attaching these free-falling clocks to some external bodies?
As already emphasized in Refs.~\cite{bignapap,gacmpla,gacgrf98}
there are several quite overwhelming open issues, but it
seems unlikely that we could gain some insight by
extrapolating {\it ad infinitum}
(as done in Ref.~\cite{anos})
from the intuition of
measurement-analysis ideas applicable to
rudimentary present-day experimental setups.
Before closing this section let me comment on another
scenario that some readers might be tempted to consider
as a modification of the potential-well proposal
put forward in Ref.~\cite{anos}. One might envisage
using some springs to connect the clock to one of the bodies
(say body $A$)
whose distance is being measured, rather than connecting
the clock to an external body.
This would assure that the uncertainty
in relative position between the clock
and that body $A$ does not increase with time, but it is easy to
verify that the disturbance
that this setup would introduce is of the same magnitude as the
uncertainty it eliminates.
In fact, the system composed of the clock and body $A$ would be
free. Essentially the uncertainty in the initial momentum
and position of the clock relative to the second body (body $B$)
would now be transferred to the body $A$ ``through the springs''.
This would introduce an uncertain disturbance to the distance
between body $A$ and body $B$ that is being measured, and the
disturbance is of course just of the same magnitude as the
uncertainty contribution arising in the original Salecker-Wigner
setup.
In addition, each time the (Salecker-Wigner-type) clock
emits a signal the corresponding uncertain recoil would be
transmitted through the spring to the body $A$.
\section{ON THE POSSIBILITY OF A FUNDAMENTALLY CLASSICAL CLOCK}
As an alternative possibility
to eliminate the $\sqrt{T_{obs}}$ dependence present
in the Salecker-Wigner limit, in Ref.~\cite{anos}
we are reminded of the fact that ordinary clocks
are immersed in a (thermal or otherwise) environment
that induces ``wave-function collapse''.
In fact, to extremely good approximation
these clocks behave classically.
Again this is a correct intuition derived from experience
with rudimentary (from a Planck-scale viewpoint) experimental
setups, which however (like the other points argued in
Ref.~\cite{anos}) appears to be incorrectly applied to
the conceptual framework considered by Salecker and Wigner.
While ``environment-collapsed'' clocks (and other
environment-collapsed devices)
could be natural in ordinary contexts, it seems worth exploring
the idea that quantum gravity, as a truly fundamental theory
of space and time, would not resort (at an in-principle level)
to collapse-inducing environments for the operative definition
of distances. In any case, this is the expectation concerning
quantum gravity that is being explored through the
relevant Salecker-Wigner-motivated research line.
It also seems that quantum gravity, having to incorporate
an operative definition of distances applicable even in
the Planck regime,
would have some difficulties introducing at a fundamental
level the use of environments to collapse the wave function
of devices. What would such an environment look like in
the case in which one is operatively defining a
nearly-Planckian distance?
(And which type of environment
would be suitable for the operative definition
of a Planck-length-tight network of world lines?
How would such an environment be introduced in
the operative definition of a material reference system?)
Concerning the possibility of a fundamentally classical
clock in Ref.~\cite{anos}
the reader also finds what appears to be a genuinely
incorrect statement (not another example of ordinary intuition
inappropriately applied
to the forward-looking framework set up by Salecker and Wigner,
but simply a case of incorrect analysis).
In fact, Ref.~\cite{anos} appears to suggest
that the interactions among
the components of even a perfectly/ideally isolated clock
might induce classicality of the position of the
center of mass of the clock, which is the physical quantity
whose quantum properties lead to the Salecker-Wigner limit.
While the interactions among
the components should lead to the emergence of some classical
variables ({\it e.g.}, the variable that keeps track of time),
if the clock is ideally isolated interactions
among its components should not have any effect
on the quantum properties of
the position of the center of mass of the clock.
[This is certainly the case for some of the explicit
examples of ``toy clocks''
considered by Salecker and Wigner, one of which
is only composed of three free-falling particles!]
\section{CLOSING REMARKS}
From a conceptual viewpoint the analysis reported in Ref.~\cite{anos}
can be divided into two parts. In one part
a set of questions was raised and in the other part
tentative answers to these questions were given.
As this Letter emphasized, some of the questions
considered in Ref.~\cite{anos} are indeed the most fundamental
questions facing research based on the Salecker-Wigner limit.
However, all of these questions had already been raised in
previous literature
(see, {\it e.g.}, Refs.~\cite{bignapap,gacmpla,gacgrf98}).
These questions have been here compactly phrased as:
Should quantum gravity predict a $max (M^*)$, and could this be
interpreted as the maximum acceptable mass of one or more devices?
How large could $max (M^*)$ be?
Should $max (M^*)$ depend on the distance scales being probed?
Should the idealization of a classical clock survive the transition
from ordinary quantum mechanics to quantum gravity?
While the questions considered are just the right ones,
the answers given in Ref.~\cite{anos} are incorrect.
In this note I have tried to clarify how those answers
are the result of inappropriately applying
the intuition of rudimentary (from a Planck-scale viewpoint)
measurement analysis
to the forward-looking framework set up by Salecker and Wigner.
The debate on the Salecker-Wigner limit must of course
continue until the above-mentioned
outstanding open questions get settled, but
(if the objective remains that of getting ideas
on plausible quantum-gravity effects)
the only potentially fruitful way to
approach this problem
is to seek the answers within the same
forward-looking framework where the questions arose.
Nothing more than what we already know
can be learned by assuming that the laws
of ordinary gravitation and quantum mechanics
remain unaltered all the way down to the Planck regime.
As emphasized here,
the logic of the line of research started by the work
of Salecker and Wigner is the one of applying
the language/structures we ordinarily use in those
physical contexts that we do understand
to contexts that instead would naturally
lie in the realm of quantum gravity,
and then exploring the consequences of removing one of
the elements of the ordinary conceptual structure of quantum
mechanics. The Salecker-Wigner study
(just like the Bohr-Rosenfeld analysis)
suggests that among these conceptual elements of
quantum mechanics the one that is most likely
to succumb to the unification of gravitation and quantum
mechanics is the requirement for devices to be treated
as classical. Removal of this requirement appears to guide
us toward some candidate properties of quantum gravity (not
of the ordinary laws of gravitation and quantum mechanics!),
which we can then hope to test directly in the laboratory
(as in some cases is actually possible~\cite{gacgwi,bignapap}).
Quite aside from the
subject of open issues in the study of the Salecker-Wigner
limit, I have also emphasized in this Letter that, contrary
to the impression one gets from reading Ref.~\cite{anos},
there is substantial motivation for
the phenomenological programme of interferometric
studies~\cite{gacgwi,bignapap} of distance fuzziness here
reviewed in Section~2, independently
of the Salecker-Wigner limit (and independently
of the fact that, as clarified above,
the validity of this limit has not been seriously questioned).
As discussed in Section~2
(and discussed in greater detail
in Refs.~\cite{bignapap,polonpap}),
the general motivation for that phenomenological programme
comes from a long tradition of ideas
(developing independently of the ideas related to
the Salecker-Wigner limit)
on foamy/fuzzy space-time,
and also comes from more
recent work~\cite{aemn1,gacgrb,gampul,fordlightcone,adrian}
on the possibility that quantum gravity
might induce a deformation of the dispersion relation that
characterizes the propagation of the massless particles
used as space-time probes in the operative definition
of distances.
It is actually quite important
that this interferometry-based phenomenological programme,
as well as other recently-proposed quantum-gravity-motivated
phenomenological
programmes~\cite{polonpap,elmn,hpcpt,gacgrb,stringcogwi,grwlarge},
be pursued quite aggressively,
since the lack of experimental input
has been the most important obstacle~\cite{nodata}
in these many years of research on quantum gravity.
\vglue 0.6cm
\leftline{\Large {\bf Acknowledgements}}
\vglue 0.4cm
Part of this work was done while
the author was visiting the {\it Center for Gravitational
Physics and Geometry} of Penn State University.
I happily acknowledge
discussions on matters related to the subject of this Letter
with several members and visitors of the Center, particularly
with R.~Gambini and J.~Pullin.
I am also happy to thank
C.~Kiefer, for discussions on decoherence,
and D.~Ahluwalia, for feedback on a first rough draft
of the manuscript.
\bigskip
\baselineskip 12pt plus .5pt minus .5pt
\section{Introduction}\label{sec1}
In \cite{ambro}, the notion of quasi-log structures was introduced
in order to prove the cone and contraction theorem for $(X, \Delta)$ where
$X$ is a normal variety and $\Delta$ is an effective $\mathbb R$-divisor
such that $K_X+\Delta$ is $\mathbb R$-Cartier. Although the theory of quasi-log
schemes is very powerful and useful, it may look much harder than the usual X-method for
kawamata log terminal pairs.
Moreover, the paper \cite{fujino} recovers the main theorem of \cite{ambro} without
using the notion of quasi-log structures. So the theory of
quasi-log schemes is not yet popular. We note that the framework of \cite{fujino}
is more similar to the theory of algebraic multiplier ideal
sheaves than to the traditional X-method. Recently, the author
proved that every quasi-projective semi log canonical
pair has a natural quasi-log structure in \cite{fujino-slc}.
The theory of quasi-log schemes seems to be indispensable for the
study of semi log canonical pairs. Now the importance of quasi-log structures
is increasing. One of the main purposes of this paper
is to clarify the definition of quasi-log structures and
make the theory of quasi-log schemes more flexible and more useful.
The following theorem is the main theorem of this paper, which is natural but
missing in the literature.
For the precise statement, see Theorem \ref{main-thm} below.
\begin{thm}[Pull-back of quasi-log structures]\label{main}
Let $[X, \omega]$ be a quasi-log scheme and let
$h:X'\to X$ be a smooth quasi-projective
morphism.
Then $[X', \omega']$, where $\omega'=h^*\omega\otimes \omega_{X'/X}$
with $\omega_{X'/X}=\det \Omega^1_{X'/X}$,
has a natural quasi-log structure induced by $h$.
\end{thm}
Theorem \ref{main} does not directly follow from the original definition of
quasi-log schemes. We have to construct a quasi-log resolution of
$[X', \omega']$ suitably.
We make an important remark.
We do not know whether Theorem \ref{main} holds true or not
without assuming that $h$ is {\em{quasi-projective}}.
As a useful special case of Theorem \ref{main},
we have:
\begin{thm}[Finite \'etale covers]\label{thm1.2}
Let $[X, \omega]$ be a quasi-log scheme and let
$h:X'\to X$ be a finite \'etale morphism.
Then $[X', \omega']$, where $\omega'=h^*\omega$,
has a natural quasi-log structure induced by $h$.
\end{thm}
As an easy application of Theorem \ref{thm1.2} to singular Fano varieties, we obtain:
\begin{cor}\label{cor1.2}
Let $[X, \omega]$ be a projective quasi-log canonical pair such that
$-\omega$ is ample, that is, $[X, \omega]$ is a quasi-log canonical
Fano variety.
Then the algebraic fundamental group of $X$ is trivial, equivalently,
$X$ has no non-trivial finite \'etale covers.
\end{cor}
By Corollary \ref{cor1.2}, it is natural to conjecture:
\begin{conj}\label{conj1.3}
Let $[X, \omega]$ be a projective
quasi-log canonical pair such that $-\omega$ is ample.
Then $X$ is simply connected.
\end{conj}
In general, there exists an irreducible projective
variety whose algebraic fundamental group is trivial
and whose topological fundamental group is non-trivial (see Example \ref{ex5.1}).
As a special case of Conjecture \ref{conj1.3}, we have:
\begin{conj}\label{conj1.4}
Let $(X, \Delta)$ be a projective semi log canonical pair
such that $-(K_X+\Delta)$ is ample, that is,
$(X, \Delta)$ is a semi log canonical Fano variety.
Then $X$ is simply connected.
\end{conj}
For the details of semi log canonical pairs, see \cite{fujino-slc}.
Note that every quasi-projective semi log canonical
pair has a natural quasi-log structure with only quasi-log canonical
singularities. It is the main theorem of \cite{fujino-slc}.
It is well known that Conjecture \ref{conj1.4}
holds when $(X, \Delta)$ is kawamata log terminal (see \cite{takayama}).
Kento Fujita pointed out that Conjecture \ref{conj1.4}
holds true when $(X, \Delta)$ is log canonical
(see Theorem \ref{thm-fujita} below).
We give Fujita's proof in Section \ref{sec6} for the reader's convenience.
We summarize the contents of this paper.
Section \ref{sec2} collects some basic definitions and results.
In Section \ref{sec3}, we recall the
definition of quasi-log schemes and
state the main theorem of this paper
precisely (see Theorem \ref{main-thm}).
Section \ref{sec4} is the main part of this paper.
Here we discuss the basic properties of quasi-log schemes.
The author believes that
Section \ref{sec4} makes the theory of quasi-log schemes more flexible and
more useful than Ambro's original framework in \cite{ambro}.
Section \ref{sec-proof} is devoted to the proof of the main theorem
(see Theorem \ref{main} and Theorem \ref{main-thm}).
In Section \ref{sec5}, we treat some applications of the main theorem
to singular Fano varieties.
We prove that the algebraic fundamental group
of a quasi-log canonical Fano variety is always
trivial (see Corollary \ref{cor1.2}).
In Section \ref{sec6}, we prove that a log canonical
Fano variety is simply connected (see Theorem \ref{thm-fujita}).
The proof of Theorem \ref{thm-fujita}
is independent of the theory of quasi-log schemes.
Section \ref{sec7} is an appendix, where
we discuss Ambro's original definition of quasi-log schemes.
\begin{ack}
The author was partially supported by the Grant-in-Aid
for Young Scientists (A) $\sharp$24684002 from JSPS.
He would like to thank Takeshi Abe, Kento Fujita, Yuichiro Hoshi, and Tetsushi Ito
for answering his questions and giving him useful comments.
\end{ack}
We will work over $\mathbb C$, the complex number field, throughout this
paper. For the standard notation of the log minimal model program, see,
for example, \cite{fujino}.
For the basic properties and results of semi log canonical
pairs, see \cite{fujino-slc}.
\section{Preliminaries}\label{sec2}
In this section, we collect some basic results and definitions.
\begin{notation}\label{notation2.1}
A pair $[X, \omega]$ consists of a scheme $X$ and an $\mathbb R$-Cartier
$\mathbb R$-divisor (or $\mathbb R$-line bundle) $\omega$ on $X$.
In this paper, a scheme means a separated scheme of finite type over
$\Spec \mathbb C$.
\end{notation}
\begin{notation}[Divisors]\label{notation2.2}
Let $B_1$ and $B_2$ be two $\mathbb R$-Cartier $\mathbb R$-divisors
on a scheme $X$. Then $B_1$ is linearly (resp.~$\mathbb Q$-linearly, or $\mathbb R$-linearly) equivalent to
$B_2$, denoted by $B_1\sim B_2$ (resp.~$B_1\sim _{\mathbb Q} B_2$, or $B_1\sim _{\mathbb R}B_2$) if
$$
B_1=B_2+\sum _{i=1}^k r_i (f_i)
$$
such that $f_i \in \Gamma (X, \mathcal K_X^*)$ and $r_i\in \mathbb Z$ (resp.~$r_i \in \mathbb Q$, or $r_i \in
\mathbb R$) for every $i$. Here, $\mathcal K_X$ is the sheaf of total quotient rings of
$\mathcal O_X$ and $\mathcal K_X^*$ is the sheaf of invertible elements in the sheaf of rings $\mathcal K_X$.
We note that
$(f_i)$ is a {\em{principal Cartier divisor}} associated to $f_i$, that is,
the image of $f_i$ by
$
\Gamma (X, \mathcal K_X^*)\to\Gamma (X, \mathcal K_X^*/\mathcal O_X^*)$,
where $\mathcal O_X^*$ is the sheaf of invertible elements in $\mathcal O_X$.
Let $D$ be a $\mathbb Q$-divisor (resp.~an $\mathbb R$-divisor)
on an equi-dimensional variety $X$, that is,
$D$ is a finite formal $\mathbb Q$-linear (resp.~$\mathbb R$-linear) combination
$$D=\sum _i d_i D_i$$ of irreducible
reduced subschemes $D_i$ of codimension one.
We define the {\em{round-up}} $\lceil D\rceil =\sum _i \lceil d_i \rceil D_i$ (resp.~{\em{round-down}}
$\lfloor D\rfloor =\sum _i \lfloor d_i \rfloor D_i$), where
for every real number $x$, $\lceil x\rceil$ (resp.~$\lfloor x\rfloor$) is the integer
defined by $x\leq \lceil x\rceil <x+1$
(resp.~$x-1<\lfloor x\rfloor \leq x$). The
{\em{fractional part}} $\{D\}$ of $D$ denotes $D-\lfloor D\rfloor$. We put
$$D^{<1}=\sum _{d_i<1}d_i D_i, \quad
D^{\leq 1}=\sum _{d_i\leq1}d_i D_i, \quad \text{and}\quad D^{=1}=\sum _{d_i=1}D_i.$$
We can define $D^{\geq 1}$, $D^{>1}$,
and so on, analogously.
We call $D$ a {\em{boundary}} (resp.~{\em{subboundary}}) $\mathbb R$-divisor if
$0\leq d_i\leq 1$ (resp.~$d_i\leq 1$) for every $i$.
\end{notation}
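The following elementary computation, with coefficients chosen purely for illustration, shows how the operations above interact; it also records the combinations $\lceil -(D^{<1})\rceil$ and $\lfloor D^{>1}\rfloor$ that appear repeatedly in the definition of quasi-log schemes below.

```latex
% Illustration of Notation 2.2; the coefficients -1/2, 1, 3/2 are
% chosen only for the purpose of this example.
D=-\tfrac{1}{2}D_1+D_2+\tfrac{3}{2}D_3
\quad\Longrightarrow\quad
\lceil D\rceil =D_2+2D_3, \quad
\lfloor D\rfloor =-D_1+D_2+D_3, \quad
\{D\}=\tfrac{1}{2}D_1+\tfrac{1}{2}D_3,
% and, for the truncations,
D^{<1}=-\tfrac{1}{2}D_1, \qquad D^{=1}=D_2, \qquad
D^{>1}=\tfrac{3}{2}D_3.
```

In particular, $\lceil -(D^{<1})\rceil=D_1$ and $\lfloor D^{>1}\rfloor=D_3$, so $\lceil -(D^{<1})\rceil-\lfloor D^{>1}\rfloor=D_1-D_3$; this is the combination entering condition (2) of Definition \ref{def3.2}.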
\begin{notation}[Singularities of pairs]\label{notation2.3}
Let $X$ be a normal variety and let $\Delta$ be an
$\mathbb R$-divisor on $X$
such that $K_X+\Delta$ is $\mathbb R$-Cartier.
Let $f:Y\to X$ be
a resolution such that $\Exc(f)\cup f^{-1}_*\Delta$,
where $\Exc (f)$ is the exceptional locus of $f$
and $f^{-1}_*\Delta$ is
the strict transform of $\Delta$ on $Y$,
has a simple normal crossing support. We can
write
$$K_Y=f^*(K_X+\Delta)+\sum _i a_i E_i.
$$
We say that $(X, \Delta)$
is {\em{sub log canonical}} ({\em{sub lc}}, for short) if $a_i\geq -1$ for every $i$.
We usually write $a_i= a(E_i, X, \Delta)$
and call it the {\em{discrepancy coefficient}} of
$E_i$ with respect to $(X, \Delta)$.
It is well known that there exists the largest Zariski open set $U$ of $X$ such that
$(U, \Delta|_U)$ is sub log canonical.
If there exist a resolution $f:Y\to X$ and a divisor $E$ on $Y$ such
that $a(E, X, \Delta)=-1$ and $f(E)\cap U\ne \emptyset$, then $f(E)$ is called a
{\em{log canonical center}} (an {\em{lc center}}, for short) with respect to $(X, \Delta)$.
If $(X, \Delta)$ is sub log canonical and $\Delta$ is effective, then
$(X, \Delta)$ is called {\em{log canonical}} ({\em{lc}}, for short).
We note that we can define $a(E_i, X, \Delta)$ in more general
settings (see \cite[Definition 2.4]{kollar2}).
\end{notation}
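For readers less familiar with discrepancy computations, the following standard example, not taken from the references above and easily checked by hand, may be helpful.

```latex
% Standard discrepancy computation. Let f: Y -> X be the blow-up of a
% smooth surface X at a point p, with exceptional curve E:
K_Y=f^*K_X+E, \qquad\text{so that}\qquad a(E,X,0)=1.
% If Delta = C is a curve with an ordinary node at p, then
% f^*C = \widetilde{C} + 2E with \widetilde{C} = f^{-1}_* C, and hence
K_Y+\widetilde{C}=f^*(K_X+C)-E, \qquad\text{that is,}\qquad a(E,X,C)=-1.
```

Thus $(X, C)$ is log canonical and $p=f(E)$ is a log canonical center of $(X, C)$ in the sense of Notation \ref{notation2.3}.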
Let us recall the definition of simple normal crossing pairs.
\begin{defn}[Simple normal crossing pairs]\label{def-snc}
We say that the pair $(X, D)$ is {\em{simple normal crossing}} at
a point $a\in X$ if $X$ has a Zariski open neighborhood $U$ of $a$ that can be embedded in a smooth
variety
$Y$,
where $Y$ has a regular system of parameters $(x_1, \cdots, x_p, y_1, \cdots, y_r)$ at
$a=0$ in which $U$ is defined by a monomial equation
$$
x_1\cdots x_p=0
$$
and $$
D=\sum _{i=1}^r \alpha_i(y_i=0)|_U, \quad \alpha_i\in \mathbb R.
$$
We say that $(X, D)$ is a {\em{simple normal crossing pair}} if it is simple normal crossing at every point of $X$.
We say that a simple normal crossing pair $(X, D)$ is {\em{embedded}}
if there exists a closed embedding $\iota:X\to M$, where
$M$ is a smooth variety of dimension $\dim X+1$.
We call $M$ the {\em{ambient space}} of $(X, D)$.
If $(X, 0)$ is a simple normal crossing pair, then $X$ is called a {\em{simple normal crossing
variety}}. If $X$ is a simple normal crossing variety, then $X$ has only Gorenstein singularities.
Thus, it has an invertible dualizing sheaf $\omega_X$.
Therefore, we can define the {\em{canonical divisor $K_X$}} such that
$\omega_X\simeq \mathcal O_X(K_X)$.
It is a Cartier divisor on $X$ and is well-defined up to linear equivalence.
Let $X$ be a simple normal crossing variety and let $X=\bigcup _{i\in I}X_i$ be the
irreducible decomposition of $X$.
A {\em{stratum}} of $X$ is an irreducible component of $X_{i_1}\cap \cdots \cap X_{i_k}$ for some
$\{i_1, \cdots, i_k\}\subset I$.
Let $X$ be a simple normal crossing variety and
let $D$ be a Cartier divisor on $X$.
If $(X, D)$ is a simple normal crossing pair and $D$ is reduced,
then $D$ is called a {\em{simple normal crossing divisor}} on $X$.
Let $(X, D)$ be a simple normal crossing pair.
Let $\nu:X^\nu \to X$ be the normalization.
We define $\Theta$ by the formula
$$
K_{X^\nu}+\Theta=\nu^*(K_X+D).
$$
Then a {\em{stratum}} of $(X, D)$ is an irreducible component of $X$ or the $\nu$-image
of a log canonical center of $(X^\nu, \Theta)$
(see Notation \ref{notation2.3}).
When $D=0$,
this definition is compatible with the above definition of the strata of $X$.
When $D$ is a boundary $\mathbb R$-divisor,
$W$ is a stratum of $(X, D)$ if and only if
$W$ is an slc stratum of $(X, D)$ (see
\cite[Definition 2.5]{fujino-slc}). Note that
$(X, D)$ is semi log canonical if $D$ is a boundary $\mathbb R$-divisor.
\end{defn}
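A minimal example, included here only for illustration, may clarify the notion of strata. Take $M=\mathbb A^3$ with coordinates $(x, y, z)$ and consider the union of two coordinate hyperplanes:

```latex
% Minimal example of an embedded simple normal crossing pair in
% M = A^3 with coordinates (x, y, z); chosen only for illustration.
X=X_1\cup X_2, \qquad X_1=\{x=0\}, \quad X_2=\{y=0\}, \qquad
D=(z=0)|_X.
```

Then $X$ is a simple normal crossing variety whose strata are $X_1$, $X_2$, and the line $\ell=X_1\cap X_2=\{x=y=0\}$, and $D$ is a simple normal crossing divisor on $X$. The strata of the pair $(X, D)$ are the strata of $X$ together with the lines $D\cap X_1$ and $D\cap X_2$ and the origin $\ell\cap D$, the latter arising as $\nu$-images of log canonical centers of $(X^\nu, \Theta)$.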
\begin{notation}
$\pi_1(X)$ denotes the topological fundamental group of
$X$.
\end{notation}
\section{Pull-back of quasi-log structures}\label{sec3}
In this section, we give a precise statement of Theorem \ref{main}
(see Theorem \ref{main-thm}).
First, let us recall the definition of {\em{globally embedded
simple normal crossing pairs}} in order to define quasi-log schemes.
\begin{defn}[Globally embedded simple normal crossing
pairs]\label{def3.1}
Let $Y$ be a simple normal crossing divisor
on a smooth
variety $M$ and let $D$ be an $\mathbb R$-divisor
on $M$ such that
$\Supp (D+Y)$ is a simple normal crossing divisor on $M$ and that
$D$ and $Y$ have no common irreducible components.
We put $B_Y=D|_Y$ and consider the pair $(Y, B_Y)$.
We call $(Y, B_Y)$ a {\em{globally embedded simple normal
crossing pair}} and $M$ the {\em{ambient space}} of $(Y, B_Y)$.
\end{defn}
It is obvious that a globally embedded simple normal crossing
pair is an embedded simple normal crossing
pair in Definition \ref{def-snc}.
Let us define {\em{quasi-log schemes}} (see also Definition \ref{def7.1}
below).
\begin{defn}[Quasi-log schemes]\label{def3.2}
A {\em{quasi-log scheme}} is a scheme $X$ endowed with an
$\mathbb R$-Cartier $\mathbb R$-divisor
(or $\mathbb R$-line bundle)
$\omega$ on $X$, a proper closed subscheme
$X_{-\infty}\subset X$, and a finite collection $\{C\}$ of reduced
and irreducible subschemes of $X$ such that there is a
proper morphism $f:(Y, B_Y)\to X$ from a globally
embedded simple
normal crossing pair satisfying the following properties:
\begin{itemize}
\item[(1)] $f^*\omega\sim_{\mathbb R}K_Y+B_Y$.
\item[(2)] The natural map
$\mathcal O_X
\to f_*\mathcal O_Y(\lceil -(B_Y^{<1})\rceil)$
induces an isomorphism
$$
\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow} f_*\mathcal O_Y(\lceil
-(B_Y^{<1})\rceil-\lfloor B_Y^{>1}\rfloor),
$$
where $\mathcal I_{X_{-\infty}}$ is the defining ideal sheaf of
$X_{-\infty}$.
\item[(3)] The collection of subvarieties $\{C\}$ coincides with the images
of the $(Y, B_Y)$-strata that are not included in $X_{-\infty}$.
\end{itemize}
We simply write $[X, \omega]$ to denote
the above data
$$
\bigl(X, \omega, f:(Y, B_Y)\to X\bigr)
$$
if there is no risk of confusion.
Note that a quasi-log scheme $X$ is the union of $\{C\}$ and $X_{-\infty}$.
We also note that $\omega$ is called the {\em{quasi-log canonical class}}
of $[X, \omega]$, which is defined up to $\mathbb R$-linear equivalence.
A {\em{relative quasi-log scheme}}
$X/S$ is a quasi-log scheme $X$ endowed
with a proper morphism $\pi:X\to S$.
\end{defn}
\begin{rem}\label{rem3.15}
Let $\Div(Y)$ be the group of Cartier divisors
on $Y$ and let $\Pic (Y)$ be the Picard group of $Y$.
Let $$
\delta_Y:\Div(Y)\otimes \mathbb R\to \Pic(Y)\otimes\mathbb R
$$
be the homomorphism
induced by $A\mapsto \mathcal O_Y(A)$
where $A$ is a Cartier divisor on $Y$.
When $\omega$ is an $\mathbb R$-line bundle in Definition \ref{def3.2},
$$
f^*\omega\sim _{\mathbb R}K_Y+B_Y
$$
means
$$
f^*\omega=\delta_Y(K_Y+B_Y)
$$
in $\Pic(Y)\otimes \mathbb R$.
Even when $\omega$ is an $\mathbb R$-line bundle,
we use $-\omega$ to denote the inverse of $\omega$ in
$\Pic (X)\otimes \mathbb R$ (see Corollary \ref{cor1.2} and
Conjecture \ref{conj1.3}) if there is no risk of confusion.
If $\omega$ is an $\mathbb R$-Cartier $\mathbb R$-divisor
on $X$ in Theorem \ref{main}, $$h^*\omega\otimes
\det \Omega_{X'/X}^1$$ means
$$
\delta_{X'}(h^*\omega)\otimes \det \Omega_{X'/X}^1
$$
in $\Pic (X')\otimes \mathbb R$ where $\delta_{X'}:\Div (X')\otimes
\mathbb R\to \Pic(X')\otimes \mathbb R$.
\end{rem}
We give an important remark on Definition \ref{def3.2}.
\begin{rem}[Schemes versus varieties]\label{rem3.3}
A quasi-log {\em{scheme}} in Definition \ref{def3.2} is called
a quasi-log {\em{variety}} in \cite{ambro} (see also \cite{fujino-book}).
However, $X$ is not always reduced when $X_{-\infty}\ne \emptyset$
(see Example \ref{ex3.4} below).
Therefore, we will use the word {\em{quasi-log schemes}} in this paper.
Note that
$X$ is reduced when $X_{-\infty}=\emptyset$ (see Remark \ref{rem3.5} below).
\end{rem}
\begin{ex}[{\cite[Examples 4.3.4]{ambro}}]\label{ex3.4}
Let $X$ be an effective Cartier divisor
on a smooth variety $M$.
Assume that $Y$, the reduced part of $X$, is non-empty.
We put $\omega=(K_M+X)|_X$.
Let $X_{-\infty}$ be the union of the non-reduced components of $X$.
We put $K_Y+B_Y=(K_M+X)|_Y$.
Let $f:Y\to X$ be the closed embedding.
Then $$
\bigl( X, \omega, f:(Y, B_Y)\to X\bigr)
$$
is a quasi-log scheme.
Note that $X$ has non-reduced irreducible components if $X_{-\infty}\ne \emptyset$.
We also note that $f$ is not surjective
if $X_{-\infty}\ne \emptyset$.
\end{ex}
\begin{notation}\label{notation3.4}
In Definition \ref{def3.2}, we sometimes simply say that
$[X, \omega]$ is a {\em{quasi-log pair}}.
The subvarieties $C$
are called the {\em{qlc centers}} of $[X, \omega]$,
$X_{-\infty}$ is called the {\em{non-qlc locus}}
of $[X, \omega]$, and $f:(Y, B_Y)\to X$ is
called a {\em{quasi-log resolution}}
of $[X, \omega]$.
We sometimes use $\Nqlc(X, \omega)$ to denote
$X_{-\infty}$.
\end{notation}
For various applications, the notion of {\em{qlc pairs}}
is very useful.
\begin{defn}[Qlc pairs]\label{def3.5}
Let $[X, \omega]$ be a quasi-log pair.
We say that $[X, \omega]$ has only {\em{quasi-log canonical singularities}}
({\em{qlc singularities}}, for short) if $X_{-\infty}=\emptyset$.
Assume that $[X, \omega]$ is a quasi-log pair
with $X_{-\infty}=\emptyset$. Then
we simply say that $[X, \omega]$ is a
{\em{qlc pair}}.
\end{defn}
We give some important remarks on the non-qlc locus $X_{-\infty}$.
\begin{rem}\label{rem3.4}
We put $A=\lceil -(B_Y^{<1})\rceil$ and $N=\lfloor B_Y^{>1}\rfloor$.
Then we obtain the following big commutative diagram.
$$
\xymatrix{
0 \ar[r]& f_*\mathcal O_Y(A-N)\ar[r]&f_*\mathcal O_Y(A)\ar[r]&f_*\mathcal O_N(A) & \\
0\ar[r]&f_*\mathcal O_Y(-N)\ar[r]\ar[u]^{\alpha_1}&f_*\mathcal O_Y\ar[r]
\ar[u]^{\alpha_2}& f_*\mathcal O_N\ar[u]^{\alpha_3}& \\
0\ar[r]
&\mathcal I_{X_{-\infty}}\ar[r]\ar[u]^{\beta_1}&\mathcal O_X\ar[r]\ar[u]^{\beta_2}
&\mathcal
O_{X_{-\infty}}\ar[r]\ar[u]^{\beta_3}&0
}
$$
Note that $\alpha_i$ is a natural injection for every $i$.
By an easy diagram chasing,
$$
\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow} f_*\mathcal O_Y(A-N)
$$
factors through $f_*\mathcal O_Y(-N)$.
Then we obtain $\beta_1$ and $\beta_3$.
Since $\alpha_1$ is injective and $\alpha_1\circ \beta_1$ is an isomorphism,
$\alpha_1$ and $\beta_1$ are isomorphisms.
Therefore, we obtain that $f(Y)\cap X_{-\infty}=f(N)$.
Note that $f$ is not always surjective when $X_{-\infty}\ne \emptyset$.
It sometimes happens that $X_{-\infty}$ contains some irreducible
components of $X$. See, for example, Example \ref{ex3.4}.
\end{rem}
\begin{rem}[Semi-normality]\label{rem3.45}
By restricting the isomorphism
$$
\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow}
f_*\mathcal O_Y(A-N)
$$
to the open subset $U=X\setminus X_{-\infty}$,
we obtain
$$
\mathcal O_U\overset{\simeq}{\longrightarrow}f_*\mathcal O_{f^{-1}(U)}(A).
$$
This implies that
$$
\mathcal O_U\overset{\simeq}{\longrightarrow}f_*\mathcal O_{f^{-1}(U)}
$$
because $A$ is effective.
Therefore, $f:f^{-1}(U)\to U$ is surjective and has connected fibers.
Note that $f^{-1}(U)$ is a simple
normal crossing variety.
Thus, $U$ is semi-normal.
In particular, $U=X\setminus X_{-\infty}$ is reduced.
\end{rem}
\begin{rem}\label{rem3.5}
If the pair $[X, \omega]$ has only qlc singularities,
equivalently,
$X_{-\infty}=\emptyset$, then $X$ is reduced and
semi-normal by Remark \ref{rem3.45}.
Note that $f(Y)\cap
X_{-\infty}=\emptyset$ if and only if $B_Y=B_Y^{\leq 1}$, equivalently, $B_Y^{>1}=0$, by the descriptions in Remark \ref{rem3.4}.
\end{rem}
Let us state the main theorem of this paper
precisely.
\begin{thm}[Main theorem]\label{main-thm}
Let $[X, \omega]$ be a quasi-log pair as in Definition \ref{def3.2}.
Let $X'$ be a scheme and let $h:X'\to X$ be a
smooth quasi-projective morphism.
Then $[X', \omega']$, where $\omega'=h^*\omega\otimes \omega_{X'/X}$ with
$\omega_{X'/X}=\det \Omega^1_{X'/X}$, has a natural quasi-log
structure induced by $h$. More precisely, we have the following properties:
\begin{itemize}
\item[(i)] {\em{(Non-qlc locus)}}.~There
is a proper closed subscheme $X'_{-\infty}\subset X'$.
\item[(ii)] {\em{(Quasi-log resolution)}}.~There exists a proper morphism
$f':(Y', B_{Y'})\to X'$ from
a globally embedded simple normal crossing pair $(Y', B_{Y'})$
such that
$$f'^*\omega'\sim _{\mathbb R} K_{Y'}+B_{Y'}, $$
and the natural map
$$
\mathcal O_{X'}\to f'_*\mathcal O_{Y'}(\lceil -(B_{Y'}^{<1})\rceil)
$$
induces an isomorphism
$$
\mathcal I_{X'_{-\infty}}\overset{\simeq}{\longrightarrow} f'_*\mathcal O_{Y'}(\lceil
-(B_{Y'}^{<1})\rceil-\lfloor B_{Y'}^{>1}\rfloor)
$$
where $\mathcal I_{X'_{-\infty}}$ is the defining ideal sheaf
of $X'_{-\infty}$ and
$$\mathcal I_{X'_{-\infty}}=h^*\mathcal I_{X_{-\infty}}. $$
\item[(iii)] {\em{(Qlc centers)}}.~There is a finite collection $\{C'\}$ of reduced
and irreducible subschemes of $X'$ such that $\{C'\}=\{h^{-1}(C)\}$
and that the collection of subvarieties $\{C'\}$ coincides with
the images of $(Y', B_{Y'})$-strata that are not included in $X'_{-\infty}$.
\end{itemize}
\end{thm}
\begin{rem}
For the definition and basic properties of {\em{quasi-projective
morphisms}}, see \cite[Chapitre II \S 5.3.~Morphismes
quasi-projectifs]{ega}.
\end{rem}
We will prove Theorem \ref{main-thm} in Section \ref{sec-proof}
after we prepare various useful lemmas in Section \ref{sec4}.
We refer the reader to \cite{fujino-intro}
for some basic applications of the theory of quasi-log
schemes.
The adjunction and vanishing theorem (see, for example, \cite[Theorem 3.6]
{fujino-intro}) is a key result for qlc pairs.
\section{On quasi-log structures}\label{sec4}
The following proposition makes the
theory of quasi-log schemes more flexible.
It is a key result in this paper.
\begin{prop}[{\cite[Proposition 3.50]{fujino-book}}]\label{prop4.1}
Let $f:V\to W$ be a proper birational morphism between
smooth varieties and let $B_W$ be an
$\mathbb R$-divisor on $W$ such
that $\Supp B_W$ is a simple normal crossing divisor on $W$.
Assume that $$K_V+B_V=f^*(K_W+B_W)$$ and
that $\Supp B_V$ is a simple normal crossing divisor on $V$.
Then we have $$f_*\mathcal O_V(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor)\simeq
\mathcal O_W(\lceil -(B^{<1}_W)\rceil
-\lfloor B^{>1}_W\rfloor). $$
Furthermore,
let $S$ be a simple normal crossing divisor on $W$ such
that $S\subset \Supp B^{=1}_W$. Let $T$ be the union of the
irreducible
components of $B^{=1}_V$ that are mapped into $S$ by $f$.
Assume that
$\Supp f^{-1}_*B_W\cup \Exc (f)$ is a simple normal crossing
divisor on $V$.
Then we have
$$f_*\mathcal O_T(\lceil -(B^{<1}_T)\rceil
-\lfloor B^{>1}_T\rfloor)\simeq
\mathcal O_S(\lceil -(B^{<1}_S)\rceil
-\lfloor B^{>1}_S\rfloor),$$
where
$(K_V+B_V)|_T=K_T+B_T$ and $(K_W+B_W)|_S=K_S+B_S$.
\end{prop}
\begin{proof}
By $K_V+B_V=f^*(K_W+B_W)$, we
obtain
\begin{align*}
K_V=&f^*(K_W+B^{=1}_W+\{B_W\})\\&+f^*
(\lfloor B^{<1}_W\rfloor+\lfloor B^{>1}_W\rfloor)
-(\lfloor B^{<1}_V\rfloor+\lfloor B^{>1}_V\rfloor)
-B^{=1}_V-\{B_V\}.
\end{align*}
If $a(\nu, W, B^{=1}_W+\{B_W\})=-1$ for a prime divisor
$\nu$ over $W$, then
we can check that $a(\nu, W, B_W)=-1$ by using
\cite[Lemma 2.45]{km}.
Since $$f^*
(\lfloor B^{<1}_W\rfloor+\lfloor B^{>1}_W\rfloor)
-(\lfloor B^{<1}_V\rfloor+\lfloor B^{>1}_V\rfloor)$$ is
Cartier, we can easily see that
$$f^*(\lfloor B^{<1}_W\rfloor+\lfloor B^{>1}_W\rfloor)
=\lfloor B^{<1}_V\rfloor+\lfloor B^{>1}_V\rfloor+E,$$
where $E$ is an effective $f$-exceptional divisor.
Thus, we obtain
$$f_*\mathcal O_V(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor)\simeq
\mathcal O_W(\lceil -(B^{<1}_W)\rceil
-\lfloor B^{>1}_W\rfloor).$$
Next, we consider the short exact sequence:
\begin{align*}
0&\to \mathcal O_V(\lceil -(B^{<1}_V)\rceil-
\lfloor B^{>1}_V\rfloor-T)\\ &\to
\mathcal O_V(\lceil -(B^{<1}_V)\rceil-
\lfloor B^{>1}_V\rfloor)
\to \mathcal O_T(\lceil -(B^{<1}_T)\rceil-
\lfloor B^{>1}_T\rfloor)\to 0.
\end{align*}
Since $T=f^*S-F$, where $F$ is an effective $f$-exceptional
divisor, we can easily see that
$$f_*\mathcal O_V(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor-T)\simeq
\mathcal O_W(\lceil -(B^{<1}_W)\rceil
-\lfloor B^{>1}_W\rfloor-S).$$
We note that
\begin{align*}
(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor-T)-(K_V+\{B_V\}+B^{=1}_V-T)
\\ =-f^*(K_W+B_W).
\end{align*}
Therefore, every associated prime
of $R^1f_*\mathcal O_V(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor-T)$ is the generic point of the $f$-image
of some stratum of $(V, \{B_V\}+B^{=1}_V-T)$
(see, for example, \cite[Theorem 6.3 (i)]{fujino}).
\begin{claim}\label{claim-ne}
No strata of $(V, \{B_V\}+B^{=1}_V-T)$ are
mapped into $S$ by $f$.
\end{claim}
\begin{proof}[Proof of Claim]
Assume that there is a stratum $C$ of $(V, \{B_V\}+B^{=1}_V-T)$ such that
$f(C)\subset S$. Note that
$$\Supp f^*S\subset \Supp f^{-1}_*B_W\cup \Exc (f)$$ and
$$\Supp B^{=1}_V\subset \Supp f^{-1}_*B_W\cup \Exc (f). $$
Since $C$ is also a stratum of $(V, B^{=1}_V)$ and
$$C\subset \Supp f^*S, $$
there exists an irreducible component $G$ of $B^{=1}_V$ such that
$$C\subset G\subset \Supp f^*S.$$
Therefore, by the definition of $T$, $G$ is an
irreducible component of $T$ because $f(G)\subset S$ and $G$ is an
irreducible component of $B^{=1}_V$. So,
$C$ is not a stratum of $(V, \{B_V\}+B^{=1}_V-T)$. It is
a contradiction.
\end{proof}
On the other hand, $f(T)\subset S$. Therefore,
$$f_*\mathcal O_T(\lceil -(B^{<1}_T)\rceil
-\lfloor B^{>1}_T\rfloor)
\to R^1f_*\mathcal O_V(\lceil -(B^{<1}_V)\rceil
-\lfloor B^{>1}_V\rfloor-T)$$ is a zero map
by Claim \ref{claim-ne}. Thus, we obtain
$$f_*\mathcal O_T(\lceil -(B^{<1}_T)\rceil
-\lfloor B^{>1}_T\rfloor)\simeq
\mathcal O_S(\lceil -(B^{<1}_S)\rceil
-\lfloor B^{>1}_S\rfloor)$$
by the following commutative diagram.
$$
\xymatrix{
0\ar[d]&0\ar[d]&\\
\mathcal O_W(\lceil -(B_W^{<1})\rceil -\lfloor B_W^{>1}\rfloor
-S)\ar[d]\ar[r]^{\simeq}&
f_*\mathcal O_V(\lceil -(B_V^{<1})\rceil -\lfloor B_V^{>1}\rfloor
-T)\ar[d]\\
\mathcal O_W(\lceil -(B_W^{<1})\rceil -\lfloor B_W^{>1}\rfloor)
\ar[d]\ar[r]^{\simeq}
&
f_*\mathcal O_V(\lceil -(B_V^{<1})\rceil -\lfloor B_V^{>1}\rfloor)\ar[d]
\\
\mathcal O_S(\lceil -(B_S^{<1})\rceil -\lfloor B_S^{>1}\rfloor)\ar[d]\ar[r]
&
f_*\mathcal O_T(\lceil -(B_T^{<1})\rceil -\lfloor B_T^{>1}\rfloor)\ar[d]
\\
0&0
}
$$
We finish the proof.
\end{proof}
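\begin{rem}
The following toy example, which is not needed in the sequel, may help the
reader check the discrepancy calculation in the proof of
Proposition \ref{prop4.1}.
Let $f:V\to W$ be the blow-up of a smooth surface $W$ at a point $P$ and
let $B_W=\frac{1}{2}L_1+2L_2$, where $L_1$ and $L_2$ are smooth curves
intersecting transversally at $P$.
Since $K_V=f^*K_W+E$ and $f^*L_i=L'_i+E$, where $L'_i$ is the strict
transform of $L_i$ and $E$ is the exceptional curve, the relation
$K_V+B_V=f^*(K_W+B_W)$ gives
$$
B_V=\frac{1}{2}L'_1+2L'_2+\frac{3}{2}E.
$$
Then $\lfloor B^{<1}_W\rfloor=\lfloor B^{<1}_V\rfloor=0$,
$\lfloor B^{>1}_W\rfloor=2L_2$, and
$\lfloor B^{>1}_V\rfloor=2L'_2+E$, so that
$$
f^*(\lfloor B^{<1}_W\rfloor+\lfloor B^{>1}_W\rfloor)
=\lfloor B^{<1}_V\rfloor+\lfloor B^{>1}_V\rfloor+E
$$
with $E$ effective and $f$-exceptional, as in the proof.
\end{rem}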
It is easy to check:
\begin{prop}\label{prop4.2}
In Proposition \ref{prop4.1},
let $C'$ be an lc center of $(V, B_V)$ contained in $T$.
Then $f(C')$ is an lc center of $(W, B_W)$ contained in $S$ or $f(C')$ is contained
in $\Supp B_W^{>1}$.
Let $C$ be an lc center of $(W, B_W)$ contained in $S$. Then
there exists an lc center $C'$ of $(V, B_V)$ contained in $T$ such that
$f(C')=C$.
\end{prop}
The following important theorem is missing in \cite{fujino-book}.
\begin{thm}\label{thm4.2}
In Definition \ref{def3.2}, we may assume that
the ambient space $M$ of
the globally embedded simple normal crossing
pair $(Y, B_Y)$ is quasi-projective.
In particular, $Y$ is quasi-projective.
\end{thm}
\begin{proof}
In Definition \ref{def3.2}, we may assume that
$D+Y$ is an $\mathbb R$-divisor
on a smooth variety $M$ such that
$\Supp (D+Y)$ is a simple normal crossing divisor
on $M$, $D$ and $Y$ have no common irreducible components,
and $B_Y=D|_Y$ as in Definition \ref{def3.1}.
Let $g:M'\to M$ be a projective
birational morphism from a smooth quasi-projective
variety $M'$ with the following properties:
\begin{itemize}
\item[(i)] $K_{M'}+B_{M'}=g^*(K_M+D+Y)$,
\item[(ii)] $\Supp B_{M'}$ is a simple normal crossing
divisor on $M'$, and
\item[(iii)] $\Supp g_*^{-1}(D+Y)\cup \Exc (g)$ is also a
simple normal crossing divisor on $M'$.
\end{itemize}
Let $Y'$ be the union of the irreducible components
of $B_{M'}^{=1}$ that are mapped into $Y$ by $g$.
We put
$$
(K_{M'}+B_{M'})|_{Y'}=K_{Y'}+B_{Y'}.
$$
Then
$$
g_*\mathcal O_{Y'}(\lceil -(B_{Y'}^{<1})\rceil
-\lfloor B_{Y'}^{>1}\rfloor)\simeq
\mathcal O_{Y}(\lceil -(B_{Y}^{<1})\rceil
-\lfloor B_{Y}^{>1}\rfloor)
$$
by Proposition \ref{prop4.1}.
This implies that
$$\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow}
f_*g_*\mathcal O_{Y'}(\lceil -(B_{Y'}^{<1})\rceil
-\lfloor B_{Y'}^{>1}\rfloor). $$
By the construction,
$$
K_{Y'}+B_{Y'}=g^*(K_Y+B_Y)\sim _{\mathbb R}g^*f^*\omega.
$$
By Proposition \ref{prop4.2},
the collection of subvarieties
$\{C\}$ in Definition \ref{def3.2} coincides with
the image of $(Y', B_{Y'})$-strata that are not contained in $X_{-\infty}$.
Therefore, by replacing $M$ and $(Y, B_Y)$ with
$M'$ and $(Y', B_{Y'})$, we may assume that
the ambient space $M$ is quasi-projective.
\end{proof}
Theorem \ref{thm4.2} makes the theory of quasi-log schemes
flexible and useful.
\begin{lem}\label{lem4.3}
Let $(Y, B_Y)$ be a simple normal crossing
pair.
Let $V$ be a smooth
variety such that
$Y\subset V$.
Then we can construct a sequence of
blow-ups
$$V_k\to V_{k-1}\to\cdots\to V_0=V$$ with
the following properties.
\begin{itemize}
\item[$(1)$] $\sigma_{i+1}:V_{i+1}\to V_i$ is the
blow-up along a smooth irreducible
component of $\Supp B_{Y_i}$ for
every $i\geq 0$.
\item[$(2)$] We put $Y_0=Y$ and $B_{Y_0}=B_Y$.
Let $Y_{i+1}$ be the
strict transform
of $Y_i$ for every $i\geq 0$.
\item[$(3)$]
We define $K_{Y_{i+1}}+B_{Y_{i+1}}=\sigma^*_{i+1}
(K_{Y_i}+B_{Y_i})$ for
every $i\geq 0$.
\item[$(4)$] There exists
an $\mathbb R$-divisor $D$ on $V_k$ such
that $D|_{Y_k}=B_{Y_k}$.
\item[$(5)$] $\sigma_*\mathcal O_{Y_k}(\lceil
-(B^{<1}_{Y_k})\rceil-\lfloor B^{>1}_{Y_k}\rfloor)\simeq
\mathcal O_Y(\lceil-(B^{<1}_Y)\rceil-\lfloor B^{>1}_Y\rfloor)$,
where $\sigma: V_k\to V_{k-1}\to\cdots\to V_0=V$.
\end{itemize}
\end{lem}
\begin{proof}
It is sufficient to check (5).
All the other properties are obvious by the construction of the
sequence of blow-ups.
By an easy calculation of discrepancy coefficients similar to
the proof of Proposition \ref{prop4.1},
we can check that $$\sigma_{i+1*}\mathcal O_{Y_{i+1}}(\lceil -(B_{Y_{i+1}}^{<1})
\rceil
-\lfloor B_{Y_{i+1}}^{>1}\rfloor)\simeq
\mathcal O_{Y_{i}}(\lceil -(B_{Y_{i}}^{<1})\rceil
-\lfloor B_{Y_{i}}^{>1}\rfloor)$$ for every $i$.
This implies the desired isomorphism.
\end{proof}
We can easily check:
\begin{lem}\label{lem4.35}
In Lemma \ref{lem4.3},
let $C'$ be a stratum of $(Y_k, B_{Y_k})$.
Then $\sigma(C')$ is a stratum of $(Y, B_Y)$.
Let $C$ be a stratum of $(Y, B_Y)$.
Then there is a stratum $C'$ of $(Y_k, B_{Y_k})$ such that
$\sigma(C')=C$.
\end{lem}
The following lemma is easy but
very useful (cf.~\cite[Proposition 10.59]{kollar2}).
\begin{lem}\label{lem4.4}
Let $Y$ be a simple normal crossing
variety. Let $V$ be a smooth
quasi-projective variety such that
$Y\subset V$. Let $\{P_i\}$ be
any finite set of closed points of
$Y$.
Then we can find a quasi-projective
variety $W$ such that
$Y\subset W\subset V$, $\dim W=\dim Y+1$, and
$W$ is smooth
at $P_i$ for every $i$.
\end{lem}
\begin{proof}
Let $\overline V$ be the closure
of $V$ in a projective space and
let $\overline Y$ be the closure of $Y$ in $\overline V$.
We take a sufficiently large positive integer $d$ such that
the scheme theoretic base locus of
$|\mathcal O_{\overline V}(dH)\otimes \mathcal I_{\overline Y}|$ is
$\overline Y$ near $P_i$ for
every $i$, where $H$ is a very ample Cartier divisor
on $\overline V$ and $\mathcal I_{\overline Y}$
is the defining ideal sheaf of $\overline Y$ on $\overline V$.
By taking a complete intersection of $(\dim V-\dim Y-1)$ general members of
$|\mathcal O_{\overline V}(dH)\otimes \mathcal I_{\overline Y}|$, we obtain
$\overline W\supset \overline Y$ such that
$\overline W$ is smooth at $P_i$ for every $i$.
Note that we used the fact that $\overline Y$
has only hypersurface singularities near $P_i$ for every
$i$.
We put $W=\overline W \cap V$.
By the construction, $W\subset V$ and $\dim W=\dim Y+1$.
\end{proof}
Of course, we cannot always make
$W$ smooth in Lemma \ref{lem4.4}.
\begin{ex}[{\cite[Example 3.62]{fujino-book}}]\label{ex4.5}
Let $V\subset \mathbb P^5$ be the Segre
embedding of $\mathbb P^1\times \mathbb P^2$.
In this case, there are no smooth
hypersurfaces of $\mathbb P^5$ containing $V$.
We can check it as follows.
If there exists a smooth hypersurface $S$ such that
$V\subset S\subset \mathbb P^5$, then
$\rho (V)=\rho (S)=\rho (\mathbb P^5)=1$ by
the Lefschetz hyperplane theorem.
It is a contradiction because $\rho(V)=2$.
\end{ex}
By the above results, we can prove the final lemma in this section.
\begin{lem}\label{lem4.6}
Let $(Y, B_Y)$ be a simple normal
crossing
pair such that
$Y$ is quasi-projective. Then
there exist a globally embedded simple normal
crossing
pair $(Z, B_Z)$ and
a morphism
$\sigma: Z\to Y$ such that
$$
K_Z+B_Z=\sigma^*(K_Y+B_Y)
$$
and
$$\sigma _*\mathcal O_Z(\lceil -(B^{<1}_Z)\rceil
-\lfloor B^{>1}_Z\rfloor)
\simeq \mathcal O_Y(\lceil -(B^{<1}_Y)\rceil
-\lfloor B^{>1}_Y\rfloor). $$
Moreover, let $C'$ be a stratum of $(Z, B_Z)$.
Then $\sigma(C')$ is a stratum of $(Y, B_Y)$ or $\sigma(C')$ is
contained in $\Supp B_Y^{>1}$.
Let $C$ be a stratum of $(Y, B_Y)$.
Then there exists a stratum $C'$ of $(Z, B_Z)$ such
that $\sigma(C')=C$.
\end{lem}
\begin{proof}
Let $V$ be a smooth quasi-projective
variety such that
$Y\subset V$.
By Lemma \ref{lem4.3} and Lemma \ref{lem4.35},
we may assume
that there exists an $\mathbb R$-divisor
$D$ on $V$ such that
$D|_Y=B_Y$.
Then we apply Lemma \ref{lem4.4}.
We can find a quasi-projective
variety $W$ such that
$Y\subset W\subset V$, $\dim W=\dim Y+1$, and
$W$ is smooth at the generic point of any stratum
of $(Y, \Supp B_Y)$.
Of course, we can make $W\not\subset \Supp D$
(see the proof of Lemma \ref{lem4.4}).
We apply
Hironaka's resolution to $W$ and
use Szab\'o's resolution lemma (see, for example,
\cite[3.5 Resolution lemma]{fujino-what}).
More precisely, we take blow-ups outside $U$, where
$U$ is the largest Zariski open set
of $W$ such that
$(Y, B_Y)|_U$ is a globally embedded simple normal
crossing pair.
Then we obtain a desired globally embedded
simple normal crossing
pair $(Z, B_Z)$.
Indeed, we can check that
$(Z, B_Z)$ has the desired properties
by an easy calculation
of discrepancy coefficients
similar to the proof of Proposition \ref{prop4.1}.
\end{proof}
Therefore, we obtain the following statement, which is the
main result of this section.
\begin{thm}\label{thm4.7}
In Definition \ref{def3.2},
it is sufficient to assume that $(Y, B_Y)$ is
a quasi-projective $($not necessarily embedded$)$
simple normal crossing pair. \end{thm}
\begin{proof}
We only assume that $(Y, B_Y)$ is a simple normal crossing pair
in Definition \ref{def3.2}. We assume that $Y$ is quasi-projective.
Then we apply Lemma \ref{lem4.6} to $(Y, B_Y)$.
Let $\sigma:(Z, B_Z)\to (Y, B_Y)$ be as in Lemma \ref{lem4.6}.
Then
$$
\bigl(X, \omega, f\circ \sigma: (Z, B_Z)\to X\bigr)
$$ is a quasi-log scheme in the sense of Definition \ref{def3.2}.
\end{proof}
Proposition \ref{prop4.8} shows that
it is not so easy to apply Chow's lemma
directly to make $(Y, B_Y)$ quasi-projective in Definition \ref{def3.2}.
\begin{prop}[{\cite[Proposition 3.65]{fujino-book}}]\label{prop4.8}
There exists a complete simple normal crossing
variety $Y$ with the following property.
If $f:Z\to Y$ is a proper surjective morphism from
a simple normal crossing variety $Z$ such
that $f$ is an isomorphism
at the generic point of any stratum of $Z$, then
$Z$ is non-projective.
\end{prop}
\begin{proof}
We take a smooth complete non-projective toric variety
$X$.
We put $V=X\times \mathbb P^1$. Then
$V$ is a toric variety.
We consider $Y=V\setminus T$, where
$T$ is the big torus of $V$.
We will see that $Y$ has the desired property.
By the above construction, there is
an irreducible component $Y'$ of $Y$ that is
isomorphic to $X$.
Let $Z'$ be the irreducible component of $Z$ mapped
onto $Y'$ by $f$.
So, it is sufficient to see that
$Z'$ is not projective.
On $Y'\simeq X$, there is a torus invariant effective
one-cycle $C$ such that
$C$ is numerically trivial.
By the construction and the assumption, $g=f|_{Z'}:
Z'\to Y'\simeq X$ is birational and an isomorphism
over the generic point of any torus invariant curve
on $Y'\simeq X$.
We note that any torus invariant curve on $Y'\simeq
X$ is a stratum of $Y$.
Assume that $Z'$ is projective.
Then there is a very ample
effective divisor $A$ on $Z'$ such that
$A$ does not contain any irreducible components of
the inverse image of $C$.
Then $B=g_*A$ is an effective Cartier divisor
on $Y'\simeq X$ such that
$\Supp B$ contains no irreducible
components of $C$.
It is a contradiction because $\Supp B\cap C\ne \emptyset$
and $C$ is numerically trivial.
\end{proof}
Proposition \ref{prop4.8} is the main reason why we proved Theorem \ref{thm4.2}
for the proof of our main theorem:~Theorem \ref{main} and Theorem \ref{main-thm}.
\section{Proof of the main theorem}\label{sec-proof}
Now the proof of the main theorem (see Theorem \ref{main} and
Theorem \ref{main-thm}) is almost obvious.
\begin{proof}[Proof of Theorem \ref{main-thm}]
Let $f:(Y, B_Y)\to X$ be a quasi-log resolution as in Definition \ref{def3.2}.
By Theorem \ref{thm4.2},
we may assume that $Y$ is quasi-projective.
We consider the fiber product $Y'=Y\times _XX'$.
$$
\xymatrix{
Y' \ar[r]^{h'}\ar[d]_{f'}&Y\ar[d]^{f}\\
X' \ar[r]_{h}&X
}
$$
We put $B_{Y'}=h'^*B_Y$.
Then $(Y', B_{Y'})$ is a quasi-projective simple normal crossing pair because
$h$ is a smooth quasi-projective morphism and $(Y, B_Y)$ is a quasi-projective simple normal crossing
pair.
Since $K_Y+B_Y\sim _{\mathbb R}f^*\omega$,
we have
\begin{align*}
f'^*\omega'&=f'^*h^*\omega\otimes f'^*\omega_{X'/X}\\
&=h'^*f^*\omega\otimes \omega_{Y'/Y}\\
&\sim_{\mathbb R}h'^*(K_Y+B_Y)\otimes \omega_{Y'/Y}\\
&=K_{Y'}+B_{Y'}.
\end{align*}
By the flat base change theorem,
we have
\begin{align*}
h^*\mathcal I_{X_{-\infty}}&=h^*f_*\mathcal O_Y(\lceil -(B_Y^{<1})\rceil -\lfloor B_Y^{>1}\rfloor)\\
&\simeq f'_*h'^*\mathcal O_Y(\lceil -(B_Y^{<1})\rceil -\lfloor B_Y^{>1}\rfloor)\\
&\simeq f'_*\mathcal O_{Y'}(\lceil -(B_{Y'}^{<1})\rceil -\lfloor B_{Y'}^{>1}\rfloor).
\end{align*}
Finally, by Theorem \ref{thm4.7}, we may assume that
$(Y', B_{Y'})$ is a globally embedded simple normal crossing
pair. Therefore,
$$
\bigl(X', \omega', f':(Y', B_{Y'})\to X'\bigr)
$$
gives us the desired quasi-log structure.
\end{proof}
Theorem \ref{thm1.2} is a special case of Theorem \ref{main} and
Theorem \ref{main-thm}.
\begin{proof}[Proof of Theorem \ref{thm1.2}]
Note that $\Omega^1_{X'/X}=0$ because $h:X'\to X$ is a finite
\'etale morphism. Therefore, we have $\omega'=h^*\omega\otimes \det\Omega^1_{X'/X}\simeq h^*\omega$.
Thus Theorem \ref{thm1.2} follows from Theorem \ref{main} and Theorem \ref{main-thm}.
\end{proof}
\section{Applications to quasi-log canonical Fano varieties}\label{sec5}
Let us recall the vanishing theorem for projective qlc pairs.
It is a very special case of \cite[Theorem 3.39 (ii)]{fujino-book}.
For the details of various vanishing theorems for reducible
varieties, see \cite{fujino-vanishing} and \cite{fujino-inj}.
\begin{thm}[Vanishing theorem for qlc pairs]\label{thm5.1}
Let $[X, \omega]$ be a projective
qlc pair and let $L$ be a Cartier divisor on $X$ such that
$L-\omega$ is ample.
Then $H^i(X, \mathcal O_X(L))=0$ for every $i>0$.
\end{thm}
We give a proof of Theorem \ref{thm5.1} for the reader's convenience.
\begin{proof}
Let $f:(Y, B_Y)\to X$ be a quasi-log resolution as in Definition \ref{def3.2}.
Then
\begin{align*}
f^*(L-\omega)&\sim _{\mathbb R}f^*L-(K_Y+B_Y)\\
&=f^*L+\lceil -(B_Y^{<1})\rceil-(K_Y+\{B_Y\}+B_Y^{=1})
\end{align*}
because $B_Y=B_Y^{\leq 1}$ (see Remark \ref{rem3.5}).
Therefore, we have
$$
H^i(X, f_*\mathcal O_Y(f^*L+\lceil -(B_Y^{<1})\rceil))=0
$$
for every $i>0$ (see, for example, \cite[Theorem 1.1 (ii)]{fujino-vanishing}).
Note that
\begin{align*}
f_*\mathcal O_Y(f^*L+\lceil -(B_Y^{<1})\rceil)\simeq \mathcal O_X(L)\otimes f_*\mathcal O_Y(\lceil
-(B_Y^{<1})\rceil)\simeq \mathcal O_X(L)
\end{align*}
because $X_{-\infty}=\emptyset$ (see Remark \ref{rem3.5}).
This implies that
$$
H^i(X, \mathcal O_X(L))=0
$$
for every $i>0$.
\end{proof}
By combining Theorem \ref{thm5.1} with
Theorem \ref{main-thm},
we can easily check Corollary \ref{cor1.2}.
\begin{proof}[{Proof of Corollary \ref{cor1.2}}]
Without loss of generality, we may assume that
$X$ is connected.
Since $-\omega$ is ample, $H^i(X, \mathcal O_X)=0$ for
every $i>0$ by Theorem \ref{thm5.1}.
Therefore, we have $\chi(X, \mathcal O_X)=1$.
Let $f:\widetilde X\to X$ be a non-trivial
finite \'etale morphism from a connected scheme $\widetilde X$.
By Theorem \ref{main-thm}, the pair $[\widetilde X, \widetilde \omega]$,
where $\widetilde \omega=f^*\omega$,
is a qlc pair such that $-\widetilde \omega$ is ample.
Thus, $H^i(\widetilde X, \mathcal O_{\widetilde X})=0$ for
every $i>0$ by Theorem \ref{thm5.1} again.
This implies $\chi (\widetilde X, \mathcal O_{\widetilde X})=1$.
By the Riemann--Roch formula (see, for example, \cite[Example 18.3.9]{fulton}),
we have
$$
\chi (\widetilde X, \mathcal O_{\widetilde X})=\deg f\cdot \chi (X, \mathcal O_X).
$$
Therefore, we obtain $\deg f=1$.
This means that $X$ has no non-trivial
finite
\'etale covers, equivalently, the algebraic fundamental
group of $X$ is trivial.
\end{proof}
As a direct consequence of Corollary \ref{cor1.2} and
the main theorem of \cite{fujino-slc}, we have:
\begin{cor}\label{cor5.1}
Let $(X, \Delta)$ be a projective semi log canonical
pair such that $-(K_X+\Delta)$ is ample, that is,
$(X, \Delta)$ is a semi log canonical Fano variety.
Then the algebraic fundamental group of $X$ is trivial.
\end{cor}
\begin{proof}
By \cite{fujino-slc}, $[X, K_X+\Delta]$ has a natural quasi-log structure
with only qlc singularities.
Therefore, Corollary \ref{cor5.1} is a special case of Corollary \ref{cor1.2}.
\end{proof}
Note that a union of some slc strata of a semi log canonical Fano variety
is a quasi-log canonical Fano variety by Example \ref{ex6.3}.
\begin{ex}\label{ex6.3}
Let $(X, \Delta)$ be a connected
projective semi log canonical pair such that
$-(K_X+\Delta)$ is ample.
Let $W$ be a union of some slc strata of $(X, \Delta)$ with
the reduced scheme structure. Then
$[W, \omega]$, where $\omega=(K_X+\Delta)|_W$, is a projective
qlc pair such that $-\omega$ is ample by adjunction
(see \cite[Theorem 1.13]{fujino-slc}).
By \cite[Theorem 1.11]{fujino-slc}, we obtain $H^1(X, \mathcal I_W)=0$,
where $\mathcal I_W$ is the defining ideal sheaf of $W$ on $X$.
Therefore, we obtain $H^0(W, \mathcal O_W)=\mathbb C$
by the surjection
$$
\mathbb C=H^0(X, \mathcal O_X)\to H^0(W, \mathcal O_W).
$$
This implies that $W$ is connected.
\end{ex}
The author learned the following example from Tetsushi Ito.
\begin{ex}[Topological versus algebraic]\label{ex5.1}
We consider the Higman group $G$.
It is generated by $4$ elements $a$, $b$, $c$, $d$ with
the relations
\begin{align*}
a^{-1}ba=b^2, \quad b^{-1}cb=c^2, \quad c^{-1}dc=d^2, \quad d^{-1}ad=a^2.
\end{align*}
It is well known that $G$ has no non-trivial
finite quotients.
By \cite[Theorem 12.1]{simpson},
there is an irreducible projective variety $X$ such that
$\pi_1(X)\simeq G$.
In this case, the algebraic fundamental group of $X$, which is
the profinite completion of $\pi_1(X)$, is trivial.
\end{ex}
Example \ref{ex5.1} shows that
Conjecture \ref{conj1.3} does not directly follow from Corollary \ref{cor1.2}.
We give a non-trivial example of reducible semi log canonical
Fano varieties.
\begin{ex}\label{ex6.5}
We consider the lattice $N=\mathbb Z^3$.
Let $n$ be an integer with $n\geq 3$.
We consider a convex polyhedron $P$ in $N_{\mathbb R}=N\otimes
\mathbb R\simeq \mathbb R^3$ whose
vertices are $v_0, v_1, \cdots, v_n\in N$ such that
$v_0=(0, 0, -1)$ and that the third coordinates
of $v_1, \cdots, v_n$ are $1$.
Assume that
$P$ contains $(0, 0, 0)$ in its interior.
Then the cones spanned by $(0, 0, 0)$ and the faces of $P$ subdivide
$\mathbb R^3$ into
$n+1$ three-dimensional cones.
This subdivision of $\mathbb R^3$ corresponds to a complete toric
threefold $X$.
Then we have the following properties.
\begin{itemize}
\item[(1)] $-K_X$ is ample since $P$ is convex.
\item[(2)] $D_0\sim D_1+\cdots +D_n$ and $D_0$ is $\mathbb Q$-Cartier,
where $D_i$ is the torus invariant prime divisor
on $X$ associated to $v_i$ for every $i$.
\item[(3)] Let $x\in X$ be the torus invariant
closed point associated to the cone spanned by $v_1, v_2, \cdots, v_n$.
Then $X\setminus x$ is $\mathbb Q$-factorial, but
$X$ is not $\mathbb Q$-factorial when $n\geq 4$.
\item[(4)] We put $\Delta=D_1+\cdots +D_n$. Then
$(X, \Delta)$ is a log canonical
Fano threefold.
Note that $-(K_X+\Delta)\sim D_0$.
\item[(5)] We put $W=\lfloor \Delta\rfloor=\Delta$ and
$$
K_W+\Delta_W=(K_X+\Delta)|_W.
$$
Then $(W, \Delta_W)$ is a semi log canonical Fano surface.
Note that $W$ is Cohen--Macaulay since $W$ is
$\mathbb Q$-Cartier.
\end{itemize}
This $W$ shows that the number of
irreducible components of semi log canonical Fano surfaces is not
bounded.
\end{ex}
We refer the reader who can read Japanese
to \cite{fujino-fano} for some related topics and open problems
on singular Fano varieties.
\section{Simple connectedness of log canonical Fano varieties}\label{sec6}
In this section, we prove that a log canonical
Fano variety is always simply connected.
Theorem \ref{thm-fujita} is Fujita's answer to the author's question.
\begin{thm}[Kento Fujita]\label{thm-fujita}
Let $(X, \Delta)$ be a projective log canonical
pair such that $-(K_X+\Delta)$ is ample,
that is, $(X, \Delta)$ is a log canonical Fano variety.
Then $X$ is simply connected.
\end{thm}
\begin{proof}
First of all, we may assume that $X$ is connected.
Without loss of generality, we may assume that
$\Delta$ is a $\mathbb Q$-divisor by perturbing $\Delta$ slightly.
Then, by \cite[Corollary 1.3 (2)]{hacon-mckernan},
$X$ is rationally chain connected.
Since $X$ is normal and rationally chain connected, $\pi_1(X)$ is finite
(see, for example, \cite[4.13 Theorem]{kollar}).
Let $f: \widetilde X\to X$ be the universal cover of $X$.
Since $\pi_1(X)$ is finite, $f$ is finite and \'etale.
It is obvious that
$(\widetilde X, \widetilde \Delta)$ is log canonical and
$-(K_{\widetilde X}+\widetilde \Delta)$ is ample, where
$$
K_{\widetilde X}+\widetilde \Delta=f^*(K_X+\Delta).
$$
By \cite[Theorem 8.1]{fujino}, we have
$$
H^i(X, \mathcal O_X)=H^i(\widetilde X, \mathcal O_{\widetilde X})=0
$$
for every $i>0$. This implies
$$
\chi(X, \mathcal O_X)=\chi (\widetilde X, \mathcal O_{\widetilde X})=1.
$$
On the other hand,
$$
\chi(\widetilde X, \mathcal O_{\widetilde X})=\deg f\cdot \chi(X, \mathcal O_X)
$$
holds by the Riemann--Roch formula (see, for example, \cite[Example 18.3.9]{fulton}).
Thus we obtain $\deg f=1$.
Therefore, $X$ is simply connected.
\end{proof}
\begin{rem}
By \cite[Corollary 1.3 (2)]{hacon-mckernan}, we can easily see that a semi log canonical
Fano variety is rationally chain connected.
However, \cite[4.13 Theorem]{kollar} does not always
hold for
{\em{non-normal}} rationally chain connected varieties.
Note that a nodal rational curve $C$ is rationally chain connected
although $\pi_1(C)$ is infinite.
Therefore, the proof of Theorem \ref{thm-fujita} does not
work for semi log canonical Fano varieties.
\end{rem}
The following well-known example shows some subtleties
on log canonical Fano varieties.
\begin{ex}\label{ex7.2}
Let $C\subset \mathbb P^2$ be a smooth
cubic curve and let $X\subset \mathbb P^3$ be the cone over
$C\subset \mathbb P^2$.
Then $X$ is a Gorenstein log canonical surface such that
$-K_X$ is ample.
It is easy to see that $X$ is rationally chain connected and
that $\pi_1(X)=\{1\}$ by Theorem \ref{thm-fujita}.
Let $f:Y\to X$ be the blow-up at $P$, where $P$ is the vertex of $X$.
Then $$
K_Y+E=f^*K_X.
$$
The pair $(Y, E)$ is purely log terminal and $-(K_Y+E)$ is big and semi-ample.
Note that the exceptional curve $E$ is isomorphic to $C$ and that $Y$ is a $\mathbb P^1$-bundle
over $C$.
Therefore, it is easy to see that $Y$ is not rationally chain connected and
$\pi_1(Y)\ne \{1\}$.
\end{ex}
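\begin{rem}
Although it is not needed in Example \ref{ex7.2} itself, the coefficient
of $E$ can be checked by adjunction.
Restricting $K_Y+E=f^*K_X$ to $E$ gives
$$
K_E=(K_Y+E)|_E=(f^*K_X)|_E\equiv 0,
$$
which is consistent since $E\simeq C$ is an elliptic curve and
$f(E)=P$ is a point.
\end{rem}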
Example \ref{ex-whitney} is a non-trivial example
of {\em{irreducible non-normal}} semi log canonical Fano varieties.
\begin{ex}\label{ex-whitney}
We put $X=(x^2w-zy^2=0)\subset \mathbb P^3$.
Then $X$ is a Gorenstein Fano variety with only
semi log canonical singularities.
Note that $X$ is irreducible and non-normal.
By using the van Kampen theorem,
we see that $\pi_1(X)=\{1\}$.
\end{ex}
\section{Appendix:~Ambro's original definition}\label{sec7}
In this section, we prove that
our definition of quasi-log schemes (see Definition \ref{def3.2})
is equivalent to Ambro's original definition in \cite{ambro}.
First, let us recall the definition of {\em{normal crossing pairs}}.
We need it for Ambro's original definition of quasi-log schemes in \cite{ambro}.
\begin{defn}[Normal crossing pairs]\label{def8.1}
A variety $X$ has {\em{normal crossing}} singularities
if, for every closed point $x\in X$,
$$
\widehat{\mathcal O}_{X,x}\simeq \frac{\mathbb C[[x_0, \cdots, x_{N}]]}
{(x_0\cdots x_k)}
$$
for some $0\leq k \leq N$, where
$N=\dim X$.
Let $X$ be a normal crossing variety.
We say that a reduced divisor $D$ on $X$
is {\em{normal crossing}} if,
in the above notation,
we have
$$
\widehat{\mathcal O}_{D, x}\simeq \frac{\mathbb C[[x_0, \cdots,
x_{N}]]}{(x_0\cdots x_k, x_{i_1}\cdots x_{i_l})}
$$
for some $\{i_1, \cdots, i_l\}\subset\{k+1, \cdots, N\}$.
We say that the pair $(X, B)$ is a
{\em{normal crossing pair}} if the
following
conditions are satisfied.
\begin{itemize}
\item[(1)] $X$ is a normal crossing
variety, and
\item[(2)] $B$ is an $\mathbb R$-Cartier $\mathbb R$-divisor whose
support is normal crossing on $X$.
\end{itemize}
We say that a normal crossing
pair $(X, B)$ is {\em{embedded}}
if there exists a closed embedding
$\iota:X\to M$, where
$M$ is a smooth variety
of dimension $\dim X+1$.
We call $M$ the {\em{ambient space}} of $(X, B)$.
We put
$$K_{X^\nu}+\Theta=\nu^*(K_X+B), $$
where
$\nu:X^\nu\to X$ is the normalization of $X$.
A {\em{stratum}} of $(X, B)$
is an irreducible
component of $X$ or
the image of some lc center of $(X^\nu, \Theta)$ on $X$.
\end{defn}
It is obvious that a {\em{simple normal crossing
pair}} in Definition \ref{def-snc} is a {\em{normal crossing
pair}} in Definition \ref{def8.1}.
Note that the differences between normal crossing varieties and
simple normal crossing varieties sometimes
cause some subtle troubles (see, for example, \cite[3.6 Whitney umbrella]{fujino-what}).
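\begin{rem}
The following standard example illustrates the difference.
The nodal cubic
$$
C=\{zy^2=x^2(x+z)\}\subset \mathbb P^2
$$
is a normal crossing variety in the sense of Definition \ref{def8.1},
but it is not simple normal crossing because the two analytic branches
at the node belong to the same irreducible component of $C$.
\end{rem}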
Let us recall Ambro's original definition of
quasi-log schemes (see \cite{ambro}).
\begin{defn}[Quasi-log schemes]\label{def7.1}
A {\em{quasi-log scheme}} is a scheme $X$ endowed with an
$\mathbb R$-Cartier $\mathbb R$-divisor
(or $\mathbb R$-line bundle) $\omega$, a proper closed subscheme
$X_{-\infty}\subset X$, and a finite collection $\{C\}$ of reduced
and irreducible subschemes of $X$ such that there is a
proper morphism $f:(Y, B_Y)\to X$ from an {\em{embedded
normal crossing pair}}
satisfying the following properties:
\begin{itemize}
\item[(1)] $f^*\omega\sim_{\mathbb R}K_Y+B_Y$.
\item[(2)] The natural map
$\mathcal O_X
\to f_*\mathcal O_Y(\lceil -(B_Y^{<1})\rceil)$
induces an isomorphism
$$
\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow} f_*\mathcal O_Y(\lceil
-(B_Y^{<1})\rceil-\lfloor B_Y^{>1}\rfloor),
$$
where $\mathcal I_{X_{-\infty}}$ is the defining ideal sheaf of
$X_{-\infty}$.
\item[(3)] The collection of subvarieties $\{C\}$ coincides with the image
of $(Y, B_Y)$-strata that are not included in $X_{-\infty}$.
\end{itemize}
\end{defn}
In Definition \ref{def3.2}, we assume that
$(Y, B_Y)$ is a {\em{globally embedded simple normal crossing pair}}.
On the other hand, in Definition \ref{def7.1}, we only assume that
$(Y, B_Y)$ is an {\em{embedded normal crossing pair}}.
\begin{rem}\label{rem8.2}
As was pointed out in Remark \ref{rem3.3}, a quasi-log scheme
in Definition \ref{def7.1}
was called a quasi-log {\em{variety}} in \cite{ambro}.
\end{rem}
\begin{rem}
In \cite{ambro}, Ambro required that $(Y, B_Y)$ be {\em{embedded}} for technical reasons
and expected that this {\em{extra}} assumption is not necessary
(see \cite[Introduction]{ambro}).
On the other hand, the author thinks that
the existence of the ambient space $M$ of $(Y, B_Y)$ makes the theory
of quasi-log schemes more
flexible and more powerful.
Note that a key point of the main theorem in \cite{fujino-slc}
is to construct good ambient spaces
for quasi-projective semi log canonical pairs after some suitable
birational modifications.
\end{rem}
Lemma \ref{lem8.4} is essentially the same as Ambro's
{\em{embedded log transformations}} in \cite{ambro}.
\begin{lem}\label{lem8.4}
Let $(Y, B_Y)$ be an embedded normal crossing
pair and let $M$ be the ambient space of $(Y, B_Y)$.
Then there are a projective surjective morphism
$\sigma:M'\to M$ from a smooth
variety $M'$ such that
$\sigma$ is a composition of blow-ups and
a simple normal crossing
pair $(Z, B_Z)$ embedded into $M'$ with the following
properties.
\begin{itemize}
\item[(i)] $\sigma:Z\to Y$ is surjective
and $K_Z+B_Z=\sigma^*(K_Y+B_Y)$.
\item[(ii)] $\sigma_*\mathcal O_Z(\lceil -(B_Z^{<1})\rceil-\lfloor B_Z^{>1}\rfloor)
\simeq \mathcal O_Y(\lceil -(B_Y^{<1})\rceil -\lfloor B_Y^{>1}\rfloor)$.
\item[(iii)] Let $C'$ be a stratum of $(Z, B_Z)$.
Then $\sigma(C')$ is a stratum of $(Y, B_Y)$ or
is contained in $\Supp B_Y^{>1}$.
Let $C$ be a stratum of $(Y, B_Y)$.
Then there is a stratum $C'$ of $(Z, B_Z)$ such that
$\sigma(C')=C$.
\end{itemize}
\end{lem}
\begin{proof}
First, we can construct a sequence of blow-ups
$M_k\to M_{k-1}\to\cdots\to M_0=M$ with
the following properties.
\begin{itemize}
\item[(a)] $\sigma_{i+1}:M_{i+1}\to M_i$ is the blow-up
along a smooth stratum of $Y_i$ for every $i$.
\item[(b)] We put $Y_0=Y$, $B_{Y_0}=B_Y$, and
$Y_{i+1}=\sigma_{i+1}^{-1}(Y_i)$ with the reduced scheme structure.
\item[(c)] $Y_k$ is a simple normal crossing divisor on $M_k$.
\end{itemize}
We can check that $K_{Y_{i+1}}=\sigma_{i+1}^*K_{Y_i}$ for every $i$ by
the construction. We can directly check that
$$R^1\sigma_{i+1*}\mathcal O_{M_{i+1}}(-Y_{i+1})=0$$ and
$$\sigma_{i+1*}\mathcal O_{M_{i+1}}(-Y_{i+1})\simeq
\mathcal O_{M_i}(-Y_i)$$ for every $i$.
Therefore, by the diagram:
$$
\xymatrix{
0\ar[r]&\mathcal O_{M_i}(-Y_i)\ar[r]\ar[d]^{\simeq}
&\mathcal O_{M_i}\ar[r]\ar[d]^{\simeq}&\mathcal O_{Y_i} \ar[d]\ar[r]&0\ \\
0\ar[r]&\sigma_{i+1*}\mathcal O_{M_{i+1}}(-Y_{i+1})
\ar[r]&\sigma_{i+1*}\mathcal O_{M_{i+1}}
\ar[r]&\sigma_{i+1*}\mathcal O_{Y_{i+1}}\ar[r]&0,
}
$$
we obtain $\sigma_{i+1*}\mathcal O_{Y_{i+1}}\simeq \mathcal O_{Y_i}$ for every $i$.
We put $B_{Y_{i+1}}=\sigma_{i+1}^*B_{Y_i}$ for every $i$.
Then, by replacing $(Y, B_Y)$ and $M$ with
$(Y_k, B_{Y_k})$ and $M_k$, we may assume that
$Y$ is a simple normal crossing divisor on $M$.
Next, we can construct a sequence of blow-ups
$M_k\to M_{k-1}\to\cdots\to M_0=M$ with
the following properties.
\begin{itemize}
\item[(1)] $\sigma_{i+1}:M_{i+1}\to M_i$ is the blow-up
along a smooth stratum of $(Y_i, B_{Y_i})$ contained
in $\Supp B_{Y_i}$ for every $i$.
\item[(2)] We put $Y_0=Y$ and $B_{Y_0}=B_Y$.
Let $Y_{i+1}$ be the strict transform of $Y_i$ on $M_{i+1}$ for every $i$.
\item[(3)] We put $K_{Y_{i+1}}+B_{Y_{i+1}}=\sigma_{i+1}^*(K_{Y_i}+B_{Y_i})$ for
every $i$.
\item[(4)] $\Supp B_{Y_k}$ is a simple normal crossing divisor on $Y_k$.
\end{itemize}
Finally, by the construction, we can check the properties (i), (ii), and (iii)
for $\sigma:M_k\to M$ and $(Y_k, B_{Y_k})$ by an easy calculation of
discrepancy coefficients similar to the proof of Proposition \ref{prop4.1}.
\end{proof}
\begin{prop}\label{prop7.3}
Assume that
$(Y, B_Y)$ is an embedded simple normal crossing
pair in Definition \ref{def7.1}.
Let $M$ be the ambient space of $(Y, B_Y)$.
Then, by taking some sequence of blow-ups of $M$,
we may further assume that
$(Y, B_Y)$ is a globally embedded simple normal crossing
pair in the sense of Definition \ref{def7.1}.
\end{prop}
\begin{proof}
We can construct a sequence of blow-ups
$M_k\to M_{k-1}\to\cdots\to M_0=M$ with
the following properties.
\begin{itemize}
\item[(i)] $\sigma_{i+1}:M_{i+1}\to M_i$ is the
blow-up along a smooth irreducible component of $\Supp B_{Y_i}$ for
every $i\geq 0$.
\item[(ii)] We put $Y_0=Y$ and $B_{Y_0}=B_Y$.
Let $Y_{i+1}$ be the
strict transform of $Y_i$ for every $i\geq 0$.
\item[(iii)] We define $K_{Y_{i+1}}+B_{Y_{i+1}}=\sigma^*_{i+1}
(K_{Y_i}+B_{Y_i})$ for every $i\geq 0$.
\item[(iv)] There exists an $\mathbb R$-divisor $D$ on $M_k$ such that
$\Supp (Y_k+D)$ is a simple normal crossing divisor
on $M_k$ and that $D|_{Y_k}=B_{Y_k}$.
\item[(v)] $\sigma_*\mathcal O_{Y_k}(\lceil
-(B^{<1}_{Y_k})\rceil-\lfloor B^{>1}_{Y_k}\rfloor)\simeq
\mathcal O_Y(\lceil-(B^{<1}_Y)\rceil-\lfloor B^{>1}_Y\rfloor)$,
where $\sigma: M_k\to M_{k-1}\to\cdots\to M_0=M$.
\end{itemize}
Moreover, we have:
\begin{itemize}
\item[(vi)]
Let $C'$ be a stratum of $(Y_k, B_{Y_k})$.
Then $\sigma(C')$ is a stratum of $(Y, B_Y)$.
Let $C$ be a stratum of $(Y, B_Y)$.
Then there is a stratum $C'$ of $(Y_k, B_{Y_k})$ such that
$\sigma(C')=C$.
\end{itemize}
We note that we can directly check $$\sigma_{i+1*}
\mathcal O_{Y_{i+1}}(\lceil
-(B^{<1}_{Y_{i+1}})\rceil -\lfloor
B^{>1}_{Y_{i+1}}\rfloor)\simeq
\mathcal O_{Y_i}(\lceil -(B^{<1}_{Y_i})\rceil
-\lfloor B^{>1}_{Y_i}\rfloor)$$ for every $i\geq 0$
and the property (vi)
by an easy calculation of discrepancy coefficients similar to
the proof of Proposition \ref{prop4.1}.
Then we replace $M$ and $(Y, B_Y)$ with
$M_k$ and $(Y_k, B_{Y_k})$.
\end{proof}
Therefore, by Lemma \ref{lem8.4} and Proposition \ref{prop7.3},
Definition \ref{def3.2} is equivalent to Ambro's original
definition of quasi-log schemes:~Definition \ref{def7.1}.
\begin{thm}
Definition \ref{def3.2} is equivalent to Definition \ref{def7.1}.
\end{thm}
\section{The HEGRA IACT Array}
The HEGRA IACT system is located on the Canary Island of La Palma,
at the Observatorio del Roque de los Muchachos
of the Instituto Astrofisico de Canarias,
at a height of about 2200~m asl.
In its final form, the HEGRA IACT array will consist of
five identical telescopes with 8.5~m$^2$ mirror area,
5~m focal length, and 271-pixel cameras with a pixel
size of $0.25^\circ$ and a field of view of $4.3^\circ$
\footnote{The actual field of view is hexagonal; given
here is
the diameter of a circle with the equivalent area.}.
Four telescopes are arranged in the corners of a square with
roughly
100~m side length, the fifth telescope is located
in the center of the square.
The cameras are read out by Flash-ADCs. The two-level
trigger requires a coincidence of two neighboring pixels
to trigger a telescope, and a coincidence of at least two
telescope triggers to initiate the readout.
The first telescope with the final 271-pixel camera
(CT3)
came into operation late in 1995, and clear signals
for $\gamma$-ray emission from the Crab Nebula were
seen in the 1995 and 1996 single-telescope
data, see Fig.~\ref{ct3_crab}.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize9.0cm
\epsffile{alphact3.eps}}
\end{center}
\caption
{\em Crab signal observed in 40 h of on-source data taken
with the telescope CT3 in stand-alone mode, after cuts on
the image parameters {\em width}, {\em length},
{\em concentration}, and {\em distance}. The full line shows the
distribution in the angle $\alpha$ between the major axis
of the Cherenkov image and the direction to the image of the
source in the center of the camera. The shaded area indicates
the background obtained from an equivalent amount of off-source
observations.}
\label{ct3_crab}
\end{figure}
The data discussed in the following were taken in Dec. '96
and Jan. '97, shortly after the commissioning of the
next three telescopes CT4,5,6 and their electronics. During this period,
four of the five telescopes
(CT3,4,5,6)
were operational with the 271-pixel cameras. Three of
these used the final 120~MHz VME Flash-ADC readout; one
(CT4) was
still equipped with an interim 40~MHz VME Flash-ADC system.
The lower sampling frequency results in increased
integration times and somewhat higher sky noise for this
telescope. The data presented here were taken with a simple
two-pixel coincidence for the telescope trigger, without
a next-neighbor requirement.
The fifth system telescope (CT2) was still equipped
with an old, coarser 61-pixel camera and CAMAC ADCs. It was
operated independently from the other telescopes and is
not used in the analysis discussed in the following. The same is true for
the smaller prototype telescope CT1.
Use of four rather than five telescopes
reduces the rate by 20\% and 30\% for two-telescope
and three-telescope triggers, respectively.
In addition, for the data set used in the analysis
presented here, the central telescope (CT3) was always
required among the triggering telescopes, resulting
in an additional rate loss of about
25\% compared to the normal situation,
where any two telescopes would trigger the IACT array.
Given the short time since the data was taken, and the
emphasis on continued installation and tests of the
hardware, the software algorithms for data analysis
as well as our understanding of the properties of the
system and its optimization are still imperfect.
Also, geometric calibration data were not yet
available in sufficient quantity and quality for all
telescopes.
Given these various caveats, the following performance
figures, though quite remarkable,
should not be taken to reflect the ultimate
performance of such an
array; one should expect detection rates to increase
by a factor of 1.5 to 2, with a corresponding increase
in sensitivity.
\section{Triggering of the HEGRA IACT Array}
The individual HEGRA telescopes are triggered if
two pixels exceed a preset threshold, within a
coincidence window of about 12~ns. Presently,
any two pixels may trigger the
telescope. In the future, a hardware next-neighbor
unit will require that at least two trigger pixels
are direct neighbors, to further reduce the rate
of random coincidences. In earlier single-telescope
observations, the next-neighbor check
was emulated in software, providing a
trigger verification after about 100~$\mu$s. This
delay is however not compatible with the timing
of the multi-telescope coincidence.
The individual telescope trigger signals are routed to
a central station, where they are delayed
in a custom designed VME unit, to compensate differences
in timing which arise since the Cherenkov light front
does not reach all telescopes at the same time. The
delay values are updated continuously as the
source moves across the sky.
To trigger
the readout of the IACT array, at least two
telescopes have to trigger within a time
window of 70~ns; this window is large compared
to the timing jitter between telescope signals,
of order 10~ns. An inter-telescope coincidence
causes a global trigger signal and an event number to
be transmitted back to all telescopes, including
those which did not trigger themselves. After
such a global trigger, the Flash-ADCs stop
recording signals, the local VME CPU at each
telescope locates
the appropriate time slice in the Flash-ADC memory,
and reads out the data, with on-line sparsification.
Data are buffered and multi-event packets
are sent via Ethernet to a central event builder
station.
Fig.~\ref{fig_rates} shows the trigger rates for
one single telescope (CT3) and for a
coincidence of two given telescopes (CT3,4), as a function
of the pixel trigger threshold in mV. One photoelectron
corresponds roughly to 1.2~mV. For low thresholds,
the single-telescope rate shows a very steep
dependence on the pixel trigger threshold. In
this regime, triggers are almost entirely caused
by random coincidences due to night-sky photons;
the observed rate is fully consistent with the
expected random rate calculated from the measured
single-pixel rates and the width of the coincidence
window.
Only for thresholds above 25~mV does the dependence
flatten out, and genuine air-shower triggers
dominate. In contrast, the two-telescope trigger rate
is over the entire range, down to pixel thresholds of
8~mV, governed by air showers, and exhibits a
slope similar to the integral cosmic-ray
spectrum. For such a trigger, the choice of the
pixel thresholds is no longer determined purely by
the onset of noise triggers, but rather by other
considerations such as the minimal light yield
required to generate a usable image.
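The expected random rate quoted above follows from the standard accidental-coincidence estimate for an any-two-of-$N$-pixel trigger. A minimal sketch, with purely illustrative numbers (the measured single-pixel noise rates are not given in the text):

```python
from math import comb

def accidental_rate(n_pixels, nu_hz, tau_s):
    """Expected rate of random 2-pixel coincidences for an
    'any two of N pixels' trigger: each of the C(N,2) pixel pairs
    fires accidentally at roughly 2*nu^2*tau (standard two-fold
    coincidence formula, equal single-pixel noise rates assumed)."""
    return comb(n_pixels, 2) * 2.0 * nu_hz ** 2 * tau_s

# Illustrative only: 271 pixels, 1 kHz noise rate per pixel,
# 12 ns coincidence window.
rate_hz = accidental_rate(271, 1.0e3, 12e-9)
```

The very steep threshold dependence of the single-telescope noise rate reflects the rapid variation of the single-pixel rate $\nu$ with threshold, which enters quadratically here.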
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize10.0cm
\epsffile{rates.eps}}
\end{center}
\caption
{\em Rate of single-telescope (CT3) triggers and of
two-telescope (CT3,4) coincidences, as a function of
the pixel trigger threshold applied to the camera
pixels. One photoelectron is roughly equivalent
to 1.2~mV. The dashed line shows the calculated rate of
random single-telescope triggers. The
rate of random two-telescope coincidences is small
compared to the measured rates.}
\label{fig_rates}
\end{figure}
For any given pixel threshold, the
rate of two-telescope (CT3,4) coincidences is smaller than
the rate of good single-telescope (CT3) triggers. The reason
is twofold: on the one hand the effective area of
a two-telescope coincidence is smaller than that of
a single telescope, since both telescopes have
to be located within the light pool of an air shower.
On the other hand, the coincidence requirement
provides already at the trigger level
a suppression of hadron-induced
showers as compared to $\gamma$-ray showers.
\begin{table} [tb]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Mode & Pixel threshold & Energy threshold \\
& (photoelectrons) & (TeV) \\
\hline \hline
Single telescope, & $\approx 22$ & $\approx 1.0$\\
~~~any 2 pixels & & \\ \hline
Single telescope, & $\approx 15$ & $\approx 0.7$ \\
~~~two neighbor pixels & & \\ \hline
4-Telescope system, & $\approx 8$ & $\approx 0.5$ \\
~~~at least two telescopes, & & \\
~~~any two pixels per telescope & & \\
\hline
\end{tabular}
\vspace{0.5cm}
\caption{\em Experimentally determined
pixel thresholds and resulting
energy thresholds for different trigger modes.
The pixel thresholds are chosen such that the
rate of random 2-pixel coincidences is small
(O(10\%)) compared
to the rate of triggers caused by air showers.
The energy thresholds are derived using Monte
Carlo simulations; the thresholds refer to
$\gamma$-rays and are defined as the energy
with the peak differential counting rate for
a source with a spectral index similar to the
cosmic-ray spectrum, for vertical incidence.}
\label{tab_thresh}
\end{center}
\end{table}
Table~\ref{tab_thresh} summarizes the typical
pixel trigger thresholds achieved for the
different modes of operation.
To relate these pixel thresholds to energy
thresholds, a Monte Carlo simulation of the
system response is required.
Since the detailed
Monte Carlo simulation of the system is still under
development, a
fast simulation tool using parametrized trigger efficiencies
was used \cite{fast_sim}. Fig.~\ref{fig_drates} shows the
resulting differential detection rates for four cases:
(1) the 5-telescope IACT array,
(2) the present 4-telescope
array with CT3 required in the analysis, in either case
with a 2-telescope trigger and a pixel threshold of 8 photoelectrons,
(3) the single telescope CT3 with a next-neighbor trigger
and a pixel threshold of 15 photoelectrons, and (4) CT3
with a 22 photoelectron threshold. While the fast simulation
does not reach the accuracy of a full Monte Carlo simulation,
it is expected to reproduce the relative performance rather
well.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize10.0cm
\epsffile{drates.eps}}
\end{center}
\caption
{\em
Simulated differential detection rates for a $\gamma$-ray source
with a differential spectral index of 2.7 and a flux of
$10^{-11}$/cm$^2$s. Curves: (1) full IACT system, 8-photoelectron
threshold, (2) IACT system as used in the present analysis,
8-p.e. threshold,
(3) single telescope, 15-p.e. threshold, (4) single telescope,
22-p.e. threshold.}
\label{fig_drates}
\end{figure}
One notices that at large energies the effective area and hence
the differential detection rate of the system is not much larger
than that of a single telescope; the gain in total detection rate
results mainly from the lower threshold.
The resulting energy thresholds are included in
Table~\ref{tab_thresh}; they should be reliable within 20\%.
The threshold for the
IACT system is about 1/2 of that of an
equivalent (without next-neighbor trigger) single telescope.
The 8 photoelectron threshold, which was used for all system
data discussed here, results in a global trigger rate
of 12~Hz.
Many other aspects of the multi-telescope trigger
coincidence have been investigated, such as
the dependence of rates on zenith angle or
their dependence on the relative pointing
of the individual telescopes; more details
will be published elsewhere.
\section{Calibration and Data Analysis}
The analysis of data generated by the HEGRA IACT
system builds upon the well-known techniques
for image analysis of single-telescope data,
augmented by new developments in certain areas,
such as an improved
geometry calibration of the telescopes,
the procedures to extract the
amplitude information from the Flash-ADC data,
and the techniques for stereoscopic reconstruction.
The stereoscopic reconstruction of air showers,
with its resolution in the mrad-range, is very
sensitive to deviations in the pointing of the
telescopes. In the past, pointing deviations of
up to one pixel size were observed
for some of the HEGRA telescopes, which is
absolutely unacceptable for a proper
stereoscopic reconstruction.
Alignment procedures developed earlier within
HEGRA were extended and refined to cope with this
problem. A first step after the installation of
a telescope is the alignment of the vertical axis
and a survey of the telescope by means of a
theodolite. The final alignment is then based on
so-called ``point runs''. In these runs, the
image of a star is scanned in fine lines
across one or more pixels, and the DC pixel
currents are measured. From those scans, both the
center of gravity of the image - and hence any
pointing deviations - and the point spread function
can be determined. To provide a complete map of
pointing deviations, the procedure is repeated
with many stars distributed over the entire
sky. Figs.~\ref{fig_pointing}(a),(b) illustrate
the differences between the nominal
positions of the stars and their actual images
measured using the pixel currents,
as a function of altitude and azimuth.
These data are then fit to a model of
pointing deviations including atmospheric
refraction and, as free parameters, the
bending of the camera masts (proportional
to $\cos(altitude)$), an offset of the camera
from the optical axis, a deviation of the azimuth
axis from vertical alignment, offsets in the
shaft encoder values, and 2nd harmonics for
the shaft encoder response, caused e.g. by
eccentricity of the axes or gears. This model
provides a consistent description of the
pointing data; after corrections, all stars
appear at their nominal positions, with an
rms deviation of less than $0.005^\circ$
(Fig.~\ref{fig_pointing}(c)). Exact pointing
data exist so far only for 3 of the 4 telescopes
used here.
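The pointing-model terms listed above can be illustrated with a toy parametrization. All parameter names and values below are invented for illustration; the actual model also includes atmospheric refraction, the camera offset, and the azimuth-axis tilt, and its parameters are fitted to the point-run data:

```python
import math

def pointing_model(az_deg, alt_deg, p):
    """Toy pointing-deviation model with terms named in the text:
    mast bending proportional to cos(altitude), constant encoder
    offsets, and 2nd harmonics of the azimuth encoder angle.
    Returns (azimuth deviation, altitude deviation) in degrees."""
    az = math.radians(az_deg)
    d_alt = (p["bend"] * math.cos(math.radians(alt_deg))  # mast bending
             + p["alt_offset"]                            # encoder offset
             + p["alt_harm"] * math.sin(2 * az))          # 2nd harmonic
    d_az = p["az_offset"] + p["az_harm"] * math.cos(2 * az)
    return d_az, d_alt
```

A least-squares fit of such a model to the star-scan deviations yields the corrections applied during tracking; the bending term vanishes at the zenith and is maximal at the horizon.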
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize14.0cm
\epsffile{point2.eps}}
\end{center}
\caption
{\em (a) Pointing deviations of one of the HEGRA telescopes
as a function of azimuth and altitude, determined using
images of bright stars. The (enlarged) deviations
are indicated as vectors. (b) Measured deviations from
the nominal pointing for an ensemble of stars
(in the camera coordinate system). For comparison,
the outline of a 0.25$^\circ$ camera pixel is superimposed.
(c) Deviations after correction using the model discussed in the
text.}
\label{fig_pointing}
\end{figure}
While the pointing calibration is repeated
rather infrequently - typically a few times
per year - calibration data relevant for
camera and electronics operation are collected
regularly during data taking.
In particular, runs with a scintillator
light source pulsed by a laser are used to
flat-field the camera, and to calibrate pixel timing.
For the analysis of the Flash-ADC data, the
following procedure emerged: pedestal values are
continuously tracked and updated based on Flash-ADC samples
before and after the shower signal. For signals
which do not saturate the Flash-ADC dynamic range,
the signal is digitally deconvoluted. The
deconvolution reverses the effect of signal
shapers in front of the Flash-ADC,
which are required to suppress signal components above the
Nyquist frequency. The deconvolution
shortens the signal and hence the effective
integration time (Fig.~\ref{fig_fadc}), thereby
reducing the influence of sky noise. The deconvolution
assumes a linear response and cannot be applied
to signals saturating the dynamic range of the
8-bit Flash-ADC. For overflow pulses, the area under
the truncated pulse (i.e., the sum of the Flash-ADC
samples) is still a monotonic function
of the amplitude. A look-up table is used to
convert the area into an amplitude, extending
the linear dynamic range by a factor of more than 2,
up to about 500 photoelectrons per pixel.
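The area-to-amplitude look-up table can be sketched as follows. The pulse shape, saturation level, and sampling below are invented stand-ins; the point is only that the truncated area remains monotonic in the true amplitude and can therefore be inverted:

```python
import numpy as np

def build_lut(pulse_shape, saturation, amplitudes):
    """For each candidate true amplitude, clip the scaled pulse at
    the ADC saturation level and record the summed (truncated)
    area; the area stays a monotonic function of amplitude."""
    areas = np.array([np.minimum(a * pulse_shape, saturation).sum()
                      for a in amplitudes])
    return areas, np.asarray(amplitudes, dtype=float)

def area_to_amplitude(area, lut_areas, lut_amps):
    """Invert the monotonic area->amplitude relation by
    interpolation in the look-up table."""
    return np.interp(area, lut_areas, lut_amps)
```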
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize8cm
\epsffile{120mhz_puls.eps}}
\end{center}
\caption
{\em PMT signal before and after
digital deconvolution of the Flash-ADC data,
averaged over many pulses.}
\label{fig_fadc}
\end{figure}
A conversion factor relating the ADC signal
to the number of photoelectrons has been derived
with two independent techniques, once using the
width of the laser calibration signal (which is
essentially governed by photoelectron statistics), and once
using a pulsed light source shining a defined
amount of light onto the mirror. Both techniques
are in good agreement and give about 1 ADC count
per photoelectron.
Images are finally analyzed in terms of the usual
2nd-moment parameters, providing the center of gravity of
the images, their orientation, and their width
and length. A two-level tail cut is used to
select pixels belonging to the image.
To be included, a pixel has either to show a signal equivalent
to 6 photoelectrons, or to have at least 3 photoelectrons
and a neighbor pixel with at least 6 photoelectrons.
Images are only accepted if they contain at least one
additional pixel adjacent to one of the trigger pixels.
The shower axis and core location are then derived
geometrically
(see, e.g., \cite{kohnle_paper,ulrichphd,ulrich_padua}, and
\cite{whipple_source}).
Both the image of the source and the
point where the shower axis intersects the plane
of the mirror dish
have to lie on the symmetry axis of the image. The
shower direction is hence derived by superimposing the
images and intersecting their axes
(Fig.~\ref{fig_geom}(a)). The core location
is obtained by intersecting the image axes emerging from the
telescope locations (Fig.~\ref{fig_geom}(b))
(assuming the mirror planes of
all telescopes coincide; the extension to the general
case is straightforward). At present, the
intersection points are calculated for each pair
of telescopes, and then averaged over all pairs,
weighted according to the angle between the views.
Advanced procedures and fits, which properly track
the errors of the image parameters,
have been developed, but have not yet been
applied to the data.
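The pairwise intersection-and-average scheme can be sketched in two dimensions. The representation (point plus direction per image axis) and the $|\sin|$ weighting are simplifying assumptions for illustration, not the exact implementation:

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    """Intersection of two 2-D lines, each given as a point on the
    line plus a direction vector: solve p1 + t*d1 = p2 + s*d2."""
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    rhs = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    t, _ = np.linalg.solve(a, rhs)
    return np.asarray(p1, dtype=float) + t * np.asarray(d1, dtype=float)

def weighted_intersection(axes):
    """Average the pairwise intersection points of all image axes,
    weighted by |sin| of the stereo angle between the two axes, so
    nearly parallel (ill-conditioned) pairs contribute little."""
    points, weights = [], []
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            (p1, d1), (p2, d2) = axes[i], axes[j]
            u1 = np.asarray(d1, dtype=float); u1 /= np.linalg.norm(u1)
            u2 = np.asarray(d2, dtype=float); u2 /= np.linalg.norm(u2)
            w = abs(u1[0] * u2[1] - u1[1] * u2[0])  # |sin(angle)|
            if w < 1e-6:
                continue  # skip (anti)parallel axes
            points.append(intersect(p1, d1, p2, d2))
            weights.append(w)
    return np.average(points, axis=0, weights=weights)
```

The same construction serves both for the shower direction (axes in the common camera plane) and for the core location (axes emerging from the telescope positions on the ground).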
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize6cm
\epsffile{geomab.eps}
}
\end{center}
\caption
{\em Reconstruction of (a) the shower direction and of (b) the
core location from the images observed in the cameras.}
\label{fig_geom}
\end{figure}
First results are also available which use the
timing information in the Flash-ADC data for the
stereoscopic reconstruction, in addition to
the amplitude information; this topic is, however,
beyond the scope of this brief report.
\section{Performance for Crab Observations}
The analysis presented in the following is based on
about 11.7 (on-source) hours of observations of the Crab Nebula
with the four IACTs CT3,4,5,6.
Zenith angles ranged from $5^\circ$ to $40^\circ$,
with a mean value of $20.2^\circ$.
About half of these
data were taken in ON-OFF mode, with equal shares
of time on-source and off-source. For the other half, the Crab
Nebula was displaced by $0.5^\circ$ from the optical
axis, and a region displaced symmetrically by the same amount
in the opposite direction is used to estimate backgrounds.
Fig.~\ref{fig_xy} shows the distribution in the direction
of reconstructed shower axes, without any additional
cuts. An excess from the direction of the Crab Nebula
is already visible in this raw data.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize10cm
\epsffile{uncut2d.eps}
}
\end{center}
\caption
{\em Distribution of the reconstructed shower
directions relative to the direction to the
Crab Nebula, for events where at least
two telescopes triggered, for 11.7 hours of
on-source observations, before cuts.}
\label{fig_xy}
\end{figure}
For a quantitative
analysis, we plot the distribution in the angle $\theta$
between
the shower axis and the direction to the Crab Nebula; shown in
Fig.~\ref{fig_theta} is $dN/d\theta^2$. Since $\theta$
is an angle in space, one expects for the uniform
background from charged cosmic rays a flat distribution,
\begin{equation}
\left( {dN \over d\theta^2} \right)_{CR}
= const.~~~,
\end{equation}
whereas a point source at the origin generates an excess
\begin{equation}
\left( {dN \over d\theta^2} \right)_{Source}
\sim e^{-\theta^2/2 \sigma^2}~~~~,
\end{equation}
with the (projected) angular resolution $\sigma$.
Fig.~\ref{fig_theta} illustrates that the distribution
of events is indeed flat over a wide range in
$\theta^2$, with the signal confined to a narrow
spike at $\theta \approx 0$.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize11cm
\epsffile{uncut1d.eps}
}
\end{center}
\caption
{\em Line: distribution $dN/d\theta^2$ of events in the
square of the angle $\theta$ relative to the direction to
the Crab Nebula. The shaded histogram shows
the background derived from off-source runs and
analyses with a shifted source position (see text).}
\label{fig_theta}
\end{figure}
It is instructive to compare these (uncut) data with typical data reported
for conventional single telescopes - the
equivalent plot is the distribution in the angle $\alpha$
between the image axis and the direction to the camera
center, i.e., the source (see Fig.~\ref{ct3_crab}).
In terms of the signal to noise ratio, the uncut system data
are not too far from the single-telescope data where tight
image cuts have been applied; in the raw single-telescope
data the Crab signal is barely visible.
The advantages of the stereoscopic technique are related to the fact
that the signal is confined to a small region (about
1\%) of the available two-dimensional phase space, whereas in the typical
$\alpha$-distribution the signals extend over about
10\% of the $\alpha$-range. In particular,
\begin{itemize}
\item due to the concentration of the signal events,
a highly significant peak is seen already
in the 11.7~h of raw system
data
\item due to the flat distribution of background
in two dimensions,
the background under
the signal can be estimated reliably
even without dedicated off-source runs.
\end{itemize}
Apart from the information concerning the shower
direction - for system data, contained directly in
the direction reconstructed event-by-event, for
single-telescope data contained e.g. in the variable
$\alpha$ and the distance $d$ between the image and
the camera center - image shapes are used to reject
hadronic showers. Relevant image parameters are
the width $w$ of the images, their length $l$,
the degree of concentration of the image, etc.
Figure~\ref{fig_meanw} shows, e.g., the
distribution in the mean width $\bar w$
- averaged over all telescopes which triggered in a given
event - for showers with directions within 0.13$^\circ$
from the Crab. As expected for $\gamma$-ray showers, the
excess events have small values of $\bar w$.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize10cm
\epsffile{meanw.eps}
}
\end{center}
\caption
{\em
Line: distribution of the
mean width $\bar w$, averaged
over all telescopes which triggered in a given event,
for showers from the direction of
the Crab Nebula. The shaded histogram shows
the background derived from off-source runs and
analyses with a shifted source position.}
\label{fig_meanw}
\end{figure}
Given that the different image parameters are strongly
correlated, and dependent on the amplitude of an
image, the optimization of cuts is non-trivial
even in the case of a single telescope. The
number of parameters increases with the number
of telescopes, rendering the problem even more
complex. A simple cut e.g. on the widths of all
images is a rather inefficient procedure, since
some images - those with large amplitudes and
optimal distances around 100~m from the shower
core - provide a large separation power, whereas
faint images differ little between hadronic and
photon-induced showers and a cut merely reduces
the efficiency for photons, without a corresponding
gain in signal-to-noise. Therefore, a different
procedure was followed: Monte-Carlo events were
used to parametrize the distribution in width $w$
and length $l$ as a function of the nature of the
incident particle, of the amplitude $A$ of the image,
of the distance $D$ to the shower core, and of the
zenith angle $z$ under which the shower was observed.
Lacking sufficient Monte Carlo statistics, the
joint distribution was factored into a $w$-dependent
and a $l$-dependent term, neglecting their correlation:
\begin{equation}
\rho(w,l|A,D,z) = \rho_w(w|A,D,z) \rho_l(l|A,D,z)~~~.
\end{equation}
A `probability' $p$ for a given hypothesis - $\gamma$-ray
or cosmic-ray shower - was calculated by multiplying
the terms for the individual telescope images $i$,
with their image parameters $w_i$ and $l_i$ and the
image amplitude $A_i$
\begin{equation}
p = \prod_i \rho_w(w_i|A_i,D_i,z) \rho_l(l_i|A_i,D_i,z)~~~~.
\end{equation}
The distance $D_i$ between telescope $i$ and the shower
core is calculated using the stereoscopic reconstruction of the
core location.
Events were then selected by requiring that the `probability'
for the $\gamma$-ray hypothesis exceeds the `probability'
for the cosmic-ray hypothesis by a certain factor.
This procedure avoids the potentially rather inefficient
cuts on individual image parameters.
Of course, rather
than using Monte-Carlo cosmic-ray events, one can use
real events from off-source runs; both choices gave
similar performance.
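The `probability'-product selection can be illustrated with a toy version in which fixed Gaussians stand in for the parametrized, amplitude-, distance-, and zenith-angle-dependent Monte-Carlo densities. All numerical parameters below are invented:

```python
import math

def gauss(x, mean, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hypothesis_prob(images, params):
    """Product of per-telescope width/length densities,
    p = prod_i rho_w(w_i) * rho_l(l_i). In the real analysis the
    means and sigmas depend on image amplitude, core distance and
    zenith angle; here they are fixed toy numbers."""
    p = 1.0
    for w, l in images:
        p *= gauss(w, params["w_mean"], params["w_sigma"])
        p *= gauss(l, params["l_mean"], params["l_sigma"])
    return p

# Toy parametrizations (degrees); gamma-ray images are narrower.
GAMMA = {"w_mean": 0.10, "w_sigma": 0.03, "l_mean": 0.25, "l_sigma": 0.08}
HADRON = {"w_mean": 0.18, "w_sigma": 0.06, "l_mean": 0.35, "l_sigma": 0.15}

def is_gamma_like(images, factor=3.0):
    """Accept the event if the gamma-ray 'probability' exceeds the
    cosmic-ray one by a given factor."""
    return hypothesis_prob(images, GAMMA) > factor * hypothesis_prob(images, HADRON)
```

Because each telescope contributes a density evaluated at its own image parameters, bright well-measured images automatically dominate the product, while faint images carry little weight.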
Figs.~\ref{fig_cut},\ref{fig_diff} show the directional distribution
of events after this cut, with the additional requirement
that at least two telescopes have images with 50 or more
photoelectrons. The projected rms angular resolution derived
from the peak is $0.08^\circ$
(Fig.~\ref{fig_diff}); for optimum signal-to-noise,
one should hence select a region with a radius of about
$0.13^\circ$ around the source. Table~\ref{tab_rates}
lists the resulting rates and efficiencies. The shape
cuts discussed above reduce the cosmic-ray background
by almost a factor of 100, while maintaining a 40\% efficiency
for $\gamma$-rays. With simpler cutting procedures, based
e.g. on the mean width $\bar w$ of all telescope images,
and on their
mean length and concentration,
background rejection is about a factor 2 worse, but still
sizeable.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize11cm
\epsffile{cut12dnn.eps}
}
\end{center}
\caption
{\em Line: distribution of reconstructed shower directions,
relative to the direction to the Crab Nebula, after
cuts on the event shapes. The shaded histogram shows
the background derived from off-source runs and
analyses with a shifted source position.
The small insert shows the two-dimensional distribution
of shower directions.}
\label{fig_cut}
\end{figure}
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfysize11cm
\epsffile{diff1d.eps}
}
\end{center}
\caption
{\em
Expanded view of the signal region,
showing the background-subtracted peak with
an exponential fit corresponding to a projected
angular resolution of $0.08^\circ$.}
\label{fig_diff}
\end{figure}
We note that for short observation times of one hour,
one expects about one background
event in the signal region,
whereas a source with a flux of the Crab generates
about 20 events. This feature allows, e.g., sky surveys with
an observation time of about 1~h per $2^\circ$ by $2^\circ$
region (the effective field of view of the system), at a
sensitivity of about 30\% of the Crab flux. For longer-term
observations - 100 h on-source corresponding to one or
two months of data taking - a source of 3\% of the Crab flux should
still generate a 5-sigma signal, on top of an average
background of about 120 events \footnote{Since the background
is flat over a region which is very large compared
to the signal region, the expected background can be
estimated with good statistical precision. This is
in contrast to single-telescope observations, where
the background is usually determined from an off-source sample,
and has a statistical uncertainty similar to that of the signal sample.}.
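These detection limits are consistent with the post-cut rates of Table~\ref{tab_rates} under a simple $S/\sqrt{B}$ criterion. The sketch below is a back-of-envelope Gaussian approximation, not the exact statistical treatment:

```python
from math import sqrt

def significance(gamma_rate_per_h, bkg_rate_per_h, hours, flux_fraction):
    """Naive S/sqrt(B) significance for a source with the given
    fraction of the Crab flux, using the post-cut rates in the
    signal region (23 gamma/h for the Crab, 1.2 background/h)."""
    signal = flux_fraction * gamma_rate_per_h * hours
    background = bkg_rate_per_h * hours
    return signal / sqrt(background)

# 100 h on a 3%-of-Crab source: ~69 signal events on ~120 background.
sig = significance(23.0, 1.2, 100.0, 0.03)
```

This gives roughly 6 sigma, in line with the 3\%-of-Crab figure quoted in the text; the sensitivity improves with the square root of the observation time.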
For comparison, the single telescope CT3 provides
a detection limit of 15\% to 20\% of the Crab flux after
100~h on-source
(see Fig.~\ref{ct3_crab}).
A uniform extended source of diameter $1^\circ$
is detectable by the system if the total flux exceeds
15\% of the Crab flux, assuming 100~h of on-source data.
In this case, an equivalent amount of off-source data is
required for background subtraction.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
& Before shape cuts & After shape cuts & Selection eff.\\
\hline \hline
Signal & 55/h & 23/h & 0.42\\
(MC) & (58/h) & (29/h) & (0.5) \\
\hline
Background & 105/h & 1.2/h & 0.011 \\
(MC) & (95/h) & (3/h) & (0.03) \\
\hline
\end{tabular}
\vspace{0.5cm}
\caption{\em Detection rates for Crab observations
with the 4-telescope system, before and after shape
cuts, and efficiency of the shape cuts. Rates
given refer to a circle with radius $0.13^\circ$
around the source location. The numbers
in parentheses represent Monte Carlo estimates
(see text); for the $\gamma$-ray source a
flux of $10^{-11}$/cm$^2$s above 1 TeV is assumed,
and a differential spectral index of 2.7.
For the background, only the cosmic-ray proton
component is considered; heavier nuclei have
significantly reduced trigger probabilities.}
\label{tab_rates}
\end{center}
\end{table}
To provide a consistency check,
Table~\ref{tab_rates} also includes Monte Carlo based
estimates for the detection rates. For the $\gamma$-ray
source a differential spectral index of 2.7 is assumed,
and a flux of $10^{-11}$/cm$^2$s above 1 TeV.
Recent single-telescope HEGRA measurements
give values for the Crab flux above 1~TeV of $(0.8 \pm 0.3)
\cdot 10^{-11}$/cm$^2$s
\cite{crab5} and $1.5^{+1.0}_{-0.5} \cdot 10^{-11}$/cm$^2$s \cite{crab6},
with a spectral index of $2.7 \pm 0.3$, and
$0.77^{+0.47}_{-0.13} \cdot 10^{-11}$/cm$^2$s \cite{crab7}
above 1.5~TeV.
The raw detection rates given in the table are based on the
fast simulation tool using parametrized trigger efficiencies.
The Monte Carlo
estimates for the rates before cuts should be reliable within 50\%.
Given these
caveats, the agreement of simulated and measured rates
before cuts is surprisingly good.
The Monte Carlo selection efficiencies quoted
in the table refer to early simulations,
which were carried out using a slightly different configuration and
different selection criteria, and should only serve to illustrate
approximate magnitudes; the cuts used in the present data
analysis are obviously somewhat tighter.
The angular resolution has been
simulated~\cite{ulrich_padua} using the identical algorithm used for
shower reconstruction, and the predicted value of 0.1$^\circ$
is in good agreement with the measured (projected)
resolution of $0.08^\circ$ after cuts, or $0.09^\circ$
before cuts. (The cuts bias towards well-measured narrow
showers, resulting in a slightly improved resolution.)
\section{Conclusion}
In summary, it seems appropriate to state that while
both the hardware of the HEGRA IACT array and the
analysis techniques are still evolving and further
improvements are to be expected, existing data clearly
demonstrate the power and the potential of the
stereoscopic approach. The detection rates and
angular resolutions are consistent with expectations based on
Monte Carlo studies. The data further demonstrate
the lower trigger thresholds achievable with a
telescope coincidence trigger, the high level of
suppression of the cosmic-ray background, and the
superior angular resolution.
\section*{Acknowledgements}
The support of the German Ministry for Research
and Technology BMBF and of the Spanish Research Council
CICYT is gratefully acknowledged. We thank the Instituto
de Astrofisica de Canarias for the use of the site and
for providing excellent working conditions. We gratefully
acknowledge the technical support staff of Heidelberg,
Kiel, Munich, and Yerevan.
\section{Introduction}
The pragmatically applied rules of quantum mechanics involve two distinct laws of
evolution for the state of the system: these are the Schr\"odinger equation and quantum state reduction.
A long-standing problem is how to make sense of this since there is no underlying theory stating when one or
the other of these laws is to be used. Instead it is left to a judgment whereby state reduction is associated with
the fuzzy concept of measurement.
Based on the premise that quantum state reduction should be taken seriously as a genuine physical process,
collapse models \cite{ghir3,ghir2} are an attempt to resolve this situation by suggesting a composite
dynamics incorporating state reduction events or collapses and unitary state evolution (for general reviews see \cite{Bass, Pear2}).
The idea is that the Schr\"odinger equation should be viewed as an approximation to this more general dynamics
valid when collapse effects are negligible. Conversely, collapse effects should be seen to dominate in situations
where state reduction is an appropriate description.
The most familiar model of this type is that of Ghirardi, Rimini, and Weber (GRW) \cite{ghir3} describing a system
of nonrelativistic quantum particles. The essential idea of GRW is that the state of each particle, as a matter of
physical law, occasionally (but very infrequently) undergoes a random collapse to a state localized in position space.
From this law of collapses it follows that quantum wavelike behavior becomes increasingly unstable for systems of increasing
size \cite{ghir3}. Collapse models thus offer a mathematical framework capable of unifying quantum and classical
domains.
The nonrelativistic Continuous Spontaneous Localization (CSL) model \cite{ghir2} is an improvement on
GRW since it preserves the symmetries of systems of identical particles. The formulation of CSL, in terms of a
stochastic differential equation, invites a straightforward generalization to relativistic quantum field theory (QFT),
but it is well known that this results in physically unacceptable divergent behavior \cite{pear3,pearGhir}.
Here we shall address the question of how these infinities can be avoided.
The source of divergences, as with other infinite behavior in QFT, can be traced
to point interactions between quantum field operators in the dynamical equations for the state vector.
However, in the case of relativistic collapse models, attempts to renormalize with the inclusion of subtractive counter
terms fail. This way of viewing the problem of infinities suggests that a solution
could be to smear out the point interactions. The same idea was considered by Nicrosini and Rimini \cite{Nicr}
although their implementation requires the unsatisfactory inclusion of a locally preferred frame.
In this article we propose the use of a novel relativistic field responsible for mediating the collapse
process which enables us to fulfill the aim of smearing the interactions whilst preserving Lorentz covariance and
frame independence. This forms the basis of a relativistic collapse mechanism which naturally resembles
CSL, describing state reductions in a smeared number density eigenbasis.
The structure of the article is as follows. We begin the presentation of our model in section \ref{pf} by stating the
properties of the mediating field (which we subsequently refer to as the pointer field). In section \ref{sd} we define
the dynamical equations of motion following the outline for relativistic collapse models given by Pearle \cite{pear3}.
We discuss the local properties of the model in section \ref{lb} and the form of the smeared interactions in
section \ref{so}.
In section \ref{cp} we describe the collapse process in detail for an idealized example. We estimate
a collapse timescale and demonstrate that the dynamics reproduce the Born rule. In section \ref{ep} we show that the
model does not exhibit physically unacceptable divergent behavior by considering how the energy of the system is
influenced by the equations of motion. We end with a numerical demonstration of the collapse process and a short discussion.
\section{Pointer field}
\label{pf}
Consider a field in which a quantized degree of freedom is associated to each point in spacetime.
This is to be contrasted with a standard quantum field whose modes describe the field configuration on
a time slice or spacelike hypersurface.
We define creation and annihilation operators $a(x)$ and $a^{\dagger}(x)$ with commutation relations
\begin{align}
[a(x),a^{\dagger}(x')] = \delta^4(x-x') \quad ; \quad [a(x),a(x')] = 0,
\end{align}
and specify a normalized ground state with the property that $a(x)|0\rangle=0$.
First excited states are
given by
\begin{align}
|h\rangle = \int d\omega_x h(x)a^{\dagger}(x)|0\rangle,
\end{align}
where $\mbox{$d$}\omega_x$ denotes the integration measure over spacetime volume and
$h$ is some complex $L^2$-function on spacetime.
Higher excited states can be constructed by repeated application of the creation operator in this way, and we define
our state space to accommodate addition of states (enabling field superpositions).
To ensure that the field transforms appropriately when specified in different coordinate frames we supply it with the
transformation property
\begin{align}
U_{\Lambda,b} a(x) U^{-1}_{\Lambda,b} = a(\Lambda x+b),
\end{align}
for a Lorentz coordinate transformation $\Lambda$, spacetime translation $b$, and unitary representation of the
Poincar\'e group $U$.
We refer to this general construction as the pointer field since its role in our model is to make a record of
the state of a conventional quantum field with which it interacts. An equivalent construction used for a different purpose
can be found in references \cite{CQC1,CQC2}. The pointer field's degrees of freedom describe a field configuration over
the whole of spacetime.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{DJB_fig1}
\caption{
{A representation of the domains of functions $f$ and $g$ in spacetime.
The points $x$ and $z$ are spacelike separated and $\sigma$ denotes a spacelike hypersurface belonging
to some spacetime foliation.
}}
\end{center}
\end{figure}
The number density operator is given by $n(x) = a^{\dagger}(x)a(x)$. We use this to construct new operators which are
smeared over spacetime
\begin{align}
N(x) = \int \mbox{$d$} \omega_y f(x,y) n(y).
\end{align}
Here $f(x,y)$ is some invariant function on
spacetime which is only nonzero for $y$ in the past cone of $x$ (see figure 1). Similarly we define
\begin{align}
A(x) = \int \mbox{$d$}\omega_y g(x,y) \left[a(y)+a^{\dagger}(y)\right]
\end{align}
where $g(x,y)$ is some invariant function which is only nonzero for $y$ in the future cone of
$x$ (figure 1). Proposals for the functions $f$ and $g$ will be specified in section \ref{so}. Note that
\begin{align}
[N(x),N(x')]=0 \quad {\rm and}\quad [A(x),A(x')] = 0 \quad \forall\;\; x,x'.
\label{0com}
\end{align}
However,
\begin{align}
[N(x),A(x')]= \int \mbox{$d$}\omega_y f(x,y)g(x',y)\left[a^{\dagger}(y) - a(y)\right],
\label{commie}
\end{align}
where the right hand side is only nonzero when $x$ is in the future cone of $x'$. This entails that
$[N(x),A(x')]=0$ if $x$ and $x'$ are spacelike separated.
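To see how equation (\ref{commie}) arises, note the elementary relations
\begin{align}
[n(y),a^{\dagger}(y')] = \delta^4(y-y')\, a^{\dagger}(y) \quad ; \quad [n(y),a(y')] = -\delta^4(y-y')\, a(y),
\end{align}
so that
\begin{align}
[N(x),A(x')] = \int \mbox{$d$}\omega_y \int \mbox{$d$}\omega_{y'}\, f(x,y) g(x',y')\, [n(y),a(y')+a^{\dagger}(y')]
= \int \mbox{$d$}\omega_y f(x,y)g(x',y)\left[a^{\dagger}(y) - a(y)\right].
\end{align}
The integrand is nonzero only where the supports of $f(x,\cdot)$ and $g(x',\cdot)$ overlap, that is, only for $y$ simultaneously in the past cone of $x$ and the future cone of $x'$; for spacelike separated $x$ and $x'$ no such $y$ exists.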
We regard the pointer field as a new and fundamental component of our model (rather than as some effective construction
representing the effects of standard quantum fields).
\section{State dynamics}
\label{sd}
For a relativistic collapse model we require a covariant description of how the state changes as we
advance through spacetime. The dynamics should involve a classical stochastic input to capture
the random character of quantum state reduction and we expect our equations to be nonlinear reflecting a feedback
from the state vector to the probability of an outcome.
\subsection{Tomonaga picture}
Consider the orthodox dynamics of a conventional relativistic quantum field. In order to form a covariant description of the
evolving state of the field we use the Tomonaga picture: A state $|\Phi(\sigma)\rangle$ is assigned to any given
spacelike hypersurface $\sigma$. As we advance the surface $\sigma$ to a new surface $\sigma'$ which differs from $\sigma$ only
at the point $x$ such that $\sigma$ and $\sigma'$ enclose an incremental spacetime volume $d\omega_x$, the change of
state is given by the Tomonaga equation
\begin{align}
\mbox{$d$}_x |\Phi(\sigma)\rangle =
-i H_{\rm int}(x) \mbox{$d$} \omega_x|\Phi(\sigma)\rangle,
\label{TOM}
\end{align}
where $H_{\rm int}(x)$ represents the interaction Hamiltonian. Any interaction terms must be Lorentz scalars to give the equation
covariant form and must commute at spacelike separation to reflect the fact that there is no temporal ordering of spacelike
separated points (i.e.~no preferred frame). Aharonov and Albert argue in reference \cite{aa} that, for a covariant description
of state collapse, the state must take the form of a functional on the set of spacelike hypersurfaces as in this picture.
For our model we consider a state which describes both a quantum field and pointer field. Given the commutation
relations~(\ref{0com}) and (\ref{commie}) and given the above constraints on $H_{\rm int}$ we may use the Tomonaga
picture to describe the evolving state where $H_{\rm int}$ is constructed from terms involving $N(x)$ and $A(x)$
(along with quantum field operators). This allows us to describe state evolution involving interactions between the
quantum field and pointer field. We remark that whereas the quantum field state describes the quantum field on some
given hypersurface, the pointer field state describes the pointer field over the whole of spacetime.
The pointer field state nevertheless depends on the given hypersurface since this demarcates a boundary of past
interactions with the quantum field.
In the Tomonaga picture we are required to think of state evolution with regards to an ordered sequence of
spacelike hypersurfaces. The relationship between different spacelike hypersurfaces in spacetime can be
classified by a partial ordering structure. Consider two surfaces $\sigma_1$ and $\sigma_2$. If no point
in $\sigma_1$ is to the causal future of any point in $\sigma_2$ then we can say that $\sigma_1 \prec \sigma_2$.
(We will also use the notation $\sigma \prec x$ and $x\prec\sigma$ to denote that the point $x$ is not to the past
and not to the future of $\sigma$ respectively.) The partial order relation $\prec$ is
\begin{align}
{\rm reflexive\;\; : \;\;} & \sigma \prec \sigma,
\nonumber\\
{\rm antisymmetric\;\; : \;\;} & (\sigma_1 \prec \sigma_2) \land (\sigma_2 \prec \sigma_1) \Rightarrow \sigma_1 = \sigma_2,
\nonumber\\
{\rm transitive\;\; : \;\; } & \sigma_1 \prec \sigma_2 \prec \sigma_3 \Rightarrow \sigma_1 \prec \sigma_3.
\end{align}
A foliation of spacetime is any maximally ordered chain of surfaces. In a model with no preferred frame the
foliation should have no physical significance---it should be considered to be analogous to a choice of gauge.
\subsection{Stochastic processes}
In order to understand the disclosure of stochastic
information in the context of hypersurfaces advancing through a
foliation of spacetime we require an appropriately structured probability space.
We specify our probability space by
$(\Omega, {\cal F}, \mathbb{Q})$ along with a filtration $\{{\cal F}_{\sigma}\}$ of ${\cal F}$,
defined to be a family of sigma-algebras ${\cal F}_{\sigma}\subset {\cal F}$ such that
\begin{align}
\sigma_1 \prec \sigma_2 \Rightarrow {\cal F}_{\sigma_1} \subset {\cal F}_{\sigma_2}.
\end{align}
The partially ordered set structure of the spacelike hypersurfaces is thus induced on the subset structure
of the filtration. The subsets $F$ of $\Omega$ belonging to ${\cal F}$ are the events of our probability
space (e.g. $F=\{\text{the state assigned to $\sigma$ is }|0\rangle\}$).
We interpret $\mathbb{Q}(F)$ as the probability that the event $F$ occurs.
The construction of a filtration on the probability space allows us to formalize the notion that the consequences
of the outcome of chance (an element $\omega$ of $\Omega$) are not necessarily revealed at once, but rather may emerge
sequentially as the system evolves. This is achieved using the concept of conditional expectation with respect to
${\cal F}_{\sigma}$, having the intuitive meaning of conditioning with respect to information about the set
of events belonging to ${\cal F}_{\sigma}$. Stochastic processes in this context are random variables indexed by $\sigma$.
We describe the classical stochastic input of our model in terms of a noise field on spacetime.
By comparison with standard Brownian motion we can define a Brownian motion field in terms of infinitesimal
increments $\mbox{$d$} W_{x}$ specified at each spacetime point with properties
\begin{align}
\mathbb{E}^{\mathbb{Q}}[\mbox{$d$} W_{x}] = 0 \quad \text{and} \quad \mbox{$d$} W_{x}\mbox{$d$} W_{x'} = \delta_{x,x'} \mbox{$d$}\omega_x,
\end{align}
where $\mathbb{E}^{\mathbb{Q}}[\; \cdot \;]$ denotes $\mathbb{Q}$-expectation.
We assume that the filtration $\{{\cal F}_{\sigma}\}$ is generated by our Brownian motion field such that for
any $x$ where $\neg(\sigma \prec x)$, $\mbox{$d$} W_x$ is ${\cal F}_{\sigma}$-measurable. We can define a Brownian motion
process $W_{\sigma}$ such that $W_{\sigma'}-W_{\sigma} = \int_{\sigma}^{\sigma'}dW_x$ for any $\sigma\prec \sigma'$.
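Concretely, on a hypothetical spacetime lattice in which each cell of 4-volume $\Delta\omega$ carries an independent Gaussian increment, the defining moments of the Brownian motion field can be checked numerically. The following Python sketch uses illustrative parameter values of our own choosing and is not part of the model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

d_omega = 0.01          # 4-volume of each spacetime cell
n_cells = 200_000       # number of cells sampled

# Independent Q-measure increments: dW_x ~ N(0, sqrt(d_omega)),
# so that E[dW_x] = 0 and E[(dW_x)^2] = d_omega
dW = rng.normal(0.0, np.sqrt(d_omega), size=n_cells)

mean_dW = dW.mean()       # should vanish in the mean
mean_dW2 = (dW**2).mean() # should equal d_omega in the mean

# A Brownian motion process W_sigma accumulates the increments
# between successive hypersurfaces of a foliation
W_sigma = dW.cumsum()
```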
\subsection{Implicit equation of motion}
Other than spacetime, the structure of our model involves three spaces: (i) the space $\Sigma$ of all possible
spacelike hypersurfaces $\sigma$ in spacetime; (ii) a probability space $(\Omega, {\cal F}, \mathbb{Q})$ in which
all $\mbox{$d$} W_x$ are specified; and (iii) a Hilbert space ${\cal H}$ which describes the degrees of freedom of our
universe (including matter fields, gauge fields, and the pointer field). The model describes a joint map from
$\Sigma$ and $\Omega$ to
${\cal H}$
\begin{align}
\Phi : {}& \{\Sigma, \Omega\} \rightarrow {\cal H}, \nonumber \\
& \{\sigma, \omega\} \mapsto |\Phi(\sigma, \omega)\rangle.
\end{align}
Given an initial condition for the state we can define this map in terms of state evolution by the stochastic differential equation
\begin{align}
\mbox{$d$}_x |\Phi(\sigma)\rangle = \left\{
-i J(x) A(x) \mbox{$d$} \omega_x - \mbox{$\textstyle \frac{1}{2}$} \lambda^2 N^2(x) \mbox{$d$} \omega_x +\lambda N(x) \mbox{$d$} W_x
\right\}|\Phi(\sigma)\rangle.
\label{SE}
\end{align}
We also specify a change of probability measure
\begin{align}
\mathbb{E}^{\mathbb{P}}[\; \cdot \;|{\cal F}_{\sigma}] =
\frac{\mathbb{E}^{\mathbb{Q}}[\; \cdot \; \langle\Phi(\sigma_f)|\Phi(\sigma_f)\rangle |{\cal F}_{\sigma} ]}
{\mathbb{E}^{\mathbb{Q}}[\langle\Phi(\sigma_f)|\Phi(\sigma_f)\rangle |{\cal F}_{\sigma}]},
\label{PROB}
\end{align}
which relates the defining probability measure $\mathbb{Q}$, under which all Brownian increments $d W_x$
are independent, to the physical probability measure $\mathbb{P}$, under which stochastic probabilities
of evolved states agree with quantum predictions (see below). The surface $\sigma_f$ should be entirely to the future
of any regions of interest but is otherwise arbitrary owing to the fact that
$\langle\Phi(\sigma)|\Phi(\sigma)\rangle$ is a $\mathbb{Q}$-martingale:
\begin{align}
\mathbb{E}^{\mathbb{Q}}[\langle\Phi(\sigma')|\Phi(\sigma')\rangle|{\cal F}_{\sigma}]
=\langle\Phi(\sigma)|\Phi(\sigma)\rangle,
\end{align}
for $\sigma\prec\sigma'$ (the tower rule can then be used to show that $\mathbb{P}$-expectations for different
$\sigma_f$ are equivalent).
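The martingale property can be illustrated in a toy setting. For a two-branch superposition with constant eigenvalues $N_1$ and $N_2$ of a single smeared operator, the linear dynamics (\ref{SE}) give each branch the exact factor $\exp\{-\lambda^2 N_i^2\,\omega + \lambda N_i W\}$ after total 4-volume $\omega$, with the same realized $W$ driving both branches. A Python sketch (hypothetical parameter values, purely illustrative) confirms that the $\mathbb{Q}$-averaged squared norm remains unity:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 1.0
n1, n2 = 0.0, 1.0          # N(x)-eigenvalues of the two branches
c1, c2 = 0.6, 0.8          # initial amplitudes, |c1|^2 + |c2|^2 = 1
omega = 0.25               # total 4-volume advanced through
n_samples = 400_000

# One realized Brownian value W per sample drives both branches
W = rng.normal(0.0, np.sqrt(omega), size=n_samples)

# Exact solution of the linear Q-measure equation per branch:
# phi_i = c_i * exp(-lam^2 n_i^2 omega + lam n_i W)
phi1 = c1 * np.exp(-lam**2 * n1**2 * omega + lam * n1 * W)
phi2 = c2 * np.exp(-lam**2 * n2**2 * omega + lam * n2 * W)

# Q-expectation of the squared norm should remain 1 (martingale)
mean_norm = np.mean(phi1**2 + phi2**2)
```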
The stochastic coupling parameter $\lambda$ is a constant which relates to the rate at which the
collapse process occurs and the Lorentz invariant operator $J(x)$ is a scalar current operator
representing the matter density of a quantum field. (For example, we might choose $J(x) = \bar{\psi}(x)\psi(x)$
for a Dirac field $\psi(x)$.) We will refer to $J(x)$ as the matter density operator for the quantum field.
We have omitted any interactions between different quantum fields in equation (\ref{SE}); however, these can easily be added.
Equation (\ref{SE}) is a stochastic extension of the Tomonaga formulation of quantum state
evolution. By setting $\lambda = 0$ we recover the Tomonaga equation in differential form (\ref{TOM}).
Provided that $J(x)$ commutes with $J(x')$ for spacelike separated $x$ and $x'$ then all terms in the
evolution equation (\ref{SE}) commute at spacelike separation. (This is indeed the case for the example
$J(x) = \bar{\psi}(x)\psi(x)$ where $\{\psi(x),\bar{\psi}(x')\}=\{\psi(x),\psi(x')\}=0$ for spacelike
separated $x$ and $x'$.) This fact ensures that the specific foliation used has no physical consequences
since given a fixed initial state and a complete realized set of stochastic information $\{dW_x\}$, for
any two foliations which share a common leaf, the assigned state on that leaf is unique. Equation (\ref{PROB})
also shows that the physical probability density of a given realized set of stochastic information
$\{dW_x |\sigma \prec x \prec \sigma_f \}$ conditional on ${\cal F}_{\sigma}$ depends only on the covariantly
defined and foliation independent state norm assigned to $\sigma_f$. This in turn is determined from the state
at $\sigma$ using the same realized stochastic information $\{dW_x |\sigma \prec x \prec \sigma_f\}$.
Since the physical probability of obtaining a given final state
depends on the final state itself we refer to this formulation as implicit.
Equations (\ref{SE}) and (\ref{PROB}) completely specify the dynamics of the model in a covariant and frame
independent manner.
\subsection{Collapse mechanism outline}
\label{CMO}
Consider the pointer field initially in its ground state. As the state evolves according to equation
(\ref{SE}), the interaction described by the term $J(x)A(x)$
leads to an excitation of the pointer field only in the future cone of $x$. We assume for now that the smearing density
$g(x,y)$ is fairly well localized about $x$ in some sense. If $J(x)$ is significant at $x$ then the effect can
be thought of as analogous to that of a particle passing through a cloud chamber where a record of the track is formed
and left behind.
By contrast, the operator $N(x)$ acts on the pointer field state only in the past light cone of the point $x$,
registering the track made by $J$.
The effect of the last two terms on the right side of equation (\ref{SE}) can be understood by considering an
incremental stage in the state evolution. We can write
\begin{align}
(1+\Delta_x) |\Phi(\sigma)\rangle &\sim \left\{1- \mbox{$\textstyle \frac{1}{2}$} \lambda^2 N^2(x) \Delta \omega_x +\lambda N(x) \Delta W_x
\right\}|\Phi(\sigma)\rangle \nonumber\\
&\sim \exp\left\{- \lambda^2 \left[N(x)-\frac{1}{2\lambda}\frac{\Delta W_x}{\Delta\omega_x}\right]^2 \Delta\omega_x + \mbox{$\textstyle \frac{1}{4}$}
\right\}|\Phi(\sigma)\rangle.
\label{hercoll}
\end{align}
Heuristically we see that the state is acted on by a Gaussian positive valued operator which is centered
about a point determined by the random choice of $\Delta W_x$. This has the effect of diminishing the quantum
amplitude of all $N(x)$-eigenstates with respect to this central value. The probability rule (\ref{PROB}) is designed to
ensure that the location of this projection is more likely where the quantum amplitude is greatest and in so doing
reproduce the Born rule (we will examine this in detail in section \ref{cp}). As the state evolves it is impelled by
these projections toward an $N(x)$-eigenstate.
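To see the cumulative effect of the heuristic factors in equation (\ref{hercoll}), suppose (purely for illustration) that the state is advanced through $K$ increments of equal 4-volume $\Delta\omega$, with total 4-volume $\Omega = K\Delta\omega$, over a region in which the spectrum of $N(x)$ is unchanging. The product of Gaussian factors is itself Gaussian:
\begin{align}
\prod_{k=1}^{K}\exp\left\{-\lambda^2\left[N - w_k\right]^2\Delta\omega\right\}
\propto \exp\left\{-\lambda^2\,\Omega\left[N - \bar{w}\right]^2\right\},
\qquad \bar{w} = \frac{1}{K}\sum_{k=1}^{K}w_k,
\end{align}
where $w_k = \Delta W_k/(2\lambda\Delta\omega)$. The width of the effective projection in the $N$-eigenvalue spectrum therefore shrinks like $\Omega^{-1/2}$, so that eigenstates are singled out ever more sharply as 4-volume accumulates.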
Reductions occur to a smeared number density eigenbasis as with the CSL model. This model can therefore be
seen as a natural relativistic extension of CSL.
It is important that the excitations made to the pointer field influence
the result of acting with the operator $N(x)$ at $x$. Equation~(\ref{commie}) indicates that this is the case
provided that the excitation involving $A(y)$ occurs for $y$ in the past cone of $x$.
Since interactions between the quantum field and the pointer field result in entanglement between different quantum matter
densities and different pointer field states, the collapse of a superposition of pointer field states will induce a collapse
of quantum field states. We will consider specific examples of this process in later sections.
\subsection{Nonlocality}
The probability rule (\ref{PROB}) is responsible for nonlocal correlations in this model. Consider a state which describes
two spacelike separated subsystems of an entangled global system (such as in an EPR-type experiment). Suppose that each of these
subsystems undergoes a collapse (such as that involved in a spin measurement). The probability rule ensures not only that the outcomes of
the collapse processes for each subsystem occur individually with the correct quantum probabilities but also that the
joint probabilities for outcomes in the two subsystems satisfy quantum predictions. In this case we find that the $dW_x$s
are correlated over spacelike separation in the physical probability measure. The $\mathbb{Q}$-Brownian motion field behaves as
a nonlocal hidden variable in the theory. This is analyzed in detail in reference \cite{ME}.
\subsection{Explicit equation of motion}
We can define a Brownian motion field under the $\mathbb{P}$-measure such that
\begin{align}
\mathbb{E}^{\mathbb{P}}[\mbox{$d$} B_{x}] = 0 \quad \text{and} \quad \mbox{$d$} B_{x}\mbox{$d$} B_{x'} = \delta_{x,x'} \mbox{$d$}\omega_x.
\label{BMotion}
\end{align}
Given a specific foliation of spacetime we can relate this to the $\mathbb{Q}$-Brownian motion field by defining
\begin{align}
\mbox{$d$} B_x = \mbox{$d$} W_x -2\lambda\langle N(x) \rangle_{\sigma}\mbox{$d$} \omega_x,
\label{Pnoise}
\end{align}
where we have used the notation
\begin{align}
\langle \; \cdot \; \rangle_{\sigma} = \frac{\langle\Phi(\sigma) |\;\cdot\; |\Phi(\sigma)\rangle}{\langle\Phi(\sigma) |\Phi(\sigma)\rangle}
\end{align}
to denote quantum expectation.
It is straightforward to show that this definition satisfies (\ref{BMotion}). Note that it is the increments $dW_x$ which
represent the physical stochastic information in our model. As stated above, for a given initial state and a complete set of realized values
$\{dW_x\}$ the final state is uniquely specified by equation (\ref{SE}). The construction (\ref{Pnoise}) is just a useful
way in which we can represent the stochastic
information, but we should be aware that the realized values $dB_x$ are not physically meaningful in the sense that they depend
on the specific choice of foliation (via the state defined on surface $\sigma$). A different foliation would require a
different realized $\mathbb{P}$-Brownian motion field to achieve the same evolved state on a given leaf.
Expressing equation (\ref{SE}) directly in terms of the $\mathbb{P}$-Brownian motion field
we end up with the following nonlinear equation for the normalized state:
\begin{align}
\mbox{$d$}_x |\Psi(\sigma)\rangle = \Big\{
-i J(x) A(x) \mbox{$d$} \omega_x - & \mbox{$\textstyle \frac{1}{2}$} \lambda^2 \left[N(x)-\langle N(x)\rangle_{\sigma}\right]^2 \mbox{$d$} \omega_x
\nonumber \\
& + \lambda^{} \left[N(x)-\langle N(x)\rangle_{\sigma}\right]\mbox{$d$} B_x
\Big\}|\Psi(\sigma)\rangle,
\label{SEP}
\end{align}
with
$|\Psi(\sigma)\rangle = |\Phi(\sigma)\rangle \langle\Phi(\sigma)|\Phi(\sigma)\rangle^{-\mbox{$\textstyle \frac{1}{2}$}}$.
This equation enables us to generate physical sample paths for the state in terms of Brownian increments
generated under the physical measure for a given spacetime foliation. By construction we know that this equation gives foliation independent
results even though the nonlinearity obscures this fact.
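Equation (\ref{SEP}) can be integrated numerically in an idealized two-branch setting with constant eigenvalues $N_1$ and $N_2$. The Python sketch below uses a simple Euler--Maruyama-type scheme with hypothetical parameter values of our own choosing; it is intended only to illustrate that trajectories are driven to one or other eigenstate, with outcome frequencies matching the initial Born weight $|c_1|^2$:

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 1.0
n1, n2 = 0.0, 1.0        # N(x)-eigenvalues of the two branches
p0 = 0.3                 # initial Born weight |c1|^2
n_traj = 4000            # number of independent trajectories
dt = 0.01                # incremental 4-volume per step
n_steps = 3000

p = np.full(n_traj, p0)  # probability weight of branch 1 per trajectory
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_traj)
    nbar = p * n1 + (1.0 - p) * n2   # quantum expectation <N>
    # Per-step branch weights: exact solution of the linear branch
    # equations with <N> frozen over the step, then renormalized
    w1 = p * np.exp(2.0 * lam * (n1 - nbar) * dB
                    - 2.0 * lam**2 * (n1 - nbar)**2 * dt)
    w2 = (1.0 - p) * np.exp(2.0 * lam * (n2 - nbar) * dB
                            - 2.0 * lam**2 * (n2 - nbar)**2 * dt)
    p = w1 / (w1 + w2)

frac_to_1 = np.mean(p > 0.5)                 # fraction collapsing to branch 1
residual = np.mean(np.minimum(p, 1.0 - p))   # how fully collapsed trajectories are
```

The fraction of trajectories ending near branch 1 approaches $p_0 = |c_1|^2$, as guaranteed by the martingale property of the branch weight under the physical measure.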
\section{Local beables}
\label{lb}
If the model outlined above is to solve any of the conceptual problems of quantum theory then it must be equipped with a
prescription for determining definite properties of the world in bounded regions of spacetime. Bell introduced the
concept of local beables to provide such a means of describing a system in classical terms in order to make a
clear point of contact with evidence of real world phenomena \cite{Bell}.
In describing state vector collapse our mathematical framework contains only the state vector and a classical
stochastic noise field. We first consider the former. Local properties of the state vector,
as described by the action of local operators are indefinite for two reasons: (i) the state may not
be an eigenstate of the operator in question; and (ii) nonlocalities ensure that the action of local operators
on the state are affected by distant collapses which may or may not have happened depending on the choice of
spacelike hypersurface.
To address this Ghirardi \cite{Ghir4} has proposed that definite properties of the theory at point $x$ can be
defined as the quantum expectations of local operators $O(x)$, where the state is assigned to the hypersurface $plc(x)$
forming the past light cone of $x$ (or the spacelike surface which is arbitrarily close to this):
\begin{align}
\bar{O}(x) = \langle O(x) \rangle_{plc(x)}.
\end{align}
Assuming that this past light cone limit is valid, we can define local beables in this way which are unambiguous,
Lorentz covariant, and frame independent. This does not affect the arbitrary choice of foliation used to describe
the state evolution---by conditioning on ${\cal F}_{\sigma}$ for any hypersurface $\sigma$ passing through the
point $x$, the past light cone state is specified.
It should not be necessary to grant beable status to the quantum expectation of every local operator in this way. Ghirardi
suggests that only the matter density need be a beable since this is enough to specify the
locations of macro objects. In the spirit of relativity we suggest that the beables of the theory could be the stress-energy
density of the quantum field
\begin{align}
\bar{T}^{\mu\nu}(x) = \langle T^{\mu\nu}(x) \rangle_{plc(x)},
\end{align}
(assuming that quantum expectations are finite following renormalization).
The other possible choice for the local beable of the theory is the classical stochastic noise field. From equation
(\ref{Pnoise}) we can associate physical random variables to any finite region of spacetime $R$ as follows:
\begin{align}
W_R = \int_{R}dW_x = \int_{R}dB_x + 2\lambda \int_R d\omega_x \langle N(x) \rangle_{\sigma}.
\end{align}
This is a Lorentz invariant random variable (the explicit $\sigma$ dependence is offset by the foliation dependence of $dB_x$).
The right side of this equation demonstrates that the physical variable $W_R$ is composed of a signal---the quantum expectation of the
operator $N(x)$ integrated over the region $R$---and a noise $B_{R} =\int_{R}dB_x$. For regions where the quantum expectation of $N(x)$ is large we
can expect a large signal to noise ratio. The random variable $W_R$ then gives a classical image of $N(x)$-density.
This is perhaps the more natural choice for the beables given that the physical noise field is
the classical element of the theory. However, note that the essential information in $W_R$ is given, in Lorentz invariant form, by $\bar{N}(x)$. This shows an equivalence between the two proposals.
In each case, whether or not these variables are treated as local beables, they are nevertheless well defined Lorentz covariant
and frame independent local properties of the theory.
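As a numerical illustration of this signal-plus-noise decomposition (Python, with hypothetical values for $\lambda$, $\langle N(x)\rangle$, and the region volume, chosen purely for illustration), the record $W_R$ separates a matter-filled region from an empty one once the signal $2\lambda\bar{N}\,\Omega_R$ dominates the noise scale $\sqrt{\Omega_R}$:

```python
import numpy as np

rng = np.random.default_rng(3)

lam = 1.0
omega_R = 1.0            # 4-volume of the region R
n_bar = 5.0              # quantum expectation <N(x)>, taken constant over R
n_samples = 10_000

# Noise term B_R ~ N(0, sqrt(omega_R)) in both cases
B_R = rng.normal(0.0, np.sqrt(omega_R), size=(2, n_samples))

W_R_matter = B_R[0] + 2.0 * lam * n_bar * omega_R   # region containing matter
W_R_empty = B_R[1]                                  # empty region: <N> = 0

signal = 2.0 * lam * n_bar * omega_R    # deterministic signal component
snr = signal / np.sqrt(omega_R)         # signal-to-noise ratio

# With SNR = 10 the two distributions barely overlap
sep = np.mean(W_R_matter > 0.5 * signal)
```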
\section{Smeared operators}
\label{so}
Our dynamical equation for the state vector (\ref{SEP}) involves two different operators acting on the pointer
field state:
\begin{align}
N(x) = \int \mbox{$d$}\omega_y f(x,y) n(y)
\;\; ; \;\;
A(x) = \int \mbox{$d$}\omega_y g(x,y) \left[a(y)+a^{\dagger}(y)\right].
\nonumber
\end{align}
Each of these operators potentially describes a nonlocal interaction.
The aim of this section is to consider some possible forms for the smearing functions $f$ and $g$ which satisfy the
constraint of Lorentz invariance.
First consider the function $g(x,y)$. In order that the smeared interaction satisfies a reasonable definition of locality we would like
for $g(x,y)$ to be appreciable only for points $y$ which are near to $x$. However, the notion of $y$ being near to $x$
is frame dependent. We therefore propose to use the local properties of the theory to determine the form of
$g(x,y)$ in a way which takes account of the local energy flow of the field at point $x$. Specifically we propose a form
\begin{align}
g(x,y)=
C(x)\exp\left\{- k \bar{T}^{\mu\nu}(x)(y_{\mu}-x_{\mu})(y_{\nu}-x_{\nu}) \right\},
\label{g}
\end{align}
for $y$ in the future cone of $x$ and $g(x,y)= 0$ elsewhere. Here $k$ is a positive real constant which controls the
rate of decay of $g$ with $y$ and $C(x)$ is a positive real
normalization function defined such that $g$ satisfies $\int d\omega_y g(x,y) = 1$.
Note that the form of $g$ is Lorentz invariant. The exponent is negative definite since for $y$ within the future cone of $x$ we can
always choose a frame in which $(y-x)$ defines the time direction and in this frame only the positive definite
$\bar{T}^{00}$ component contributes. The stress-energy factor ensures that the function decays more rapidly in
those timelike directions in which the magnitude of the field momentum is large (e.g. at the extremes of the light cone).
The function $g(x,y)$ thus defines a distribution of points $y$ near to $x$ from the point of view of a field rest frame
at $x$. Of course this function could take many other forms---this example is intended as an illustration.
Similarly for $f(x,y)$ we choose the smeared form
\begin{align}
f(x,y)=
C(x)\exp\left\{- k \bar{T}^{\mu\nu}(x)(x_{\mu}-y_{\mu})(x_{\nu}-y_{\nu}) \right\},
\label{FT}
\end{align}
for $y$ in the past cone of $x$ and $f(x,y)=0$ elsewhere.
Since $\bar{T}^{\mu\nu}(x)$ is involved in the equations of motion of the state, the model is non-Markovian: in order to
advance the state from some arbitrary surface $\sigma$ to another surface $\sigma'$ which differs
from $\sigma$ only at the point $x$ we must determine $\bar{T}^{\mu\nu}(x)$. Since this depends on the surface $plc(x)$ we require
stochastic information encoded in $\{dW_y\}$ for $y$ to the past of $\sigma$ (but outside
the past cone of $x$). This makes exact calculations difficult to perform.
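The Lorentz invariance of the exponent in equations (\ref{g}) and (\ref{FT}) can be checked directly: since $\Lambda^{\rm T}\eta\Lambda = \eta$ for any Lorentz transformation, the scalar $\bar{T}^{\mu\nu}\Delta_\mu\Delta_\nu$ is unchanged when both $\bar{T}^{\mu\nu}$ and the separation $\Delta^\mu = y^\mu - x^\mu$ are boosted. A Python sketch with an arbitrary diagonal stress-energy (illustrative values only):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)

# Boost along x with velocity beta
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta

# Rest-frame stress-energy T^{mu nu} = diag(rho, p, p, p), illustrative values
T = np.diag([2.0, 0.5, 0.5, 0.5])

# Timelike, future-pointing separation Delta^mu = y^mu - x^mu
Delta = np.array([1.0, 0.3, 0.2, 0.1])

def exponent(T_up, Delta_up):
    """T^{mu nu} Delta_mu Delta_nu, indices lowered with the metric."""
    Delta_low = eta @ Delta_up
    return Delta_low @ T_up @ Delta_low

s_rest = exponent(T, Delta)
# Boosted frame: T'^{mu nu} = L T L^T and Delta'^mu = L Delta^mu
s_boost = exponent(L @ T @ L.T, L @ Delta)
```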
\section{Collapse process}
\label{cp}
In this section we consider a specific example involving an initial superposition of different quantum matter density
states. By decomposing into different time stages where either the field interactions dominate or the
collapse process dominates we will demonstrate the characteristics of the state dynamics.
Let us express the initial state (assigned to an initial hypersurface $\sigma_i$) as a direct product of the quantum
field state (describing matter) and the pointer state
\begin{align}
|\Psi(\sigma_i)\rangle = |\Psi_{\rm matter}\rangle |\Psi_{\rm pointer}\rangle.
\label{formINIT}
\end{align}
We contrive a situation in which the matter field is initially in a superposition of idealized $J(x)$-eigenstates, i.e.
\begin{align}
|\Psi_{\rm matter}\rangle = \sum_i c_i |J_i\rangle,
\end{align}
where $|J_i\rangle$ are normalized and satisfy $J(x)|J_i\rangle = J_i(x)|J_i\rangle$
($J_i(x)$ is some real valued function of $x$). The initial condition of the pointer field (prior to interaction) is the
ground state. Equation (\ref{formINIT}) can thus be written
\begin{align}
|\Psi(\sigma_i)\rangle = \sum_i c_i |J_i\rangle |0\rangle.
\label{init}
\end{align}
A state of this type could, for example, be formed following the interaction of some (quantum) measuring device with
a quantum particle (prior to any interaction with the pointer field).
If we ignore for now the collapse dynamics by setting $\lambda=0$ in equation (\ref{SEP}), we have the
state evolution equation
\begin{align}
\mbox{$d$}_x |\Psi(\sigma)\rangle = -i J(x) A(x) \mbox{$d$} \omega_x |\Psi(\sigma)\rangle.
\end{align}
This equation has the formal solution
\begin{align}
|\Psi(\sigma)\rangle = \exp\left\{-i \int_{\sigma_i}^{\sigma} d\omega_x J(x) A(x) \right\} |\Psi(\sigma_i)\rangle,
\end{align}
where the terms in the exponent should be time ordered (noting that in general $[J(x),J(x')]\neq 0$ for timelike
separated $x$ and $x'$).
Applying this solution to our initial condition (\ref{init}) we find that after the system has evolved to some
hypersurface $\sigma_{\rm int}$ (denoting the end of this pure interaction phase) the state is given by
\begin{align}
|\Psi(\sigma_{\rm int})\rangle = \sum_i c_i |J_i\rangle |\alpha_i\rangle,
\label{JAsol}
\end{align}
where
\begin{align}
|\alpha_i\rangle = \exp\left\{\int d\omega_y \left[\alpha_i(y,\sigma_{\rm int})a^{\dagger}(y)
- \alpha_i^*(y,\sigma_{\rm int})a(y)\right] \right\}|0\rangle,
\label{coh}
\end{align}
and
\begin{align}
\alpha_i(y,\sigma_{\rm int}) = -i \int_{\sigma_i}^{\sigma_{\rm int}} d\omega_x J_i(x) g(x,y).
\label{coheig}
\end{align}
From equation (\ref{coh}) it is straightforward to show that the state $|\alpha_i\rangle$ has the property
\begin{align}
a(z)|\alpha_i\rangle = \alpha_i(z,\sigma_{\rm int})|\alpha_i\rangle.
\end{align}
The pointer field state is therefore a coherent state.
Equation (\ref{coheig}) entails that the pointer field is excited in proportion to the matter density and is only excited
in locations near where the matter density is nonzero (see equation (\ref{g})).
This analysis shows that a superposition state in the matter field leaves an imprint on the pointer field.
An initial superposition of different $J(x)$-states results in an entangled superposition of $a$-eigenstates
after a short period of interaction. Notice that this will lead to a loss of coherence for the matter field state in cases where
the pointer field is significantly excited (the pointer field behaves as an environment).
If the pointer field is in the state $|\alpha_i\rangle$, then the quantum expectation value of the operator $N(x)$ is
\begin{align}
\langle \alpha_i |N (x)|\alpha_i \rangle = \int d\omega_y f(x,y) |\alpha_i(y,\sigma_{\rm int})|^2,
\end{align}
and the quantum variance of the operator $N(x)$ is
\begin{align}
\langle \alpha_i |N^2 (x)|\alpha_i \rangle - \langle \alpha_i |N (x)|\alpha_i \rangle^2 =
\int d\omega_y f^2(x,y) |\alpha_i(y,\sigma_{\rm int})|^2.
\end{align}
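For completeness, this variance follows from the coherent-state property together with the canonical commutator $[a(y),a^{\dagger}(z)]=\delta^{4}(y-z)$ implicit in the model: writing $\alpha_i(y)$ for $\alpha_i(y,\sigma_{\rm int})$, normal ordering gives
\begin{align}
\langle \alpha_i | a^{\dagger}(y)a(y)\, a^{\dagger}(z)a(z) |\alpha_i\rangle
= |\alpha_i(y)|^2 |\alpha_i(z)|^2 + \delta^{4}(y-z)\, \alpha_i^{*}(y)\alpha_i(z),
\end{align}
and integrating against $f(x,y)f(x,z)\, d\omega_y d\omega_z$ reproduces both expressions above.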
This means that the size of quantum fluctuations in $N(x)$ behaves approximately as the square
root of the expected value. Therefore, for sufficiently large values of $J_i(x)$ (corresponding to macroscopic matter)
we can make the assumption that $|\alpha_i\rangle$ is an approximate $N(x)$-eigenstate:
\begin{align}
N(x)|\alpha_i\rangle \simeq \int d\omega_y f(x,y) |\alpha_i(y,\sigma_{\rm int})|^2|\alpha_i\rangle.
\label{Neig}
\end{align}
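The square-root scaling can be illustrated numerically. The sketch below is our own illustration using Poissonian number statistics (which hold exactly for unsmeared coherent states, and approximately here): the relative spread of $N$ falls off as the inverse square root of its mean, so highly excited pointer states are approximate $N(x)$-eigenstates.

```python
import numpy as np

# Illustration (ours): for Poissonian number statistics, the relative
# fluctuation std/mean scales as 1/sqrt(mean), so large excitations
# behave as approximate eigenstates of the number operator.
rng = np.random.default_rng(4)
rels = {}
for mean in (10.0, 1e4):
    samples = rng.poisson(mean, 100_000)
    rels[mean] = samples.std() / samples.mean()
print(rels)  # relative spread ~ 1/sqrt(mean): roughly 0.32 and 0.01
```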
We now turn to the collapse dynamics, ignoring the $J(x)A(x)$ interaction term in equation (\ref{SEP}):
\begin{align}
\mbox{$d$}_x |\Psi(\sigma)\rangle = \Big\{
- \mbox{$\textstyle \frac{1}{2}$} \lambda^2 & \left[N(x)-\langle N(x)\rangle_{\sigma}\right]^2 \mbox{$d$} \omega_x
\nonumber \\ &
+ \lambda^{} \left[N(x)-\langle N(x)\rangle_{\sigma}\right]\mbox{$d$} B_x
\Big\}|\Psi(\sigma)\rangle.
\label{collonly}
\end{align}
We take the state to be of the idealized postinteraction form
\begin{align}
|\Psi(\sigma_{\rm int})\rangle = \sum_i c_i |J_i\rangle |N_i\rangle,
\label{perfectN}
\end{align}
where $|N_i\rangle$ satisfies $\langle N_i|N_j\rangle=\delta_{ij}$ and $N(x)|N_i\rangle = N_i(x)|N_i\rangle$
($N_i(x)$ is a real valued function of $x$). Denoting the quantum variance of the operator $N(x)$ as
\begin{align}
{\rm Var}_{\sigma}[N(x)] = \langle N^2(x) \rangle_{\sigma} - \langle N(x) \rangle^2_{\sigma},
\label{varperfectN}
\end{align}
we find that for a state of the form (\ref{perfectN}),
\begin{align}
{\rm Var}_{\sigma_{\rm int}}[N(x)] = \sum_i |c_i|^2 N^2_i(x) - \left( \sum_i |c_i|^2 N_i(x) \right)^2.
\end{align}
The quantum variance is greater than or equal to zero, and is only equal to zero if either (i) $|c_j|=1$ for some $j$ and
$c_{i\neq j}=0$ for all other $i$s, or (ii) all $N_i(x)$s have the same value. We assume that the second situation is
not true everywhere.
Similarly let us define the quantum covariance of $N(x)$ and $N(y)$ by
\begin{align}
{\rm Cov}_{\sigma}[N(x),N(y)] = \langle N(x)N(y) \rangle_{\sigma} - \langle N(x) \rangle_{\sigma}\langle N(y) \rangle_{\sigma}.
\label{cov}
\end{align}
From equation (\ref{collonly}) we can show that the quantum variance of $N(x)$ satisfies the process
\begin{align}
\mbox{$d$}_y {\rm Var}_{\sigma}[N(x)] = -4 & \lambda^2 {\rm Cov}^2_{\sigma}[N(x),N(y)] \mbox{$d$} \omega_y
\nonumber\\
+ 2\lambda & \left[ \langle N^2(x)N(y)\rangle_{\sigma}
- \langle N^2(x)\rangle_{\sigma}\langle N(y)\rangle_{\sigma} \right.
\nonumber \\
& \left. - 2\langle N(x)\rangle_{\sigma} \langle N(x)N(y)\rangle_{\sigma}
+ 2\langle N(x)\rangle^2_{\sigma}\langle N(y)\rangle_{\sigma}\right]\mbox{$d$} B_y.
\end{align}
From this equation we find
\begin{align}
\mathbb{E}^{\mathbb{P}}\left[\left. {\rm Var}_{\sigma}[N(x)] \right| {\cal F}_{\sigma_{\rm int}}\right]
= & {\rm Var}_{\sigma_{\rm int}}[N(x)] \nonumber\\
& - 4\lambda^2 \mathbb{E}^{\mathbb{P}}\left[\left. \int_{\sigma_{\rm int}}^{\sigma}\mbox{$d$} \omega_y{\rm Cov}^2_{\sigma'}[N(x),N(y)]
\right| {\cal F}_{\sigma_{\rm int}}\right],
\label{varexp}
\end{align}
where the surfaces $\sigma'$ define a foliation between $\sigma_{\rm int}$ and $\sigma$ and $y\in \sigma'$.
In general for nonzero covariance of $N$, equation
(\ref{varexp}) indicates that the $\mathbb{P}$-expected quantum variance will decrease as $\sigma$ advances through spacetime.
Since the $\mathbb{P}$-expectation of the quantum variance tends to zero, the realized quantum variance must tend to zero. This
implies that the state tends to an $N(x)$-eigenstate on a collapse timescale of order
\begin{align}
\tau_{\rm coll} \sim \frac{ {\rm Var}_{\sigma_{\rm int}}[N(x)]}{\lambda^2 \int d^3 y {\rm Cov}^2_{\sigma_{\rm int}}[N(x),N(y)]}
\label{tau}
\end{align}
(in the frame defined by our chosen time slice).
Now consider the projection operator $P_j = |N_j\rangle \langle N_j|$. Given equation (\ref{collonly}), the quantum expectation of $P_j$
satisfies
\begin{align}
d_x \langle P_j \rangle_{\sigma} = \lambda \langle \{ P_j, N(x)\}\rangle_{\sigma}dB_x
-2\lambda \langle P_j\rangle_{\sigma}\langle N(x) \rangle_{\sigma}dB_x.
\end{align}
This means that $\langle P_j \rangle_{\sigma}$ is a $\mathbb{P}$-martingale, i.e.
\begin{align}
\mathbb{E}^{\mathbb{P}}\left[\left. \langle P_j \rangle_{\sigma} \right| {\cal F}_{\sigma_{\rm int}}\right]
=\langle P_j \rangle_{\sigma_{\rm int}},
\end{align}
for $\sigma_{\rm int}\prec \sigma$. As the quantum variance of $N(x)$ tends to zero, either $\langle P_j\rangle_{\sigma}\rightarrow 1$ or
$\langle P_j\rangle_{\sigma}\rightarrow 0$ depending on whether the state ends up as $|N_j\rangle$ or not.
Let $\sigma_{\rm coll}$ denote the end of this collapse phase where we can apply these limits.
We have
\begin{align}
\mathbb{E}^{\mathbb{P}}\left[\left. \langle P_j \rangle_{\sigma_{\rm coll}} \right| {\cal F}_{\sigma_{\rm int}}\right]
=\mathbb{E}^{\mathbb{P}}\left[\left. 1_{\{ |\Psi(\sigma_{\rm coll})\rangle = |N_j\rangle \}} \right| {\cal F}_{\sigma_{\rm int}}\right]
=\langle P_j \rangle_{\sigma_{\rm int}}.
\end{align}
This equation states that the stochastic probability of a given outcome (in this case $|\Psi(\sigma_{\rm coll})\rangle = |N_j\rangle$)
is given by the initial quantum prediction for the probability of this outcome ($\langle P_j \rangle_{\sigma_{\rm int}}$),
i.e. the Born rule is satisfied.
For the initial state given by (\ref{init}) the result is therefore
\begin{align}
|\Psi(\sigma_{\rm coll})\rangle \simeq
|J_i\rangle |\alpha_i\rangle & \text{ with prob } |c_i|^2.
\label{colstate}
\end{align}
Consider an equal superposition ($|c_1|=|c_2|$) of two matter field states with eigenvalues $J_i(x)$ such that,
in the rest frame of the system, $J_1(x)=J$ only in the spatial region $R_1$ ($J_1(x)=0$ elsewhere) and $J_2(x)=J$ only
in the spatial region $R_2$. Further consider $V_{\triangle}$ to be the spatial volume of the symmetric difference $R_1\triangle R_2$
(containing points belonging to one but not both of $R_1$ and $R_2$).
Using equations (\ref{tau}), (\ref{cov}), (\ref{varperfectN}), (\ref{Neig}), and (\ref{coheig}),
and assuming that the smearing scales associated with $f(x,y)$ and $g(x,y)$ are much smaller
than the scale associated with $V_{\triangle}$, we find $\tau_{\rm coll}\sim \lambda^{-2}V_{\triangle}^{-1} J^{-4}$.
The rate of reduction depends on the magnitude of the matter density eigenvalue $J$ and the spatial extent
of the region $R_1\triangle R_2$. This amplification effect ensures that low energy excitations can be essentially
unaffected by the collapse mechanism whilst large scale superpositions (as characterized by $J$ and
$V_{\triangle}$) undergo rapid state reduction. The precise rate is controlled by the stochastic coupling parameter.
So far we have considered the dynamics of the state vector. In order to describe the system in definite terms
we must consider the dynamics of the local properties of the theory.
In order to estimate the stress-energy density we make the simplifying assumption that in the rest frame of the
matter field, only the $T^{00}$ component is nonzero. This corresponds to the assumption that the matter behaves
as a swarm of noninteracting particles. In this frame we assume that
\begin{align}
T^{00}(x) |J_i\rangle =
E_i(x)|J_i\rangle,
\end{align}
with all other components equal to zero. We further assume $E_i(x)$ and $J_i(x)$ are related in that they agree with
regard to the approximate distribution of matter. With the state given by equation (\ref{JAsol})
(with $\sigma_{\rm int}=plc(x)$), the stress energy density beable in the matter field rest frame is
\begin{align}
\bar{T}^{00}(x) = \sum_i |c_i|^2 E_i (x).
\end{align}
After collapse when the state takes the form of equation (\ref{colstate})
(with $\sigma_{\rm coll}=plc(x)$), $\bar{T}^{00}(x)$ is equal to $E_i(x)$ with probability $|c_i|^2$.
\section{Energy process}
\label{ep}
We have seen in previous sections that the collapse terms in the dynamical equation for the state vector involve
the operators $N(x)$ and that this results in collapse toward an $N(x)$-eigenstate. If we had chosen, for example,
a scalar field operator $\varphi(x)$ in place of $N(x)$ we would expect collapse toward a $\varphi(x)$-eigenstate.
The problem in this case is that, as the $\varphi(x)$-state becomes more certain, the scalar field momentum state
becomes more uncertain. The result is a divergent increase in the energy density \cite{pear3, pearGhir}.
Here we demonstrate that the present model does not suffer from this problem. A sensible choice for the energy of the
pointer field is given by
\begin{align}
H_{\rm pointer} = \int d \omega_x a^{\dagger}(x) i \partial_{x_0} a(x).
\end{align}
This operator generates time translations in the pointer field annihilation and creation operators:
\begin{align}
[H_{\rm pointer},a(x)] = -i\partial_{x_0} a(x)
\;\; ; \;\;
[H_{\rm pointer},a^{\dagger}(x)] = -i\partial_{x_0} a^{\dagger}(x).
\end{align}
For the operator $A(x)$ we have
\begin{align}
[H_{\rm pointer},A(x)] = -\int d\omega_y g(x,y) i\partial_{y_0}\left[ a(y) + a^{\dagger}(y) \right].
\end{align}
The pointer field energy does not generate time translations in $A(x)$ unless we make the assumption that
$\partial_{x_0} g(x,y) \simeq -\partial_{y_0} g(x,y)$, valid when $\bar{T}^{\mu\nu}(x)$ is slowly varying with
time (when compared to $g(x,y)$). We can then integrate by parts to find
\begin{align}
[H_{\rm pointer},A(x)]\simeq -i\partial_{x_0} A(x).
\label{AComAp}
\end{align}
With this approximation the $J(x)A(x)$ interaction term will conserve a total energy of the form
\begin{align}
H_{\rm total} = H_{\rm matter} + H_{\rm pointer} + \int d^3 x J(x) A(x),
\end{align}
provided that $H_{\rm matter}$ satisfies $[H_{\rm matter},J(x)] = -i\partial_{x_0} J(x)$.
We expect the collapse terms in the equations of motion to result in nonconservation of energy since the collapse
process should be able to randomly choose from a superposition of differing energy states. Given some
operator ${O}$ we can show using equation (\ref{SEP}) that its quantum expectation satisfies
\begin{align}
\mbox{$d$}_x\langle {O} \rangle_{\sigma} = {}&
\langle\mbox{$d$}_x{O}\rangle_{\sigma}
-i \langle\left[{O}, J(x)A(x)\right]\rangle_{\sigma} \mbox{$d$}\omega_x
\nonumber\\
& -\mbox{$\textstyle \frac{1}{2}$}\lambda^2\langle\left[N(x),\left[N(x),{O}\right]\right]\rangle_{\sigma}\mbox{$d$}\omega_x \nonumber\\
& +\lambda\langle\left\{{O},N(x)\right\}\rangle_{\sigma}\mbox{$d$} B_x
- 2\lambda\langle{O}\rangle_{\sigma} \langle N(x)\rangle_{\sigma}\mbox{$d$} B_x.
\label{opexp}
\end{align}
For example, setting $O=H_{\rm matter}$ we find
\begin{align}
\mbox{$d$}_x\langle {H_{\rm matter}} \rangle_{\sigma} = & - \langle A(x) \partial_{x_0} J(x) \rangle_{\sigma} \mbox{$d$}\omega_x
\nonumber\\
& +\lambda\langle\left\{{H_{\rm matter}},N(x)\right\}\rangle_{\sigma}\mbox{$d$} B_x
- 2\lambda\langle{H_{\rm matter}}\rangle_{\sigma} \langle N(x)\rangle_{\sigma}\mbox{$d$} B_x.
\label{mattereng}
\end{align}
Any changes in energy described by equation (\ref{mattereng}) must be consistent with experimental bounds
on energy conservation. This will result in bounds on the parameters of the model.
For the pointer field energy,
\begin{align}
\mbox{$d$}_x\langle {H_{\rm pointer}} \rangle_{\sigma} \simeq {}&
- \langle J(x) \partial_{x_0} A(x) \rangle_{\sigma} \mbox{$d$}\omega_x
\nonumber\\
& -\mbox{$\textstyle \frac{1}{2}$}\lambda^2\langle\left[N(x),\left[N(x),{H_{\rm pointer}}\right]\right]\rangle_{\sigma}\mbox{$d$}\omega_x \nonumber\\
& +\lambda\langle\left\{{H_{\rm pointer}},N(x)\right\}\rangle_{\sigma}\mbox{$d$} B_x
- 2\lambda\langle{H_{\rm pointer}}\rangle_{\sigma} \langle N(x)\rangle_{\sigma}\mbox{$d$} B_x,
\label{pointereng}
\end{align}
where we have made use of equation (\ref{AComAp}). To calculate the second term on the right side we use
\begin{align}
[N(x),H_{\rm pointer}] = \int d \omega_y f(x,y) \left[\left(i\partial_{y_0}a^{\dagger}(y)\right) a(y)
+ a^{\dagger}(y) \left(i\partial_{y_0}a(y)\right)\right],
\end{align}
which in turn can be used to show that
\begin{align}
\left[N(x),\left[N(x),{H_{\rm pointer}}\right]\right] = 0.
\end{align}
There are no divergences in the rates of change of either $H_{\rm matter}$ or $H_{\rm pointer}$. There is
exchange of energy between the two fields driven by the $J(x)A(x)$ interaction but the collapse process
conserves total energy in expectation.
Let us briefly consider how this model is affected by letting the smearing function $f$ become a delta function.
In this case we have $N(x)=n(x)$ and we find
\begin{align}
\left[n(x),\left[n(x),{H_{\rm pointer}}\right]\right] = -\left(i\partial_{x_0}a^{\dagger}(x)\right) a(x) \delta^4(0)
+ a^{\dagger}(x) \left(i\partial_{x_0}a(x)\right) \delta^4(0).
\end{align}
The pointer field energy therefore changes at an infinite rate due to the collapse dynamics. It can also be
shown that taking $g(x,y)=\delta^4(x-y)$ leads to divergences for the matter field energy.
\section{Numerical calculation in 2D}
In order to understand the collapse dynamics in more detail we consider a numerical solution of the state evolution
in the simplified case of a 2D spacetime (one space and one time dimension). We use the example of an initial
superposition of two different matter density states.
Consider the state $|J_i\rangle |\alpha_i\rangle$ where in the rest frame of the
matter field we have
\begin{align}
J({x_1}, x_0)|J_i\rangle =
\begin{cases}
J_i|J_i\rangle & \text{for } {l}_i < {x_1} <{u}_i,\\
0 & \text{otherwise},
\end{cases}
\end{align}
where $l_i$ and $u_i$ are constant lower and upper bounds respectively of the spatial extent of the matter density.
Assuming a sufficiently large value for the eigenvalue $J_i$ we can make the approximation (see section \ref{cp})
\begin{align}
N(x_1, x_0)|\alpha_i\rangle \simeq N_i (x_1,x_0)|\alpha_i\rangle.
\end{align}
We further assume that the length scale associated with $f$ and $g$ is sufficiently small (compared to length scales
$u_i-l_i$ and their overlaps) that we can approximate equations (\ref{coheig}) and (\ref{Neig}) to give
\begin{align}
N_i({x_1}, x_0) =
\begin{cases}
J^2_i & \text{for } {l}_i < {x_1} <{u}_i,\\
0 & \text{otherwise}.
\end{cases}
\label{Nsol}
\end{align}
The pointer field thus provides an image of the matter field state.
Suppose that the state of the system following some interaction between matter field and pointer field is
\begin{align}
|\Psi(\sigma)\rangle = c_1 |J_1\rangle |\alpha_1\rangle + c_2 |J_2\rangle |\alpha_2\rangle.
\end{align}
In the rest frame we consider matter density states which are nonzero
only in the following regions: $l_1 = -1 ; u_1 =0$ and $l_2 = 0 ; u_2 =1$. This corresponds to an initial
superposition of two adjacent lumps of matter.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{DJB_fig2}
\caption{
{
Numerical demonstration of the collapse process. The variable
$\mathbb{E}^{\mathbb{P}}\left[ \int dx_1 {\rm Var}_{\sigma}[N(x_1,x_0)]\right]$ tends to zero as the system evolves
indicating that only one of the $N$-eigenstates survives. An example path highlights the stochastic nature of a typical
realized process.
}}
\end{center}
\end{figure}
We can solve equation (\ref{SEP}) numerically for this two-state example. Since the $|\alpha_i\rangle$
are approximate eigenstates of the operator $N(x_1, x_0)$,
the state dynamics reduces to the dynamics of the two coefficients $c_1$
and $c_2$ (taken to be initially equal in our simulation). For simplicity the state evolution is considered only in terms of a foliation of constant $x_0$ surfaces.
The stochastic coupling $\lambda$ is set to be equal to 0.5 and $J_i^2$ is 100 for both $i=1,2$.
In order to characterize the collapse process we use the quantity
\begin{align}
\int dx_1 {\rm Var}_{\sigma}[N(x_1,x_0)].
\label{collmeas}
\end{align}
As this quantity tends to zero, the state must tend to an $N(x_1,x_0)$-eigenstate for all $x_1$. Figure 2 shows how the
$\mathbb{P}$-expectation of (\ref{collmeas}) decreases as the system evolves. The $\mathbb{P}$-expectation is calculated by
Monte Carlo simulation using 200 sample paths. An example path is shown in the figure to highlight the stochastic nature
of the realized process.
Since the $\mathbb{P}$-expectation of (\ref{collmeas}) tends to zero, the realized quantum variance of $N$ must tend to
zero with certainty. The state ends up in the form $|\Psi(\sigma)\rangle = |J_i\rangle |\alpha_i\rangle$. With initial conditions
specified by $c_1 = c_2 = 1/\sqrt{2}$ we find that the proportion of occurrences of
$|\Psi(\sigma)\rangle \rightarrow |J_1\rangle |\alpha_1\rangle$ and
$|\Psi(\sigma)\rangle \rightarrow |J_2\rangle |\alpha_2\rangle$ are even to within statistical error.
The timescale for collapse is of order $10^{-4}-10^{-3}$ (in units defined by the chosen parameters). This is well approximated by the
formula $\tau_{\rm coll}\sim \lambda^{-2}V_{\triangle}^{-1} J^{-4}$.
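To make the two-state reduction concrete: under the eigenstate approximations above, It\^{o} calculus applied to equation (\ref{collonly}) reduces the dynamics of $p=|c_1|^2$ to the scalar martingale $dp = 2\lambda J^2 \sqrt{V_{\triangle}}\, p(1-p)\, dW$ (this reduction is our own sketch, not the code used for Figure 2). The following Python sketch, with the parameters of this section ($\lambda=0.5$, $J_i^2=100$, $V_{\triangle}=2$), reproduces both the Born-rule statistics and the collapse timescale $\lambda^{-2}V_{\triangle}^{-1}J^{-4} = 2\times 10^{-4}$, consistent with the reported range.

```python
import numpy as np

# Monte Carlo sketch (ours) of the two-state reduction: p = |c_1|^2 obeys
# the Ito martingale dp = 2*lam*J2*sqrt(V)*p*(1-p)*dW under the stated
# approximations; p is driven to 0 or 1 with Born-rule frequencies.
rng = np.random.default_rng(0)
lam, J2, V = 0.5, 100.0, 2.0           # coupling lambda, J_i^2, vol(R1 xor R2)
dt, T, n_paths = 1e-6, 5e-3, 500
n_steps = int(round(T / dt))

p = np.full(n_paths, 0.5)              # equal superposition |c_1| = |c_2|
coef = 2.0 * lam * J2 * np.sqrt(V)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    p = np.clip(p + coef * p * (1.0 - p) * dW, 0.0, 1.0)

born = float((p > 0.5).mean())             # fraction collapsing to outcome 1
residual = float((p * (1.0 - p)).mean())   # residual quantum-variance measure
tau = lam ** -2 / (V * J2 ** 2)            # lambda^-2 * V^-1 * J^-4
print(born, residual, tau)                 # born ~ 0.5, residual ~ 0, tau = 2e-4
```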
\section{Discussion}
In this article we have outlined a framework for describing the evolution of relativistic quantum systems
which consistently explains the behavior of both microscopic and macroscopic systems. To do this the model incorporates
quantum state reduction into the standard state dynamics in a way which is not only covariant and frame
independent, but also objective, naturally differentiating between systems of different scale and adjusting
its effect accordingly. In this way the model offers a potential unification of quantum and classical sectors.
Within this framework no judgment is required on when to apply collapse and when to apply unitary evolution (as with
orthodox quantum theory) and it is not necessary to perform an arbitrary separation of system and environment in order
to understand its decoherence properties. The present model leads to the prediction of well defined observer independent
local properties.
The mechanism can be used to describe collapse in any quantum field for which we can form a Lorentz invariant scalar
current $J(x)$. This applies to both fermions and bosons. There is no incompatibility with the inclusion of
gauge field interactions and there are therefore no problems in principle with application to the
standard model of particle physics. (We note that the model of Tumulka \cite{Tum}, although of
interest as a demonstration of a consistent and formally rigorous relativistic collapse model, applies only to a fixed number of
noninteracting quantum particles. To consider any interesting correlations between particle states
in this model they must be encoded in the initial state vector.)
The present model has features with the potential for experimental scrutiny. For example, the expected rates of collapse show a
dependence on the specific system details; local properties of the theory exhibit a well defined stochasticity; and
there is energy transfer between the quantum fields and the pointer field. By quantifying these effects it is hoped that
new tests of quantum theory may be suggested.
\vspace{10pt}
\noindent
{\it Acknowledgments}. It is a pleasure to thank Philip Pearle, GianCarlo Ghirardi, Shelly Goldstein, and Fay Dowker
for helpful discussions.
\section{Introduction}
In many industrial processes, one frequently faces measured data that is composed of the true value of a signal and some random noise. One central problem in signal processing is to find the characteristics of the noise, which is critical to many important tasks such as signal filtering, estimation, prediction and anomaly detection. The probabilistic variance is among the most interesting characteristics of noise, hence its estimation has been a popular topic in many research fields.
In image or speech processing, the estimation of noise properties is mainly linked to denoising processes; several methods have been proposed which deal with the data either in its original space \cite{ref:Immerker-1996} or represented in some transformed (e.g. via FFT and wavelets) basis \cite{ref:Sijbers-2007}, \cite{ref:Hashemi-2010,ref:Beheshti-2003}. In the systems and controls community, the estimation of noise properties has been important for signal estimation and system identification, and it attracted a great deal of research interest in the 1970s and 1980s. Various methods of different types were proposed in that time. Among them, two groups of methods have been reviewed and improved in the past decades, namely the correlation methods and the joint state and parameter estimation methods. The paper \cite{ref:Dunik-2009} gives a brief yet extensive survey of the previous works and shows a simulation-based comparison with selected methods. The result of the comparison favours the correlation methods, which were pioneered by the works in \cite{ref:Mehra-1970,ref:Belanger-1974} and have been improved and extended by \cite{ref:Odelson-2006,ref:Dunik-2008,ref:Ge-2014,ref:Dunik-2018}. The basic idea of the correlation method is to link the auto-correlations of the prediction errors of a sub-optimal linear estimator to the variances of the noises in the state-space equations describing the system. Least-squares fitting is then used to find the best matching variances of the noises. In principle, the method needs knowledge of the deterministic part of the system, and the noise properties are assumed time-invariant; hence its performance is not warranted with unknown system dynamics, inputs, disturbances and time-varying noise properties (the robustness issue can be seen in the simulation in Section \ref{sec:sim}). Robustness is certainly important for industrial applications yet has not been well examined in the literature.
In this paper, we follow the line of the correlation methods and propose a simple and robust method for noise variance estimation with time-varying signals. Unlike most existing methods in this category, our focus is only on the estimation of the variance of the measurement (or sensor) noise. The time-varying signal under our consideration is 1-D and can be considered the output of a system with unmodelled dynamics and disturbances (a detailed description of the signal and noise is given in Section \ref{sec:signal}). We present our method, which utilizes a robust measure of the variability of the innovations, in Section \ref{sec:sec-method} and give an analysis of its estimation error in Section \ref{sec:analysis}. In Section \ref{sec:sim} the proposed method is compared with two popular methods from the literature in a simulation example, where we show the robustness issue of the existing methods and demonstrate how it is handled by the proposed method. We draw conclusions in Section \ref{sec:conclusions}.
\section{The signal and measurement} \label{sec:signal}
Consider a time-varying {\it signal} $\{x_k\}$, where $k=0,1,2,\cdots,$ is the discrete time indices. Suppose that we cannot measure the actual values of the signal but instead we have a sensor measurement in the form of $y_k = x_k + v_k$, where $v_k$ is a white noise sequence with
\begin{equation}\label{eq:meas-noise}
\E(v_k) = 0, \quad \Cov(v_k v_l) = R \delta_{kl},
\end{equation}
for all $k,l=0,1,2,\cdots$. Our goal is to estimate the noise variance $R$ by using the measurement data \footnote{In practice, the noise variance often has event-based changes and thus is piece-wise constant. For simplicity we will not focus our synthesis and analysis on time-varying variances. But we will remark on the trackability of our estimator later in this section and demonstrate it in the simulation example in Section \ref{sec:sim}.}.
It is straightforward to see that the signal and measurement can be described by the following dynamical system:
\begin{subequations}\label{eq:sys}
\begin{align}
x_{k+1} &= x_k + w_k, \label{eq:sys-1} \\
y_k &= x_k + v_k, \label{eq:sys-2}
\end{align}
\end{subequations}
where $w_k$ is the change of the signal between time $k$ and time $k+1$. In this paper, we pursue an approach that is robust to unmodelled system dynamics and disturbances, hence we allow $w_k$ to take any form.
\section{Estimation method for measurement noise} \label{sec:sec-method}
In a conventional setting of the correlation based method for noise variance estimation (see, e.g., \cite{ref:Mehra-1970} and \cite{ref:Dunik-2009}), $w_k$ is assumed to be a sequence of independent random variables with identical distribution and zero mean (that is, all other changes to the signal are separated out in the system equations). This means that $w_k$ satisfies, for all $k,l=0,1,2,\cdots$,
\begin{equation}\label{eq:process-noise-mean-var}
\E(w_k) = 0, \quad \Cov(w_k w_l) = Q \delta_{kl}.
\end{equation}
In addition, $w_k$ is assumed uncorrelated with $v_k$, i.e., $\E\left(w_k v_l\right) = 0$.
A linear estimator for $x_k$, $k=1,2,\cdots$, can be designed as follows:
\begin{subequations}\label{eq:KF}
\begin{align}
\hat x^-_{k} &= \hat x_{k-1}, \label{eq:state-pred} \\
\hat x_{k} &= \hat x^-_{k} + K(y_{k} - \hat x^-_{k}). \label{eq:state-update}
\end{align}
\end{subequations}
The state estimation error $e_k$ and the prediction error $\eta_k$ of the estimator are defined, respectively, as $e_k \defeq x_k - \hat x^-_k$ and $\eta_k \defeq y_{k} - \hat x^-_{k}$. The prediction error $\eta_k$ is usually called the \textit{innovation} (at time $k$). To have a stable estimator, we limit the value of $K$ to the interval $(0,1)$.
The key idea of the correlation method for noise variance estimation is utilizing the dependency of the covariance matrix of $e_k$ and auto-covariance of $\eta_k$ on the noise variance $Q$ and $R$. As derived in \cite{ref:Mehra-1970, ref:Odelson-2006}, the covariance matrix $\E(e_k e^\top_k)$ converges to
\begin{equation}\label{eq:ss-var-est-err}
M = [M - K M - M K^\top + K(M+R)K^\top] + Q;
\end{equation}
and the covariance matrix $\E(\eta_k\eta^\top_k)$ converges to
\begin{align}\label{eq:ss-var-innovation}
C & = M + R.
\end{align}
The matrix $M$ and $C$ are hence called the {\it steady-state} covariance matrix of the estimation error $e_k$ and the innovation $\eta_k$.
For 1-D signal, we can solve \eqref{eq:ss-var-est-err} and \eqref{eq:ss-var-innovation} to obtain an expression for the variance of measurement noise:
\begin{align}\label{eq:R}
R & = \frac{C(2K-K^2)-Q}{2K}.
\end{align}
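For reference, the intermediate algebra behind \eqref{eq:R}: in the scalar case, \eqref{eq:ss-var-est-err} rearranges to
\begin{align}
(2K - K^2) M = K^2 R + Q
\quad\Longrightarrow\quad
M = \frac{K^2 R + Q}{2K - K^2},
\end{align}
and substituting into \eqref{eq:ss-var-innovation} gives
\begin{align}
C = M + R = \frac{2KR + Q}{2K - K^2},
\end{align}
which is solved for $R$ to obtain \eqref{eq:R}.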
It is clear that one can obtain the value of $R$ with the knowledge of $C$ and $Q$ (as $K$ is up to our design). Now, one important choice we make in order to proceed is to drop $Q$ in \eqref{eq:R}. This is because we want to handle the general situation in which the signal variation $w_k$ is unknown and not confined to \eqref{eq:process-noise-mean-var}. Ignoring $Q$ in \eqref{eq:R}, we may approximate the noise variance simply as follows:
\begin{align}\label{eq:R-est}
\hat R & = \frac{\hat C(2K-K^2)}{2K} = \hat C\bigg(1-\frac{K}{2}\bigg),
\end{align}
where $\hat C$ is an estimate of $C$, on which we will elaborate below.
Note that, in the 1-D case, $C$ is just the steady-state variance of the innovation $\eta_k$. In practice, $\Var(\eta_k)$ normally converges fast and hence one may estimate $C$ using a moving time window (which we will call an \textit{estimation window}). Let the size of the window be $m$; then a typical sample estimate of $C$ may be formed as
\begin{equation} \label{eq:C-est-mean}
\hat C(k,m) = \begin{cases}
\frac{1}{m}\sum_{i=k-m}^{k} \left(\eta_i - \left(\frac{1}{m+1}\sum_{j=k-m}^{k}\eta_j\right) \right)^2, & \text{if $k\geq m$}; \\
\frac{1}{k}\sum_{i=0}^{k} \left(\eta_i - \left(\frac{1}{k+1}\sum_{j=0}^{k}\eta_j\right) \right)^2, & \text{if $k < m$}.
\end{cases}
\end{equation}
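As a quick numerical sanity check (our illustration, not from the paper): for a constant signal ($w_k = 0$, i.e. $Q=0$), equations \eqref{eq:ss-var-est-err}--\eqref{eq:ss-var-innovation} give $C = 2R/(2-K)$, so \eqref{eq:R-est} recovers $R$ exactly in steady state. A short Python sketch, using a plain sample variance of the innovations as $\hat C$:

```python
import numpy as np

# Sanity check (ours) of R_hat = C_hat * (1 - K/2) on a constant signal
# (Q = 0), where the relation holds exactly in steady state.
rng = np.random.default_rng(1)
R_true, K, n = 4.0, 0.2, 200_000
y = 3.0 + rng.normal(0.0, np.sqrt(R_true), n)   # x_k = 3 plus sensor noise

x_hat = y[0]
innov = np.empty(n - 1)
for k in range(1, n):                 # estimator: predict, then update
    innov[k - 1] = y[k] - x_hat       # eta_k = y_k - x_k^-
    x_hat = x_hat + K * innov[k - 1]

C_hat = np.var(innov[1000:])          # sample estimate of C, transient dropped
R_hat = C_hat * (1 - K / 2)
print(C_hat, R_hat)                   # C_hat ~ 2R/(2-K) = 4.44, R_hat ~ 4.0
```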
The estimator in \eqref{eq:C-est-mean} is vulnerable to outliers in the measurement noise as well as to large abrupt changes in the signal, since both could generate large innovations that are {\it not} related to the variance of the measurement noise. A much more robust estimator can be formed by utilizing the {\it sample median absolute deviation}:
\begin{equation} \label{eq:C-est-MAD}
\hat C(k,m) = \begin{cases}
\left(a \cdot \mathrm{med}\{|\eta_i - \mathrm{med}\{\eta_i: i\in\overline{k-m,k}\}|: i\in\overline{k-m,k}\}\right)^2, & \text{if $k\geq m$}; \\
\left(a \cdot \mathrm{med}\{|\eta_i - \mathrm{med}\{\eta_i: i\in\overline{0,k}\}|: i\in\overline{0,k}\}\right)^2, & \text{if $k < m$}.
\end{cases}
\end{equation}
where the notation ``$\mathrm{med}$" is the median operator for a finite set of real numbers and $\overline{m,n}$ denotes the integer sequence $\{m, m+1, \cdots, n\}$. The selection of the parameter $a$ depends on the distribution of $\eta_k$. For example, if the $\eta_k$'s are independent and follow an identical Gaussian distribution, then $a$ should be set to $1.4826$ and \eqref{eq:C-est-MAD} converges to \eqref{eq:C-est-mean} almost surely as $m$ goes to infinity (see \cite{ref:Pham-Gia-2001} for more properties of the median absolute deviation). In practice, it may be difficult to know the exact statistical distribution of $\eta_k$; it is probably then wise to make $a$ a tuning parameter.
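The robustness gain is easy to demonstrate numerically (an illustration of ours, using $a=1.4826$, the value consistent with a Gaussian distribution): a handful of large outliers inflates the plain sample variance while leaving the median-absolute-deviation estimate nearly unchanged.

```python
import numpy as np

# Robustness illustration (ours): MAD-based variance estimate vs. sample
# variance when a few outliers contaminate otherwise Gaussian innovations.
rng = np.random.default_rng(2)
eta = rng.normal(0.0, 2.0, 5000)         # Gaussian innovations, true C = 4
eta[::250] += rng.choice([-50.0, 50.0], size=eta[::250].shape)  # 20 outliers

a = 1.4826                                # Gaussian consistency constant
mad_est = (a * np.median(np.abs(eta - np.median(eta)))) ** 2
var_est = np.var(eta)
print(mad_est, var_est)                   # mad_est stays near 4; var_est blows up
```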
It often happens in practice that the variance of the measurement noise varies with time or with the magnitude of the signal, instead of being a constant. To keep track of the noise variance, one can compute $\hat C(k,m)$ with progressing $k$. The window length $m$ should be chosen to balance the trackability and variability of the estimator output: with a long estimation window the estimate tends to fluctuate less but lacks the capability to track fast changes of the noise variance; with a short estimation window, the estimate may vary over a larger range but can respond quickly to changes of the noise variance.
Summarizing what has been presented in this section, we propose the following algorithm for estimation of the variance of the measurement noise.
\vspace{0.5cm}
\begin{algorithm}[H] \label{alg:main}
\DontPrintSemicolon
\SetKwInOut{Parameter}{parameter}
\KwInput{Measurement sequence $y_k,k=0,1,2,\cdots$}
\KwOutput{Estimated noise variance at time $k$ (denoted by $\hat R_k$)}
\Parameter{$K\in(0,1)$ (estimator gain), $m$ (window length)}
\For{$k=0,1,2,\cdots$}
{
\If{$k=0$}
{
$\hat x_0 = y_0$;
}
\Else
{
$\hat x^-_{k} = \hat x_{k-1}$\;
$\eta_k = y_{k} - \hat x^-_{k}$ \tcp*{innovation at step k}
$\hat x_{k} = \hat x^-_{k} + K\eta_k$\;
Estimate the innovation variance $\hat C$ by \eqref{eq:C-est-MAD}\;
Estimate noise variance as $\hat R_k = \hat C\left(1-\frac{K}{2}\right)$
}
}
\caption{A simple noise variance estimation algorithm}
\end{algorithm}
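A straightforward Python rendering of Algorithm \ref{alg:main} might look as follows. This is an illustrative sketch only: the variable names, the parameter values, and the MAD constant $a=1.4826$ (appropriate for Gaussian noise) are our choices, not prescribed by the algorithm.

```python
import numpy as np

def estimate_noise_variance(y, K=0.9, m=2000, a=1.4826):
    """Sketch of Algorithm 1: track the measurement-noise variance R of a
    scalar measurement sequence y, with estimator gain K in (0, 1),
    estimation-window length m, and MAD constant a (Gaussian case)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    eta = np.zeros(n)            # innovations (eta[0] unused since x_hat_0 = y_0)
    R_hat = np.zeros(n)
    x_hat = y[0]
    for k in range(1, n):
        x_pred = x_hat                    # one-step prediction (constant model)
        eta[k] = y[k] - x_pred            # innovation at step k
        x_hat = x_pred + K * eta[k]       # filter update
        w = eta[max(1, k - m):k + 1]      # moving estimation window
        C_hat = (a * np.median(np.abs(w - np.median(w)))) ** 2  # eq. (C-est-MAD)
        R_hat[k] = C_hat * (1.0 - K / 2.0)   # R_hat = C_hat * (2 - K) / 2
    return R_hat

# Constant signal plus white Gaussian noise: R_hat should settle near R_true.
rng = np.random.default_rng(0)
R_true = 0.25
y = 3.0 + rng.normal(0.0, np.sqrt(R_true), 20000)
R_hat = estimate_noise_variance(y, K=0.9, m=2000)
print(R_hat[-1])   # should be close to R_true
```

For a constant signal, the steady-state innovation variance is $\frac{2}{2-K}R$, so the final mapping recovers $R$ itself.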
We have mentioned the selection of the window length $m$ and will address the effect of the choice of $K$ in the next section, in which we give an informal analysis of the performance of the proposed approach.
\section{An informal analysis of the estimation error} \label{sec:analysis}
Naturally, we wish to quantify the estimation error of the proposed simple algorithm, which can be defined as $\epsilon(k,m)=\frac{2-K}{2}\hat C(k,m) - R$. We will show how this error is related to the variation of the signal. As the focus of the paper is a simple and practical estimation method, a rigorous mathematical proof is not pursued here. Furthermore, due to technical difficulties in dealing with median values, we only give an analysis for the estimation using Algorithm 1 with \eqref{eq:C-est-mean}, even though \eqref{eq:C-est-MAD} is the robust and preferred choice in practice \footnote{In the special situation where the innovations $\eta_k$ are independent and follow the same normal distribution, one can show that the estimate \eqref{eq:C-est-MAD} converges almost surely to \eqref{eq:C-est-mean} as the estimation window length $m$ goes to infinity (see, e.g., \cite{ref:Pham-Gia-2001}).}. We use the notation $\smallO_m$ to denote a sequence of random variables, indexed by $m$, which converges to zero in probability as $m$ goes to infinity.
We first note an expression of the innovation $\eta_k$ (see Appendix \ref{sec:appendex-A} for a derivation):
\begin{align} \label{eq:innov}
\eta_k & = (1-K)^{k-1}(y_{0} - \hat x_{0}) + \sum_{i=1}^{k}(1-K)^{i-1}w_{k-i} + \sum_{i=1}^{k}(1-K)^{i-1} \delta v_{k-i},
\end{align}
where $\delta v_{k}$ is the difference of the measurement noise defined as:
\begin{align} \label{eq:def-delta-v}
\delta v_{k} = v_{k+1} - v_{k}.
\end{align}
The first term in \eqref{eq:innov} vanishes with the choice of $\hat x_{0}$ in Algorithm 1, so we may write \eqref{eq:innov} in a simpler form as
\begin{align} \label{eq:innov-in-two-terms}
\eta_k & = \eta_{k,1} + \eta_{k,2},
\end{align}
where we have defined
\begin{align}
\eta_{k,1} & = \sum_{i=1}^{k}(1-K)^{i-1} \delta v_{k-i} \label{eq:innov-first-part}, \\
\eta_{k,2} & = \sum_{i=1}^{k}(1-K)^{i-1}w_{k-i}. \label{eq:innov-second-part}
\end{align}
Consequently, one can rewrite \eqref{eq:C-est-mean} as
\begin{align} \label{eq:C-est-approx}
\hat C(k,m) = S_{11}(k,m) + S_{22}(k,m) + 2\cdot S_{12}(k,m),
\end{align}
where $S_{11}(k,m)=\frac{1}{m}\sum_{i=k-m}^{k} ( \eta_{i,1} - \frac{1}{m+1}\sum_{j=k-m}^k \eta_{j,1} )^2$ is a sample estimate of the variance of $\eta_{k,1}$, which can be shown to converge to $\frac{2}{2-K}R$ in probability as the window length $m$ goes to infinity under the assumption in \eqref{eq:meas-noise} (see Appendix \ref{sec:appendex-B}); $S_{22}(k,m)=\frac{1}{m}\sum_{i=k-m}^{k} (\eta_{i,2} - \frac{1}{m+1}\sum_{j=k-m}^k \eta_{j,2})^2$ captures the variation of the signal as it is related only to $w_k$; and $S_{12}(k,m)=\frac{1}{m}\sum_{i=k-m}^{k} ( \eta_{i,1} - \frac{1}{m+1}\sum_{j=k-m}^k \eta_{j,1} )\cdot ( \eta_{i,2} - \frac{1}{m+1}\sum_{j=k-m}^k \eta_{j,2} )$, which reflects some correlation between the change of the signal and that of the measurement noise.
To simplify further analysis, we choose $K$ close to 1 so that we might only take the first term of the summations in \eqref{eq:innov-first-part} and \eqref{eq:innov-second-part}:
\begin{align}
\eta_{k,1} \approx \delta v_{k-1}, \quad \eta_{k,2} &\approx w_{k-1}. \label{eq:innov-sep-part-approx}
\end{align}
After some slightly involved but straightforward steps, one has $
S_{12}(k,m) = \frac{1}{m}\sum_{i=k-m}^{k}(v_i-v_{i-1})w_{i-1} - \frac{1}{m}\left(\sum_{i=k-m}^{k}(v_i-v_{i-1})\right)\left(\frac{1}{m}\sum_{i=k-m}^{k}w_{i-1}\right)$,
which, under the assumption \eqref{eq:meas-noise}, converges to 0 in probability as $m$ grows large, provided that $\frac{1}{m}\sum_{i=k-m}^{k}w_{i-1}$ is finite (which is always the case in practice) and
\begin{align} \label{eq:uncorrected-w-v}
\frac{1}{m}\sum_{i=k-m}^{k}(v_i-v_{i-1})w_{i-1} = \smallO_m.
\end{align}
In summary, with the assumptions in \eqref{eq:meas-noise} and \eqref{eq:uncorrected-w-v} and the approximations in \eqref{eq:innov-sep-part-approx}, we have from \eqref{eq:C-est-approx} that
\begin{align} \label{eq:C-est-further-approx}
\hat C(k,m) \approx \frac{2}{2-K}R + \frac{1}{m}\sum_{i=k-m}^{k} \left( w_{i-1} - \frac{1}{m+1}\sum_{j=k-m}^k w_{j-1} \right)^2 + \smallO_m.
\end{align}
This means that the estimation error defined at the beginning of this section can be approximated as
\begin{align} \label{eq:err-C-est-approx}
\epsilon(k,m) \approx \frac{2-K}{2m}\sum_{i=k-m}^{k} \left( w_{i-1} - \frac{1}{m+1}\sum_{j=k-m}^k w_{j-1} \right)^2 + \smallO_m.
\end{align}
In particular, if the assumption \eqref{eq:process-noise-mean-var} on $w_k$ holds, then we have $\epsilon(k,m) \approx \frac{2-K}{2}Q + \smallO_m$ from \eqref{eq:err-C-est-approx}.
Therefore, we see that the estimate rendered by Algorithm 1 overestimates the variance of the measurement noise if the signal is time-varying, and the error depends on the variation of the signal within the moving estimation window. It can also be seen that the error is reduced by pushing the estimator gain $K$ close to 1, in view of the stability constraint $K\in(0,1)$ and the prerequisite for the approximation in \eqref{eq:innov-sep-part-approx}. It is worth mentioning that knowledge of the variation of the signal, if available, can certainly be used to reduce or even completely remove the first part of the error in \eqref{eq:err-C-est-approx}.
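The predicted bias can be checked numerically. The following sketch (our own illustration; the values $K=0.98$, $Q=0.04$, $R=0.25$ are arbitrary choices) drives the estimator with a random-walk signal whose step variance is $Q$ and compares the excess $\hat R_k - R$ against the prediction $\frac{2-K}{2}Q$:

```python
import numpy as np

# Random-walk signal with step variance Q, measured with noise variance R.
rng = np.random.default_rng(2)
K, Q, R, n, m = 0.98, 0.04, 0.25, 100000, 20000
x = np.cumsum(rng.normal(0.0, np.sqrt(Q), n))   # slowly wandering signal
y = x + rng.normal(0.0, np.sqrt(R), n)          # noisy measurements

eta = np.zeros(n)
x_hat = y[0]
for k in range(1, n):
    eta[k] = y[k] - x_hat       # innovation
    x_hat += K * eta[k]         # filter update

C_hat = np.var(eta[-m:])                  # eq. (C-est-mean) on the last window
R_hat = C_hat * (1.0 - K / 2.0)
excess = R_hat - R                        # overestimation caused by the signal
print(excess, (2.0 - K) / 2.0 * Q)        # excess should be near the prediction
```

The positive excess confirms the overestimation, and its size is close to the first term of \eqref{eq:err-C-est-approx}.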
\section{Simulation results} \label{sec:sim}
We compare the proposed method with the method described in \cite{ref:Mehra-1970} (with the key equations (22) and (23) therein) and the one described in \cite{ref:Odelson-2006}, which belong to the same class of correlation based methods as the proposed method in this paper (see, e.g., \cite{ref:Dunik-2009}). As in the comparison study \cite{ref:Dunik-2009}, auto-correlations (of the innovation sequence) up to lag 4 are used for both methods. We use a synthetic signal with a sampling frequency of 100~Hz to test each method. The synthetic signal can be seen as mimicking a signal measured from a real industrial process. The process is in steady state at the start ($t=0$) and experiences some disturbance or unknown input at $t = 5$ seconds, which raises the magnitude of the signal; the process turns unstable with an oscillating signal at about $t = 10$ seconds, and the frequency of the oscillation first increases before decreasing back to its original value; the process is stabilized (by, for example, an operator in practice) at time $t=20$ seconds and the signal becomes constant again, with an outlier in the measurement arriving at time $t=22$ seconds. The measurement noises at different time steps are independent and all follow zero-mean normal distributions; the standard deviations of these distributions are shown in Figure \ref{fig:sim-1}.
To have a fair comparison, we use the same value for the estimator gain ($K=0.9902$) in all the three methods (steady-state Kalman filter gain in \cite{ref:Mehra-1970}); the same estimation window length of 100 samples is used for all three methods in estimating the variance of prediction errors. In addition, we take the initial value of the estimated state and its variance as $\hat x_0 = y_0$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{Combined_signal.eps}
\caption{Comparison of estimated measurement noise on a synthetic signal: Added noise is white with zero mean. The noise has piece-wise constant variance that is shown in the figure.}
\label{fig:sim-1}
\end{figure}
We note the following points from the simulation: (1) Due to the use of the same estimator gain, the outputs of all three estimators coincide with each other. (2) The proposed method is able to estimate the varying variance of the noise reasonably well throughout the whole time period. It slightly overestimates the noise variance when the signal has fast oscillations, which agrees with the error analysis in Section \ref{sec:analysis}. (3) There are three places where both the methods in \cite{ref:Mehra-1970} and \cite{ref:Odelson-2006} go wrong. The first happens within the time period $5\leq t \leq 10$ where the signal has a couple of large jumps; the second occurs when the oscillation becomes very fast; and the third happens in the presence of the outlier. The root cause of these issues is that both methods use a least-squares fit to find the ``best" noise variance that matches the variance of the innovation, which becomes large when these three events occur. These changes in the magnitude of the innovation, however, are not related to changes in the variance of the measurement noise.
\section{Conclusions}\label{sec:conclusions}
In this work, we have presented a simple and robust approach to estimate the variance of measurement noise. It belongs to the class of correlation based methods that originated in the 1970s and have been further developed in the past decades. The key to the robustness of our proposed approach is the use of a robust measure of variability (the median absolute deviation) of the error between the measurement and its predicted value from a linear time-invariant estimator, which is linked to the variance of the measurement noise by a simple explicit formula for 1-D signals. The estimate of the noise variance produced by our method is biased if the signal is time-varying, and the bias increases with the variability of the signal. This is the cost of our assumption that the dynamics of the signal are completely unknown, which on the other hand makes the proposed approach widely applicable in practice. We hope this work can attract further research on robust extensions of the popular correlation based methods for the estimation of noise variances.
\addcontentsline{toc}{section}{Introduction}
Localization in commutative algebra means a universal construction
where a set of chosen elements in a given commutative
ring is made invertible (they will become denominators):
the outcome is called a ring of fractions.
The classical example is the
well-known passage from the integers to the field of rational
numbers. It is a very important tool in algebraic
and analytical geometry. In differential geometry, however,
localization is rather used in the analytic sense,
i.e.~the passage from globally defined smooth functions to those
which are only defined on an open subset. It follows from
the classical works by Whitney, Malgrange \cite{Mal67} and Tougeron \cite{Tou72}
that these analytical localizations are often isomorphic to
certain algebraic localizations in the smooth (or
even $\mathcal{C}^k$, $k\in\mathbb{N}$) case, see also
\cite{FS98}, \cite{RS94} and the book \cite{NS03}.
For noncommutative algebras the problem of algebraic
localization has two solutions: a general construction (see e.g.~\cite[p.289]{Lam99}), and the better known solution
initiated by \O{}.~Ore \cite{Ore31} in the 1930s
requiring additional conditions on the multiplicative
subset, the famous Ore conditions. It turns out
that the general construction is rather inexplicit
and in some situations not very practical. On the other hand, the more particular Ore localization shares
almost all properties of the commutative localization.
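For later reference (added here as a reminder; see \cite{Lam99} for the precise statement), the right Ore condition on a multiplicative subset $S$ of a ring $R$ requires that every left fraction can be rewritten as a right one:

```latex
\[
  \forall\, a \in R,\ s \in S:\qquad a S \cap s R \neq \emptyset,
  \quad\text{i.e.}\quad \exists\, t \in S,\ b \in R:\ a t = s b,
\]
```

so that a formal left fraction $s^{-1}a$ may be rewritten as a right fraction $b\,t^{-1}$; for rings with zero divisors one additionally asks that $S$ be right reversible, i.e.~$sa=0$ for some $s\in S$ implies $at=0$ for some $t\in S$.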
In this work we would like to study noncommutative localization of algebras arising in deformation quantization:
in this theory --invented in \cite{BFFLS78} in 1978--
formal associative deformations of the algebra of all
smooth complex-valued functions on a Poisson manifold,
so-called star products, are studied aiming at an interpretation of the noncommutative
multiplication of operators used in quantum mechanics.
It is well-known that the first order commutator of such
a deformation always gives rise to a Poisson bracket, but it
is highly non-trivial to show that every Poisson bracket
arises as a first order commutator of a deformation: this
latter result is the famous Kontsevich formality Theorem,
\cite{Kon03}.
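As a concrete illustration (a standard example, added here for convenience; conventions for the placement of the formal parameter $\lambda$ vary in the literature), the Moyal--Weyl star product on $\mathbb{R}^{2n}$ associated with a constant Poisson structure $\Pi^{ij}$ reads

```latex
\[
  f \star g \;=\; \sum_{r=0}^{\infty} \frac{1}{r!}
  \left(\frac{\lambda}{2}\right)^{r}
  \Pi^{i_1 j_1} \cdots \Pi^{i_r j_r}\,
  \big(\partial_{i_1} \cdots \partial_{i_r} f\big)
  \big(\partial_{j_1} \cdots \partial_{j_r} g\big),
\]
```

whose commutator at first order equals $\lambda\{f,g\}$ with the Poisson bracket $\{f,g\} = \Pi^{ij}(\partial_i f)(\partial_j g)$.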
We consider star products given by formal power series of
bidifferential operators (as almost everyone does): these
multiplications immediately define star products of
locally defined functions by suitable `restrictions'.
We mention that noncommutative localization has been
used for the algebras in noncommutative geometry to
describe inverses of functions appearing in passing to
coordinates, see e.g.~the recent work \cite{ACH16},
\cite{AW17a}, and \cite{AW17b}.\\
In this article we have chosen the algebra of smooth functions on a smooth manifold and not a framework
of algebraic or analytical geometry which would have required a sheaf theoretic approach: firstly, historically deformation quantization has been formulated in a differential geometry framework
and has farther reaching existence and uniqueness results for smooth functions, secondly, in the smooth world there is no urgent need to pass to sheaves which simplifies the
exposition, and thirdly, the commutative
algebra of smooth function algebras seems to be
of a different, `funnier' kind (not Noetherian) which
we liked to rediscover from the classical literature,
in particular from the books \cite{Tou72} and \cite{NS03}.\\
We first show that this analytical localization
of star product algebras is
isomorphic to the algebraic localization
with respect to the set of all those formal power series
of smooth functions whose zeroth order term does nowhere
vanish on the given open set. As a by-product we have the
result that this multiplicative set satisfies the right
(and also left) Ore condition.\\
In a similar way we can show that the set of all
germs of a star product algebra at a given point of the
manifold --defined in analytic terms-- is isomorphic to the noncommutative localization
of the complement of the maximal ideal of all those
formal power series of functions whose term of order
zero vanishes at the point.\\
We also sketch a more general algebraic framework inspired by the two preceding results: given a commutative associative unital algebra $A$ over a commutative ring $K$, a multiplicative subset $S_0$ of $A$, and a bidifferential star product $\star$ (the bidifferential operators are defined in the well-known algebraic sense), then the following two constructions can be compared: first, localizing the bidifferential operators \`{a} la G.~Vezzosi \cite{Vez97} there is
a star product $\star_{S_0}$ on $A_{S_0}[[\lambda]]$, the formal power series whose terms are in the localized algebra $A_{S_0}$, and secondly,
considering first the natural `deformation' of $S_0$,
$S=S_0+\lambda A[[\lambda]]$, which is a noncommutative multiplicative subset of the star product algebra
$\big(A[[\lambda]],\star\big)$ there is the noncommutative
localization (a priori in the general sense) of
$A[[\lambda]]$ with respect to the deformation $S$ of $S_0$. Two quite natural questions arise: `\emph{Does localization commute with deformation?}' --on which
we give a positive answer in case $S_0$ has a sort of
`common multiple property for sequences'--and
`\emph{Is $S$ right or left Ore?}' for which we give
an elementary
counterexample in Section \ref{SecNonOreExample}.
The paper is organised as follows: in the first section we
recall some basic concepts of the commutative algebra of smooth function algebras (where we could not resist
the pleasure of the completely unnecessary study of its prime ideals) following Tougeron's book \cite{Tou72},
and of (non)commutative localization
following Lam's very nice text-book \cite{Lam99}.
\\
In Section 2 we show the first localization result concerning
open sets: an important tool is Tougeron's \emph{fonction aplatisseur} \cite{Tou65} which makes a given sequence of locally defined smooth
functions globally defined by multiplication with a single suitable function that is
nowhere zero on the given open set. In this proof, we had to
make explicit use of the seminorms defining the
Fr\'{e}chet topology of the smooth
function space.\\
In Section 3 we prove a similar result for germs, heavily relying on the first theorem. Note that
there is a slight but important difference between `germs over the formal
power series ring $\mathbb{K}[[\lambda]]$' which we describe and the `formal power series of germs'
on which we comment in Section
\ref{SecCommutativelyLocalizedStarProducts}.\\
Section 4 is devoted to the above-mentioned discussion of commutative localization
of multidifferential operators (defined in the algebraic way) for any commutative
algebra $A$ (over some unital commutative ring $K$)
by general commutative
multiplicative subsets $S_0$ and its comparison with a `natural' noncommutative localization with respect to
$S=S_0+\lambda A[[\lambda]]$. The technical tool
will be a localization theorem for algebraic differential operators due to G.~Vezzosi (1997) \cite{Vez97}.\\
In Section 5 we describe a simple example of a multiplicative set of the
type $S=S_0+\lambda A[[\lambda]]$ in smooth deformation quantization of the plane which is not Ore.\\
\subsubsection*{Acknowledgements}
The authors would like to thank Joakim Arnlind, Alberto Elduque, Jens Hoppe, Camille Laurent-Gengoux, Daniel Panazzolo, Leonid Ryvkin,
Zoran \v{S}koda, and Friedrich Wagemann for many valuable discussions and hints.
\section{Review of basic concepts}
Let $K$ be a fixed commutative associative unital ring such that
$1=1_K\neq 0=0_K$.
All $K$-algebras are supposed to be associative and unital.
We shall include unital $K$-algebras $R$ isomorphic to $\{0\}$ for which $1_R=0_R$. Note that associative
unital rings are always $\mathbb{Z}$-algebras in a natural
way. In order to avoid clumsy notation we shall not
write $1_R$, $1_K$, $0_R$ or $0_K$, but simply $1$ and $0$
where the precise interpretation should be clear from the
context.
\subsection{Review of Commutative Algebra for smooth function algebras}
\subsubsection{Elementary features of function algebras}
\label{SubSubSecElementary features of function algebras}
For the convenience of the reader (working in differential geometry) we shall give the following
elementary survey which can be ignored on first reading. For more information see e.g.~\cite{BouCommAlg}.\\
Recall some elementary commutative algebra of unital $K$-algebras of functions on a set: let
$X$ be a set, $K$ be a commutative domain, and $R$ be a given unital subalgebra
of the $K$-algebra of all the functions $X\to K$. As usual, for any subset
$Y\subset X$, let $I(Y)\subset R$ be the set of functions in $R$ vanishing on
$Y$ which is always an ideal of $R$ (the \emph{vanishing ideal of $Y$}), and for any subset $J\subset R$ let
$Z(J)\subset X$ be the subset of those points of $X$ on which all functions in $J$ vanish. Clearly $Y\subset Z(I(Y))$ and $J\subset I(Z(J))$. Moreover for any two ideals $I_1$ and $I_2$ of $R$ it follows
that $Z(I_1)\cup Z(I_2)= Z(I_1I_2)$ (here the fact that $K$ is a domain is used), hence the set of all subsets
$Z(I)$, $I$ ideal of $R$, satisfies the axioms of the closed sets of
a topology on $X$ called the \emph{Zariski topology on $X$ w.r.t.~$R$}.
These closed subsets could be called \emph{$R$-algebraic sets}: in the particular
case where $K=\mathbb{K}$ is a field, $X=\mathbb{K}^n$, and $R$ the polynomial ring $\mathbb{K}[x_1,\ldots,x_n]$
these are the \emph{algebraic subsets}, whereas for $R$ being the algebra
of analytic functions (for $\mathbb{K}=\mathbb{R}$
or $\mathbb{K}=\mathbb{C}$) these sets are called
\emph{analytic subsets}. It is well-known that for
$X=\mathbb{R}^n$ and $R$ the ring of smooth real-valued functions the Zariski topology of $X=\mathbb{R}^n$
coincides with its usual topology. Returning to the general situation, note that the Zariski closure of
a set $Y\subset X$ is equal to $Z(I(Y))$. On the other hand
the inclusion $J\subset I(Z(J))$ can sometimes be made more precise
by what is called a
\emph{Nullstellensatz}: in algebraic geometry over algebraically closed fields $I(Z(J))$ is equal to the
set of all polynomials in $R$ such that a certain power
is in $J$.\\
Recall that an ideal $I\subset R$ is called \emph{proper} iff
$I\neq R$ iff $1\not\in I$. Moreover, a proper ideal $\mathfrak{m}$ is
called \emph{maximal} iff it is equal to any other proper ideal containing it iff --since $R$ is commutative-- the factor algebra
$R/\mathfrak{m}$ is a field. Recall that a
\emph{multiplicative set} $S\subset R$ is a subset $S$
of $R$
containing $1$ and if $s,s'\in S$ then $ss'\in S$. Moreover a general proper ideal
$\mathfrak{p}$ of
$R$ is called
a \emph{prime ideal} iff the factor algebra
$R/\mathfrak{p}$ is a domain
iff the complementary set $S=R\setminus \mathfrak{p}$ is a \emph{multiplicative subset}.
Recall that \emph{Krull's Lemma} states that given any multiplicative subset $S\subset R$ and ideal $J$ with
$J\cap S=\emptyset$ there is a prime ideal $\mathfrak{p}\supset J$
with $S\cap \mathfrak{p}=\emptyset$, see e.g.~\cite[p.391, Prop.~7.2, Prop.~7.3]{Jac80}.
\subsubsection
{Analytical features of smooth function algebras}
\label{SubSubSec Commutative algebra of smooth function algebras}
Let $X$ be an $N$-dimensional differentiable manifold (whose underlying topological space
we shall always assume to be Hausdorff and second countable).
Let $\mathbb{K}$ denote either the field of all real numbers,
$\mathbb{R}$, or the field of all complex numbers, $\mathbb{C}$.
For any real vector bundle $E$ over $X$ we shall denote by the same symbol
$E$ its complexification. Consider the $\mathbb{K}$-algebra
$A=\mathcal{C}^\infty(X,\mathbb{K})$ of
all smooth $\mathbb{K}$-valued functions $f$ on $X$. Even in the case
where $X$ is an open subset of $\mathbb{R}^n$ the algebraic properties
of $A$ are rather different from the function algebras used in
algebraic or analytic geometry. There has been much work on that
in the past, see e.g.~\cite{Mal67}, \cite{Tou72}, based on the classical works
by Whitney. We shall give a short outline of the features we shall need.
The $\mathbb{K}$-vector space $A$ is given a well-known Fr\'{e}chet
topology which can be conveniently defined in the following terms:
fix a Riemannian metric $h$ on $X$, and let $\nabla$ denote its
Levi-Civita connection. For any nonnegative integer $n$ denote by
$\mathsf{S}^nT^*X$ the $n$th symmetric power of the cotangent bundle
of $X$: its smooth sections can be viewed as smooth functions
on the tangent bundle $\tau_X:TX\to X$ which are of homogeneous polynomial degree $n$
in the direction of the fibres. For any section $\alpha\in
\Gamma^\infty(X,\mathsf{S}^nT^*X)$ let $D\alpha\in\Gamma^\infty(X,\mathsf{S}^{n+1}T^*X)$ be its symmetrized
covariant derivative w.r.t.~$\nabla$ which can be seen as a
symmetric version (depending on $\nabla$!) of the exterior derivative.
Finally for any smooth function $f:X\to\mathbb{K}$ let
$D^nf\in\Gamma^\infty(X,\mathsf{S}^nT^*X)$ be the $n$-fold iterated
symmetrized covariant derivative of $f$, hence $D^0f=f$, $Df=df$,
$D^{n+1}f=D(D^nf)$. For any compact set $K\subset X$ and any nonnegative
integer $m$ define a system of functions $p_{K,m}:A\to \mathbb{R}$ by
\begin{equation}\label{EqDefSeminorms}
p_{K,m}(f)=\max\{|D^nf(v)|~|~n\leq m,~\tau_X(v)\in K~
\mathrm{and}~h(v,v)\leq 1\},
\end{equation}
which will define an exhaustive system of seminorms, hence a locally
convex topological vector space which is known to be metric and
sequentially complete, hence Fr\'{e}chet.
It is not hard to see that the choice of another Riemannian metric
will give another system of seminorms which is equivalent to the first one. For flat $\mathbb{R}^n$ equipped with the usual euclidean scalar product these seminorms are easily seen to be equivalent to the
usual seminorms used in analysis where the higher partial derivatives
are expressed by multi-indices. Pointwise multiplication and
evaluation at a point are well-known to be continuous w.r.t.~the Fr\'{e}chet topology.\\
For later use we shall give the usual definition of \emph{multidifferential operators}
$D:A\times \cdots\times A\to A$ of rank $p$ as a $p$-linear
map (over the ground field $\mathbb{K}$) such that there is a nonnegative integer $l$ and for each
chart $(U,(x^1 ,\ldots,x^N))$ there are smooth functions
$D^{\alpha_1\cdots\alpha_p}:U\to \mathbb{K}$ indexed by
$p$ multi-indices $\alpha_1,\ldots,\alpha_p\in\mathbb{N}^N$ such that for each point $x\in U$, and smooth functions
$f_1,\ldots,f_p\in A$ there is the following local expression
\begin{equation}\label{EqDefMultiDifferentialOperatorsAn}
D(f_1,\ldots,f_p)(x)
=\sum_{|\alpha_1|,\ldots,|\alpha_p|\leq l}
D^{\alpha_1\cdots\alpha_p}(x)
\frac{\partial^{|\alpha_1|}\big(f_1|_U\big)}
{\partial x^{\alpha_1}}(x)\cdots
\frac{\partial^{|\alpha_p|}\big(f_p|_U\big)}
{\partial x^{\alpha_p}}(x)
\end{equation}
where as usual $|\alpha|=|(i_1,\ldots,i_N)|=i_1+\ldots+i_N$ and $\frac{\partial^{|\alpha|}\phi}{\partial x^\alpha}$ is short for $\frac{\partial^{i_1+\ldots+i_N}\phi}
{\partial (x^1)^{i_1}\cdots \partial (x^N)^{i_N}}$.
Note that the value of $D(f_1,\ldots,f_p)$ at $x$ only
depends on the restriction of the functions $f_1,\ldots, f_p$
to any open neighbourhood of $x$: it follows that multidifferential operators can always be \emph{localized in the analytical
sense} that they give rise to unique well-defined multidifferential
operators $D_U$ on $\mathcal{C}^\infty(U,\mathbb{K})$ for any
open subset $U\subset X$ such that $D$ and $D_U$ intertwine the
restriction map $\eta:A\to \mathcal{C}^\infty(U,\mathbb{K})$
in the obvious way.
Multidifferential operators
are well-known to be continuous w.r.t.~the Fr\'{e}chet topology. Furthermore, recall the usual composition rule
of multidifferential operators (inherited by the usual rule for multi-linear maps): given two multidifferential
operators $D$ (of rank $p$) and
$D'$ (of rank $q$) and a positive integer $1\leq i\leq p$ then the map $D\circ_i D'$ defined by $(f_1,\ldots,f_{p+q-1})\mapsto
D\big(f_1,\ldots,f_{i-1},D'(f_i,\ldots,f_{i+q-1}),
f_{i+q},\ldots,f_{p+q-1}\big)$ is a multidifferential operator of rank $p+q-1$. This composition rule
obviously is compatible with localization in the sense
that $\big(D\circ_i D'\big)_U=D_U\circ_i D'_U$.\\
Consider for any given point
$x_0\in X$ the binary relation $\sim_{x_0}$ on $A$ defined by $f\sim_{x_0} g$ iff the two smooth functions $f$ and $g$ have the same Taylor series at $x_0$
w.r.t.~some chosen chart around $x_0$. It is
well-known to be an equivalence relation which does not depend on the chosen chart, an equivalence class is called an \emph{infinite jet}, and
the class of $f$ is called \emph{the infinite jet $j^\infty_{x_0}(f)$ of $f$}, see e.g.~\cite[p.117 section 12]{KMS93}.\\
Apart from the
trivial case where $X$ is a point, $A$ is well-known to have two `bad' features from the point of view of commutative algebra: firstly the product of two non-zero
functions with disjoint supports --which exist in $A$-- clearly vanishes showing that $A$ \emph{has very many nontrivial
zero-divisors}. Secondly $A$ \emph{is NOT Noetherian}: if for each nonnegative integer $n$ we denote by $I_n$ the ideal of all smooth $\mathbb{K}$-valued functions on $\mathbb{R}$ vanishing
on the closed interval
$\left[-\frac{1}{n+1},\frac{1}{n+1}\right]$, then the
ascending sequence $I_n\subset I_{n+1}$ never stabilizes
after a finite number of steps.
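An explicit witness for the strictness of these inclusions (a standard bump-function construction, spelled out here for convenience) is

```latex
\[
  f_n(x) \;=\;
  \begin{cases}
    \exp\!\Big(-\dfrac{1}{x^2 - \frac{1}{(n+1)^2}}\Big), & |x| > \dfrac{1}{n+1},\\[2mm]
    0, & |x| \leq \dfrac{1}{n+1},
  \end{cases}
\]
```

which is smooth and vanishes exactly on $\left[-\frac{1}{n+1},\frac{1}{n+1}\right]$; hence $f_{n+1}\in I_{n+1}$, but $f_{n+1}\big(\tfrac{1}{n+1}\big)>0$ shows $f_{n+1}\notin I_n$.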
\subsubsection{Some ideal theory of smooth function algebras}
In this paragraph we collect some facts of maximal
and prime ideals of $A=\mathcal{C}^\infty(X,\mathbb{K})$
which are major topics in commutative algebra.
The main source will be J.-C.~Tougeron's classic
\cite{Tou72}. This paragraph can be ignored by the impatient reader.\\
For a given ideal $J$ of $A$ its zero set $Z(J)\subset X$ is of course a closed subset of $X$, and the closure $\overline{J}$ of $J$ (w.r.t.~the Fr\'{e}chet topology) in $A$ remains an ideal of $A$. The vanishing ideal of any set is closed in the Fr\'{e}chet topology, hence there is the chain of inclusions
$J\subset \overline{J}\subset I(Z(J))$. Since for any point $x$ not contained in $Z(J)$ there is a function $g$ in the ideal $J$ not vanishing at $x$, a simple partition of unity argument shows that any smooth $\mathbb{K}$-valued function whose support is compact and has empty intersection with $Z(J)$ must be an element of $J$. It follows in particular that an ideal $J$ contains the ideal $\mathcal{D}(X)$ of all smooth $\mathbb{K}$-valued functions
having compact support iff $Z(J)=\emptyset$ iff $J$ is dense in $A$. In the particular case where $X$ is compact this means that the only dense ideal is equal to $A$.
Returning to general $X$ it follows that
every proper ideal of $A$ which is \emph{closed} w.r.t.~the
Fr\'{e}chet topology has a non-empty set of common zeros.\\
In particular, every \emph{closed maximal ideal} of $A$
is equal to the vanishing ideal $I_{x_0}=I(\{x_0\})$ of some
point $x_0\in X$.\\
In general, for the closure of an ideal there is the very useful \emph{Whitney's Spectral Theorem} stating that
a function $g$ belongs to the closure $\overline{J}$ of an
ideal $J$ of $A$ iff for each $x\in X$ there is a function $h\in J$
(whose choice may depend on $x$)
whose infinite jet $j_x^\infty(h)$ is equal to $j_x^\infty(g)$, see e.g.~\cite[p.~91, Cor.~1.6., case $q=1$]{Tou72}. Moreover, ideals having finitely many analytic generators
are always closed, see e.g.~\cite[p.119, Cor.~1.6.]{Tou72},
but there are also closed ideals having finitely many nonanalytic generators, see e.g.~\cite[p.104,~Rem.~4.7, Exemp.~4.8.]{Tou72}. \\
In the following, given $x_0\in X$, denote by $\mathfrak{I}_{x_0}$ the ideal of $A$ consisting of all smooth functions
vanishing in some neighbourhood of $x_0$, and by
$I^\infty_{x_0}$ the ideal of $A$ consisting of all functions $f$ such that $j^\infty_{x_0}(f)=0$.
Clearly $\mathfrak{I}_{x_0}\subset I^\infty_{x_0}\subset I_{x_0}$.
Consider now a \emph{prime ideal}
$\mathfrak{p}\subset A$ of $A$. We know that it is either dense (which is the case iff $Z(\mathfrak{p})=\emptyset$) or has a
nonempty zero set. For each prime ideal it can be shown
that
\begin{equation}\label{EqCompProperPrimeFirstProperties}
Z(\mathfrak{p})\neq \emptyset
~~\Leftrightarrow~~\exists ~ x_\mathfrak{p}\in X:
Z(\mathfrak{p}) = \{x_\mathfrak{p}\}~~\Leftrightarrow~~
\exists ~ x_0\in X:~\mathfrak{I}_{x_0}\subset \mathfrak{p}~~\Leftrightarrow~~
\exists ~ y_0\in X: \mathfrak{p}\subset I_{y_0}.
\end{equation}
and in case one of the four equivalent statements is fulfilled, then $x_0=y_0=x_\mathfrak{p}$ is uniquely determined by $\mathfrak{p}$. \\
Indeed, it is obvious that in
eqn (\ref{EqCompProperPrimeFirstProperties}) the second statement implies the first, which is equivalent to the fourth. Moreover, if $Z(\mathfrak{p})$ contained
two distinct points $x_1,x_2\in X$, there would be two smooth functions
$\varphi_1,\varphi_2\in A$ with disjoint supports such that $\varphi_1(x_1)=1=\varphi_2(x_2)$ (whence
$\varphi_1,\varphi_2\in A\setminus \mathfrak{p}$),
but $\varphi_1\varphi_2=0\in\mathfrak{p}$, contradicting the fact that
$\mathfrak{p}$ is prime. Hence the first, the second,
and the last statement of (\ref{EqCompProperPrimeFirstProperties})
are equivalent, implying the uniqueness and equality of
$x_\mathfrak{p}$ and $y_0$ in case one of these three statements
is fulfilled. Moreover, supposing that $Z(\mathfrak{p})=\{x_\mathfrak{p}\}$, for any $h\in\mathfrak{I}_{x_\mathfrak{p}}$ there is $\varphi\in A$ with
$\varphi(x_\mathfrak{p})=1$ having its support inside the open neighbourhood of $x_\mathfrak{p}$ on which $h$ vanishes, whence
$\varphi\in A\setminus \mathfrak{p}$, but
$\varphi h=0\in\mathfrak{p}$, so $h\in\mathfrak{p}$ since
$\mathfrak{p}$ is prime; hence the second statement
of (\ref{EqCompProperPrimeFirstProperties}) implies the
third. Finally, supposing $\mathfrak{I}_{x_0}\subset \mathfrak{p}$ for some $x_0\in X$, if there were a $g\in\mathfrak{p}$
with $g(x_0)\neq 0$ one could find a nonnegative
function $h\in \mathfrak{I}_{x_0}\subset \mathfrak{p}$
which is strictly positive outside a small neighbourhood of $x_0$,
and a bump function $\chi\in A$ equal to $1$ around $x_0$, such that
$\chi|g|^2+h\in\mathfrak{p}$ has only strictly positive values, hence is invertible,
implying $\mathfrak{p}=A$, which contradicts the fact that $\mathfrak{p}$ is proper. Hence $\mathfrak{p}\subset I_{x_0}$, whence
the third statement implies the last
in eqn (\ref{EqCompProperPrimeFirstProperties}) and the
equality $x_\mathfrak{p}=x_0=y_0$.
\\
Next we shall look at \emph{closed prime ideals}
of $A$:
fix a point $x_0\in X$, then
by Borel's classical Lemma (see e.g.~\cite[p.332,~Satz~5.3.33]{Wal07})
the factor algebra $A/I^\infty_{x_0}$ is isomorphic
to the algebra of formal power series
$\mathbb{K}[[x_1,\ldots,x_n]]$, which is a domain, whence
$I^\infty_{x_0}$ is a prime ideal; it is closed
since $f\mapsto j^k_{x_0}(f)$ is continuous for every
nonnegative integer $k$. Moreover, the obvious inclusion
$\mathfrak{I}_{x_0}\subset I^\infty_{x_0}$ implies
$\overline{\mathfrak{I}_{x_0}}\subset I^\infty_{x_0}$ since $I^\infty_{x_0}$ is closed.
Since for any
$g\in I^\infty_{x_0}$ we have by definition
$j^\infty_{x_0}(g)=0=j^\infty_{x_0}(0)$, and for any
$y\in X\setminus\{x_0\}$ there is a smooth function
$\chi\in A$ vanishing in a suitable open neighbourhood of
$x_0$ and having the constant value $1$ in another suitable open neighbourhood of $y$, it follows that
$\chi g\in \mathfrak{I}_{x_0}$ and
$j^\infty_{y}(\chi g)=j^\infty_{y}(g)$ whence
$g\in \overline{\mathfrak{I}_{x_0}}$ thanks to Whitney's
spectral theorem. This implies the equality
$\overline{\mathfrak{I}_{x_0}}= I^\infty_{x_0}$.
By passing to closures in eqn
(\ref{EqCompProperPrimeFirstProperties}) it immediately follows that for any proper closed prime ideal $\mathfrak{p}$
\begin{equation}
\label{EqCompPropertiesClosedPrimes}
\mathrm{if}~\mathfrak{p}=\overline{\mathfrak{p}}\neq A
~\mathrm{and}~Z(\mathfrak{p})=\{x_\mathfrak{p}\}:
~~~~~\mathfrak{I}_{x_\mathfrak{p}}\subset \overline{\mathfrak{I}_{x_\mathfrak{p}}}
= I^\infty_{x_\mathfrak{p}}
\subset \mathfrak{p} \subset I_{x_\mathfrak{p}}.
\end{equation}
Conversely, another simple application of Whitney's spectral theorem shows that for any prime ideal $\mathfrak{p}$ the inclusion
$I^\infty_{x_0}\subset \mathfrak{p}\subset I_{x_0}$ implies
that $\mathfrak{p}$ is proper and closed.
Moreover, it follows that for each given $x_0\in X$ the set of all closed prime ideals $\mathfrak{p}\subset A$ with
$Z(\mathfrak{p})=\{x_0\}$ is in bijection with
the set of all the prime ideals of the formal power series
algebra $\mathbb{K}[[x_1,\ldots,x_n]]$ via the map
$\mathfrak{p}\mapsto \mathfrak{p}/I_{x_0}^\infty$.
These latter prime ideals can be characterized in a purely algebraic way, see e.g.~
\cite[p.~31,~Prop.~2.2]{Tou72}. Finally, it is somewhat harder to see that
$I^\infty_{x_0}I^\infty_{x_0}=I^\infty_{x_0}$, see
\cite[p.93, Lemme 2.4]{Tou72}
(for the particular case where the closed set equals
$\{x_0\}$) implying that $I^\infty_{x_0}$ is equal to the intersection of all the powers of $I_{x_0}$.\\
Note that there are very many `funny' non-closed prime
ideals of $A$ (even in the case where $X$ is compact):
applying Krull's Lemma to the ideal $\mathcal{D}(X)$
and the multiplicative subset generated by an arbitrary fixed function $f$ having non-compact support yields a dense prime ideal which does not contain $f$. Likewise, applying Krull's Lemma to the
ideal $\mathfrak{I}_{x_0}$ and the multiplicative subset
generated by an arbitrary function $g\in I^\infty_{x_0}$
which is not in $\mathfrak{I}_{x_0}$, we get a proper
non-closed prime ideal $\mathfrak{p}$ with
$Z(\mathfrak{p})=\{x_0\}$, hence containing $\mathfrak{I}_{x_0}$, but not $I^\infty_{x_0}$.
\subsection{(Algebraic) Localization}
\label{SubSecAlgebraicLocalization}
This section recalls well-known results which we present
according to the excellent text-book \cite{Lam99} in a categorically
`tuned' version. See also the rather useful review
\cite{Sko2006} for more aspects.
\subsubsection{Commutative Localization}
Recall that for any domain $ R $ it is always possible to construct a field, called the field of fractions of $ R $, by formally inverting all nonzero elements.
More generally, recall the \emph{localization of a commutative
$K$-algebra $R$}: let $S\subset R$ be a \emph{multiplicative subset}
(i.e.~a subset containing the unit and, with any two
of its elements, their product). Then the following binary relation
$\sim$ on $R\times S$ defined by
\begin{equation}\label{EqDefFractionsEquivalenceRelComm}
(r_1,s_1)\sim (r_2,s_2)~~~\mathrm{if~and~only~if}~~~
\exists~s\in S:~r_1s_2s=r_2s_1s
\end{equation}
is an equivalence relation, and the set of all classes (written as (symbolic) fractions $\frac{r}{s}$) forms a commutative
$K$-algebra $R_S$ --by means of the usual addition and multiplication rules of fractions-- called the \emph{quotient ring}, and a ring homomorphism (the \emph{numerator morphism})
$\eta_{(R,S)}=\eta:R\to R_S$ given by $r\mapsto \frac{r}{1}$
which in particular defines the $K$-algebra structure of
$R_S$.
Let $U(R)\subset R$ denote the multiplicative group of invertible
elements of $R$. A morphism of unital $K$-algebras
$\Phi:R\to R'$ is called \emph{$S$-inverting} (for a multiplicative
subset $S\subset R$) if for each $s\in S$ the image
$\Phi(s)$ is invertible in $R'$, hence $\Phi(S)\subset U(R')$. The following properties of the construction can be observed:
\begin{prop} \label{PLocalizationCommutative}
Let $R$ be a commutative $K$-algebra and
$S\subset R$ be a multiplicative subset. Then the following
is true:
\begin{enumerate}
\item[a.] $ \eta_{(R,S)}(S)\subset U(R_S)$, that is, the homomorphism $\eta_{(R,S)}$ sends elements of $S$ to invertible elements of $R_S$. Moreover, for any commutative unital $K$-algebra $R$ equipped with a multiplicative subset $S\subset R$, the pair $(R_S,\eta_{(R,S)})$ is \textbf{universal} in the sense that any $S$-inverting morphism of unital
$K$-algebras $\alpha:R\to R'$
factorizes uniquely, i.e.~the following diagram commutes:
\begin{equation}\label{EqUniversalityOfNumeratorMap}
\xymatrix{ R \ar[r]^{\eta}\ar[dr]_\alpha & R_S \ar[d]^f \\
& R' & }
\end{equation}
where $f$ is a morphism of unital $K$-algebras determined by
$\alpha$, see e.g.~\cite[p.55, Ch.III]{Mac98}
for definitions of universal objects.
\item[b.] Every element of $ R_S $ can be written as
a fraction
$\eta(r)\eta(s)^{-1}, $ for some $r\in R $ and $s\in S$.
\item[c.] $ \ker(\eta_{(R,S)})=
\{r\in R~|~rs=0\mathrm{\:for\:some\:}s\in S\} $.
\end{enumerate}
\end{prop}
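Parts $b.$ and $c.$ can be made concrete in a small brute-force computation (our illustration; the specific ring is not from the text): localizing $R=\mathbb{Z}/6\mathbb{Z}$ at the multiplicative subset $S=\{1,2,4\}$ directly via the equivalence relation (\ref{EqDefFractionsEquivalenceRelComm}) yields $R_S\cong\mathbb{Z}/3\mathbb{Z}$, and the kernel of the numerator map consists exactly of the elements killed by some $s\in S$:

```python
from itertools import product

# localization of R = Z/6Z at the multiplicative set S = {1, 2, 4}
# (the powers of 2 mod 6), via the relation
# (r1,s1) ~ (r2,s2)  iff  r1*s2*s == r2*s1*s for some s in S
R = range(6)
S = [1, 2, 4]

def equiv(pair1, pair2):
    (r1, s1), (r2, s2) = pair1, pair2
    return any((r1 * s2 * s) % 6 == (r2 * s1 * s) % 6 for s in S)

# partition R x S into equivalence classes (the elements of R_S)
classes = []
for pr in product(R, S):
    for c in classes:
        if equiv(pr, c[0]):
            c.append(pr)
            break
    else:
        classes.append([pr])

print(len(classes))  # 3: the localized ring R_S is isomorphic to Z/3Z

# part c.: the kernel of the numerator map r -> r/1
kernel = [r for r in R if equiv((r, 1), (0, 1))]
print(kernel)  # [0, 3], exactly the r with r*s = 0 for some s in S
```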
\noindent We shall give a more categorical description in the next section.\\
\noindent \textbf{Remarks} (which will only be used in Section
\ref{SecCommutativelyLocalizedStarProducts}):
\begin{enumerate}
\item Recall the well-known \emph{localization of any $R$-module $M$}, see
e.g.~\cite[p.397]{Jac80}, which is a module $M_S$ with respect to the localized algebra $R_S$. It is naturally
isomorphic to $R_S\otimes_R M$, see e.g.~\cite[p.398, Prop.7.6]{Jac80}.
\item Let $R,R'$ be commutative associative unital
$K$-algebras, and $S\subset R$, $S'\subset R'$ multiplicative subsets, respectively.
Then it is straight-forward to see that $S\otimes_K S'=\{s\otimes_K s'\in
R\otimes_K R'~|~s\in S; s'\in S'\}$ is a multiplicative subset of the $K$-algebra $R\otimes_K R'$ and that the tensor product of the numerator morphisms
$\eta_{(R,S)}\otimes_K\eta_{(R',S')}:R\otimes_K R'\to
R_S\otimes_K R'_{S'}$ induces a natural isomorphism
of unital $K$-algebras
\begin{equation}
\label{EqCompLocalizationOfTensorProducts}
(R\otimes_K R')_{S\otimes_K S'} ~\stackrel{\sim}{\longrightarrow}~
R_S\otimes_K R'_{S'}.
\end{equation}
\end{enumerate}
\subsubsection{Noncommutative Localization: General Construction}
Let $R$ be an associative unital $K$-algebra which is not necessarily commutative. Again, we call
$S\subset R$ a multiplicative subset if for all $s,s'\in S$ we have $ss'\in S$ and $1_R=1\in S$. As above, let $U(R)\subset R$ denote the multiplicative subset (which is even a group) of invertible elements of $R$.
\\
Let $K \mathbf{Alg}$ be the category of all associative unital
$K$-algebras.
Moreover, let $K \mathbf{AlgMS}$ be the category of all pairs $(R,S)$ of associative unital $K$-algebras $R$
with a multiplicative subset $S\subset R$ where the morphisms
$(R,S)\to (R',S')$ are morphisms of unital $K$-algebras
$R\to R'$ mapping $S$ into $S'$. Since any morphism of unital $K$-algebras maps the group of invertible elements into the group of invertible elements, there is an obvious functor $\mathcal{U}:K \mathbf{Alg}\to K \mathbf{AlgMS}$ given on
objects by $\mathcal{U}(R)=\big(R,U(R)\big)$.\\
For \emph{commutative $K$-algebras}, the above localization description in Proposition \ref{PLocalizationCommutative},
$a.$, gives rise to a functor
$\mathcal{L}:K \mathbf{AlgMS}\to K \mathbf{Alg}$ associating
to each pair $(R,S)$ the quotient ring $R_S$, and it is not hard to see that it is a \emph{left adjoint} of the functor $\mathcal{U}$, see e.g.~\cite[p.79, Ch.IV]{Mac98} for definitions: the unit of the adjunction gives back the
canonical numerator morphism $\eta$, and the counit is an isomorphism since localization w.r.t.~the group of all invertible elements is isomorphic to the original algebra.
In the general noncommutative situation such a localization
functor $\mathcal{L}:K \mathbf{AlgMS}\to K \mathbf{Alg}$
also always exists; see e.g.~\cite[Prop.(9.2), p.289]{Lam99} for a proof. We present it in the following categorical form:
\begin{prop}\label{PGeneralLocalization}
There is an adjunction of functors
\[
K\mathbf{AlgMS}~~\begin{array}[c]{c}
\underrightarrow{~~~~~\mathcal{L}~~~~~} \\
\overleftarrow{~~~~~\mathcal{U}~~~~~}
\end{array}~~
K\mathbf{Alg}
\]
where $\mathcal{L}$ is the left adjoint to the above functor
$\mathcal{U}$
such that each component $\eta_{(R,S)}$ of the unit
$\eta:I_{K \mathbf{AlgMS}}\naturalto \mathcal{U}\mathcal{L}$
of the adjunction
satisfies the universal property $a.$ of
the previous Proposition \ref{PLocalizationCommutative}
in the general noncommutative case.
We refer to $\mathcal{L}$ as a \textbf{localization functor}.\\
For a given $(R,S)$ in $K\mathbf{AlgMS}$ we denote
by $R_S$ the $K$-algebra $\mathcal{L}(R,S)$ given by
the functor $\mathcal{L}$, and by
$\eta_{(R,S)}:R\to R_S$ the component of the unit of the adjunction. Then $\eta_{(R,U(R))}:R\to R_{U(R)}$ is an
isomorphism, the inverse being the component
$\epsilon_R$ of the counit
$\epsilon:\mathcal{L}\mathcal{U}\naturalto
I_{K\mathbf{Alg}}$ of the adjunction.
Moreover, every element of the $K$-algebra $R_S$ is a finite
sum of products of the form ($\eta=\eta_{(R,S)}$)
\begin{equation}
\label{EqCompLocAlgebraGeneralElement}
\eta(r_1)\big(\eta(s_1)\big)^{-1}\cdots
\eta(r_N)\big(\eta(s_N)\big)^{-1}
\end{equation}
(which may be called `multifractions') with $r_1,\ldots,r_N\in R$ and $s_1,\ldots,s_N\in S$
(note that $r_1$ or $s_N$ may be equal to the unit element of $R$).
\end{prop}
The idea of the proof of \cite[Prop.(9.2), p.289]{Lam99} is as follows (see also the PhD thesis
\cite[p.144]{Ara21} for details): there is a natural
surjective morphism of unital $K$-algebras $\hat{\epsilon}_R$ from the
free $K$-algebra generated by the $K$-module $R$,
$T_KR$, to $R$ which provides us with a natural categorical presentation of
$R$ `by generators and relations': this morphism is given by the $R$-component of the counit $\hat{\epsilon}$ of the well-known adjunction
\[
K\mathbf{Mod}~~\begin{array}[c]{c}
\underrightarrow{~~~~~
{T_K}~~~~~} \\
\overleftarrow{~~~~~\mathcal{O}~~~~~}
\end{array}~~
K\mathbf{Alg}
\]
where $\mathcal{O}$ is the forgetful functor and ${T_K}$ the free algebra functor. Let
$\kappa(R)\subset T_KR$ denote the kernel of
$\hat{\epsilon}_R$. The next step is to add to the generating $K$-module $R$ the free $K$-module
$KS$ with basis $S$, and to consider the two-sided ideal $\kappa(R,S)$ in
the free algebra $T_K(R\oplus KS)$ generated by
$\kappa(R)$ and by the subsets
$\{(s,0)\otimes(0,s)-\mathbf{1}_T~|~s\in S\}$
and $\{(0,s)\otimes (s,0)-\mathbf{1}_T~|~s\in S\}$
of $T_K\big(R\oplus KS\big)$ where the multiplication $\otimes$ and the unit $\mathbf{1}_T$ are taken in the free
algebra $T_K\big(R\oplus KS\big)$. The localized algebra
$\mathcal{L}(R,S)=R_S$ is then defined by
$R_S=T_K\big(R\oplus KS\big)/\kappa(R,S)$, and the
`numerator morphism' $\eta_{(R,S)}:R\to R_S$ is simply
the canonical injection of $R$ into $T_KR\subset T_K\big(R\oplus KS\big)$ followed by the obvious projection. It follows that for every
$s\in S$ its image $\eta_{(R,S)}(s)$ has an inverse by
construction. The verification that this leads to a well-defined functor $\mathcal{L}$ which is a left adjoint to
the functor $\mathcal{U}$ is lengthy, but straight-forward.
The preceding construction shows that the
functor $\mathcal{L}$ provides us with an abstract universal numerator map $\eta_{(R,S)}$ which is
\emph{$S$-inverting} in the sense that every $\eta_{(R,S)}(s)$,
$s\in S$, is invertible in $R_S$ and a natural isomorphism $\epsilon_R$
from an algebra to its localization w.r.t.~its group of units.
\subsubsection{Noncommutative Localization: Ore Localization}
Although the preceding general localization construction
is always well-defined, it exhibits the following drawbacks
which show the need for a more particular localization procedure, due to {\O}.~Ore, 1931, \cite{Ore31}, which we shall sketch in this Section:
\begin{itemize}
\item The construction by generators and relations renders
the localized algebra $R_S$ quite implicit and not always
computable.
\item Of course, it is easy to see that if $S$ contains $0$ then
the localized algebra is trivial,
$R_S\cong\{0\}$. But even for multiplicative subsets $S\subset R$
not containing $0$ it may happen that the localized algebra
$R_S$ is trivial as example $(9.3)$ of
\cite[p.289]{Lam99} shows.
This can never happen in the commutative case since
the equation $\frac{1}{1}=\frac{0}{1}$ is equivalent to
the fact that $0\in S$. This shows the lack of control
over the kernel of the `numerator morphism' $\eta_{(R,S)}$.
\item The presentation of elements of $R_S$ in terms of
sums of `\emph{multifractions}'
as equation (\ref{EqCompLocAlgebraGeneralElement}) shows
is quite clumsy, and one would prefer simple right or
left fractions.
\end{itemize}
In order to motivate the particular conditions on $S$ in the following definition we look at the multifractions which span
the localized $K$-algebra $R_S$, see eqn (\ref{EqCompLocAlgebraGeneralElement}): it may be desirable to transform a multifraction into a simple right fraction, and a partial step may consist in transforming a left fraction
$\big(\eta(s)\big)^{-1}\eta(r)$ (with $r\in R$ and $s\in S$) directly into a right fraction $\eta(r')
\big(\eta(s')\big)^{-1}$ (for some $r'\in R$ and $s'\in S$); applying this step a finite number of times shows that every multifraction is equal to a right fraction. The above condition implies the equation $\eta(rs')=\eta(sr')$ and thus motivates the stronger condition that for any pair $(r,s)\in R\times S$ there is a pair $(r',s')\in R\times S$ such that $rs'=sr'$; this is the well-known \emph{right Ore condition}:
\begin{defi}\label{defi: bigdef}
Let $R$ be an associative unital $K$-algebra,
and $S\subset R$ be a multiplicative subset.
\begin{itemize}
\item[i.]A $K$-algebra $\check{R}_S$ equipped with a
morphism of unital $K$-algebras $\check{\eta}_{(R,S)}=\check{\eta}:R\to \check{R}_S$ is said to be a \textbf{right $K$-algebra of fractions of $(R,S)$} if the following
conditions are satisfied:
\begin{itemize}
\item[a.] $\check{\eta}_{(R,S)}$ is $S$-inverting,
\item[b.] Every element of $\check{R}_S$ is of the form
$\check{\eta}(r)\big(\check{\eta}(s)\big)^{-1} $ for some
$r\in R $ and $s\in S$;
\item[c.] $ \ker(\check{\eta})
=\{r\in R~|~rs=0\mathrm{\:for\:some\:}s\in S\}=:I_{(R,S)}=:I$.
\end{itemize}
\item[ii.] $S$ is called a \textbf{right denominator set}
if it satisfies the following two properties:
\begin{itemize}
\item[a.] For all $r\in R$ and $s\in S$ we have
$rS\cap sR\neq\emptyset$ ($S$ \textbf{right permutable} or \textbf{right Ore set}), i.e.~there are $r'\in R$
and $s'\in S$ such that $rs'=sr'$.
\item[b.] For all $r\in R$ and for all $s'\in S$: if $s'r=0$
then there is $s\in S$ such that $rs=0$
($S$ \textbf{right reversible}).
\end{itemize}
\end{itemize}
\end{defi}
\noindent In case $R$ is commutative every multiplicative subset is
a right denominator set. Moreover the group of all invertible
elements $U(R)$ of any unital $K$-algebra is obviously
a right denominator set.
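The first claim can also be checked mechanically in a finite model (a brute-force sketch of ours, not part of the text): the following test verifies both right denominator properties of Definition \ref{defi: bigdef} for the multiplicative subset $S=\{1,2,4,8\}$ of the commutative ring $\mathbb{Z}/12\mathbb{Z}$:

```python
# brute-force check of Definition ii. in a small commutative ring:
# R = Z/12Z with the multiplicative subset S = {1, 2, 4, 8} (powers of 2 mod 12)
R = range(12)
S = [1, 2, 4, 8]

def right_permutable(R, S, mod):
    # ii.a.: for all r in R and s in S, the sets rS and sR intersect
    return all(any((r * s1) % mod == (s * r1) % mod for s1 in S for r1 in R)
               for r in R for s in S)

def right_reversible(R, S, mod):
    # ii.b.: s'*r == 0 implies r*s == 0 for some s in S
    return all(any((r * s) % mod == 0 for s in S)
               for r in R for s_pr in S if (s_pr * r) % mod == 0)

print(right_permutable(R, S, 12), right_reversible(R, S, 12))  # True True
```

In the commutative case both properties hold trivially (take $s_1=s$, $r_1=r$, resp.~$s=s'$), which is exactly what the brute-force search confirms.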
\noindent The next theorem shows that such a right algebra of fractions
exists iff $S$ is a right denominator set, see also
\cite[Thm (10.6), p.300]{Lam99}:
\begin{theorem} \label{TLocalizationForRightDenominatorSets}
Let $R$ be a unital $K$-algebra and $S\subset R$ be
a multiplicative subset. Then the following is true:
\begin{enumerate}
\item
The $K$-algebra $R$ has a right $K$-algebra of fractions $\check{R}_S$ with respect to the multiplicative subset $S$ if and only if $S$ is a right denominator set.
\item If this is the case each such pair $(\check{R}_S,\check{\eta})$
is universal in the sense of diagram (\ref{EqUniversalityOfNumeratorMap}) and each $\check{R}_S$ is isomorphic
to the canonical localized algebra $R_S$ of
Proposition \ref{PGeneralLocalization}.
\item Each $\check{R}_S$ is isomorphic to the quotient set
$RS^{-1}:=(R\times S)/\sim$ with respect to
the following binary relation $\sim$ on $R\times S$
\begin{equation}\label{EqDefRightFractionsEquivalenceRel}
(r_1,s_1)\sim(r_2,s_2) ~~ \Leftrightarrow~~\exists b_1,b_2\in R~
\mathrm{such~that}~s_1b_1=s_2b_2\in S\mathrm{\:and\:} r_1b_1=r_2b_2 \in R
\end{equation}
which is an equivalence relation generalizing relation (\ref{EqDefFractionsEquivalenceRelComm}). $RS^{-1}$ carries a canonical unital
$K$-algebra structure: addition and
multiplication of equivalence classes
$r_1s_1^{-1}$ and $r_2s_2^{-1}$ (with $r_1,r_2\in R$ and
$s_1,s_2\in S$) are given by
\begin{equation}
\label{EqAdditionMultiplicationOnEqClassesRD}
r_1s_1^{-1}+r_2s_2^{-1}=(r_1c_1+r_2c_2)s^{-1},~~~\mathrm{and}~~~(r_1s_1^{-1})(r_2s_2^{-1})
=(r_1r')(s_2s')^{-1}
\end{equation}
where we have written $s_1c_1=s_2c_2=s\in S$ (with $c_1\in S$ and $c_2\in R$) and $r_2s'=s_1r'$ (with
$s'\in S$ and $r'\in R$) using the right Ore property.
The numerator morphism $\eta_I:R\to RS^{-1}$ is
given by $\eta_I(r)=r1^{-1}$ for all $r\in R$.
\end{enumerate}
\end{theorem}
\noindent
For a proof, see e.g.~\cite[p.244, Thm.~25.3]{Pas91}
\footnote{We are indebted to A.~Eduque for having pointed out this reference to us.} or the PhD thesis
\cite[p.146]{Ara21}.\\
We briefly describe the \textbf{idea of the proof}: whereas
in part 1. the verification of the implication ``$ (i.)~ \Longrightarrow ~(ii.) $'' in Definition \ref{defi: bigdef} is straight-forward, the
converse implication ``$ (i.)~ \Longleftarrow ~(ii.) $''
of Definition \ref{defi: bigdef} is much more involved:
the traditional `steep and thorny way' (originally set up by {\O}ystein Ore, \cite{Ore31}) consists of
a concrete construction of the $K$-algebra $RS^{-1}$ upon
using the above relation (\ref{EqDefRightFractionsEquivalenceRel})
--which reflects the idea of creating `common denominators'-- and defining and verifying the canonical
$K$-algebra structure (\ref{EqAdditionMultiplicationOnEqClassesRD}) on the quotient set $R\times S/\sim$ by hand
which is elementary, but extremely tedious
(even the fact that the above relation
(\ref{EqDefRightFractionsEquivalenceRel}) is transitive requires some work). We refer to Lam's book \cite[p.300-302]{Lam99} for some of the details. \\
There is a different, more elaborate way to prove
part 1. and the rest of the theorem (see \cite[p.244, Thm.~25.3]{Pas91} and \cite[Remark (10.13), p.302,
and footnote 70]{Lam99}): it is instructive to
look first at the equivalence relations created by an arbitrary
$S$-inverting morphism of unital $K$-algebras $\alpha:R\to R'$, the classes being defined by the fibres of the map
$p_\alpha:R\times S\to R'$ given by $p_\alpha(r,s)=
\alpha(r)\big(\alpha(s)\big)^{-1}$, which is already
very close to relation (\ref{EqDefRightFractionsEquivalenceRel}): thanks to the
fact that
the right fractions $\alpha(r)\big(\alpha(s)\big)^{-1}$
form a $K$-subalgebra of $R'$ (here the Ore axiom is
needed) it creates an
algebra structure on the quotient set isomorphic to
the aforementioned subalgebra of $R'$, whence there is no need for tedious
verifications of identities of algebraic structures. The central point then is to construct
a unital $K$-algebra $R'$ and an $S$-inverting morphism
$\alpha:R\to R'$ whose kernel is minimal, hence \emph{equal to
$I_{(R,S)}$} which finally shows that the above algebra
$RS^{-1}$ exists and does everything it should do.
For this construction, the following trick is used:
after `regularizing' $R$ by passing to the factor algebra $\overline{R}=R/I_{(R,S)}$ (where the image multiplicative set $\overline{S}$ no longer contains right or left zero divisors)
one looks at
the endomorphism algebra of the \textbf{injective hull $E$ of the
right $\overline{R}$-module $\overline{R}$}. Every left
multiplication with elements of $\overline{R}$ can be extended (in general non-uniquely) to $E$, and
the extensions of left multiplications with elements
of $\overline{S}$ turn out to be invertible (here the Ore axiom is needed). $R'$ will
then be given by the subalgebra generated by all extensions
of left multiplications and the inverses of left multiplications with elements of $\overline{S}$ modulo the two-sided
ideal of all $\overline{R}$-linear maps $E\to E$ vanishing
on $\overline{R}$: this will resolve the ambiguity of extension, and $\overline{R}$ injects into $R'$, the injection being $\overline{S}$-inverting.
Moreover, in any noncommutative domain (no nontrivial zero divisors) which is \emph{right Noetherian}
(i.e.~where every ascending chain of right ideals stabilizes) the subset of nonzero
elements is always a right denominator set (see \cite[p.304, Cor.~(10.23)]{Lam99} or \cite[p.14, Beisp.~2.3 b)]{BGR73}).
In particular, this applies to every universal enveloping algebra over a finite-dimensional Lie algebra
(over a field $\mathbb{K}$ of characteristic zero) and for the Weyl-algebra generated by
$\mathbb{K}^{2n}$. On the other hand, for the free algebra
$R=T_\mathbb{K}V$ generated by a vector space $V$ of dimension $\geq 2$ over a field $\mathbb{K}$ of characteristic zero
(which is well-known to be isomorphic to the
universal enveloping algebra of the free Lie algebra generated
by $V$) the multiplicative subset of all nonzero elements is neither
a right nor a left denominator set: for two linearly independent elements $v$ and $w$ in $V$ we clearly have
$vR\cap wR=\{0\}$. Hence the above statement about
universal enveloping algebras no longer applies to
infinite-dimensional Lie algebras like the free Lie algebra
generated by $V$. Moreover, inverse images of
right denominator sets are in general not right denominator
sets, as the example of the natural homomorphism
$T_KV\to S_KV$ from the free to the free commutative algebra generated by $V$ shows: as $S_KV$ is a commutative domain, the subset $S=S_KV\setminus \{0\}$ is a right denominator set, whereas its inverse image $T_KV\setminus \{0\}$ is not. On the other hand, every homomorphic image of
a right (or left) Ore set clearly is again a right (or left)
Ore set. However, there may be subsets of right (or left)
denominator sets which are no longer right (or left) denominator sets, as we shall see later in Section
\ref{SecNonOreExample}.
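As a concrete instance of the right Ore condition in the Weyl algebra mentioned above, take $r=\partial$ and $s=x$ in $A_1=\mathbb{K}\langle x,\partial\rangle$: one may choose $s'=x^2$ and $r'=x\partial+2$, since $\partial\circ x^2=x\circ(x\partial+2)$ as differential operators. This can be verified on a generic smooth function (a sympy sketch; the particular choice of $r,s,r',s'$ is ours, not taken from the text):

```python
import sympy as sp

# right Ore condition r*s' = s*r' in the first Weyl algebra K<x, D>,
# D = d/dx, realized as differential operators acting on a generic f(x):
# claim: D o x^2 = x o (x*D + 2)
x = sp.symbols('x')
f = sp.Function('f')(x)

lhs = sp.diff(x**2 * f, x)             # (D o x^2) applied to f
rhs = x * (x * sp.diff(f, x) + 2 * f)  # (x o (x*D + 2)) applied to f
print(sp.simplify(lhs - rhs))  # 0
```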
\subsection{Star Products}
We want to recall some basic definitions and facts about the deformation quantization of smooth manifolds and star products, see \cite{BFFLS78}, \cite{Wal07} for more information.
Given a $\mathbb{K}$-vector space $V$ we denote by $V\ph$ the $\mathbb{K}\ph$-module of formal power series.
An element $v \in V\ph$ can be written uniquely as $v = \sum_{i=0}^\infty v_i \lambda^i$ with $v_i \in V$, and for a
given $v\in V\ph$ and $i\in\mathbb{N}$ we shall always write $v_i\in V$ for the $i$th component of $v$ as a formal power series. We also note that for two $\mathbb{K}$-vector spaces $V,W$ we have
$\Hom_{\mathbb{K}\ph}(V\ph,W\ph) \cong \Hom_{\mathbb{K}}(V,W)\ph$. \\
In the following considerations of differential geometry
we set $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$,
and for any smooth differentiable manifold $X$ we
write $\CCinf(X)=\mathcal{C}^\infty(X,\mathbb{K})$.
\begin{defi}[Star product]\label{DefStarProducts}
A (formal) star product $\star$ on a manifold $X$ is a $\mathbb{K}\ph$-bilinear operation $\CCinf(X)\ph \times \CCinf(X)\ph \rightarrow \CCinf(X)\ph$ --which can always be written as a formal series $f \star g = \sum_{k=0}^\infty \lambda^kC_k(f,g)$ for all $f,g\in\CCinf(X)$--
satisfying the following properties for all $f,g \in \CCinf(X)$ (see section \ref{SubSubSec Commutative algebra of smooth function algebras} for definitions and
notations):
\begin{itemize}
\item $\sum_{l=0}^k \big(C_l\circ_1 C_{k-l}
-C_l\circ_2 C_{k-l}\big)=0$ for all $k\geq 0$,
\item $C_0(f,g) = fg$,
\item $ 1 \star f = f \star 1 = f$,
\end{itemize}
with $\mathbb{K}$-bilinear operators $C_k: \CCinf(X) \times \CCinf(X) \to \CCinf(X)$ which we
always assume to be bidifferential operators.
\end{defi}
\begin{rem}
It follows from the first equation of Definition \ref{DefStarProducts} that $\star$ is associative.
\end{rem}
\noindent Note that every star product $\star$ can be
analytically localized to an associative star product $\star_U$ defined
on $\mathcal{C}^\infty(U)[[\lambda]]$ by the localization
of all the bidifferential operators $C_k$ to $C_{kU}$
(see section \ref{SubSubSec Commutative algebra of smooth function algebras} for more details).
The following well-known explicit star product $\star_s$ on $\mathbb{R}^2$ with coordinates $(x,p)$ will be used
in the sequel:
\begin{equation}\label{EqDefStandardStarProduct}
f\star_s g =\sum_{k=0}^\infty \frac{\lambda^k}{k!}
\frac{\partial^k f}{\partial p^k}
\frac{\partial^k g}{\partial x^k}
\end{equation}
for any two functions $f,g\in \CCinf(\mathbb{R}^2)$.
In the physics literature $\lambda$ corresponds to
$(-\mathbf{i}\hbar)$. Moreover, for functions polynomial
in the `momenta' $p$ it is obvious that the above series
converges, and for $\lambda=1$ one obtains the usual formula
for the symbol calculus of multiplication of differential operators on the
real line (where partial derivatives are always brought to the right and replaced by the new variable $p$).
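Formula (\ref{EqDefStandardStarProduct}) can be implemented directly for polynomial functions, where the series terminates. The following sympy sketch (an illustration, not part of the text) computes $\star_s$ and checks the commutation relation $p\star_s x - x\star_s p=\lambda$ as well as the symbol-calculus interpretation at $\lambda=1$:

```python
import sympy as sp

x, p, lam = sp.symbols('x p lambda')

def star_s(f, g, order=10):
    # the standard star product on R^2: sum_k lam^k/k! d^k f/dp^k d^k g/dx^k,
    # truncated at `order` (exact whenever f is polynomial in p)
    return sp.expand(sum(lam**k / sp.factorial(k)
                         * sp.diff(f, p, k) * sp.diff(g, x, k)
                         for k in range(order + 1)))

# the basic commutation relation of deformation quantization:
print(star_s(p, x) - star_s(x, p))  # lambda

# at lambda = 1 one recovers the symbol calculus of differential operators:
# p corresponds to d/dx, and (d/dx) o x = x d/dx + 1 has symbol x*p + 1
print(star_s(p, x).subs(lam, 1))  # x*p + 1
```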
The star commutator for $a,b \in \CCinf(X)\ph$ is defined by
$[a,b]_\star = a \star b - b\star a$.
As usual, the star commutator satisfies the Leibniz identity, i.e. $[a,b \star c]_\star=[a,b]_\star \star c + b \star [a,c]_\star$, as well as the Jacobi identity, and thus defines the
structure of a non-commutative Poisson algebra. In particular, for every $a \in \CCinf(X)\ph$ the adjoint map $b\mapsto [a,b]_\star$ is a derivation of $\CCinf(X)\ph$.\\
From this it can easily be deduced that the first-order term of a star product defines a Poisson bracket as follows
\begin{equation}
\{f,g\} =\frac{1}{2}\big( C_1(f,g)-C_1(g,f)\big) = \frac{1}{2\lambda} [f,g]_\star \Big|_{\lambda=0} \quad\text{for } f,g \in \CCinf(X).
\end{equation}
For $\CCinf(X)$ it is well-known that every Poisson bracket comes from a unique \emph{Poisson structure} $\pi$ which is a smooth
bivector field $\pi$, i.e.~a smooth section in
$\Lambda^2TX$ satisfying the identity $[\pi,\pi]_S=0$
where $[~,~]_S$ denotes the Schouten bracket, see
e.g.~\cite[p.84-87]{Wal07}: the relation is
$\{f,g\}=\pi(df,dg)$.
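For the standard star product (\ref{EqDefStandardStarProduct}) this recipe yields $\{f,g\}=\frac{1}{2}\big(\frac{\partial f}{\partial p}\frac{\partial g}{\partial x}-\frac{\partial g}{\partial p}\frac{\partial f}{\partial x}\big)$, i.e.~half the canonical bracket on $\mathbb{R}^2$ in these conventions. A short sympy sketch (ours, for illustration) extracts the bracket from the first-order commutator and compares:

```python
import sympy as sp

x, p, lam = sp.symbols('x p lambda')

def star_s(f, g, order=6):
    # the standard star product, truncated (exact for polynomials in p)
    return sp.expand(sum(lam**k / sp.factorial(k)
                         * sp.diff(f, p, k) * sp.diff(g, x, k)
                         for k in range(order + 1)))

def poisson(f, g):
    # {f,g} = (1/(2*lam)) [f,g]_star, evaluated at lam = 0
    comm = star_s(f, g) - star_s(g, f)
    return sp.expand(comm / (2 * lam)).subs(lam, 0)

f, g = x**2 * p, x * p**2
direct = sp.expand(sp.Rational(1, 2) * (sp.diff(f, p) * sp.diff(g, x)
                                        - sp.diff(g, p) * sp.diff(f, x)))
print(poisson(f, g) - direct)  # 0
```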
The very difficult converse problem, whether the Poisson bracket associated to any given Poisson structure $\pi$
arises as the first-order commutator of a star product,
was solved by M.~Kontsevich, see \cite{Kon03}.\\
The following considerations will only be used in
Section \ref{SecNonOreExample}: two star products $\star$, $\star'$ are called \emph{equivalent} if there exists a formal power series of differential operators
$T= \operatorname{id} + \sum_{k=1}^\infty\lambda^k T_k$, with $T(1) =1$ such that $T(f) \star T(g) = T(f \star' g) $
for all $f,g\in \CCinf(X)\ph$. The operator $T$ in the above definition is always invertible and indeed, given a star product $\star$,
$f \star' g := T^{-1}(T(f) \star T(g))$ always gives a new equivalent star product. Two equivalent star products clearly give rise to the same Poisson bracket.\\
For the star product (\ref{EqDefStandardStarProduct})
there is the following well-known transformation
$T=e^{-\lambda \Delta}$ with
$\Delta(f)=\partial^2f/\partial x\partial p$: together with the $\mathbb{K}$-linear (and not $\mathbb{K}[[\lambda]]$-linear) involution
$L:A[[\lambda]]\to A[[\lambda]]$ given by
$L\left(\sum_{r=0}^\infty\lambda^rf_r\right)=
\sum_{r=0}^\infty(-\lambda)^rf_r$ we get --setting
$V=L\circ T$--
\begin{equation} \label{EqDefNeumaierModified}
\big(V(f)\big)\star_s
\big(V(g)\big)
=V\big(g\star_s f\big)
\end{equation}
which can easily be checked on exponential functions
$(x,p) \mapsto e^{ax+bp}$ with $a,b\in \mathbb{K}$.
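Explicitly, on exponentials all operators act by scalars: $\Delta e^{ax+bp}=ab\,e^{ax+bp}$, so $T$ multiplies $e^{ax+bp}$ by $e^{-\lambda ab}$ and $V=L\circ T$ by $e^{+\lambda ab}$, while $e^{ax+bp}\star_s e^{cx+dp}=e^{\lambda bc}e^{(a+c)x+(b+d)p}$. Both sides of (\ref{EqDefNeumaierModified}) then reduce to the same exponential $e^{(a+c)x+(b+d)p}$ times a prefactor $e^{E\lambda}$, and the verification becomes a polynomial identity in the exponents $E$ (a sketch of this bookkeeping in sympy, with our notation for the exponents):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

# exponent of lambda on the left-hand side V(f) star_s V(g),
# with f = e^{ax+bp}, g = e^{cx+dp}: the factors e^{lam*ab}, e^{lam*cd}
# from V, and e^{lam*bc} from the star product
lhs = a*b + c*d + b*c

# exponent on the right-hand side V(g star_s f): e^{lam*ad} from g star_s f,
# sign-flipped by L, plus e^{lam*(a+c)(b+d)} from T after the flip
rhs = -(a*d) + (a + c)*(b + d)

print(sp.expand(lhs - rhs))  # 0
```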
\section{Noncommutative localization of smooth star products on open subsets}
\label{SecNonComLocOpenSets}
Let $(X,\pi)$ be a Poisson manifold, let $\star=
\sum_{k=0}^\infty \lambda^k C_k$ be a star product on
$(X,\pi)$, and let $\Omega\subset X$ be a fixed open set.
We set $K=\mathbb{K}[[\lambda]]$, and consider the $K$-algebra $\big(R=\mathcal{C}^\infty(X)[[\lambda]],\star\big)$.
Moreover, since the star product $\star$ only involves
bidifferential operators, it restricts to a star product $\star_\Omega$ on formal
power-series $\phi\in R_\Omega:=
\mathcal{C}^\infty(\Omega,\mathbb{K})[[\lambda]]$ such that
$\big(R_\Omega,\star_\Omega\big)$ is also a $K$-algebra.
It follows that the restriction map
$\eta_\Omega=\eta:R\to R_\Omega:f\mapsto f|_\Omega$ is a morphism
of unital $K$-algebras. We define the following subsets
$S_\Omega\subset \mathcal{C}^\infty(X,\mathbb{K})$ and $S\subset R$:
\begin{equation}
S_\Omega:=\left\{ g_0\in \mathcal{C}^\infty(X,\mathbb{K})~|~\forall~x\in\Omega:~g_0(x)\neq 0 \right\}
~~\mathrm{and}~~S:=S_\Omega+\lambda R.
\end{equation}
Clearly, $S_\Omega$ is a commutative multiplicative subset of $\mathcal{C}^\infty(X,\mathbb{K})$. Since the constant function $1$ is in $S$, and for any
$g,h\in S$ we have $(g\star h)_0(x)=g_0(x)h_0(x)\neq 0$ (for all
$x\in \Omega$) it follows that \emph{$S$ is a multiplicative subset of
the unital $K$-algebra $R$}.\\
We can now consider the noncommutative localization of
$R$ with respect to $S$ and compare it with the
unital $K$-algebra $R_\Omega$:
\begin{theorem}\label{TLocalizationEquivalenceOpenSubset}
Using the previously fixed notations we get
for any open set $\Omega\subset X$:
\begin{enumerate}
\item $(R_\Omega,\star_\Omega)$ equipped with the restriction morphism
$\eta$ constitutes a right $K$-algebra of fractions for $(R,S)$.
\item As an immediate consequence we have that $S$ is a right denominator set.
\item \textbf{This implies in particular that the algebraic localization $RS^{-1}$
of $R$ with respect to $S$ is isomorphic to
the concrete localization $R_\Omega$ as unital $K$-algebras.}
\end{enumerate}
\end{theorem}
\begin{proof} \textbf{1.} We have to check properties
$(i.a.)$, $(i.b.)$, and $(i.c.)$ of Definition
\ref{defi: bigdef}:\\
$\bullet$~``\emph{$\eta$ is $S$-inverting}''
(property $(i.a.)$): indeed,
this is a classical reasoning from deformation quantization
which we shall repeat for the convenience of the reader. Let
$g\in S$ and $\gamma=\eta(g)$ its restriction to $\Omega$.
Take $\psi\in R_\Omega$ and try to solve the equation
$\gamma\star_\Omega \psi =1$. At order $k=0$ we get the
condition $\gamma_0\psi_0=1$, but since $\gamma_0(x)\neq 0$
for all $x\in \Omega$ the function
$x\mapsto \psi_0(x):=\gamma_0(x)^{-1}$
is well-defined and smooth
in $\mathcal{C}^\infty(\Omega,\mathbb{K})$.
Suppose by induction that the functions $\psi_0,\ldots,\psi_k\in
\mathcal{C}^\infty(\Omega,\mathbb{K})$ have already been found in order to satisfy equation
$\gamma\star_\Omega \psi =1$ up to order $k$. At order $k+1\geq 1$ the condition reads
\[
0 = \big(\gamma\star_\Omega \psi\big)_{k+1}
= \sum_{ \scriptsize \begin{array}{c}
l,p,q=0 \\
l+p+q=k+1
\end{array}
}^{k+1} C_l(\gamma_p,\psi_q)
= \gamma_0\psi_{k+1}
+F_{k+1}(\psi_0,\ldots,\psi_k,\gamma_0,\ldots,
\gamma_{k+1})
\]
where the term starting with $F_{k+1}$ denotes the difference
$\big(\gamma\star_\Omega \psi\big)_{k+1}-\gamma_0\psi_{k+1}$
which obviously does not contain $\psi_{k+1}$. Again,
since $\gamma_0$ is nowhere zero on $\Omega$ the function
$\psi_{k+1}$ can be computed from this equation by
multiplying with $x\mapsto \gamma_0(x)^{-1}$. Hence there is a solution $\psi\in R_\Omega$ of equation
$\gamma\star_\Omega \psi =1$. In a completely analogous way
there is a solution $\psi'\in R_\Omega$ of the equation
$\psi' \star_\Omega\gamma =1$. By associativity of $\star_\Omega$ we get $\psi=\psi'$ as the unique inverse of $\gamma$ in the unital $K$-algebra $R_\Omega$.\\
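The order-by-order inversion used in this argument is completely constructive; a small Python/sympy sketch, taking the standard-ordered bidifferential operators $C_l(f,g)=\frac{1}{l!}\frac{\partial^l f}{\partial p^l}\frac{\partial^l g}{\partial x^l}$ on $\Omega=\mathbb{R}^2$ as an example star product (our choice of example, not part of the proof):

```python
# Order-by-order solution of gamma * psi = 1; the bidifferential operators
# C_l below are an example choice (standard-ordered product on R^2).
import sympy as sp

x, p = sp.symbols('x p')

def C(l, f, g):
    return sp.diff(f, p, l) * sp.diff(g, x, l) / sp.factorial(l)

def star_invert(gamma, N):
    """gamma = [gamma_0, gamma_1, ...] with gamma_0 nowhere zero;
    returns psi = [psi_0, ..., psi_N] with (gamma * psi)_k = delta_{k,0}."""
    gamma = list(gamma) + [sp.Integer(0)] * (N + 1 - len(gamma))
    psi = [sp.cancel(1 / gamma[0])]              # order 0: gamma_0 psi_0 = 1
    for k in range(1, N + 1):
        # F_k = (gamma * psi)_k - gamma_0 psi_k involves only psi_0..psi_{k-1}
        F = sum(C(l, gamma[i], psi[k - l - i])
                for l in range(k + 1) for i in range(k + 1 - l)
                if not (l == 0 and i == 0))
        psi.append(sp.cancel(-F / gamma[0]))
    return psi

gamma0 = 1 + x**2 + p**2                         # nowhere zero, so in S
psi = star_invert([gamma0], 3)
gam = [gamma0] + [sp.Integer(0)] * 3
for k in range(4):                               # check (gamma * psi)_k = delta_{k,0}
    order_k = sum(C(l, gam[i], psi[k - l - i])
                  for l in range(k + 1) for i in range(k + 1 - l))
    assert sp.simplify(order_k - (1 if k == 0 else 0)) == 0
```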
$\bullet$~``\emph{Every $\phi\in R_\Omega$ is equal to
$\eta(f)\star_\Omega\eta(g)^{\star_\Omega -1}$ for some $f\in R$ and $g\in S$}'' (property $(i.b.)$): the main idea is to transfer the proof of Lemma 6.1 of Jean-Claude Tougeron's book \cite[p.113]{Tou72} to the
noncommutative situation. Let
$\phi=\sum_{i=0}^\infty \lambda^i\phi_i\in R_\Omega$. We then fix the following data
which we get thanks to the fact that $X$ and therefore
each open set $\Omega$ is a second countable locally compact topological space: there is a sequence of compact sets
$(K_n)_{n\in\mathbb{N}}$ of $X$, a sequence of open sets
$(W_n)_{n\in\mathbb{N}}$,
and a sequence of smooth functions $(g_n)_{n\in\mathbb{N}}:X\to \mathbb{R}$
such that
\[
\bigcup_{n\in\mathbb{N}}K_n=\Omega,
\]
and
\[
\forall~n\in\mathbb{N}: K_n\subset W_n\subset \overline{W_n}\subset K_{n+1}^\circ
~~\mathrm{and}~~g_n(x)=\left\{\begin{array}{lc}
1 & \mathrm{if}~x\in W_n, \\
0 & \mathrm{if}~x\not\in K_{n+1}, \\
\in [0,1] & \mathrm{else}.
\end{array}\right. .
\]
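The exhaustion data $(K_n,W_n,g_n)$ can be made explicit in simple cases; a numerical Python sketch for $\Omega=(0,1)\subset\mathbb{R}$ with $K_n=[\frac{1}{n+3},1-\frac{1}{n+3}]$, bump-type cutoffs, and weights $\epsilon_j=2^{-j}$ (all concrete choices are ours):

```python
# Numerical sketch (all choices ours): Omega = (0,1), K_n = [r(n), 1-r(n)],
# r(n) = 1/(n+3), with classic exp(-1/t) cutoffs and weights eps_j = 2^(-j).
import math

def h(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def step(t):
    # smooth on R, equal to 0 for t <= 0 and to 1 for t >= 1
    return h(t) / (h(t) + h(1 - t))

def cutoff(x, a_out, a_in, b_in, b_out):
    # smooth, equal to 1 on [a_in, b_in] and to 0 outside (a_out, b_out)
    return (step((x - a_out) / (a_in - a_out))
            * step((b_out - x) / (b_out - b_in)))

r = lambda n: 1.0 / (n + 3)

def g(x, N=120):
    # truncated 'fonction aplatisseur' g = sum_j eps_j g_j
    return sum(2.0**(-n) * cutoff(x, r(n + 1), r(n), 1 - r(n), 1 - r(n + 1))
               for n in range(N))

# strictly positive on (a large part of) Omega, identically zero outside
assert all(g(x) > 0.0 for x in (0.05, 0.25, 0.5, 0.9, 0.99))
assert all(g(x) == 0.0 for x in (0.0, 1.0, -0.3, 1.2))
```

Of course the truncation at $N$ terms only covers $\bigcup_{n<N}K_n$; the text's convergence argument is what allows the full infinite sum.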
We denote by $\gamma_j$ the restriction $\eta(g_j)$ of
$g_j$ to $\Omega$ for each nonnegative integer
$j$.
The idea is to define the denominator function $g$
as a (non formal!) converging sum $g=\sum_{j=0}^\infty \epsilon_jg_j$.
Choose a sequence $(\epsilon_j)_{j\in\mathbb{N}}$ of strictly positive real numbers such that
\[
\forall~j\in\mathbb{N}:~~ \epsilon_jp_{K_{j+1},j}(g_j)< \frac{1}{2^j}~~~\mathrm{and}~~~
\forall~i\leq j\in\mathbb{N}:~~ \epsilon_j\sum_{l=0}^i
p_{K_{j+1},j}\big(C_l(\phi_{i-l},g_j)\big)<
\frac{1}{2^j}~
\]
(see eqn (\ref{EqDefSeminorms}) for the definition of the seminorms
$p_{K,m}$)
which is possible since for each nonnegative integer $j$ there are only finitely many seminorms and functions involved. For all nonnegative integers $i,j,N$ we define the functions
$g_{(N)}\in \mathcal{C}^\infty(X,\mathbb{K})$, and
$\psi_{ij},\psi_{(i,N)}\in \mathcal{C}^\infty(\Omega,\mathbb{K})$:
\[
g_{(N)}:=\sum_{j=0}^N\epsilon_jg_j,~~~
\psi_{ij}:=\sum_{l=0}^iC_l\big(\phi_{i-l},\gamma_{j}\big),~~~
\psi_{(i,N)}:=\sum_{j=0}^N\epsilon_j\psi_{ij}
=\sum_{l=0}^iC_l\big(\phi_{i-l},\gamma_{(N)}\big),
\]
and since $\mathrm{supp}(g_{j})\subset K_{j+1}\subset \Omega$, hence
$\mathrm{supp}(g_{(N)})\subset K_{N+1}\subset \Omega$,
there are unique functions
$f_{ij}\in\mathcal{C}^\infty(X,\mathbb{K})$ such that
\[
f_{ij}(x) := \left\{ \begin{array}{cl}
\psi_{ij}(x) & \mathrm{if}~x\in \Omega,\\
0 & \mathrm{if}~x\not\in\Omega.
\end{array}\right.~,~~~
\mathrm{hence}~~\eta(f_{ij})=\psi_{ij}~~~
\mathrm{and}~~~\mathrm{supp}(f_{ij})\subset K_{j+1}.
\]
For each nonnegative integer $N$ we set
$f_{(i,N)}:=\sum_{j=0}^N\epsilon_jf_{ij}\in
\mathcal{C}^\infty(X,\mathbb{K})$ with
$\mathrm{supp}(f_{(i,N)})\subset K_{N+1}$. Clearly,
$\eta(f_{(i,N)})=\psi_{(i,N)}$.\\
We shall now prove that both sequences $(g_{(N)})_{N\in\mathbb{N}}$, and for each nonnegative integer $i$, $(f_{(i,N)})_{N\in\mathbb{N}}$ are Cauchy
sequences in the complete metric space $\mathcal{C}^\infty(X,\mathbb{K})$. First, it is obvious that for any two compact subsets $K,K'$ and nonnegative integers $m,m'$
we always have for all $f\in\mathcal{C}^\infty(X,\mathbb{K})$
\begin{equation}
\mathrm{if}~K\subset K'~\mathrm{and}~m\leq m'~\mathrm{then}~~ p_{K,m}(f)\leq p_{K',m'}(f).
\end{equation}
Fix a nonnegative integer $i$.
Let $\epsilon\in\mathbb{R}$, $\epsilon>0$, $K\subset X$ a compact subset, and $m\in\mathbb{N}$.
Then there is a nonnegative integer $N_0$ such that
\[
\frac{1}{2^{N_0}}<\epsilon,~~~m\leq N_0,~~~\mathrm{and}~~i\leq N_0.
\]
Then for all nonnegative integers $N,p$ with $N\geq N_0$ we get (since for all $j\in\mathbb{N}$
such that $N+1\leq j$ we have
$m\leq N_0\leq N\leq j$ and $i\leq N$,
and $\mathrm{supp}(f_{i,j})\subset K_{j+1}^\circ \subset K_{j+1}$)
\begin{eqnarray*}
p_{K,m}\big(f_{(i,N+p)}-f_{(i,N)}\big) & = &
p_{K,m}\left(\sum_{j=N+1}^{N+p}\epsilon_jf_{i,j}\right)
\leq \sum_{j=N+1}^{N+p}\epsilon_jp_{K,m}\big(f_{i,j}\big)
=\sum_{j=N+1}^{N+p}\epsilon_j
p_{K\cap K_{j+1},m}\big(\psi_{ij}\big)\\
& \leq & \sum_{j=N+1}^{N+p}\epsilon_j
p_{K_{j+1},j}\left(\sum_{l=0}^iC_l(\phi_{i-l},g_j)\right)
\leq
\sum_{j=N+1}^{N+p}\epsilon_j\sum_{l=0}^i
p_{K_{j+1},j}\left(C_l(\phi_{i-l},g_j)\right)
\\
& < & \sum_{j=N+1}^{N+p} \frac{1}{2^j} =\frac{1}{2^N}\left(1-\frac{1}{2^p}\right)
< \frac{1}{2^N} \leq \frac{1}{2^{N_0}} <\epsilon.
\end{eqnarray*}
It follows that for each $i\in\mathbb{N}$ the sequence $(f_{(i,N)})_{N\in \mathbb{N}}$ is a Cauchy
sequence in the locally convex
vector space $\mathcal{C}^\infty(X,\mathbb{K})$ hence converges to a smooth function
$f_i=\sum_{j=0}^\infty \epsilon_j f_{i,j}$. Replacing in the above reasoning the function
$\phi_0$ by the constant function $1$ on $\Omega$ it follows that
the sequence $(g_{(N)})_{N\in\mathbb{N}}$ converges to a smooth function
$g:X\to \mathbb{R}$. Now let $x\in \Omega$. Then there is a nonnegative integer
$j_0$ such that $x\in K_{j_0}$. It follows from the nonnegativity and the definition of all the $g_j$ and
from the strict positivity of $\epsilon_j$ that
\begin{equation}\label{EqCompFonctionApplatisseur}
g(x)=\sum_{j=0}^\infty \epsilon_j g_j(x)\geq \epsilon_{j_0} g_{j_0}(x)=\epsilon_{j_0}>0
\end{equation}
showing that $g$ takes strictly positive values on $\Omega$
whence $g\in S$.\\
Now let $x\not \in\Omega$. Then for any
$v\in T_xX$ with $h(v,v)\leq 1$
we have that
\[
\forall~m\in\mathbb{N}:~~ (D^mg_{(N)})(v)=\sum_{j=0}^N \epsilon_j (D^mg_{j})(v)=0
\]
because each $g_j$
has compact support in $K_{j+1}\subset \Omega$. Since
$g_{(N)}\to g$ for
$N\to \infty$ it follows by the continuity of differential operators and evaluation functionals that
$D^m g_{(N)}(v)\to D^m g(v)$,
and hence
\begin{equation}\label{EqCompGFlatOutsideOmega}
\forall~x\in X\setminus \Omega,~\forall~m\in \mathbb{N},~\forall~v\in T_xX,
~h(v,v)\leq 1:~~(D^m g)(v)=0,
\end{equation}
and in a completely analogous manner
\[
\forall~x\in X\setminus \Omega,~\forall~m\in \mathbb{N},~\forall~v\in T_xX,
~h(v,v)\leq 1:~~(D^m f_{i})(v)=0.
\]
Hence the infinite jets of all the functions $g$ and $f_i$, $i\in\mathbb{N}$,
vanish outside the open subset $\Omega$. J.-C.~Tougeron
calls the function $g$ \emph{fonction aplatisseur} for the
family $(\phi_i)_{i\in\mathbb{N}}$ in case $C_l=0$ for
$l\geq 1$.\\
Now we get
\[
\left(\phi\star_\Omega \eta(g_{(N)})\right)_i
=\sum_{l=0}^iC_l\big(\phi_{i-l}, \eta(g_{(N)})\big)
=\psi_{(i,N)}=\eta(f_{(i,N)}).
\]
Since the restriction map $\eta:\mathcal{C}^\infty(X,\mathbb{K})\to
\mathcal{C}^\infty(\Omega,\mathbb{K})$ is continuous
(where the Fr\'{e}chet topology on $\mathcal{C}^\infty(\Omega,\mathbb{K})$ is induced by those
seminorms $p_{K,m}$ where $K\subset \Omega$) as are the bidifferential operators $C_l$ we can pass to the limit
$N\to \infty$ in the above equation and get
\[
\phi\star_\Omega \eta(g) =
\sum_{i=0}^\infty \lambda^i
\big(\phi\star_\Omega \eta(g)\big)_i
= \sum_{i=0}^\infty \lambda^i\eta(f_i)=:\eta(f).
\]
Since $g\in S$ it follows that $\eta(g)$ is invertible in
$R_\Omega$ by property $(i.a)$ of Definition
\ref{defi: bigdef}, and the preceding equation implies
$\phi=\eta(f)\star_\Omega \eta(g)^{\star_\Omega -1}$ thus proving property $(i.b)$ of Definition \ref{defi: bigdef}.\\
$\bullet$~\emph{The kernel of $\eta$ is equal to the space
of functions $f\in R$ such that there is $g\in S$
with $f\star g=0$} (property $(i.c)$ of Definition
\ref{defi: bigdef}). Clearly, if there is $f\in R$ and $g\in S$
such that $f\star g=0$ then $\eta(f)\star_\Omega \eta(g)=0$,
and since $\eta(g)$ is invertible in $R_\Omega$ we have
$\eta(f)=0$.\\
Conversely, if $f\in R$ such that $\eta(f)=0$, then for
all integers $i\in\mathbb{N}$ and for all $x\in \Omega$
we have $f_i(x)=0$. Hence the infinite jet of each $f_i$ vanishes
at each point $x\in \Omega$ since $\Omega$ is open. Take
the \emph{fonction aplatisseur} $g\in S$ constructed
in the preceding part of the proof for $\phi_0=1,\phi_i=0$
for all $i\geq 1$. Then we get
\[
\forall~x\in X:~(f\star g)_i(x)
=\sum_{l=0}^i C_l(f_{i-l},g)(x)=
\left\{\begin{array}{cl}
0 & \mathrm{if~}x\in\Omega~\mathrm{since~every~jet~of~each~}f_i~\mathrm{vanishes~in~}\Omega, \\
0 & \mathrm{if~}x\not\in\Omega~
\mathrm{since~every~jet~of~}g
~\mathrm{vanishes~outside~of~}\Omega,
\end{array}\right.
\]
where we have used eqn (\ref{EqCompGFlatOutsideOmega}) for the second alternative of the above statement.
This proves part \textbf{1.} of the theorem.\\
Statements \textbf{2.} and \textbf{3.} are immediate consequences of \textbf{1.} and Theorem \ref{TLocalizationForRightDenominatorSets}.
\end{proof}
\noindent \textbf{Remarks:}
For zero Poisson structure and
trivial deformation $C_l=0$ for all $l\geq 1$ the above
result specializes, upon restricting to terms of order $0$, to the classical result that algebraic
and analytic localization with respect to an open subset
$\Omega\subset X$ are isomorphic for
the commutative $\mathbb{K}$-algebra
$\mathcal{C}^\infty(X,\mathbb{K})$. \\
Moreover, since for any
closed set $F\subset X$ Tougeron's above construction gives us a smooth function $g:X\to \mathbb{R}$ which is nowhere zero on the open set $\Omega=X\setminus F$ and zero outside $\Omega$, hence on $F$, one gets the well-known result that
the Zariski topology on $X$ induced by the commutative
$\mathbb{K}$-algebra $\mathcal{C}^\infty(X,\mathbb{K})$ coincides with
the usual manifold topology because each set $Z(I)$ is closed by continuity of all the functions in the ideal $I$, and conversely every closed set $F$ is the
zero set $Z(gA)$ of the ideal $gA$
(where $A=\mathcal{C}^\infty(X,\mathbb{K})$).\\
Finally, note that the numerator morphism $\eta$ is injective iff the open set $\Omega$ is dense, which is quite easy to see.
\section{Noncommutative germs for smooth star products}
\label{SecNoncommutative germs for smooth star products}
Let $(X,\pi)$ again be a Poisson manifold, and let
$\star=\sum_{l=0}^\infty\lambda^l
C_l$ be a bidifferential
star product. Let $K=\mathbb{K}[[\lambda]]$, and denote the unital $K$-algebra $\big(\mathcal{C}^\infty(X,\mathbb{K})[[\lambda]],\star\big)$
by $R$.
For any open set $U\subset X$ let $R_U$ denote the
unital $K$-algebra $\big(\mathcal{C}^\infty(U,\mathbb{K})[[\lambda]],
\star_U\big)$,
where $\star_U$ denotes the obvious action of the bidifferential operators in $\star$ to the local functions
in $\mathcal{C}^\infty(U,\mathbb{K})$. We write $R_X=R$. For any two open sets with
$U\supset V$, let $\eta^U_V:R_U\to R_V$ denote the restriction morphism, where we write $\eta_U$ for $\eta^X_U$. Clearly, for $U\supset V\supset W$ one has the categorical identities $\eta^V_W\circ \eta^U_V=\eta^U_W$ and $\eta^U_U=\mathrm{id}_U$. Denoting by
$\underline{X}$ the topology of $X$ it is readily checked that
the family $\big(R_U\big)_{U\in \underline{X}}$ with the
restriction morphisms $\eta^U_V$ defines a \emph{sheaf of $K$-algebras over $X$}, see e.g.~the book
\cite{KS06} for definitions.\\
Let $x_0$ be a fixed point in $X$, and let
$\underline{X}_{x_0}\subset \underline{X}$ be the set of all
open sets containing $x_0$. We recall the definition of
the \emph{stalk at $x_0$}, $R_{x_0}$, of the sheaf
$\big(R_U\big)_{U\in \underline{X}}$ whose elements are called \emph{germs at $x_0$}: it is defined as the inductive
limit (or colimit, see \cite{Mac98}) $\lim_{U\in \underline{X}_{x_0}} R_U$. In order to perform computations we recall the more down-to-earth definition: let
$\tilde{R}_{x_0}$ be the disjoint union of all the $R_U$, i.e.~the set of all pairs $(U,f)$ where $U$ is an open
set containing $x_0$ and $f\in \mathcal{C}^\infty(U,\mathbb{K})[[\lambda]]$. Define an
addition $+$ and a multiplication $\star$ on these pairs
by
\[
(U,f)+(V,g)
:=
\big(U\cap V,\eta^U_{U\cap V}(f)+\eta^V_{U\cap V}(g)\big)
~~~\mathrm{and}~~~
(U,f)\star(V,g)
:=
\big(U\cap V,
\eta^U_{U\cap V}(f)\star_{U\cap V}\eta^V_{U\cap V}(g)\big),
\]
and it is easily checked that the addition is associative and
commutative, that the multiplication is associative, and that
there is the distributive law. Furthermore,
the sum of $(U,f)$
and $(V,0)$ equals $\big(U\cap V, \eta^U_{U\cap V}(f)\big)$
which is equal to $(U,f)\star (V,1)=(V,1)\star (U,f)$. Next
the binary relation $\sim_{x_0}$ defined by
\[
(U,f)\sim_{x_0} (V,g)~~\mathrm{iff}~~
\exists~W\in\underline{X}_{x_0}~\mathrm{with~}
W\subset U\cap V:~
\eta^U_W(f)=\eta^V_W(g)
\]
turns out to be an equivalence relation. Denoting by $R_{x_0}$ the quotient set $\tilde{R}_{x_0}/\sim_{x_0}$
and by $\eta^U_{x_0}:R_U\to R_{x_0}$ the restriction of the
canonical projection $\tilde{R}_{x_0}\to R_{x_0}$ to
$R_U\subset \tilde{R}_{x_0}$ (where $\eta^X_{x_0}$ will be shortened by
$\eta_{x_0}:R\to R_{x_0}$) it is easy to see that the above
addition and multiplication passes to the quotient, that
all the zero elements $(U,0)$ are equivalent as are all the
unit elements $(U,1)$, and that this defines the structure of a unital associative $K$-algebra
denoted by $\big(R_{x_0},\star_{x_0}\big)$ on the quotient set such that
all maps $\eta^U_{x_0}:\big(R_U,\star_U\big)\to \big(R_{x_0},\star_{x_0}\big)$ are morphisms of unital $K$-algebras. Note the following equations for all open sets
$U\supset V$:
\begin{equation} \label{EqCompFunctorialOnGerms}
\eta^V_{x_0}\circ \eta^U_V = \eta^U_{x_0}.
\end{equation}
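The pair arithmetic and the equivalence relation $\sim_{x_0}$ can be illustrated by a toy model; a Python/sympy sketch with $U$ an open interval of $\mathbb{R}$, $x_0=0$, and the undeformed ($l=0$) product, where germ equality is only tested by symbolic equality of globally defined expressions (which is of course not available for general smooth functions):

```python
# Toy model (ours): U is an open interval of R containing x0 = 0, f a sympy
# expression; the product is the undeformed (l = 0) one, and germ equality is
# tested by symbolic equality, which suffices for these global expressions.
import sympy as sp

x = sp.symbols('x')

class Pair:
    def __init__(self, a, b, f):
        self.U, self.f = (a, b), sp.sympify(f)
    def __add__(self, other):
        (a, b), (c, d) = self.U, other.U
        return Pair(max(a, c), min(b, d), self.f + other.f)
    def __mul__(self, other):
        (a, b), (c, d) = self.U, other.U
        return Pair(max(a, c), min(b, d), self.f * other.f)
    def germ_eq(self, other):
        # (U,f) ~_{x0} (V,g): agreement on some W contained in U cap V
        return sp.simplify(self.f - other.f) == 0

f = Pair(-1, 1, sp.sin(x))
g = Pair(-sp.Rational(1, 2), 2, 0)
assert (f + g).U == (-sp.Rational(1, 2), 1)   # the sum lives on U cap V ...
assert (f + g).germ_eq(f)                     # ... but has the same germ as f
assert (f * Pair(-2, 3, 1)).germ_eq(f)        # every (V,1) acts as a unit on germs
```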
Define the following subsets $S=S(x_0)$ and $J=J_{x_0}$ of $R$:
\begin{equation}
S=S(x_0)=\left\{g\in R~|~g_0(x_0)\neq 0\right\}
~~~\mathrm{and} ~~~
J=J_{x_0}=\left\{g\in R~|~g_0(x_0)= 0\right\}.
\end{equation}
It is easy to see that $S=R\setminus J$, that $S$ is a \emph{multiplicative subset of $R$}, and that $J_{x_0}$ is a \emph{maximal ideal of $R$} (the quotient $R/J$ is isomorphic
to the quotient $K/(\lambda K)\cong \mathbb{K}$ which is a field).\\
We now have the following analog of Theorem
\ref{TLocalizationEquivalenceOpenSubset}:
\begin{theorem}\label{TLocalizationEquivalenceGerms}
Using the previously fixed notations we get
for any point $x_0\in X$:
\begin{enumerate}
\item $(R_{x_0},\star_{x_0})$ together with the morphism
$\eta_{x_0}:R\to R_{x_0}$ constitutes a right $K$-algebra of fractions for $(R,S(x_0))$.
\item As an immediate consequence we have that $S(x_0)$ is a right denominator set.
\item \textbf{This implies in particular that the algebraic localization $RS^{-1}$
of $R$ with respect to $S=S(x_0)$ is isomorphic to
the concrete stalk $R_{x_0}$ as unital $K$-algebras.}
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{1.} Once again, we have to check properties
$(i.a.)$, $(i.b.)$, and $(i.c.)$ of Definition
\ref{defi: bigdef}:\\
$\bullet$~``\emph{$\eta_{x_0}$ is $S$-inverting}''
(property $(i.a.)$): indeed, let $g\in S(x_0)$. Since
$g_0(x_0)\neq 0$ there is an open neighbourhood $U$ of $x_0$ such that $g_0(y)\neq 0$ for all $y\in U$. Hence the restriction $\eta_U(g)$ is invertible in $(R_U,\star_U)$ by
Theorem \ref{TLocalizationEquivalenceOpenSubset}. Using
eqn (\ref{EqCompFunctorialOnGerms}) we see that $\eta_{x_0}(g)=\eta^U_{x_0}\big(\eta_U(g)\big)$, and the
r.h.s.~is invertible in $R_{x_0}$ as the image of an invertible element $\eta_U(g)$ in $R_U$ with respect to the morphism of
unital $K$-algebras $\eta^U_{x_0}$.\\
$\bullet$~``\emph{Every $\phi\in R_{x_0}$ is equal to
$\eta_{x_0}(f)\star_{x_0}\eta_{x_0}(g)^{\star_{x_0} -1}$ for some $f\in R$ and $g\in S(x_0)$}'' (property $(i.b.)$): indeed, let $\phi\in R_{x_0}$. By definition
of $R_{x_0}$ as a quotient set there is an open neighbourhood $U$ of $x_0$ and an element $\psi\in R_U$
with $\eta^U_{x_0}(U,\psi)=\phi$. According to the preceding
Theorem \ref{TLocalizationEquivalenceOpenSubset} there are
elements $f,g\in R$ with $g_0(y)\neq 0$ for all $y\in U$
such that $\eta_U(f)=\psi\star_U\eta_U(g)$. In particular,
$g_0(x_0)\neq 0$, hence $g\in S(x_0)$. Applying
$\eta^U_{x_0}$ to the preceding equation we get (upon using
eqn (\ref{EqCompFunctorialOnGerms}))
\[
\eta_{x_0}(f)=\eta^U_{x_0}\big(\eta_U(f)\big)
=\Big(\eta^U_{x_0}(\psi)\Big)\star_{x_0}
\Big(\eta^U_{x_0}\big(\eta_U(g)\big)\Big)
=\phi \star_{x_0} \big(\eta_{x_0}(g)\big)
\]
proving the result since $g\in S(x_0)$ and $\eta_{x_0}(g)$
is invertible in the unital $K$-algebra $(R_{x_0},\star_{x_0})$.\\
$\bullet$~\emph{The kernel of $\eta_{x_0}$ is equal to the space of functions $f\in R$ such that there is $g\in S(x_0)$
with $f\star g=0$} (property $(i.c)$). Indeed, given $f\in R$ with $\eta_{x_0}(f)=0$ then there is an open neighbourhood $W$ of $x_0$ such that $\eta_W(f)=\eta_W(0)=0$. By the preceding Theorem
\ref{TLocalizationEquivalenceOpenSubset} there is an element
$g\in S_W\subset S(x_0)$ (which can be chosen to be a
\emph{fonction aplatisseur}) such that $f\star g=0$. This proves \textbf{1.} of the theorem.\\
\textbf{2.} and \textbf{3.} are immediate consequences of
part \textbf{1.} and Theorem \ref{TLocalizationForRightDenominatorSets}.
\end{proof}
\noindent \textbf{Warning:} The stalk $R_{x_0}$ is taken in
the sense of sheaves of $\mathbb{K}[[\lambda]]$-algebras.
Another interpretation would be to consider the
sheaf
$\big(\mathcal{C}^\infty(U,\mathbb{K})
\big)_{U\in\underline{X}_{x_0}}$ of commutative $\mathbb{K}$-algebras and the classical
stalk $\mathcal{C}^\infty(X,\mathbb{K})_{x_0}$: in a completely analogous fashion it can be shown that it is
isomorphic to the algebraic localization with respect to
the multiplicative set of functions which do not vanish
at $x_0$. However the $\mathbb{K}[[\lambda]]$-module
$\mathcal{C}^\infty(X,\mathbb{K})_{x_0}[[\lambda]]$ is NOT
in general isomorphic to the above $R_{x_0}$: if
$f=\sum_{l=0}^\infty \lambda^lf_l$ is
a series of smooth functions such that $f_l$ vanishes
on a closed ball of radius $\epsilon_l>0$ around $x_0$ where
$\epsilon_l\to 0$ (for $l\to \infty$) and is non-zero outside, then the germ of each $f_l$ vanishes, but there
is no common open neighbourhood of $x_0$ such that
$f$ restricted to that neighbourhood vanishes, which would imply that the `$\mathbb{K}[[\lambda]]$-germ of
$f$' vanishes. We shall come back to this problem in Section
\ref{SecCommutativelyLocalizedStarProducts}.
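A concrete choice of such a series can be written down; the following Python sketch takes $f_l$ vanishing identically on the closed ball of radius $\epsilon_l=\frac{1}{l+1}$ around $x_0=0$ and strictly positive outside (the particular formula is ours, the text only needs existence):

```python
# A concrete choice (ours) of the coefficients f_l: each vanishes identically
# on the closed ball [-1/(l+1), 1/(l+1)] around x0 = 0 and is positive outside.
import math

def f_l(l, t):
    s = abs(t) - 1.0 / (l + 1)
    return math.exp(-1.0 / s) if s > 0 else 0.0

# every single coefficient f_l has vanishing classical germ at x0 = 0:
for l in range(5):
    assert all(f_l(l, t) == 0.0 for t in (0.0, 0.5 / (l + 1), -0.9 / (l + 1)))

# but for any candidate neighbourhood (-delta, delta) some f_l is nonzero on it,
# so no single neighbourhood works for the whole series f = sum_l lam^l f_l:
delta = 0.01
assert f_l(1000, 0.9 * delta) > 0.0 and 0.9 * delta < delta
```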
\section{Commutatively localized star products}
\label{SecCommutativelyLocalizedStarProducts}
In this section we shall describe a more algebraic framework to generalize the two preceding sections.
Let in the following $K$ be a fixed unital associative
commutative ring. Unadorned tensor products $\otimes$
are always with respect to $K$, hence $\otimes=\otimes_K$.
\subsection{Algebraic (multi)differential operators and
their localization}
\label{SubSecAlgebraicMultidifferentialOperators}
We shall first recall the well-known theory of
\emph{algebraic
(multi)differential operators and their localization},
see e.g.~\cite{KLV86}, \cite{Nes03}, \cite{LR97},
\cite[p.566-578]{Wal07}, and \cite{Vez97}: let $A$ be a commutative associative unital $K$-algebra. We shall need the theory only for $A$ and its tensor products over $K$,
but --as usual-- indulging in some more generality has the benefit of being more economical for the computations: let $M$ and $N$ be
left $A$-modules. For each $a\in A$ fix the following
$K$-linear maps $L_a$, $R_a$, and $\mathrm{ad}_a$
from the $K$-module $\mathrm{Hom}_K(M,N)$ to itself defined in the following way for all $\phi\in
\mathrm{Hom}_K(M,N)$ and $m\in M$:
\begin{equation}
\big(L_a(\phi)\big)(m)=a\big(\phi(m)\big),~~
\big(R_a(\phi)\big)(m)=\phi(am),~~
\mathrm{ad}_a(\phi)=L_a(\phi)-R_a(\phi)
\end{equation}
which obviously all commute.
Then a $K$-linear map $\phi:M\to N$ is called a
\emph{differential operator of order $k\in\mathbb{N}$ with respect to the $K$-algebra $A$} iff for all $a_1,\ldots,a_{k+1}\in A$ we have
$\left(\mathrm{ad}_{a_1}\circ\cdots\circ \mathrm{ad}_{a_{k+1}}\right)(\phi)=0$. It is well-known
that the set of all differential operators of order $k$
forms an $A$-$A$-bimodule (w.r.t.~$L_a$ and $R_a$), and
that these bimodules form an increasing filtration (indexed by the order) of
the $A$-$A$-bimodule $\mathrm{Hom}_K(M,N)$ whose union
in $\mathrm{Hom}_K(M,N)$ is called the $A$-$A$-bimodule of all differential operators. The
$A$-$A$-bimodule of all differential operators of order
$0$ is clearly identical to the set of all
$A$-linear maps. Moreover, the composition $\psi\circ \phi$
of a differential operator $\phi:M\to N$ of order
$k_1$ and a differential operator $\psi:M\to P$ (where
$P$ is another $A$-module) of order $k_2$ is a differential operator $M\to P$ of order $k_1+k_2$.
Therefore there is a category $A\mathbf{-Moddiff}$
whose objects are $A$-modules and whose morphisms are
differential operators.
Let $A'$ be another unital associative commutative
$K$-algebra, and $M'$, $N'$ be $A'$-modules. If
$\phi:M\to N$ and $\phi':M'\to N'$ are differential operators of order $k$ and $k'$, respectively, with respect to $A$ and $A'$, respectively, then
\begin{equation}\label{EqCompTensorProductOfDiffOps}
\phi\otimes \phi':
M\otimes M'\to N\otimes N'
~~\mathrm{is~a~differential~operator~
of~order~}k+k'~
\mathrm{with~respect~to~}A\otimes A',
\end{equation}
which follows from the obvious equation
$\mathrm{ad}_{a\otimes a'}(\phi\otimes \phi')
=\left(\mathrm{ad_a}(\phi)\right)
\otimes \left(R_{a'}(\phi')\right)
+\left(L_a(\phi)\right)
\otimes \left(\mathrm{ad}_{a'}(\phi')\right)$
for all $a,a'\in A$, and its iterations.
Moreover, if $\chi:A\to A'$ is a $K$-algebra morphism
and $\phi':M'\to N'$ a differential operator of order
$k'$ with respect to $A'$ it is obvious that $\phi'$ is also
a differential operator of the same order $k'$ with respect to $A$ whence there is an obvious \emph{restriction functor}
from $A'\mathbf{-Moddiff}$ to $A\mathbf{-Moddiff}$.
In the particular case of $A'=A_S$, the algebra of quotients of $A$ with respect to a fixed multiplicative subset $S\subset A$, and $\chi=\eta$, the numerator morphism, this restriction functor has a left adjoint which amounts to the \emph{localization of differential operators}, as has been shown by G.~Vezzosi in his PhD thesis, see \cite[Prop.~3.3]{Vez97}:
\begin{theorem}[G.~Vezzosi, 1997]
\label{TVezzosi}
Given the $K$-algebra $A$ and
the multiplicative subset $S$ there is a covariant functor
$(~)_S:A\mathbf{-Moddiff}\to A_S\mathbf{-Moddiff}$
which is left adjoint to the above restriction
functor $A\mathbf{-Moddiff}\leftarrow A_S\mathbf{-Moddiff}$ induced by the numerator morphism $A\to A_S$: on objects it is given by
the localization of modules $M\to M_S$, and
each differential operator $D:M\to N$ of order $k$
w.r.t.~$A$ is mapped to the following differential
operator $D_S:M_S\to N_S$ of the same order $k$
w.r.t.~$A_S$
defined as follows for all $m\in M$ and $s\in S$
\begin{equation}\label{EqDefDifferentialOpLocalized}
D_S\left(\frac{m}{s}\right)
= \sum_{r=1}^{k+1}{k+1 \choose r}(-1)^{r+1}
\frac{D\big(s^{r-1}m\big)}{s^r}.
\end{equation}
In particular, $D_S$ is uniquely determined by
its values $D_S\left(\frac{m}{1}\right)=
\frac{D(m)}{1}$ for all $m\in M$, and it follows that $(D\circ D')_S
=D_S\circ D'_S$ whenever the composition $D\circ D'$
makes sense.
\end{theorem}
\noindent The proof is quite technical: eqn (\ref{EqDefDifferentialOpLocalized}) is motivated by
the fact that if $D_S:M_S\to N_S$ is a differential operator of order $k$ satisfying $D_S(m/1)=D(m)/1$ then --by definition-- it satisfies $0=-(1/s)^{k+1}\big(\ad_{s/1}^{k+1}(D_S)\big)(m/s)$
for all $m\in M$ and $s\in S$ which gives
eqn (\ref{EqDefDifferentialOpLocalized}).
The right hand side of eqn (\ref{EqDefDifferentialOpLocalized}) can be defined for any $K$-linear map $M\to N$ as a
set-theoretic map $M\times S\to N_S$, and the fact that
it only depends (first in a set-theoretical way) on the fraction $\frac{m}{s}$ is shown
by induction on the order of the differential operator $D$. Note also that
it can be shown a posteriori that the integer $k$
in eqn (\ref{EqDefDifferentialOpLocalized}) can
be replaced by any integer $k'\geq k$ without changing
the value of the right hand side.
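Formula (\ref{EqDefDifferentialOpLocalized}) can be tested on the most familiar example; a Python/sympy sketch with $A=K[x]$ and $D=\frac{d}{dx}$ of order $k=1$ (the example is our choice), where the formula must reproduce the usual quotient rule on fractions:

```python
# Check of Vezzosi's formula (our example): A = K[x], D = d/dx, order k = 1.
import sympy as sp

x = sp.symbols('x')

def D(f):
    return sp.diff(f, x)

def D_S(m, s, k=1):
    # right hand side of the localization formula applied to m/s
    return sum(sp.binomial(k + 1, r) * (-1)**(r + 1) * D(s**(r - 1) * m) / s**r
               for r in range(1, k + 2))

m, s = x**3 + 1, x**2 + 1
# the formula reproduces the quotient rule on the localization:
assert sp.simplify(D_S(m, s) - sp.diff(m / s, x)) == 0
# and any k' >= k yields the same value:
assert sp.simplify(D_S(m, s, k=3) - D_S(m, s, k=1)) == 0
```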
Next, let $p$ be a positive integer, let $M_1,\ldots,M_p$,
$N$ be $A$-modules, and $\mathsf{k} =(k_1,\dots ,k_p) \in
\mathbb{N}^p$ a multi-index. Recall that a $K$-linear map
$C:M:=M_1\otimes\cdots\otimes M_p\to N$ is called a \emph{multidifferential operator of rank} $p$ \emph{of order}
$\mathsf{k}$ \emph{with respect to $A$} --which is sometimes also called a \emph{polydifferential operator}-- iff for each integer $1\leq i\leq p$ and
for all $m_1\in M_1,\ldots,m_{i-1}\in M_{i-1}$, $m_{i+1}\in M_{i+1},\ldots,m_p\in M_p$ the $K$-linear map
$M_i\to N$ given by $m_i\mapsto C(m_1\otimes\cdots\otimes m_p)$ is a differential operator of order $k_i$. For the particular case $A = \CCinf(X)$ for a smooth manifold $X$ this algebraic definition is well-known to coincide with the analytic definition, see e.g.~\cite[p.~575, Satz A.5.2.]{Wal07} which means that in local charts an (algebraically defined) multidifferential operator looks as in eqn (\ref{EqDefMultiDifferentialOperatorsAn}).\\
For our purposes it is more convenient to use the following formulation: note that the $K$-module
$M_1\otimes\cdots\otimes M_p$ is a module with respect to
the unital commutative associative $K$-algebra
$A^{\otimes p}=A\otimes\cdots\otimes A$ ($p$ tensor factors) in a natural way, and that $N$ also can be viewed as a $A^{\otimes p}$-module by means of
$(a_1\otimes\cdots\otimes a_p)n=a_1\cdots a_pn$
for all $a_1,\ldots,a_p\in A$ and $n\in N$. Let
$\Phi:M_1\otimes\cdots\otimes M_p\to N$ be a
$K$-linear map. If it is a differential operator of
order $k$ with respect to $A^{\otimes p}$ it is easy to
see by restricting to $1\otimes \cdots\otimes 1\otimes a_i\otimes 1\otimes \cdots \otimes 1 \in A^{\otimes p}$,
$1\leq i\leq p$, $a_i\in A$, that $\Phi$ is a multidifferential operator of rank $p$ and order $(k,\ldots,k)$ with respect to $A$. Conversely,
for any $a\in A$ and any integer $1\leq r\leq p$
writing $L_a, R^{(r)}_a, \mathrm{ad}^{(r)}_a$ for the
following $K$-linear maps from
$\mathrm{Hom}_K(M_1\otimes\cdots\otimes M_p,N)$
to itself given by (for all $m\in M$) $(L_a(C))(m)
=aC(m)$, $(R^{(r)}_a(C))(m)=C(a^{(r)}m)$
(where
$a^{(r)}
=1^{\otimes(r-1)}\otimes a \otimes
1^{\otimes (p-r)}$), and $\mathrm{ad}^{(r)}_a=L_a-
R^{(r)}_a$, there is the easy identity for all
$a_1,\ldots,a_p\in A$
\[
\mathrm{ad}_{a_1\otimes\cdots\otimes a_p}
=\sum_{r=1}^pL_{a_1}\circ \cdots\circ
L_{a_{r-1}}\circ
\mathrm{ad}^{(r)}_{a_{r}}\circ R^{(r+1)}_{a_{r+1}}
\circ\cdots\circ R^{(p)}_{a_{p}}.
\]
By iteration this shows
that if $C$ is a multidifferential operator of rank $p$ and order $\mathsf{k}=(k_1,\ldots,k_p)$ w.r.t.~$A$ then $C$ is
a differential operator of order $k_1+\cdots+k_p$ w.r.t.
$A^{\otimes p}$. Hence
\begin{equation}
\big\{\mathrm{multidifferential~operators~of~rank~}p
\mathrm{~w.r.t.}~A\big\}
=\big\{\mathrm{differential~operators}
\mathrm{~w.r.t.}~A^{\otimes p}\big\}.
\end{equation}
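For $p=2$ the displayed operator identity underlying this equality can be verified directly; a Python/sympy sketch with an example bidifferential operator $C$ on $A=K[x]$ and decomposable arguments (all choices ours):

```python
# p = 2 instance of the identity (our example): C(f (x) g) = (df/dx) g on K[x].
import sympy as sp

x = sp.symbols('x')

def C(f, g):
    return sp.diff(f, x) * g

a, ap = x**2, x + 1               # a_1, a_2 in A
f, g = x**3, x**2 + x             # a decomposable element f (x) g
# ad_{a_1 (x) a_2}(C) applied to f (x) g:
lhs = a * ap * C(f, g) - C(a * f, ap * g)
# the two summands ad^{(1)}_{a_1} R^{(2)}_{a_2} and L_{a_1} ad^{(2)}_{a_2}:
term1 = a * C(f, ap * g) - C(a * f, ap * g)
term2 = a * (ap * C(f, g) - C(f, ap * g))
assert sp.expand(lhs - term1 - term2) == 0
```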
With this identification, given a multiplicative subset
$S\subset A$ it is now straightforward to
localize multidifferential operators by localizing
them as differential operators w.r.t.~$A^{\otimes p}$
taking the multiplicative subset
$S^{\otimes p}\subset A^{\otimes p}$ (which is the obvious iteration of Remark 2 before eqn (\ref{EqCompLocalizationOfTensorProducts})) upon using Vezzosi's Theorem \ref{TVezzosi}. Note that it is easy
to see that the localization of the $A$-module $N$
w.r.t.~the multiplicative subset $S$ is naturally
isomorphic to the localization of $N$ seen as a
$A^{\otimes p}$-module w.r.t.~the multiplicative
subset $S^{\otimes p}$.\\
We are interested in the particular case where all the
$A$-modules $M_1,\ldots,M_p,N$ are equal to $A$
for which we state the preceding considerations in
the following
\begin{prop}\label{PLocMultiDiffOpsAAAAToA}
Let $S_0 \subset A$ be a multiplicative subset, let
$A_{S_0}$ be the ordinary commutative localization of
$A$ w.r.t.~$S_0$, and let
$\eta_{(A,S_0)} =\eta:A\to A_{S_0}$ be the numerator morphism. Let $C$ be a multidifferential operator of rank $p$ from $A^{\otimes p}$ to $A$.\\
Then there exists a unique multidifferential operator of rank $p$, $C_{S_0}$, from $(A_{S_0})^{\otimes p}$ to $A_{S_0}$ such that $\eta\circ C = C_{S_0} \circ \eta^{\otimes p}$.\\
Furthermore, given another multidifferential operator $C'$ of rank $p'$ we have
$(C \circ_i C')_{S_0} = C_{S_0} \circ_i C'_{S_0}$
for each integer $1\leq i\leq p$.
\end{prop}
\begin{proof}
The first part follows from the above considerations.
The second part follows from the equation
$C\circ_i C'=C\circ \big(
\mathrm{id}^{\otimes (i-1)}\otimes C'\otimes
\mathrm{id}^{\otimes (p-i)}\big)$ seen as composition
of differential operators w.r.t.~the $K$-algebra
$A^{\otimes (p+p'-1)}$ and multiplicative subset
$S_0^{\otimes (p+p'-1)}$ using eqn
(\ref{EqCompTensorProductOfDiffOps}).
\end{proof}
\subsection{Commutatively localized algebraic star products}
\label{SubSecCommutativelyLocalizedStarProducts}
Observe now that the Definition \ref{DefStarProducts} of star products can be generalized to any commutative associative unital $K$-algebra
$A$, where the meaning of `bidifferential' for the
$K$-bilinear maps $C_k:A\times A\to A$ is now given by the algebraic definition outlined in the preceding Section \ref{SubSecAlgebraicMultidifferentialOperators}.
We have
\begin{prop}
Let $A$ be a commutative unital $K$-algebra,
and let $\star = \sum_{i=0}^\infty \lambda^i C_i$ be a differential star product on
$R:=A[[\lambda]]$, where the $C_i$ are bidifferential operators on $A$. For any multiplicative subset
$S_0 \subset A$ there exists a
unique star product $\star_{S_0}$ on $A_{S_0}\ph$
such that the numerator map $\eta$ canonically extended as a $K[[\lambda]]$-linear map (also denoted
$\eta$)
$A\ph\to A_{S_0}\ph$ is a morphism of unital $K[[\lambda]]$-algebras.
\end{prop}
\begin{proof}
This follows from the previous Proposition \ref {PLocMultiDiffOpsAAAAToA} by considering the localization of the bidifferential operators $C_i$. It remains associative since the localization is compatible with the compositions $\circ_1$ and $\circ_2$.
\end{proof}
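As an elementary illustration of the proposition (independent of the precise conventions used elsewhere in the text): take $A=\mathbb{R}[x,p]$ with the normal-ordered product $f\star g=\sum_k\frac{\lambda^k}{k!}\frac{\partial^k f}{\partial p^k}\frac{\partial^k g}{\partial x^k}$ and $S_0=\{x^n\}$; the bidifferential operators apply verbatim to rational functions, i.e.~to the localization, and associativity survives. A throwaway sympy check (the helper names are ours):

```python
import sympy as sp

x, p, lam = sp.symbols('x p lam')

def star(f, g, order=3):
    # truncated normal-ordered product f*g = sum_k lam^k/k! (d_p^k f)(d_x^k g);
    # the bidifferential operators make sense verbatim on rational
    # functions, i.e. on the commutative localization at S_0 = {x^n}
    return sum(lam**k / sp.factorial(k)
               * sp.diff(f, p, k) * sp.diff(g, x, k) for k in range(order))

def trunc(expr, order=3):
    # keep the coefficients of lam^0 .. lam^(order-1)
    expr = sp.expand(expr)
    return sum(expr.coeff(lam, k) * lam**k for k in range(order))

f, g, h = p/x, p**2/x, x*p                 # elements of the localized algebra
lhs = trunc(star(trunc(star(f, g)), h))
rhs = trunc(star(f, trunc(star(g, h))))
assert sp.simplify(lhs - rhs) == 0         # associativity up to lam^2
assert sp.simplify(star(x, 1/x) - 1) == 0  # eta(x) is star-invertible
```

The last line also previews the next proposition: the image of $S_0$ consists of $\star_{S_0}$-invertible elements.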
\noindent With the above structures $A,S_0,\star$
we set $R=A[[\lambda]]$ and consider the following rather
natural subset
\begin{equation}
S: = S_0 + \lambda A[[\lambda]] \subset R=A[[\lambda]],
\end{equation}
which can be called the \emph{canonical deformation of the multiplicative subset $S_0$}.
Then we have the
\begin{prop}\label{PSZeroSigmaCMImpliesAllsWell}
The subset $S=S_0+\lambda R$ is a multiplicative subset
of the algebra $(R,\star)$, and its image under
$\eta$ consists of invertible elements of the
$K[[\lambda]]$-algebra
$\big(A_{S_0}[[\lambda]], \star_{S_0}\big)$.\\
It follows that there is a canonical morphism
of unital algebras over $K[[\lambda]]$
\begin{equation}
\label{EqDefMorphismLocCommutesWithDeform}
\Phi:\Big( \big(A[[\lambda]]\big)_S, \star_S\Big)\to
\big( A_{S_0}[[\lambda]] ,\star_{S_0}\big),
\end{equation}
where the localization $\big(A[[\lambda]]\big)_S$
is the general construction, see Proposition
\ref{PGeneralLocalization}.
\end{prop}
\noindent Indeed, since the deformation terms of $\star$ come in
higher orders of $\lambda$ it is clear that $S$ is multiplicative. Since $\eta(S_0)$ is invertible in
$A_{S_0}$ this also holds for the image under $\eta$ of the canonical deformation $S$ of $S_0$, see the reasoning in the beginning of the proof of
Theorem \ref{TLocalizationEquivalenceOpenSubset}
which is completely algebraic.
The existence of the algebra morphism $\Phi$ is then clear from the universal property of the localized algebra, see Proposition \ref{PGeneralLocalization}.
Here we come to two general problems:
\begin{center}
\textbf{1. Does localization commute with deformation?}\\
Meaning: is the above morphism $\Phi$ (\ref{EqDefMorphismLocCommutesWithDeform}) an
isomorphism?\\[2mm]
\textbf{2. Is $S$ a right (or left) denominator set?}
\end{center}
Note that even in the commutative case, i.e.~the localization of an algebra $R[[\lambda]]$ where $R$ is commutative, the map $\Phi$ is not always an isomorphism. This has already been noted in \cite{Arn73}.
For localization with respect to open sets (see Section \ref{SecNonComLocOpenSets}, $S_0=S_\Omega$) the morphism
$\Phi$ is an isomorphism, and $S=S_\Omega+\lambda R$ is a left and right denominator set. However, $\Phi$ is not injective
for the germs ($S_0=A\setminus I_{x_0}$) in Section
\ref{SecNoncommutative germs for smooth star products},
as the warning at the end of that section indicates,
although $S=S_0+\lambda R$ is a left and right denominator set.
One reason why $\Phi$ is in general not an isomorphism is that $\big(A[[\lambda]]\big)_S$ is
in general no longer a \emph{topologically free $K[[\lambda]]$-module}, see e.g.~\cite[p.388-391]{Kas95}
for all the details.
Given a $K[[\lambda]]$-module $M$ there is a natural topology with basis induced by the (descending) filtration $\{\lambda^k M \}_{k \in \N}$.
The space $M$ is complete if for every sequence $(m_i) \subset M$ the series $\sum_{i=0}^\infty \lambda^i m_i$ converges in $M$. It is Hausdorff iff $\bigcap_{i=0}^\infty \lambda^i M = \{0\}$, i.e.~iff $\{0\}$ is closed in the $\lambda$-adic topology.
A $K[[\lambda]]$-linear map between two $K[[\lambda]]$-modules is always continuous.
\\
Next, a $K[[\lambda]]$-module is called (topologically) free if it is isomorphic to a $K[[\lambda]]$-module of the form $V[[\lambda]]$ for some $K$-module $V$.
We have $V[[\lambda]] = V \hat\otimes_K K[[\lambda]]$. Note that here the tensor product is not the algebraic tensor product, but its completion in the $\lambda$-adic topology, see e.g.~\cite[p.~390-391]{Kas95}. Moreover recall that the $\lambda$-torsion
of a $K[[\lambda]]$-module $M$ is the set of elements $m \in M$ for which $\lambda m =0$.
There is the following well-known characterization, see
e.g.~\cite[p.390, Prop.~XVI.2.4.]{Kas95}:
\begin{prop}
A $K[[\lambda]]$-module $M$ is topologically free if and only if it is complete and Hausdorff in the $\lambda$-adic topology and $\lambda$-torsion free. In this case $M \cong (M/\lambda M)[[\lambda]]$.
\end{prop}
\noindent Since completeness and Hausdorffness are preserved by isomorphism, $\big(A[[\lambda]]\big)_S$ needs to be complete and Hausdorff for $\Phi$ to be an isomorphism. In fact, it needs to be topologically free. \\
If $\big(A[[\lambda]]\big)_S$ is not Hausdorff, $\Phi$ is not injective: the kernel $\Phi^{-1}(0)$ is a closed subset in the $\lambda$-adic topology and therefore contains the nontrivial closure of $\{0\}$.\\
The example of germs (Section \ref{SecNoncommutative germs for smooth star products}) is an example of this:\\
Consider the example at the end of Section \ref{SecNoncommutative germs for smooth star products}. Then for any $k \in \N$, we have $\sum_{l=0}^k \lambda^l f_l = 0 \in R_{x_0}$ since it vanishes on the ball of radius $\epsilon_k$ around $x_0$ (if we choose the sequence $(\epsilon_l)$ monotone). This means $f = \sum_{l=k+1}^\infty \lambda^l f_l = \lambda^{k+1} \sum_{l=0}^\infty \lambda^l f_{l+k+1}$, so $f \in \lambda^{k+1} R_{x_0} \subset \lambda^k R_{x_0}$ for all $k$, but as stated before $f \neq 0$.
\begin{prop}
Consider the situation of Proposition \ref{PSZeroSigmaCMImpliesAllsWell}.
If $(A[[\lambda]])_S$ is complete then the map $\Phi$ is surjective.
\end{prop}
\begin{proof}
Let $a_0 \in A$ and $s_0 \in S_0$.
We have $\Phi(a_0 \star_S (s_0)^{\star-1}) = \Phi(a_0) \star_{S_0} \Phi((s_0)^{\star-1}) = \frac{a_0}{s_0} + \lambda r$ with $r \in A_{S_0}[[\lambda]]$, whose lowest-order term we write as $\frac{a_1}{s_1}$ with $a_1 \in A$, $s_1 \in S_0$. Recursively one can find $a_i \in A$, $s_i \in S_0$, such that $\Phi\big(a_0 \star_S (s_0)^{\star-1} - \sum_{i=1}^\infty \lambda^i\, a_i \star_S (s_i)^{\star-1}\big) = \frac{a_0}{s_0}$. The series on the left hand side converges since we assume $\big(A[[\lambda]]\big)_S$ to be complete.
\end{proof}
More generally, it is always possible to extend the map $\Phi$ to the completion of $\big(A[[\lambda]]\big)_S$, due to the completeness of $A_{S_0}[[\lambda]]$ and the continuity of $\Phi$. The previous proposition implies that this extension is surjective.
\noindent It may be interesting to develop a noncommutative
localization along the lines of Section \ref{SubSecAlgebraicLocalization}, in particular in the spirit of
Proposition \ref{PGeneralLocalization} and/or
Theorem \ref{TLocalizationForRightDenominatorSets}, for complete unital associative $K[[\lambda]]$-algebras
whose multiplicative subsets have some additional properties.
\subsection{A particular result generalizing the
restriction to open sets, Section \ref{SecNonComLocOpenSets}}
Let $A$ be a $K$-algebra. Suppose that the multiplicative
set $S_0\subset A$ has the following property
\begin{equation}
\label{EqDefSigmaCM}
\forall~\mathrm{sequence~}(s_n)_{n\in \mathbb{N}}~\mathrm{in}~S_0~~
\exists~\mathrm{sequence~}
(b_n)_{n\in \mathbb{N}}~\mathrm{in}~A~\mathrm{and}~s\in S_0~
\mathrm{such~that~}\forall~n\in\mathbb{N}:
~~s_nb_n=s.
\end{equation}
Note that for a sequence having only a finite number of pairwise different terms this is always trivially satisfied: choose
for $s$ a common multiple of the members of the associated finite set, e.g.~their product. Moreover in the
uninteresting case where $S_0$ contains $0$ the above
property (\ref{EqDefSigmaCM}) is trivially satisfied by
choosing the constant $0$-sequence for $(b_n)_{n\in\mathbb{N}}$.
Returning to the general case, we shall refer to property
(\ref{EqDefSigmaCM}) as $\sigma CM$
(something like `countable common multiple').
A similar property has been considered in \cite{Gil67,She71}; there, however, the common multiple $s$ is only required to be different from $0$.
Note that those papers consider domains, so $s$ is not a zero divisor. This implies that one can consider the multiplicative set $S'$ generated by $S$ and $s$, and that the localization with respect to $S$ embeds injectively into the localization with respect to $S'$.
\begin{prop}
For any open set $\Omega\subset X$ of a smooth manifold
the multiplicative subset $S_\Omega=\{g\in A~|~\forall~x\in\Omega:~g(x)\neq 0\}$ appearing in
Section \ref{SecNonComLocOpenSets} has the
$\sigma CM$-property.
\end{prop}
\noindent Indeed this follows from the proof of Theorem \ref{TLocalizationEquivalenceOpenSubset} in the trivial case where all the bidifferential operators of strictly positive order vanish: for any given sequence $(s_n)_{n\in\mathbb{N}}$ we set $\phi(x)=\sum_{n=0}^\infty
\lambda^n (1/s_n(x))$ for all $x\in \Omega$, and the
`fonction applatisseur' $g$ (see eqn (\ref{EqCompFonctionApplatisseur}))
will be the desired element $s\in S_\Omega$. This construction is due to J.-C.~Tougeron \cite{Tou65}.
The main result of this subsection is the following
\begin{prop}
Suppose that the multiplicative subset $S_0\subset A$
satisfies the $\sigma CM$ property. \\
Then the morphism
$\Phi$, see eqn (\ref{EqDefMorphismLocCommutesWithDeform}), is an isomorphism, and the deformed multiplicative subset
$S=S_0+\lambda A[[\lambda]]$ is a right and left denominator subset of the algebra $R$.
\end{prop}
\begin{proof}
We first note the following easy, but important property
of general differential operators $D:M\to N$ of order $k$ where
$M$ and $N$ are arbitrary $A$-modules: for any $a\in A$ and
$n\in\mathbb{N}$ with $n\geq k$ there are differential
operators $\tilde{D}_{[a]},\check{D}_{[a]}:M\to N$
of order $k$ such that for all $m\in M$
\begin{equation}\label{EqCompDiffopsWithPowersMult}
D(a^nm)=a^{n-k}\tilde{D}_{[a]}(m)
~~~\mathrm{and}~~~
a^nD(m)=\check{D}_{[a]}(a^{n-k}m).
\end{equation}
Indeed write $R^n_a=\big(L_a-\mathrm{ad}_a\big)^n$ for the
term on the left of the first equation, and
$L^n_a=\big(\mathrm{ad}_a+R_a\big)^n$ for the
term on the left of the second equation, apply the binomial theorem, and use
that all maps $\mathrm{ad}_a^{l}(D)$ are differential
operators of order $k-l\leq k$ and
$\mathrm{ad}_a^{k+1}(D)=0$.\\
Next, let $\star=\sum_{n=0}^\infty\lambda^nC_n$ be the star product with algebraic bidifferential operators $C_n$,
$n\in\mathbb{N}$. We can assume that each $C_n$ has a
`bi-order' $(k_n,k_n)$ with $k_n\in\mathbb{N}$ for each
$n\in\mathbb{N}$ (of course $k_0=0$), and for each $n\in\mathbb{N}$ we define the nonnegative integer $\kappa_n:=\max\{k_0=0,k_1,\ldots,k_n\}$.\\
We shall show that
$\big(A_{S_0}[[\lambda]],\star_{S_0}\big)$ is a right algebra of fractions of $\big(A[[\lambda]],\star,S\big)$
along the (algebraized) lines of the proof of Thm
\ref{TLocalizationEquivalenceOpenSubset}:\\
$\bullet$ It follows from the previous section
(and from the beginning of the proof of Theorem \ref{TLocalizationEquivalenceOpenSubset})
that the numerator morphism
$\eta:A[[\lambda]]\to A_{S_0}[[\lambda]]$ is $S$-inverting.
\\
$\bullet$~``\emph{Every $\phi=\sum_{n=0}^\infty
\lambda^n \frac{a_n}{s_n}\in A_{S_0}[[\lambda]]$
is equal to
$\eta(f)\star_{S_0}\eta(g)^{\star_{S_0} -1}$ for some $f=\sum_{n=0}^\infty\lambda^n\alpha_n\in A[[\lambda]]$ and
$g\in S$}'': here of course $a_0,a_1,\ldots\in A$,
$\alpha_0,\alpha_1,\ldots\in A$, and
$s_0,s_1,\ldots \in S_0$. We make the ansatz $g=s\in S_0$ of a `fonction applatisseur', and consider
\begin{equation}\label{EqCompPhiStarSZeroSOverOne}
\left(\phi \star_{S_0} \frac{s}{1}\right)_n
= \sum_{u=0}^nC_{uS_0}\left(\frac{a_{n-u}}{s_{n-u}},
\frac{s}{1}\right)
\stackrel{(\ref{EqDefDifferentialOpLocalized})}{=}
\sum_{u=0}^n\sum_{v=1}^{\kappa_n + 1}
{\kappa_n+1 \choose v}(-1)^{v+1}
\frac{C_u\big(s_{n-u}^{v-1}a_{n-u},s\big)}
{s_{n-u}^{v}}
\end{equation}
We have to choose $s\in S_0$ in such a way as to `kill the
denominators occurring on the right hand side of the preceding equation': thanks to the $\sigma CM$ property,
for the sequence
$\left(
(s_0s_1\cdots s_n)^{2\kappa_n+1}\right)_{n\in\mathbb{N}}$
which is in $S_0$ there is a sequence
$(b_n)_{n\in\mathbb{N}}$ and $s\in S_0$ such that
\begin{equation}\label{EqDefKillingSequenceIdentity}
\forall~n\in\mathbb{N}:~~
(s_0s_1\cdots s_n)^{2\kappa_n+1}b_n=s.
\end{equation}
Clearly, in each of the numerators of the fractions on the right hand side of eqn
(\ref{EqCompPhiStarSZeroSOverOne}) the above $s$ can be
written as a product $s_{n-u}^{2\kappa_n+1}c_{n,u}$ where
$c_{n,u}$ is a product of $b_n$ and some factors of the
above sequence. By the first equation of
(\ref{EqCompDiffopsWithPowersMult}) we can pull
$s_{n-u}^{\kappa_n+1}$ out of the second argument of the bidifferential operator in
the numerator, and this factor in the numerator cancels each denominator.
This shows that there is $f\in A[[\lambda]]$ such that $\phi\star_{S_0}\eta(s)=\eta(f)$ and since
$\eta(s)$ is $\star_{S_0}$-invertible, the statement is
proved.\\
$\bullet$~\emph{The kernel of $\eta$ is equal to the space
of elements $f\in R$ such that there is $g\in S$
with $f\star g=0$:} indeed, the statement that
$f=\sum_{n=0}^\infty\lambda^n f_n\in A[[\lambda]]$
satisfies $\frac{f}{1}=\eta(f)=0$ is equivalent to the statement that
for each $n\in\mathbb{N}$ there is $s_n\in S_0$ such that
$f_ns_n=0$. In order to get an idea of $g\in S$ we again
make the ansatz $g=s\in S_0$ and we compute for each $n\in\mathbb{N}$
\begin{equation}\label{EqCompFTimesKillerFunction}
(f\star s)_n=\sum_{u=0}^nC_u(f_{n-u},s).
\end{equation}
We now take the same element $s$ constructed in the preceding part of the proof satisfying
eqn (\ref{EqDefKillingSequenceIdentity}) with respect
to the above $s_0,s_1,\ldots \in S_0$ each killing
$f_0,f_1,\ldots$. As in the preceding part, we can pull
a factor $s_{n-u}^{\kappa_n+1}$ out of the second argument
of the bidifferential operator $C_u$ (upon using the first equation of eqn (\ref{EqCompDiffopsWithPowersMult})), and we
put it then into the first argument of the resulting
bidifferential operator where a factor of $s_{n-u}$ remains
in front of $f_{n-u}$ which gives zero (upon using the
second equation of (\ref{EqCompDiffopsWithPowersMult})). It follows that
this choice of $s$ makes all the terms in eqn
(\ref{EqCompFTimesKillerFunction}) vanish which shows the
kernel of $\eta$ is contained in the subset of all $f$
killed by right multiplication of some $g\in S$.
The other inclusion is trivial since $\eta$ is an $S$-inverting morphism of algebras, and $f\star g=0$
for some $g\in S$
implies $\eta(f)\star_{S_0}\eta(g)=0$ implying
$\eta(f)= 0$ since $\eta(g)$ is invertible in $A_{S_0}[[\lambda]]$.\\
It is obvious that the preceding constructions can be
done for left fractions etc. by interchanging the arguments
in the bidifferential operators.
This proves the Proposition since $\big(A_{S_0}[[\lambda]],\star_{S_0}\big)$ is a right (and left)
algebra of fractions of $\big(A[[\lambda]],\star,S\big)$
in the sense of Definition \ref{defi: bigdef}.
\end{proof}
Note that the property $\sigma CM$ is \emph{not} satisfied for any
`interesting' multiplicative subset $S_0$ of a \emph{Noetherian domain} $A$ where we suppose that $S_0$ does not contain $0$: we assume that there is a noninvertible
element $s_0$ in $S_0$ because otherwise both localizations
are isomorphic to the original algebra $\big(A[[\lambda]],\star\big)$. Then the sequence of principal ideals
$\left(s_0^nA\right)_{n\in\mathbb{N}}$ clearly
equals the sequence of powers $\left(I^n\right)_{n\in\mathbb{N}}$ with $I=s_0A$, and Krull's Intersection Theorem (see e.g.~\cite[p.200, Ch.III 3.2, Corollary]{BouCommAlg}) states that
$\cap_{n\in\mathbb{N}}s_0^nA=\{0\}$ whence for the sequence
$(s_0^n)_{n\in\mathbb{N}}$ no sequence $(b_n)_{n\in\mathbb{N}}$ can be found to satisfy property
$\sigma CM$.
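A minimal concrete instance of this obstruction is $A=\mathbb{Z}$ with $S_0=\{2^n\}_{n\in\mathbb{N}}$: any candidate common value $s=2^N$ fails against the sequence $(2^n)_{n\in\mathbb{N}}$ as soon as $n>N$. A brute-force scan (an illustrative script, not part of the argument above) confirms this:

```python
def sigma_cm_candidate_works(N, n_max=64):
    # property (sigma CM) for the sequence s_n = 2^n in S_0 = {2^k}
    # would need integers b_n with 2^n * b_n = 2^N for every n,
    # i.e. 2^n must divide 2^N, which fails for n > N
    s = 2**N
    return all(s % 2**n == 0 for n in range(n_max + 1))

# no candidate exponent N below the scan bound works
assert not any(sigma_cm_candidate_works(N) for N in range(60))
```

This is exactly the Krull-intersection argument in miniature: $\bigcap_n 2^n\mathbb{Z}=\{0\}$ and $0\notin S_0$.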
\section{Non Ore multiplicative subsets in deformation quantization}
\label{SecNonOreExample}
The following example provides a multiplicative subset $S$
of a deformed algebra $\big(R=A[[\lambda]],\star\big)$
which is of the deformation type
$S_0+\lambda R$ (where $S_0$ is a multiplicative subset of $A$) which fails to satisfy the Ore condition, but
is a subset of a large right denominator subset of $R$.
This shows that the second problem we raised in
Section \ref{SecCommutativelyLocalizedStarProducts} is not trivial.
Consider
$A=\mathcal{C}^\infty(\mathbb{R}^2,\mathbb{R})$ with the
standard star product $\star$ given by formula
(\ref{EqDefStandardStarProduct}). Let $R=A[[\lambda]]$,
and let $\Omega\subset \mathbb{R}^2$ be the dense open set
of all $(x,p)\in \mathbb{R}^2$ where $x\neq 0$.
Set
\begin{equation}
S_0:=\{1,x,x^2,\ldots\}\subset A~~\mathrm{and}~~
S:=S_0+\lambda R\subset R.
\end{equation}
Recalling the multiplicative subset $S_\Omega=\{g\in A~|~\forall~x\in\Omega:~g(x)\neq 0\}$ we have the
\begin{prop}
The subset $S\subset R$
is a multiplicative subset of $(R,\star)$ which is
contained in the right denominator subset
$S_\Omega+\lambda R\subset R$
(see section \ref{SecNonComLocOpenSets}), but which is neither right nor left Ore.
\end{prop}
\begin{proof}
Since $x^m\star x^n=x^{m+n}$ it is clear that
$S$ is a multiplicative subset of $R$, and since $S_0\subset S_\Omega$
it is contained in
$S_\Omega+\lambda R$. Next pick a smooth real-valued function
$\chi:\mathbb{R}\to\mathbb{R}$ with the following
properties
\[
\forall~p\in\mathbb{R}:~0\leq \chi(p)\leq 1,~~
\mathrm{supp}(\chi)\subset
\left[-\frac{1}{3},\frac{1}{3}\right],~~
\mathrm{and~~}\forall~p\in
\left[-\frac{1}{6},\frac{1}{6}\right]:~\chi(p)=1,
\]
which is well-known to exist,
and define the smooth functions $r\in A\subset R$
and $s\in S_0\subset S$ by
\[
r(x,p):=\sum_{n=0}^\infty
\chi(p-n)\frac{(p-n)^n}{n!}
~~\mathrm{and}~~
s(x,p)=x
\]
where $r$ is well-defined as a locally finite sum whose terms have mutually disjoint supports.
We shall only need the following property of $r$
which is easy to see:
\begin{equation}\label{EqCompDerivativesOfFunnyFunctionr}
\forall~n,k\in\mathbb{N}:~~~
\frac{\partial^k r}{\partial p^k}(0,n)
=\left\{ \begin{array}{cl}
0 & \mathrm{if}~0\leq k\leq n-1, \\
1 & \mathrm{if}~ k=n.
\end{array}\right. .
\end{equation}
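On the patch $|p-n|\le 1/6$ the cutoff $\chi(p-n)$ equals $1$ and all the other terms of the sum defining $r$ vanish identically, so $r$ locally reduces to $(p-n)^n/n!$. Property (\ref{EqCompDerivativesOfFunnyFunctionr}) can then be machine-checked on this local term (a throwaway sympy script; the name \texttt{local\_term} is ours):

```python
import sympy as sp

p = sp.symbols('p')

# near p = n the cutoff chi equals 1 and the other summands vanish,
# so r reduces to (p - n)^n / n! on that patch
for n in range(6):
    local_term = (p - n)**n / sp.factorial(n)
    derivs = [sp.diff(local_term, p, k).subs(p, n) for k in range(n + 1)]
    assert derivs[:-1] == [0] * n   # k = 0 .. n-1 all vanish at p = n
    assert derivs[-1] == 1          # k = n gives exactly 1
```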
We remark that there are also real analytic functions
$r:\mathbb{R}^2\to \mathbb{R}$ having the preceding
property (\ref{EqCompDerivativesOfFunnyFunctionr}): it suffices to take the real part of the holomorphic
function constructed by Weierstrass's elementary factors,
see e.g.~\cite[p.~303, Thm.~15.9]{Rud87}.\\
Next note that an element $s'\in R$ is contained in
$S$ iff there are a unique nonnegative integer $m$ and a unique
$g\in \lambda A[[\lambda]]$ (i.e.~$g_0=0$) such that $s'(x,p)=x^m+g$.
For any such $s'\in S$ and $r'\in R$ we set
\[
\mathcal{R}(r',s'):=
\sum_{k=0}^\infty \lambda^k\mathcal{R}_k(r',s')
:= r \star s' - x\star r'
\]
which is a kind of deviation from the right Ore property for general $s'\in S$ and $r'\in R$.
It is easy to compute that
\[
\forall~0\leq k\leq m:~~
\mathcal{R}_k(r',s')(x,p)=
{m \choose k}\frac{\partial^k r}{\partial p^k}(x,p)
x^{m-k}
+ \sum_{l=0}^{k-1}\frac{1}{l!}
\frac{\partial^l r}{\partial p^l}(x,p)
\frac{\partial^l g_{k-l}}{\partial x^l}(x,p)
-xr_k'(x,p)
\]
where the empty sum (occurring for $k=0$) is defined to be $0$. Using property
(\ref{EqCompDerivativesOfFunnyFunctionr}) it is immediate
that
\[
\forall~m\in\mathbb{N},~\forall~g\in\lambda R,
~\forall~r'\in R:~~
\mathcal{R}_m(r',s')(0,m)=1 \neq 0,
\mathrm{~~~hence~~~}\mathcal{R}_m(r',s')\neq 0
\]
showing that for the given $r\in R$, $s\in S$ there are
no $r'\in R$ and $s'\in S$ satisfying the right Ore condition. An easy application of
eqn (\ref{EqDefNeumaierModified}), using the fact that
$S$ is obviously stable under the bijection $V$, shows that $S$ also fails to satisfy the left Ore condition.
\end{proof}
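The coefficient formula used in the proof boils down, for $g=0$, to the identity $(r\star x^m)_k={m\choose k}\frac{\partial^k r}{\partial p^k}\,x^{m-k}$ together with $(x\star r')_k=x\,r'_k$. Both can be verified symbolically for the normal-ordered expansion $f\star g=\sum_k\frac{\lambda^k}{k!}\frac{\partial^k f}{\partial p^k}\frac{\partial^k g}{\partial x^k}$, which matches the terms written above (a sketch for $m=3$; \texttt{star} is our hypothetical helper):

```python
import sympy as sp

x, p, lam = sp.symbols('x p lam')
r = sp.Function('r')(x, p)   # generic smooth symbol

def star(f, g, order):
    # normal-ordered expansion f*g = sum_k lam^k/k! (d_p^k f)(d_x^k g)
    return sum(lam**k / sp.factorial(k)
               * sp.diff(f, p, k) * sp.diff(g, x, k) for k in range(order))

m = 3
prod = sp.expand(star(r, x**m, order=m + 1))
for k in range(m + 1):
    got = prod.coeff(lam, k)
    want = sp.binomial(m, k) * sp.diff(r, p, k) * x**(m - k)
    assert sp.simplify(got - want) == 0          # (r * x^m)_k formula
assert sp.expand(star(x, r, order=2)) == x * r   # d_p x = 0, so x*r' = x r'
```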
This example shows the difference between the general
noncommutative localization according to Proposition \ref{PGeneralLocalization} and the localization with respect to multiplicative subsets satisfying the Ore conditions,
see Theorem \ref{TLocalizationForRightDenominatorSets}:
The localization of $R$ w.r.t.~$S$ exists, and its elements
are multifractions, see eqn (\ref{EqCompLocAlgebraGeneralElement}); but mapping it into the localization
with respect to the bigger Ore subset $S_\Omega+\lambda R$
transforms all the multifractions into simple
right (or left) fractions.
\section{Introduction}
In the preceding paper, A. Shiryaev, Z. Xu and X.Y. Zhou \cite{SXZ} ask about the optimal time to sell a
stock
over a certain time interval $[0,T]$, knowing that the price is a geometrical Brownian motion with
a certain average return over the risk-free rate $a-r$ and a certain volatility $\sigma$. The answer
to this question depends on the value of the adimensional parameter $\alpha=(a-r)/\sigma^2$. The method
used by the authors allows them to prove that whenever $\alpha > 1/2$ the optimal selling time $\tau^*$
is always at the end of the interval, $\tau^*=T$, whereas $\tau^*=0$ in the case $\alpha < 0$. In
financial words, ``good'' stocks with a sufficiently large average return should be sold as late as
possible, whereas one should immediately get rid of ``bad stocks''. These results are clearly very
interesting; however one feels unsatisfied by the fact that the authors' method does not allow them
to treat the case $0 < \alpha \leq 1/2$. They discuss this point in the conclusion, mentioning (a)
a working paper \cite{SXZ2} based on an alternative method showing that one should in fact sell immediately
as soon as $\alpha < 1/2$ and (b) that the case $\alpha < 1/2$ is not interesting financially because
``most stocks realize $\alpha > 1/2$ by a large margin''.
The aim of this short note is to reconsider the problem using path integral methods which are well known in
physics but perhaps less well known in financial mathematics. This method allows one to treat all values
of $\alpha$ on the same footing. We confirm the results of \cite{SXZ} and extend them to the
$0 < \alpha \leq 1/2$ interval. In fact, we show that there is an exact symmetry in the problem that relates
the problem with $\alpha > 1/2$ to the problem with $\alpha < 1/2$. Our method furthermore allows us to garner
additional results, such as the distribution of the time $t_m$ at which the maximum of the price is reached.
We find that this distribution has inverse square root singularities both at $t_m=0$ and $t_m=T$ {\it for all
values of} $\alpha$; however, the amplitude of the divergence at $t_m=0$ is stronger when $\alpha < 1/2$ and
weaker when $\alpha > 1/2$. This gives a more precise picture to the results of Shiryaev, Xu and Zhou.
For $\alpha=1/2$, the problem is degenerate and the two peaks have exactly the same amplitude (in fact, the
distribution is symmetric under $t_m \to T-t_m$).
Finally, we do not agree with the statement that $\alpha < 1/2$ is not interesting financially. The numbers
provided in \cite{SXZ} are based on the S\&P500 index returns, and are therefore much too optimistic: first,
there is an obvious selection bias since badly performing stocks leave the index; second, the volatility of
the index is two to three times smaller than the volatility of individual stocks, thanks to the diversification
effect. An annualized volatility above $40 \%$ is in fact not uncommon, in particular for small to medium caps --
whereas the S\&P500 only includes large caps. Fig. 1 shows the time series of the (implied) S\&P500 index
volatility and the average stock volatility, in the period 2000-2007. With an average annual return of
$10 \%$, an interest rate of $5\%$ and a volatility of $40 \%$ annual, the parameter $\alpha$ is found to
be $0.3125 < 1/2$.
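The quoted figure is a one-line computation of $\alpha=(a-r)/\sigma^2$ with the numbers given in the text:

```python
# alpha = (a - r) / sigma^2 with the figures quoted above:
# average return 10%, risk-free rate 5%, annualized volatility 40%
a, r, sigma = 0.10, 0.05, 0.40
alpha = (a - r) / sigma**2
assert abs(alpha - 0.3125) < 1e-9
assert alpha < 0.5   # the regime not covered in [SXZ]
```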
We hope that this short note will shed a useful light on the work of A. Shiryaev, Z. Xu and X.Y. Zhou, and
that it will convince the reader that path integral methods are extremely powerful to solve a variety of
random walk problems. We refer the reader to a short review paper by one of us \cite{BF} on this topic, see
also \cite{MRKY}.
\begin{figure}
\includegraphics[width=.9\hsize]{satya_vix.eps}
\caption{Time series of the (implied) S\&P500 index
volatility (the VIX, bottom green curve) and the average implied stock volatility for SPX stocks (middle red curve) and
mid-cap stocks (upper black curve), in the period 2000-2007. It is clear that stock volatilities are on average
two to three times larger than the volatility of the index, and that values of $\sigma$ above $40 \%$ are not uncommon.}
\end{figure}
\section{The Set-Up}
In this section we give the set-up of the problem using physicists' notation. We assume, as in \cite{SXZ}, that
the price $P_t$ of a stock follows a geometric Brownian motion:
\begin{equation}
P_t= \exp\left[\mu t + \sigma\,B_t\right],
\label{BS1}
\end{equation}
where $\mu$ is the It\^o-corrected drift and $B_t$ the standard Brownian motion.
We will use below the notation $x(t)=\mu t + \sigma B_t$ for the drifted Brownian motion,
with by convention $x(0)=0$. In Ref. \cite{SXZ}, the authors introduce the notation $\alpha=\mu/\sigma^2+1/2$.
A `good' stock in the financial language corresponds to a positive drift $\mu>0$ (i.e., $\alpha>1/2$)
and a `bad' stock corresponds to a negative drift $\mu<0$ ($\alpha<1/2$). In terms of the real return $a$ of the stock,
the condition $\mu >0$ translates into $a-r > \sigma^2/2$, where $a-r$ is the excess return over the risk-free rate $r$.
Note that the process that we talk about is the real world process and not the risk-neutral one,
which has no meaning for the question raised in Ref. \cite{SXZ}.
Let us consider the evolution of the stock price over a fixed time interval
$0\le t \le T$. It is intuitively obvious that the maximum of a
drifted Brownian motion
and hence that of the stock price $P_t$ is most likely to occur at $t=T$ (for $\mu>0$)
and $t=0$ (for $\mu<0$). Thus, it obviously makes sense to sell a `good' stock $(\mu>0)$
at the end of the interval $t=T$, and a `bad' stock $(\mu<0)$ at the beginning
of the interval $t=0$. These intuitive results are put on a more rigorous mathematical
footing in the rest of this note by (i) calculating exactly, using path integral methods, the maximal
relative error as
defined in Ref. \cite{SXZ}, but for all values of $\mu$ and (ii) also by computing the
full probability density of the time $t_m$ at which the maximum of the price occurs for
all $\mu$.
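Before any analysis, the intuition can be checked by brute force: a seeded Monte Carlo estimate of ${\rm E}[P_\tau/M_T]$ at the two endpoints $\tau=0$ and $\tau=T$ (the parameter values below are illustrative, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def endpoint_ratios(mu, sigma=0.3, T=1.0, n_steps=252, n_paths=20000):
    # simulate x(t) = mu t + sigma B_t with x(0) = 0, P_t = exp(x(t)),
    # and return the Monte Carlo means E[P_0 / M_T] and E[P_T / M_T]
    dt = T / n_steps
    dx = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    x = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dx, axis=1)], axis=1)
    M = np.exp(x.max(axis=1))                       # M_T for each path
    return (float(np.mean(np.exp(x[:, 0]) / M)),    # tau = 0
            float(np.mean(np.exp(x[:, -1]) / M)))   # tau = T

s0_good, sT_good = endpoint_ratios(mu=+0.5)
s0_bad,  sT_bad  = endpoint_ratios(mu=-0.5)
assert sT_good > s0_good   # good stock: hold until t = T
assert s0_bad  > sT_bad    # bad stock: sell at t = 0
```

With these parameters the gap is large; the body of the note computes $S_\mu(\tau,T)$ exactly for all $\tau$.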
Let $M_T$ denote the maximum price of the stock over the
interval $[0,T]$, i.e.,
\begin{equation}
M_T = \max_{\substack{0\le t\le T}} P_t.
\label{max1}
\end{equation}
Evidently, the optimal time to sell the stock is the one where the
difference between the price of the stock and its maximal value $M_T$
is minimal. A convenient way to estimate this optimal time is to
consider the relative error at a fixed time $\tau$ where $0\le \tau\le T$
\begin{equation}
r_{\mu} (\tau,T)=
{\rm E}\left(\frac{M_T-P_\tau}{M_T}\right)=1-{\rm E}\left(\frac{P_\tau}{M_T}\right)
\label{re1}
\end{equation}
where ${\rm E}$ denotes the expectation value over all realizations of the Brownian
motion. Minimizing $r_{\mu}(\tau,T)$ over $0\le \tau\le T$ gives the optimal
time $\tau^*$. In other words, $\tau^*$ is the time at which the ratio
\begin{equation}
S_{\mu} (\tau,T)= 1-r_{\mu}(\tau,T)= {\rm E}\left(\frac{P_\tau}{M_T}\right)
\label{se1}
\end{equation}
is maximal. The goal is to estimate $S_{\mu}(\tau,T)$ and then maximize it
with respect to $0\le \tau \le T$. Using the
trivial
identity
\begin{equation}
M_T= \max_{\substack{0\le \tau\le T}} P_\tau=\max_{\substack{0\le \tau\le T}}
\left[\exp\left(x(\tau)\right)\right]= \exp\left[\max_{\substack{0\le \tau\le T}} x(\tau)\right]
\label{iden1}
\end{equation}
one can rewrite $S_\mu(\tau,T)$ in Eq. (\ref{se1}) as
\begin{equation}
S_\mu(\tau,T)= {\rm E}\left[\exp\left(-\{\tilde M_T-x(\tau)\}\right)\right]
\label{se2}
\end{equation}
where
\begin{equation}
\tilde M_T= \max_{\substack{0\le t\le T}} x(t)=\ln[M_T]
\label{max2}
\end{equation}
is the
maximum of the drifted Brownian motion $x(t)$ over $0\le t\le T$. Note that
throughout this paper, we will use $t$ as the running time and $\tau$ as a
fixed time.
Let us consider the random variable $y(\tau)= \tilde M_T-x(\tau)$ at a fixed time
$\tau$ and let
$P_\mu(y,\tau)$ denote its probability density function (pdf). Once we know
$P_\mu(y,\tau)$, then from Eq. (\ref{se2}), we can evaluate
\begin{equation}
S_\mu(\tau,T)= \int_0^{\infty} dy\, e^{-y}\, P_\mu(y,\tau).
\label{se3}
\end{equation}
To evaluate the pdf $P_\mu(y,\tau)$, we need the joint pdf of $\tilde M_T$ and
$x(\tau)$ at fixed $\tau$.
\section{Joint distribution of $\tilde M_T$ and $x(\tau)$}
It is convenient
to compute first the cumulative probability
\begin{equation}
F_{\mu}(x,m,\tau)= {\rm Prob}[x(\tau)=x\,\,{\rm and}\,\, \tilde M_T \le m]
\label{cum1}
\end{equation}
where the walk starts at the origin $x(0)=0$ and $\tilde M_T$ is the
global maximum of the walk in $[0,T]$. This cumulative probability
can be computed using a path-integral approach as detailed below.
Clearly $F_\mu(x,m,\tau)$ is the probability that a drifted Brownian motion
$x(t)$ in $0\le t\le T$, starting from $x(0)=0$, reaches $x(\tau)=x$ at a
fixed time $t=\tau$ and in addition, stays below the level $m$ for all $0\le t\le T$.
The last condition comes from the fact that if the global maximum $\tilde M_T \le m$,
the path necessarily stays below the level $m$ for all $0\le t\le T$. An example of
such a path is seen in Fig. 2.
\begin{figure}
\includegraphics[width=.9\hsize]{dbm1.eps}
\caption{A realization of the drifted Brownian motion $x(t)$ in $t\in [0,T]$,
starting at $x(0)=0$, reaching $x(\tau)=x$ at $t=\tau$ and staying below the
level $m$ for all $0\le t\le T$.}
\end{figure}
To compute $F_{\mu}(x,m,\tau)$, it is convenient to consider the shifted process
$y(t)= m- x(t)$ so that the process $y(t)$ evolves, as
\begin{equation}
dy=-dx= -\mu dt -\sigma dB_t.
\label{lange2}
\end{equation}
Thus the shifted process $y(t)$ represents a Brownian motion with a drift
$-\mu$, opposite to that of $x(t)$. In terms of the process $y(t)$, $F_{\mu}(x,m,\tau)$
is just the probability that the process $y(t)$, starting at $y(0)=m$, reaches
the point $y(\tau)= m-x$ at $t=\tau$ and {\it stays positive} in the whole interval
$0\le t\le T$. An example of such an event is shown in Fig. 3.
\begin{figure}
\includegraphics[width=.9\hsize]{dbm2.eps}
\caption{A realization of the shifted Brownian motion $y(t)$ with
drift $-\mu$ in $t\in [0,T]$,
starting at $y(0)=m$, reaching $y(\tau)=m-x$ at $t=\tau$ and staying positive
for all $0\le t\le T$.}
\end{figure}
For the process $y(t)$ in Eq.
(\ref{lange2}),
let us first define the propagator $G_{-\mu}^{+}(y,y_0,t)$ that denotes the
probability that the process starting at $y_0$ at $t=0$, reaches $y$ at time $t$,
but staying {\it positive} in between, i.e., in $[0,t]$. One can then easily
express $F_{\mu}(x,m,\tau)$ in terms of this propagator as (see Fig. 3)
\begin{equation}
F_{\mu}(x,m,\tau)= G_{-\mu}^{+}(m-x,m,\tau)\,\int_0^{\infty} G_{-\mu}^{+}(y',m-x,T-\tau)\,
dy' .
\label{propa1}
\end{equation}
In writing Eq. (\ref{propa1}), we have split the interval $[0,T]$ into two
parts: $[0,\tau]$ and $[\tau,T]$. In the first interval (see Fig. 3), the
process propagates from the initial position $y(0)=m$ to $y(\tau)=m-x$
in time $\tau$ (staying positive in between), hence explaining the first factor
$G_{-\mu}^{+}(m-x,m,\tau)$
in Eq. (\ref{propa1}). In the second interval, the process starting at
the new `initial' position $m-x$, propagates to a final position $y'$
in time $T-\tau$, staying positive in between. Also, the final position
$y'$ can be any positive number and one has to integrate over it. This
explains the second factor in Eq. (\ref{propa1}). Of course, in writing
the path decomposition in Eq. (\ref{propa1}) we have used the renewal property of a
Brownian motion (valid due to its Markovian nature) which implies that the two intervals (left of
$\tau$ and right of
$\tau$) are statistically independent.
\vskip 0.3cm
{\noindent {\bf Evaluation of the propagator $G_{-\mu}^{+}(y,y_0,\tau)$:}} Using a physicist's
interpretation of Eq. (\ref{lange2}), we note that the Langevin noise $\eta(t)=dB_t/dt$ is a Gaussian
white noise with the associated measure, ${\rm Prob}\left[\{\eta(t)\}, 0\le t\le \tau \right]\propto
\exp\left[-\frac{1}{2}\int_0^{\tau}
\eta^2(t)dt\right]$. Substituting $\eta(t)=-({\dot y}+\mu)/\sigma$ from Eq. (\ref{lange2}) (only $\eta^2$ enters the measure),
one can express the propagator as a path integral
\begin{equation}
G_{-\mu}^{+}(y,y_0,\tau)=\int_{y(0)=y_0}^{y(\tau)=y}{\cal D}y(t)
\exp\left[-\frac{1}{2\sigma^2}\int_0^\tau dt \left({\dot y}+\mu\right)^2\right]\,
\left[\prod_{t=0}^\tau
\theta(y(t))\right]
\label{green1}
\end{equation}
where $\left[\prod_{t=0}^\tau
\theta(y(t))\right]$ is an indicator function that enforces the path to stay positive in
the interval $t\in [0,\tau]$. The rhs of Eq. (\ref{green1}) can be rearranged (by expanding
the square $({\dot y}+\mu)^2$ and performing the time integral) as
\begin{equation}
G_{-\mu}^{+}(y,y_0,\tau)= \exp\left[-\frac{\mu^2\tau}{2\sigma^2}-\frac{\mu}{\sigma^2}(y-y_0)\right]\,
G_0^{+}(y,y_0,\tau)
\label{green2}
\end{equation}
where $G_0^{+}(y,y_0,\tau)$ is the propagator associated with the driftless $(\mu=0)$ Brownian motion
\begin{equation}
G_0^{+}(y,y_0,\tau)= \int_{y(0)=y_0}^{y(\tau)=y}{\cal D}y(t)
\exp\left[-\frac{1}{2\sigma^2}\int_0^\tau dt \, {\dot y}^2\right]\, \left[\prod_{t=0}^\tau
\theta(y(t))\right].
\label{greenfree}
\end{equation}
This propagator, which denotes the probability that a driftless Brownian motion
propagates from $y_0$ to $y$ in time $\tau$ without crossing the origin in between,
can be evaluated
very simply by the standard method of images~\cite{Feller,Redner} or
alternatively by the path integral method~\cite{BF} and has the
well known expression
\begin{equation}
G_0^{+}(y,y_0,\tau)= \frac{1}{\sqrt{2\pi \sigma^2
\tau}}\,\left[\exp\left(-\frac{(y-y_0)^2}{2\sigma^2\tau}\right)
-\exp\left(-\frac{(y+y_0)^2}{2\sigma^2\tau}\right)\right].
\label{greenfree1}
\end{equation}
Substituting this in Eq. (\ref{green2}), one then has the required propagator.
Using this explicit expression for $G_{-\mu}^{+}(y,y_0,\tau)$ one can also
easily evaluate the following integral
\begin{equation}
\int_0^{\infty} G_{-\mu}^{+}(y,y_0,\tau)\, dy = \frac{1}{2}\,
\left[ {\rm erfc}\left(-\frac{y_0-\mu \tau}{\sqrt{2\sigma^2 \tau}}\right)-
\exp\left(\frac{2\mu\,y_0}{\sigma^2}\right)\,{\rm erfc}\left(
\frac{y_0+\mu
\tau}{\sqrt{2\sigma^2 \tau}}\right)\right]
\label{righthalf}
\end{equation}
where ${\rm erfc}(x)= \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-u^2}\,du$ is the complementary
error function.
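As a quick numerical consistency check (not part of the original derivation; all function names and parameter values below are our own choices), one can integrate the explicit propagator of Eqs. (\ref{green2}) and (\ref{greenfree1}) over $y$ with a simple trapezoid rule and compare the result against the closed form in Eq. (\ref{righthalf}), using only the Python standard library:

```python
import math

def G0_plus(y, y0, tau, sigma=1.0):
    """Driftless absorbing propagator, Eq. (greenfree1), via the method of images."""
    pref = 1.0 / math.sqrt(2 * math.pi * sigma**2 * tau)
    return pref * (math.exp(-(y - y0)**2 / (2 * sigma**2 * tau))
                   - math.exp(-(y + y0)**2 / (2 * sigma**2 * tau)))

def G_drift_plus(y, y0, tau, mu, sigma=1.0):
    """Drifted absorbing propagator, Eq. (green2): exponential tilt times G0_plus."""
    tilt = math.exp(-mu**2 * tau / (2 * sigma**2) - mu * (y - y0) / sigma**2)
    return tilt * G0_plus(y, y0, tau, sigma)

def survival_closed_form(y0, tau, mu, sigma=1.0):
    """Closed-form survival probability, Eq. (righthalf)."""
    s = math.sqrt(2 * sigma**2 * tau)
    return 0.5 * (math.erfc(-(y0 - mu * tau) / s)
                  - math.exp(2 * mu * y0 / sigma**2) * math.erfc((y0 + mu * tau) / s))

def survival_by_quadrature(y0, tau, mu, sigma=1.0, ymax=30.0, n=20000):
    """Trapezoid-rule integral of the propagator over y in (0, ymax)."""
    h = ymax / n
    total = 0.5 * (G_drift_plus(0.0, y0, tau, mu, sigma)
                   + G_drift_plus(ymax, y0, tau, mu, sigma))
    for k in range(1, n):
        total += G_drift_plus(k * h, y0, tau, mu, sigma)
    return total * h

for mu in (-0.5, 0.0, 0.7):
    assert abs(survival_by_quadrature(1.0, 0.8, mu)
               - survival_closed_form(1.0, 0.8, mu)) < 1e-5
```

For $\mu>0$ the drift $-\mu$ pushes the process toward the absorbing origin, and the survival probability returned by both routines decreases accordingly.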
Assembling these results in Eq. (\ref{propa1}) gives us an explicit expression for
the cumulative probability
\begin{equation}
F_{\mu}(x,m,\tau)=\frac{e^{-\frac{\mu^2\tau}{2\sigma^2}+\frac{\mu x}{\sigma^2}}}{2\sqrt{2\pi
\sigma^2\tau}}\,
\left[e^{-\frac{x^2}{2\sigma^2\tau}}
-e^{-\frac{(2m-x)^2}{2\sigma^2\tau}}\right]\,
\left[ {\rm erfc}\left(-\frac{m-x-\mu (T-\tau)}{\sqrt{2\sigma^2 (T-\tau)}}\right)-
e^{\frac{2\mu\,(m-x)}{\sigma^2}}\,{\rm erfc}\left(
\frac{m-x+\mu
(T-\tau)}{\sqrt{2\sigma^2 (T-\tau)}}\right)\right].
\label{fxm}
\end{equation}
The joint pdf $Q_{\mu}(x,m)$ of $x(\tau)=x$ at fixed $\tau$ and $\tilde M_T=m$ can then
be
obtained by taking the derivative of $F_{\mu}(x,m,\tau)$ with respect to $m$, i.e.,
\begin{equation}
Q_{\mu}(x(\tau)=x, \tilde M_T=m)= \frac{\partial F_{\mu}(x,m,\tau)}{\partial m}.
\label{jpdfQ}
\end{equation}
\section{Evaluation of the relative error $r_{\mu}(\tau,T)$}
Having obtained the joint pdf $Q_{\mu}(x(\tau)=x, \tilde M_T=m)$ in Eqs. (\ref{jpdfQ})
and (\ref{fxm}), we can easily find the pdf $P_\mu(y,\tau)$ of the variable $y=\tilde M_T-x$
\begin{eqnarray}
P_{\mu}(y,\tau)&=& \int Q_{\mu}(x, m)\, \delta\left(y-(m-x)\right)\, dx\, dm \nonumber \\
&=& \int_0^{\infty} Q_{\mu}(m-y,m)\, dm.
\label{pdfy1}
\end{eqnarray}
The above integral can be performed exactly (we skip the details here). One obtains
the following expression
\begin{equation}
P_{\mu}(y,\tau)= \frac{1}{\sqrt{2\pi \sigma^2\tau}}\,f_{\mu}(y,\tau)\,g_{\mu}(y,T-\tau)
+\frac{1}{\sqrt{2\pi \sigma^2(T-\tau)}}\,f_{-\mu}(y,T-\tau)\,g_{-\mu}(y,\tau)
\label{pdfy2}
\end{equation}
where
\begin{eqnarray}
f_{\mu}(y,\tau)&=&
\exp\left(-\frac{(y+\mu\tau)^2}{2\sigma^2\tau}\right)+\frac{\mu}{\sigma}
\sqrt{\frac{\pi \tau}{2}}\,e^{-2\mu y/\sigma^2}\,{\rm erfc}\left(\frac{y-\mu
\tau}{\sqrt{2\sigma^2\tau}}\right) \label{f1}\\
g_{\mu}(y,\tau)&=& \left[ {\rm erfc}\left(-\frac{y-\mu \tau}{\sqrt{2\sigma^2 \tau}}\right)-
e^{2\mu\,y/\sigma^2}\,{\rm erfc}\left(
\frac{y+\mu\tau}{\sqrt{2\sigma^2 \tau}}\right)\right]
\label{g1}
\end{eqnarray}
Note from the explicit expression of $P_\mu(y,\tau)$ the following symmetry
\begin{equation}
P_{\mu}(y,\tau)= P_{-\mu}(y, T-\tau)
\label{symm1}
\end{equation}
which has the simple physical meaning of time-reversal symmetry, i.e., when the
process propagates in the reverse time direction, one gets the same measure
provided one also reverses the sign of the drift $\mu$.
Having obtained the pdf $P_\mu(y,\tau)$, one can then evaluate the relative
error $r_\mu(\tau,T)=1-S_\mu(\tau,T)$ where
\begin{equation}
S_{\mu}(\tau,T)= \int_0^{\infty} dy\, e^{-y} P_{\mu}(y,\tau)
\label{sm1}
\end{equation}
Evidently $S_{\mu}(\tau,T)$ also has the same time-reversal symmetry namely
\begin{equation}
S_{\mu}(\tau,T)= S_{-\mu}(T-\tau, T)
\label{symm2}
\end{equation}
While it is difficult to evaluate the integral in Eq. (\ref{sm1}) analytically,
it is easily computed numerically, for instance using Mathematica. Moreover, the general features of $S_{\mu}(\tau,T)$ as a function
of $\tau$ can be inferred by studying the asymptotic behaviour of the integral in Eq.
(\ref{sm1}) in the
limits $\tau\to 0$ and $\tau \to T$. In Fig. 4, we
show a
plot of $S_{\mu}(\tau,T)$ for three different values
of the drift, $\mu=0.1$, $\mu=0$ and $\mu=-0.1$, upon setting $T=1$ and $\sigma=1$.
\begin{figure}
\includegraphics[width=.9\hsize]{st1.eps}
\caption{Plots of $S_{\mu}(\tau, T)$ vs. $\tau$ obtained from Eq. (\ref{sm1}) for three different
values of the drift
$\mu=0.1$, $\mu=0$ and $\mu=-0.1$. We have set $T=1$ and $\sigma=1$. The symmetry
$S_\mu(\tau,T)=S_{-\mu}(T-\tau,T)$ is
evident.}
\end{figure}
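The curves of Fig. 4 can be reproduced directly from Eqs. (\ref{pdfy2})--(\ref{g1}) and (\ref{sm1}). The sketch below (our own code, with arbitrary parameter choices) evaluates $S_\mu(\tau,T)$ by trapezoid quadrature and also confirms the normalization of $P_\mu(y,\tau)$ and the time-reversal symmetry of Eq. (\ref{symm2}):

```python
import math

def f_mu(y, tau, mu, sigma=1.0):
    """Eq. (f1)."""
    s2 = sigma * sigma
    return (math.exp(-(y + mu * tau) ** 2 / (2 * s2 * tau))
            + (mu / sigma) * math.sqrt(math.pi * tau / 2)
            * math.exp(-2 * mu * y / s2)
            * math.erfc((y - mu * tau) / math.sqrt(2 * s2 * tau)))

def g_mu(y, tau, mu, sigma=1.0):
    """Eq. (g1)."""
    s2 = sigma * sigma
    return (math.erfc(-(y - mu * tau) / math.sqrt(2 * s2 * tau))
            - math.exp(2 * mu * y / s2)
            * math.erfc((y + mu * tau) / math.sqrt(2 * s2 * tau)))

def P_mu(y, tau, mu, T=1.0, sigma=1.0):
    """Pdf of y = M_T - x(tau), Eq. (pdfy2)."""
    s2 = sigma * sigma
    return (f_mu(y, tau, mu, sigma) * g_mu(y, T - tau, mu, sigma)
            / math.sqrt(2 * math.pi * s2 * tau)
            + f_mu(y, T - tau, -mu, sigma) * g_mu(y, tau, -mu, sigma)
            / math.sqrt(2 * math.pi * s2 * (T - tau)))

def integrate(func, ymax=20.0, n=20000):
    """Composite trapezoid rule on (0, ymax)."""
    h = ymax / n
    total = 0.5 * (func(1e-12) + func(ymax))
    for k in range(1, n):
        total += func(k * h)
    return total * h

def S_mu(tau, mu, T=1.0, sigma=1.0):
    """Eq. (sm1)."""
    return integrate(lambda y: math.exp(-y) * P_mu(y, tau, mu, T, sigma))

# normalization of the pdf and the symmetry S_mu(tau,T) = S_{-mu}(T-tau,T)
assert abs(integrate(lambda y: P_mu(y, 0.3, 0.1)) - 1.0) < 1e-4
assert abs(S_mu(0.3, 0.1) - S_mu(0.7, -0.1)) < 1e-10
```

The symmetry check holds to machine precision because Eq. (\ref{symm1}) is a pointwise identity: the two terms of Eq. (\ref{pdfy2}) are simply exchanged under $(\mu,\tau)\to(-\mu,T-\tau)$.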
\vskip 0.3cm
{\noindent {\bf Optimal Time $\tau^*$:}} To find the optimal time $\tau^*$ we need to minimize
$r_\mu(\tau,T)$, i.e., maximize $S_\mu(\tau,T)$ with respect to $\tau\in [0,T]$. It is
evident from Fig. 4 and also from the expression of $S_{\mu}(\tau,T)$ that for all values of $\mu$,
$S_{\mu}(\tau,T)$ has two local maxima at the endpoints of the interval $[0,T]$, i.e., respectively
at
$\tau=0$ and $\tau=T$. However, for $\mu>0$, the maximum at $\tau=T$ has a larger value implying
that for $\mu>0$, $\tau^*=T$. By the symmetry manifest in $S_{\mu}(\tau, T)=S_{-\mu}(T-\tau,T)$ it
follows that for $\mu<0$, the maximum at $\tau=0$ has a higher value implying $\tau^*=0$ for $\mu<0$.
Exactly at $\mu=0$, both local maxima at $\tau=0$ and $\tau=T$ have the same value
($S_0(\tau,T)$ is completely symmetric around the midpoint $\tau=T/2$) implying that
for $\mu=0$, both $\tau^*=0$ and $\tau^*=T$ are optimal.
The optimal value $S_{\mu}(\tau^*,T)$ is actually easier to evaluate since for $\tau=0$ or
$\tau=T$ (at the end-points), the integral in Eq. (\ref{sm1}) can be carried out explicitly.
Omitting details of this integration, we get the following expression for the optimal relative error
for all $\mu$
\begin{eqnarray}
r(\tau^*,T)&=& 1-S_{\mu}(\tau^*,T) \nonumber \\
&=& 1- \frac{|\mu|}{2|\mu|+\sigma^2}\,{\rm erfc}\left(-\frac{|\mu|}{\sigma}\sqrt{\frac{T}{2}}\right)-
\frac{\sigma^2+|\mu|}{\sigma^2+2|\mu|} \, \exp\left[-\left(|\mu|+\frac{\sigma^2}{2}\right)\,T\right]
\,{\rm
erfc}\left(\left(\frac{|\mu|}{\sigma^2}+1\right)\sqrt{\frac{\sigma^2T}{2}}\right).
\label{opterr}
\end{eqnarray}
Note that the optimal relative error is evidently a symmetric function of $\mu$ as manifest
in the above result.
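Equation (\ref{opterr}) is straightforward to evaluate, and one can sanity-check it in both limits: as $T\to 0$ one finds $r(\tau^*,T)\to 0$ (over a vanishing horizon the price barely moves away from its maximum), while for $\mu\neq 0$ and $T\to\infty$ one can check that $r(\tau^*,T)\to \sigma^2/(2|\mu|+\sigma^2)$. A minimal sketch (our code):

```python
import math

def optimal_relative_error(mu, T, sigma=1.0):
    """Eq. (opterr); depends on mu only through |mu|."""
    am, s2 = abs(mu), sigma * sigma
    term1 = am / (2 * am + s2) * math.erfc(-(am / sigma) * math.sqrt(T / 2))
    term2 = ((s2 + am) / (s2 + 2 * am)
             * math.exp(-(am + s2 / 2) * T)
             * math.erfc((am / s2 + 1) * math.sqrt(s2 * T / 2)))
    return 1.0 - term1 - term2

# small-horizon limit: no time for the price to move, so the error vanishes
assert optimal_relative_error(0.3, 1e-10) < 1e-4
# large-horizon limit: r -> sigma^2 / (2|mu| + sigma^2) = 1/1.6 for mu=0.3, sigma=1
assert abs(optimal_relative_error(0.3, 200.0) - 1.0 / 1.6) < 1e-3
# symmetry under mu -> -mu (trivial here, since only |mu| enters)
assert optimal_relative_error(0.5, 2.0) == optimal_relative_error(-0.5, 2.0)
```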
In the preceding paper~\cite{SXZ}, Shiryaev, Xu and Zhou also obtained an expression for
the optimal relative error $r(\tau^*,T)$ by a completely different method.
Their notations are slightly different from above. In their notation, $\mu=(\alpha-1/2)\sigma^2$
and their result for $r(\tau^*,T)$ is expressed in terms of the cumulative distribution function
of a Gaussian random variable
with zero mean and unit variance, $\Phi(x)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-u^2/2} du$,
which is related to the complementary error function via
\begin{equation}
\Phi(x)= \frac{1}{2}\, {\rm erfc}\left(-\frac{x}{\sqrt{2}}\right).
\label{norm1}
\end{equation}
However, their method allows them to obtain an explicit expression for the optimal relative error
only in the range $\alpha\ge 1/2$ (i.e., $\mu\ge 0$) and $\alpha\le 0$ (i.e., $\mu\le -\sigma^2/2$).
In these ranges, their expressions for the optimal relative error (Eqs. (9) and (11) in \cite{SXZ})
reduce precisely to our compact result in Eq. (\ref{opterr}), upon identifying
$\mu=(\alpha-1/2)\sigma^2$ and $\Phi(x)$ as in Eq. (\ref{norm1}). However,
they do not have any result in the range $0<\alpha<1/2$, i.e., for $-\sigma^2/2<\mu<0$. In contrast,
our result in Eq. (\ref{opterr}) is valid for all $\mu$ (and hence for all $\alpha$) and
is therefore more general. In addition, their method somehow does not detect the symmetry
of $r(\tau^*,T)$ under $\mu\to -\mu$ which is manifest in our path integral approach.
\section{The exact distribution of the time $t_m$ of the occurrence of the maximum for a Brownian
motion
with drift $\mu$}
Minimizing the relative error $r(\tau,T)$ with respect to $\tau$ is one way of estimating
the optimal time $\tau^*$ at which one should sell a stock over a fixed investment time
horizon $T$, as explained above. Another alternative and direct measure would be to
first derive the probability density $p(t_m,T)$ of the time $t_m$ at which the maximum $M_T$ of a
stock price over $[0,T]$ actually occurs. This density $p(t_m,T)$ will typically have
a peak (or more peaks). The value of $t_m=t^*$ at which the strongest peak of $p(t_m,T)$
occurs can then be taken as an alternative measure for the optimal time to sell a stock, since the
maximum of the price is most likely to occur at $t_m=t^*$.
In this section, we compute exactly the density $p_\mu(t_m,t)$ of $t_m$ for a Brownian motion
$x(t)$ with
drift $\mu$.
Since the stock price $P_t=\exp[x(t)]$ is just the exponential of $x(t)$ under the Black-Scholes
scenario,
the maximum $M_T$ of the stock price $P_t$ occurs exactly at the same time $t_m$ where
$x(t)$ itself achieves its maximum.
For the case $\mu=0$, the density $p_0(t_m,T)$ was computed by L\'evy~\cite{Levy} and is given
by the derivative of an arcsine form, i.e.,
\begin{equation}
p_0(t_m,T)= \frac{1}{\pi}\, \frac{1}{\sqrt{t_m(T-t_m)}}; \quad\, 0\le t_m \le T
\label{levy1}
\end{equation}
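Despite the inverse-square-root divergences at both endpoints, the density in Eq. (\ref{levy1}) is normalized on $[0,T]$; the substitution $t_m=T\sin^2\theta$ removes the divergences, mapping the integrand to the constant $2/\pi$ on $[0,\pi/2]$. A short numerical confirmation (our code; the closed-form CDF $(2/\pi)\arcsin\sqrt{t/T}$ is the classical arcsine law):

```python
import math

def arcsine_cdf(t, T=1.0):
    """Prob(t_m <= t) for Eq. (levy1): the classical arcsine law."""
    return (2 / math.pi) * math.asin(math.sqrt(t / T))

def arcsine_norm(T=1.0, n=10000):
    """Midpoint-rule integral of p_0(t_m, T) over [0, T].

    The substitution t_m = T*sin(theta)**2 maps the integrand to the
    constant 2/pi on [0, pi/2], taming the endpoint divergences.
    """
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        t = T * math.sin(theta) ** 2
        p = 1.0 / (math.pi * math.sqrt(t * (T - t)))      # Eq. (levy1)
        jac = 2 * T * math.sin(theta) * math.cos(theta)   # dt/dtheta
        total += p * jac * h
    return total

assert abs(arcsine_norm() - 1.0) < 1e-9
assert abs(arcsine_cdf(0.5) - 0.5) < 1e-12   # symmetry about T/2
```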
Recently, using an appropriate path integral method, the density of $t_m$ was computed exactly for a
Brownian motion up to
its first-passage time ~\cite{RM} and also for a variety of constrained Brownian motions
such as Brownian excursions, Brownian bridges, Brownian meanders etc.~\cite{MRKY}. Here we adapt
this
path integral method to compute the density $p_{\mu}(t_m,T)$ for a Brownian motion
with arbitrary drift $\mu$.
To compute the density $p_{\mu}(t_m,T)$ the strategy would be to first compute
the joint density of $t_m$ as well as the maximum $\tilde M_T=m$ itself, i.e., $p_{\mu}(t_m,m,T)$
and then integrate over $m$ to obtain the marginal density, $p_\mu(t_m,T)= \int_0^{\infty}
p_\mu(t_m,m,T)\,dm$. The joint density $p_\mu(t_m,m,T)$ is proportional to the
sum of the statistical weights of all paths that start at the origin $x(0)=0$, reach
the value $x(t_m)=m$ for the {\it first time} at $t=t_m$, and then stay below the level $m$
at all subsequent times up to $T$, i.e., in the interval $[t_m,T]$. Enforcing the conditions
that $x(t)<m$ in the two intervals $t\in [0,t_m]$ and $t\in [t_m,T]$, and that exactly at $t_m$ the
path reaches $x(t_m)=m$, poses a problem for a continuous-time
Brownian motion. This is because if a Brownian motion crosses a level $m$ at a given time $t_m$
then it must cross and re-cross the same level $m$ an infinite number of times in the
vicinity of $t=t_m$. Hence it is impossible to enforce the above constraints simultaneously for
a continuous-time Brownian motion. Note that for lattice random walks this
does not pose any problem. To get around this
difficulty with the continuous-time Brownian motion,
one introduces a small cut-off $\epsilon$~\cite{RM,MRKY}, i.e., one assumes that the path, starting
at $x(0)=0$
reaches the level $m-\epsilon$ at time $t_m$, staying below $m$ for all $0\le t< t_m$
and then starting at $m-\epsilon$ at $t=t_m$ stays below the level $m$ for all $t_m<t\le T$ (see
Fig. 5 for such a realization). Finally one takes the limit
$\epsilon\to 0$ at the
end of the calculation.
\begin{figure}
\includegraphics[width=.9\hsize]{bmtm.eps}
\caption{A realization of the drifted Brownian motion $x(t)$ in $t\in [0,T]$,
starting at $x(0)=0$, reaching $x(t_m)=m-\epsilon$ at $t=t_m$ and staying below the
level $m$ for all $0\le t\le T$.}
\end{figure}
Comparing Figs. 2 and 5, it is clear that the paths that contribute to the joint probability
density $p_{\mu}(t_m,m,T|\epsilon)$ are identical to those that contribute to $F_{\mu}(x,m,\tau)$
with the replacements $x=m-\epsilon$ and $\tau=t_m$ in Eq. (\ref{fxm}), i.e.,
$p_{\mu}(t_m,m,T|\epsilon)\propto F_{\mu}(m-\epsilon,m,t_m)$. Substituting $x=m-\epsilon$
and $\tau=t_m$ in Eq. (\ref{fxm}) and taking the $\epsilon\to 0$ limit
we find, to leading order in $\epsilon$,
\begin{equation}
p_{\mu}(t_m,m,T|\epsilon)\xrightarrow[\epsilon\to 0]{} A\, \epsilon^2\, \frac{m
e^{-(m-\mu t_m)^2/{2\sigma^2\, t_m}}}{\sqrt{2\pi\, \sigma^6\, t_m^3}}\,\left[\frac{2}{\sqrt{2\pi
\sigma^2
(T-t_m)}}\,e^{-\mu^2(T-t_m)/{2\sigma^2}}-\frac{\mu}{\sigma^2}\, {\rm
erfc}\left(\frac{\mu}{\sigma}\sqrt{\frac{T-t_m}{2}}\right)\right]
\label{tmdist1}
\end{equation}
where the constant of proportionality $A$, which is a function of $\epsilon$, is determined
from the normalization, $\int_0^T dt_m \int_0^{\infty} dm\, p_{\mu}(t_m,m,T|\epsilon\to 0)=1$.
This fixes $A= \sigma^2/{\epsilon^2}$. Integrating $p_{\mu}(t_m,m,T)$ (now the cut-off
$\epsilon$ has been set to $0$) over $m$ finally gives the marginal density $p_{\mu}(t_m,T)$
in a closed form
\begin{equation}
p_{\mu}(t_m,T)= \frac{1}{\pi \sqrt{t_m(T-t_m)}}\, h(t_m,\mu)\, h(T-t_m,-\mu)
\label{tmdens1}
\end{equation}
where
\begin{equation}
h(t_m,\mu)= \exp\left(-\frac{\mu^2t_m}{2\sigma^2}\right)+\frac{\mu}{\sigma}\,\sqrt{\frac{\pi
t_m}{2}}\,{\rm erfc}\left(-\frac{\mu}{\sigma}\sqrt{\frac{t_m}{2}}\right).
\label{htm1}
\end{equation}
The density $p_{\mu}(t_m,T)$ given in Eqs. (\ref{tmdens1}) and (\ref{htm1}) is the main result of
this section.
Evidently, for $\mu=0$, one recovers from this the well known arcsine result of L\'evy in Eq.
(\ref{levy1}).
Note that the density $p_{\mu}(t_m,T)$ also has a symmetry similar to that in Eq. (\ref{symm2})
namely
\begin{equation}
p_{\mu}(t_m,T)= p_{-\mu}(T-t_m,T).
\label{psymm}
\end{equation}
This symmetry is also evident in Fig. 6 where we plot the density $p_{\mu}(t_m,T)$ in Eq.
(\ref{tmdens1}) for $\mu=1$, $\mu=0$ and $\mu=-1$ upon setting $T=1$ and $\sigma=1$.
\begin{figure}
\includegraphics[width=.9\hsize]{tmax.eps}
\caption{Plots of $p_{\mu}(t_m, T)$ vs. $t_m$ for three different values of the drift
$\mu=1.0$, $\mu=0$ and $\mu=-1.0$. We have set $T=1$ and $\sigma=1$. The symmetry
$p_\mu(t_m,T)=p_{-\mu}(T-t_m,T)$ is
evident.}
\end{figure}
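The normalization of $p_\mu(t_m,T)$, the symmetry in Eq. (\ref{psymm}), and the asymmetry of the two endpoint divergences for $\mu\neq0$ can all be confirmed numerically; the substitution $t_m=T\sin^2\theta$ again tames the square-root divergences. A sketch (our code; parameter values are arbitrary):

```python
import math

def h_func(t, mu, sigma=1.0):
    """Eq. (htm1)."""
    return (math.exp(-mu**2 * t / (2 * sigma**2))
            + (mu / sigma) * math.sqrt(math.pi * t / 2)
            * math.erfc(-(mu / sigma) * math.sqrt(t / 2)))

def p_tm(t, mu, T=1.0, sigma=1.0):
    """Density of the time of the maximum, Eq. (tmdens1)."""
    return (h_func(t, mu, sigma) * h_func(T - t, -mu, sigma)
            / (math.pi * math.sqrt(t * (T - t))))

def norm_p(mu, T=1.0, sigma=1.0, n=20000):
    """Integrate p_tm over [0, T] via t = T*sin(theta)**2 (midpoint rule)."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        t = T * math.sin(theta) ** 2
        jac = 2 * T * math.sin(theta) * math.cos(theta)
        total += p_tm(t, mu, T, sigma) * jac * h
    return total

for mu in (-1.0, 0.0, 1.0):
    assert abs(norm_p(mu) - 1.0) < 1e-5
# pointwise time-reversal symmetry, Eq. (psymm)
assert abs(p_tm(0.2, 1.0) - p_tm(0.8, -1.0)) < 1e-12
# for mu > 0 the divergence at t_m = T is the stronger one: A_{-mu}(T) > A_mu(T)
assert h_func(1.0, 1.0) > h_func(1.0, -1.0)
```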
We note from Eq. (\ref{tmdens1}), as well as from Fig. 6, that for all values of $\mu$ the density
$p_{\mu}(t_m,T)$ has two peaks (in fact, square-root divergences) at the two end points $t_m=0$
and $t_m=T$,
\begin{eqnarray}
p_{\mu}(t_m\to 0, T) &\approx& \frac{A_\mu(T)}{\sqrt{t_m}} \label{tml} \\
p_{\mu}(t_m\to T, T) &\approx & \frac{A_{-\mu}(T)}{\sqrt{T-t_m}} \label{tmr}
\end{eqnarray}
where the amplitude
\begin{equation}
A_{\mu}(T) = \frac{1}{\pi
\sqrt{T}}\,\left[\exp\left(-\frac{\mu^2T}{2\sigma^2}\right)-\frac{\mu}{\sigma}\,\sqrt{\frac{\pi
T}{2}}\,{\rm erfc}\left(\frac{\mu}{\sigma}\sqrt{\frac{T}{2}}\right)\right].
\label{ampl1}
\end{equation}
However, for $\mu>0$, the divergence at $t_m=T$ is stronger than that at $t_m=0$ since $A_{-\mu}(T)>
A_{\mu}(T)$. On the other hand, for $\mu<0$, the opposite is true. At $\mu=0$, both ends have
the same divergences as the density is completely symmetric around $t_m=T/2$. Thus, we conclude
that the maximum of the Brownian motion with drift $\mu$ is most likely to occur at $t_m=T$
for $\mu>0$, at $t_m=0$ for $\mu<0$, and for $\mu=0$ both $t_m=0$ and $t_m=T$ are equally likely.
This then leads us to identify the optimal time $\tau^*$ to sell the stock (within the Black--Scholes
economy model) to be $\tau^*=T$ for $\mu>0$, $\tau^*=0$ for $\mu<0$, and $\tau^*=0, T$ (equally
likely) for $\mu=0$. Thus, based on the analysis of the density
$p_{\mu}(t_m,T)$ we draw the same
conclusion as was obtained from the optimization of the relative error in the previous sections.
\section{Introduction}
\setcounter{equation}{0}
The Jacobian conjecture was first formulated in 1939 by Keller \cite{K}; it states that a polynomial
map $P:\bfF^n\to \bfF^n$, where $\bfF$ is a field of characteristic zero and $n\geq2$, has a polynomial inverse if the Jacobian
$J(P)$ of $P$ is a nonzero constant. The conjecture remains unsolved for every
$n\geq2$ and appears in Smale's list of
the eighteen mathematical problems for the new century \cite{S}. With a suitable normalization, the polynomial
map $P$ may be taken to satisfy the condition
$
P(0)=0$ and $DP(0)=I$ such that it has the representation
\be\lb{1.1}
P(x_1,\dots,x_n)=(x_1+H_1,\dots,x_n+H_n),\quad (x_1,\dots,x_n)\in\bfF^n,
\ee
where $H_1,\dots,H_n$ are polynomials in the variables $x_1,\dots,x_n$ consisting of terms of degrees at least 2 in nontrivial situations so that
the condition imposed on $J(P)$ becomes $J(P)=1$.
Among the notable developments, Wang \cite{W} established the conjecture when $H_1,\dots,H_n$ are all quadratic and Bass, Connell, and Wright \cite{BCW} and Yagzhev \cite{Y} proved an important reduction theorem which states that the general conjecture
amounts to showing that the conjecture is true for the special case when each of $H_1,\dots,H_n$ is either cubic-homogeneous or zero
for {\em all} $n$. Subsequently, Druzkowski \cite{D} further showed that the
cubic-homogeneous reduction of \cite{BCW,Y} may be
assumed to be of the form of cubic-linear type,
\be\lb{1.2}
H_i=(a_{i1}x_1+\cdots+a_{in}x_n)^3,\quad i=1,\dots,n.
\ee
In \cite{BE}, de Bondt and van den Essen proved that the conjecture for the case $\bfF=\bfC$ may be reduced to showing that the conjecture is true when the Jacobian matrix of the map $H=(H_1,\dots,H_n)$ is homogeneous, nilpotent, and symmetric, for all $n\geq2$.
For $n=2$, Moh \cite{M} established the conjecture when the degrees of $H_1$ and $H_2$ are up to 100.
See the survey articles \cite{Dru,E1,Mei} and monograph \cite{E2} and references therein for further results and progress. While these developments
were mainly based on ideas and methods of algebra and algebraic geometry, the problem also naturally prompts us to
explore its structure in view of partial differential equations, which will be our take here. In doing so,
we are able to obtain some new families of polynomial maps satisfying the conjecture. Below we unfold our study
going from low dimensions to arbitrary dimensions.
Due to their analytic simplicity, invertible polynomial maps are of obvious interest and importance in
applications. For example, consider the autonomous dynamical system
\be\lb{1.3}
\dot{x}_i=X_i(x_1,\dots,x_n),\quad i=1,\dots,n,
\ee
describing the trajectory of a hypothetical particle with coordinates $x_1,\dots,x_n$ in $\bfR^n$ in terms
of a time variable $t$ where the overdot denotes the time derivative and $X_1,\dots,X_n$ are polynomial functions
in terms of $x_1,\dots,x_n$. To recast \eq{1.3} into a more tractable form, we consider an invertible transformation
\be\lb{1.4}
(u_1,\dots,u_n)=P(x_1,\dots,x_n)=(P_1(x_1,\dots,x_n),\dots,P_n(x_1,\dots,x_n)),
\ee
where $P_1,\dots,P_n$ are differentiable functions of $x_1,\dots,x_n$, to obtain an equivalent dynamical system
\be\lb{1.5}
\dot{u}_i=U_i(u_1,\dots,u_n),\quad i=1,\dots,n.
\ee
If we are to stay within the class of autonomous dynamical systems with polynomial-type nonlinearity, it suffices to work with transformations of the kind in \eq{1.4} for which $P_1,\dots,P_n$ are polynomials
in $x_1,\dots,x_n$ and $P$ is an invertible map, so that
\be\lb{1.6}
(x_1,\dots,x_n)=Q(u_1,\dots,u_n)\equiv P^{-1}(u_1,\dots,u_n),
\ee
a polynomial map as well. Thus, with \eq{1.4}, \eq{1.6}, and the differential or
the Jacobian matrix $DP=\left(\frac{\pa P_i}{\pa x_j}\right)$
in terms of $x_1,\dots,x_n$, we arrive at
\be\lb{1.7}
(\dot{u}_1,\dots,\dot{u}_n)^\tau=(DP)(\dot{x}_1,\dots,\dot{x}_n)^\tau=(DP)(X_1,\dots,X_n)^\tau(u_1,\dots,u_n),
\ee
which indicates that the functions $U_1,\dots,U_n$ in \eq{1.5} are polynomials in $u_1,\dots,u_n$ indeed.
As a consequence, we may gain insight into a dynamical system under study by transforming it back and forth using
explicitly constructed invertible transformations of polynomial type, which may not be available otherwise.
\section{Two dimensions}
\setcounter{equation}{0}
First we consider $n=2$ and rewrite \eq{1.1} conveniently as
\be\lb{2.1}
P(x,y)=(x+f(x,y),y+g(x,y)),
\ee
where $f$ and $g$ are polynomials in the variables $x,y$ over $\bfF$ consisting of terms of degrees at least 2
in nontrivial situations. Inserting \eq{2.1}
into $J(P)=1$ we have
\be\lb{2.2}
f_x+g_y+J(f,g)(x,y)=0,
\ee
where $f_x$ (e.g.) denotes the partial derivative of $f$ with respect to $x$ and $J(f,g)(x,y)$
the Jacobian of the map $(f,g)$ over $x,y$. That is, $J(f,g)(x,y)=\frac{\pa(f,g)}{\pa(x,y)}$. This is an underdetermined equation which may be solved by
setting
\be\lb{2.3}
f_x+g_y=0,\quad J(f,g)(x,y)=0,
\ee
separately, such that the first equation in \eq{2.3} implies that there is a polynomial $h(x,y)$ serving as a scalar potential of the divergence-free vector field $(f,g)$
satisfying
\be\lb{2.4}
f=h_y,\quad g=-h_x.
\ee
Inserting \eq{2.4} into the second equation in \eq{2.3} we see that $h$ satisfies the homogeneous Monge--Amp\'{e}re equation
\cite{A,E}
\be\lb{2.5}
\det(D^2 h)=h_{xx}h_{yy}-h^2_{xy}=0.
\ee
Alternatively, if we are only concerned with $f$ and $g$ being homogeneous of the same degree, then
a degree counting argument applied to \eq{2.2} leads to
two separate equations, as given in \eq{2.3}, as well. Hence we arrive at \eq{2.5} again.
To proceed, we consider the solution of \eq{2.5} of the homogeneous type
\be\lb{2.6}
h(x,y)=\sigma(\xi),\quad \xi=ax+by,\quad a,b\in\bfF,
\ee
as suggested by \eq{1.1}--\eq{1.2}, where the arbitrary polynomial function $\sigma(\xi)$ is taken to be of degree $m\geq3$ or zero. Thus, with the
notation $P(x,y)=(u,v)$ and the relations \eq{2.1} and \eq{2.4}, we have
\be\lb{2.7}
u=x+b\sigma'(\xi),\quad v=y-a\sigma'(\xi),
\ee
resulting in the invariance condition between the two sets of the variables:
\be\lb{2.8}
au+bv=ax+by=\xi.
\ee
As a consequence of \eq{2.7} and \eq{2.8}, we obtain the inverse of the map $P$ immediately as follows:
\be
x=u-b\sigma'(\xi),\quad y=v+a\sigma'(\xi),\quad\xi=au+bv.
\ee
As a by-product, the arbitrariness of the function $\sigma$ indicates that the solution gives rise to a family of
polynomial maps of arbitrarily high degrees.
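The pair of maps \eq{2.7} and its inverse can be verified mechanically. In the sketch below (our own code; the generator $\sigma(\xi)=\xi^4$ and the constants $a=2$, $b=3$ are arbitrary illustrative choices), a point is round-tripped through the map and its inverse, and $J(P)=1$ is checked by central finite differences:

```python
def sigma_prime(xi):
    # hypothetical generator: sigma(xi) = xi**4, hence sigma'(xi) = 4*xi**3
    return 4 * xi**3

A, B = 2.0, 3.0  # the constants a, b of Eq. (2.6), chosen arbitrarily

def forward(x, y):
    """The map P of Eq. (2.7)."""
    xi = A * x + B * y
    return x + B * sigma_prime(xi), y - A * sigma_prime(xi)

def inverse(u, v):
    """The inverse map, using the invariance a*u + b*v = a*x + b*y, Eq. (2.8)."""
    xi = A * u + B * v
    return u - B * sigma_prime(xi), v + A * sigma_prime(xi)

def jacobian_det(x, y, h=1e-6):
    """J(P) at (x, y) by central finite differences; should equal 1."""
    ux1, vx1 = forward(x + h, y); ux0, vx0 = forward(x - h, y)
    uy1, vy1 = forward(x, y + h); uy0, vy0 = forward(x, y - h)
    j11, j21 = (ux1 - ux0) / (2 * h), (vx1 - vx0) / (2 * h)
    j12, j22 = (uy1 - uy0) / (2 * h), (vy1 - vy0) / (2 * h)
    return j11 * j22 - j12 * j21

x0, y0 = 0.1, -0.2
u0, v0 = forward(x0, y0)
x1, y1 = inverse(u0, v0)
assert abs(x1 - x0) < 1e-12 and abs(y1 - y0) < 1e-12
assert abs(jacobian_det(x0, y0) - 1.0) < 1e-6
```

The determinant is exactly 1 analytically: writing $c=ab\,\sigma''(\xi)$, one has $\det(DP)=(1+c)(1-c)+c^2=1$.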
For later development, we also observe that the second equation in \eq{2.3} implies that $f$ and $g$ are
functionally dependent. Therefore, if we set $g=G(f)$ (say), then the first
equation in \eq{2.3} leads to
\be
f_x+G'(f)f_y=0,
\ee
which has a nontrivial solution of the homogeneous type, $f=\phi(\xi)$ ($\xi=ax+by$), if and only if
\be\lb{2.11}
g=G(f)=-\frac ab f,\quad b\neq0.
\ee
Thus, it follows that there hold the simplified relations
\be\lb{2.12}
u=x+\phi(\xi),\quad v=y-\frac ab \phi(\xi),\quad au+bv=ax+by=\xi,
\ee
where the invariance relation between the variables again makes the inverse of the map ready to be read off.
If $P:\bfF^n\to \bfF^n$ ($n\geq2$) is a polynomial automorphism (that is, the inverse $P^{-1}$ of $P$ exists
which is also a polynomial map), then it has been shown \cite{BCW,Dru,RW} that there holds the following general bound between the degrees of $P$
and $P^{-1}$:
\be\lb{2.13}
\deg(P^{-1})\leq (\deg(P))^{n-1}.
\ee
In the $n=2$ situation here, the map $P$ stated in \eq{2.12} satisfies $\deg(P)=\deg(P^{-1})$ where $\deg(P)$
may be any positive integer. In other words, the bound \eq{2.13} is seen to be sharp when $n=2$ in all degree cases.
We remark that \eq{2.12} is the most general polynomial automorphism of homogeneous type in two dimensions. To see this, we let the map $P$ be defined by
\be\lb{2.14}
u=x+\phi(\xi),\quad v=y+\psi(\eta),\quad \xi=ax+by,\quad \eta=cx+dy,\quad a,b,c,d\in\bfF,
\ee
where $\phi(\xi)$ and $\psi(\eta)$ are polynomials in the variables $\xi$ and $\eta$, respectively, of
degrees $l\geq2$ and $m\geq2$, satisfying $\phi(0)=\psi(0)=0$. Inserting \eq{2.14} into $J(P)=1$, or $f=\phi$ and
$g=\psi$ into \eq{2.2}, we get
\be\lb{2.15}
a\phi'(\xi)+d\psi'(\eta)+(ad-bc)\phi'(\xi)\psi'(\eta)=0.
\ee
On the other hand, as polynomials in the variables $x,y$, we have
\bea
\deg(a\phi'(\xi)+d\psi'(\eta))&\leq& \max\{l-1,m-1\},\lb{2.16}\\
\deg(\phi'(\xi)\psi'(\eta))&=&(l-1)+(m-1)>\max\{l-1,m-1\},\lb{2.17}
\eea
since $l,m\geq2$. In view of \eq{2.15}--\eq{2.17}, we arrive at $ad-bc=0$. In other words, the variables
$\xi$ and $\eta$ as given in \eq{2.14} are linearly dependent. Consequently, \eq{2.14} is simplified into
the form
\be\lb{2.18}
u=x+\phi(\xi),\quad v=y+\psi(\xi),\quad \xi=ax+by,
\ee
which renders $a\phi'(\xi)+b\psi'(\xi)=0$. Thus \eq{2.12} follows if $b\neq0$ and $\phi(\xi)=-\frac ba\psi(\xi)$ if $a\neq0$. So we have obtained the most general homogeneous solution to the equation
$J(P)=1$ or \eq{2.2}.
It can be checked directly that, when the polynomial map \eq{2.1} is of the type $\deg(P)\leq3$, then
$J(P)=1$ or \eq{2.2} leads to the homogeneous form \eq{2.18}, or more precisely, \eq{2.12}.
\section{Three dimensions}
\setcounter{equation}{0}
Next we consider $n=3$ and rewrite \eq{1.1} as
\be\lb{3.1}
(u,v,w)=P(x,y,z)=(x+f(x,y,z),y+g(x,y,z),z+h(x,y,z)),\quad(x,y,z)\in\bfF^3,
\ee
where $f,g$, and $h$ are polynomials in $x,y,z$ with terms of degrees at least 2
in nontrivial situations. Thus the equation
$J(P)=1$ is recast into
\be\lb{3.2}
f_x+g_y+h_z+J(f,g)(x,y)+J(g,h)(y,z)+J(f,h)(x,z)+J(f,g,h)(x,y,z)=0,
\ee
which is underdetermined as well. If we focus on $f,g$, and $h$ being homogeneous of the same degree, then
\eq{3.2} splits into the coupled system
\be\lb{3.3}
f_x+g_y+h_z=0,\quad J(f,g)(x,y)+J(g,h)(y,z)+J(f,h)(x,z)=0,\quad J(f,g,h)(x,y,z)=0,
\ee
as in Section 2. However, here we are interested in solutions with more general characteristics.
To proceed, we solve the third equation in \eq{3.3} by setting $h=H(f,g)$ where $H$ is a function of
the variables $f$ and $g$ to be determined. Guided by the study in Section 2, we seek solutions of the form
\be\lb{3.4}
f(x,y,z)=\phi(\xi),\, g(x,y,z)=\psi(\eta),\, \xi=ax+by+cz,\, \eta=px+qy+rz,\, a,b,c,p,q,r\in\bfF.
\ee
Inserting \eq{3.4} into the first equation in \eq{3.3}, we get
\be\lb{3.5}
(a+cH_f)\phi'(\xi)+(q+rH_g)\psi'(\eta)=0.
\ee
We wish to allow $\phi$ and $\psi$ to be arbitrary. This leads to $a+cH_f=0$
and $q+rH_g=0$, or
\be\lb{3.6}
h=-\frac ac f-\frac qr g,\quad c,r\neq0,
\ee
which extends \eq{2.11}.
In view of \eq{3.6}, we see that the second equation in \eq{3.3} is equivalent to the equation
\be\lb{3.7}
(ar-cp)(br-cq)=0.
\ee
Thus either $ar=cp$ or $br=cq$. As an example,
we assume the former. That is, suppose
\be\lb{3.8}
\frac ac=\frac pr.
\ee
Consequently, subject to \eq{3.8}, we have solved the Jacobian equation
$J(P)=1$ where $P(x,y,z)=(u,v,w)$ in 3 dimensions with
\be\lb{3.9}
u=x+\phi(\xi),\,v=y+\psi(\eta),\, w=z-\frac ac\phi(\xi)-\frac qr \psi(\eta),\,\xi=ax+by+cz,\,\eta=px+qy+rz.
\ee
Besides, with \eq{3.8}, we also have
\bea
au+bv+cw&=&ax+by+cz+\left(b-\frac{cq}r\right)\psi(\eta)=\xi+\left(b-\frac{cq}r\right)\psi(\eta),\lb{3.10}\\
pu+qv+rw&=&px+qy+rz=\eta.\lb{3.11}
\eea
Thus the quantity $\eta$ is an invariant between the two sets of variables, but $\xi$ is not. In
other words, we achieve a {\em partial invariance}.
As a consequence of \eq{3.9}--\eq{3.11}, we obtain the inverse of the map $P$ given by
\bea
x&=&u-\phi\left(au+bv+cw-\left[b-\frac{cq}r\right]\psi(\eta)\right),\lb{3.12}\\
y&=&v-\psi(\eta),\lb{3.13}\\
z&=&w+\frac ac\phi\left(au+bv+cw-\left[b-\frac{cq}r\right]\psi(\eta)\right)+\frac qr \psi(\eta),\lb{3.14}
\eea
where $\eta=pu+qv+rw$.
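The three-dimensional map \eq{3.9} and its inverse \eq{3.12}--\eq{3.14} can be checked the same way. In the sketch below (our code), the coefficients $a=1$, $b=2$, $c=2$, $p=3$, $q=1$, $r=6$ are arbitrary choices satisfying the constraint \eq{3.8}, and the generators are $\phi(\xi)=\xi^2$, $\psi(\eta)=\eta^3$:

```python
a, b, c = 1.0, 2.0, 2.0
p, q, r = 3.0, 1.0, 6.0   # note a/c = 1/2 = p/r, i.e. the constraint Eq. (3.8)

def phi(xi):
    return xi ** 2          # arbitrary generator

def psi(eta):
    return eta ** 3         # arbitrary generator

def forward(x, y, z):
    """The map P of Eq. (3.9)."""
    xi = a * x + b * y + c * z
    eta = p * x + q * y + r * z
    return (x + phi(xi),
            y + psi(eta),
            z - (a / c) * phi(xi) - (q / r) * psi(eta))

def inverse(u, v, w):
    """The inverse map, Eqs. (3.12)-(3.14): eta = p*u + q*v + r*w is invariant."""
    eta = p * u + q * v + r * w
    xi = a * u + b * v + c * w - (b - c * q / r) * psi(eta)  # cf. Eq. (3.10)
    return (u - phi(xi),
            v - psi(eta),
            w + (a / c) * phi(xi) + (q / r) * psi(eta))

x0, y0, z0 = 0.4, -0.3, 0.5
u0, v0, w0 = forward(x0, y0, z0)
x1, y1, z1 = inverse(u0, v0, w0)
assert max(abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)) < 1e-9
# partial invariance: eta is preserved under the map, xi is not
assert abs((p * u0 + q * v0 + r * w0) - (p * x0 + q * y0 + r * z0)) < 1e-9
```

With these choices $\deg(P)=3$ while $\deg(P^{-1})=\deg(\phi)\deg(\psi)=6$, a concrete instance of the bound \eq{deg}.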
It will be of interest to compare the degrees of the map $P$ given by \eq{3.9} and its inverse $P^{-1}$
given by \eq{3.12}--\eq{3.14}, with regard to the general bound \eq{2.13}, which are
\be
\deg(P)=\max\{\deg(\phi),\deg(\psi)\};\quad \deg(P^{-1})=\deg(\phi)\deg(\psi),\quad br\neq cq.
\ee
Hence we have
\be\lb{deg}
\deg(P^{-1})\leq (\deg(P))^2,
\ee
which is a realization of \eq{2.13} when $n=3$. Of course,
$\deg(P^{-1})=(\deg(P))^2$ if and only if $\deg(\phi)=\deg(\psi)$, and all possible integer combinations
in the inequality \eq{deg} can
be achieved concretely by choosing an appropriate pair of generating polynomials, $\phi$ and $\psi$.
It is worth noting that, if both factors in \eq{3.7} vanish, or both \eq{3.8} and
\be\lb{3.15}
\frac bc=\frac qr
\ee
are simultaneously valid, then \eq{3.10} and \eq{3.11} imply that both $\xi$ and $\eta$ are invariant quantities between
the two sets of the variables. In fact, now $\xi$ and $\eta$ are linearly dependent quantities,
\be\lb{3.16}
r\xi=c\eta.
\ee
In this situation, we may rewrite \eq{3.9} as
\be\lb{3.17}
u=x+\phi(\xi),\, v=y+\psi(\xi),\, w=z-\frac ac\phi(\xi)-\frac bc\psi(\xi),\, \xi=ax+by+cz=au+bv+cw,
\ee
where $\phi$ and $\psi$ are arbitrary functions of $\xi$, which is a direct
3-dimensional extension of \eq{2.12} for which the inverse is obviously constructed as well. Of course,
we now have $\deg(P)=\deg(P^{-1})$ and the equality in \eq{deg} never occurs in nontrivial situations where
$\min\{\deg(\phi),\deg(\psi)\}\geq2$.
We emphasize that the polynomial functions $\phi$ and $\psi$ in \eq{3.12}--\eq{3.14} and \eq{3.17} are of arbitrary degrees.
It may be of interest to explore, for \eq{3.2}, a Monge--Amp\'{e}re-type structure analogous to \eq{2.5} obtained in 2 dimensions for the Jacobian equation \eq{2.2}. For this purpose, we note that the first equation in \eq{3.3} implies that the vector field $(f,g,h)$, being divergence-free, has a vector potential,
$(A,B,C)$, satisfying
\be\lb{3.18}
(f,g,h)=\mbox{curl of } (A,B,C)=(C_y-B_z,A_z-C_x,B_x-A_y).
\ee
Hence \eq{3.2} becomes the following second-order nonlinear equation
\bea\lb{3.19}
&&\left|\begin{array}{cc}C_{xy}-B_{xz}&C_{yy}-B_{yz}\\A_{xz}-C_{xx}&A_{yz}-C_{xy}\end{array}\right|
+\left|\begin{array}{cc}A_{yz}-C_{xy}&A_{zz}-C_{xz}\\B_{xy}-A_{yy}&B_{xz}-A_{yz}\end{array}\right|
+\left|\begin{array}{cc}C_{xy}-B_{xz}&C_{yz}-B_{zz}\\B_{xx}-A_{xy}&B_{xz}-A_{yz}\end{array}\right|\nn\\
&&+\left|\begin{array}{ccc}C_{xy}-B_{xz}&C_{yy}-B_{yz}&C_{yz}-B_{zz}\\A_{xz}-C_{xx}&A_{yz}-C_{xy}&A_{zz}-C_{xz}\\B_{xx}-A_{xy}&B_{xy}-A_{yy}&B_{xz}-A_{yz} \end{array}\right|=0.
\eea
As another reduction of \eq{3.2}, we may set $h=H(f,g)$ where $H$ is a prescribed function of $f$ and $g$. Hence \eq{3.2} becomes
\be\lb{3.20}
f_x+g_y+H_f f_z+H_g g_z+J(f,g)(x,y)+H_g J(f,g)(x,z)+H_f J(g,f)(y,z)=0.
\ee
The underdetermined equations \eq{3.19} and \eq{3.20} can be reduced further, of course.
\section{General dimensions}
\setcounter{equation}{0}
In the general situation, with the notation
\be\lb{4.1}
(u_1,\dots,u_n)=P(x_1,\dots,x_n)=(x_1+f_1,\dots,x_n+f_n),\quad f_{ij}=\frac{\pa f_i}{\pa x_j},\quad i,j=1,
\dots,n,
\ee
then it is clear that the Jacobian equation $J(P)=1$ or $\det(I +F)=1$ where $F=(f_{ij})$ assumes the form
\be\lb{4.2}
E_1(F)+E_2(F)+\cdots+E_n(F)=0,
\ee
where $E_k(F)$ is the sum of all $k$ by $k$ principal minors of the matrix $F$, $k=1,\dots,n$, such that
$E_1(F)=\mbox{tr}(F)$ and $E_n(F)=\det(F)$ (cf. \cite{HJ}).
We now aim to obtain a family of solutions of \eq{4.2} that satisfy the Jacobian conjecture and extend what we found earlier in low dimensions.
For such a purpose and suggested by the study in Section 3, we use $(a_{ij})$ to denote an $(n-1)$ by $n$ matrix
in $\bfF$ and introduce the variables
\be\lb{4.3}
\xi_i=\sum_{j=1}^n a_{ij}x_j,\quad i=1,\dots,n-1.
\ee
Define
\be\lb{4.4}
u_j=x_j+f_j,\quad j=1,\dots,n,
\ee
where $f_1,\dots,f_{n-1}$ are arbitrary polynomials in $\xi_1,\dots,\xi_{n-1}$, respectively, but
\be\lb{4.5}
f_n=\sum_{j=1}^{n-1}b_j f_{j}(\xi_j),
\ee
where the coefficients $b_1,\dots,b_{n-1}\in\bfF$ are to be determined through the equation \eq{4.2}, which, due to \eq{4.5}, now slightly reduces to
\be
E_1(F)+E_2(F)+\cdots+E_{n-1}(F)=0,
\ee
which is still rather complicated. For simplicity and in view of the study in Section 3, we
impose the following {\em full invariance} condition between the two sets of variables $x_1,\dots,x_n$ and $u_1,
\dots,u_n$:
\be\lb{4.6}
\xi_i=\sum_{j=1}^n a_{ij} x_j=\sum_{j=1}^n a_{ij}u_j,\quad i=1,\dots,n-1,
\ee
so that by virtue of \eq{4.4} we arrive at
\bea
\sum_{j=1}^n a_{ij} u_j&=&\sum_{j=1}^n a_{ij} x_j+\sum_{j=1}^{n-1} a_{ij}f_j +a_{in}f_n\nn\\
&=&\xi_i+\sum_{j=1}^{n-1}\left(a_{ij}+a_{in}b_j\right)f_j,\quad i=1,\dots,n-1,
\eea
which results in the solution
\be\lb{4.8}
b_j=-\frac{a_{ij}}{a_{in}},\quad a_{in}\neq0,\quad i,j=1,\dots,n-1.
\ee
This solution indicates that the quantities $\xi_1,\dots, \xi_{n-1}$ are linearly dependent:
\be
a_{in}\xi_{j}=a_{jn}\xi_{i},\quad i,j=1,\dots, n-1,
\ee
which extends \eq{3.16}. Since the functions $f_1,\dots,f_{n-1}$ are arbitrary, we may now set
\be
f_i=\phi_i(\xi),\quad i=1,\dots,n-1,\quad \xi=\sum_{j=1}^n a_j x_j.
\ee
Hence we obtain the polynomial map $P$ defined by
\be\lb{4.12}
u_1=x_1+\phi_1(\xi),\quad\dots,\quad u_{n-1}=x_{n-1}+\phi_{n-1}(\xi),\quad u_n=x_n-\sum_{i=1}^{n-1}\frac {a_i}{a_n}
\phi_i(\xi).
\ee
With \eq{4.12}, it is readily checked that the inverse of the polynomial map $P$ defined
in \eq{4.1} is
given by
\be
x_1=u_1-\phi_1(\xi),\quad\dots,\quad x_{n-1}=u_{n-1}-\phi_{n-1}(\xi),\quad x_n=u_n+\sum_{i=1}^{n-1}\frac{a_{i}}{a_{n}} \phi_i(\xi),
\ee
where $\phi_1,\dots,\phi_{n-1}$ are polynomial functions of the variable
$\xi=a_1u_1+\cdots+a_{n}u_{n}$. Of course we now have $\deg(P^{-1})=\deg(P)$.
Note that $\phi_1,\dots,\phi_{n-1}$, in nontrivial situations, consist of terms of degrees at least
2 of the variable $\xi$, which are arbitrary otherwise. Since
$DP(0)=I$, we automatically get $J(P)=1$. In particular, $(f_1,\dots,f_{n-1},f_n)$ so constructed
is a solution to the Jacobian equation \eq{4.2} such that the associated polynomial map $P$ given in
\eq{4.1} satisfies the Jacobian conjecture.
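As a concrete symbolic check (the dimension $n=3$, the coefficients $a=(1,2,3)$, and the polynomials $\phi_1(\xi)=\xi^2$, $\phi_2(\xi)=\xi^3$ are our own illustrative choices), one can verify that the map \eq{4.12} has Jacobian determinant identically $1$ and that the stated inverse recovers the identity:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3 = 1, 2, 3                      # illustrative coefficients a_j
xi = a1*x1 + a2*x2 + a3*x3
phi1, phi2 = xi**2, xi**3                 # arbitrary polynomials of degree >= 2

u1 = x1 + phi1
u2 = x2 + phi2
u3 = x3 - (a1*phi1 + a2*phi2) / a3        # as in (4.12)

# Jacobian determinant is identically 1
J = sp.Matrix([u1, u2, u3]).jacobian([x1, x2, x3])
assert sp.expand(J.det()) == 1

# full invariance: xi is preserved, so the inverse just flips the signs of phi_i
eta = sp.expand(a1*u1 + a2*u2 + a3*u3)
assert sp.expand(eta - xi) == 0
assert sp.expand((u1 - eta**2) - x1) == 0
assert sp.expand((u2 - eta**3) - x2) == 0
assert sp.expand((u3 + (a1*eta**2 + a2*eta**3) / a3) - x3) == 0
```

The determinant is exactly $1$ (not merely at the origin) because $DP=I+c\,a^{T}$ is a rank-one perturbation of the identity whose correction terms cancel.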
Further reductions to \eq{4.2} may be carried out along the lines shown in Section 3 which are omitted here.
\section{Applications}
\setcounter{equation}{0}
As a concrete example, consider the following nonlinear dynamical system in $\bfR^2$:
\bea
\dot{u}&=&2(u+v)(u-v-2[u+v]^2)+(u-[u+v]^2)(1-v-[u+v]^2),\lb{5.1}\\
\dot{v}&=&-2(u+v)(u-v-2[u+v]^2)+(v+[u+v]^2)(-1+u-[u+v]^2),\lb{5.2}
\eea
which appears complicated.
We are interested in the existence of periodic orbits. First it is readily checked that the equilibria of this system are
\be\lb{5.3}
(0,0),\quad (5,-3),
\ee
for which $(0,0)$ is a saddle point and $(5,-3)$ a center. By the classical index theorem \cite{Gu,St}, we know that the
only possible closed orbits would be those winding about the point $(5,-3)$. To show the existence of
such orbits, we use the transformation
\be\lb{5.4}
u=x+(x+y)^2,\quad v=y-(x+y)^2,
\ee
which satisfies the invariance property $x+y=u+v$ studied in Section 2, rendering the inverse transformation
\be
x=u-(u+v)^2,\quad y=v+(u+v)^2,
\ee
and recasts the system \eq{5.1}--\eq{5.2} into an equivalent but much simplified one,
\be\lb{5.6}
\dot{x}=x-xy,\quad \dot{y}=-y+xy,
\ee
which happens to be the celebrated Lotka--Volterra equations modeling a predator-prey ecological system in
mathematical biology \cite{Be,Br,Ki,Mu}. Correspondingly, the two equilibria of \eq{5.6} are $(0,0)$ and
$(1,1)$, the former being a saddle point and the latter a center. Hence the possible closed orbits would be
those winding about $(1,1)$. In fact, the simplicity of \eq{5.6} allows a complete integration of it which
establishes that all the trajectories in the first quadrant of the $(x,y)$-plane,
away from the point $(1,1)$, are closed orbits, as depicted
in Figure \ref{F}, which is a classical fact. Consequently, in terms of the transformation
\eq{5.4}, we obtain a full collection of periodic solutions to the original system \eq{5.1}--\eq{5.2}.
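This equivalence can be confirmed symbolically; the following minimal check verifies that the change of variables \eq{5.4} carries the Lotka--Volterra field \eq{5.6} exactly onto the right-hand sides of \eq{5.1}--\eq{5.2}, and that the center $(1,1)$ maps to $(5,-3)$:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Lotka--Volterra field (5.6)
xdot = x - x*y
ydot = -y + x*y

# u = x + (x+y)^2, v = y - (x+y)^2, so by the chain rule
udot = xdot + 2*(x + y)*(xdot + ydot)
vdot = ydot - 2*(x + y)*(xdot + ydot)

# rewrite in (u, v) via the inverse transformation x = u-(u+v)^2, y = v+(u+v)^2
sub = {x: u - (u + v)**2, y: v + (u + v)**2}
udot_uv = sp.expand(udot.subs(sub))
vdot_uv = sp.expand(vdot.subs(sub))

# right-hand sides of (5.1)--(5.2)
rhs_u = sp.expand(2*(u + v)*(u - v - 2*(u + v)**2) + (u - (u + v)**2)*(1 - v - (u + v)**2))
rhs_v = sp.expand(-2*(u + v)*(u - v - 2*(u + v)**2) + (v + (u + v)**2)*(-1 + u - (u + v)**2))

assert udot_uv == rhs_u and vdot_uv == rhs_v

# the center (1, 1) of (5.6) corresponds to the center (5, -3) in (5.3)
assert (sub[x].subs({u: 5, v: -3}), sub[y].subs({u: 5, v: -3})) == (1, 1)
```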
\begin{figure}[h]
\begin{center}
\includegraphics[height=8cm,width=8cm]{prey-predator.jpg}
\caption{An $(x,y)$-plane realization of the periodic orbits of the
nonlinear dynamical system \eq{5.1}--\eq{5.2} which is complicated in its original $(u,v)$-plane setting. With the polynomial automorphism \eq{5.4}, the system is recast into the Lotka--Volterra
equations, \eq{5.6}, which may be integrated completely, thus rendering a family of periodic orbits
about its center-type equilibrium, $(5,-3)$, as stated in \eq{5.3}, and depicted here in terms of the $x$ and $y$
variables in the first quadrant of the $(x,y)$-plane. }
\label{F}
\end{center}
\end{figure}
In \cite{Evans} and references therein, the Lotka--Volterra equations are transformed into many other
equivalent systems of nonlinear equations including those with non-polynomial type nonlinearities. Our
work here, however, allows us to stay within the family of dynamical systems with polynomial type
nonlinearities for which the possibilities are unlimited.
The $n$-dimensional extension \cite{Mu,Plank1,Plank2,V} of the Lotka--Volterra equations is another
interesting and important subject. In view of the construction in Section 4, we can obtain a broad
family of dynamical systems in $n$ dimensions with polynomial type nonlinearities which are
equivalent to the Lotka--Volterra equations in $\bfR^n$.
Of course, the explicit construction of the polynomial automorphisms presented here will also be of
immediate applicability in numerous other areas of nonlinear problems in mathematical sciences,
including those studied exemplarily in \cite{Fa,L,Ma,Mu}, for instance.
\section{Conclusions}
\setcounter{equation}{0}
In this work, the Jacobian conjecture is directly studied by way of the partial differential equations
it prompts. These equations are first order, nonlinear, and expressed in terms of the sum of all
principal minors of the Jacobian matrix of the nonlinear part of the polynomial map. In two dimensions, the
equation may be reduced to a homogeneous Monge--Amp\`{e}re equation. In any dimension $n\geq2$, solutions
depending on $n-1$ arbitrary polynomial functions are obtained, which give rise to polynomial maps
of arbitrarily high degrees, satisfying the
Jacobian conjecture. Thus, the maps may or may not be homogeneous, depending on the choice of
these arbitrary functions. Interestingly and practically, for these maps, the inverse maps are of similar structures and
can be constructed immediately using an invariance property,
either partial or full, of the variables in consideration. This explicit construction can be used to
unveil broad families of nonlinear problems of distinctive characteristics in mathematical sciences.
\medskip
{\bf Data availability statement:} The data that support the findings of this study are available within the article.
\section{Introduction and Main Results}\label{section1}
Over the last decade, significant progress has been made concerning the quenched invariance principle on random conductance models. A typical and important
example is random walk on the infinite cluster of supercritical
bond percolation on $\mathbb Z^d$. It is shown that the scaling limit of the random walk
is a (constant time change of) Brownian motion on $\mathbb R^d$ in the quenched sense,
namely almost surely with respect to the randomness of the media.
See \cite{ABDH,BD,BeB,BP,CCK,M,MatP,SS} for related progress on this subject and \cite{Bs,Kum} for an overall introduction to this area and related topics.
Besides i.i.d.\ nearest-neighbour random conductance models, there have recently been great developments on the scaling limits of short range random conductance models on stationary ergodic media (or media with suitable correlation conditions); see \cite{ADS1,ADS2,ACDS,BsRod,DNS,PRS} for more details. Here, short range means that only a finite number of conductances are directly connected to each vertex.
Unlike the short range case, there are only a few results concerning the quenched invariance principle for long range random conductance models, due to fundamental technical difficulties. There is a beautiful paper by Crawford and Sly \cite{CS1} that obtains the quenched invariance principle for random walk on the long range percolation cluster, with an isotropic $\alpha$-stable L\'evy process as the limit, in the range $0<\alpha<1$. While \cite{CS1}
proves the invariance principle for a very singular object like the long range percolation, the arguments heavily rely on the special
properties
(see for instance \cite{Be,Bs1,CS0} for related discussions) of the long range percolation and cannot be easily
generalized to the setting of general (long range) random conductance models.
In this paper, we will discuss the quenched invariance principle on long range random conductance models.
In particular, we consider the case where the conductance between $x$ and $y$ is on average comparable to $|x-y|^{-(d+\alpha)}$ with $\alpha\in (0,2)$
but possibly degenerate.
In this setting,
there is a significant difficulty in applying classical techniques of homogenization for nearest-neighbour random walk (in
random environment) due to the existence of long range conductances.
To emphasize the novelty of our paper, we first make some remarks.
Some more details and technical difficulties of our methods are further discussed at the end of the introduction.
\begin{itemize}\item [(i)] The well known harmonic decomposition method (also called the corrector method in the literature) has been widely used
for the nearest-neighbour random walk in random media,
see \cite{ABDH,ADS1,ADS2,ACDS,BD,BeB,BsRod,SS}.
Because of the lack of
$L^2$ integrability, such a method does not work (at least in a straightforward way) for our long range model here.
\item [(ii)] Due to singularity in the infinite cluster of long range percolation,
\cite{CS1} established the quenched invariance principle of the associated random walk in the sense of
weak convergence on $L^q$
(not the Skorohod topology) and only for the case $0<\alpha<1$. In the present paper, we can justify
the quenched invariance principle for our model under the Skorohod topology for all $\alpha\in (0,2)$.
(To be fair, the long range percolation is \lq\lq more singular\rq\rq, and it is not
included in our conductance model.)
Moreover, compared with \cite{CKK}, we can prove the quenched invariance principle for the process with fixed
initial point, see e.g. Remark \ref{r4-6} below.
\item[(iii)] Our approach is to utilize the recently developed
de Giorgi-Nash-Moser theory for jump processes (see for instance \cite{BBK,CK,CK08,CKW1}).
While detailed heat kernel estimates and Harnack inequalities are
established for uniformly elliptic $\alpha$-stable-like processes,
the arguments rely on pointwise estimates of
the jumping density (conductance in this setting), which cannot hold in our setting unless we assume uniform ellipticity of
conductance. Furthermore, as will be shown in the accompanying paper \cite{CKWan}, Harnack inequalities do not hold (even for large enough balls) in general on long range random conductance models. For these reasons, highly non-trivial modifications are required to work in the present random conductance setting.
Roughly speaking, in this paper we are concerned
with the long range conductance model
with some large scale summable conditions on the conductance, which in some sense can be viewed as a counterpart of the so-called \lq\lq good ball
condition\rq\rq\,\,in \cite{B,BC} to the non-local setting.
We believe that our methods are rather robust and
could be fundamental tools in exploring scaling limits of random walks on long range random media.
\item[(iv)] The advantage of our methods is that they do not use translation invariance of the original graph
(we do not use the idea of \lq\lq the environment viewed from the particle\rq\rq); hence
they are applicable not only for $\mathds Z^d$ but also for more
general graphs whose scaling limits are nice metric measure spaces.
Even in the setting of $\mathds Z^d$,
our results can apply to the case that the conductance is independent but
possibly degenerate and
not necessarily identically distributed; that is, our results are effective for some long range random walks on
degenerate non-stationary ergodic media.
The disadvantage
is, since we use the Borel-Cantelli lemma to deduce quenched estimates,
the arguments require \lq\lq strong mixing properties\rq\rq\, of the random conductance
(see \eqref{p3-2-1}--\eqref{l3-1-1-1} below). Hence our method cannot be generalized to
the general stationary ergodic case on $\mathds Z^d$.
\end{itemize}
To illustrate our contribution,
we present the statement
about the quenched invariance principle on a half/quarter space
$F:= \mathbb{R}^{d_1}_+\times\mathbb{R}^{d_2}$ where
$d_1,d_2\in \mathbb{N}\cup\{0\}$.
The readers may refer to Sections \ref{section3} and \ref{section5} for general results. Let $\mathbb{L}:=\mathbb{Z}^{d_1}_+\times\mathbb{Z}^{d_2}$. Consider a Markov generator
\begin{equation}\label{eq:geneoe}
L^\omega_{\mathbb{L}} f(x)=\sum_{y\in \mathbb{L}}(f(y)-f(x))
\frac{w_{x,y}(\omega)}{|x-y|^{d+\alpha}},
\quad x\in \mathbb{L},
\end{equation}
where $d=d_1+d_2$, $\alpha\in (0,2)$ and $\{w_{x,y}(\omega):x,y\in \mathbb{L}\}$ is a sequence of random variables
such that $w_{x,y}(\omega)=w_{y,x}(\omega)\ge0$
for all $x\neq y$. We use the convention that
$w_{x,x}(\omega)=w_{x,x}^{-1}(\omega)=0$ for all $x \in \mathbb{L}$.
Let $(X^{\omega}_t)_{t\ge 0}$ be the corresponding Markov process.
For every $n\ge1$ and $\omega\in \Omega$, we define a process
$X_{\cdot}^{(n),\omega}$ on $V_n=n^{-1}\mathbb{L}$ by
$X_{t}^{(n),\omega}:={n}^{-1}X_{n^{\alpha}t}^{\omega}$ for any $t\ge0$. Let
$\mathds P_{x}^{(n),\omega}$ be the law of $X_{\cdot}^{(n),\omega}$ with initial
point $x\in V_n$. Let $Y:=((Y_t)_{t\ge0},
(\mathds P_{x}^Y)_{x\in F})$ be an $F$-valued strong Markov process.
We say that the quenched invariance principle holds for
$X_{\cdot}^{\omega}$ with limit process being $Y$, if for any $\{x_n \in
V_n:n\ge1\}$ such that $\lim_{n \rightarrow \infty}x_n=x$ for
some $x \in F$, it holds that for $\mathds P$-a.s.\ $\omega\in \Omega$ and
every $T>0$, $\mathds P_{x_n}^{(n),\omega}$ converges weakly to $\mathds P_{x}^Y$ on
the space of all probability measures on $\mathscr{D}([0,T];F)$, the collection
of c\`{a}dl\`{a}g $F$-valued functions on $[0,T]$ equipped with
the Skorohod topology.
\begin{theorem}\label{th1} Let $d>4-2\alpha$.
Suppose that $\{w_{x,y}:x,y\in \mathbb{L}\}$ is a
sequence
of
non-negative
independent random variables such that $\mathds E w_{x,y}=1$ for all
$x,y\in \mathbb{L}$,
\begin{equation}\label{eq: prob}
\sup_{x,y \in \mathbb{L},x\neq y}\mathds P\big(w_{x,y}=0\big)<2^{-4}
\end{equation} and
\begin{equation}\label{eq:fhibw}
\sup_{x,y\in \mathbb{L}}\mathds E[w_{x,y}^{2p}]<\infty,\quad \sup_{x,y\in \mathbb{L}}\mathds E[w_{x,y}^{-2q}\mathds 1_{\{w_{x,y}>0\}}]<\infty
\end{equation} for $p,q\in \mathds Z_{+}$ with
\begin{equation}\label{eq:fhibw22}
p>\max\big\{{(d+2)}/{d}, {(d+1)}/(2(2-\alpha))\big\},\quad q>{(d+2)}/{d}.
\end{equation}
Then the quenched invariance principle holds for
$X^{\omega}_{\cdot}$ with the limit process being a symmetric $\alpha$-stable L\'evy process $Y$ on $F$ with jumping measure $|z|^{-d-\alpha}\,dz$. \end{theorem}
\begin{remark}
When $\alpha\in (0,1)$, the conclusion still holds true for $d>2-2\alpha$, if $p>\max\big\{{(d+2)}/{d},{(d+1)}/(2(1-\alpha))\big\}$ and $q>{(d+2)}/{d}.$
See Proposition \ref{ex3-1} for details.
The probability $2^{-4}$ in \eqref{eq: prob} is far from optimal. In fact, it can be replaced by the critical probability
to ensure that condition \eqref{a4-3-1a} (with $V_n=n^{-1}\mathbb{L}$ and $m_n$ being the counting measure on $V_n$)
holds almost surely. However, we do not know the exact value of this critical probability.
We note that the integrability condition \eqref{eq:fhibw22} is also far from optimal, and we do not know what the optimal integrability condition could be.
\end{remark}
Here is one simple example that satisfies
\eqref{eq: prob} and \eqref{eq:fhibw}:
for each distinct $x,y\in \mathds Z^d$,
\begin{eqnarray*}
&\mathds P(w_{x,y}=|x-y|^\varepsilon)=(3|x-y|^{2p\varepsilon})^{-1},\quad
\mathds P(w_{x,y}=|x-y|^{-\delta})=(3|x-y|^{2q\delta})^{-1},\\
&\mathds P\big(w_{x,y}=0\big)=2^{-5},\quad \mathds P(w_{x,y}=g(x,y))=1-(3|x-y|^{2p\varepsilon})^{-1}-(3|x-y|^{2q\delta})^{-1}-2^{-5},
\end{eqnarray*}
where $\varepsilon, \delta>0$ and $g(x,y)$ are chosen so that $\mathds E w_{x,y}=1$. (It is easy to see that
$c^{-1}\le g(x,y)\le c$ for some constant $c\ge1$.)
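A quick Monte Carlo sanity check of this example (the parameter values below are illustrative choices, not prescribed by the theorem) confirms that $g(x,y)$ can be solved for explicitly and that the resulting law indeed has $\mathds E\, w_{x,y}=1$:

```python
import numpy as np

def sample_w(r, eps, delta, p, q, rng, size):
    """Sample w_{x,y} for a pair at distance r = |x-y| under the example law."""
    a = 1.0 / (3.0 * r**(2*p*eps))       # P(w = r^eps)
    b = 1.0 / (3.0 * r**(2*q*delta))     # P(w = r^{-delta})
    z = 2.0**-5                          # P(w = 0)
    # g is forced by E[w] = 1:
    g = (1.0 - r**eps * a - r**(-delta) * b) / (1.0 - a - b - z)
    vals = np.array([r**eps, r**(-delta), 0.0, g])
    probs = np.array([a, b, z, 1.0 - a - b - z])
    return rng.choice(vals, p=probs, size=size)

rng = np.random.default_rng(1)
w = sample_w(r=3.0, eps=0.5, delta=0.5, p=2, q=2, rng=rng, size=200_000)
assert abs(w.mean() - 1.0) < 0.02
```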
\bigskip
At the end of the introduction, let us briefly discuss the technical difficulties and the ideas of the proof. There are two essential ingredients in our proof, namely the tightness estimate and the H\"{o}lder regularity of parabolic functions for non-elliptic $\alpha$-stable-like processes on graphs. In order to obtain the former estimate, we first split the jumps into small jumps and big jumps, which is a standard approach for jump processes, and then change the conductance to the averaged one outside a ball (we call this the localization method). By this localization and the on-diagonal heat kernel upper bound (Proposition \ref{np2-1}),
we can apply the so-called Bass-Nash method to control the mean
displacement of the process (Proposition \ref{np-1}). The tightness estimate
(Theorem \ref{exit}) is established by comparing
the original process, truncated process and the localized process.
We note that when $0<\alpha<1$, tightness can be proved in a much simpler way
using martingale arguments (Proposition \ref{L:tight}).
The key ingredient for the H\"{o}lder regularity of parabolic functions (Theorem \ref{T:holder}) is to deduce the Krylov-type estimate
(Proposition \ref{Kr}) that controls the hitting probability to a large set
before exiting some parabolic cylinder.
Once these estimates are established, we use the arguments in \cite{CKK}
to deduce generalized Mosco convergence, and then obtain the
weak convergence (Theorem \ref{t3-1}).
\section{Truncated $\alpha$-stable-like processes on graphs}\label{S:tr}
In the following few sections, we fix graphs and discuss $\alpha$-stable-like processes
on them. Hence we do not consider randomness of the environment.
With a slight abuse of notation, we still use
$w_{x,y}$ as the deterministic version.
Let $G=(V,E_V)$ be a locally finite and connected graph, where $V$ is the set of vertices, and $E_V$ the set of edges.
For any $x\neq y \in V$, we write $\rho(x,y)$ for the graph distance, i.e., $\rho(x,y)$ is the smallest positive length of a path (that is, a sequence $x_0=x, x_1,\cdots,x_l=y$ such that $(x_i,x_{i+1})\in E_V$ for all $0\le i\le l-1$) joining $x$ and $y$.
Set $\rho(x,x)=0$ for all $x\in V$.
We let $B(x,r)=\{y\in V:\rho(x,y)\le r\}$
denote the ball in graph metric with center $x\in V$ and radius $r>0$. Let $\mu$ be a measure on $V$ such that $\mu_x:=\mu(\{x\})$ satisfies for some constant $c_M\ge1$ that
\begin{equation}\label{al2-0}
c_M^{-1}\le \mu_x\le c_M,\quad x\in V.
\end{equation}
For each $p\in[1,\infty)$, let $L^p(V;\mu)=\{f\in \mathds R^V:\sum_{x\in V}|f(x)|^p\mu_x<\infty\}$, and
denote by $\|f\|_p$ the $L^p$ norm of $f$ with respect to $\mu$. Let $L^\infty(V;\mu)$ be the space of bounded measurable functions on $V$, and let $\|f\|_\infty$ be the $L^\infty$ norm of $f$.
We assume that $(G,\mu)$ satisfies the $d$-set condition with $d>0$, i.e., there exist $r_{G}\in [1,\infty]$ and $c_G\ge1$ such that
\begin{equation}\label{al2-1}
c_G^{-1}r^d\le \mu(B(x,r))\le c_Gr^d,\quad x\in V,1\le r<r_G.
\end{equation}
We consider the operator
$Lf(x)=\sum_{z\in V}(f(z)-f(x))\frac{w_{x,z}}{\rho(x,z)^{d+\alpha}}\mu_z$ and the quadratic form
\begin{align*}
D(f,f)&=\frac{1}{2}\sum_{x,y\in V}(f(x)-f(y))^2\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_x\mu_y,\quad f\in \mathscr{F}=\{f\in L^2(V;\mu): D(f,f)<\infty\},
\end{align*}
where $\alpha\in (0,2)$ and $\{w_{x,y}:x,y\in V\}$ is a sequence
such that $w_{x,x}=0$ for all $x\in V$,
$w_{x,y}\ge0$ and $w_{x,y}=w_{y,x}$ for all $x\neq y$, and
\begin{equation}\label{eq:condwxy}
\sum_{y \in V}\frac{w_{x,y}}
{\rho(x,y)^{d+\alpha}}
\mu_y<\infty,\quad x\in V.
\end{equation}
Here by convention we set $0/0=0$.
According to (the first statement in) \cite[Theorem 3.2]{CKK},
$(D,\mathscr{F})$ is a regular symmetric Dirichlet form on $L^2(V;\mu)$. Let $X:=(X_t)_{t\ge0}$ be the symmetric Hunt process associated with $(D,\mathscr{F})$.
Set $C_{x,y}:=w_{x,y}/\rho(x,y)^{d+\alpha}$.
Under $\mathds P^x$, $X_0=x$; then the process $X$ waits for an
exponentially distributed random time of parameter $C_{x}:=\sum_{y\in
V}C_{x,y}\mu_y$ and jumps to point $y\in V$ with probability
$C_{x,y}\mu_y/C_x$;
this procedure is then iterated choosing independent hopping times.
Such a Markov process is called a variable speed random walk on $V$.
We write $p(t,x,y)$ for the heat kernel of $X$ on $V$; that is, the transition density of the process $X$ with respect to $\mu$ which is defined by
$p(t,x,y)=\mu_y^{-1}\mathds P^x(X_t=y).$
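The jump mechanism just described can be sketched in a few lines; the following simulation (on a finite box in $\mathbb{Z}^2$ with $w_{x,y}\equiv 1$ and $\mu$ the counting measure, all illustrative assumptions rather than the paper's general setting) generates one trajectory of the variable speed random walk:

```python
import numpy as np

def vsrw_path(x0, t_max, alpha, L, rng):
    """One trajectory on the box {-L,...,L}^2 with C_{x,y} = |x-y|^{-(2+alpha)}."""
    sites = [(i, j) for i in range(-L, L + 1) for j in range(-L, L + 1)]
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max:
        rates = np.array([0.0 if y == x
                          else np.hypot(y[0] - x[0], y[1] - x[1]) ** (-(2 + alpha))
                          for y in sites])
        C_x = rates.sum()                  # total jump rate out of x
        t += rng.exponential(1.0 / C_x)    # holding time ~ Exp(C_x)
        x = sites[rng.choice(len(sites), p=rates / C_x)]  # jump w.p. C_{x,y}/C_x
        path.append((t, x))
    return path

rng = np.random.default_rng(2)
path = vsrw_path(x0=(0, 0), t_max=1.0, alpha=1.0, L=5, rng=rng)
assert path[0] == (0.0, (0, 0)) and path[-1][0] >= 1.0
```

Each iteration implements exactly the two-step mechanism above: an exponential holding time with parameter $C_x$, followed by a jump to $y$ with probability $C_{x,y}\mu_y/C_x$.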
\subsection{On-diagonal upper estimates for heat kernel}
In this subsection, we are concerned with the truncated Dirichlet form corresponding to $(D,\mathscr{F})$. For fixed $1\le \delta<r_G$,
define the operator $L^{\delta}f(x)=\sum_{z \in V:\rho(z,x)\le
\delta}\big(f(z)-f(x)\big)\frac{w_{z,x}}{\rho(z,x)^{d+\alpha}}\mu_z.$ Then, the associated bilinear form is given by
$$
D^{\delta}(f,f)=\frac{1}{2}\sum_{x,y\in V: \rho(x,y)\le
\delta}\big(f(x)-f(y)\big)^2\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_x\mu_y.$$ Throughout this part, we always assume that
\begin{equation}\label{nl2-1-0} C_{V,\delta}:=\sup_{x\in V}\sum_{y\in V: \rho(x,y)>\delta}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_y<\infty.\end{equation}
By \eqref{nl2-1-0} and the symmetry of $w_{x,y}$, we can easily see that for all $f\in \mathscr{F}$,
\begin{align*}D^{\delta}(f,f)\!\le D(f,f)\!\le D^\delta (f,f)\!+2\sum_{x\in V} f(x)^2\mu_x\sum_{y\in V:\rho(y,x)>\delta} \frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_y
\!\le D^{\delta}(f,f)\!+2C_{V,\delta}\|f\|_2^2. \end{align*}
Consequently,
$(D^{\delta},\mathscr{F})$ is also
a regular and symmetric Dirichlet form
on $L^2(V;\mu)$. Denote by $X^{\delta}:=\big((X_t^{\delta})_{t\ge 0}, (\mathds P_x)_{x\in V}\big)$ the associated Hunt process, which is called the truncated process associated with $X$ in the literature.
In order to get on-diagonal upper estimates
for the heat kernel
of the truncated process $X^\delta$, we need the following scaled
Poincar\'{e}-type inequality.
In the following, given a sequence of $w:=\{w_{x,y}:x,y\in V\}$, for every $x\in V$ and $r\ge 1$, we set
$B^w(x,r):=\{z\in B(x,r): w_{x,z}>0\}.$
\begin{lemma}\label{nl2-1}
Suppose that there exist constants $C_1,C_2>0$ and $1\le r_0<r_G$
such that
\begin{equation}\label{nl2-1-1a}
\sup_{x \in V}\sum_{y\in B^w(x,r_0)}w_{x,y}^{-1}\le C_1
r_0^{d}
\end{equation} and
\begin{equation}\label{nl2-1-1b}
\inf_{x\in V}\mu(B^w(x,r_0))\ge C_2r_0^d,
\end{equation}
where $C_1$ and $C_2$ are independent of $r_0$ and $r_G$.
Then there is a constant $C_3>0$ $($also independent of $r_0$ and $r_G$$)$ such that for all $x\in
V$ and measurable function $f$ on $V$,
\begin{equation}\label{nl2-1-2a}
\begin{split}
\sum_{z\in B(x,r_0)}\!\!\!(f(z)-&(f)_{B^w(z,r_0)})^2\mu_z
\le C_3 r_0^{\alpha}\!\!\!
\sum_{z\in B(x,r_0),y\in B(x,2r_0)}\!\!\!\!(f(z)-f(y))^2 \frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\mu_z\mu_y,\end{split}
\end{equation}
where for $A\subset V$,
$(f)_A:={\mu(A)}^{-1}\sum_{z\in A}f(z)\mu_z.$
\end{lemma}
\begin{proof}
For every $x\in V$ and measurable
function $f$ on $V$, we have
\begin{align*}
&\sum_{z\in B(x,r_0)}(f(z)-(f)_{B^w(z,r_0)})^2\mu_z=\sum_{z\in B(x,r_0)}\Big(\frac{1}{\mu(B^w(z,r_0))}
\sum_{y\in B^w(z,r_0)}(f(z)-f(y))\mu_y\Big)^2\mu_z\\
&\le \frac{c_1}{r_0^{2d}}\sum_{z\in B(x,r_0)}
\bigg[\Big(\sum_{y\in B^w(z,r_0)}(f(z)-f(y))^2 \frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\Big)
\Big(\sum_{y\in B^w(z,r_0)}w_{z,y}^{-1}\rho(z,y)^{d+\alpha}\Big)\bigg] \\
&\le c_2r_0^{-d+\alpha}\Big(\sup_{z \in V} \sum_{y\in B^w(z,r_0)}w_{z,y}^{-1}\Big)\Big(\sum_{z\in B(x,r_0),y\in B(x,2r_0)}
\big(f(z)-f(y)\big)^2\frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\Big)\\
&\le c_3r_0^{\alpha}\sum_{z\in B(x,r_0),y\in B(x,2r_0)}
\big(f(z)-f(y)\big)^2\frac{w_{z,y}}{
\rho(z,y)^{d+\alpha}}\mu_z\mu_y,
\end{align*}
where the first inequality follows from \eqref{al2-0}, \eqref{nl2-1-1b} and the Cauchy-Schwarz inequality, in the second inequality we have
used the fact that
$\rho(z,y)\le r_0$ for every $y\in B^w(z,r_0)$, and the third
inequality is due to \eqref{al2-0} and \eqref{nl2-1-1a}.
This proves \eqref{nl2-1-2a}.
\end{proof}
In the following, we denote by $p^{\delta}(t,x,y)$ the heat
kernel of $X^{\delta}$.
\begin{proposition}\label{np2-1}
Suppose that \eqref{nl2-1-0} holds, and that there exist constants $\theta \in (0,1)$ and $C_1, C_2\in (0,\infty)$ $($which are independent of
$\delta$ and $r_G$$)$ such that for every
$\delta^{\theta}\le r \le \delta$,
\begin{equation}\label{nl2-1-1}
\sup_{x\in V}\sum_{y \in B^w(x,r)}w_{x,y}^{-1}\le C_1r^{d},
\end{equation}
\begin{equation}\label{np2-1-1a}
\inf_{x\in V}\mu\big(B^w(x,r)\big)\ge C_2r^d
\end{equation}
and
\begin{equation}\label{np2-1-1}
\sup_{x \in V}\sum_{y \in V: \rho(y,x)\le
r}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}\le C_1r^{2-\alpha}.
\end{equation} Then, for each $\theta'\in (\theta,1)$, there is a constant $\delta_0>0$ $($which only
depends on $\theta'$ and $\theta$$)$ such that
for all
$\delta_0\le \delta<r_G$,
\begin{equation}\label{np2-1-2}
p^{\delta}(t,x,y)\le C_3t^{-d/\alpha},\quad \forall\, 2\delta^{\theta' \alpha}\le t \le \delta^{\alpha} \textrm{ and }x,y\in
V,
\end{equation}
where $C_3$ is a positive constant independent of $\delta_0$, $\delta$,
$t$, $x$, $y$ and $r_G$.
\end{proposition}
\begin{proof}
The proof is partially motivated by that of \cite[Proposition 3.1]{B}, but some non-trivial modifications are required.
Unless mentioned otherwise, throughout the proof the constants $c_i$ will be
independent of $\delta$, $t$, $x$, $y$ and $r_G$. Since, by the Cauchy-Schwarz inequality, $p^{\delta}(t,x,y)\le
p^{\delta}(t,x,x)^{1/2}p^{\delta}(t,y,y)^{1/2}$ for any $t>0$ and $x,y\in V,$ it suffices to verify \eqref{np2-1-2} for the case that
$x=y$. The proof is split into three steps.
{\bf Step (1):} We first note that under \eqref{nl2-1-0} and \eqref{np2-1-1},
$\sup_{x\in V}\sum_{y\in V}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_y<\infty.$ This along with (the second statement in) \cite[Theorem 3.2]{CKK} yields that the process $X^\delta$ is conservative. By \cite[Proposition 5 and Theorem 8]{D}, we have the following upper bound
for $p^{\delta}(t,x,y)$:
\begin{equation}\label{np2-1-3}
\begin{split}
p^{\delta}(t,x_1,x_2)&\le \mu_{x_1}^{-1/2}\mu_{x_2}^{-1/2}\inf_{\psi\in L^{\infty}(V;\mu)}
\exp\big(\phi(x_1)-\phi(x_2)
+b(\phi)t\big)
\end{split}
\end{equation} for all $t>0$ and $x_1,x_2\in V,$
where
$$
b(\phi):=\frac{1}{2}\sup_{x \in V}\sum_{y \in V: \rho(y,x)\le \delta}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\Big(e^{\phi(y)-\phi(x)}+
e^{\phi(x)-\phi(y)}-2\Big)\mu_y.
$$
For fixed $x_1,x_2\in V$, taking $\phi(x)=\rho(x,x_1)\wedge
\rho(x_1,x_2)$ for any $x\in V$, we get that
\begin{align*}
b(\phi)&\le \frac{1}{2}\sup_{x \in V}\sum_{y \in V:\rho(y,x)\le
\delta}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\Big(e^{\rho(x,y)}+e^{-\rho(x,y)}-2\Big)\mu_y\\
&\le \frac{1}{2}\sup_{x \in V}\sum_{y \in V:\rho(y,x)\le \delta}
\frac{w_{x,y}}{\rho(y,x)^{d+\alpha}}\rho(x,y)^2e^{\rho(x,y)}\mu_y\\
&\le \frac{1}{2}e^{\delta}\sup_{x \in V}\sum_{y\in V:\rho(y,x)\le
\delta}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}\mu_y\le c_1e^{\delta}\delta^{2-\alpha}\le 2c_1e^{2\delta},
\end{align*}
where in the first inequality above we have used the facts that
$s\mapsto e^s+e^{-s}$ is increasing on $[0,\infty)$ and
$|\phi(x)-\phi(y)|\le \rho(x,y)$ for all $x,y\in V$, the second inequality is due to the fact that
$e^{s}+e^{-s}-2\le s^2 e^s$ for all $s\ge 0$, and the fourth
inequality follows from \eqref{np2-1-1}. Combining this with
\eqref{np2-1-3}, we find that for all $t>0$ and $x_1,x_2\in
V$,
\begin{equation}\label{np2-1-3a}
p^{\delta}(t,x_1,x_2)\le c_M\exp\big(-\rho(x_1,x_2)+2c_1e^{2\delta }t\big).
\end{equation}
Furthermore, it follows from the symmetry of $w_{x,y}$, the fact that
$p^\delta(t,x,y)\mu_y\le 1$ for all $t>0$ and $x,y\in V$, \eqref{np2-1-1} and \eqref{np2-1-3a} that for
every $x\in V$,
\begin{align*}
&\sum_{z,v\in V:\rho(z,v)\le \delta}
\big(p^{\delta}(t,x,z)-p^{\delta}(t,x,v)\big)^2
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\mu_z\mu_v\\
&\le \sum_{z,v\in V:\rho(z,v)\le \delta}
\big(p^{\delta}(t,x,z)+p^{\delta}(t,x,v)\big)^2
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\mu_z\mu_v\\
&\le 4c_M\sum_{z \in V}p^{\delta}(t,x,z)\Big( \sup_{z\in
V}\sum_{v\in V:\rho(v,z)\le \delta}
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\Big)\\
&\le 4c_M\sum_{z \in V}p^{\delta}(t,x,z)\Big( \sup_{z\in
V}\sum_{v\in V:\rho(z,v)\le \delta}
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha-2}}\Big)\le c_2(\delta,t)\sum_{z \in V}\exp(-\rho(z,x))<\infty,
\end{align*}
where in the last inequality we used the fact that
\begin{align*}
\sum_{z \in V}\exp(-\rho(z,x))&\le c_M\sum_{r=0}^{\infty}\sum_{z \in V: \rho(x,z)=r}
e^{-r}\mu_z\le c_M\sum_{r=0}^{\infty}\mu(B(x,r))e^{-r}\le c_M c_G\sum_{r=1}^{\infty}r^d e^{-r}<\infty.
\end{align*}
Therefore, according to the
Fubini theorem and \eqref{np2-1-3a},
for every $x \in V$,
\begin{equation}\label{np2-1-4}
\begin{split}
&\sum_{z\in V}L^{\delta}p^{\delta}(t,x,\cdot)(z)p^{\delta}(t,x,z)\mu_z
=-\frac{1}{2}\sum_{z,v\in V}
\big(p^{\delta}(t,x,z)-p^{\delta}(t,x,v)\big)^2\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\mu_z\mu_v.
\end{split}
\end{equation}
{\bf Step (2):} Below we fix $x\in V$. Let $f_t(z)=p^{\delta}(t,x,z)$ and
$\psi(t)=p^{\delta}(2t,x,x)$ for all $z\in V$ and $t\ge 0$. Then,
$\psi(t)=\sum_{z\in V} f_t(z)^2\mu_z$, and, by \eqref{np2-1-4},
$$
\psi'(t)= \!2\sum_{z\in V} \!\frac{d f_t(z)}{dt} f_t(z)\mu_z=\!2\sum_{z \in V}\!
L^{\delta}f_t(z) f_t(z)\mu_z=\!-\sum_{z,y\in V}\!(f_t(z)-f_t(y))^2
\frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\mu_z\mu_y.
$$
Let $\delta^{\theta}\le r(t)\le \delta$ and $R:=R(\delta)\ge 1$ be some
constants to be determined later. Suppose that $B(x_i, r(t)/2)$
($i=1,\cdots, m$) is the maximal collection of disjoint balls with
centers in $B(x,R)$. Set $B_i=B(x_i,r(t))$ and $B_i^*=B(x_i,
2r(t))$. Then, $B(x,R)\subset \cup_{i=1}^mB_i\subset
B(x,R+r(t))\subset \cup_{i=1}^mB_i^*;$ moreover, if $z\in
B(x,R+r(t))\cap B_i^*$ for some $1\le i\le m$, then $B(x_i,r(t)/2)\subset B(z,3r(t))$, and
so
$$c_3r(t)^d\ge \mu(B(z,3r(t)))\ge \sum_{i=1}^m\mathds 1_{\{z\in B_i^*\}} \mu(B(x_i,r(t)/2))\ge c_4r(t)^d|\{i:z\in B_i^*\}|,$$ where
in the second inequality we used the fact that $B(x_i, r(t)/2)$,
$i=1,\cdots, m$, are disjoint, and in the first and the last inequality we have used
\eqref{al2-1}. Thus, every $z\in B(x,R+r(t))$ is in at most
$c_5:=c_3/c_4$ of the balls
$B_i^*$ (hence in at most $c_5$ of the balls
$B_i$). In particular,
\begin{equation}\label{np2-1-4a}
\sum_{i=1}^m\sum_{z\in B_i}=\sum_{i=1}^m\sum_{z\in
B(x,R+r(t))}\mathds 1_{B_i}(z) =\sum_{z\in
B(x,R+r(t))}\sum_{i=1}^m\mathds 1_{B_i}(z) \le c_5\sum_{z \in
B(x,R+r(t))}.
\end{equation}
According to (the proof of) Lemma \ref{nl2-1}, \eqref{nl2-1-1} and \eqref{np2-1-1a} imply that for every $\delta^{\theta}\le r \le \delta$,
$x\in V$ and measurable function $f$ on $V$,
\begin{equation}\label{nl2-1-2}
\begin{split}
\sum_{z\in B(x,r)}\!\!(f(z)-(f)_{B^w(z,r)})^2\mu_z\le c_6 r^{\alpha}\!\!
\sum_{z\in B(x,r),y\in B(x,2r)}\!\!(f(z)-f(y))^2 \frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\mu_z\mu_y.\end{split}
\end{equation}
Hence, noticing that $\delta^\theta\le r(t)\le \delta$,
\begin{align*}
&\sum_{z,y\in V} (f_t(z)-f_t(y))^2\frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}\mu_z\mu_y\ge \frac{1}{c_5}\sum_{i=1}^m\sum_{z\in B_i}\sum_{y\in B_i^*}(f_t(z)-f_t(y))^2\frac{w_{z,y}}{\rho(z,y)^{d+\alpha}}
\mu_z\mu_y\\
&\ge \frac{c_7}{r(t)^{\alpha}}\Big[\sum_{i=1}^m \sum_{z\in B_i}
f_t^2(z)\mu_z-
2\sum_{i=1}^m \sum_{z\in B_i}f_t(z)(f_t)_{B^w(z,r(t))}\mu_z\Big]=: \frac{c_7}{r(t)^{\alpha}}(I_1-I_2), \end{align*}
where in the second inequality we have used \eqref{nl2-1-2}.
Furthermore, since $f_t(z)\mu_z\le 1$ for all $z\in V$ and $t>0$, we have
\begin{align*}
I_1\ge &\!\sum_{z\in \cup_{i=1}^m B_i} f_t^2(z)\mu_z\ge\!\sum_{z\in B(x,R)}
f_t^2(z)\mu_z
=\!\psi(t)\!-\!\sum_{z\in V: \rho(z,x)> R}f_t^2(z)\mu_z\ge\! \psi(t)\!-\!\sum_{z\in V:\rho(z,x)> R}f_t(z).
\end{align*}
So, by \eqref{np2-1-3a}, we can choose
$R:=R(\delta)=2c_1e^{4\delta}$ such that for all
$\delta^{\theta\alpha}\le t\le \delta^{\alpha}$,
\begin{align*}
\sum_{z\in V: \rho(z,x)> R}f_t(z)&\le
\sum_{z\in V:\rho(z,x)> 2c_1e^{4\delta}}\exp\big(-\rho(z,x)+2c_1e^{2\delta}\delta^\alpha\big)\\
&\le c_M\sum_{z\in V:\rho(z,x)>2c_1
e^{4\delta}}\exp\big(-\rho(z,x)/2\big)\mu_z\\
&\le c_M\sum_{r=2c_1e^{4\delta}}^\infty\mu(B(x,r))e^{-r/2}\le c_8\delta^{-d}\le
c_8r(t)^{-d},
\end{align*}
where the last inequality follows from the fact that $r(t)\le \delta$.
On the other hand, due to \eqref{np2-1-1a} and the fact that $\sum_{z \in V}f_t(z)\mu_z\le 1$ for all $t>0$,
\begin{equation*}
\sup_{z \in V}(f_t)_{B^w(z,r(t))}\le
\sup_{z\in V}\mu\big(B^w(z,r(t))\big)^{-1}\cdot \sum_{z\in V}f_t(z)\mu_z\le C_2^{-1}r(t)^{-d}.
\end{equation*}
This along with \eqref{np2-1-4a} yields that
\begin{align*}
I_2&\le C_2^{-1}r(t)^{-d}\sum_{i=1}^m\sum_{z \in B_i} f_t(z)\mu_z
\le C_2^{-1}c_5r(t)^{-d}\sum_{z \in B(x,R+r(t))}f_t(z)\mu_z\le C_2^{-1} c_5r(t)^{-d}.
\end{align*}
Therefore, combining all the estimates above,
we conclude that for every $\delta^{\theta}\le r(t)\le \delta$,
\begin{equation}\label{np2-1-5}
\psi'(t)\le -c_{9}r(t)^{-\alpha}\left(\psi(t)-c_{10}r(t)^{-d}\right).
\end{equation}
{\bf Step (3):} For any $\theta'\in (\theta,1)$ and all sufficiently large $1\le \delta<r_G$, we claim that there exists
$t_0\in [\delta^{\theta\alpha},\delta^{\theta'\alpha}]$ such that
\begin{equation}\label{np2-1-6}
\left( \frac{1}{2c_{10}}\psi(t_0)\right)^{-1/d}\ge \delta^{\theta}.
\end{equation}
Indeed, suppose that \eqref{np2-1-6} does not hold. Then,
\begin{equation}\label{np2-1-7}
\left( \frac{1}{2c_{10}}\psi(t)\right)^{-1/d}< \delta^{\theta},\quad \forall\ \delta^{\theta\alpha}\le t\le \delta^{\theta'\alpha},
\end{equation}
which means that $\psi(t)\ge 2c_{10}\delta^{-d\theta}$ for all $\delta^{\theta\alpha}\le t\le \delta^{\theta'\alpha}$.
Hence, taking $r(t)=\delta^{\theta}$ in \eqref{np2-1-5}, we find that
$\psi'(t)\le -{2}^{-1}c_{9}\delta^{-\theta\alpha} \psi(t)$ for any $\delta^{\theta\alpha}\le t\le \delta^{\theta'\alpha},$
which along with the fact $\psi(t)\le \mu_x^{-1}\le c_M$ for all $t>0$ yields
that
$\psi(t)\le c_M e^{-2^{-1}{c_{9}}\delta^{-\theta\alpha}(t-\delta^{\theta\alpha})}$ for any $\delta^{\theta\alpha}\le t\le \delta^{\theta'\alpha}.$
In particular,
$\psi(\delta^{\theta'\alpha})\le c_Me^{-2^{-1}{c_{9}}\delta^{-\theta\alpha}(\delta^{\theta'\alpha}-\delta^{\theta\alpha})}.$
On the other hand, according to \eqref{np2-1-7}, we have
$\psi(\delta^{\theta'\alpha})\ge
2c_{10} \delta^{-d\theta}.$
Thus, the two inequalities above contradict each other for $\delta$ large enough, and so \eqref{np2-1-6} is true.
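Explicitly, for $\delta$ so large that $\delta^{\theta'\alpha}\ge 2\delta^{\theta\alpha}$, these two bounds would force
$$
2c_{10}\,\delta^{-d\theta}\le \psi(\delta^{\theta'\alpha})\le c_M\,e^{-2^{-1}c_{9}\delta^{-\theta\alpha}(\delta^{\theta'\alpha}-\delta^{\theta\alpha})}\le c_M\,e^{-4^{-1}c_{9}\,\delta^{(\theta'-\theta)\alpha}},
$$
and, since $(\theta'-\theta)\alpha>0$, the right hand side decays faster than any power of $\delta^{-1}$; this is impossible for $\delta$ large enough.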
Next, assume that we can take $1\le \delta<r_G$ large enough such that \eqref{np2-1-6} holds. Since $t\mapsto\psi(t)$ is non-increasing on $(0,\infty)$
and $t_0\le \delta^{\theta'\alpha}$,
$$
\left( \frac{1}{2c_{10}}\psi(t)\right)^{-1/d}\ge \delta^{\theta},\quad \forall\ \delta^{\theta'\alpha}\le t\le \delta^{\alpha}.
$$
Let $$\tilde{t}_0:=\sup\bigg\{t>0: \left(
\frac{1}{2c_{10}}\psi(t)\right)^{-1/d}<\delta/2\bigg\}.$$ By the
non-increasing property of $\psi$ on $(0,\infty)$ again, if $\tilde t_0\le
\delta^{\theta'\alpha}$, then
$
\psi(t)\le \psi(\tilde t_0)=2c_{10}(\delta/2)^{-d}\le
c_{11}t^{-d/\alpha}$ for any $\delta^{\theta'\alpha}\le t\le \delta^{\alpha}.$
This proves \eqref{np2-1-2}.
When $\tilde t_0>\delta^{\theta'\alpha}$,
$$
\delta^{\theta}\le \left( \frac{1}{2c_{10}}\psi(t)\right)^{-1/d}\le \delta/2
,\quad \forall\ \delta^{\theta'\alpha}\le t\le \tilde t_0.
$$
Then, taking $r(t)=\big( \frac{1}{2c_{10}}\psi(t)\big)^{-1/d}$ in
\eqref{np2-1-5}, we have
$
\psi'(t)\le
-c_{12}\psi(t)^{1+d/\alpha}$ for any $\delta^{\theta'\alpha}\le t\le \tilde t_0.
$
Hence,
$
\psi(s) \le
c_{13}\big(s-\delta^{\theta'\alpha}+\psi(\delta^{\theta'\alpha})^{-\alpha/d}\big)^{-d/\alpha}\le
c_{14}s^{-d/\alpha}$ for any $2\delta^{\theta'\alpha}\le s\le
\tilde t_0.$
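For the reader's convenience, we record how the differential inequality above integrates (a standard comparison argument): since
$$
\frac{d}{ds}\,\psi(s)^{-\alpha/d}=-\frac{\alpha}{d}\,\psi(s)^{-\alpha/d-1}\psi'(s)\ge \frac{\alpha c_{12}}{d},\quad \delta^{\theta'\alpha}\le s\le \tilde t_0,
$$
integrating from $\delta^{\theta'\alpha}$ to $s$ yields
$$
\psi(s)^{-\alpha/d}\ge \psi(\delta^{\theta'\alpha})^{-\alpha/d}+\frac{\alpha c_{12}}{d}\big(s-\delta^{\theta'\alpha}\big),
$$
which gives the first bound with $c_{13}$ depending only on $d$, $\alpha$ and $c_{12}$; the second bound follows since $s-\delta^{\theta'\alpha}\ge s/2$ whenever $s\ge 2\delta^{\theta'\alpha}$.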
If $\tilde t_0>\delta^{\alpha}$, then \eqref{np2-1-2} holds.
If $\delta^{\theta'\alpha}<\tilde t_0\le \delta^{\alpha}$, then, for all $\tilde t_0 \le s\le \delta^{\alpha}$,
$
\psi(s)\le \psi(\tilde t_0)=2c_{10}(\delta/2)^{-d}\le c_{15}s^{-d/\alpha},
$
so \eqref{np2-1-2} also holds. The proof is complete.
\end{proof}
\subsection{Localization method and moment estimates of the truncated process} In this part, we fix $x_0\in V$ and $R\ge 1$. Define a
symmetric regular Dirichlet form $(\hat D^{x_0, R},\hat
\mathscr{F}^{x_0, R})$ as follows
\begin{align*}
\hat D^{x_0,R}(f,f)=&\sum_{x,y\in
V}\big(f(x)-f(y)\big)^2\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_x\mu_y,\quad
f\in
\hat \mathscr{F}^{x_0,R},\\
\hat \mathscr{F}^{x_0,R}=&\{f \in L^2(V;\mu): \hat D^{x_0,
R}(f,f)<\infty\},
\end{align*}
where
\begin{equation*}
\hat w_{x,y}=
\begin{cases}
& w_{x,y},\ \ \ \text{if}\ x\in B(x_0, R)\ \text{or}\ y\in B(x_0, R),\\
& \,\, 1,\ \ \ \ \ \ \text{otherwise}.
\end{cases}
\end{equation*}
Note that, according to the definition of $\hat w_{x,y}$, for any $x\in V$,
\begin{equation}\label{e:bound}
\begin{split}
&\sum_{y\in V}\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha}}=\sum_{y \notin B(x_0, R)}\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha}}+\sum_{y \in B(x_0, R)}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\\
&\le \sup_{z\in B(x_0, R)} \sum_{v\in V} \frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}+
\sup_{z\notin B(x_0,R)}\sum_{y\in V:y\neq z}\frac{1}{\rho(z,y)^{d+\alpha}}
+\sum_{y \in B(x_0, R)}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\\
&\le \sup_{z\in B(x_0, R)} \sum_{v\in V} \frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}
+c_M\sup_{z\notin B(x_0,R)}\sum_{k=1}^{\infty}\sum_{y\in V: 2^{k-1}\le \rho(y,z)< 2^{k}} \frac{1}{\rho(y,z)^{d+\alpha}}\mu_y\\
&\quad+\sum_{y \in B(x_0, R)}\bigg(\sup_{z \in B(x_0, R)}\sum_{v \in V}
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\bigg)\\
&\le \sup_{z\in B(x_0, R)} \sum_{v\in V} \frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}
\!+\!c_M c_G\sum_{k=1}^{\infty} \frac{2^{kd}}{2^{(k-1)(d+\alpha)}}
\!+\!\sum_{y \in B(x_0, R)}\sup_{z \in B(x_0, R)}\sum_{v \in V}
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\\
&\le c_1+c_2(1+R^d)\sup_{z \in B(x_0, R)}\bigg(\sum_{v \in V}
\frac{w_{z,v}}{\rho(z,v)^{d+\alpha}}\bigg)=:C(x_0,R)<\infty,
\end{split}
\end{equation}
where \eqref{eq:condwxy} was used in the
fourth inequality. In particular, by \eqref{e:bound} and (the second statement in) \cite[Theorem 3.2]{CKK}, the associated Hunt process $\hat X^{
R}:=((\hat X_t^{R})_{t\ge 0}, (\mathds P_x)_{x\in V})$ is conservative. Here and in what follows, we omit the index $x_0$ for simplicity.
We also consider the following
truncated Dirichlet form $(\hat D^{x_0,R, R}, \hat\mathscr{F}^{x_0,R})$:
\begin{align*}
&\hat D^{x_0, R, R}(f,f)=\sum_{x,y\in V: \rho(x,y)\le
R}\big(f(x)-f(y)\big)^2\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_x\mu_y,\quad
f\in \hat\mathscr{F}^{x_0, R}.
\end{align*}
Let $\hat X^{R,R}:=((\hat X_t^{
R,R})_{t\ge 0},(\mathds P_x)_{x\in V})$ be the associated Hunt process.
In particular, due to \eqref{e:bound} again, the process $\hat X^{R,R}$ is also conservative.
Denote by $\hat
p^{R}(t,x,y)$ and $\hat p^{
R,R}(t,x,y)$ heat kernels of the processes $\hat X^R$ and $\hat X^{R,R}$, respectively.
The following statement is concerned with moment estimates of $\hat
X^{R,R}$, which are the key to obtaining exit time estimates of the original process $X$
in the next section. We mainly follow the method of Bass \cite{Bass}
(see also Barlow \cite{B} and Nash \cite{Nash}), but some non-trivial modifications are
required.
\begin{proposition}\label{np-1}
Suppose that there exist $1\le R_0<r_G$ and $\theta \in (0,1)$ such that
for every $R_0<R<r_G$ and
$R^{\theta}\le r \le R$,
\begin{equation}\label{np1-1}
\sup_{x\in {B(x_0,3R)}}\sum_{y\in V:\rho(x,y)\le r}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}
\le C_1 r^{2-\alpha},
\end{equation}
\begin{equation}\label{np1-1b}
\inf_{x\in B(x_0,3R)}\mu(B^w(x,r))\ge C_2r^d
\end{equation}
and
\begin{equation}\label{np1-1a}
\sup_{x \in { B(x_0,3R)}}\sum_{y\in B^w(x,r)}w_{x,y}^{-1}\le C_1r^d,
\end{equation}
where $C_1$ and $C_2$ are positive constants independent of $x_0$,
$R_0$, $R$, $r$ and $r_G$. Then for every $\theta' \in (\theta,1)$,
there exists a constant $R_1>R_0$ $($which
depends on $\theta$, $\theta'$ and $R_0$ only$)$ such
that for every $R_1<R<r_G$ and $x\in V$,
\begin{equation}\label{np1-3}
\mathds E_x\big[\rho\big(\hat X_t^{R, R},x\big)\big]\le C_3R\left(\frac{t}{R^\alpha}\right)^{1/2}
\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right],\quad \forall \ R^{\theta' \alpha}\le t \le R^{\alpha},
\end{equation}
where $C_3$ is a positive constant independent of $x_0$,
$R_1$, $R$, $t$, $x$ and $r_G$.
\end{proposition}
\begin{proof}
Throughout the proof, we first suppose that
there exist
positive constants $c(x_0, R)$ and $\tilde c(x_0,R)$
such that
\begin{equation}\label{np1-2a}
\tilde c(x_0,R)\le \inf_{x,y\in V}\hat w_{x,y}\le \sup_{x,y \in V}\hat w_{x,y}\le c(x_0,R).
\end{equation}
If \eqref{np1-2a} is not satisfied, then, by taking $w_{x,y}^{\varepsilon}:=w_{x,y}+\varepsilon$ and then letting $\varepsilon \downarrow 0$, we can prove
that \eqref{np1-3} still holds. Moreover, all the constants in the proof below are independent of $\varepsilon$ unless stated
otherwise.
{\bf Step (1):}
By \eqref{np1-1}, \eqref{np1-1b}, \eqref{np1-1a} and the definition of $\hat w_{x,y}$, for every $R_0<R<r_G$ and $R^{\theta}\le r\le R$,
\begin{equation}\label{np1-4}
\sup_{x\in V}\sum_{y\in V:\rho(x,y)\le r}
\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha-2}}\le c_0r^{2-\alpha},
\end{equation}
$
\inf_{x\in V}\mu(B^{\hat w}(x,r))\ge c_1r^d
$
and
$
\sup_{x \in V}\sum_{y \in B^{\hat w}(x,r)}
\hat w_{x,y}^{-1}\le c_0r^d,
$
where $B^{\hat w}(x,r):=\{z\in V:\rho(z,x)\le r,\ \hat w_{z,x}>0\}$.
Let $\theta' \in (\theta,1)$ and $\theta_0=({\theta+\theta'})/{2}$. Taking $\delta=R$ in Proposition \ref{np2-1}, we find that there exists a constant $\tilde R_0\ge R_0$ (which only depends on $\theta$ and $\theta'$) such that
whenever $\tilde R_0<R<r_G$,
\begin{equation}\label{np1-2}
\hat p^{R,R}(t,x,y)\le c_2t^{-d/\alpha},\quad \forall\
2R^{\theta_0 \alpha}\le t \le R^{\alpha},\ x,y\in V.
\end{equation}
For every $t>0$, we define
$$
M(t)=\sum_{y\in V}\rho(x,y)\hat p^{R,R}(t,x,y)\mu_y,\quad Q(t)=-\sum_{y \in V}\hat p^{R,R}(t,x,y)\left[\log \hat p^{R,R}(t,x,y)\right]\mu_y.
$$
Below, we fix $x \in V$ and set $f_t(y)=\hat p^{
R,R}(t,x,y)$ for all $y\in V$ and $t>0$.
By \eqref{np1-2a}, we
can obtain upper and lower bounds for $\hat p^{R,R}(t,x,y)$
(see \cite{D} for upper bounds on graphs or \cite{CKKmn} for two-sided estimates in the Euclidean space), which yields that
\begin{align*}
&\sum_{y,z\in V:\rho(y,z)\le R}|f_t(y)-f_t(z)|
|\log f_t(y)-\log f_t(z)|\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\\
&\le \sum_{y,z\in V: \rho(y,z)\le R}\big(f_t(y)+f_t(z)\big)
\big(|\log f_t(y)|+|\log f_t(z)|\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z
<\infty.
\end{align*}
Thus,
\begin{align*}
&-\sum_{y\in V}(\log f_t(y)+1)\hat L^{R,R}f_t(y)\mu_y\\
&=\frac{1}{2}\sum_{y,z\in V:\rho(y,z)\le R}\big(f_t(y)-f_t(z)\big)
\big(\log f_t(y)-\log f_t(z)\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z,
\end{align*} where $\hat L^{R,R}$ is the generator associated with
$(\hat D^{x_0,R, R},\hat \mathscr{F}^{x_0,R})$, i.e.,
$$\hat L^{R,R} f(x)=\sum_{y\in V: \rho(x,y)\le R}(f(y)-f(x))\frac{\hat w_{x,y}}{\rho(x,y)^{d+\alpha}}\mu_y.$$
Therefore,
\begin{align*}
Q'(t)&=-\sum_{y\in V}(\log f_t(y)+1)\hat L^{R,R}f_t(y)\mu_y\\
&=\frac{1}{2}\sum_{y,z\in V:\rho(y,z)\le R}\big(f_t(y)-f_t(z)\big)
\big(\log f_t(y)-\log f_t(z)\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\ge 0.
\end{align*}
In particular, $Q(\cdot)$ is a non-decreasing function on $(0,\infty)$.
On the other hand, for all $\tilde R_0<R<r_G$, by the Cauchy-Schwarz inequality,
\begin{align*}
M'(t)&=\sum_{y\in V}\rho(x,y)\hat L^{R,R}f_t(y)\mu_y\\
&=-\frac{1}{2}\sum_{y,z \in V: \rho(y,z)\le R}\big(\rho(x,y)-\rho(x,z)\big)\big(f_t(y)-f_t(z)\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}
\mu_y\mu_z\\
&\le \left(\frac{1}{4}\sum_{y,z \in V: \rho(y,z)\le R}
\big(\rho(x,y)-\rho(x,z)\big)^2
\big(f_t(y)+f_t(z)\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\right)^{1/2}\\
&\quad \times\left(\sum_{y,z\in V: \rho(y,z)\le R}
\frac{(f_t(y)-f_t(z))^2}{f_t(y)+f_t(z)}\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\right)^{1/2}\\
&\le \left(\frac{c_M}{2}\sup_{z \in V}\sum_{y\in V: \rho(y,z)\le R}
\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha-2}}\right)^{1/2}\\
&\quad \times \left(\sum_{y,z\in V: \rho(y,z)\le R}
\frac{(f_t(y)-f_t(z))^2}{f_t(y)+f_t(z)}\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\right)^{1/2}\\
&\le c_3R^{1-\alpha/2}\left(\sum_{y,z\in V:\rho(y,z)\le R}
\frac{(f_t(y)-f_t(z))^2}{f_t(y)+f_t(z)}\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\right)^{1/2},
\end{align*}
where the equality above follows from the fact
$$
\sum_{y,z\in V:\rho(y,z)\le R}|f_t(y)-f_t(z)|
\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha-1}}<\infty,
$$
thanks to \eqref{np1-2a} again; in the second inequality we used \eqref{al2-0} and the fact that
${\sum_{z\in V}}f_t(z)\mu_z\le 1$
for all $t>0$, and in the last inequality we have
used \eqref{np1-4}.
Noting that
$$
\frac{(s-t)^2}{s+t}\le \big(s-t\big)\big(\log s-\log t\big), \quad s,t>0,
$$
we have
\begin{align*}
&\sum_{y,z\in V:\rho(y,z)\le R}
\frac{(f_t(y)-f_t(z))^2}{f_t(y)+f_t(z)}\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z\\
&\le \sum_{y,z\in V:\rho(y,z)\le R}\big(f_t(y)-f_t(z)\big)
\big(\log f_t(y)-\log f_t(z)\big)\frac{\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\mu_z=2Q'(t).
\end{align*}
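The elementary inequality above can be verified via the Cauchy-Schwarz inequality: for $s>t>0$,
$$
(s-t)^2=\Big(\int_t^s u^{1/2}\cdot u^{-1/2}\,du\Big)^2\le \int_t^s u\,du\cdot\int_t^s\frac{du}{u}
=\frac{s^2-t^2}{2}\,\big(\log s-\log t\big),
$$
so that in fact $\frac{(s-t)^2}{s+t}\le \frac12 (s-t)(\log s-\log t)$, and the same bound holds for all $s,t>0$ by symmetry.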
Hence, combining all the estimates above, we conclude that for all $\tilde R_0<R<r_G$,
\begin{equation}\label{np1-4a}
M'(t)\le \sqrt{2}c_3R^{1-\alpha/2}Q'(t)^{1/2},\ \ \forall\ t>0.
\end{equation}
{\bf Step (2):} \eqref{np1-2} yields that for all $\tilde R_0<R<r_G$ and $2R^{\theta_0\alpha}\le t \le R^{\alpha}$,
\begin{align*}
Q(t)\ge -\left(\sum_{y \in
V}f_t(y)\mu_y\right)\log (c_2t^{-d/\alpha})=\frac{d}{\alpha}\log t-c_4,\end{align*} where $c_4>0$ and the conservativeness of $\hat X^{R,R}$ was used in the equality above. Define
$$K(t)=d^{-1}\Big(Q(t)+c_4-\frac{d}{\alpha}\log t\Big),\quad t>0.$$ Obviously,
$K(t)\ge 0$ for all $t\in [2R^{\theta_0\alpha},R^{\alpha}]$, and
\begin{equation}\label{np1-5}
Q'(t)=d K'(t)+\frac{d}{\alpha t},\quad t>0.
\end{equation}
Set $T_0(R):=0\vee \sup\{t<2R^{\theta_0\alpha}: K(t)< 0 \}.$ It is easy to see that
$K(t)\ge 0$ for all $t\in [T_0(R),R^{\alpha}]$ and $T_0(R)\le 2R^{\theta_0\alpha}$.
By \eqref{np1-4a} and \eqref{np1-5}, we have for all $t \in [T_0(R),R^{\alpha}]$,
\begin{equation}\label{np1-6}
\begin{split}
M(t)&=M(T_0(R))+\int^t_{T_0(R)}M'(s)\,ds\le M(T_0(R))+\sqrt{2}c_3R^{1-\alpha/2}\int^t_{T_0(R)}Q'(s)^{1/2}\,ds\\
&= M(T_0(R))+\sqrt{2}c_3R^{1-\alpha/2}\int^t_{T_0(R)}\Big(d K'(s)+\frac{d}{\alpha s}\Big)^{1/2}\,ds.
\end{split}
\end{equation}
Note that, by the mean-value theorem, for every $a\in\mathds R$ and $b>0$ with $a+b\ge 0$,
\begin{equation}\label{np1-7}
(a+b)^{1/2}\le b^{1/2}+a/(2b^{1/2}).
\end{equation}
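Alternatively, \eqref{np1-7} follows from the concavity of $x\mapsto \sqrt{x}$: the tangent line at $x=b$ lies above the graph, i.e.
$$
\sqrt{x}\le \sqrt{b}+\frac{x-b}{2\sqrt{b}},\qquad x\ge 0,\ b>0,
$$
and taking $x=a+b\ge 0$ gives \eqref{np1-7}.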
Then, applying \eqref{np1-7} in the second term of the right hand
side of \eqref{np1-6} with $a=K'(s)$ and $b=\frac{1}{\alpha s}$, we
obtain that for all $t \in [T_0(R), R^{\alpha}]$,
\begin{equation}\label{np1-8}
\begin{split}
M(t)&\le M(T_0(R))+c_4R^{1-\alpha/2}
\int_{T_0(R)}^t s^{-1/2}\,ds+c_5R^{1-\alpha/2}\int_{T_0(R)}^t s^{1/2}K'(s)\,ds\\
&\le M(T_0(R))+c_6R^{1-\alpha/2}t^{1/2} +
c_5R^{1-\alpha/2}\int_{T_0(R)}^t\left[\big(s^{1/2}K(s)\big)'-\frac{s^{-1/2}K(s)}{2}\right]\,ds\\
&\le
M(T_0(R))+c_6R^{1-\alpha/2}t^{1/2}+c_5R^{1-\alpha/2}t^{1/2}K(t),
\end{split}
\end{equation}
where in the last inequality we used the fact that $K(t)\ge 0$ for all
$t \in [T_0(R),R^{\alpha}]$.
Furthermore, suppose that $T_0(R)>0$. Since $Q'(t)\ge 0$, by \eqref{np1-4a} and the
Cauchy-Schwarz
inequality, we have
\begin{align*}
M(T_0(R))&=\int_0^{T_0(R)}M'(s)\,ds\le \sqrt{2}c_3R^{1-\alpha/2}\int_0^{T_0(R)}Q'(s)^{1/2}\,ds\\
&\le \sqrt{2} c_3R^{1-\alpha/2}T_0(R)^{1/2}\left(\int_0^{T_0(R)}Q'(s)\,ds\right)^{1/2}\\
&\le c_7R^{1-\alpha(1-\theta_0)/2}\big(Q(T_0(R))-(Q(0)\wedge0)\big)^{1/2},
\end{align*}
where in the last inequality we have used
the fact that $T_0(R)\le 2R^{\theta_0
\alpha}$.
By the definition of $T_0(R)$, it holds that
$K(T_0(R))=0$, and so
$Q(T_0(R))=({d}/{\alpha})\log T_0(R)-c_4\le c_8(1+\log R),$
where we have used again $T_0(R)\le 2R^{\theta_0 \alpha}$.
On the other hand,
$Q(0)=\lim_{t\to0} Q(t)=\log \mu_x\ge -\log c_M.$
Thus, we can find
$R_1\ge 1$ large enough such that for all $R>R_1$ and $t \in
[R^{\theta'\alpha},R^{\alpha}]$,
\begin{align*}
M(T_0(R))&\le c_9R^{1-\alpha(1-\theta_0)/2}
(1+\log R)^{1/2}= c_9R^{1-\alpha/2}R^{\theta_0\alpha/2}
(1+\log R)^{1/2}\\
&\le c_9R^{1-\alpha/2}R^{\theta'\alpha/2}\le c_9R^{1-\alpha/2}t^{1/2},
\end{align*}
where in the second inequality we used the fact that $\theta_0\in (\theta,\theta')$, and the last inequality is due to $t\ge R^{\theta'\alpha}$. Note
that $M(0)=0$, so the above estimate still holds when $T_0(R)=0$.
Therefore, combining this with \eqref{np1-8}, we conclude that
for all $t \in [R^{\theta'\alpha},R^{\alpha}]$,
\begin{equation}\label{np1-9}
M(t)\le c_{10}R^{1-\alpha/2}t^{1/2}\big(1+K(t)\big).
\end{equation}
{\bf Step (3):} Note that $s(\log s+t)\ge -e^{-1-t}$ for all $s>0$ and $t\in \mathds R$. Then,
for every $0<a\le 2$, $b\in \mathds R$ and $t>0$,
\begin{equation}\label{np1-10a}
\begin{split}
-Q(t)+aM(t)+b&=\sum_{y \in V}f_t(y)\big(\log f_t(y)+a\rho(x,y)+b\big)\mu_y\\
&\ge -\sum_{y \in V}\exp\big(-1-a\rho(x,y)-b\big)\mu_y\ge -c_{11}e^{-b}a^{-d},
\end{split}
\end{equation}
where
the equality above follows from the conservativeness of $\hat X^{R,R}$,
and in the last inequality we used the fact that
\begin{align*}
\sum_{y \in V}e^{-a \rho(x,y)}\mu_y&\le c_M+
\sum_{k=1}^{\infty}\sum_{y\in B(x,2^k)\setminus B(x,2^{k-1})}e^{-a2^{k-1}}\mu_y\le c_M+c_G\sum_{k=1}^{\infty}2^{d k}e^{-a2^{k-1}}
\le Ca^{-d}
\end{align*}
for all
$0<a\le 2$ (see \cite[lines 6--7 on p.\ 3056]{B}).
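The elementary bound $s(\log s+t)\ge -e^{-1-t}$ used at the beginning of this step follows by minimization: the function $s\mapsto s(\log s+t)$ is convex on $(0,\infty)$ with derivative $\log s+t+1$, which vanishes at $s=e^{-1-t}$, and at this point
$$
s(\log s+t)=e^{-1-t}\big(-1-t+t\big)=-e^{-1-t}.
$$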
According to \eqref{np1-2}, enlarging $R_1$ if necessary so that $R_1>\tilde R_0$, we have for all
$R_1<R<r_G$ and $t\in [R^{\theta'\alpha},R^{\alpha}]$,
\begin{align*}
M(t)&=\sum_{y\in V}\rho(x,y)f_t(y)\mu_y
\ge \sum_{y \in V: \rho(x,y)>0}f_t(y)\mu_y= 1-\mathds P_x\big(\hat X_t^{R,R}=x\big)\\
&\ge 1-c_2t^{-d/\alpha}\ge 1-c_2R^{-\theta'd}>1/2.
\end{align*}
Then, choosing $a=1/M(t)$ and $e^b=M(t)^d=a^{-d}$ in
\eqref{np1-10a}, we have
$
-Q(t)+1+d\log M(t)\ge -c_{11},
$
which implies that for all $R_1<R<r_G$ and $t \in [R^{\theta'\alpha},R^{\alpha}]$,
$
M(t)\ge c_{12}\exp(Q(t)/d).
$
This along with the definition of $K(t)$ yields that
\begin{equation}\label{np1-10}
M(t)\ge c_{12}\exp(Q(t)/d)\ge c_{13}t^{1/\alpha}e^{K(t)}.
\end{equation}
Combining \eqref{np1-9} with \eqref{np1-10}, we obtain that for all
$t \in [R^{\theta'\alpha},R^{\alpha}]$,
$
e^{K(t)}\le c_{14}R^{1-\alpha/2}\big(1+K(t)\big)t^{1/2-1/\alpha}, $
which implies that
$$
K(t)\le
c_{15}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)+\log(1+K(t))\right].
$$
This implies that for all $R_1<R<r_G$ and $t \in [R^{\theta'\alpha},R^{\alpha}]$,
$$
K(t)\le c_{16}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)\right].
$$
The inequality above along with \eqref{np1-9} further gives us that for all $R_1<R<r_G$ and $t \in [R^{\theta'\alpha},R^{\alpha}]$,
$$M(t)\le
c_{17}R^{1-\alpha/2}t^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right]\le c_{18}R\left(\frac{t}{R^{\alpha}}\right)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right].
$$
The proof is complete.
\end{proof}
\section{Stable-like processes on graphs}
Let $(D,\mathscr{F})$ be a regular symmetric Dirichlet form on $L^2(V;\mu)$ given at the beginning of Section \ref{S:tr}. In particular, we assume that \eqref{eq:condwxy} holds. Let $X:=((X_t)_{t\ge0}, (\mathds P_x)_{x\in V})$ be the symmetric Hunt process associated with $(D,\mathscr{F})$.
\subsection{Estimates of exit time}
In order to obtain exit time estimates for the process $X$, we
will make full use of the results in the previous section. We keep the
notation introduced before. Fix $x_0\in V$ and $R\ge1$. According to the definition of $(\hat
D^{x_0,R},\hat \mathscr{F}^{x_0, R})$, we have
\begin{equation}\label{ne2-1}
\mathds P_{x_0}\big(\tau_{B(x_0,R)}\le
t\big)=\mathds P_{x_0}\big(\hat\tau^{R}_{B(x_0,R)}\le
t\big),
\end{equation}
where $\tau_A:=\inf\{t>0: X_t\notin A\}$ and $\hat \tau^{
R}_A:=\inf\{t\ge0: \hat X_t^{R}\notin A\}$ for any subset
$A\subseteq V$.
In the following, we denote by $(\hat P_t^{R,B(x_0,
R)})_{t\ge 0}$ and $(\hat P_t^{R,R,B(x_0,R)})_{t \ge
0}$ the Dirichlet semigroups of the processes $\hat X^{R}$ and
$\hat X^{R,R}$ killed upon exiting $B(x_0, R)$, respectively.
Let $\hat\tau^{R,R}_{A}=\inf\{t\ge0: \hat
X_t^{R,R}\notin A\}$ for any
$A\subseteq V$.
\begin{lemma}\label{nl-1}
For any $f\in
L^2(V;\mu)$, $t>0$ and $x\in B(x_0,R)$,
\begin{equation}\label{nl1-1}\begin{split}
|\hat P_t^{R, R,B(x_0,R)}f(x)-&\hat P_t^{
R,B(x_0,R)}f(x)|\le C_1t \left(\sup_{y\in B(x_0,R)}J(y, R)\right)\left(\sup_{z\in B(x_0,R)}|f(z)|\right),\end{split}
\end{equation}
where $C_1$ is a positive constant independent of $R$ and $x_0$,
and \begin{equation}\label{nl-1-00}
J(y,R)=\sum_{z\in V:
\rho(y,z)>R}\frac{ w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_z,\quad y\in B(x_0,R).
\end{equation}
In particular, it holds that for any $t>0$ and $x\in B(x_0,R)$,
\begin{equation}\label{nl1-2}
\big|\mathds P_x\big(\hat\tau_{B(x_0,R)}^{R, R}\le
t\big)-\mathds P_x\big(\hat\tau^{R}_{B(x_0,R)}\le t\big)\big|
\le C_1t\sup_{y\in B(x_0,R)}J(y,R).
\end{equation}
\end{lemma}
\begin{proof}
Let $T_{R}^{R}=\inf\{t>0: \rho(\hat X_{t-}^{R},\hat
X_t^{R})>R\}$. By \eqref{e:bound},
$\sup_{y\in V}\sum_{z \in V:\rho(z,y)>R}\frac{\hat
w_{z,y}}{\rho(z,y)^{d+\alpha}}\mu_z<\infty.$ Then, by Meyer's construction
of $\hat X^{R}$ (see \cite[Section 3.1]{BGK}), $\hat
X_t^{R}=\hat X_t^{R, R}$ if $t<T_{R}^{R}$.
Hence, for any $f\in
L^2(V;\mu)$,
\begin{align*}
&\big|\hat P_t^{R, R,B(x_0,R)}f(x)-\hat P_t^{R,B(x_0,R)}f(x)\big|\\
&=\big|\mathds E_x(f(\hat X_t^{R}):t\le \hat \tau_{B(x_0,
R)}^{R} )- \mathds E_x(f(\hat X_t^{R, R}):t\le \hat
\tau_{B(x_0,R)}^{R,R} )\big|\\
&\le \sup_{z\in B(x_0,R)}|f(z)|\Big[\mathds P_x\big(T_{R}^{R}\le t \le
\hat\tau_{B(x_0,R)}^{R}\big)+
\mathds P_x\big(T_{R}^{R}\le t \le \hat \tau_{B(x_0,R)}^{R,R}\big)\Big]\\
&\le 2\left(\sup_{z\in B(x_0,R)}|f(z)|\right)\mathds P_x\big(T_{R}^{R}\le t, \hat X^{
R, R}_{s}\in B(x_0,R)\ \text{for all } s\in [0,T_{R}^{
R}]\big).
\end{align*}
According to \cite[Lemma 3.1(a)]{BGK},
$$\mathds P_x\Big(T_{R}^{R}\in dt\big|\mathscr{F}^{\hat X^{R, R}}\Big)=
\hat J(\hat X_t^{R, R},R)\exp\left(-\int_0^t \hat J(\hat
X_s^{R, R},R)\,ds\right)\,dt,$$ where $\mathscr{F}^{\hat X^{
R,R}}$ denotes the $\sigma$-algebra generated by $\hat X^{
R,R}$, and
$$\hat J(y,R)=\sum_{z\in V: \rho(y,z)>R}\frac{
\hat w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_z,\quad y\in B(x_0,R).$$ In
particular, by the definition of $\hat w_{x,y}$, $J(y,R)=\hat
J(y,R)$ for all $y\in B(x_0,R).$ Therefore,
\begin{align*}
&\mathds P_x\Big(T_{R}^{ R}\le t, \hat X^{R, R}_{s}\in
B(x_0, R)\ \text{for all }
s\in [0,T_{R}^{ R}]\Big)\\
&\le \mathds E_x\left[\int_0^t J(\hat X_r^{ R,
R},R)\exp\left(-\int_0^r J(\hat X_s^{ R, R},R)\,ds
\right)\mathds 1_{\{\hat X_s^{R, R}\in B(x_0, R)\text{ for all } s\in [0,r]\}}\,dr\right]\\
&\le c_1t\sup_{y\in B(x_0, R)}J(y,R).
\end{align*}
Combining all the estimates above, we obtain \eqref{nl1-1}.
Estimate \eqref{nl1-2} is a direct consequence of \eqref{nl1-1}, obtained by taking
$f\equiv1$ on $B(x_0,R)$.
\end{proof}
\begin{proposition}\label{np-2}
Assume that for some $\theta\in (0,1)$, there
exists $R_0\ge 1$ such that
for every
$R_0<R<r_G$ and $R^{\theta}\le r \le R$, \eqref{np1-1}, \eqref{np1-1b} and \eqref{np1-1a} as
well as
\begin{equation}\label{np2-1a}
\sup_{x \in B(x_0, R)} \sum_{y\in
V:\rho(x,y)>R}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\le C_1R^{-\alpha}
\end{equation} hold,
where $C_1>0$ is a constant independent of $x_0$,
$R_0$, $R$,
$r$ and $r_G$.
Then
\begin{itemize}
\item[(i)] for any $\theta'\in (\theta,1)$, there is a constant $R_1\ge 1$ $($which only
depends on $\theta$, $\theta'$, $R_0$ and $r_G$$)$
such that for every $R_1<R<r_G$,
\begin{equation}\label{np2-2a}
\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le t\big)\le
C_2\left(\frac{t}{R^{\alpha}}\right)^{1/2}\left[1\vee\log\left(\frac{R^{\alpha}}{t}\right)
\right],\quad t\ge
R^{\theta'\alpha},
\end{equation} where $C_2$ is a positive constant independent of $x_0$, $R_1$, $R$, $t$ and $r_G$.
\item[(ii)] for any $\varepsilon>0$,
there is a constant $R_2\ge1$ $($depending on $\theta$,
$R_0$, $r_G$ and $\varepsilon)$
such that for all $R_2<R<r_G$, \begin{equation}\label{np2-2}
\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le t\big)\le
\varepsilon+\frac{C_3(\varepsilon)t}{R^{\alpha}},\quad t>0,
\end{equation}
where $C_3(\varepsilon)$ is a positive constant
independent of $x_0$, $R_2$, $R$, $t$ and $r_G$.
In particular, the process $X$ is conservative.
\end{itemize}
\end{proposition}
\begin{proof}
{\bf Step (1):} It immediately follows from \eqref{np2-1a} that
\begin{equation}\label{np2-3}
\sup_{y\in B(x_0, R)}J(y,R)\le c_1R^{-\alpha},
\end{equation}
where $J(y,R)$ is defined by \eqref{nl-1-00}.
Since \eqref{np1-1}, \eqref{np1-1b} and \eqref{np1-1a} are true,
by \eqref{np1-3}, for any $\theta' \in (\theta,1)$, there
is a constant $\tilde R_1\ge 1$ such that for all $\tilde R_1<R<r_G$ and $x\in V$,
$$
\mathds E_x\big[\rho(\hat X_t^{R,R},x)\big]\le
c_2R\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right],\quad \forall \
R^{\theta'\alpha}\le t \le R^{\alpha}.
$$
Hence, by the Markov inequality, for all $x\in V$ and
$R^{\theta'\alpha}\le t \le R^{\alpha}/2$,
$$
\sup_{s \in [t,2t]}\mathds P_x\Big( \rho\big(\hat X_{s}^{
R,R},x\big)>\frac{R}{2} \Big)\le
c_3\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right].
$$
Therefore, for all $R^{\theta'\alpha}\le t \le R^{\alpha}/2$,
\begin{align*}
\mathds P_{x_0}\big(\hat\tau_{B(x_0,R)}^{R, R}\le t\big)
&\le \mathds P_{x_0}\Big(\hat\tau_{B(x_0,R)}^{R,R}\le t;
\rho\big(\hat X_{2t}^{R, R},x_0\big)\le \frac{R}{2} \Big)
+\mathds P_{x_0}\Big(\rho\big(\hat X_{2t}^{R, R},x_0\big)> \frac{R}{2} \Big)\\
&\le \mathds E_{x_0}\left[\mathds 1_{\{\hat \tau^{R, R}_{B(x_0,R)}\le t\}}
\mathds P_{\hat X_{\hat\tau_{B(x_0,R)}^{
R,R}}^{R,R}} \Big(\rho\big(\hat X_{2t-\tau^{R,
R}_{B(x_0,R)}}^{ R, R},\hat X_0^{
R,R}\big)>\frac{R}{2}\Big)\right]\\
&\quad +c_3\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right]\\
&\le \sup_{y \in V}\sup_{s \in [t,2t]}\mathds P_{y}\Big(\rho\big(\hat
X_{s}^{R,R},y\big)
>\frac{R}{2}\Big)+c_3\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}
\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right]\\
&\le
2c_3\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right].
\end{align*}
Combining this with \eqref{ne2-1}, \eqref{nl1-2} and \eqref{np2-3}
yields that for all $\tilde R_1<R<r_G$ and $R^{\theta'\alpha}\le t \le
R^{\alpha}/2$,
$$
\mathds P_{x_0}\big(\tau_{B(x_0,R)}\le t\big)\le
2c_3\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1+\log\left(\frac{R^{\alpha}}{t}\right)
\right]+\frac{c_4
t}{R^{\alpha}}\le
c_5\Big(\frac{t}{R^{\alpha}}\Big)^{1/2}\left[1\vee\log\left(\frac{R^{\alpha}}{t}\right)
\right].
$$
Thus, \eqref{np2-2a} has been verified for all $R^{\theta'\alpha}\le
t \le R^{\alpha}/2$. When $t>R^{\alpha}/2$, it
holds that
$$
\mathds P_{x_0}\big(\tau_{B(x_0,R)}\le t\big)\le 1 \le
\Big(\frac{2t}{R^{\alpha}}\Big)^{1/2}\left[1\vee\log\left(\frac{R^{\alpha}}{t}\right)
\right].
$$
This proves \eqref{np2-2a}.
{\bf Step (2):}
Fix $\theta'\in (\theta,1)$.
By \eqref{np2-2a} and Young's inequality, there is a constant $\tilde R_1\ge 1$ such that for every $\tilde
R_1<R<r_G$, $t\ge R^{\theta'\alpha}$ and $\varepsilon>0$,
$
\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le t\big)\le
2^{-1}\varepsilon+{c_6(\varepsilon) t}{R^{-\alpha}}.
$
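Here the elementary fact behind the application of Young's inequality is that for every $\varepsilon'>0$ there exists $c(\varepsilon')>0$ such that
$$
u^{1/2}\Big[1\vee \log\frac{1}{u}\Big]\le \varepsilon'+c(\varepsilon')\,u,\qquad u>0;
$$
indeed, the left hand side tends to $0$ as $u\downarrow 0$, so it is at most $\varepsilon'$ for $0<u\le u_0(\varepsilon')$, while for $u\ge u_0(\varepsilon')$ it is bounded by $u\big(1\vee\log(1/u_0(\varepsilon'))\big)/\sqrt{u_0(\varepsilon')}$. The estimate above follows by applying this with $u=t/R^{\alpha}$.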
If $0<t\le R^{\theta'\alpha}$, then, taking
$\tilde R_2(\varepsilon)\ge \tilde R_1$ large enough, we obtain that for all $\tilde R_2(\varepsilon)\le R<r_G$,
$\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le t\big)\le
\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le R^{\theta'\alpha}\big)\le
2^{-1}\varepsilon+c_6(\varepsilon)R^{-(1-\theta')\alpha}\le \varepsilon.
$
Combining both estimates above together, we know that for all $\tilde R_2(\varepsilon)<R<r_G$ and $t>0$,
$
\mathds P_{x_0}\big(\tau_{B(x_0, R)}\le t\big)\le \varepsilon+{c_7(\varepsilon)t}{R^{-\alpha}},
$
which implies that \eqref{np2-2} holds.
\end{proof}
\medskip
We are now in a position to present the main result of this subsection. For
this, we need the following assumption on $\{w_{x,y}:x,y\in V\}$,
which summarizes all the assumptions made in the
previous statements.
For any $x,z\in V$ and $r>0$, denote $B_z^w(x,r):=\{u\in B(x,r): w_{u,z}>0\}$. In particular, $B_x^w(x,r)=B^w(x,r)$.
\medskip
\paragraph{{\bf Assumption (Exi.)}}
{\it Suppose that for some fixed $\theta\in (0,1)$ and $0\in V$, there
exists a constant $R_0\ge1$ such that the following hold. \begin{itemize}
\item[(i)] For
every $R_0<R<r_G$ and ${R^{\theta}/2}\le r \le 2R$,
\begin{equation}\label{a2-2-1}
\sup_{x\in B(0,6R)}\sum_{y\in V:\rho(x,y)\le r}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}\le C_1 r^{2-\alpha},
\end{equation}
\begin{equation}\label{a2-2-1a}
\mu(B_z^w(x,r))\ge c_0\mu(B(x,r)),\quad x,z\in B(0,6R)
\end{equation}
and
\begin{equation}\label{a2-2-2}
\sup_{x\in B(0,6R)}\sum_{y\in B^w(x,
c_*r)}w_{x,y}^{-1}\le C_1r^{d},
\end{equation}
where $c_0>1/2$ is independent of $R_0,R,r,x$ and $z$, and $c_*:=8c_G^{2/d}$.
\item[(ii)] For every $R_0<R<r_G$ and $r\ge {R^{\theta}/2}$,
\begin{equation}\label{a2-2-3}
\begin{split}
\sup_{x \in B(0,6R)}\sum_{y \in V:
\rho(x,y)>r}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\le C_1r^{-\alpha}.
\end{split}
\end{equation}
\end{itemize} Here $C_1$ is a positive constant
independent of $R_0$, $R$ and $r_G$.}
\begin{lemma}\label{l2-3} Let $c_*$ be the constant in Assumption ${\bf (Exi.)}{\rm(i)}$.
Under \eqref{a2-2-1a} and
\eqref{a2-2-2}, for every $R_0<R<r_G/(2c_*)$ and ${R^{\theta}/2}\le r \le 2R$,
\begin{equation}\label{a2-2-3a}
\inf_{x \in B(0,6R)}\sum_{y \in V: \rho(x,y)>3r}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\ge C_2r^{-\alpha},
\end{equation}
where $C_2>0$ is independent of $R_0$, $R$ and $r_G$.
\end{lemma}
\begin{proof}
Noting that {$c_*>4$},
for every $x\in V$ and
$1\le r<r_G/c_*$, we have
\begin{align*}
\sum_{y\in V: 3r<\rho(x,y)\le c_*r, w_{x,y}>0}\mu_y&
\ge
\mu(B^w(x,c_*r))-\mu(B(x,4r))\ge c_0c_G^{-1}(c_*r)^{d}-c_G(4r)^{d}
\ge c_1r^{d},
\end{align*}
where we have used \eqref{al2-1} and \eqref{a2-2-1a}.
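The choice of constants guarantees that the right hand side is positive: since $c_*^d=8^dc_G^2$ and $c_0>1/2$,
$$
c_0c_G^{-1}(c_*r)^d-c_G(4r)^d=\big(c_0\,8^d-4^d\big)c_G\,r^d=\big(c_0\,2^d-1\big)4^d\,c_G\,r^d>0,
$$
so one may take $c_1=(c_0\,2^d-1)4^dc_G$.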
On the other hand, for every $R_0<R<r_G/(2c_*)$, $x \in B(0,6R)$ and
${R^{\theta}/2}\le r \le 2R$,
\begin{align*}
\sum_{y\in V: 3r<\rho(x,y)\le c_*r,w_{x,y}>0}\mu_y&\le
\Big(\sum_{y\in B^w(x,c_*r)}w_{x,y}^{-1}\mu_y\Big)^{1/2}
\Big(\sum_{y\in V: 3r<\rho(x,y)\le c_*r}w_{x,y}\mu_y\Big)^{1/2}\\
&\le c_2r^{d/2}\Big(\sum_{y\in V: 3r<\rho(x,y)\le c_*r}w_{x,y}\Big)^{1/2},
\end{align*} where the first inequality follows from the Cauchy-Schwarz inequality, and the last inequality from \eqref{a2-2-2}.
Combining both estimates above yields that
for every $R_0<R<r_G/(2c_*)$, $x \in B(0,6R)$ and
${R^{\theta}/2}\le r \le 2R$,
$\sum_{y\in V: 3r<\rho(x,y)\le c_*r}w_{x,y}\ge c_3 r^{d},$ and so
$$
\sum_{y\in V: \rho(x,y)>3r}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\!\!\!
\ge \sum_{y\in V: 3r<\rho(x,y)\le c_*r}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha}}\ge (c_*r)^{-d-\alpha}\!\!\!\sum_{y\in V: 3r<\rho(x,y)\le c_*r}w_{x,y}\ge c_4r^{-\alpha}.
$$
Thus, \eqref{a2-2-3a} is proved.
\end{proof}
\begin{theorem} \label{exit}
If Assumption {\bf (Exi.)} holds {with some constant $\theta \in (0,1)$}, then, for every $\theta'\in
(\theta,1)$, there exist constants {$\delta \in (\theta,1)$} and
$R_1\ge1$ such that for all $R_1<R<r_G/(2c_*)$ and $ R^{\delta}\le r \le R$,
\begin{itemize}
\item[(1)]
\begin{equation}\label{l2-2-1a}
\sup_{x \in B(0,2R)}\mathds P_x\big(\tau_{B(x,r)}\le C_0r^{\alpha}\big)\le
\frac{1}{4},
\end{equation} where $C_0>0$ is a constant independent of $R_0$, $R_1$, $R$ and
$r$.
\item[(2)]
\begin{equation}\label{l2-2-0a}
\begin{split}
\sup_{x \in B(0,2R)}\mathds P_x\big(\tau_{B(x,r)}\le t \big)\le
C_1\Big(\frac{ t}{r^{\alpha}}\Big)^{1/2}\Big[1\vee \log\Big(\frac{r^{\alpha}}{t}\Big)\Big],\quad t\ge r^{\theta'\alpha},
\end{split}
\end{equation}
and
\begin{equation}\label{l2-2-1}
\begin{split}
C_2r^{\alpha}\le \inf_{x \in B(0,2R)}\mathds E_x\big[\tau_{B(x,r)}\big]
\le \sup_{x \in B(0,2R)}\mathds E_x\big[\tau_{B(x,r)}\big]\le
C_1r^{\alpha},
\end{split}
\end{equation}
where
$C_1,C_2$ are positive constants independent of $R_0$, $R_1$, $R$,
$r$, $t$ and $r_G$.
\end{itemize}
\end{theorem}
\begin{proof}
Suppose that Assumption {\bf(Exi.)} holds with some $\theta\in (0,1)$ and $R_0\ge 1$. Then, for any $\theta<\theta_1<\theta'<1$, $R_0<R<r_G$
and $R^{\delta}\le s
\le R$ with $\delta=\theta/\theta_1$, we know that \eqref{np1-1}, \eqref{np1-1a} and \eqref{np2-1a} hold
uniformly (that is, they hold with uniform constants) for every
${s^{\theta_1}}\le r \le s$ and $x_0\in B(0,2R)$.
Hence, according to
\eqref{np2-2a} and \eqref{np2-2}, we obtain that for every $\theta'\in
(\theta,1)$, there exists a constant
$R_1\ge R_0$ such that for each $R_1<R<r_G$ and ${R^{\delta}}\le r \le R$,
\eqref{l2-2-0a} and
\begin{equation}\label{l2-2-2}
\sup_{x \in B(0,2R)}\mathds P_x\big(\tau_{B(x,r)}\le t \big)\le
\frac{1}{8}+\frac{c_1t}{r^{\alpha}}, \quad \ \forall\ t>0
\end{equation} hold true.
In particular, taking $t=(8c_1)^{-1}r^{\alpha}$ in \eqref{l2-2-2},
we get \eqref{l2-2-1a} immediately.
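Indeed, with this choice of $t$ the right-hand side of \eqref{l2-2-2} becomes
$$
\frac{1}{8}+\frac{c_1\cdot (8c_1)^{-1}r^{\alpha}}{r^{\alpha}}=\frac{1}{8}+\frac{1}{8}=\frac{1}{4},
$$
which is precisely the bound in \eqref{l2-2-1a} with $C_0=(8c_1)^{-1}$.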
Let $C_0$ be the constant in \eqref{l2-2-1a}. For any $R>R_1$, $x\in
B(0, 2R)$ and ${R^{\delta}}\le r \le R$, we have
\begin{align*} \mathds E_x[\tau_{B(x,r)}]
&= \int_0^{\infty} \mathds P_x(\tau_{B(x,r)}> s)\,ds\ge \int_0^{C_0r^{\alpha}} \mathds P_x(\tau_{B(x,r)}> s)\,ds\\
&\ge C_0r^{\alpha} \mathds P_x(\tau_{B(x,r)}>C_0r^{\alpha})\ge
\frac{3C_0r^{\alpha}}{4}.
\end{align*} This gives us the first inequality in \eqref{l2-2-1}.
On the other hand, let $c_*$ be the constant in Assumption ${\bf (Exi.)}{\rm(i)}$. By the L\'evy system (see
\cite[Appendix A]{CK08}), for any $R_1<R<r_G/(2c_*)$, $x\in B(0,2R)$ and ${R^{\delta}}\le r \le R$,
\begin{align*}
1&\ge \mathds P_x\big(X_{\tau_{B(x,r)}}\notin B(x,2r)\big)=
\mathds E_x\left[\int_0^{\tau_{B(x,r)}}\sum_{y\in V:\rho(x,y)>2r}\frac{w_{X_s,y}}{\rho(X_s,y)^{d+\alpha}}\mu_y\,ds\right]\\
&\ge c_M^{-1}\mathds E_x\left[\int_0^{\tau_{B(x,r)}}\sum_{y\in V: \rho(y,X_s)> 3r}\frac{w_{X_s,y}}{\rho(X_s,y)^{d+\alpha}}\,ds\right]\\
&\ge c_M^{-1}\left(\inf_{v\in B(0,2R+r)}
\sum_{y\in V: \rho(y,v)> 3r}\frac{w_{v,y}}{\rho(v,y)^{d+\alpha}}\right) \mathds E_x[\tau_{B(x,r)}]\ge c_2r^{-\alpha} \mathds E_x[\tau_{B(x,r)}], \end{align*} where in the
last inequality we have used \eqref{a2-2-3a}, which is applicable since $\delta=\theta/\theta_1>\theta$.
This proves the third inequality in \eqref{l2-2-1}.
\end{proof}
When $\alpha\in(0,1)$, we can obtain a probability estimate like \eqref{np2-2} for the exit time in a more direct way under the following assumption.
\paragraph{{\bf Assumption (Exi.')}}
{\it Suppose that for some fixed $\theta\in (0,1)$ and $0\in V$, there
exists a constant $R_0\ge1$ such that \begin{itemize}
\item[(i)] for
every $R_0<R<r_G$ and ${R^{\theta}/2}\le r\le 2R$, \begin{equation}\label{l2-1-0}
\sup_{x\in B(0,6R)}\sum_{y\in V:\rho(x,y)\le r}
\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-1}}\le C_1 r^{1-\alpha}
\end{equation} and \eqref{a2-2-2} hold.
\item[(ii)] ${\rm(ii)}$ in Assumption {\bf(Exi.)} is satisfied.
\end{itemize} Here $C_1$ is a positive constant
independent of $R_0$, $R$ and $r_G$.}
\begin{proposition}\label{L:tight}
Under \eqref{l2-1-0} and ${\rm(ii)}$ in Assumption {\bf(Exi.)},
there exists a constant $R_1>R_0$ such that for all $R_1<R<r_G$,
$x\in B(0,2R)$, ${R^{\theta}}\le r\le R$ and $t>0$,
\begin{equation}\label{l2-1-1}
\mathds P_x(\tau_{B(x,r)}\le t)\le \frac{C_2t}{r^{\alpha}},
\end{equation}
where $C_2>0$ is a constant independent of $R_1$, $R$, $r$, $x$,
$t$ and $r_G$.
\end{proposition}
\begin{proof} Fix $x\in B(0,2R)$.
Given a nonnegative $f\in C_b^1([0,\infty))$ with $f(0)=0$ and $f(u)=1$ for all
$u\ge1$, we set
$f_{x,r}(z)=f\left(\frac{\rho(z,x)}{r}\right)$ for any $ z\in V$ and $ r>0.$
For any $r>0$,
$$\left\{f_{x,r}(X_t)-f_{x,r}(X_0)-\int_0^t Lf_{x,r}(X_s)\,ds, t\ge0\right\} $$ is a local martingale.
Then, for any $t>0$ and $x \in V$,
\begin{align*}\mathds P_x(\tau_{B(x,r)}\le t)\le
&\mathds E_x f_{x,r}(X_{t\wedge \tau_{B(x,r)}})\!=\!\mathds E_x\left[\int_0^{t\wedge \tau_{B(x,r)}}Lf_{x,r}(X_s)\,ds\right]\le t\sup_{z\in B(x,r)}Lf_{x,r}(z),\end{align*} where we used the
fact that $f_{x,r}(x)=0$ in the equality above.
Furthermore, for any $x \in V$ and $z\in B(x,r)$,
\begin{align*} L f_{x,r}(z)=&\sum_{y\in V} \big(f_{x,r}(y)-f_{x,r}(z)\big)\frac{w_{y,z}}{\rho(z,y)^{d+\alpha}}\mu_y\\
=&\sum_{y\in V: \rho(y,z)\le r} \left(f_{x,r}(y)-f_{x,r}(z)\right) \frac{w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\\
&+ \sum_{y\in V: \rho(y,z)> r} \left(f_{x,r}(y)-f_{x,r}(z)\right)\frac{w_{y,z}}{\rho(y,z)^{d+\alpha}}\mu_y\\
\le&c_1\left(r^{-1}\sum_{y\in V: \rho(z,y)\le
r}\frac{w_{y,z}}{\rho(y,z)^{d+\alpha-1}}
+\sum_{y \in V: \rho(z,y)>r}\frac{w_{y,z}}{\rho(y,z)^{d+\alpha}}\right)=:c_1(I_1(z,r)+I_2(z,r)),
\end{align*}
where in the first inequality above we have used
$|f_{x,r}(y)-f_{x,r}(z)|\le c_1r^{-1}\rho(y,z).$
According to \eqref{l2-1-0} and
\eqref{a2-2-3}, we can find a constant $R_1\ge1$ such that for all
$R_1<R<r_G$, $x \in B(0,2R)$ and $R^{\theta}\le r \le R$,
$
\sup_{z \in B(x,r)}\big(I_1(z,r)+I_2(z,r)\big)\le c_2r^{-\alpha}.
$
Combining all the estimates above, we prove the desired assertion.
\end{proof}
\subsection{H\"older regularity}
Let $\mathds R_+:=(0,\infty)$ and $Z:=(Z_t)_{t\ge0}=(U_t,X_t)_{t\ge0}$ be the time-space process such
that $U_t=U_0+t$ for any $t\ge0$. Denote by $\mathds P_{(s,x)}$
the probability law of the process $Z$ starting from $(s,x)\in \mathds R_+\times V$. For any subset $A\subseteq \mathds R_+\times
V$, define the exit time $\tau_A=\inf\{s>0:Z_s\notin A\}$ and the hitting time
$\sigma_A=\inf\{s>0:Z_s\in A\}$. For any $t\ge0$, $x\in V$ and $R\ge1$, let
$Q(t,x,R)=\left(t, t+ C_0R^{\alpha}\right)\times B(x,R)$ and $
d\nu=ds\times d\mu$, where $C_0$ is the constant in \eqref{l2-2-1a}.
In the following, let $c_*$ be the constant in Assumption ${\bf (Exi.)}{\rm(i)}$.
\begin{proposition}\label{Kr}
If Assumption {\bf (Exi.)} holds with some $\theta\in (0,1)$, then there exist constants
{$\delta \in (\theta,1)$} and $R_1\ge1$ such that for any $R_1<R<r_G/(2c_*)$,
$2R^{\delta}\le r \le R$, $x\in
B(0,2R)$, $t\ge0$ and $A\subseteq Q(t,x,r/2)$ with
$\frac{\nu(A)}{\nu(Q(t,x,r/2))}\ge {1}/{2}$,
\begin{equation}\label{p2-2-1a}
\mathds P_{(t,x)}(\sigma_A<\tau_{Q(t,x,r)})\ge C_1,
\end{equation}
where $C_1\in (0,1)$ is a constant independent of $R_1$, $R$, $r$, $t$, $x$ and $r_G$.
\end{proposition}
\begin{proof} The proof is based on that of \cite[Lemma 4.11]{CK} with some slight modifications.
We write $Q_r=Q(t,x,r)$ for simplicity.
Without loss of generality, we may and do assume that $\mathds P_{(t,x)}(\sigma_A<\tau_{Q_r})\le 1/4$; otherwise the conclusion holds
trivially. Let $T=\sigma_A\wedge\tau_{Q_r}$ and $A_s=\{y\in V:
(s,y)\in A\}$ for all $s>0$. According to the L\'evy system,
\begin{align*}
\mathds P_{(t,x)}(\sigma_A<\tau_{Q_r})&\ge\mathds E_{(t,x)}\left(\sum_{s\le T} \mathds 1_{\{X_s\neq X_{s-},X_s\in A_s\}}\right)=\mathds E_{(t,x)} \left[\int_0^T\sum_{u\in A_s} \frac{w_{X_s,u}}{\rho(X_s,u)^{d+\alpha}}\mu_u\,ds\right]\\
&\ge c_M^{-1}\mathds E_{(t,x)} \left[\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} \frac{w_{X_s,u}}{\rho(X_s,u)^{d+\alpha}}\,ds; T\ge C_0(r/2)^{\alpha}\right]\\
&\ge c_1r^{-d-\alpha}\left( \inf_{z\in B(x,r)}\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\,ds \right)
\mathds P_{(t,x)}(T\ge C_0(r/2)^{\alpha}),
\end{align*}
where in the last inequality we have used the fact that $\rho(u,z)\le 2r$
for every $u,z\in B(x,r)$.
Furthermore,
according to Theorem \ref{exit}(1),
there exist constants $R_1\ge 1$ and {$\delta \in (\theta,1)$} such that for any $R_1<R<r_G/(2c_*)$,
$R^{\delta} \le r/2 \le R$ and $x \in B(0,2R)$,
\begin{align*}\mathds P_{(t,x)}\big(T\ge C_0(r/2)^{\alpha}\big)&=\mathds P_{(t,x)}\big(\sigma_A\wedge\tau_{Q_r}\ge C_0(r/2)^{\alpha}\big)\\
& \ge1- \mathds P_{(t,x)}\big(\sigma_A< \tau_{Q_r}\big)-\mathds P_{x}\big(\tau_{B(x,r)}\le C_0(r/2)^{\alpha}\big)\ge 1-\frac{1}{4}-\frac{1}{4}\ge \frac{1}{2},\end{align*}
where in the first inequality we have used the fact that
$$
\mathds P_{(t,x)}\big(\tau_{Q_r}\le C_0(r/2)^{\alpha}\big)=
\mathds P_x\big(\tau_{B(x,r)}\wedge (C_0r^{\alpha})\le C_0(r/2)^{\alpha}\big)
=\mathds P_x\big(\tau_{B(x,r)}\le C_0(r/2)^{\alpha}\big),
$$
and the second inequality follows from \eqref{l2-2-1a}.
On the other hand, let $Q^w_z(t,x,r):=(t,t+C_0r^\alpha)\times B^w_z(x,r)$. Then, for every $R_1<R<r_G$, ${2R^{\delta}} \le r \le R$,
$x\in B(0,2R)$ and $z\in B(x,r)$,
\begin{align*}
\nu(A\cap Q^w_z(t,x,r/2))&=\int_0^{C_0(r/2)^{\alpha}}\sum_{u \in A_s\cap B^w_z(x,r/2)}\mu_u\, ds\\
&\le \Big(\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s \cap B^w_z(x,r/2)} w_{z,u}^{-1}\mu_u\,ds\Big)^{1/2}
\Big(\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\mu_u\,ds\Big)^{1/2}\\
&\le c_3r^{\alpha/2}\Big(\sum_{u\in B_z^w(x,r)} w_{z,u}^{-1}\Big)^{1/2}
\Big(\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\,ds\Big)^{1/2}\\
&\le c_3r^{\alpha/2}\Big(\sup_{v\in B(0,3R)}\sum_{u\in B^w(v,2r)}w_{v,u}^{-1}\Big)^{1/2}
\Big(\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\,ds\Big)^{1/2}\\
&\le c_4r^{(d+\alpha)/2}\Big(\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\,ds\Big)^{1/2},
\end{align*}
where in the first inequality we have used the Cauchy-Schwarz
inequality, the third inequality is due to the fact that $B^w_z(x,r)\subset B^w(z,2r)$ for all
$z\in B(x,r)$, and the last inequality follows from
\eqref{a2-2-2}. Note that, by \eqref{a2-2-1a} and the assumption that $\frac{\nu(A)}{\nu(Q(t,x,r/2))}\ge {1}/{2}$, we have
$\nu(A\cap Q_z^w(t,x,r/2))\ge \big(1/2+c_0-1 \big)\cdot \nu(Q(t,x,r/2))\ge c_5r^{d+\alpha}.$
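Indeed, since $A\subseteq Q(t,x,r/2)$ and $B^w_z(x,r/2)\subseteq B(x,r/2)$, the first of these bounds follows from the elementary estimate
$$
\nu(A\cap Q^w_z(t,x,r/2))\ge \nu(A)+\nu(Q^w_z(t,x,r/2))-\nu(Q(t,x,r/2))\ge \Big(\frac12+c_0-1\Big)\nu(Q(t,x,r/2)),
$$
where we have used $\nu(Q^w_z(t,x,r/2))\ge c_0\,\nu(Q(t,x,r/2))$, which is a consequence of \eqref{a2-2-1a}.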
Combining all the estimates above yields that for all $R_1<R<r_G$, $2R^{\delta}\le r \le R$, $x\in
B(0,2R)$ and $z \in B(x,r)$,
$
\int_0^{C_0(r/2)^{\alpha}}\sum_{u\in A_s} w_{z,u}\,ds\ge c_6r^{d+\alpha}.
$
Combining all the estimates above, we obtain the required assertion.
\end{proof}
We also need the following hitting probability estimate.
\begin{lemma}\label{l3-3}
Suppose that Assumption {\bf (Exi.)}\ holds {with some $\theta \in (0,1)$}. Then there are constants
{$\delta\in (\theta,1)$} and $R_1\ge1$
such that for every $R_1<R<r_G/(2c_*)$, {$R^{\delta}\le r \le R$},
$x\in B(0,2R)$, $K>4r$, $t\ge0$
and $z \in B(x,{r}/{2})$,
\begin{equation}\label{e:ex}
\begin{split}
\mathds P_x(X_{\tau_{Q(t,x,r)}}\notin B(z,K))\le C_1\left(\frac{r}{K}\right)^\alpha,
\end{split}
\end{equation}
where
$C_1>0$ is a positive constant independent of $R_0$, $R_1$, $r$, $t$, $x$, $z$ and $r_G$.
\end{lemma}
\begin{proof}
According to the L\'evy system, we know that for every $z \in B(x,r/2)$,
\begin{align*}
\mathds P_x(X_{\tau_{Q(t,x,r)}}\notin B(z,K))
&=\mathds E_x\left[\int_0^{\tau_{B(x,r)}}\sum_{y\notin B(z,K)} \frac{w_{X_s,y}}{\rho(X_s,y)^{d+\alpha}}\mu_y\,ds \right]\\
&\le c_1\sup_{u\in B(x,r)}\left(\sum_{y\in V: \rho(u,y)>K-2r}\frac{w_{u,y}}{\rho(u,y)^{
d+\alpha}}
\right)\mathds E_{x}[\tau_{B(x,r)}]\\
&\le c_1\sup_{u \in B(0,2R)}
\left(\sum_{y\in V: \rho(u,y)>K/2}\frac{w_{u,y}}{\rho(u,y)^{d+\alpha}}
\right)\mathds E_{x}[\tau_{B(x,r)}].
\end{align*}
Note that $K/2>2r\ge R^{\delta}$
and $R^{\delta}\le r \le R$.
Then, by \eqref{a2-2-3} and \eqref{l2-2-1}, we can find a constant
$R_1\ge1$ such that for all $R_1<R<r_G/(2c_*)$ and $x\in B(0,2R)$,
$$
\sup_{u \in B(0,2R)}
\left(\sum_{y\in V: \rho(u,y)>K/2}\frac{w_{u,y}}{\rho(u,y)^{d+\alpha}}
\right)\le c_2K^{-\alpha}
$$ and $\mathds E_{x}[\tau_{B(x,r)}]\le c_3r^{\alpha}.$
Combining all the estimates above immediately yields \eqref{e:ex}.
\end{proof}
We say that a measurable function $q(t,x)$ on
$[0,\infty)\times V$ is parabolic in an open subset $A$ of
$[0,\infty)\times V$, if for every relatively compact open subset
$A_1$ of $A$, $q(t,x)=\mathds E_{(t,x)}q(Z_{\tau_{A_1}})$ for every
$(t,x)\in A_1$.
Let $C_0>0$ be the constant in \eqref{l2-2-1a}, and $\theta$ be the
constant in Assumption {\bf (Exi.)}. Set
$Q(t_0,x_0,r)=(t_0,t_0+C_0r^{\alpha})\times B(x_0,r)$.
\begin{theorem}\label{T:holder}
Suppose that Assumption {\bf (Exi.)} holds {with some $\theta \in (0,1)$}, and let $c_*$ be the constant in Assumption ${\bf (Exi.)}{\rm(i)}$. Then, there are
constants $R_1\ge1$ and {$\delta \in (\theta,1)$} such that for all
$R_1<R<r_G/(2c_*)$, $x_0 \in B(0,R)$, ${R^{\delta}}\le r \le R$, $t_0\ge0$ and parabolic function $q$
on $Q(t_0,x_0, 2r)$,
\begin{equation}\label{t3-2-1}
|q(s,x)-q(t,y)|\le C_1\|q\|_{\infty,r}\left(\frac{|t-s|^{1/\alpha}+\rho(x,y)}{r} \right)^\beta,
\end{equation}
holds for all $(s,x),(t,y)\in Q(t_0,x_0, r)$ such that
$(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)\ge {2r^{\delta}}$, where
$\|q\|_{\infty,r}=\sup_{(s,x)\in [t_0, t_0+C_0(2r)^\alpha]\times V}|q(s,x)|,$
and $C_1>0$ and $\beta\in (0,1)$ are constants independent of $R_0$, $R_1$,
$x_0$, $t_0$, $R$, $r$, $s$, $t$, $x$, $y$ and $r_G$.
\end{theorem}
\begin{remark}\label{remhold}
Note that unlike the case of random walk on the supercritical percolation cluster (\cite[Proposition 3.2]{BaHa}),
in which the H\"older regularity holds for all points in the parabolic cylinder when
$r$ is large enough,
in the present setting
we can only obtain the H\"older regularity in the region
$(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)\ge 2r^{\delta}$
inside the cylinder.
\end{remark}
\begin{proof}[Proof of Theorem $\ref{T:holder}$]
We mainly follow the argument of \cite[Theorem 4.14]{CK}
with some modification.
For simplicity, we assume that $\|q\|_{\infty, r}=1$ and $q\ge 0$.
Now, we first show that there are constants $\eta\in (0,1)$, {$\delta \in (\sqrt{\delta_0},1)$, where
$\delta_0\in (0,1)$ denotes the common constant $\delta$ in Theorem \ref{exit}, Proposition \ref{Kr} and
Lemma \ref{l3-3},}
$R_1>R_0$ and $\xi\in (0,(1/4)\wedge \eta^{1/\alpha})$ (all to be
determined later) such that for any $R_1<R<r_G/(2c_*)$, ${R^{\delta}}\le r \le
R$, $k\ge 1$ with $\xi^kr\ge {2r^{\delta}}$, and any $(\tilde
t,\tilde x)\in Q(t_0,x_0,r)$ with $x_0\in B(0,R)$ and $t_0\ge0$,
\begin{equation}\label{e:ph1}\sup_{Q(\tilde t,
\tilde x, \xi^k r)} q-\inf_{Q(\tilde t,\tilde x, \xi^k r)} q \le \eta^k.\end{equation}
Let $Q_i=Q(\tilde t,\tilde x, \xi^i r)$ and $B_i=B(\tilde x, \xi^i
r)$. Define $a_i=\inf_{Q_i}q$ and $b_i=\sup_{Q_i}q$. Clearly,
$b_i-a_i\le \eta^i$ for all $i\le 0$. Suppose that $b_i-a_i\le
\eta^i$ for all $i\le k$ with some $k\ge 0$. Choose $z_1, z_2\in
Q_{k+1}$ such that $q(z_1)=b_{k+1}$ and $q(z_2)= a_{k+1}.$ Letting
$z_1=(t_1,x_1)$, we define $ \tilde Q_k=Q(t_1,x_1,\xi^k r),$
$\tilde Q_{k+1}=Q(t_1,x_1,\xi^{k+1} r)$ and
$$A_{k}=\left\{z\in \tilde Q_{k+1}:q(z)\le \frac{a_k+b_k}{2}\right\}.$$
Without loss of generality, we may and do assume that
$\nu(A_{k})/\nu(\tilde Q_{k+1})\ge 1/2;$ otherwise, we consider
$1-q$ instead of $q$. We have
\begin{align*}b_{k+1}-a_{k+1} =&q(z_1)-q(z_2)= \mathds E_{z_1}[q(Z_{\sigma_{A_k}\wedge\tau_{\tilde Q_k}})]-q(z_2)\\
=& \mathds E_{z_1}\left[q(Z_{\sigma_{A_{k}}\wedge\tau_{\tilde Q_k}})-q(z_2): \sigma_{A_{k}}\le \tau_{\tilde Q_k}\right]\\
&+\mathds E_{z_1}\left[q(Z_{\sigma_{A_k}\wedge\tau_{\tilde Q_k}})-q(z_2): \sigma_{A_k}> \tau_{\tilde Q_k}, X_{\tau_{\tilde Q_k}}\in B_{k-1}\right]\\
&+\sum_{i=1}^{\infty}\mathds E_{z_1}\left[q(Z_{\sigma_{A_k}\wedge\tau_{\tilde
Q_k}})-q(z_2): \sigma_{A_k}> \tau_{\tilde Q_k},
X_{\tau_{\tilde Q_k}}\in B_{k-i-1}\setminus B_{k-i}\right]\\
=&:I_1+I_2+I_3. \end{align*}
It is easy to see that
$$I_1\le \left(\frac{a_k+b_k}{2}-a_k\right)\mathds P_{z_1}(\sigma_{A_k}\le \tau_{\tilde Q_k})\le \frac{b_k-a_k}{2} p_k\le \frac{\eta^k}{2}p_k
=\eta^{k+1} \eta^{-1} \frac{p_k}{2}$$ and
$I_2\le (b_{k-1}-a_{k-1}) (1-p_k)\le \eta^{k-1}(1-p_k)=
\eta^{k+1}\eta^{-2}(1-p_k),$ where
$p_k:= \mathds P_{z_1}(\sigma_{A_k}\le \tau_{\tilde Q_k})=\mathds P_{(t_1,x_1)}(\sigma_{A_k}\le \tau_{Q(t_1,x_1,\xi^k
r)}).$ On the other hand, since {$\xi^k r\ge 2r^{\delta}\ge
2R^{\delta_0}$}, $\tilde x \in B(x_1,{\xi^{k+1} r})\subset B(x_1,{\xi^k r}/{2})$ and
$\xi^{k-i}r>4\xi^{k}r$ for $i\ge 1$, we can apply \eqref{e:ex} and
obtain that
$$
\mathds P_{x_1}(X_{\tau_{\tilde Q_k}}\in B_{k-i-1}\setminus B_{k-i})\le
\mathds P_{x_1}\big(X_{\tau_{Q(t_1,x_1,\xi^k r)}}\in B_{k-i}^c\big)\le
c_2\left(\frac{\xi^k r}{\xi^{k-i}r}\right)^{\alpha}.
$$
Thus,
\begin{align*}I_3&\le \sum_{i=1}^{\infty}(b_{k-i-1}-a_{k-i-1}) \mathds P_{x_1}(X_{\tau_{\tilde Q_k}}\in B_{k-i-1}\setminus B_{k-i})\\
&\le c_2\sum_{i=1}^\infty \eta^{(k-i-1)}
\left(\frac{\xi^{k}r}{\xi^{k-i}r}\right)^\alpha \le
\frac{c_2\eta^{k+1} \eta^{-2}\xi^\alpha}{\eta-\xi^\alpha}.
\end{align*}
Note that, since $x_1\in B(0, 2R)$ and {$\xi^k r\ge 2r^{\delta}\ge 2R^{\delta_0}$}, by \eqref{p2-2-1a} we
have $p_k\ge c_3>0$. Combining all the conclusions above, we
arrive at
\begin{align*}b_{k+1}-a_{k+1}&\le \eta^{k+1}\left( \frac{\eta^{-1} p_k}{2}+\eta^{-2}(1-p_k)+
\frac{c_2\eta^{-2}\xi^\alpha}{\eta-\xi^\alpha} \right)\\
&=\eta^{k+1}\left[\eta^{-2}-\Big(\eta^{-2}-\frac{\eta^{-1}}{2}\Big)p_k+
\frac{c_2\eta^{-2}\xi^\alpha}{\eta-\xi^\alpha} \right]\\
&\le\eta^{k+1}\left(\eta^{-2}(1-c_3)+\frac{\eta^{-1}c_3}{2}+
\frac{c_2\eta^{-2}\xi^\alpha}{\eta-\xi^\alpha}
\right).\end{align*} Choosing $\eta$ close to $1$ and then $\xi\in
(0,(1/4)\wedge \eta^{1/\alpha})$ close to $0$ such that $$
\eta^{-2}(1-c_3)+\frac{\eta^{-1}c_3}{2} +
\frac{c_2\eta^{-2}\xi^\alpha}{\eta-\xi^\alpha}\le 1,$$ we get
$b_{k+1}-a_{k+1}\le \eta^{k+1}.$ This proves \eqref{e:ph1}.
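Such a choice of $\eta$ and $\xi$ is indeed possible: as $\eta\uparrow 1$,
$$
\eta^{-2}(1-c_3)+\frac{\eta^{-1}c_3}{2}\longrightarrow 1-\frac{c_3}{2}<1,
$$
so we may first fix $\eta\in(0,1)$ with $\eta^{-2}(1-c_3)+\eta^{-1}c_3/2<1$, and then choose $\xi$ small enough that the remaining term $c_2\eta^{-2}\xi^{\alpha}/(\eta-\xi^{\alpha})$ does not destroy the inequality.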
For any $(s,x),(t,y)\in Q(t_0,x_0, r)$ with $s\le t$ and $(C_0^{-1}|t-s|)^{1/\alpha}+\rho(x,y)\ge 2r^{\delta}$,
let $k$ be the smallest integer such that $(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)\ge \xi^{k+1} r$. Then,
$(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)\le \xi^{k}r$, and so $\xi^k r\ge 2r^{\delta}$ and $(t,y)\in Q(s,x,\xi^kr)$. According to \eqref{e:ph1}, we know that
$$|q(s,x)-q(t,y)|\le \eta^k\le \eta^{-1}\left(\frac{(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)}{r}\right)^{\log_\xi\eta}.$$ The proof is finished.
\end{proof}
\begin{remark}\label{R:H} According to Proposition \ref{L:tight}, the proof of Theorem \ref{exit} and the arguments in this subsection, we can obtain that, when $\alpha\in(0,1)$, Theorems \ref{exit} and \ref{T:holder} still hold under assumption {\bf(Exi.')}. \end{remark}
\section{Convergence of stable-like processes
on metric measure spaces}\label{section3}
In this section, we give convergence criteria for stable-like processes
on metric measure spaces.
Let $(F,\rho,m)$ be a metric measure space, where $(F,\rho)$ is a locally compact separable and connected
metric space, and $m$ is a Radon measure on $F$. For every $x\in F$ and $r>0$,
let $B_F(x,r)=\{z\in F: \rho(z,x)<r\}$.
We always impose the following assumptions on $(F,\rho,m)$.
\paragraph{{\bf Assumption (MMS)}}\label{a3-1}{\it
\begin{itemize}
\item [(i)] For every $x\in F$ and $r>0$, the closure of $B_F(x,r)$ is
compact,
and it holds that $m(\partial (B_F(x,r)))=0$, where $\partial
(B_F(x,r))=\overline{B_F(x,r)}\backslash B_F(x,r).$
\item [(ii)] $\rho: F\times F\rightarrow \mathds R_+$ is geodesic, i.e., for any $x,y\in F$, there exists a
continuous map $\gamma: [0,\rho(x,y)]\rightarrow F$ such that $\gamma(0)=x$, $\gamma(\rho(x,y))=y$
and $\rho(\gamma(s),\gamma(t))=t-s$ for all $0\le s \le t \le \rho(x,y)$.
\item[(iii)] There exist constants $c_F\ge1$ and $d>0$ such that
\begin{equation}\label{a3-1-1}
c_F^{-1}r^{d}\le m(B_F(x,r))\le c_Fr^{d},\quad \forall\ x\in F,\ 0<r<r_F:=\sup_{y,z\in F}\rho(y,z).
\end{equation}
\end{itemize}}
The metric measure space $(F,\rho,m)$ will serve as the state space
of the stable-like process $Y$ which will be defined later.
According to \cite[Theorem 2.1]{CKK}, such a metric measure space
admits graph approximations with the following properties.
\begin{lemma}\label{L:ac} Under assumption {\bf(MMS)}, $F$ admits a sequence of approximating graphs $\{G_n:=(V_n,E_{V_n}),n\ge1\}$ such that the following properties hold.
\begin{itemize}
\item [(1)] For every $n \ge1$, $V_n\subseteq F$, and $(V_n,E_{V_n})$ is connected and has
uniformly bounded degree. Moreover,
$\cup_{n=1}^{\infty}V_n$ is dense in $F$.
\item[(2)] There exist positive constants
$C_1$ and $C_2$ such that for every $n \ge1$ and $ x,y\in V_n$,
\begin{equation}\label{a3-3-1}
\frac{C_1}{n}\rho_{n}(x,y)\le \rho(x,y) \le \frac{C_2}{n}\rho_{n}(x,y),
\end{equation}
where $\rho_n$ is the graph distance of $(V_n,E_{V_n})$.
\medskip
\item[(3)] For each $n\ge1$, there exists a class of subsets $\{U_n(x): x\in V_n\}$ of $F$ such that
$\bigcup_{x \in V_n}U_n(x)\subset F$, $m\big(U_n(x)\cap U_n(y)\big)=0$ for $x \neq y$,
\begin{equation}\label{a3-3-2} V_n\cap \text{Int}\, U_n(x)=\{x\},\,\,
\sup\{\rho(y,z):y,z\in U_n(x)\}\le \frac{C_3}{n},\quad \forall\ x\in V_n,
\end{equation}
and
\begin{equation}\label{a3-3-3}
\frac{C_4}{n^{d}}\le m\big(U_n(x)\big)\le \frac{C_5}{n^{d}},\quad \forall\ n\ge1,\ x\in V_n,
\end{equation} where $\text{Int}\, U_n(x)$ denotes the set of the interior points of $U_n(x)$.
Moreover, for all $r>0$ and $y\in F$,
\begin{equation}\label{a3-3-2a}
\lim_{n \rightarrow \infty}m\Big(
B_F(y,r)\bigcap\big(F\setminus\bigcup_{x \in V_n}U_n(x)\big)\Big)=0.
\end{equation}
For each $n\ge1$ and $y \in F\setminus\bigcup_{x \in V_n}U_n(x)$, there exists $z\in V_n$ such that
$\rho(y,z)\le {C_6}{n}^{-1}.$ Here $C_i$ $(i=3,\cdots,6)$ are positive constants independent of $n$.
\end{itemize}
\end{lemma}
We will consider stable-like processes on the
graphs $\{G_n\}_{n\ge 1}$.
\subsection{Stable-like processes on graphs and the metric measure spaces}\label{S:app}
We first introduce a class of Dirichlet forms $(D_{V_n}, \mathscr{F}_{V_n})$ on
the graph $(V_n,E_{V_n})$.
For any $n\ge1$, define
\begin{align*}
D_{V_n}(f,f)&=\frac{1}{2}\sum_{x,y\in V_n}(f(x)-f(y))^2\frac{w_{x,y}^{(n)}}{\rho(x,y)^{d+\alpha}}
m_n(x)m_n(y),\quad f\in \mathscr{F}_{V_n},\\
\mathscr{F}_{V_n}&=\{f\in L^2(V_n;m_n): D_{V_n}(f,f)<\infty\},
\end{align*}
where $\alpha\in (0,2)$, $\rho(x,y)$ is the distance function on $F$, $m_n$ is the measure on $V_n$ defined by
$$
m_n(A):=\sum_{x \in A}m\big(U_n(x)\big),\ \ \forall\ A\subset V_n,
$$(for simplicity, we write $m_n(x)=m_n(\{x\})$ for all $x\in V_n$),
and $\{w_{x,y}^{(n)}:x,y\in V_n\}$ is a sequence
satisfying that $w_{x,y}^{(n)}\ge0$ and $w_{x,y}^{(n)}=w_{y,x}^{(n)}$ for all $x\neq y$, and
$$
\sum_{y\in V_n}\frac{w_{x,y}^{(n)}}{\rho(x,y)^{d+\alpha}}m_n(y)<\infty,\quad x\in V_n.
$$ We note that, in the definition of
the Dirichlet form
$(D_{V_n}, \mathscr{F}_{V_n})$ we use the metric $\rho(x,y)$ instead of the graph metric $\rho_n(x,y)$ on $V_n$.
According to \cite[Theorem 2.1]{CKK}, for any $n\ge1$, $(D_{V_n}, \mathscr{F}_{V_n})$ is a regular Dirichlet form on $L^2(V_n;m_n)$. Let $X^{(n)}:=\{(X_t^{(n)})_{t\ge0}, (\mathds P_x)_{x\in V_n}\}$ be the associated symmetric Markov process.
To obtain the weak
convergence for $X^{(n)}$, we also introduce a kind of scaling processes
associated with $\{X^{(n)}\}_{n\ge1}$.
For any $n\ge1$, let $\mathbf{P}_n$ be the projection map from $(V_n, \rho)$ to $(V_n,\rho_{n})$ such that $\mathbf{P}_n(x):=x$ for
$x \in V_n$. Define a measure $\tilde m_n$ on
$(V_n,\rho_{n})$ as follows
$$
\tilde m_n(A)=n^{d}m_n\big(\mathbf{P}_n^{-1}(A)\big)=n^{d}\sum_{x \in \mathbf{P}_n^{-1}(A)}m_n(x),\quad A\subset V_n.
$$ For simplicity, $\tilde m_n(x)=\tilde m_n(\{x\})$ for any $x\in V_n$.
For any $n\ge1$, we consider the following Dirichlet form
$\big(\tilde D_{V_n},\tilde \mathscr{F}_{V_n}\big)$ on $L^2(V_n;\tilde m_n)$:
\begin{align*}
\tilde D_{V_n}(f,f)&=\frac{1}{2}\sum_{x,y\in V_n}(f(x)-f(y))^2 \frac{\tilde w_{x,y}^{(n)}}{\rho_{n}(x,y)^{d+\alpha}}\tilde m_n(x)
\tilde m_n(y),\quad
f\in \tilde \mathscr{F}_{V_n},\\
\tilde \mathscr{F}_{V_n}&=\{f\in L^2(V_n;\tilde m_n): \tilde D_{V_n}(f,f)<\infty\},
\end{align*}
where $$\tilde w^{(n)}_{x,y}:=w^{(n)}_{x,y}
\left(\frac{\rho_{n}(x,y)}{n\rho(x,y)}\right)^{d+\alpha},\quad x,y\in V_n.$$
Note that $\tilde D_{V_n}(f,f)=n^{d-\alpha}D_{V_n}(f,f)$ and $\tilde \mathscr{F}_{V_n}=\mathscr{F}_{V_n}$.
Let $\tilde X^{(n)}$ be the symmetric Markov process associated with $\big(\tilde D_{V_n},\tilde \mathscr{F}_{V_n}\big)$.
According to the expressions of $\big(D_{V_n},\mathscr{F}_{V_n}\big)$ and $\big(\tilde D_{V_n},\tilde \mathscr{F}_{V_n}\big)$, we know that
$\big(\mathbf{P}_n(X^{(n)}_t)\big)_{t\ge 0}$ has the same distribution as
$\big(\tilde X^{(n)}_{n^{\alpha}t}\big)_{t\ge 0}$.
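This identification can be verified on the level of generators. Writing $L_n$ and $\tilde L_n$ for the generators associated with $(D_{V_n},\mathscr{F}_{V_n})$ and $\big(\tilde D_{V_n},\tilde \mathscr{F}_{V_n}\big)$, respectively, we have, for every $x\in V_n$,
\begin{align*}
\tilde L_nf(x)&=\sum_{y\in V_n}\big(f(y)-f(x)\big)\frac{\tilde w^{(n)}_{x,y}}{\rho_{n}(x,y)^{d+\alpha}}\tilde m_n(y)\\
&=n^{-\alpha}\sum_{y\in V_n}\big(f(y)-f(x)\big)\frac{w^{(n)}_{x,y}}{\rho(x,y)^{d+\alpha}}m_n(y)=n^{-\alpha}L_nf(x),
\end{align*}
so the deterministic time change $t\mapsto n^{\alpha}t$ transforms $\tilde X^{(n)}$ into a process with generator $L_n$.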
As a candidate of the scaling limit of the discrete forms
$(D_{V_n}, \mathscr{F}_{V_n})$, we now define a symmetric Dirichlet form $(D_0, \mathscr{F}_0)$ on
$L^2(F;m)$
as follows
\begin{equation}\label{e4-1}
\begin{split}
D_0(f,f)&=\frac{1}{2}\int_{\{F\times F\setminus {\rm diag}\}}\big(f(x)-f(y)\big)^2\frac{c(x,y)}{\rho(x,y)^{d+\alpha}}\,m(dx)\,m(dy),\quad f \in \mathscr{F}_0,\\
\mathscr{F}_0&=\{f\in L^2(F;m): D_0(f,f)<\infty\},
\end{split}
\end{equation}
where $\alpha\in(0,2)$, ${\rm diag}:=\{(x,y)\in F\times F:x=y\}$ and $c:F\times
F\rightarrow (0,\infty)$ is a symmetric
continuous
function such that $0<c_1\le c(x,y)\le c_2<\infty$ for all
$(x,y) \in F\times F\setminus{\rm diag}$ and some constants
$c_1,c_2$. According to \eqref{a3-1-1} and the fact that $\alpha\in(0,2)$, we have
\begin{align*}
&\sup_{x\in F}\int_{\{y\in F: y\neq x\}}\big(1\wedge \rho^2(x,y)\big)\frac{c(x,y)}{\rho(x,y)^{d+\alpha}}\,m(dy)\\
&\le \sup_{x\in F}\sum_{k=0}^\infty\int_{\{y\in F: 2^{-(1+k)}<\rho(y,x)\le 2^{-k}\}}\frac{c(x,y)}{\rho(x,y)^{d+\alpha-2}}\,m(dy)\\
&\quad+\sup_{x\in F}\sum_{k=0}^\infty\int_{\{y\in F: 2^{k}<\rho(y,x)\le 2^{1+k}\}}\frac{c(x,y)}{\rho(x,y)^{d+\alpha}}\,m(dy)\\
&\le c_2\sup_{x\in F}
\left(\sum_{k=0}^{\infty}m(B_F(x,2^{-k}))2^{(d+\alpha-2)(1+k)}+\sum_{k=0}^{\infty}m(B_F(x,2^{1+k}))2^{-(d+\alpha)k}\right)\\
&\le c_3\left(
\sum_{k=0}^{\infty}2^{-(2-\alpha) k}+\sum_{k=0}^{\infty}2^{-\alpha k}\right)<\infty.
\end{align*}
This implies ${\rm Lip}_c(F)\subseteq \mathscr{F}_0$, where ${\rm Lip}_c(F)$ denotes the space of Lipschitz continuous functions
on $F$ with compact support. We also need the following assumption on $(D_0,\mathscr{F}_0)$.
\paragraph{{\bf Assumption (Dir.)}}\label{a3-2}
{\it
${\rm Lip}_c(F)$ is dense in $\mathscr{F}_0$ under the norm $\|\cdot\|_{D_0,1}:=\!\!\big(D_0(\cdot,\cdot)+\|\cdot\|^2_{L^2(F;m)}\big)^{1/2}$.}
\noindent
Under Assumption {\bf (Dir.)}, $(D_0,\mathscr{F}_0)$ is a regular Dirichlet form on $L^2(F;m)$, and there exists a strong Markov process
$Y:=(Y_{t})_{t\ge 0}$ associated with $(D_0,\mathscr{F}_0)$. Moreover, by \cite[Theorem 1.1]{CK} or \cite[Theorem 1.2]{CK08},
the process $Y$ has a heat kernel $p^Y:(0,\infty)\times F \times F \rightarrow (0,\infty)$, which is jointly continuous.
In particular, the process
$Y:=\big((Y_{t})_{t\ge 0}, (\mathds P_{x}^Y)_{x \in
F}\big)$ can start from all $x\in F$.
The process $Y$ is called an $\alpha$-stable-like process in the literature; see \cite{CK, CK08}.
Two-sided estimates for the heat kernel $p^Y(t,x,y)$ of the process $Y$ have been obtained in \cite{CK}.
\subsection{Generalized Mosco convergence}
To study the convergence of the processes $X^{(n)}$, we will use
some results from \cite{CKK}, which are concerned with the
generalized Mosco convergence of $X^{(n)}$.
For any $n\ge1$, we define an
extension operator $E_n:L^2\big(V_n;m_n\big) \rightarrow
L^2(F;m)$ as follows
\begin{equation}\label{e4-2a}
E_n(g)(z)=
\begin{cases}
g(x), & z\in \text{Int}U_n(x)\ \text{for some}\ x\in V_n,\\
0, & z\in F\setminus\bigcup_{x \in V_n}U_n(x),\
\end{cases}\quad g\in L^2\big(V_n;m_n\big).\end{equation}
Note that because $m(\partial U_n(x))=0$ for any $x\in V_n$ by Assumption {\bf(MMS)}(i), there
is no need to worry about $E_n(g)$ on $\bigcup_{x\in V_n}\partial
U_n(x)$, and the function $E_n(g)$ is well defined $m$-a.e.\ on $F$.
Note also that the definition of the extension operator $E_n$ above is a little different from that in \cite{CKK}, see \cite[(2.14)]{CKK}. Furthermore, we define a projection (restriction) operator
$\pi_n: L^2(F;m) \rightarrow L^2\big(V_n;m_n\big)$ as
follows
$$
\pi_n(f)(x)=m_n(x)^{-1}\int_{U_n(x)}f(z)\,m(dz),\quad x \in V_n,\
f\in L^2\big(F;m\big).
$$
\begin{remark} As shown in Lemma \ref{L:ac}, under assumption {\bf (MMS)}, the space $F$ admits a sequence of approximating graphs $\{(V_n,E_{V_n}):n\ge1\}$ enjoying all the properties mentioned in Lemma \ref{L:ac}. Though these properties are weaker than {\bf(AG.1)}--{\bf(AG.3)} in \cite[Theorem 2.1]{CKK},
one can verify that \cite[Lemma 4.1]{CKK} and so \cite[Theorem 4.7]{CKK} still hold with notations above. \end{remark}
For simplicity, we assume that
there exists a point $0 \in \bigcap_{n=1}^{\infty}V_n$; otherwise, we can take a sequence $\{o_n\}_{n\ge1}$ with
$o_n\in V_n$ for all $n\ge1$ such that $\lim_{n \rightarrow \infty}o_n$ exists, and then the arguments below still hold true with the limit point $0:=\lim_{n \rightarrow \infty}o_n$.
Fix $0\in \cap_{n=1}^{\infty}V_n$. We assume that the following conditions hold for $\{w_{x,y}^{(n)}: x,y \in V_n\}$.
\paragraph{{\bf Assumption (Mos.)}} \label{a4-2}{\it
\begin{itemize}
\item[(i)] For every $R>0$,
\begin{equation}\label{p3-1-1}
\lim_{\varepsilon \to 0}\limsup_{n \rightarrow \infty}
\bigg[n^{-2d}\sum_{x,y\in B_F(0,R)\cap V_n: 0<\rho(x,y)\le \varepsilon}\frac{w_{x,y}^{(n)}}{\rho(x,y)^{d+\alpha-2}}\bigg]=0
\end{equation} and
\begin{equation}\label{p3-1-1a}
\lim_{l \rightarrow \infty}\limsup_{n \rightarrow \infty}
\bigg[n^{-2d}\sum_{x,y\in B_F(0,R)\cap V_n: \rho(x,y)\ge
l}\frac{w_{x,y}^{(n)}}{\rho(x,y)^{d+\alpha}}\bigg]=0.
\end{equation}
\item[(ii)] For any sufficiently small $\varepsilon>0$, large $R>0$ and any
$f \in {\rm Lip}_c(F)$,
\begin{equation}\label{p3-1-2}
\begin{split}
\lim_{n \rightarrow \infty}\left[n^{-d}\!\!
\sum_{x\in B_F(0,R)\cap V_n}\!\!\bigg(\sum_{y \in B_F(0,R)\cap V_n: \rho(x,y)>\varepsilon}
\!\!\!\!\big(f(x)\!\!-\!\!f(y)\big)\frac{(w_{x,y}^{(n)}\!\!-\!\!c(x,y))}{\rho(x,y)^{d+\alpha}}m_n(y)\bigg)^2\right]=0.
\end{split}
\end{equation}
\item[(iii)] For any sufficiently small $\varepsilon>0$, large $R>0$ and any
$f\in C_b(B_F(0,R))$,
\begin{equation}\label{p3-1-3}
\begin{split}
\lim_{n \rightarrow \infty}\sum_{x,y \in B_F(0,R)\cap V_n: \rho(x,y)>\varepsilon}
\big(f(x)-f(y)\big)^2\frac{\big(w_{x,y}^{(n)}-c(x,y)\big)}{\rho(x,y)^{d+\alpha}}m_n(x)m_n(y)=0.
\end{split}
\end{equation}
\end{itemize}}
Denote by $(P^Y_{t})_{t \ge 0}$ the Markov semigroup of the process $Y$, and
denote by $(P_t^{(n)})_{t \ge 0}$ the Markov semigroup of the process
$X^{(n)}$. We set
$\hat P_t^{(n)} f(x)= E_n(P_t^{(n)}(\pi_n(f)))(x)$ for any $f\in L^2(F;m). $
\begin{proposition}\label{p3-1}
Suppose that Assumptions {\bf{(MMS)}, \bf{(Dir.)}} and {\bf{(Mos.)}} hold. Then
$$
\lim_{n \rightarrow \infty}\|\hat P_t^{(n)}f-P_{t}^Y
f\|_{L^2(F;m)}=0,\quad f\in L^2(F;m),\ t>0.
$$
\end{proposition}
\begin{proof}
It is easy to see that the Dirichlet form $(D_0,\mathscr{F}_0)$ satisfies
{\bf $(A2)$} in \cite[Section 2]{CKK}. By assumption {\bf (Dir.)}
and the continuity of $c(x,y)$,
we know that {\bf $(A3)^*$} in \cite[Section 2]{CKK} holds true.
Clearly, condition {\bf $(A4)^*$} (i) in \cite[Section 2]{CKK} is a
direct consequence of \eqref{p3-1-1} and \eqref{p3-1-1a}. For any
$R,\varepsilon>0$ and $f \in {\rm Lip}_c(F)$, define
\begin{align*}
L_{R,\varepsilon}f(x)&=\int_{\{z\in B_F(0,R): \rho(z,x)>\varepsilon\}}
(f(z)-f(x))
\frac{c(x,z)}{\rho(x,z)^{d+\alpha}}\,m(dz),\quad x\in F,\\
\overline{L_{R,\varepsilon}^n}f(x)&=\sum_{z \in B_F(0,R)\cap V_n: \rho(x,z)>\varepsilon}(f(z)-f(x))
\frac{w^{(n)}_{x,z}}{\rho(x,z)^{d+\alpha}}m_n(z),\quad x\in V_n,\\
L_{R,\varepsilon}^n f(x)&=E_n(\overline{L_{R,\varepsilon}^n}f)(x),\quad x\in F.
\end{align*}
Then,
$$\int_{B_F(0,R)}|L_{R,\varepsilon}^n f(x)-L_{R,\varepsilon}f(x)|^2\, m(dx)
\le \sum_{i=1}^4I_{i,n},
$$
where
\begin{align*}
I_{1,n}&=2\sum_{x\in B_F(0,R)\cap V_n}\left(\sum_{{y \in B_F(0,R)\cap V_n:}\atop{\rho(x,y)>\varepsilon}}
\big(f(x)-f(y)\big)\frac{(w_{x,y}^{(n)}-c(x,y))}{\rho(x,y)^{d+\alpha}}m_n(y)\right)^2m_n(x),\\
I_{2,n}&=8\text{osc}_n(f)^2 \sum_{x\in B_F(0,R)\cap V_n}\left(\sum_{y \in B_F(0,R)\cap V_n: \rho(x,y)>\varepsilon}
\frac{c(x,y)}{\rho(x,y)^{d+\alpha}}m_n(y)\right)^2m_n(x),\\
I_{3,n}&=8\|f\|_{\infty}^2\text{osc}_n(c)^2
\int_{B_F(0,R)}\left(\int_{B_F(0,R)\cap\{y\in F: \rho(x,y)>\varepsilon\}}
\frac{1}{\rho(x,y)^{d+\alpha}}\,m(dy)\right)^2\,m(dx),\\
I_{4,n}&=4\|f\|_{\infty}^2\|c\|_{\infty}^2\int_{B_F(0,R)\cap(F\setminus\cup_{z\in V_n}U_n(z))}
\!\!\left(\int_{B_F(0,R)\cap({F\setminus\cup_{z\in V_n}U_n(z))}\atop{\cap\{y\in F: \rho(x,y)>\varepsilon\}}}\!\!
\frac{1}{\rho(x,y)^{d+\alpha}}\,m(dy)\right)^2\!\!m(dx),\\
\text{osc}_n(f)&=\sup_{x\in B_F(0,R)\cap V_n,x_1,x_2\in U_n(x)}
|f(x_1)-f(x_2)|,\\
\text{osc}_n(c)&=\sup_{x,y\in B_F(0,R)\cap V_n,x_1,x_2\in U_n(x),y_1,y_2\in U_n(y)}|c(x_1,y_1)-c(x_2,y_2)|.
\end{align*}
It follows from \eqref{a3-3-3} and \eqref{p3-1-2} that $\lim_{n \rightarrow \infty}I_{1,n}=0$. Since
$f\in {\rm Lip}_c(F)$,
$\text{osc}_n(f) \rightarrow 0$ as $n \rightarrow \infty$. Then,
we arrive at
\begin{align*}
\limsup_{n\rightarrow \infty}I_{2,n} &\le c_1\varepsilon^{-2(d+\alpha)}
\big[\limsup_{n\rightarrow \infty} \text{osc}_n(f)^2\big]\\
&\quad\times\sup_{n\ge1}\Bigg\{n^{-3d}\sum_{x\in B_F(0,R)\cap V_n}\Big(\sum_{y\in B_F(0,R)\cap V_n:\rho(x,y)>\varepsilon}
{c(x,y)}\Big)^2
\Bigg\}\\
&\le c_2(\varepsilon)
\big[\limsup_{n\rightarrow \infty} \text{osc}_n(f)^2\big]=0.
\end{align*}
By the continuity of $c(x,y)$, it is also easy to see that $\lim_{n \rightarrow \infty}I_{3,n}=0$. Obviously,
\eqref{a3-3-2a} implies that $\lim_{n \rightarrow \infty}I_{4,n}=0$.
Therefore, we have
$$
\lim_{n \rightarrow \infty}\int_{B_F(0,R)}|L_{R,\varepsilon}^n f(x)-L_{R,\varepsilon}f(x)|^2\, m(dx)=0,
$$
which implies that condition {$(A4)^*$ (ii)} in \cite[Section
2]{CKK} is satisfied.
Similarly, with the aid of \eqref{p3-1-3}, we can verify that condition
{$(A4)^*$} (iii) in \cite[Section 2]{CKK} is also fulfilled.
Therefore, we can verify that all the conditions of {\bf $(A4)^*$}
in \cite[Section 2]{CKK} hold under assumptions {\bf{(MMS)},
\bf{(Dir.)}} and {\bf{(Mos.)}}. Hence, the required assertion
follows from \cite[Theorem 4.7 and Theorem 8.3]{CKK}.
\end{proof}
\subsection{Weak convergence}
The main purpose of this subsection is to establish the weak convergence theorem of the law for $X^{(n)}$.
For any $T\in (0,\infty]$, denote by $\mathscr{D}([0,T];F)$ the
collection of c\`{a}dl\`{a}g $F$-valued functions on $[0,T]$ equipped
with the Skorohod topology. Let $\mathds P_{x}^{(n)}$ be the law of $X^{(n)}$ with starting
point $x\in V_n$. Note that $\mathds P_{x}^{(n)}$ can be seen as a distribution on $\mathscr{D}([0,T]; F)$.
We will make use of scaling processes $\{\tilde X^{(n)}\}_{n\ge1}$
constructed in Subsection \ref{S:app}. First, we consider some
properties of the space $(V_n,\rho_n,\tilde m_n)$. For any $x\in
V_n$ and $r>0$, let $B_{V_n}(x,r)=\{z \in V_n: \rho_{n}(z,x)\le
r\}$.
\begin{lemma}\label{l4-1}
Under assumption {\bf(MMS)}, there are constants $C_0>0$ and $c_V\ge1$ such that for
all $n\ge1$,
\begin{equation}\label{l4-1-1}
c_V^{-1}\le \tilde m_n(x)\le c_V,\quad x\in V_n
\end{equation} and
\begin{equation}\label{l4-1-2}
c_V^{-1} r^{d}\le \tilde m_n(B_{V_n}(x,r))\le c_Vr^{d},\quad x\in V_n, 1\le r <C_0nr_F,
\end{equation}
where $r_F$ is the constant in \eqref{a3-1-1}.
\end{lemma}
\begin{proof}
By the definition of $\tilde m_n$ and \eqref{a3-3-3}, \eqref{l4-1-1} holds trivially.
Note that, for any $x\in V_n$, $y \in B_F(x,r)\cap V_n$ and $z \in
U_n(y)$, by \eqref{a3-3-2}, we have $\rho(z,x)\le
\rho(z,y)+\rho(y,x) \le C_3n^{-1}+r,$ and so $\bigcup_{y \in
B_F(x,r)\cap V_n}U_n(y)\subseteq B_F(x,r+C_3n^{-1}).$ Hence, for
any $x\in V_n$ and $1\le r<{(nr_F-C_3)}/{C_2}$ (where $C_2$ and
$C_3$ are constants in \eqref{a3-3-1} and \eqref{a3-3-2}),
\begin{align*}
\tilde m_n\big(B_{V_n}(x,r)\big)&=n^{d}m_n\big(B_{V_n}(x,r)\cap V_n\big)\le
n^{d}m_n\big(B_{F}(x,C_2n^{-1}r)\cap V_n\big)\\
&=n^{d}\sum_{y \in B_F(x,C_2n^{-1}r)\cap V_n}m\big(U_n(y)\big)\le n^{d}m\big(B_F(x,C_2n^{-1}r+C_3n^{-1})\big)\le c_0r^{d},
\end{align*}
where in the first inequality we used
\eqref{a3-3-1}, the second inequality is due to the facts that $m(U_n(x)\cap U_n(y))=0$ for all $x\neq y$ and $\bigcup_{y \in B_F(x,C_2n^{-1}r)\cap V_n}U_n(y)
\subseteq B_F(x,C_2n^{-1}r+C_3n^{-1})$ as explained above, and the last inequality follows from \eqref{a3-1-1}.
On the other hand, for any
$z \in B_F(x,r)$, by $(3)$ in Lemma \ref{L:ac}, there exists $y\in V_n$ such that $\rho(y,z)\le c_0n^{-1}$ for some constant $c_0>0$, and so
$\rho(y,x)\le \rho(z,x)+\rho(z,y)\le r+c_0n^{-1}.$ This implies that $B_F(x,r)\subset \bigcup_{y \in
B_F(x,r+c_0n^{-1})\cap V_n}B_F(y,c_0n^{-1}).$ Hence, for
$(2(C_1^{-1}c_0))\vee1< r<({nr_F+c_0})/{C_1}$ (where $C_1$ is the
constant in \eqref{a3-3-1}) and $x \in V_n$,
\begin{align*}
\tilde m_n\big(B_{V_n}(x,r)\big)&=n^{d}m_n\big(B_{V_n}(x,r)\big)\ge
n^{d}m_n\big(B_{F}(x,C_1n^{-1}r)\cap V_n\big)\\
&=n^{d}\sum_{y \in B_F(x,C_1n^{-1}r)\cap V_n}m\big(U_n(y)\big)\ge c_1n^{d}\sum_{y \in B_F(x,C_1n^{-1}r)\cap V_n}m\big(B_F(y,c_0n^{-1})\big)\\
&\ge c_1 n^{d}m\big(B_F(x,C_1n^{-1}r-c_0n^{-1})\big)\ge c_2r^{d},
\end{align*}
where in the first inequality we used
\eqref{a3-3-1} again, the second inequality follows from \eqref{a3-1-1} and \eqref{a3-3-3},
the third inequality is due to $\bigcup_{y \in B_F(x,C_1n^{-1}r)\cap V_n}B_F(y,c_0n^{-1})
\supseteq B_F(x,C_1n^{-1}r-c_0n^{-1})$ as claimed before, and in the last one we have used \eqref{a3-1-1}.
Therefore, combining both estimates above and changing the corresponding constants properly, we prove \eqref{l4-1-2}.
\end{proof}
By \eqref{a3-3-1}, for all $n\ge1$,
$
\sup_{x,y \in V_n}\rho_{n}(x,y)\le C_1^{-1}nr_F,
$ where $r_F$ is the constant in \eqref{a3-1-1}.
Below, we let $C_0'=C_1^{-1}$.
For any $x,z\in
V_n$ and $r>0$, let $B^{w^{(n)}}_{V_n,z}(x,r)=\{y\in B_{V_n}(x,r): w^{(n)}_{y,z}>0\}$, and $B^{w^{(n)}}_{V_n}(x,r)=
B^{w^{(n)}}_{V_n,x}(x,r)$.
We need the following further assumptions
on $\{w_{x,y}^{(n)}:x,y\in V_n\}$.
\paragraph{{\bf Assumption (Wea.)}} {\it
Suppose that for some fixed $\theta\in (0,1)$, there exists a constant $R_0\ge1$ such that
\begin{itemize}
\item [(i)] For any $n \ge1$,
$R_0<R<C_0'r_F$ and ${R^{\theta}/2}\le r \le 2R$,
\begin{equation}\label{a4-3-1}
\sup_{x \in B_{V_n}(0,6R)}\sum_{y\in V_n: \rho_n(y,x)\le r}
\frac{w_{x,y}^{(n)}}{\rho_n(x,y)^{d+\alpha-2}}\le C_3r^{2-\alpha},
\end{equation}
\begin{equation}\label{a4-3-1a}
m_n(B_{V_n,z}^{w^{(n)}}(x,r))\ge c_0 m_n(B_{V_n}(x,r)),\quad x,z\in B_{V_n}(0,6R),
\end{equation}
and
\begin{equation}\label{a4-3-2}
\sup_{x\in B_{V_n}(0,6R)\cap V_n}\sum_{y \in V_n: \rho_n(y,x)\le
c_*r,w^{(n)}_{x,y}>0} (w_{x,y}^{(n)})^{-1}\le C_3r^d,
\end{equation}
where $c_0>1/2$ is independent of $n,R_0, R, r,x, z$ and $r_F$, $c_*=8c_V^{2/d}$ and $c_V$ is the constant in \eqref{l4-1-2}.
When $\alpha\in (0,1)$, \eqref{a4-3-1} can be replaced by
\begin{equation}\label{e5-1}
\sup_{x \in B_{V_n}(0,6R)}\sum_{y \in V_n: \rho_n(y,x)\le
r} \frac{w_{x,y}^{(n)}}{\rho_n(x,y)^{d+\alpha-1}}\le
C_3r^{1-\alpha}.
\end{equation}
\item[(ii)] For every $n\ge1$,
$R_0<R< C_0^{'}r_F$ and $r\ge {R^{\theta}/2}$,
\begin{equation}\label{a4-3-3}
\sup_{x \in B_{V_n}(0,6R)}\sum_{y \in V_n:
\rho_n(x,y)>r}\frac{w_{x,y}^{(n)}}{\rho_n(x,y)^{d+\alpha}}\le
C_3r^{-\alpha}.
\end{equation}
\end{itemize}
Here $C_3$ is a positive constant independent of $n$, $R_0$ and
$r_F$.}
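For orientation, \eqref{a4-3-1} and \eqref{a4-3-3} are discrete analogues of the standard continuum estimates: on $\mathds R^d$ with $w\equiv 1$ one has, for $\alpha\in (0,2)$,
\begin{equation*}
\int_{\{y:\,|y-x|\le r\}}\frac{dy}{|x-y|^{d+\alpha-2}}=c_d\int_0^r s^{1-\alpha}\,ds=\frac{c_d}{2-\alpha}\,r^{2-\alpha},
\qquad
\int_{\{y:\,|y-x|> r\}}\frac{dy}{|x-y|^{d+\alpha}}=\frac{c_d}{\alpha}\,r^{-\alpha},
\end{equation*}
where $c_d$ is the surface area of the unit sphere in $\mathds R^d$.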
\medskip
The main result of this section is as follows. It is, in some sense, a
generalization of \cite[Proposition 2.8]{CCK}. Indeed, in our case
the H\"older regularity of parabolic functions is available only in the region
$(C_0^{-1}|s-t|)^{1/\alpha}+\rho(x,y)\ge 2r^{\delta}$ (see Theorem
\ref{T:holder}), so more careful arguments are required.
\begin{theorem}\label{t3-1}
Suppose that Assumptions {\bf (MMS)}, {\bf(Dir.)}, {\bf(Mos.)} and {\bf(Wea.)} hold. Then,
for any $\{x_n\in V_n: n\ge 1\}$
such that $\lim_{n \rightarrow \infty}x_n=x$ for some $x \in F$,
it holds that for every $T>0$, $\mathds P^{(n)}_{x_n}$ converges weakly to
$\mathds P_{x}^Y$ on the space of all probability measures on
$\mathscr{D}([0,T];F)$, where $\mathds P^{(n)}_{x_n}$ and $\mathds P_x^Y$ denote the laws of
$X_{\cdot}^{(n)}$ and $Y_{\cdot}$ on $\mathscr{D}([0,T]; F)$, respectively.
\end{theorem}
\begin{proof}
Throughout the proof, we denote the law of $(X_t^{(n)})_{t\ge 0}$ on $\mathscr{D}([0,\infty);F)$ and that of $(\tilde X_t^{(n)})_{t\ge 0}$
on $\mathscr{D}([0,\infty);V_n)$ by
$\mathds P_{\cdot}^{(n)}$ and $\tilde\mathds P_{\cdot}^{(n)}$, respectively. Let $X_{\cdot}^{(n)}$ and $\tilde X_{\cdot}^{(n)}$ be
their associated canonical paths.
Suppose that $\{x_n\in V_n: n\ge 1\}$ is a sequence with $\lim_{n
\rightarrow \infty}x_n=x$ for some $x \in F$.
{\bf Step (1):} We show
that for each fixed $T>0$, $\{\mathds P_{x_n}^{(n)}\}_{n\ge1}$ is tight on
$\mathscr{D}([0,T];F)$. To prove the tightness of
$\{\mathds P_{x_n}^{(n)}\}_{n\ge1}$, it suffices to verify that
\begin{equation}\label{t3-1-0a}
\lim_{R \rightarrow \infty}\limsup_{n \rightarrow \infty}\mathds P_{x_n}^{(n)}
\big(\sup_{s\in [0,T]}\rho(0,X_{s}^{(n)})>R\big)=0,
\end{equation}
and for any sequence of stopping times $\{\tau_n\}_{n\ge1}$
such that $\tau_n\le T$ and any sequence
$\{\varepsilon_n\}_{n\ge1}$ with $\lim_{n \rightarrow
\infty}\varepsilon_n=0$,
\begin{equation}\label{t3-1-1}
\limsup_{n \rightarrow \infty}\mathds P_{x_n}^{(n)}\Big(
\rho\big(X_{\tau_n+\varepsilon_n}^{(n)}, X_{\tau_n}^{(n)}\big)>\eta\Big)=0,\quad \eta>0.
\end{equation} See, e.g., \cite[Theorem 1]{Ad}.
When $r_F<\infty$, \eqref{t3-1-0a} holds trivially. Now, we are going to prove \eqref{t3-1-0a} for the case that
$r_F=\infty$. As we mentioned above, $\big(\mathbf{P}_n(X^{(n)}_t)\big)_{t\ge 0}$ has the same distribution as
$\big(\tilde X^{(n)}_{n^{\alpha}t}\big)_{t\ge 0}$, where $(\tilde X_t^{(n)})_{t\ge 0}$ is a strong Markov process generated by
the Dirichlet form $(\tilde D_{V_n},\tilde \mathscr{F}_{V_n})$. Therefore,
\begin{equation}\label{t3-1-4}
\begin{split}
\mathds P_{x_n}^{(n)} \big(\sup_{s\in [0,T]}\rho(X_{s}^{(n)},0)>R\big)&=\mathds P_{x_n}^{(n)} \Big(\sup_{s\in [0,T]}\rho\big(\mathbf{P_n}(X_{s}^{(n)}),0\big)>R\Big)\\
&=\tilde \mathds P_{x_n}^{(n)} \Big(\sup_{s \in
[0,n^{\alpha}T]}\rho(\tilde X_s^{(n)},0)>R\Big)\\
&\le \tilde \mathds P_{x_n}^{(n)} \Big(\sup_{s \in
[0,n^{\alpha}T]}\rho_{n}(\tilde X_s^{(n)},0)>c_1^*nR\Big),
\end{split}
\end{equation}
where the last inequality follows from the fact that $\rho_{n}(x,y)\ge c_1^*n\rho(x,y)$ for all $x,y \in V_n$, thanks to \eqref{a3-3-1}.
On the other hand, under assumption {\bf(Wea.)},
it is easy to verify that assumption
{\bf(Exi.)} (or assumption {\bf(Exi.')} when $\alpha\in (0,1)$)
holds on the space $(V_n,\rho_n,\tilde m_n)$
{with associated constants independent of $n$.}
{Combining this fact with \eqref{l4-1-1} and \eqref{l4-1-2}, we can apply} Theorem
\ref{exit} (or Remark \ref{R:H}) to derive that for any fixed $\theta'\in
(\theta,1)$, there exist constants {$\delta\in (\theta,1)$} and $R_1\ge1$, such that for all
$n\ge1$, $R_1<R<C_0'r_Fn$ and
${R^{\delta}}\le r \le R$,
\begin{equation}\label{t3-1-3a}
\sup_{x \in B_{V_n}(0,2R)\cap V_n}\tilde
\mathds P_{x}^{(n)}\big(\tau_{B_{V_n}(0,r)} (\tilde X^{(n)}_{\cdot})\le
t\big)\le c_1\Big(\frac{t}{r^{\alpha}}\Big)^{1/3}, \quad \forall\
t\ge r^{\theta'\alpha},
\end{equation} where $B_{V_n}(x,r)=\{z \in V_n: \rho_{n}(z,x)\le
r\},$ $\tau_{B_{V_n}(0,r)}(\tilde X^{(n)}_{\cdot})$ is the first exit time from $B_{V_n}(0,r)$ of the process $\tilde X^{(n)}_\cdot$, and $c_1>0$ is independent of $R_1$, $n$, $r$, $R$ and $r_F$.
Suppose that $\rho(x_n,0)\le K$ for all $n\ge 1$ and some constant
$K>0$. Note that, also thanks to \eqref{a3-3-1}, $\rho_{n}(x_n,0)\le
c_2^*n\rho(x_n,0)\le c_2^*nK$. For every fixed $R>2c_2^*K/c^*_1$ and
$T>0$, we have
$R_1<c^*_1nR< C_0'nr_F$ (since $r_F=\infty$) and
$n^{\alpha}T>\big({c^*_1nR}/{2}\big)^{\theta'\alpha}$ for $n$ large
enough. Thus, by \eqref{t3-1-4} and \eqref{t3-1-3a},
\begin{align*}
\mathds P_{x_n}^{(n)} \big(\sup_{s\in [0,T]}\rho(X_{s}^{(n)},0)>R\big)
&\le \tilde \mathds P_{x_n}^{(n)} \big(\sup_{s\in [0,n^{\alpha}T]} \rho_{n}(\tilde X_{s}^{(n)},0)>c^*_1nR\big)\\
&\le \sup_{z \in B_{V_n}(0,c_2^*nK)\cap V_n}\tilde \mathds P_{z}^{(n)}\big(\tau_{B_{V_n}(0,c^*_1nR)}(\tilde X_{\cdot}^{(n)})\le n^{\alpha}T\big)\\
&\le \sup_{z \in
B_{V_n}(0,{c^*_1nR}/{2})\cap V_n}\tilde \mathds P_{z}^{(n)}\big(\tau_{B_{V_n}(z,{c_1^*nR}/{2})}(\tilde X_{\cdot}^{(n)})\le
n^{\alpha}T\big) \\
&\le
c_1\left(\frac{n^{\alpha}T}{(c^*_1nR/2)^{\alpha}}\right)^{1/3}=
c_2\left(\frac{T}{R^{\alpha}}\right)^{1/3},
\end{align*}
which implies
\begin{align*}
&\lim_{R \rightarrow \infty}\limsup_{n \rightarrow
\infty}\mathds P_{x_n}^{(n)} (\sup_{s\in [0,T]}\rho(X_{s}^{(n)},0)>R) \le
\lim_{R \rightarrow
\infty}c_2\left(\frac{T}{R^{\alpha}}\right)^{1/3}=0.
\end{align*} This proves \eqref{t3-1-0a}.
Next, let $\{\tau_n\}_{n\ge1}$ be a sequence of stopping times
such that $\tau_n\le T$, and let $\{\varepsilon_n\}_{n\ge1}$ be a sequence such that $\lim_{n\to\infty}\varepsilon_n=0$. By the strong Markov property, for every $\eta>0$ small enough and
$R\ge1$ large enough,
\begin{align*}
& \mathds P_{x_n}^{(n)}\big(\rho(X_{\tau_n+\varepsilon_n}^{(n)},X_{\tau_n}^{(n)})>\eta\big)= \mathds E_{x_n}^{(n)}\big[\mathds P^{(n)}_{X_{\tau_n}^{(n)}}\big(\rho(X_{\varepsilon_n}^{(n)},X_{0}^{(n)})>\eta\big)\big]\\
&\le \sup_{z \in B_F(0,R)\cap V_n}\mathds P_{z}^{(n)}\big(\rho(X_{\varepsilon_n}^{(n)},X_{0}^{(n)})>\eta\big)+\mathds P_{x_n}^{(n)}
\big(\sup_{s\in [0,T]}\rho(X_{s}^{(n)},0)>R\big)\\
&\le \sup_{z \in B_{V_n}(0,(c_2^*nR)\wedge (C_0'nr_F))\cap V_n}\tilde \mathds P_{z}^{(n)}\big(
\rho_{n}(\tilde X_{n^{\alpha}\varepsilon_n}^{(n)},\tilde X_{0}^{(n)})>c_1^*n\eta\big)+\mathds P_{x_n}^{(n)}
\big(\sup_{s\in [0,T]}\rho(X_{s}^{(n)},0)>R\big)\\
&\le \sup_{z \in B_{V_n}(0,(c_2^*nR)\wedge (C_0'nr_F))\cap V_n}\!\!\tilde \mathds P_{z}^{(n)}\big(\tau_{B_{V_n}(z,c_1^*n\eta)}(\tilde X_{\cdot}^{(n)})\le
n^{\alpha}\varepsilon_n\big)\!+\!\mathds P_{x_n}^{(n)} \big(\sup_{s\in
[0,T]}\rho(X_{s}^{(n)},0)>R\big),
\end{align*}
where in the second inequality we have used the fact that $c_1^*n\rho(x,y)\le \rho_{n}(x,y)\le c_2^*n\rho(x,y)$ for $x,y\in V_n$, due to
\eqref{a3-3-1}. Take $n$ large enough and $\eta$ small enough such that $c_2^*nR>R_1$ and ${(c_2^*nR)^{\delta}}\le c_1^*n\eta\le
(c_2^*nR)\wedge (C_0'nr_F)$. Then, it
follows from \eqref{t3-1-3a} that
\begin{align*}
&\sup_{z \in B_{V_n}(0,(c_2^*nR)\wedge (C_0'nr_F))\cap V_n}\tilde \mathds P_{z}^{(n)}\big(\tau_{B_{V_n}(z,c^*_1n\eta )}(\tilde X_{\cdot}^{(n)})\le
n^{\alpha}\varepsilon_n\big)\\
&\le \sup_{z \in B_{V_n}(0,(c_2^*nR)\wedge (C_0'nr_F))\cap V_n}\tilde \mathds P_{z}^{(n)}\big(\tau_{B_{V_n}(z,c_1^*n\eta )}(\tilde X_{\cdot}^{(n)})\le (n^{\alpha}\varepsilon_n)\vee (2c_1^*n\eta)^{\theta'\alpha}\big)\\
&\le c_1\left(\frac{(n^{\alpha}\varepsilon_n)\vee
(2c_1^*n\eta)^{\theta'\alpha}}{(c_1^*n\eta)^{\alpha}}\right)^{1/3} \le
c_3\left((\varepsilon_n\eta^{-\alpha})\vee
(n\eta)^{-(1-\theta')\alpha}\right)^{1/3}.
\end{align*}
Combining both
estimates above with \eqref{t3-1-0a},
we obtain \eqref{t3-1-1}.
{\bf Step (2):} Now it suffices to show that any finite dimensional distribution of
$\mathds P_{x_{n}}^{(n)}$ converges to that of $\mathds P^Y_{x}$. We first claim
that for any fixed $t>0$, $f\in C_{\infty}(F)\cap
L^2(F;m)$ and a sequence $\{z_n: z_n \in
V_n\}_{n=1}^{\infty}$ with $\lim_{n \rightarrow \infty}z_n=z\in F$,
\begin{equation}\label{t3-1-1a}
\lim_{n \rightarrow \infty}E_n\big(P_t^{(n)}f\big)(z_n)=P^Y_{t}f(z),
\end{equation}
where $C_\infty(F)$ denotes the set of continuous functions on $F$
vanishing at infinity.
Indeed,
according to assumption {\bf(Mos.)},
Proposition \ref{p3-1} and \eqref{a3-3-2a},
there are a subsequence of $\{\hat P_t^{(n)}f: n\ge 1\}$ (we still
denote it by $\{\hat P_t^{(n)}f:n\ge 1\}$ for simplicity) and a
sequence
{$\{y_k\in \cup_{n\ge 1}\cup_{x\in V_n}\text{Int}(U_n(x)):k\ge 1\}$}
such that
(i) $y_k\neq z$ and $\lim_{k \rightarrow \infty}y_k=z$; (ii) for every $k\ge1$,
\begin{equation}\label{t3-1-2}
\lim_{n \rightarrow \infty}\hat P_t^{(n)} f(y_k)=P_{t}^Y f(y_k).
\end{equation}
For every $k\ge 1$ and $t>0$, we have
\begin{equation}\label{t3-1-2a}
\begin{split}
&|E_n(P_t^{(n)} f)(z_n)-P_{t}^Y f(z)|\\
&\le |\hat P_t^{(n)} f(y_k)-P_{t}^Y f(y_k)|+
|\hat P_t^{(n)} f(y_k)- E_n(P_t^{(n)} f)(y_k)|\\
&\quad+|E_n(P_t^{(n)} f)(y_k)-E_n(P_t^{(n)} f)(z_n)|
+|P_{t}^Y f(z)-P_{t}^Y f(y_k) |\\
&=: |\hat P_t^{(n)} f(y_k)-P_{t}^Y f(y_k)|+\sum_{i=1}^3 J_{i,n,k}.
\end{split}
\end{equation}
Recall that $\hat P_t^{(n)} f(x)=E_n(P_t^{(n)}(\pi_n(f)))(x)$ for all $x\in F$. By the definition of $\pi_n$,
$$\lim_{n \rightarrow \infty}\sup_{z\in V_n}|\pi_n (f)(z)-f(z)|=0$$ for any $f\in C_\infty(F).$
Hence,
\begin{align*}
\lim_{n \rightarrow \infty}\sup_{k \ge1}J_{1,n,k}&= \lim_{n
\rightarrow \infty}\sup_{k \ge1}
|E_n (P_t^{(n)} (\pi_n (f)))(y_k)- E_n(P_t^{(n)} f)(y_k)|\\
&\le \lim_{n \rightarrow \infty}\sup_{z\in V_n}|\pi_n f(z)-f(z)|=0,
\end{align*} where in the last inequality we used the contractivity of $(P_t^{(n)})_{t\ge0}$ in $L^\infty(V_n; m_n)$.
In the following, for any $x\in F$,
let $[x]_n\in V_n$ be
such that $x\in U_n([x]_n)$ and $\rho(x,[x]_n)\le c_3^*n^{-1}$, due
to (3) in Lemma \ref{L:ac}. For any $n\ge 1$ and $z\in V_n$,
noticing that $\big(\tilde X^{(n)}_{n^{\alpha}t}\big)_{t\ge 0}$ has
the same distribution as $\big(\mathbf{P_n}(X^{(n)}_t)\big)_{t\ge
0}$,
\begin{align*}
E_n(P_t^{(n)}f)(z)&=P_t^{(n)}f([z]_n)=\mathds E_{[z]_n}^{(n)}[f(X_t^{(n)})]=\tilde \mathds E_{[z]_n}^{(n)}[f(\tilde X^{(n)}_{n^{\alpha}t})]=\tilde P_{n^{\alpha}t}^{(n)}f([z]_n),
\end{align*}
where $\tilde P_t^{(n)}f(\cdot):=\tilde \mathds E_{\cdot}^{(n)}[f(\tilde X^{(n)}_t)]$ is the Markov semigroup of
$\tilde X^{(n)}:=(\tilde X_t^{(n)})_{t\ge 0}$.
{As mentioned above, due to assumption {\bf(Wea.)} and Lemma \ref{l4-1},}
we can apply Theorem \ref{T:holder} (also thanks to Remark \ref{R:H}) to obtain that there are constants
{$\delta \in (\theta,1)$ and $R_1\ge1$} such that for all $R_1<R<C'_0nr_F$, \eqref{t3-2-1} holds for every $\{\tilde X^{(n)}\}_{n\ge1}$ and with
constants independent of $n$.
Let $C_0>0$ be the constant in \eqref{l2-2-1a}.
For fixed $T>0$, we define $H_{T,n,f}(s,x)=\tilde P_{1+n^{\alpha}T-s}^{(n)}f(x)$, which
is a parabolic function on
$Q_{V_n}\big(0,y,(2^{-1}C_0^{-1}n^{\alpha}T)^{1/\alpha}\big)$ for each $y\in V_n$. Take $K$ large enough such that $K>(2^{-1}C_0^{-1}t)^{1/\alpha}$, $R_1<nK<C_0'nr_F$ and
$z_n\in B_{V_n}(0,nK)$ for all $n\ge 1$.
According to the facts that $y_k\rightarrow z$ as $k\to\infty$ and $y_k\neq z$ for all $k\ge1$, for any {fixed} $t>0$, we can choose $k \ge 1$ large enough such that $0<\varepsilon_k<\rho(y_k,z)\le
(4c_2^*)^{-1}((2^{-1}C_0^{-1}t)^{1/\alpha}\wedge (2^{-1}C_0'r_F))$, where $\varepsilon_k$ is a positive constant with
$\lim_{k \rightarrow \infty}\varepsilon_k=0$, and $c_2^*>0$ is the constant such that
$\rho_{n}(x,y)\le c_2^*n^{-1}\rho(x,y)$ for any $x,y\in V_n$.
Furthermore, for these fixed {$k$ and $t$}, we take $n$ large enough such that ${(nK)^{\delta}\le r_n\le nK}$,
{$\rho(z_n,z)\le (4c_2^*)^{-1}n^{-1}r_n$ and
$n\varepsilon_k\ge 4(c_1^*)^{-1}r_n^{\delta}$, where
$r_n:=(2^{-1}C_0^{-1}n^{\alpha}t)^{1/\alpha}\wedge (2^{-1}C_0'nr_F)$.} Hence,
\begin{align*}
\rho_{n}\big([z_n]_n,[y_k]_n\big)&\ge c^*_1n\rho\big([z_n]_n,[y_k]_n\big)\ge c_1^*n
\big(\rho\big(z,y_k\big)-\rho\big(y_k,[y_k]_n\big)-\rho\big(z,[z]_n\big)\big)\\
&\ge c_1^*n\varepsilon_k-2c^*_1c_3^*\ge r_n^{\delta},
\end{align*}
\begin{align*}
\rho_{n}\big([z]_n,[y_k]_n\big)&\le c_2^*n \rho\big([z]_n,[y_k]_n\big)\le
c_2^*n\big(\rho\big(z,y_k\big)+\rho\big(z,[z]_n\big)+\rho\big(y_k,[y_k]_n\big)\big)\\
&{ \le 4^{-1}r_n+2c_2^*c_3^*\le 2^{-1}r_n}
\end{align*}
and
\begin{align*}
\rho_{n}\big([z]_n,[z_n]_n\big)&\le c_2^*n \rho ([z]_n,[z_n]_n )\le
c_2^*n\big(\rho(z,z_n)+\rho(z,[z]_n)+\rho(z_n,[z_n]_n)\big)\\
&{\le 4^{-1}r_n+2c_2^*c_3^*\le 2^{-1}r_n,}
\end{align*}
where we used the fact that $\rho(y,[y]_n)\le c_3^*n^{-1}$ for all $y \in F$.
Note that since $z_n\in V_n$, $[z_n]_n=z_n$.
In summary, $(nK)^{\delta}\le
r_n\le nK$, $z_n\in B_{V_n}(0,nK)$, and
$[z_n]_n, [y_k]_n \in Q_{V_n}\big(0,[z]_n, r_n\big)$
with $\rho_{n}\big([z_n]_n, [y_k]_n\big)\ge r_n^{\delta}$.
Now, applying
\eqref{t3-2-1} to the parabolic function $H_{t,n,f}$ on
$Q_{V_n}(0,[z]_n,r_n)$, we can obtain that
\begin{align*}
&|\tilde P_{n^{\alpha}t}^{(n)}f([y_k]_n)-\tilde P_{n^{\alpha}t}^{(n)}f([z_n]_n)|\\
&=|H_{t,n,f}(1,n[y_k]_n)-H_{t,n,f}(1,n[z_n]_n)|\le c_4\|\tilde P_{n^{\alpha}t}^{(n)}f\|_{\infty}\bigg|\frac{\rho_{n}\big([y_k]_n,[z_n]_n\big)}{r_n}\bigg|^{\beta}\\
&\le c_{5}(t)\|f\|_{\infty}\rho\big([y_k]_n,[z_n]_n\big)^{\beta}\le c_{6}(t)\|f\|_{\infty}\big(\rho(y_k,z)^{\beta}+n^{-\beta}\big).
\end{align*}
This yields immediately that
\begin{align*}
\limsup_{n \rightarrow \infty}J_{2,n,k}&=
\limsup_{n \rightarrow \infty}|\tilde P_{n^{\alpha}t}^{(n)}f([y_k]_n)-\tilde P_{n^{\alpha}t}^{(n)}f([z_n]_n)|\\
&\le c_{6}(t)\limsup_{n \rightarrow \infty}\|f\|_{\infty}\big(\rho(y_k,z_n)^{\beta}+n^{-\beta}\big)=c_{7}(t)\|f\|_{\infty}\rho(y_k,z)^{\beta}.
\end{align*}
According to \cite[Theorem 4.14]{CK},
$
J_{3,n,k}\le c_{8}(t)\|f\|_{\infty}\rho(y_k,z)^{\beta}.
$
Combining all estimates with \eqref{t3-1-2a} and \eqref{t3-1-2}, we
arrive at that
$$
\limsup_{n \rightarrow \infty}\big|E_n(P_t^{(n)} f)(z_n)-P_{t}^Y
f(z)\big|\le c_{9}(t)\|f\|_{\infty}\rho(y_k,z)^{\beta},
$$
where $c_9(t)>0$ is independent of $k$. Since $k$ is arbitrary, letting $k\rightarrow \infty$ in the
last inequality proves \eqref{t3-1-1a}. In particular, according to \cite[Lemma 2.7]{CCK}, \eqref{t3-1-1a} implies
that for every compact set
$K\subseteq F$,
\begin{equation}\label{t3-1-3}
\limsup_{n \rightarrow \infty}\sup_{x \in K}|E_n(P_t^{(n)}
f)(x)-P_{t}^Y f(x)|=0.
\end{equation}
Next, for any $f_1,f_2\in C_\infty(F)$, $0<s<t\le T$ and any
sequence $x_n \in V_n$ with $\lim_{n \rightarrow \infty}x_n=x\in F$,
\begin{align*}
&\mathds E_{x_n}^{(n)}\big[f_1(X_{s}^{(n)})f_2(X_{t}^{(n)})\big]=
\mathds E_{x_n}^{(n)}\big[f_1(X_{s}^{(n)})P_{t-s}^{(n)}f_2(X_{s}^{(n)})\big]\\
&=\mathds E_{x_n}^{(n)}\big[f_1(X_{s}^{(n)})P^Y_{t-s}f_2(X_{s}^{(n)})\big]+
\mathds E_{x_n}^{(n)}\big[f_1(X_{s}^{(n)})\big(P_{t-s}^{(n)}f_2(X_{s}^{(n)})-P^Y_{t-s}f_2(X_{s}^{(n)})\big)\big]\\
&=:J_{1,n}+J_{2,n}.
\end{align*}
Set $g(z)=f_1(z)P^Y_{t-s}f_2(z)$. Then $g\in C_\infty(F)$, due to
the $C_\infty$-Feller property of the process $Y$; see \cite[Theorem
1.1]{CK}. Hence, according to \eqref{t3-1-1a}, we have
$$
\lim_{n \rightarrow \infty}J_{1,n}=\lim_{n \rightarrow
\infty}P_s^{(n)}g(x_n)=P_{s}^Yg(x)=
\mathds E_{x}^Y\big[f_1(Y_{s})f_2(Y_{t})\big].
$$
On the other hand, for any $t>0$, $R>2K$ and $n$ large enough,
\begin{align*}
J_{2,n}&\le \|f_1\|_{\infty}\sup_{z \in B_F(0,R)}
\big|E_n(P_{t-s}^{(n)}f_2)(z)-P^Y_{t-s}f_2(z)\big|+\|f_1\|_{\infty}\|f_2\|_{\infty}\mathds P_{x_n}^{(n)}\big(\sup_{s
\in [0,t]}\rho(X_s^{(n)},0)>R\big).
\end{align*}
By \eqref{t3-1-0a} and \eqref{t3-1-3}, letting $n \rightarrow \infty$
and then $R\rightarrow \infty$ in the last inequality yields
$\lim_{n \rightarrow \infty}J_{2,n}=0$.
Combining all above estimates, we prove that
$$
\lim_{n \rightarrow \infty}
\mathds E_{x_n}^{(n)}\big[f_1(X_{s}^{(n)})f_2(X_{t}^{(n)})\big]=\mathds E^Y_{x}\big[f_1(Y_{s})f_2(Y_{t})\big].
$$
Following the same arguments as above and using an induction procedure,
we can obtain from \cite[Chapter 3, Proposition 4.4 and Theorem 7.8(b)]{EK} that any finite
dimensional distribution of $\mathds P_{x_n}^{(n)}$ converges to that of $\mathds P^Y_{x}$. The proof is finished.
\end{proof}
\begin{remark}\label{r4-6} As shown in the proof of Theorem \ref{t3-1} above, the role of adopting the
generalized Mosco convergence is to identify the limit process in the $L^2$ sense. Actually, according to \cite[Theorem 5.1]{CKK}, under Assumption {\bf(Mos.)} alone, any finite
dimensional distribution of $X^{(n)}$ converges to that of $Y$, provided that the initial distribution is absolutely continuous with respect to the reference measure $m$. Thus, Theorem \ref{t3-1} improves this weak convergence to arbitrary initial distributions. We emphasize that such an improvement is highly non-trivial; see \cite{HK} for discussions of the uniformly elliptic case via heat kernel estimates. Here, we only make use of the H\"{o}lder regularity of parabolic functions at large scales (Theorem \ref{T:holder}). This is a much weaker requirement than in the approach of \cite[Proposition 2.8]{CCK}, where
the H\"older regularity of parabolic functions is assumed to hold on the whole space. \end{remark}
\section{Random conductance model: quenched invariance principle}\label{section5}
We will apply results from Section \ref{section3} to study the quenched invariance principle for random conductance models.
\subsection{Quenched invariance principle for stable-like processes on $d$-sets.}
\label{subsu5-1}
Suppose that $(F,\rho,m)$ is a metric measure space satisfying assumption {\bf(MMS)}.
By Lemma \ref{L:ac},
we have a sequence of graphs with measure $\{(V_n,\rho_n, m_n): n\ge 1\}$ that approximate
$(F,\rho,m)$. In this
part, we further assume the following:
\begin{itemize}
\item[(i)]
$\rho(\cdot,\cdot)$ is a metric with dilation; namely,
there exists another distance $\bar \rho$ on $F$ such that
\begin{itemize}
\item [(i$'$)] for all $x,y\in F$,
$
C_1\bar \rho(x,y)\le \rho(x,y) \le C_2\bar\rho(x,y)$ holds for some constants $0<C_1<C_2<\infty$.
\item[(i$''$)] for each $x,y\in F$, $i\in \{-1,1\}$ and $n\in \mathds{N}$, there are
$x^{(n^i)},y^{(n^i)}\in F$ (we write $n^ix:=x^{(n^i)},n^iy:=y^{(n^i)}$ for notational simplicity) such that $\bar \rho(n^ix,n^iy)=n^i \bar \rho(x,y)$.
\end{itemize}
\item[(ii)] There exists $0\in { V_1}\subset F$ such that $n^i0=0$ for all $i\in \{-1,1\}$ and $n\in \mathds{N}$.
\item[(iii)]
$V_n=n^{-1}{V_1}:=\{n^{-1}z: z\in V_1\}$, and
$F$ is the closure of $\bigcup_{n\ge 1}V_n$.
Moreover, $nV_1\subset V_1$ and
$\mu_n(A)=\mu_1(nA)$ for all $A\subset V_n$
and $n\in \mathds{N}$,
where $\mu_n$ denotes the counting measure on $V_n$.
\end{itemize}
We note that, due to \eqref{a3-3-3}, for any $n\in \mathds{N}$, there exists a measurable function $K_n$ on $V_n$ such that
$m_n=n^{-d}K_n \,\mu_n$ and
\begin{equation}\label{e:kkk1}0<C_3\le K_n\le C_4<\infty,\end{equation}
where $C_3,C_4$ are constants independent of $n$.
\begin{remark}\label{r5-1}
Obviously, conditions (i$'$) and (i$''$) in assumption (i) above hold true for a bounded Lipschitz domain $F \subset \mathds R^d$.
For simplicity, in the arguments below we assume that
$\rho(n^ix,n^iy)=n^i \rho(x,y)$ for all $n\in \mathds{N}$; otherwise,
we can express Dirichlet forms
$(D_{V_n}^{\omega},\mathscr{F}_{n}^{\omega})$ and $(D_{0},\mathscr{F}_{0})$ below with $\rho$,
$w_{x,y}^{(n)}(\omega)$ and $c(x,y)$ replaced by
$\bar \rho$,
$\bar w_{x,y}^{(n)}(\omega):=\frac{\bar\rho(x,y)^{d+\alpha}}{\rho(x,y)^{d+\alpha}}
w_{x,y}^{(n)}(\omega)$
and
$\bar c(x,y):=\frac{\bar\rho(x,y)^{d+\alpha}}{\rho(x,y)^{d+\alpha}}c(x,y)$, respectively.
Hence, by applying the arguments below for $\bar \rho$, $\bar w_{x,y}^{(n)}(\omega)$ and $\bar c(x,y)$,
we can still
obtain the quenched invariance principle for $(X_t^{\omega})_{t\ge 0}$.
\end{remark}
Let
$\{w_{x,y}(\omega): x,y \in V_1\}$ be a family of random variables
defined on
a probability space $\big(\Omega,\mathscr{F},\mathds P\big)$ such that
$w_{x,y}(\omega)=w_{y,x}(\omega)$ and
$w_{x,y}(\omega)\ge 0$
for all $x\neq y\in V_1$.
For any $x\in V_n$, $m_n(x):=m_n(\{x\})=n^{-d}K_n(x).$
Define
\begin{equation}\label{kkk2}w_{x,y}^{(n)}(\omega)=\frac{K_1(nx)K_1(ny)}{K_n(x)K_n(y)}w_{nx,ny}(\omega).\end{equation}
We consider the following class of Dirichlet forms
\begin{align*}
D_{V_n}^{\omega}(f,f)&=\frac{1}{2
}\sum_{x,y\in V_n}(f(x)-f(y))^2\frac{w_{x,y}^{(n)}(\omega)}{\rho(x,y)^{d+\alpha}}
m_n(x)m_n(y),\quad f\in \mathscr{F}_{n}^{\omega},\\
\mathscr{F}_{n}^{\omega}&=\{f\in L^2(V_n;m_n): D_{V_n}^{\omega}(f,f)<\infty\}.
\end{align*}
Let $X^{V_1,\omega}$ be the strong Markov process on $V_1$ associated with $(D_{V_1}^{\omega},\mathscr{F}_{1}^{\omega})$.
Then, it is easy to show that for a.s.\ $\omega\in \Omega$, $(D_{V_n}^{\omega},\mathscr{F}_{n}^{\omega})$ generates a Markov process $X^{(n),\omega}=(X_t^{(n),\omega})_{t\ge0}$ such that $X_t^{(n),\omega}={n}^{-1}X_{n^{\alpha}t}^{V_1,\omega}$ for all $t\ge0$.
Here and in what follows, $=$ means that the two processes have the
same distribution.
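The scaling identity above can be checked, at least heuristically, from the normalization \eqref{kkk2}. Under the simplifying assumption (as in Remark \ref{r5-1}) that $\rho(nx,ny)=n\rho(x,y)$, for any $x,y\in V_n$,
\begin{equation*}
\frac{w_{x,y}^{(n)}(\omega)}{\rho(x,y)^{d+\alpha}}\,m_n(x)m_n(y)
=n^{-2d}\,\frac{K_1(nx)K_1(ny)\,w_{nx,ny}(\omega)}{\rho(x,y)^{d+\alpha}}
=n^{\alpha-d}\,\frac{w_{nx,ny}(\omega)}{\rho(nx,ny)^{d+\alpha}}\,m_1(nx)m_1(ny),
\end{equation*}
so $D_{V_n}^{\omega}(f,f)=n^{\alpha-d}D_{V_1}^{\omega}(f(n^{-1}\cdot),f(n^{-1}\cdot))$, while, by \eqref{e:kkk1}, $\|f\|^2_{L^2(V_n;m_n)}$ is comparable to $n^{-d}\|f(n^{-1}\cdot)\|^2_{L^2(V_1;m_1)}$; comparing the two scalings explains the factor $n^{\alpha}$ in the time change $X_t^{(n),\omega}={n}^{-1}X_{n^{\alpha}t}^{V_1,\omega}$.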
Now, consider the Dirichlet
form $(D_0,\mathscr{F}_0)$ given by \eqref{e4-1}, i.e.,
\begin{equation*}
\begin{split}
D_0(f,f)&=\frac{1}{2}\int_{\{F\times F\setminus {\rm diag}\}}\big(f(x)-f(y)\big)^2\frac{c(x,y)}{\rho(x,y)^{d+\alpha}}\,m(dx)\,m(dy),\quad f \in \mathscr{F}_0,\\
\mathscr{F}_0&=\{f\in L^2(F;m): D_0(f,f)<\infty\},
\end{split}
\end{equation*}
where $\alpha\in(0,2)$, ${\rm diag}:=\{(x,y)\in F\times F:x=y\}$, and $c:F\times
F\rightarrow (0,\infty)$ is a
symmetric
continuous function
such that $0<c_1\le c(x,y)\le c_2<\infty$ for all
$(x,y) \in F\times F\setminus{\rm diag}$ and some constants
$c_1,c_2$. We suppose that assumption {\bf(Dir.)} holds. Let $Y:=((Y_t)_{t\ge0}, (\mathds P_{x}^Y)_{x\in F})$ be an
$\alpha$-stable-like process on $F$.
We next apply Theorem \ref{t3-1} to prove the quenched invariance principle for $(X^{\omega}_t)_{t\ge 0}$ under some assumptions on $w_{x,y}$.
We first assume that the following holds.
\paragraph{{\bf Assumption (Den.)}} \begin{itemize}\it
\item[(i)] $\mathds E[w_{x,y}]=J_1(x,y)$ and
$\mathds E[w_{x,y}^{-1}\mathds 1_{\{w_{x,y}>0\}}]=J_2(x,y)$ for any $x,y\in V_1$, where $0<C_1<J_i(x,y)<C_2<\infty$ for all $i=1,2$ and $x,y\in V_1$.
\item[(ii)] For every compact set
$S\subseteq F$,
\begin{equation}\label{p3-2-0}
\lim_{n \rightarrow \infty}\Big[\sup_{x,y\in S}\Big|J_1(n[x]_n,n[y]_n)\cdot
\frac{K_1(n[x]_n)K_1(n[y]_n)}{K_n([x]_n)K_n([y]_n)}-c(x,y)\Big|\Big]=0,
\end{equation} where $[x]_n\in V_n$ is the element such that $x\in U_n([x]_n)$.
\end{itemize}
\begin{remark}
Obviously, when $F=\mathds R^d$, it follows from \eqref{p3-2-0} that for any $x\neq y\in \mathds R^d$ and $s\neq0$,
$c(x,y)=c(sx,sy),$
which, along with the fact that $K_n\equiv 1$ for all $n\in \mathds{N}$ as mentioned in Remark \ref{r5-1},
implies that the limit process $(Y_t)_{t\ge 0}$ satisfies the following scaling invariance property:
$$ \mathds P^Y_{\varepsilon^{-1}x}\left(\left(\varepsilon Y_{t\varepsilon^{-\alpha}}\right)_{t\ge 0}\in A\right)=
\mathds P^Y_x\left(\left(Y_{t}\right)_{t\ge 0}\in A\right)$$ for any $x\in \mathds R^d,$ $\varepsilon>0$ and $ A\subset \mathscr{D}([0,\infty);\mathds R^d)$.
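Indeed, here is a short heuristic verification: with $K_n\equiv 1$ and $m$ the Lebesgue measure, for $f_{\varepsilon}(x):=f(\varepsilon x)$ the change of variables $u=\varepsilon x$, $v=\varepsilon y$ together with $c(x,y)=c(\varepsilon x,\varepsilon y)$ gives
\begin{equation*}
D_0(f_{\varepsilon},f_{\varepsilon})
=\frac{1}{2}\iint \big(f(u)-f(v)\big)^2\frac{c(u,v)}{|u-v|^{d+\alpha}}\,\varepsilon^{\alpha-d}\,du\,dv
=\varepsilon^{\alpha-d}D_0(f,f),
\end{equation*}
while $\|f_{\varepsilon}\|_{L^2}^2=\varepsilon^{-d}\|f\|_{L^2}^2$; hence the generator of $Y$ scales by $\varepsilon^{\alpha}$, which corresponds exactly to the space-time scaling $\big(\varepsilon Y_{t\varepsilon^{-\alpha}}\big)_{t\ge0}$.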
\end{remark}
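For the reader's convenience, here is a sketch of the change-of-variables computation behind the scaling invariance (under the assumptions of the remark, namely $F=\mathds R^d$ with the Euclidean distance, the Lebesgue measure and $K_n\equiv 1$; this is an illustration, not part of the formal argument):

```latex
% With f_\varepsilon(x):=f(\varepsilon x), substitute u=\varepsilon x, v=\varepsilon y:
% dx\,dy=\varepsilon^{-2d}\,du\,dv and |x-y|^{-(d+\alpha)}=\varepsilon^{d+\alpha}|u-v|^{-(d+\alpha)},
\begin{align*}
D_0(f_\varepsilon,f_\varepsilon)
&=\frac{1}{2}\iint\big(f(\varepsilon x)-f(\varepsilon y)\big)^2
   \frac{c(x,y)}{|x-y|^{d+\alpha}}\,dx\,dy\\
&=\frac{\varepsilon^{\alpha-d}}{2}\iint\big(f(u)-f(v)\big)^2
   \frac{c(\varepsilon^{-1}u,\varepsilon^{-1}v)}{|u-v|^{d+\alpha}}\,du\,dv
 =\varepsilon^{\alpha-d}\,D_0(f,f),
\end{align*}
% where the last equality uses c(\varepsilon^{-1}u,\varepsilon^{-1}v)=c(u,v).
% Since also \|f_\varepsilon\|_{L^2}^2=\varepsilon^{-d}\|f\|_{L^2}^2, the generator
% scales as (Lf_\varepsilon)(x)=\varepsilon^{\alpha}(Lf)(\varepsilon x), which yields
% the identity in law above with time scaled by \varepsilon^{-\alpha}.
```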
For $\varepsilon>0$, $x\in V_1$, $R,r>0$,
$c_0>1/2$,
$c_0^*\ge2$ and
a sequence of bounded functions $\{h_n\}_{n\ge 1}$ on $V_1\times V_1$, define
\begin{align*}
p_1(r,R,\varepsilon)&=\mathds P\Big(\Big|\sum_{x,y \in V_1: \rho(0,x)\le R, \rho(x,y)\le r}(w_{x,y}-J_1(x,y))\Big|>\varepsilon r^{d}R^d\Big),\\
p_2(x,r,\varepsilon)&=\mathds P\Big(\Big|\sum_{y \in V_1:\rho(x,y)\le r}\big(w_{x,y}-J_1(x,y)\big)\Big|>\varepsilon r^{d}\Big),\\
p_3(x,r,\varepsilon)&=\mathds P\Big(\Big|
\sum_{y\in V_1: \rho(x,y)\le r}\frac{(w_{x,y}-J_1(x,y))}{\rho(x,y)^{d+\alpha-2}}\Big|>\varepsilon r^{2-\alpha}\Big),\\
p_3^*(x,r,\varepsilon)&=\mathds P\Big(\Big|
\sum_{y\in V_1: \rho(x,y)\le r}\frac{(w_{x,y}-J_1(x,y))}{\rho(x,y)^{d+\alpha-1}}\Big|>\varepsilon r^{1-\alpha}\Big),\quad \alpha\in(0,1),\\
p_4(x,r,c_0^*,\varepsilon)&=\mathds P\Big(\Big|\sum_{y\in V_1: \rho(x,y)\le
c_0^*r}\big(w_{x,y}^{-1}-J_2(x,y)\big)\Big|
>\varepsilon r^d\Big),\\
p_5^{(n)}(x,R,r,h,\varepsilon)&=\mathds P\Big(\Big|\sum_{{y \in B_F(0,nR)\cap V_1:}
\atop{\rho(x,y)\ge nr}}h_n(x,y)\frac{(w_{x,y}-J_1(x,y))}{\rho(x,y)^{d+\alpha}}\Big|^2>\varepsilon (nr)^{-2\alpha}\Big),\\
p_6(x,z,r,c_0)&=\mathds P\left(\frac{\mu_1\{y\in V_1: \rho(y,x)\le r, w_{y,z}>0\}}{\mu_1\{y\in V_1: \rho(y,x)\le r\}}\le C_4c_0C_3^{-1}\right),
\end{align*} where $C_3\le C_4$ are positive constants in \eqref{e:kkk1}.
\begin{theorem}\label{t5-1}
Suppose that assumption {\bf(Den.)}\! holds, and that there exists a constant $\theta \in (0,1)$
such that
\begin{itemize}
\item[(i)]
for any $\varepsilon_0$ and $\varepsilon$ small enough, any $N$ large enough, and any sequence of
bounded functions $\{h_n\}_{n\ge 1}$ on $V_1\times V_1$ with $\sup_{n\ge 1}\|h_n\|_{\infty}<\infty$,
\begin{equation}\label{p3-2-1}
\sum_{R=1}^{\infty}\sum_{r=1}^{R}p_1(r,R,\varepsilon_0)<\infty,
\end{equation}
\begin{equation}\label{p3-2-1a}
\sum_{R=1}^{\infty}\sum_{x \in
B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{\infty}p_2(x,r,\varepsilon_0)<\infty,
\end{equation} and
\begin{equation}\label{p3-2-2}
\sum_{n=1}^{\infty}\sum_{x\in B_F(0,nN)\cap V_1}p_5^{(n)}(x,N,\varepsilon,h_n,\varepsilon_0)<\infty.
\end{equation}
\item[(ii)] for any $\varepsilon_0$ small enough, \begin{equation}\label{l3-1-1}
\sum_{R=1}^{\infty}\sum_{x \in
B_F(0,6R)\cap V_1}p_3(x,{R^{\theta}},\varepsilon_0)<\infty
\end{equation} and
\begin{equation}\label{l3-1-2}
\sum_{R=1}^{\infty}\sum_{x \in
B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{2R}
p_4(x,r,c_0^*,\varepsilon_0)<\infty,
\end{equation} for any fixed $c_0^*\ge2$, as well as
\begin{equation}\label{l3-1-2-a}
\sum_{R=1}^{\infty}\sum_{x,z\in
B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{2R}
p_6(x,z,r,c_0)<\infty
\end{equation} for some fixed $c_0>1/2$.
When $\alpha\in (0,1)$, \eqref{l3-1-1} can be replaced by
\begin{equation}\label{l3-1-1-1}
\sum_{R=1}^{\infty}\sum_{x \in
B_F(0,6R)\cap V_1}p^*_3(x,R^{\theta},\varepsilon_0)<\infty.
\end{equation} \end{itemize}
Then for
$\mathds P$-a.s.\ $\omega\in \Omega$ and
any $\{x_n\in V_n: n\ge 1\}$
such that $\lim_{n \rightarrow \infty}x_n=x$ with some $x \in F$,
it holds that for every $T>0$, $\mathds P^{(n),\omega}_{x_n}$ converges weakly to
$\mathds P_{x}^Y$ in the space of probability measures on
$\mathscr{D}([0,T];F)$, where $\mathds P^{(n),\omega}_{x_n}$ denotes the distribution of the
process $X_t^{(n),\omega}=n^{-1}X^{V_1,\omega}_{n^{\alpha}t}$.
\end{theorem}
Theorem \ref{t5-1} follows immediately by applying Theorem \ref{t3-1} and Lemmas \ref{p5-2} and \ref{l5-1-0} below
to the process $X_t^{(n),\omega}$. From now on, for simplicity we will assume that $K_n\equiv 1$ for all $n\in \mathds{N}$
(in particular, $C_3=C_4=1$ in \eqref{e:kkk1}), since the proof directly works for the general case
with some mild changes due to the facts that
$w_{x,y}^{(n)}(\omega)=\frac{K_1(nx)K_1(ny)}{K_n(x)K_n(y)}w_{nx,ny}(\omega)$ and $C^{-1}\le
\frac{K_1(nx)K_1(ny)}{K_n(x)K_n(y)}\le C$ for all $x,y\in V_n$ and $n\in \mathds{N}$ with some constant $C\ge1$ independent of $x,y,n$.
\begin{lemma}\label{p5-2} Under assumption ${\rm(i)}$ in Theorem $\ref{t5-1}$, for $\mathds P$-a.s.\ $\omega\in \Omega$,
Assumption {\bf(Mos.)} holds for the conductance $\{w_{x,y}^{(n)}(\omega)\}$.
\end{lemma}
\begin{proof}
Under \eqref{p3-2-1}, for any $\varepsilon_0>0$,
\begin{align*}
&\sum_{R=1}^{\infty} \mathds P\Big(\bigcup_{r=1}^{R}
\Big\{\Big|\sum_{x,y\in V_1: \rho(0,x)\le R,\rho(x,y)\le r}\big(w_{x,y}-J_1(x,y)\big)\Big|>\varepsilon_0 r^{d}R^d\Big\}\Big)\\
&\le \sum_{R=1}^{\infty}\sum_{r=1}^{R}\mathds P\Big(
\Big|\sum_{x,y\in V_1: \rho(0,x)\le R,\rho(x,y)\le r}\big(w_{x,y}-J_1(x,y)\big)\Big|>\varepsilon_0 r^{d}R^d\Big)=\sum_{R=1}^{\infty}\sum_{r=1}^{R}p_1(r,R,\varepsilon_0)<\infty.
\end{align*}
Since $C_1\le J_1(x,y)\le C_2$ for all $x,y \in V_1$ and some
positive constants $C_1$ and $C_2$, by the Borel-Cantelli lemma, we
know that, for $\mathds P$-a.s.\ $\omega \in \Omega$, there exists a constant
$R_0(\omega)\ge1$ such that for every $R>R_0(\omega)$,
$$
c_1r^{d}R^d\le \sum_{x,y\in V_1:\rho(0,x)\le R, \rho(x,y)\le r}w_{x,y}(\omega)\le
c_2r^{d}R^d,\quad \forall\
1\le r\le R,
$$
where $c_1,c_2$ are positive constants independent of $\omega$. Then,
for any $0<2\eta<N$ and $nN>R_0(\omega)$, we have
\begin{align*}
&n^{-2d}\sum_{x,y\in B_F(0,N)\cap V_n: 0<\rho(x,y)\le \eta}\frac{w_{nx,ny}(\omega)}{\rho(x,y)^{d+\alpha-2}}\\
&\le n^{-d+\alpha-2}\sum_{k=0}^{[\log(n\eta)/\log 2]+1}
\sum_{x,y\in V_1: \rho(0,x)\le nN \text{ and }2^k\le \rho(x,y)<2^{k+1}}\frac{w_{x,y}(\omega)}{\rho(x,y)^{d+\alpha-2}}\\
&\le n^{-d+\alpha-2}\sum_{k=0}^{[\log(n\eta)/\log 2]+1}2^{-k(d+\alpha-2)}\sum_{x,y\in V_1: \rho(0,x)\le nN \text{ and }2^k\le \rho(x,y)<2^{k+1}}
w_{x,y}(\omega)\\
&\le c_3n^{-d+\alpha-2}\sum_{k=0}^{[\log(n\eta)/\log 2]+1}2^{-k(d+\alpha-2)}2^{(k+1)d}(nN)^d\le
c_4N^d\eta^{2-\alpha}.
\end{align*}
This yields that \eqref{p3-1-1} holds for $\mathds P$-a.s.\ $\omega\in \Omega$.
According to \eqref{p3-2-1a}, for every $\varepsilon_0>0$
small enough,
\begin{align*}
&\sum_{R=1}^{\infty}\mathds P\Big(\bigcup_{x\in B_F(0,6R)\cap V_1}\bigcup_{r=R^{\theta}/2}^{\infty}\Big\{
\Big|\sum_{y\in V_1: \rho(x,y)\le r}\big(w_{x,y}-J_1(x,y)\big)\Big|>\varepsilon_0 r^d\Big\}\Big)\\
&\le \sum_{R=1}^{\infty}\sum_{x \in B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{\infty}
\mathds P\Big(\Big\{
\Big|\sum_{y\in V_1: \rho(x,y)\le r}\big(w_{x,y}-J_1(x,y)\big)\Big|>\varepsilon_0 r^d\Big\}\Big)\\
&\le \sum_{R=1}^{\infty}\sum_{x \in B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{\infty}p_2(x,r,\varepsilon_0)<\infty.
\end{align*}
Hence, by the Borel-Cantelli lemma, we can find
a constant $R_{1}(\omega)>0$ such that for every $R>R_{1}(\omega)$,
$x\in B_F(0,6R)$ and $r\ge R^{\theta}/2$,
$
\Big|\sum_{y\in V_1:\rho(x,y)\le r}(w_{x,y}-J_1(x,y))\Big| \le
\varepsilon_0 r^{d}.
$
Using again the fact that $0<C_1\le J_1(x,y)\le
C_2<\infty$ for any $x,y\in V_1$, we conclude that for all $R>R_1(\omega)$,
\begin{equation}\label{p3-2-3}
c_5r^{d}\le \sum_{y\in V_1:\rho(x,y)\le r} w_{x,y} \le c_6 r^{d},\quad
\forall\
x\in B_F(0,6R),\ r\ge R^{\theta}/2.
\end{equation}
Therefore, by \eqref{p3-2-3}, for every $n,j\ge1$ large enough such that $2nN>R_1(\omega)$ and $j>N$,
\begin{align*}
&n^{-2d}\sum_{x,y\in B_F(0,N)\cap V_n: \rho(x,y)\ge j}\frac{w_{nx,ny}(\omega)}{\rho(x,y)^{d+\alpha}}\\
&\le n^{-d+\alpha}\sum_{x \in V_1:\rho(0,x)\le nN}
\sum_{y \in V_1: \rho(x,y)\ge nj}\frac{w_{x,y}(\omega)}{\rho(x,y)^{d+\alpha}}\\
&\le n^{-d+\alpha}\sum_{x \in V_1: \rho(0,x)\le nN}\sum_{k=\big[\frac{\log (nj)}{\log 2}\big]}^{\infty}
2^{-k(d+\alpha)}\sum_{y\in V_1: \rho(x,y)\le 2^{k+1}}w_{x,y}(\omega)\\
&\le c_7n^{-d+\alpha}\sum_{x \in V_1: \rho(0,x)\le nN}
\sum_{k=\big[\frac{\log (nj)}{\log 2}\big]}^{\infty}2^{-k(d+\alpha)}2^{(k+1)d}
\le c_8N^dj^{-\alpha}.
\end{align*}
Hence, letting $n \rightarrow \infty$ first and then $j \rightarrow
\infty$, we prove that \eqref{p3-1-1a} holds for $\mathds P$-a.s. $\omega\in
\Omega$.
Given $f \in {\rm Lip}_c(F)$, let
\begin{equation*}
h_n(x,y):=
\begin{cases}
f(n^{-1}{y})-f(n^{-1}{x}),&n^{-1}{x},n^{-1}{y}\in V_n,\\
0, &\text{otherwise}.
\end{cases}
\end{equation*}
Applying \eqref{p3-2-2} to
$h_n(x,y)$ and using the Borel-Cantelli lemma, we know that for any $\varepsilon$ and $\varepsilon_0$ small enough, and $N$ large enough, there
exists a constant $n_{0}(\omega)>0$ (which may depend on
$\varepsilon_0$, $\varepsilon$, $N$ and $f$) such that for every $n>n_{0}(\omega)$ and $x\in {B_F(0,nN)}$,
$$
\Big|\sum_{y\in B_F(0,nN)\cap V_1: \rho(x,y)\ge n\varepsilon}\big(
f(n^{-1}{y})-f(n^{-1}{x})\big)\frac{(w_{x,y}(\omega)-J_1(x,y))}{\rho(x,y)^{d+\alpha}}\Big|^2 \le
\varepsilon_0(n\varepsilon)^{-2\alpha}.
$$
Then, for $n$ large enough such that $n\varepsilon>(nN)^{\theta}$, we have
\begin{align*}
&n^{-d}
\sum_{x\in B_F(0,N)\cap V_n}\left(\sum_{{y \in B_F(0,N)\cap V_n:}\atop {\rho(x,y)>\varepsilon}}
\big(f(x)-f(y)\big)\frac{\big(w_{nx,ny}(\omega)-J_1(nx,ny)\big)}{\rho(x,y)^{d+\alpha}}m_n(y)\right)^2\\
&=n^{-d+2\alpha}
\sum_{x\in B_F(0,nN)\cap V_1}\left(\sum_{y \in B_F(0,nN)\cap V_1 : \rho(x,y)>n\varepsilon}
h_n(x,y)\frac{\big(w_{x,y}(\omega)-J_1(x,y)\big)}{\rho(x,y)^{d+\alpha}}\right)^2\\
&\le n^{-d+2\alpha}
\sum_{x \in B_F(0,nN)\cap V_1}\varepsilon_0(n\varepsilon)^{-2\alpha}\le
c_9N^d\varepsilon^{-2\alpha} \varepsilon_0.
\end{align*}
On the other hand, due to \eqref{p3-2-0}, we can verify that for every
fixed $N>0$ and $\varepsilon>0$,
\begin{align*}
&\lim_{n \rightarrow \infty}
n^{-d}\sum_{x\in B_F(0,N)\cap V_n}\bigg(\sum_{{y \in B_F(0,N)\cap V_n:}\atop {\rho(x,y)>\varepsilon}}
\big(f(x)-f(y)\big)\frac{\big(J_1(nx,ny)-c(x,y)\big)}{\rho(x,y)^{d+\alpha}}m_n(y)\bigg)^2\\
&\le 4\|f\|_{\infty}^2\varepsilon^{-2(d+\alpha)}\lim_{n \rightarrow \infty}
n^{-3d}
\sum_{x\in B_F(0,N)\cap V_n}\Big(\sum_{y \in B_F(0,N)\cap V_n: \rho(x,y)>\varepsilon}\big|J_1(nx,ny)-c(x,y)\big|\Big)^2\\
&\le c_{10}\|f\|_{\infty}^2\varepsilon^{-2(d+\alpha)}N^{d}\lim_{n \rightarrow \infty}\bigg\{
n^{-2d}\sum_{x,y\in B_F(0,N)\cap V_n}\big(J_1(nx,ny)-c(x,y)\big)^2\bigg\}=0.
\end{align*}
Combining the two estimates above, we can obtain that \eqref{p3-1-2}
holds for $\mathds P$-a.s.\ $\omega\in \Omega$ by first letting $n \rightarrow
\infty$ and then taking $\varepsilon_0\rightarrow 0$.
Since \eqref{p3-1-3} can be proved in a similar way, we omit the details here.
\end{proof}
\begin{lemma}\label{l5-1-0} Suppose that condition \eqref{p3-2-1a} and assumption ${\rm (ii)}$ in Theorem $\ref{t5-1}$ hold. Then for $\mathds P$-a.s. $\omega\in \Omega$, Assumption {\bf(Wea.)} holds for the conductance $\{w_{x,y}^{(n)}(\omega)\}$.
\end{lemma}
\begin{proof}
First, according to \eqref{l3-1-2-a}, the property $\mu_n(A)=\mu_1(nA)$ and the definitions of $m_n$ and $w^{(n)}_{x,y}$, we can easily deduce from the Borel-Cantelli lemma
that there is a constant $R_0(\omega)>0$ such that for
any $R>R_0(\omega)$ and $R^\theta/2\le r \le R$, \eqref{a4-3-1a} holds.
By \eqref{l3-1-1},
\begin{align*}
&\sum_{R=1}^{\infty}\mathds P\Bigg(\bigcup_{x \in B_F(0,6R)\cap V_1}\Big\{
\Big|\sum_{y\in V_1: \rho(x,y)\le
R^{\theta}}\frac{\big(w_{x,y}-J_1(x,y)\big)}{\rho(x,y)^{d+\alpha-2}}\Big|>\varepsilon_0
R^{\theta(2-\alpha)}\Big\}\Bigg)\\
&\le \sum_{R=1}^{\infty}\sum_{x \in B_F(0,6R)\cap V_1}
\mathds P\Big(\Big|
\sum_{y\in V_1: \rho(x,y)\le R^{\theta}}
\frac{\big(w_{x,y}-J_1(x,y)\big)}{\rho(x,y)^{d+\alpha-2}}\Big|>\varepsilon_0 R^{\theta(2-\alpha)}\Big)\\
&=\sum_{R=1}^{\infty}\sum_{x \in
B_F(0,6R)\cap V_1} p_3(x,R^{\theta},\varepsilon_0)<\infty.
\end{align*}
Hence, by the Borel-Cantelli lemma, there exists a constant $R_0(\omega)>0$ such that for
any $R>R_0(\omega)$,
\begin{equation}\label{l3-1-4}
\sum_{y\in V_1:\rho(x,y)\le R^{\theta}}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}\le
c_1R^{\theta(2-\alpha)},\quad \forall\ x\in B_F(0,6R)\cap V_1.
\end{equation}
Furthermore, using \eqref{p3-2-3} and choosing $\varepsilon_0$ small
enough and $R_0(\omega)$ large enough, we find that for every $R>R_0(\omega)$,
\begin{equation}\label{l3-1-4a}
c_2^{-1}r^{d}\le \sum_{y \in V_1: \rho(x,y)\le r}w_{x,y}\le
c_2r^{d},\quad \forall\ r>R^{\theta}/2,\ x\in B_F(0,6R)\cap V_1.
\end{equation}
Combining this with \eqref{l3-1-4}, we see that for every
$R>R_0(\omega)$, $x \in B_F(0,6C_2R/n)\cap V_n$ and $R^{\theta}/2\le r \le 2R$,
\begin{align*}
&n^{-(d+\alpha-2)}\sum_{y\in V_n:\rho(x,y)\le C_2r/n}\frac{w^{(n)}_{x,y}}{\rho(x,y)^{d+\alpha-2}}\\
&\le \sum_{y\in V_1: \rho(x,y)< R^{\theta}/2}\frac{w_{x,y}}{\rho(x,y)^{d+\alpha-2}}+
\sum_{k=[\log (R^{\theta}/2)/\log 2]}^{[\log (C_2r)/\log 2]+1} 2^{-k(d+\alpha-2)}\Big(\sum_{y\in V_1: 2^k<\rho(x,y)\le 2^{k+1}}
w_{x,y}\Big)\\
&\le c_4 \Big(R^{\theta(2-\alpha)}+
\sum_{k=[\log (R^{\theta}/2)/\log 2]}^{[\log (C_2r)/\log 2]+1}2^{-k(\alpha-2)}\Big)\le c_5 r^{2-\alpha}.
\end{align*}
Therefore, \eqref{a4-3-1} holds for $\mathds P$-a.s.\ $\omega\in \Omega$.
Due to \eqref{l3-1-4a} again, we know that for every $R>R_0(\omega)$, $x \in B_F(0,6C_2R/n)\cap V_n$ and $r>R^{\theta}/2$,
\begin{align*}
n^{-(d+\alpha)}\sum_{y\in V_n: \rho(x,y)>C_1r/n}\frac{w^{(n)}_{x,y}}{\rho(x,y)^{d+\alpha}}&\le
\sum_{k=[\log (C_1r)/\log 2]}^{\infty}2^{-k(d+\alpha)}
\Big(\sum_{y \in V_1: 2^k<\rho(x,y)\le 2^{k+1}}w_{x,y}\Big)\\
&\le c_6\sum_{k=[\log (C_1r)/\log 2]}^{\infty}2^{-k(d+\alpha)}2^{d(k+1)}\le c_7 r^{-\alpha},
\end{align*}
which implies that \eqref{a4-3-3} is satisfied for $\mathds P$-a.s.\ $\omega\in \Omega$.
Following the arguments above, and using \eqref{l3-1-2} and the
Borel-Cantelli lemma, we can obtain that \eqref{a4-3-2} holds for
$\mathds P$-a.s.\ $\omega\in \Omega$. On the other hand, when $\alpha\in (0,1)$, we can use \eqref{l3-1-1-1} to prove that \eqref{e5-1} holds for
$\mathds P$-a.s.\ $\omega\in \Omega$.
The proof is complete.
\end{proof}
\subsection{Examples}
As an application of Theorem \ref{t5-1}, we consider three classes of examples:
a lattice on a half/quarter space (together with its extension to bounded Lipschitz domains), a time-change of stable-like processes, and a fractal graph.
\subsubsection{Lattice on a half/quarter space}\label{laths}
Let $F:=\mathbb{R}^{d_1}_+\times\mathbb{R}^{d_2}$ with $d_1,d_2\in \mathbb{N}\cup\{0\}$,
and $\rho$ and $m$ be the Euclidean distance and the Lebesgue measure respectively, which clearly satisfy assumption {\bf(MMS)}.
Then the process $Y$ associated with the Dirichlet form $(D_0,\mathscr{F}_0)$ is a reflected stable-like process on $F$; see e.g.\ \cite{CK}.
Obviously $(D_0,\mathscr{F}_0)$ satisfies assumption {\bf(Dir.)}.
Here we will take
$
V_1=\mathbb{L}:=\mathbb{Z}^{d_1}_+\times\mathbb{Z}^{d_2},
$ and $K_n\equiv 1$ for all $n\in \mathds{N}$.
Note that the scaling limit of $n^{-1}\mathbb{L}$ is $F$.
Let $\{w_{x,y}: x,y\in \mathbb{L}\}$ be a sequence of non-negative independent random
variables, and $(X_t^{\omega})_{t\ge 0}$ be the Markov process with infinitesimal generator
$L_{\mathbb{L}}^{\omega}$ defined by \eqref{eq:geneoe}.
Obviously $(X_t^{\omega})_{t\ge 0}$ is the symmetric Hunt process associated with the Dirichlet form
$(D_{V_1}^{\omega},\mathscr{F}^{\omega}_1)$ with $V_1=\mathbb{L}$ and $w^{(1)}_{x,y}(\omega)=w_{x,y}(\omega)$.
\begin{proposition}\label{ex3-1} Let $d:=d_1+d_2>4-2\alpha$.
Suppose that $\{w_{x,y}:x,y\in \mathbb{L}\}$ is a sequence of non-negative independent
random variables satisfying that
\begin{equation}\label{ex3-1-1a}
\sup_{x,y \in \mathbb{L},x\neq y}\mathds P\big(w_{x,y}=0\big)<2^{-4}
\end{equation}
and
\begin{equation}\label{ex3-1-1}
\sup_{x,y\in \mathbb{L}}\mathds E[w_{x,y}^{2p}]<\infty~~~\mbox{and}~~~\sup_{x,y\in
\mathbb{L}}\mathds E[w_{x,y}^{-2q}\mathds 1_{\{w_{x,y}>0\}}]<\infty
\end{equation}
for $p,q\in \mathds Z_{+}$ with
$p>\max\big\{(d+2)/d,\,(d+1)/(2(2-\alpha))\big\}$ and $q>(d+2)/d.$ If moreover
\eqref{p3-2-0} holds true, then the quenched invariance principle
holds for
$X^{\omega}_{\cdot}$ with the limit process $Y$. Moreover, when $\alpha\in (0,1)$, the conclusion still holds true for $d>2-2\alpha$, if $p>\max\big\{{(d+1)}/(2(1-\alpha)), {(d+2)}/{d}\big\}$ and $q>{(d+2)}/{d}.$
\end{proposition}
\begin{proof}
According to Theorem \ref{t5-1}, it suffices to verify
\eqref{p3-2-1}--\eqref{l3-1-1-1}.
We first verify \eqref{l3-1-2-a}. Recall that in the present setting $K_n\equiv 1$ for all $n\in \mathds{N}$, and so $C_3=C_4=1$. Set $p_0:=\sup_{x,y \in \mathbb{L},x\neq y}\mathds P\big(w_{x,y}=0\big)<2^{-4}$ and
$L(x,r):=|\{y\in \mathbb{L}: |y-x|\le r\}|$.
Then, for every $r>0$ and $x,z\in \mathbb{L}$,
\begin{align*}
\mathds P\Big(\sum_{y\in \mathbb{L}:|y-x|\le r}
\mathds 1_{\{w_{z,y}\neq 0\}}\le \frac{3}{4}L(x,r)\Big)
&\le \sum_{k=0}^{[\frac{3}{4}L(x,r)]+1}\begin{pmatrix}L(x,r) \\ k\end{pmatrix}p_0^{L(x,r)-k}\\
&\le 2^{L(x,r)}p_0^{\left[\frac{1}{4}L(x,r)\right]-1}\left(\left[\frac{3}{4}L(x,r)\right]+1\right)\le c_02^{-c_0'r^d},
\end{align*}
where in the second inequality we used the fact that $\begin{pmatrix}L(x,r) \\ k\end{pmatrix}\le 2^{L(x,r)}$ for all
$0\le k \le L(x,r)$, and the last inequality follows from $p_0<2^{-4}$ and $L(x,r)\asymp r^d$. The estimate above yields that
$
\sum_{R=1}^{\infty}\sum_{x,z\in
B_F(0,6R)\cap V_1}\sum_{r=R^{\theta}/2}^{2R}
p_6(x,z,r,3/4)<\infty.$ That is, \eqref{l3-1-2-a} holds with $c_0=3/4.$
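As a quick sanity check on the combinatorial estimate above (purely illustrative, not part of the proof; the value $p_0=2^{-5}$ below is a hypothetical choice satisfying $p_0<2^{-4}$), one can verify a slightly generous version of the displayed chain of inequalities with exact rational arithmetic, together with the geometric decay of the resulting bound:

```python
from fractions import Fraction
from math import comb

def tail(L, p0):
    # exact value of sum_{k=0}^{[3L/4]+1} C(L,k) * p0^(L-k)
    return sum(comb(L, k) * p0 ** (L - k) for k in range(3 * L // 4 + 2))

def bound(L, p0):
    # ([3L/4]+2) * 2^L * p0^([L/4]-1): each of the [3L/4]+2 terms is at most
    # 2^L * p0^(L-[3L/4]-1), and L-[3L/4]-1 >= [L/4]-1
    return (3 * L // 4 + 2) * Fraction(2) ** L * p0 ** (L // 4 - 1)

p0 = Fraction(1, 32)  # hypothetical p0 < 2^{-4}
for L in (8, 16, 32, 64):
    assert tail(L, p0) <= bound(L, p0)
# since 2 * p0^{1/4} < 1, the bound decays geometrically in L
assert bound(64, p0) < bound(32, p0) < bound(16, p0)
```

The geometric decay in $L\asymp r^d$ is exactly what produces the summable term $c_02^{-c_0'r^d}$ above.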
Recall that, for any independent sequence $\{\xi_n: n\ge 1\}$ such that
$\mathds E[\xi_n]=0$ for all $n\ge1$ and $\sup_{n}\mathds E[|\xi_n|^{2p}]<\infty$ for some $p\in
\mathds Z_+$, it holds for every $N\ge 1$ that
$
\mathds E\Big[\big|\sum_{n=1}^{N}\xi_n\big|^{2p}\Big] \le c_1(p)N^p,
$
where $c_1(p)$ is a constant independent of $N$. Then, for every $\varepsilon_0>0$, $R$, $r>0$, $c_0^*\ge2$,
$n\ge 1$ and any sequence of bounded measurable functions $\{h_n\}$ on $\mathbb{L}\times \mathbb{L}$ such that $\sup_{n\ge1}\|h_n\|_\infty<\infty$,
\begin{align*}
p_1(r,R,\varepsilon_0)&\le\! \varepsilon_0^{-2p}R^{-2pd}r^{-2pd}
\mathds E\Big[\Big|\sum_{x,y\in \mathbb{L}: |x|\le R, |y-x|\le r}\!\!(w_{x,y}\!-\!\mathds E[w_{x,y}])\Big|^{2p}\Big]\!\le\! c_1(\varepsilon_0)r^{-pd}R^{-pd}, \\
p_2(x,r,\varepsilon_0)&\le \varepsilon_0^{-2p}r^{-2pd}\mathds E
\Big[\Big|\sum_{y\in \mathbb{L}:|y-x|\le r}
\big(w_{x,y}-\mathds E[w_{x,y}]\big)\Big|^{2p}\Big]\le c_2(\varepsilon_0)r^{-pd},\\
p_4(x,r,c_0^*,\varepsilon_0)&\le \varepsilon_0^{-2q}r^{-2qd}
\mathds E\Big[\Big|\sum_{y\in \mathbb{L}: |y-x|\le c_0^*r}\big(w_{x,y}^{-1}-\mathds E[w_{x,y}^{-1}]\big)\Big|^{2q}\Big]\le c_3(\varepsilon_0,c_0^*)r^{-qd},\\
p_5^{(n)}(x,N,\varepsilon,h_n,\varepsilon_0)&\le c_4(\varepsilon_0,\varepsilon,\sup_{n\ge1}\|h_n\|_\infty)n^{2\alpha p}
\mathds E\left[\left|\sum_{y\in \mathbb{L}: |y-x|\ge n\varepsilon, |y|\le nN}\frac{w_{x,y}-\mathds E[w_{x,y}]}{|x-y|^{d+\alpha}}\right|^{2p}\right]\\
&\le c_5(\varepsilon_0, N,\varepsilon,\sup_{n\ge1}\|h_n\|_\infty)n^{2\alpha p}n^{pd}n^{-2p(d+\alpha)}=c_5(\varepsilon_0,N,\varepsilon,\sup_{n\ge1}\|h_n\|_\infty)n^{-pd}.
\end{align*}
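The moment bound $\mathds E[|\sum_{n=1}^{N}\xi_n|^{2p}]\le c_1(p)N^p$ used in the estimates above (a Marcinkiewicz--Zygmund/Rosenthal-type inequality) can be illustrated exactly in the simplest non-trivial case $p=2$ with Rademacher signs, where $\mathds E[S_N^4]=3N^2-2N\le 3N^2$; the enumeration below is a toy check only, not part of the argument:

```python
from itertools import product

def fourth_moment(N):
    # exact E[(xi_1 + ... + xi_N)^4] for i.i.d. Rademacher signs,
    # computed by enumerating all 2^N sign vectors
    total = sum(sum(signs) ** 4 for signs in product((-1, 1), repeat=N))
    return total // 2 ** N  # always an integer: E[S_N^4] = 3N^2 - 2N

for N in range(1, 11):
    assert fourth_moment(N) == 3 * N * N - 2 * N  # so c_1(2) = 3 works here
```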
In the following, we fix $x\in \mathbb{L}$. Let
\begin{align*}
S_p(i)&:=\mathds E\left[\left|\sum_{y\in
\mathbb{L}: |y-x|\le 2^i}
\frac{\big(w_{x,y}-J_1(x,y)\big)}{|x-y|^{d+\alpha-2}}\right|^{2p}\right]\\
&\le c_6\mathds E\left[\left|
\sum_{j=0}^{i}2^{j(2-d-\alpha)}\sum_{y\in
\mathbb{L}: 2^{j-1}<|y-x|\le 2^j}
\big(w_{x,y}-J_1(x,y)\big)\right|^{2p}\right]=:c_6\mathds E\left[\left|\sum_{j=0}^i a(j)\xi(j)\right|^{2p}\right],
\end{align*}
where $a(j)=2^{j(2-d-\alpha)}$ and $\xi(j)=\sum_{y\in
\mathbb{L}: 2^{j-1}<|y-x|\le 2^j}
\big(w_{x,y}-\mathds E[w_{x,y}]\big)$. Noting that
$\mathds E[\xi(j)]=0$ and $\mathds E[|\xi(j)|^{2p}]\le c_72^{jdp}$ for all $j\ge 0$, by the independence of $\{w_{x,y}(\omega)\}$,
$
\sup_{i\ge 1}S_1(i)\le c_6\sup_{i\ge 1}\Big(\sum_{j=0}^i a(j)^2\mathds E[\xi(j)^2]\Big)\le
c_8\sum_{j=0}^{\infty} 2^{j(4-d-2\alpha)}<\infty,
$
where the last step is due to the fact that $d>4-2\alpha$. Suppose that $\sup_{i\ge 1}S_k(i)<\infty$ for every $0\le k<p$. Then
\begin{equation*}
\begin{split}
S_{k+1}(i)-S_{k+1}(i-1)&=\sum_{l=1}^{k+1}\begin{pmatrix}k+1 \\ l\end{pmatrix}
a(i)^{2l}
\mathds E[\xi(i)^{2l}]S_{k+1-l}(i-1)\\
&\le c_9(k)\big(\sup_{0\le j \le k, i\ge 1}S_j(i)\big)
2^{i(4-d-2\alpha)},
\end{split}
\end{equation*}
which implies
$
\sup_{i\ge 1}S_{k+1}(i)\le c_{10}(k)\sum_{i=1}^{\infty}2^{i(4-d-2\alpha)}<\infty.
$
So, by induction, we conclude that $\sup_{i\ge 1}S_p(i)<\infty$. Hence, for every $x \in \mathbb{L}$,
$p_3(x,R,\varepsilon_0)\le c_9(\varepsilon_0)R^{-2(2-\alpha)p}.
$
Under the assumptions of the proposition, and thanks again to the condition $d>4-2\alpha$, we can choose $\theta\in
(0,1)$ (close to $1$) such that
$$p>\max\left\{\frac{d+1+\theta}{d\theta },\frac{d+1}{2\theta(2-\alpha)}\right\}\,\, \text{ and }\,\, q>\frac{d+1+\theta}{d\theta}.$$ Then, according to all the estimates above, we know
immediately that
\eqref{p3-2-1}--\eqref{l3-1-2} hold for this $\theta\in (0,1)$ and
every sufficiently small $\varepsilon_0>0$.
Suppose that $\alpha\in (0,1)$. If $d>2-2\alpha$, $p>\max\big\{{(d+1)}/(2(1-\alpha)), {(d+2)}/{d}\big\}$ and $q>{(d+2)}/{d}$, then we can choose $\theta\in
(0,1)$ (close to $1$) such that
$$p>\max\left\{\frac{d+1+\theta}{d\theta },\frac{d+1}{2\theta(1-\alpha)}\right\} \,\, \text{ and }\,\, q>\frac{d+1+\theta}{d\theta}.$$ Following the argument above, we can prove that \eqref{p3-2-1}--\eqref{p3-2-2}, \eqref{l3-1-2} and \eqref{l3-1-1-1} are satisfied. Then, the desired assertion follows from Theorem \ref{t5-1} again. The proof is complete.
\end{proof}
Theorem \ref{th1} is a direct consequence of Proposition
\ref{ex3-1}, since \eqref{p3-2-0} holds trivially in this setting.
\subsubsection{Time-change of $\alpha$-stable-like process on $\mathds R^d$}
Let us first fix the triple $(F,\rho,m)$ with $F=\mathds R^d$,
$\rho$ being the Euclidean distance and $m(dx)=K(x)\,dx$, where $dx$ denotes the Lebesgue measure on $\mathds R^d$ and
$K$ is a continuous function on $\mathds R^d$ satisfying that $0<C_1\le K(x)\le C_2<\infty$ for some constants
$C_1\le C_2$.
Then, the process $Y$ associated with the Dirichlet form
$(D_0,\mathscr{F}_0)$ given at the beginning of
Subsection \ref{subsu5-1} is a time-change of a symmetric $\alpha$-stable process on $\mathds R^d$ with $c(x,y)=K(x)^{-1}K(y)^{-1}$ for $x,y\in \mathds R^d$. It is obvious that $(D_0,\mathscr{F}_0)$
satisfies assumption {\bf(Dir.)}.
Similar to the previous part, we can take $V_1=\mathds Z^d$, and $m_n=K_n\mu_n$
with $\mu_n$ being the counting measure on $n^{-1}\mathds Z^d$ and
\begin{equation*}
K_n(x)=n^{d}\int_{U_n(x)}K(y)\,dy,\quad x\in n^{-1}\mathds Z^d,
\end{equation*}
where $U_n(x)=\Pi_{i=1}^d[x_i,x_i+n^{-1})$ for any $x=(x_1,\cdots,x_d)\in n^{-1}\mathds Z^d$.
Let $(X_t^{\omega})_{t\ge 0}$ be the symmetric Hunt process associated with Dirichlet form $(D_{V_1}^{\omega},\mathscr{F}^{\omega}_1)$ with
$V_1=\mathds Z^d$ and $w^{(1)}_{x,y}(\omega)=w_{x,y}(\omega)$.
Note that for any compact set $S \subset \mathds R^d$,
$
\lim_{n \rightarrow \infty}\sup_{x\in S}\big|K_n([x]_n)-K(x)\big|=0.
$ If $J_1(x,y)=\mathds E[w_{x,y}]=K_1(x)^{-1}K_1(y)^{-1}$ for all $x,y \in \mathds Z^d$, then
\eqref{p3-2-0} holds true. Hence, following the same arguments in the proof of
Proposition \ref{ex3-1}, we can obtain that under assumption \eqref{ex3-1-1} the quenched invariance
principle holds for $(X_t^{\omega})_{t\ge 0}$ with the limiting process $Y$ being a time-change of a symmetric $\alpha$-stable process on $\mathds R^d$.
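In this setting the verification of \eqref{p3-2-0} reduces to a one-line computation (a sketch, using $J_1(x,y)=K_1(x)^{-1}K_1(y)^{-1}$ and the uniform convergence of $K_n$ on compact sets noted above):

```latex
% For x,y in a compact set S,
\begin{equation*}
J_1(n[x]_n,n[y]_n)\cdot\frac{K_1(n[x]_n)K_1(n[y]_n)}{K_n([x]_n)K_n([y]_n)}
=\frac{1}{K_n([x]_n)K_n([y]_n)}
\longrightarrow \frac{1}{K(x)K(y)}=c(x,y),
\end{equation*}
% uniformly on S, since 0<C_1\le K_n\le C_2 and \sup_{x\in S}|K_n([x]_n)-K(x)|\to 0.
```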
\begin{remark}\label{r5-2}
From the example above, we see that identifying the limit process requires two ingredients. One
is to verify the local weak convergence of $m_n$ to $m$, and the other is to justify the convergence of the jumping kernel
for the associated Dirichlet form. In fact, by carefully tracking the proof above, we can see that if the measure $m_n$ is replaced
by a more general (random) measure which converges locally weakly to $m$,
then the quenched invariance principle still holds
with the same limiting process.
\end{remark}
\subsubsection{Bounded Lipschitz domain}
In fact, Proposition \ref{ex3-1} holds not only for a half/quarter space, but
also for the closure of a bounded Lipschitz domain in
$\mathbb{R}^d$,
whose intrinsic distance is equivalent to the Euclidean distance and whose
volume growth is of order $d$.
More precisely, let $F\subset \mathbb{R}^d$ be
a closed set such that for any $x,y\in F$ and $r>0$,
$
c_1r^{d}\le m(B_F(x,r))\le c_2r^{d}$ and
$
c_1|x-y|\le \rho_F(x,y)\le c_2|x-y|,
$
where $$\rho_F(x,y):=\inf\left\{\int_{0}^1 |\dot{\gamma}(s)|\,ds: \gamma\in C^1([0,1];F), \gamma(0)=x,\gamma(1)=y\right\}$$ is the
intrinsic metric on $F$, $m$ is the Lebesgue measure,
and $B_F(x,r)$ is the ball with respect to $\rho_F$.
For $x=(x_1,\cdots,x_d)\in n^{-1}\mathds Z^d$,
set $U_n(x)=\Pi_{i=1}^d[x_i,x_i+n^{-1}).$
Note that when $F$ is the closure of a {bounded} Lipschitz domain, $V_n:=\{x\in n^{-1}\mathbb{Z}^d\cap F: U_n(x)\subset F\}$
satisfies the properties given in Lemma \ref{L:ac}.
Suppose that $\{w_{x,y}:x,y\in \mathbb{Z}^d\}$ is a sequence of independent
random variables satisfying the conditions in Proposition \ref{ex3-1}.
Then the conclusion of Proposition \ref{ex3-1} holds on $F$. Indeed, in this case, by taking
$V_n$ as above, the proofs of Theorem \ref{t5-1} and Proposition \ref{ex3-1} go through without any change
({with $\rho$ replaced by $\rho_F$ as explained in Remark \ref{r5-1}}). Note that neither $V_n=n^{-1}V_1$ nor
$X_t^{(n),\omega}={n}^{-1}X_{n^{\alpha}t}^{V_1,\omega}$
holds in general in this setting.
{(However, we can verify that $X_t^{(n),\omega}={n}^{-1}X_{n^{\alpha}t}^{\tilde V_{n},\omega}$, where
$\tilde V_n:=nV_n \subset nF.$)}
Note that the proofs do not require these properties, and the integrability condition
given for all $x,y\in \mathbb{Z}^d$ is (more than) enough for the estimates in the
proofs to hold.
\subsubsection{Fractal graph} The arguments in Example \ref{laths}
work for more general graphs
satisfying (i)--(iv) whose scaling limit $(F,\rho,m)$ and associated Dirichlet form satisfy {\bf(MMS)} and {\bf(Dir.)}, respectively, as discussed at the beginning of Subsection \ref{subsu5-1}. In particular, we can
prove the quenched invariance principle for stable-like processes on various fractal graphs.
Here we introduce the most typical fractal graph, namely the Sierpinski gasket graph.
Let $e_0=(0,0,\cdots, 0)\in \mathds R^{N}$, and for
$1\le i\le N$, $e_i$ be the unit vector in $\mathds R^{N}$ whose $i$-th element is $1$. Set
$F_i(x)=(x-e_i)/2+e_i$ for $0\le i\le N$.
Then there exists a unique non-void compact set $K$ such that
$K=\cup_{i=0}^N F_i(K)$; $K$ is called the $N$-dimensional
Sierpinski gasket. Set $F:=\cup_{n=0}^{\infty}2^nK$, which is the unbounded Sierpinski gasket. Let
\[V_1=\bigcup_{m=0}^\infty 2^m\Big(\bigcup_{i_1,\cdots,i_m=0}^NF_{i_1}\circ\cdots
\circ F_{i_m}(\{e_0,\cdots, e_N\})\Big),~~V_n=2^{-n+1}V_1.\]
(Hence, $n^{-1}$ in the definition of $V_n$ in the previous subsection is now
$2^{-n+1}$.) The closure of $\cup_{m\ge 1}V_m$ is $F$.
$F$ satisfies assumption {\bf(MMS)} with $d=\log (N+1)/\log 2$. We can naturally construct
a regular stable-like Dirichlet form satisfying
assumption {\bf(Dir.)}.
Let $\{w_{x,y}: x,y\in V_1\}$ be a sequence of independent random
variables. Then the conclusion of Proposition \ref{ex3-1}
holds in this case as well, with the same proof.
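As a purely illustrative check (not needed for the proofs), the volume growth exponent $d=\log(N+1)/\log 2$ can be observed numerically for the two-dimensional gasket by generating the level-$m$ vertex sets through the maps $F_i$ and counting points: the counts grow like $3^m=(2^m)^{\log 3/\log 2}$.

```python
from fractions import Fraction

N = 2  # the two-dimensional gasket
# e_0 = (0,0), e_1 = (1,0), e_2 = (0,1); F_i(x) = (x + e_i)/2
E = [tuple(Fraction(1 if j == i - 1 else 0) for j in range(N))
     for i in range(N + 1)]

def next_level(points):
    # one application of the iterated function system x -> (x + e_i)/2
    return {tuple((c + e) / 2 for c, e in zip(p, ei))
            for ei in E for p in points}

counts, pts = [], set(E)
for m in range(5):
    counts.append(len(pts))
    pts = next_level(pts)

# vertex counts of the level-m gasket graph: 3(3^m + 1)/2
assert counts == [3 * (3 ** m + 1) // 2 for m in range(5)]  # 3, 6, 15, 42, 123
```

Here the counting is done inside the unit gasket $K$ rather than the unbounded set $F$; by self-similarity this gives the same growth exponent.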
\ \
\noindent {\bf Acknowledgements.} The research of Xin Chen is
supported by National Natural Science Foundation of China (No.\ 11501361).\ The research of Takashi Kumagai is supported by the
Grant-in-Aid for Scientific Research (A) 25247007 and 17H01093,
Japan.\ The research of Jian Wang is supported by National Natural
Science Foundation of China (No.\ 11522106), Fok Ying Tung Education
Foundation (No.\ 151002) and the Program for Probability and
Statistics: Theory and Application (No.\ IRTL1704).
\section{Introduction}
Entanglement plays a very important role in quantum information
processes \cite{yu1, wbps, AFOV, FLWDR} (see also references
therein). Even if different parts of the quantum system (quantum
register) are initially disentangled, entanglement naturally appears
in the process of quantum protocols. This ``constructive
entanglement'' must be preserved during the time of quantum
information processing. On the other hand, the system generally
becomes entangled with the environment. This ``destructive
entanglement'' must be minimized in order to achieve a needed
fidelity of quantum algorithms. The importance of these effects
calls for the development of rigorous mathematical tools for
analyzing the dynamics of entanglement and for controlling the
processes of constructive and destructive entanglement. Another
problem which is closely related to quantum information is quantum
measurement. Usually, for a qubit (quantum two-level system),
quantum measurements operate under the condition
$\hbar\omega\gg k_BT$, where $T$ is the temperature, $\omega$ is
the transition frequency, $\hbar$ is the Planck constant, and $k_B$
is the Boltzmann constant. This condition is widely used in
superconducting quantum computation, when $T\sim 10$--$20\,{\rm mK}$ and
$\hbar\omega/k_B\sim 100$--$150\,{\rm mK}$. In this case, one can use
Josephson junctions (JJ) and superconducting quantum interference
devices (SQUIDs), both as qubits \cite{MSS, YHCCW, DM, Setal, Ketal}
and as spectrometers \cite{Cletal} measuring a spectrum of noise and
other important effects induced by the interaction with the
environment. Understanding the dynamical characteristics of
entanglement through the environment on a large time interval will
help to develop new technologies for measurements not only of
spectral properties, but also of quantum correlations induced by
the environment.
In this paper, we develop a consistent perturbation theory of
quantum dynamics of entanglement which is valid for arbitrary times
$t\geq 0$. This is important in many real situations because (i) the
characteristic times which usually appear in quantum systems with
two and more qubits involve different time-scales, ranging from a
relatively fast decay of entanglement and different reduced density
matrix elements (decoherence) to possibly quite large relaxation
times, and (ii) for not exactly solvable quantum Hamiltonians
(describing the energy exchange between the system and the
environment) one can only use a perturbative approach in order to
estimate the characteristic dynamical parameters of the system.
Note that, generally, not only are the time-scales for decoherence
and entanglement different, but so are their functional
time-dependences. Indeed, usually the off-diagonal reduced density
matrix elements in the basis of the quantum register do not decay to
zero for large times, but remain at the level of $O(\lambda^2)$,
where $\lambda$ is a characteristic constant of interaction between
a qubit and an environment \cite{MSB1}. On the other hand,
entanglement has a different functional time dependence, and in many
cases decays to zero in finite time. Another problem which we
analyze in this paper is a well-known cut-off procedure which one
must introduce for high frequencies of the environment in order to
have finite expressions for the interaction Hamiltonian between the
quantum register and the environment. Generally, this artificial
cut-off frequency enters all expressions in the theory for physical
parameters, including decay rates and dynamics of observables. At
the same time, one does not have this cut-off problem in real
experimental situations. So, it would be very desirable to develop a
regular theoretical approach to derive physical expressions which do
not include the cut-off parameter. We show that our approach allows
us to derive these cut-off independent expressions as the main terms
of the perturbation theory, which is of $O(\lambda^2)$. The cut-off
terms are included in the corrections of $O(\lambda^4)$. At the same
time, the low-frequency divergencies still remain in the theory, and
need additional conditions for their removal.
We describe the characteristic dynamical properties of the
simplest quantum register which consists of two not directly
interacting qubits (effective spins), which interact with local and
collective environments. We introduce a classification of the
decoherence times based on a partition of the reduced density matrix
elements in the energy basis into clusters. This classification,
valid for general $N$-level systems coupled to reservoirs, is rather
important for dealing with quantum algorithms with large registers.
Indeed, in this case different orders of ``quantumness'' decay on
different time-scales. The classification of decoherence time-scales
which we suggest will help to separate environment-induced effects
which are important from the unimportant ones for performing a
specific quantum algorithm. We point out that all the populations
(the diagonal of the density matrix) always belong to a single
cluster, the one associated with the relaxation time.
We present analytical and numerical results for decay and creation
of entanglement for both solvable (integrable, energy conserving)
and unsolvable (non-integrable, energy-exchange) models, and explain
the relations between them.
This paper is devoted to a physical and numerical discussion of the
dynamical resonance theory, and its application to the evolution of
entanglement. A detailed exposition of the resonance method can be
found in \cite{MSB1,mm}. As the mathematical details leading to
certain expressions used in the discussion presented in this paper
are rather lengthy, we report them separately in \cite{mm}.
\section{Model}
We consider two qubits ${\rm S}_1$ and ${\rm S}_2$, each one coupled to a local
reservoir, and both together coupled to a collective reservoir. The
Hamiltonian of the two qubits is
\begin{equation}
H_{\rm S}=B_1S^z_1+B_2S^z_2,
\label{n1}
\end{equation}
where $B_j=\hbar\omega_j/{2}$ are effective magnetic fields,
$\omega_j$ is the transition frequency, and $S_j^z$ is the Pauli
spin operator of qubit $j$. The eigenvalues of $H_{\rm S}$ are
\begin{equation}
E_1=B_1+B_2,\ E_2=B_1-B_2,\ E_3=-B_1+B_2,\ E_4=-B_1-B_2,
\label{46}
\end{equation}
with corresponding eigenstates
\begin{equation}
\varPhi_1 = |++\rangle,\ \varPhi_2 = |+-\rangle,\ \varPhi_3 = |-+\rangle,\ \varPhi_4 = |--\rangle,\
\label{Sbasis}
\end{equation}
where $S^z|\pm\rangle =\pm|\pm\rangle$.
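As a quick consistency check (not part of the analysis), the spectrum \fer{46} can be reproduced numerically; a minimal sketch with hypothetical dimensionless field values:

```python
import numpy as np

# Pauli z and identity for a single qubit; |+> = (1,0), |-> = (0,1).
Sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

B1, B2 = 0.5, 0.75  # hypothetical effective fields (dimensionless)

# H_S = B1 Sz (x) 1 + B2 1 (x) Sz, cf. eq. (n1);
# basis ordered as |++>, |+->, |-+>, |-->.
H_S = B1 * np.kron(Sz, I2) + B2 * np.kron(I2, Sz)

# H_S is diagonal in the product basis, with eigenvalues E_1..E_4 of eq. (46).
E = np.diag(H_S)
print(E)  # [ B1+B2, B1-B2, -B1+B2, -B1-B2 ]
```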
Each of the three reservoirs consists of free thermal bosons at
temperature $T=1/\beta>0$, with Hamiltonian
\begin{equation}
H_{{\rm R}_j} = \sum_k \hbar\omega_k a_{j,k}^\dagger a_{j,k},\qquad j=1,2,3.
\label{n2}
\end{equation}
The index $3$ labels the collective reservoir. The creation and
annihilation operators satisfy $[a_{j,k},a_{j',k'}^\dagger] =
\delta_{j,j'}\delta_{k,k'}$. The interaction between each qubit and
each reservoir has two parts: an energy conserving and an energy
exchange one. The total Hamiltonian is
\begin{eqnarray}
H &=& H_{\rm S} +H_{{\rm R}_1}+H_{{\rm R}_2} + H_{{\rm R}_3} \label{n3}\\
&& +\left( \lambda_1 S^x_1 +\lambda_2 S^x_2\right)\otimes\phi_3(g)\label{n4}\\
&& +\left( \kappa_1 S^z_1 +\kappa_2 S^z_2\right)\otimes\phi_3(f)\label{n5}\\
&& + \,\mu_1 S^x_1\otimes\phi_1(g) +\mu_2 S^x_2\otimes\phi_2(g)\label{n6}\\
&& + \,\nu_1 S^z_1\otimes\phi_1(f) +\nu_2 S^z_2\otimes\phi_2(f)\label{n7}.
\end{eqnarray}
Here, $\phi_j(g)=\frac{1}{\sqrt 2}(a_j^\dagger(g) +a_j(g))$, with
\begin{equation}
a_j^\dagger(g)=\sum_k g_k a^\dagger_{j,k},\qquad a_j(g)=\sum_k g^*_k a_{j,k}.
\label{n8}
\end{equation}
The $\lambda, \kappa, \mu,\nu $ are the dimensionless coupling
constants. The collective interaction is given by \fer{n4}
(energy-exchange) and \fer{n5} (energy conserving), the local
interactions are given by \fer{n6}, \fer{n7}. Also, $S_j^x$ is the
spin-flip operator (Pauli matrix) of qubit $j$. In the continuous
mode limit, $g_k$ becomes a function $g(k)$, $k\in{\mathbb R}^3$. Our
approach is based on analytic spectral deformation methods
\cite{MSB1} and requires some analyticity of the form factors $f,g$.
Instead of stating this condition in general, we work here with
examples satisfying it.
\begin{itemize}
\item[(A)] Let $r\geq 0$, $\Sigma\in S^2$ be the spherical coordinates
of ${\mathbb R}^3$. The form factors $h=f,g$ (see \fer{n4}-\fer{n7}) are $h(r,\Sigma)
= r^p{\rm e}^{-r^m}h_1(\Sigma)$, with $p=-1/2+n$, $n=0,1,\ldots$ and $m=1,2$. Here,
$h_1$ is any angular function.
\end{itemize}
This family contains the usual physical form factors \cite{PSE}. We
point out that we include an ultraviolet cutoff in the interaction
in order for the model to be well defined. (The minimal mathematical
condition for this is that $f(k), g(k)$ be square integrable over
$k\in{\mathbb R}^3$.) However, as discussed in point 2. before equation
\fer{cutoff}, our approach yields expressions for decay and
relaxation rates which, to lowest order in the couplings between the
qubits and the reservoirs, do {\it not} depend on the ultraviolet
characteristics of the model.
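The minimal square-integrability requirement just mentioned can be checked directly for family (A); a sketch for the $n=0$, $m=1$ member (the angular factor contributes only a constant):

```python
import numpy as np
from scipy.integrate import quad

# Radial part of |h|^2 r^2 for the family h(r) = r^p e^{-r^m}, p = -1/2 + n.
def radial_density(r, n, m):
    p = -0.5 + n
    return r**2 * (r**p * np.exp(-r**m))**2  # = r^(2n+1) e^(-2 r^m)

# For n = 0, m = 1 the integral is  int_0^inf r e^{-2r} dr = 1/4,
# which is finite, so h is square integrable over R^3.
val, err = quad(radial_density, 0.0, np.inf, args=(0, 1))
print(val)  # close to 0.25
```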
\section{Evolution of qubits: resonance approximation}
\label{sectevol}
We take initial states where the qubits are not entangled with the
reservoirs. Let $\rho_{\rm S}$ be an arbitrary initial state of the qubits,
and let $\rho_{{\rm R}_j}$ be the thermal equilibrium state of reservoir ${\rm R}_j$.
Let $\rho_{\rm S}(t)$ be the reduced density matrix of the two qubits at time $t$.
The reduced density matrix elements in the energy basis are
\begin{eqnarray}
\lefteqn{
[\rho_{\rm S}(t)]_{mn}:=\scalprod{\varPhi_m}{\rho_{\rm S}(t)\varPhi_n}}\nonumber\\
&& ={\rm Tr}_{{\rm R}_1+{\rm R}_2+{\rm R}_3}\left[ \rho_{\rm S}\otimes\rho_{{\rm R}_1}\otimes
\rho_{{\rm R}_2}\otimes\rho_{{\rm R}_3}\ {\rm e}^{-\i tH/\hbar} |\varPhi_n\rangle\langle\varPhi_m| \,
{\rm e}^{\i tH/\hbar}\right],
\label{40}
\end{eqnarray}
where we take the trace over all reservoir degrees of freedom.
Under the non-interacting dynamics (all coupling parameters zero),
we have
\begin{equation}
[\rho(t)]_{mn} = {\rm e}^{\i t e_{mn}/\hbar} [\rho(0)]_{mn}, \label{30}
\end{equation}
where $e_{mn}=E_m-E_n$.
In the rest of the paper we use dimensionless functions and
parameters. For this we introduce a characteristic frequency,
$\omega_0$, to be defined later, in Section 8, and the dimensionless
energies, temperature, frequencies and wave vectors of thermal
excitations, and time by setting
\begin{equation}
E_n^\prime = E_n/(\hbar\omega_0),\qquad
f_k^\prime=f_k/(\hbar\omega_0), \qquad
g_k^\prime=g_k/(\hbar\omega_0),\qquad T^\prime=k_BT/(\hbar\omega_0),
\end{equation}
$$
\beta^\prime=1/T^\prime,\qquad
\omega_k^\prime=\omega_k/\omega_0,\qquad\vec{k}^\prime=c\vec{k}/\omega_0,\qquad
t^\prime = \omega_0 t,
$$
where $c$ is the speed of light. Below we omit the prime in all
expressions.
As the interactions with the reservoirs are turned on (some of
$\kappa_j,\lambda_j,\mu_j,\nu_j$ nonzero), the dynamics \fer{30}
undergoes two qualitative changes.
\begin{itemize}
\item[1.] The ``Bohr frequencies''
\begin{equation}
e\in \{ E_k-E_l\ : E_k,E_l\in{\rm spec}(H_{\rm S})\} \label{17}
\end{equation}
in the exponent of \fer{30} become {\it complex resonance energies},
$e\mapsto \varepsilon_e$, satisfying $\Im\varepsilon_e\geq 0$. If
$\Im\varepsilon_e >0$ then the corresponding density matrix elements
decay to zero (irreversibility).
\item[2.] The matrix elements do not evolve independently any more.
To lowest order in the couplings, all matrix elements
with $(m,n)$ belonging to a fixed energy difference $E_m-E_n$ will
evolve in a coupled manner. Thus to a given energy difference $e$,
\fer{17}, we associate the cluster of matrix element indexes
\begin{equation}
{\cal C}(e)=\{ (k,l)\ :\ E_k-E_l=e\}.
\label{32}
\end{equation}
\end{itemize}
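The clusters \fer{32} are easily generated from the spectrum of $H_{\rm S}$; a minimal sketch with hypothetical fields $B_1=1$, $B_2=1.5$, grouping index pairs by their Bohr frequency:

```python
import itertools

# Eigenvalues E_1..E_4 of H_S, eq. (46), for hypothetical fields B1, B2.
B1, B2 = 1.0, 1.5
E = [B1 + B2, B1 - B2, -B1 + B2, -B1 - B2]

# Group index pairs (k,l) by the Bohr frequency e = E_k - E_l, eq. (32).
clusters = {}
for k, l in itertools.product(range(4), repeat=2):
    e = round(E[k] - E[l], 12)                         # guard against float noise
    clusters.setdefault(e, []).append((k + 1, l + 1))  # 1-based indices

# The nonnegative keys are 0, 2*B1, 2*B2, 2*(B2-B1), 2*(B1+B2).
for e in sorted(clusters):
    print(e, clusters[e])
```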
\noindent Both effects are small if the coupling is small, and they
can be described by perturbation theory of energy differences
\fer{17}. We view the latter as the eigenvalues of the {\it
Liouville operator}
\begin{equation}
L_{\rm S} = H_{\rm S}\otimes{\rm 1\mskip-4mu l}_{\rm S} - {\rm 1\mskip-4mu l}_{\rm S}\otimes H_{\rm S},
\label{16'}
\end{equation}
acting on the doubled Hilbert space ${\cal H}_{\rm S}\otimes{\cal H}_{\rm S}$ (and ${\cal
H}_{\rm S}={\mathbb C}^2\otimes{\mathbb C}^2$). The appearance of `complex energies' for
open systems is well known to stem from passing from a Hamiltonian
dynamics to an effective non-Hamiltonian one by tracing out
reservoir degrees of freedom. The fact that independent clusters arise in the dynamics to lowest
order in the coupling can be understood heuristically as follows.
The interactions change the effective energy of the two qubits, i.e.
the basis in which the reduced density matrix is diagonal. Thus the
eigenbasis of $L_{\rm S}$ \fer{16'} is changed. However, to lowest order
in the perturbation, spectral subspaces with fixed $e\in{\rm
spec}(L_{\rm S})$ {\it are left invariant} and stay orthogonal for
different unperturbed $e$. So matrix elements associated to ${\cal
C}(e)$ get mixed, but they do not mix with those in ${\cal C}(e')$,
$e\neq e'$.
\medskip
Let $e$ be an eigenvalue of $L_{\rm S}$ of multiplicity ${\rm mult}(e)$.
As the coupling parameters are turned on, there are generally many
distinct resonance energies bifurcating out of $e$. We denote them
by $\varepsilon_e^{(s)}$, where the parameter $s$ distinguishes different
resonance energies and varies between $s=1$ and
$s=\nu(e)$, where $\nu(e)$ is some number not exceeding ${\rm
mult}(e)$. We have a perturbation expansion
\begin{equation}
\varepsilon_e^{(s)} = e +\delta_e^{(s)} +O(\varkappa^4),
\label{u1}
\end{equation}
where
\begin{equation}
\varkappa:= \max\{ |\kappa_j|, |\lambda_j|, |\mu_j|, |\nu_j|\ :\
j=1,2\} \label{12}
\end{equation}
and where $\delta_e^{(s)}=O(\varkappa^2)$ and $\Im\delta_e^{(s)}\geq 0$. The lowest order corrections $\delta_e^{(s)}$ are the eigenvalues of an explicit {\em level shift operator} $\Lambda_e$ (see
\cite{MSB1}), acting on the eigenspace of $L_{\rm S}$ associated to $e$. There are two
bases $\{\eta_e^{(s)}\}$ and $\{\widetilde\eta_e^{(s)}\}$ of the eigenspace, satisfying
\begin{equation}
\Lambda_e\eta_e^{(s)} = \delta_e^{(s)}\eta_e^{(s)}, \qquad [\Lambda_e]^*\widetilde\eta_e^{(s)} = {\delta_e^{(s)}}^*\ \widetilde\eta_e^{(s)}, \qquad \scalprod{\widetilde\eta_e^{(s)}}{\eta_e^{(s')}} =\delta_{s,s'}.
\label{19}
\end{equation}
We call the eigenvectors $\eta_e^{(s)}$ and $\widetilde\eta_e^{(s)}$ the `resonance vectors'. We take interaction parameters ($f,g$ and the coupling constants) such that the following condition is satisfied.
\begin{itemize}
\item[(F)] There is complete splitting of resonances under perturbation at second order, i.e., all the $\delta_e^{(s)}$ are distinct for fixed $e$ and varying $s$.
\end{itemize}
This condition implies in particular that there are ${\rm mult}(e)$ distinct resonance energies $\varepsilon_e^{(s)}$, $s=1,\ldots,{\rm mult}(e)$ bifurcating out of $e$, so that in the above notation, $\nu(e)={\rm mult}(e)$.
Explicit evaluation of $\delta_e^{(s)}$ shows that condition (F) is satisfied for generic values of the interaction parameters (see also \fer{52}-\fer{56}).
\medskip
The following result is obtained from a detailed analysis of a
representation of the reduced dynamics given in \cite{MSB1}, and generalized to the present model with three reservoirs. The mathematical details are presented in \cite{mm}.
\medskip
{\bf Result on reduced dynamics.\ }
{\it
Suppose that Conditions (A) and (F) hold. There
is a constant $\varkappa_0>0$ such that if $\varkappa
<\varkappa_0$, then we have for all $t\geq 0$
\begin{equation}
[\rho_t]_{mn} =\sum_{(k,l)\in{\cal C}(E_m-E_n)}A_t(m,n;k,l)\ [\rho_0]_{kl} +O(\varkappa^2),
\label{42}
\end{equation}
where the remainder term is uniform in $t\geq 0$, and where the amplitudes $A_t$ satisfy the {\em Chapman-Kolmogorov} equation
\begin{equation}
A_{t+r}(m,n;k,l)= \sum_{(p,q)\in{\cal C}(E_m-E_n)}A_t(m,n;p,q)A_r(p,q;k,l),
\label{chko}
\end{equation}
for $t,r\geq 0$, with initial condition $A_0(m,n;k,l) = \delta_{m=k}\delta_{n=l}$ (Kronecker delta). Moreover, the amplitudes are given in terms of
the resonance vectors and resonance energies by
\begin{equation}
A_t(m,n;k,l) = \sum_{s=1}^{{\rm mult}(E_n-E_m)} {\rm e}^{\i
t\varepsilon_{E_n-E_m}^{(s)}}
\scalprod{\varPhi_l\otimes\varPhi_k}{\eta_{E_n-E_m}^{(s)}}
\scalprod{\widetilde\eta_{E_n-E_m}^{(s)}}{\varPhi_n\otimes\varPhi_m}.
\label{35}
\end{equation}
}
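The amplitudes \fer{35}, built from biorthogonal resonance data as in \fer{19}, automatically satisfy the Chapman-Kolmogorov equation \fer{chko}; a minimal numerical sketch with a hypothetical $2\times 2$ level shift operator acting on a two-element cluster:

```python
import numpy as np

# A hypothetical 2x2 level shift operator on a two-element cluster;
# its eigenvalues play the role of the delta_e^(s) (Im delta >= 0).
Lam = np.array([[0.30 + 0.20j, 0.05 + 0.00j],
                [0.02 + 0.00j, 0.10 + 0.40j]])

# Right eigenvectors eta^(s) (columns of R) and left eigenvectors
# (columns of L), normalized biorthogonally as in eq. (19).
delta, R = np.linalg.eig(Lam)
L = np.linalg.inv(R).conj().T   # ensures <eta~^(s), eta^(s')> = delta_{ss'}

# Cluster propagator built as the spectral sum of eq. (35):
#   A_t = sum_s e^{i t eps^(s)} |eta^(s)><eta~^(s)|.
def A(t):
    return R @ np.diag(np.exp(1j * t * delta)) @ L.conj().T

# Biorthogonality and the Chapman-Kolmogorov equation, eq. (chko):
print(np.allclose(L.conj().T @ R, np.eye(2)))        # True
print(np.allclose(A(0.7 + 1.9), A(0.7) @ A(1.9)))    # True
```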
{\it Remark.\ } The upper bound $\varkappa_0$ satisfies $\varkappa^2_0\leq {\rm const.\,} T$, where $T$ is the temperature of the reservoirs, \cite{MSB1}.
We will call the first term on the r.h.s. of \fer{42} the {\bf resonance approximation} of the reduced density matrix dynamics.
\medskip
\noindent
{\bf Discussion.\ } 1. The result shows that to lowest
order in $\varkappa$, {\em and homogeneously in time}, the reduced
density matrix elements evolve in clusters. A cluster is determined
by indices in a fixed ${\cal C}(e)$. Within each cluster the
dynamics has the structure of a Markov process. Moreover, the
transition amplitudes of this process are given by the resonance
data. They can be calculated explicitly in concrete models. We have
therefore a simple approximation of the true dynamics, valid
homogeneously in time. This is an {\it advantage} of the resonance
representation. A {\it limitation} is that this method cannot
describe the evolution of quantities (averages of observables) which
are of the order of the square of the coupling parameters, since the
error in the approximation is of the same order.\footnote{However, by
including higher order terms in the perturbation theory, one can
refine the resonance method and resolve processes of higher order in
the coupling.} An illustration of this limitation of the method is the large-time behaviour of off-diagonal matrix elements. Generically, all off-diagonals decay to a limit having the size $O(\varkappa^2)$, as $t\rightarrow\infty$ \cite{MSB1}. As soon as a matrix element is of order $O(\varkappa^2)$, the resonance approximation \fer{42} cannot resolve its dynamics, since it is of the same order as the remainder.
One of our goals is to describe the evolution of entanglement of the qubits (see sections \ref{sectentcre}, \ref{numres}). From the above explanations, it is clear that the resonance approximation is well suited to describe decay of initial entanglement of qubits (if the initial entanglement is much larger than $O(\varkappa^2)$). On the other hand, an initially unentangled qubit state will typically become entangled due to the interaction with reservoirs. It is expected that the entanglement created may be of the same order as the error in the approximation \fer{42}, and hence the question arises if it is possible to describe this process using the resonance approximation.
The answer is positive, as we show numerically in section \ref{numres}: indeed we see that the amount of entanglement created is {\it independent} of the coupling strength. (The effect of changing $\varkappa$ is to shift the time-dependent curve of entanglement along the time-axis.)
2. {\bf Cluster classification of density matrix
elements.\ } The main dynamics partitions the reduced dynamics into
independent clusters of jointly evolving matrix elements, according
to \fer{32}. Depending on the energy level distribution of the two
isolated qubits and on the interaction parameters, each cluster has
its associated decay rate. It is possible that some clusters decay
very quickly, while some others stay populated for much longer
times. The resonance dynamics furnishes us with a very concrete
recipe telling us which matrix elements disappear at which times,
revealing a pattern of where and when in the density matrix quantum
properties are lost. The same feature holds for an
arbitrary $N$-level system coupled to reservoirs, \cite{MSB1}, and
notably for complex systems ($N\gg 1$). In particular, this approach
may prove useful in the analysis of feasibility of quantum
algorithms performed on $N$-qubit registers. We point out that that {\it the diagonal belongs always to a single cluster}, namely the one associated with $e=0$. If the energies of the $N$-level system are degenerate, then some off-diagonal matrix elements belong to the same cluster as the diagonal as well.
3. The sum in \fer{42}
alone, which is the main term in the expansion, preserves the
hermiticity but not the positivity of density matrices. In other
words, the matrix obtained from this sum may have negative
eigenvalues. Since by adding $O(\varkappa^2)$ we do get a true
density matrix, the mentioned negativity of eigenvalues can only be
of $O(\varkappa^2)$. This can cause complications if one tries to
calculate for instance the concurrence by using the main term in
\fer{42} alone. Indeed, concurrence is not defined in general for a
`density matrix' having negative eigenvalues. See also section \ref{numres},
Numerical Results.
4. It is well known that the time decay of matrix
elements is not exponential for all times. For example, for small
times it is quadratic in $t$ \cite{PSE}. How is this behaviour
compatible with the representation \fer{42}, \fer{35}, where only
exponential time factors ${\rm e}^{\i t\varepsilon}$ are present? The
answer is that {\it up to an error term of $O(\varkappa^2)$}, the
``overlap coefficients'' (scalar products in \fer{35}) mix the
exponentials in such a way as to produce the correct time behaviour.
5. Since the coupled system has an equilibrium state, one of the
resonances $\varepsilon_0^{(s)}$ is always zero \cite{MLSO}, we set
$\varepsilon_0^{(1)}=0$. The condition $\Im\varepsilon_e^{(s)}>0$
for all $e,s$ except $e=0$, $s=1$ is equivalent to the entire system
(qubits plus reservoirs) converging to its equilibrium state for
large times.
\medskip
As we have remarked above, the decay of matrix elements is not in
general exponential, but we can nevertheless represent it
(approximately, to order $\varkappa^2$) in terms of superpositions of
exponentials, for all times $t\geq 0$. In regimes where the actual
dynamics has exponential decay, the rates coincide with those we
obtain from the resonance theory (large time dynamics, see Section \ref{sectcomp} and also
\cite{PSE,MSB1}). It is therefore reasonable to define the
thermalization rate by
$$
\gamma^{\rm th} = \min_{s\geq 2}\Im\varepsilon_0^{(s)}\geq 0
$$
and the {\it cluster decoherence rate} associated to ${\cal C}(e)$, $e\neq 0$, by
$$
\gamma^{{\rm dec}}_e =\min_{1\leq s \leq {\rm mult}(e)} \Im\varepsilon_e^{(s)} \geq 0.
$$
The interpretation is that the cluster of matrix elements of the
true density matrix associated to $e\neq 0$ decays to zero, modulo
an error term $O(\varkappa^2)$, at the rate $\gamma_e^{\rm dec}$,
and the cluster containing the diagonal approaches its equilibrium
(Gibbs) value, modulo an error term $O(\varkappa^2)$, at rate
$\gamma^{\rm th}$. If $\gamma$ is any of the above rates, then
$\tau=1/\gamma$ is the corresponding (thermalization, decoherence)
time.
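Once the resonance energies are computed, extracting these rates and times is immediate; a minimal sketch with hypothetical resonance data:

```python
# Hypothetical resonance energies eps_e^(s) (dimensionless), grouped by e.
eps = {
    0.0: [0.0 + 0.0j, 0.1 + 0.02j, 0.2 + 0.03j, -0.1 + 0.02j],  # eps_0^(1) = 0
    2.0: [2.05 + 0.01j, 1.95 + 0.04j],
    5.0: [5.10 + 0.05j],
}

# Thermalization rate: slowest decaying nonzero mode of the e = 0 cluster.
gamma_th = min(z.imag for z in eps[0.0][1:])

# Cluster decoherence rates and the corresponding times for e != 0.
rates = {e: min(z.imag for z in zs) for e, zs in eps.items() if e != 0.0}
times = {e: 1.0 / g for e, g in rates.items()}

print(gamma_th, rates, times)
```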
It should be understood that characterizing the dynamics via the cluster decoherence and relaxation times corresponds to a `coarse graining': matrix elements are grouped into clusters, and the dynamics of each cluster is effectively described by a decoherence (or relaxation) time. Expression \fer{42} gives much more detail: it gives the dynamics of each single matrix element. The breakup of the density matrix into individually evolving clusters may be advantageous especially in complex systems, where instead of two qubits, one deals with $N$-qubit registers.
\medskip
\noindent {\bf Remark on the markovian property.\ } In the first
point discussed after \fer{35}, we remark that within a cluster,
our approximate dynamics of matrix elements has the form of a Markov
process. In the theory of markovian master equations, one constructs
commonly an approximate dynamics given by a markovian quantum
dynamical semigroup, generated by a so-called Lindblad (or weak
coupling) generator \cite{BP}. Our representation is {\it not} in
Lindblad form (indeed, it is not even a positive map on density
matrices). To make the meaning of the markovian property of our
dynamics clear, we consider a fixed cluster $\cal C$ and denote the
associated pairs of indices by $(m_k,n_k)$, $k=1,\ldots, K$. Retaining only the main part in \fer{42}, and making use of
\fer{chko} we obtain for $t,s\geq 0$
\begin{equation}
\left[
\begin{array}{c}
{}[\rho_{t+s}]_{m_1 n_1}\\
\vdots\\
{}[\rho_{t+s}]_{m_K n_K}
\end{array}
\right]
= A_{\cal C}(t)
\left[
\begin{array}{c}
{}[\rho_{s}]_{m_1 n_1}\\
\vdots\\
{}[\rho_{s}]_{m_K n_K}
\end{array}
\right],
\label{clustermarkov}
\end{equation}
where $[A_{\cal C}(t)]_{m_jn_j, m_ln_l} =A_t(m_j,n_j;m_l,n_l)$, cf.
\fer{35}. Thus the dynamics of the vector having as components the
density matrix elements, has the semi-group property in the time
variable, with generator $G_{\cal C}:=\frac{\d}{\d t}A_{\cal C}(0)$,
\begin{equation}
\left[
\begin{array}{c}
{}[\rho_t]_{m_1 n_1}\\
\vdots\\
{}[\rho_t]_{m_K n_K}
\end{array}
\right]
= {\rm e}^{tG_{\cal C}}
\left[
\begin{array}{c}
{}[\rho_{0}]_{m_1 n_1}\\
\vdots\\
{}[\rho_{0}]_{m_K n_K}
\end{array}
\right].
\label{clustermarkov1}
\end{equation}
This is the meaning of the Markov property of the resonance dynamics.
While the fact that our resonance approximation is not of the weak coupling (Lindblad) form may be a disadvantage in certain
applications, it may also allow for a description of effects
not visible in a markovian master equation approach. Based on the results of \cite{yu1, BLC}, one may believe that revival of entanglement is a non-markovian effect, in the sense that it is not detectable under the markovian master equation
dynamics (however, we are not aware of any demonstration of this result). Nevertheless, as we show in our numerical analysis below, the resonance approximation captures this effect (see Figure \ref{f1}). We may explain this as follows. Each cluster is an (independent) Markov process with its own decay rate, and while some clusters may depopulate very quickly, the ones responsible for creating revival of entanglement may stay alive for much longer times, hence enabling that process. Clearly, on time-scales larger than the largest decoherence time of all clusters, the matrix is (approximately) diagonal, and typically no revival of entanglement is possible any more.
\section{Explicit resonance data}
We consider the Hamiltonian $H_{\rm S}$, \fer{n1}, with parameters
$0<B_1<B_2$ such that $B_2/B_1\neq 2$. This assumption is a
non-degeneracy condition which is not essential for the
applicability of our method (but lightens the exposition). The
eigenvalues of $H_{\rm S}$ are given by \fer{46} and the spectrum of
$L_{\rm S}$ is $\{e_1,\pm e_2,\pm e_3,\pm e_4,\pm e_5\}$, with
non-negative eigenvalues
\begin{equation}
e_1=0,\ e_2=2B_1,\ e_3=2B_2,\ e_4=2(B_2-B_1),\ e_5=2(B_1+B_2),
\label{45}
\end{equation}
having multiplicities $m_1=4$, $m_2=m_3=2$, $m_4=m_5=1$,
respectively. According to \fer{45}, the grouping of jointly
evolving elements of the density matrix above and on the diagonal
is given by\footnote{Since the density matrix is hermitian, it
suffices to know the evolution of the elements on and above the
diagonal.}
\begin{eqnarray}
{\cal C}_1 &:=&{\cal C}(e_1) =\{(1,1), (2,2), (3,3), (4,4)\}\label{47}\\
{\cal C}_2 &:=&{\cal C}(e_2) = \{ (1,3), (2,4)\}\label{48}\\
{\cal C}_3 &:=&{\cal C}(e_3) = \{ (1,2), (3,4)\}\label{49}\\
{\cal C}_4 &:=&{\cal C}(-e_4) = \{ (2,3)\}\label{50}\\
{\cal C}_5 &:=&{\cal C}(e_5) = \{ (1,4)\}\label{51}
\end{eqnarray}
There are five clusters of jointly evolving elements (on and above the diagonal). One cluster is the diagonal, represented by ${\cal C}_1$.
{} For $x>0$ and $h\in L^2({\mathbb R}^3,\d^3k)$ we define
\begin{equation}
\sigma_h(x) = 4 \pi x^2 \coth(\beta x) \int_{S^2} |h(2x,\Sigma)|^2\d\Sigma
\label{57}
\end{equation}
(spherical coordinates) and for $x=0$ we set
\begin{equation}
\sigma_h(0) = 4 \pi \lim_{x\downarrow 0} x^2 \coth(\beta x) \int_{S^2} |h(2x,\Sigma)|^2\d\Sigma.
\label{58}
\end{equation}
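For family (A) the quantities \fer{57}, \fer{58} are elementary; a sketch for the $n=0$, $m=1$ member, normalizing $\int_{S^2}|h_1|^2\d\Sigma=1$ (an assumption made purely for illustration):

```python
import numpy as np

beta = 1.0  # hypothetical dimensionless inverse temperature

# sigma_h(x) of eq. (57) for h(r,S) = r^{-1/2} e^{-r} h_1(S) with unit
# angular norm:
#   sigma_h(x) = 4 pi x^2 coth(beta x) (2x)^{-1} e^{-4x}
#              = 2 pi x coth(beta x) e^{-4x}.
def sigma(x):
    if x == 0.0:
        return 2.0 * np.pi / beta      # the x -> 0 limit, eq. (58)
    return 2.0 * np.pi * x * np.exp(-4.0 * x) / np.tanh(beta * x)

# The limit in eq. (58) is finite because x coth(beta x) -> 1/beta:
print(sigma(1e-6), sigma(0.0))  # both close to 2 pi for beta = 1
```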
Furthermore, let
\begin{eqnarray}
Y_2 &=& \big| \Im \left[ 4\kappa_1^2\kappa_2^2 r^2 -\i (\lambda_2^2+\mu_2^2)^2 \sigma^2_g(B_2) -4\i \kappa_1\kappa_2 \ (\lambda_2^2+\mu_2^2) \ r r_2' \right]^{1/2}\big|, \label{69}\\
Y_3 &=& \big| \Im \left[ 4\kappa_1^2\kappa_2^2 r^2 -\i (\lambda_1^2+\mu_1^2)^2 \sigma^2_g(B_1) -4\i \kappa_1\kappa_2 \ (\lambda_1^2+\mu_1^2) \ r r_1' \right]^{1/2}\big|,\label{70}
\end{eqnarray}
(principal value square root with branch cut on negative real axis) where
\begin{equation}
r={\rm P.V.}\int_{{\mathbb R}^3} \frac{|f|^2}{|k|}\d^3k,\qquad r_j' = 4B_j^2\int_{S^2}|g(2 B_j,\Sigma)|^2 \d\Sigma.
\label{pvr}
\end{equation}
The following results are obtained by an explicit calculation of level shift operators. Details are presented in \cite{mm}.
\medskip
{\bf Result on decoherence and thermalization rates.}
{\it
The thermalization and decoherence rates are given by
\begin{eqnarray}
\gamma^{\rm th} &=& \min_{j=1,2}\left\{ (\lambda_j^2+\mu_j^2)\sigma_g(B_j)\right\}+O(\varkappa^4) \label{52}\\
\gamma_2^{\rm dec} &=& \textstyle\frac 12 (\lambda^2_1+\mu_1^2)\sigma_g(B_1) + \textstyle\frac 12 (\lambda^2_2+\mu_2^2)\sigma_g(B_2) \nonumber\\
&& - Y_2 +(\kappa_1^2+\nu^2_1)\sigma_f(0) +O(\varkappa^4)\label{53}\\
\gamma_3^{\rm dec} &=& \textstyle\frac 12 (\lambda^2_1+\mu_1^2)\sigma_g(B_1) + \textstyle\frac 12 (\lambda^2_2+\mu_2^2)\sigma_g(B_2) \nonumber\\
&& - Y_3 +(\kappa_2^2+\nu^2_2)\sigma_f(0) +O(\varkappa^4)\label{54}\\
\gamma_4^{\rm dec} &=& (\lambda^2_1+\mu_1^2)\sigma_g(B_1) + (\lambda^2_2+\mu_2^2)\sigma_g(B_2) \nonumber\\
&& +\left[(\kappa_1-\kappa_2)^2 +\nu_1^2+\nu_2^2\right]\sigma_f(0) +O(\varkappa^4) \label{55}\\
\gamma_5^{\rm dec} &=& (\lambda^2_1+\mu_1^2)\sigma_g(B_1) + (\lambda^2_2+\mu_2^2)\sigma_g(B_2) \nonumber\\
&& +\left[(\kappa_1+\kappa_2)^2 +\nu_1^2+\nu_2^2\right]\sigma_f(0)+O(\varkappa^4) \label{56}
\end{eqnarray}
}
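The rates \fer{52}-\fer{56} are straightforward to assemble once the spectral quantities $\sigma_g(B_j)$, $\sigma_f(0)$, $r$, $r_j'$ are known; a sketch with hypothetical dimensionless inputs (NumPy's complex square root is the principal branch used in \fer{69}, \fer{70}):

```python
import numpy as np

# Hypothetical dimensionless couplings and spectral data, eqs. (57), (pvr).
lam = np.array([0.10, 0.12])
mu = np.array([0.05, 0.07])
kap = np.array([0.08, 0.06])
nu = np.array([0.04, 0.09])
sg = np.array([1.3, 1.1])        # sigma_g(B_1), sigma_g(B_2)
sf0, r, rp = 0.9, 0.4, np.array([0.7, 0.8])

a = (lam**2 + mu**2) * sg        # (lam_j^2 + mu_j^2) sigma_g(B_j), j = 1, 2

def Y(j):                        # eqs. (69), (70); principal square root
    z = (4 * kap[0]**2 * kap[1]**2 * r**2
         - 1j * (lam[j]**2 + mu[j]**2)**2 * sg[j]**2
         - 4j * kap[0] * kap[1] * (lam[j]**2 + mu[j]**2) * r * rp[j])
    return abs(np.sqrt(z).imag)

gamma_th = a.min()                                        # eq. (52)
g2 = 0.5 * a.sum() - Y(1) + (kap[0]**2 + nu[0]**2) * sf0  # eq. (53)
g3 = 0.5 * a.sum() - Y(0) + (kap[1]**2 + nu[1]**2) * sf0  # eq. (54)
g4 = a.sum() + ((kap[0] - kap[1])**2 + nu @ nu) * sf0     # eq. (55)
g5 = a.sum() + ((kap[0] + kap[1])**2 + nu @ nu) * sf0     # eq. (56)
print(gamma_th, g2, g3, g4, g5)
```

Note that $\gamma_5^{\rm dec}-\gamma_4^{\rm dec}=4\kappa_1\kappa_2\sigma_f(0)$ to this order, which the sketch reproduces.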
\noindent
{\bf Discussion.\ } 1. The thermalization rate depends on the energy-exchange parameters $\lambda_j$, $\mu_j$ only. This is natural, since an energy-conserving dynamics leaves the populations constant. If the interaction is purely energy-exchanging ($\kappa_j=\nu_j=0$), then all the rates depend {\it symmetrically} on the local and collective interactions, through $\lambda_j^2+\mu_j^2$. However, for purely energy-conserving interactions ($\lambda_j=\mu_j=0$) the rates are not symmetrical in the local and collective terms. (E.g. $\gamma^{\rm dec}_4$ depends only on the local interactions if $\kappa_1=\kappa_2$.) The terms $Y_2$, $Y_3$ are complicated nonlinear combinations of exchange and conserving terms. This shows that the effects of the energy-exchange and energy-conserving interactions are correlated.
2. We see from \fer{57}, \fer{58} that the leading orders of the rates \fer{52}-\fer{56} do not depend on the ultraviolet features of the form factors $f,g$. (However, $\sigma_{f,g}(0)$ depends on the infrared behaviour.) The coupling constants multiply quantities like $\sigma_g(B_j)$; e.g., $\lambda_j^2$ in \fer{52} multiplies $\sigma_g(B_j)$, i.e., the rates involve expressions of the form (see \fer{57})
\begin{equation}
\pi \lambda_j^2\int_{{\mathbb R}^3} \coth\big(\beta |k|/2\big)\ \big|g(|k|,\Sigma)\big|^2 \ \delta^{(1)}(|k|-2B_j) \ \d^3k.
\label{cutoff}
\end{equation}
The one-dimensional Dirac delta function appears due to energy
conservation in processes of order $\varkappa^2$, and $2B_j$ is (one
of) the Bohr frequencies of a qubit. Energy conservation thus
forces the form factors to be evaluated at finite momenta, so
an ultraviolet cutoff is not visible in these terms.
Nevertheless, we do not know how to control the error terms
$O(\varkappa^4)$ in \fer{52}-\fer{56} homogeneously in the cutoff.
3. The case of a single qubit interacting with a thermal bose gas has been extensively studied, and decoherence and thermalization rates for the spin-boson system have been found using different techniques, \cite{Leggett, SMS, Weiss}. We recover the spin-boson model by setting all our couplings in \fer{n3}-\fer{n7} to zero, except for $\lambda_1=\kappa_1\equiv\lambda$, and setting $f=g$. In this case, the spectral density $J(\omega)$ of the reservoir is linked to our quantity \fer{57} by
$$
J(\omega) = \frac{\sigma_h(\omega/2)}{\coth(\beta\omega/2)}.
$$
The relaxation rate is
$$
\gamma^{\rm th}=\frac 12\pi\lambda^2\coth(\beta B) J(2B),
$$
where $2B$ is the transition frequency of the qubit (in units where $\hbar=1$), see \fer{n1}. The decoherence rate is given by
$$
\gamma^{\rm dec} = \frac{\gamma^{\rm th}}{2} + \lambda^2\pi \sigma_h(0),
$$
where $\sigma_h(0)$ is the limit as $\omega\rightarrow 0$
of $\coth(\beta\omega)J(2\omega)$. These rates obtained with our resonance
method agree with those obtained in \cite{Leggett, SMS, Weiss}
by the standard Bloch-Redfield approximation.
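A sketch evaluating these spin-boson rates for the $n=0$, $m=1$ form factor of family (A) with unit angular norm (hypothetical parameter values); for this choice the spectral density is ohmic with a soft cutoff, $J(\omega)=\pi\omega\,{\rm e}^{-2\omega}$, independently of $\beta$:

```python
import numpy as np

beta, lam, B = 1.0, 0.1, 0.75   # hypothetical dimensionless parameters

# For h(r) = r^{-1/2} e^{-r} with unit angular norm, sigma_h(x) =
# 2 pi x coth(beta x) e^{-4x}, so the spectral density is
#   J(w) = sigma_h(w/2) / coth(beta w / 2) = pi w e^{-2w}.
def J(w):
    return np.pi * w * np.exp(-2.0 * w)

# Spin-boson rates quoted in the text (hbar = 1):
gamma_th = 0.5 * np.pi * lam**2 / np.tanh(beta * B) * J(2 * B)

# sigma_h(0) = lim_{w -> 0} coth(beta w) J(2w) = 2 pi / beta here.
sigma0 = 2.0 * np.pi / beta
gamma_dec = 0.5 * gamma_th + lam**2 * np.pi * sigma0

print(gamma_th, gamma_dec)
```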
\medskip
\noindent {\bf Remark on the limitations of the resonance approximation.\ } As mentioned in section \ref{sectevol}, the dynamics \fer{42} can only resolve the evolution of quantities larger than $O(\varkappa^2)$. For instance, assume that in an initial state of the two qubits, all off-diagonal density matrix elements are of the order of unity (relative to $\varkappa$). As time increases, the off-diagonal matrix elements decrease, and for times $t$ satisfying ${\rm e}^{-t\gamma^{\rm dec}_j} \leq O(\varkappa^2)$, the off-diagonal cluster ${\cal C}_j$ is of the same size $O(\varkappa^2)$ as the error in \fer{42}. Hence the evolution of this cluster can be followed accurately
by the resonance approximation for times $t< \ln(\varkappa^{-2})/\gamma^{\rm dec}_j\propto\frac{\ln(\varkappa^{-2})}{\varkappa^2 (1+T)}$, where $T$ is the temperature. Here, $T, \varkappa$ (and other parameters) are dimensionless. To describe the cluster in question for larger times, one has to push the perturbation theory to higher order in $\varkappa$. It is now clear that if a cluster is initially not populated, the resonance approximation does not give any information about the evolution of this cluster, other than saying that its elements will be $O(\varkappa^2)$ for all times.
Below we investigate analytically decay of entanglement (section \ref{disentsect}) and numerically creation of entanglement (section \ref{numres}). For the same reasons as just outlined, an analytical study of entanglement decay is possible if the initial entanglement is large compared to $O(\varkappa^2)$. However, the study of creation of entanglement is more subtle from this point of view, since one must detect the emergence of entanglement, presumably of order $O(\varkappa^2)$ only, starting from zero entanglement. We show in our numerical analysis that entanglement of size 0.3 is created {\it independently} of the value of $\varkappa$ (ranging from 0.01 to 1). We are thus sure that the resonance approximation does detect creation of entanglement, {\it even} if it may be of the same order of magnitude as the couplings. Whether this is correct for other quantities than entanglement is not clear, and so far, only numerical investigations seem to be able to give an answer. As an example where things can go wrong with the resonance approximation we mention that for small times, the approximate density matrix has {\it negative eigenvalues}. This makes the notion of concurrence of the approximate density matrix ill-defined for small times.
\section{Comparison between exact solution and resonance approximation: explicitly solvable model}
\label{sectcomp}
We consider the system with Hamiltonian \fer{n3}-\fer{n8} and $\lambda_1=\lambda_2=0$, $\mu_1=\mu_2=0$, $\kappa_1=\kappa_2=\kappa$ and $\nu_1=\nu_2=\nu$. This energy-conserving model can be solved explicitly \cite{PSE,MSB1} and has the {\bf exact solution}
\begin{equation}
[\rho_t]_{mn} = [ \rho_0]_{mn}\ {\rm e}^{-\i t(E_m-E_n)}
\ {\rm e}^{\i\kappa^2 a_{mn} S(t)}\ {\rm e}^{ -[\kappa^2 b_{mn} +\nu^2 c_{mn}]\Gamma(t)}
\label{exact}
\end{equation}
where
$$
(a_{mn}) = \left[
\begin{array}{cccc}
0 & -4 & -4 & 0\\
4 & 0 & 0 & 4\\
4 & 0 & 0 & 4\\
0 & -4 & -4 & 0
\end{array}
\right],
(b_{mn}) = \left[
\begin{array}{cccc}
0 & 4 & 4 & 16\\
4 & 0 & 0 & 4\\
4 & 0 & 0 & 4\\
16 & 4 & 4 & 0
\end{array}
\right],
(c_{mn}) = \left[
\begin{array}{cccc}
0 & 4 & 4 & 8\\
4 & 0 & 8 & 4\\
4 & 8 & 0 & 4\\
8 & 4 & 4 & 0
\end{array}
\right]
$$
and
\begin{eqnarray}
S(t) & = & \frac 12 \int_{{\mathbb R}^3} |f(k)|^2 \ \frac{|k|t - \sin(|k|t)}{|k|^2}\d^3k
\label{S}\\
\Gamma(t) &=& \int_{{\mathbb R}^3} |f(k)|^2\coth(\beta |k|/2)\frac{\sin^2(|k|t/2)}{|k|^2}\d^3 k.
\label{Gamma}
\end{eqnarray}
On the other hand, the {\it main} contribution (the sum) in \fer{42}
yields the {\bf resonance approximation} to the true dynamics, given
by
\begin{eqnarray}
{}[\rho_t]_{mm} &\doteq& [\rho_0]_{mm} \mbox{\qquad $m=1,2,3,4$}\label{l1}\\
{}[\rho_t]_{1n} &\doteq& {\rm e}^{-\i t(E_1-E_n)}\ {\rm e}^{-2\i t\kappa^2 r} {\rm e}^{-t(\kappa^2+\nu^2)\sigma_f(0)}[\rho_0]_{1n} \qquad n=2,3\label{l2}\\
{}[\rho_t]_{14} &\doteq& {\rm e}^{-\i t(E_1-E_4)}\ {\rm e}^{-t(4\kappa^2+2\nu^2)\sigma_f(0)}[\rho_0]_{14}\label{l3}\\
{}[\rho_t]_{23} &\doteq& {\rm e}^{-\i t(E_2-E_3)}\ {\rm e}^{-2t \kappa^2 \sigma_f(0)}[\rho_0]_{23}\label{l4}\\
{}[\rho_t]_{m4} &\doteq& {\rm e}^{-\i t(E_m-E_4)}\ {\rm e}^{2\i t\kappa^2 r}\ {\rm e}^{-t(\kappa^2+\nu^2)\sigma_f(0)}[\rho_0]_{m4} \qquad m=2,3 \label{l5}
\end{eqnarray}
The dotted equality sign $\doteq$ signifies that the left side equals the right side modulo an error term $O(\kappa^2+\nu^2)$, homogeneously in $t\geq 0$.\footnote{To arrive at \fer{l1}-\fer{l5} one calculates the $A_t$ in \fer{42} explicitly, to second order in $\kappa$ and $\nu$. The details are given in \cite{mm}.}
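For comparison, the resonance approximation \fer{l1}-\fer{l5} can be sketched in a few lines (again an illustrative helper; $r$ and $\sigma_f(0)$ are model parameters passed in as numbers):

```python
import numpy as np

def resonance_rho(rho0, E, t, kappa, nu, r, sigma):
    """Propagate rho0 according to (l1)-(l5): populations are frozen,
    off-diagonals acquire a phase and an exponential decay, both linear in t."""
    rho = np.asarray(rho0, dtype=complex)
    out = np.diag(np.diag(rho)).astype(complex)        # (l1): [rho_t]_mm = [rho_0]_mm
    # (extra phase, decay rate) for the upper-triangular elements, 0-based indices
    rules = {
        (0, 1): (-2 * kappa**2 * r, (kappa**2 + nu**2) * sigma),   # (l2)
        (0, 2): (-2 * kappa**2 * r, (kappa**2 + nu**2) * sigma),   # (l2)
        (0, 3): (0.0, (4 * kappa**2 + 2 * nu**2) * sigma),         # (l3)
        (1, 2): (0.0, 2 * kappa**2 * sigma),                       # (l4)
        (1, 3): (2 * kappa**2 * r, (kappa**2 + nu**2) * sigma),    # (l5)
        (2, 3): (2 * kappa**2 * r, (kappa**2 + nu**2) * sigma),    # (l5)
    }
    for (m, n), (phase, rate) in rules.items():
        factor = np.exp(-1j * t * (E[m] - E[n]) + 1j * t * phase - t * rate)
        out[m, n] = factor * rho[m, n]
        out[n, m] = np.conj(out[m, n])                 # keep rho_t Hermitian
    return out
```

Note that the trace and Hermiticity of $\rho_t$ are preserved exactly in this approximation, since the populations are frozen and conjugate off-diagonals evolve with conjugate factors.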
Clearly the decoherence function $\Gamma(t)$ and the phase $S(t)$ are nonlinear in $t$ and depend on the ultraviolet behaviour of $f$. On the other hand, our resonance theory approach yields a representation of the dynamics in terms of a superposition of exponentially decaying factors. From \fer{exact} and \fer{l1}-\fer{l5} we see that the resonance approximation is obtained from the exact solution by making the replacements
\begin{eqnarray}
S(t) &\mapsto& \textstyle\frac{1}{2} r t,\label{mm30}\\
\Gamma(t) &\mapsto& \textstyle\frac{1}{4} \sigma_f(0) t.\label{mm31}
\end{eqnarray}
We emphasize again that, according to \fer{42}, the difference
between the exact solution and the one given by the resonance
approximation is of the order $O(\kappa^2+\nu^2)$, homogeneously in
time, and where $O(\kappa^2+\nu^2)$ depends on the ultraviolet
behaviour of the couplings. This shows in particular that up to
errors of $O(\kappa^2+\nu^2)$, the dynamics of density matrix
elements is simply given by a phase change and a possibly decaying
exponential factor, both linear in time and entirely determined by
$r$ and $\sigma_f(0)$. Of course, the advantage of the resonance
approximation is that even for not exactly solvable models, we can
approximate the true (unknown) dynamics by an explicitly calculable
superposition of exponentials with exponents linear in time,
according to \fer{42}. Let us finally mention that one easily sees
that
$$
\lim_{t\rightarrow\infty}S(t)/t=r/2 \mbox{\quad and\quad} \lim_{t\rightarrow\infty}\Gamma(t)/t=\sigma_f(0)/4,
$$
so \fer{mm30} and \fer{mm31} may indicate that the resonance approximation is closer to the true dynamics for large times -- but nevertheless, our analysis proves that the two are close together ($O(\kappa^2+\nu^2)$) {\it homogeneously} in $t\geq 0$.
\section{Disentanglement}
\label{disentsect}
In this section we apply the resonance method to obtain estimates on
survival and death of entanglement under the full dynamics
\fer{n3}-\fer{n7} and for an initial state of the form
$\rho_{\rm S}\otimes\rho_{{\rm R}_1}\otimes \rho_{{\rm R}_2}\otimes\rho_{{\rm R}_3}$,
where $\rho_{\rm S}$ has nonzero entanglement and the reservoir initial
states are thermal, at fixed temperature $T=1/\beta>0$. Let $\rho$
be the density matrix of two spin-$\frac{1}{2}$ systems (qubits). The {\it concurrence}
\cite{Wo, BDSW} is defined by
\begin{equation}
C(\rho) = \max\{ 0, \sqrt{\nu_1}-\big[\sqrt{\nu_2}+\sqrt{\nu_3}+\sqrt{\nu_4}\big]\},
\label{60}
\end{equation}
where $\nu_1\geq\nu_2\geq\nu_3\geq\nu_4\geq0$ are the eigenvalues of the matrix
\begin{equation}
\label{nn60}
\xi(\rho) = \rho(S^y\otimes S^y) \rho^*(S^y\otimes S^y).
\end{equation}
Here, $\rho^*$ is obtained from $\rho$ by representing the latter in the energy basis and then taking the elementwise complex conjugate, and $S^y$ is the Pauli matrix
$
S^y =
\left[
\begin{array}{cc}
0 & -\i \\
\i & 0
\end{array}
\right]$.
The concurrence is related in a monotone way to the {\it entanglement of formation}, and \fer{60} takes values in the interval $[0,1]$. If $C(\rho)=0$ then
the state $\rho$ is separable, meaning that $\rho$ can be written
as a mixture of pure product states. If $C(\rho)=1$ we call $\rho$ maximally entangled.
Let $\rho_0$ be an initial state of $S$. The smallest number $t_0\geq 0$ s.t. $C(\rho_t)=0$ for all $t\geq t_0$ is called the {\em disentanglement time} (also `entanglement sudden death time', \cite{yu1,HZ}). If $C(\rho_t)>0$ for all $t\geq 0$ then we set $t_0=\infty$. The disentanglement time depends on the initial state.
\medskip
Consider the family of pure initial states of ${\rm S}$ given by
$$
\rho_0=|\psi\rangle\langle\psi|, \mbox{\qquad with \qquad} \psi = \frac{a_1}{\sqrt{|a_1|^2+|a_2|^2}}\, |++\rangle \ + \frac{a_2}{\sqrt{|a_1|^2+|a_2|^2}}\,|--\rangle,
$$
where $a_1, a_2\in{\mathbb C}$ are arbitrary (not both zero). The initial concurrence is
$$
C(\rho_0) = 2\frac{|\Re \,a_1 a_2^*|}{|a_1|^2+|a_2|^2},
$$
which covers all values from zero (e.g. $a_1=0$) to one (e.g. $a_1=a_2\in{\mathbb R}$). According to \fer{42}, the
density matrix of ${\rm S}$ at time $t\geq 0$ is given by
\begin{equation}
\rho_t =
\left[
\begin{array}{cccc}
p_1 & 0 & 0 & \alpha\\
0 & p_2 & 0 & 0 \\
0 & 0 & p_3 & 0 \\
\alpha^* & 0 & 0 & p_4
\end{array}
\right] +O(\varkappa^2),
\label{m3}
\end{equation}
with remainder uniform in $t$, and where $p_j=p_j(t)$ and
$\alpha=\alpha(t)$ are given by the main term on the r.h.s. of
\fer{42}. The initial conditions are $p_1(0) = \frac{|a_1|^2}{|a_1|^2+|a_2|^2}$, $p_2(0)=p_3(0)=0$, $p_4(0) = \frac{|a_2|^2}{|a_1|^2+|a_2|^2}$, and $\alpha(0) = \frac{a_1^* a_2}{|a_1|^2+|a_2|^2}$. We set
\begin{equation}
p:= p_1(0)\in [0,1]
\label{p}
\end{equation}
and note that
$p_4(0) =1-p$ and $|\alpha(0)| = \sqrt{p(1-p)}$. In terms of $p$, the initial concurrence is
$C(\rho_0)=2\sqrt{p(1-p)}$.
Let us set
\begin{equation}
\delta_2 := (\lambda^2_1+\mu^2_1)\sigma_g(B_1), \qquad \delta_3 := (\lambda^2_2+\mu^2_2)\sigma_g(B_2),
\label{delta2}
\end{equation}
\begin{equation}
\delta_5 := \delta_2+\delta_3 +\left[(\kappa_1+\kappa_2)^2 +\nu_1^2+\nu_2^2\right]\sigma_f(0).
\label{deltas}
\end{equation}
\begin{equation}
\delta_+ := \max\{\delta_2,\delta_3\},\qquad \delta_-:=\min\{\delta_2,\delta_3\}.
\label{deltapm}
\end{equation}
An analysis of the concurrence of \fer{m3}, where the $p_j(t)$ and $\alpha(t)$ evolve according to \fer{42} yields the following bounds on disentanglement time.
\medskip
{\bf Result on disentanglement time.\ }
{\it Take $p\neq 0,1$ and suppose that $\delta_2, \delta_3>0$. There is a constant $\varkappa_0>0$
(independent of\ $p$) such that we have:
\medskip
{\bf A.\ } {\em (Upper bound.)} There is a constant $C_A>0$ (independent of\
$p,\varkappa$) s.t. $C(\rho_t)=0$ for all $t\geq t_A$, where
\begin{equation}
t_A:= \max\left\{ \frac{1}{\delta_5}\ln\left[C_A
\frac{\sqrt{p(1-p)}}{\varkappa^2}\right],
\frac{1}{\delta_2+\delta_3} \ln\left[C_A
\frac{p(1-p)}{\varkappa^2}\right],
\frac{C_A}{\delta_2+\delta_3}\right\}. \label{m17}
\end{equation}
{\bf B.\ } {\em (Lower bound.)} There is a constant $C_B>0$ (independent of $p$, $\varkappa$) s.t. $C(\rho_t)>0$ for all $t\leq t_B$, where
\begin{equation}
t_B:=\min\left\{ \frac{1}{\delta_2+\delta_3}\ln[1+ C_B p(1-p)], \frac{1}{\delta_+}\ln\left[1+C_B\varkappa^2 \right], \frac{C_B}{\delta_5-\delta_-/2}
\right\}. \label{m18}
\end{equation}
}
Bounds \fer{m17} and \fer{m18} are obtained by a detailed analysis of \fer{60}, with $\rho$ replaced by $\rho_t$, \fer{m3}. This analysis is quite straightforward but rather lengthy. Details are presented in \cite{mm}.
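To get a feeling for the bounds, one can evaluate \fer{m17} and \fer{m18} numerically. The constants $C_A$, $C_B$ are not explicit in the result; setting them to one below is an illustrative assumption only:

```python
import math

def disentanglement_bounds(p, kappa_sq, d2, d3, d5, CA=1.0, CB=1.0):
    """Evaluate t_A (m17) and t_B (m18); CA = CB = 1 is an arbitrary choice."""
    d_plus, d_minus = max(d2, d3), min(d2, d3)
    # (m17): upper bound on the disentanglement time
    tA = max(math.log(CA * math.sqrt(p * (1 - p)) / kappa_sq) / d5,
             math.log(CA * p * (1 - p) / kappa_sq) / (d2 + d3),
             CA / (d2 + d3))
    # (m18): lower bound on the entanglement survival time
    tB = min(math.log(1 + CB * p * (1 - p)) / (d2 + d3),
             math.log(1 + CB * kappa_sq) / d_plus,
             CB / (d5 - d_minus / 2))
    return tA, tB
```

For instance, with the illustrative values $p=1/2$, $\varkappa^2=0.01$, $\delta_2=\delta_3=0.01$ and $\delta_5=0.04$, one obtains $t_B\approx 1.0$ and $t_A\approx 161$, consistent with $t_B\leq t_A$.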
\medskip
\noindent
{\bf Discussion.\ } 1. The result gives disentanglement bounds for the true dynamics of the qubits for interactions which are not integrable.
2. The disentanglement time is {\it finite}. This follows from $\delta_2, \delta_3>0$ (which in turn implies that the total system approaches equilibrium as $t\rightarrow\infty$). If the system does not thermalize then it can happen that entanglement stays nonzero for all times (it may decay or even stay constant) \cite{yu1, PR}.
3. The rates $\delta$ are of order $\varkappa^2$. Both $t_A$ and $t_B$ increase with decreasing coupling strength.
4. Bounds \fer{m17} and \fer{m18} are not optimal. The disentanglement time bound \fer{m17} depends on both kinds of couplings. The contribution of each interaction decreases $t_A$ (the bigger the noise the quicker entanglement dies). The bound on entanglement survival time \fer{m18} does not depend on the energy-conserving couplings.
\section{Entanglement creation}
\label{sectentcre}
Consider an initial condition $\rho_{\rm S}\otimes\rho_{{\rm R}_1}\otimes
\rho_{{\rm R}_2}\otimes\rho_{{\rm R}_3}$, where $\rho_{\rm S}$ is the initial state
of the two qubits, and where the reservoir initial states are
thermal, at fixed temperature $T=1/\beta>0$.
Suppose that the qubits
are not coupled to the collective reservoir ${\rm R}_3$, but only to the
local ones, via energy conserving and exchange interactions (local
dynamics). It is not difficult to see that then, if $\rho_{\rm S}$ has
zero concurrence, its concurrence will remain zero for all times.
This is so since the dynamics factorizes into parts for ${\rm S}_1+{\rm R}_1$
and ${\rm S}_2+{\rm R}_2$, and acting upon an unentangled initial state does
not change entanglement. In contrast, for certain {\it entangled}
initial states $\rho_{\rm S}$, one observes death and revival of
entanglement \cite{BLC}: the initial concurrence of the qubits
decreases to zero and may stay zero for a certain while, but it then
grows again to a maximum (lower than the initial concurrence) and
decreases to zero again, and so on. The interpretation is that
concurrence is shifted from the qubits into the (initially
unentangled) reservoirs, and if the latter are not Markovian,
concurrence is shifted back to the qubits (with some loss).
Suppose
now that the two qubits are coupled only to the collective
reservoir, and not to the local ones. Braun \cite{Braun} has
considered the explicitly solvable model (energy-conserving
interaction), as presented in Section \ref{sectcomp} with
$\kappa=1$, $\nu=0$.\footnote{In fact, Braun uses this model and
sets the Hamiltonian of the qubits equal to zero. This has no
influence on the evolution of concurrence, since the free dynamics
of the qubits can be factored out of the total dynamics
(energy-conserving interaction), and a dynamics of ${\rm S}_1$ and ${\rm S}_2$
which is a product does not change the concurrence.} Using the exact
solution \fer{exact}, Braun calculates the smallest eigenvalue of
the partial transpose of the density matrix of the two qubits, with
$S$ and $\Gamma$ considered as non-negative parameters. For the
initial product state where qubits 1 and 2 are in the states
$\frac{1}{\sqrt 2}(|+\rangle-|-\rangle)$ and $\frac{1}{\sqrt
2}(|+\rangle+|-\rangle)$ respectively, i.e.,
\begin{equation}
\rho_{\rm S} = \frac{1}{4}
\left[
\begin{array}{cccc}
1 & 1 & -1 & -1\\
1 & 1 & -1 & -1 \\
-1 & -1 & 1 & 1\\
-1 & -1 & 1 & 1
\end{array}
\right],
\label{instate}
\end{equation}
it is shown that for small values of $\Gamma$ (less than 2,
roughly), the negativity of the smallest eigenvalue of the partial
transpose oscillates between zero and -0.5 for $S$ increasing from
zero. As $\Gamma$ takes values larger than about 3, the smallest
eigenvalue is zero (regardless of the value of $S$). According to
the Peres-Horodecki criterion \cite{peres, horodecki}, the qubits
are entangled exactly when the smallest eigenvalue is strictly below
zero. Therefore, taking into account \fer{S} and \fer{Gamma},
Braun's work \cite{Braun} shows that for small times ($\Gamma$
small) the collective environment (with energy-conserving
interaction) induces first creation, then death and revival of
entanglement in the initially unentangled state \fer{instate}, and
that for large times ($\Gamma$ large), entanglement disappears.
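The Peres-Horodecki test used here is straightforward to implement. A sketch (the partial transpose is taken over the second qubit; for two qubits the criterion is necessary and sufficient):

```python
import numpy as np

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose of a two-qubit state.

    A strictly negative value certifies entanglement (Peres-Horodecki)."""
    r = np.asarray(rho, dtype=complex).reshape(2, 2, 2, 2)
    pt = r.transpose(0, 3, 2, 1).reshape(4, 4)   # transpose the second-qubit indices
    return float(np.min(np.linalg.eigvalsh(pt)))
```

For a maximally entangled state the minimum is $-1/2$, matching the value $-0.5$ quoted above, while for the product state \fer{instate} it is zero.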
\medskip
{\bf Resonance approximation.\ } The main term of the r.h.s. of \fer{42} can be calculated explicitly, and we give in Appendix A the concrete expressions. How does concurrence evolve under this approximate evolution of the density matrix?
(1) {\it Purely energy-exchange coupling.}\
In this situation we have $\kappa=\nu=0$. The explicit expressions (Appendix A)
show that the density matrix elements $[\rho_t]_{mn}$ in the resonance
approximation depend on $\lambda$ (collective) and $\mu$ (local) through
the symmetric combination $\lambda^2+\mu^2$ only. It follows that
the dominant dynamics \fer{42} (the true dynamics modulo an error
term $O(\varkappa^2)$ homogeneously in $t\geq 0$) is {\it the same}
if we take purely collective dynamics ($\mu=0)$ or purely local
dynamics ($\lambda =0$).
{\it In particular, creation of entanglement under purely collective and
purely local energy-exchange dynamics is
{\em the same}, modulo $O(\varkappa^2)$}.
For instance, for the initial state \fer{instate},
collective energy-exchange couplings can create entanglement of at
most $O(\varkappa^2)$, since local energy-exchange
couplings do not create any entanglement in this initial state.
(2) {\it Purely energy-conserving coupling.}\ In this situation we have $\lambda=\mu=0$. The evolution of the density matrix elements is not symmetric as a function of the coupling constants $\kappa$ (collective) and $\nu$ (local). One may be tempted to conjecture that concurrence is independent of the local coupling parameter $\nu$, since it is so in the absence of collective coupling ($\kappa=0$).
However, for $\kappa\neq 0$, concurrence {\it depends} on $\nu$ (see numerical results below). We can understand this as follows. Even if the initial state is unentangled, the collective coupling quickly creates a little entanglement, so the local environment no longer sees a product state and starts processes of creation, death and revival of entanglement.
(3) {\it Full coupling.}\ In this case none of $\kappa, \lambda, \mu, \nu$ vanishes. Matrix elements evolve as complicated functions of these parameters, showing that the effects of the different interactions are correlated.
\section{Numerical Results}
\label{numres}
In the following, we ask whether the resonance approximation is
sufficient to detect creation of entanglement. To this end, we take
the initial condition \fer{instate} (zero concurrence) and study
numerically its evolution under the approximate resonance evolution
(Appendices A, B), and calculate concurrence as a function of time.
Let us first consider the case of purely energy conserving
collective interaction, namely $\lambda=\mu=\nu=0$ and only $\kappa
\ne 0$.
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f1.eps}
\end{center}
\caption{{\small Energy conserving collective interaction $\lambda=\mu=\nu=0$.
a) Concurrence as a function of time for different $\kappa$ values as indicated in the legend.
b) The same as a) but in the renormalized time $\kappa^2 t $.
}}
\label{f1}
\end{figure}
Our simulations (Figure \ref{f1}a) show that
a concurrence of approximately 0.3 is created, independently of the value of
$\kappa$ (ranging from 0.01 to 1). It is clear from the graphs that the effect of
varying $\kappa$ consists only in a time shift. This shift of time is particularly accurate, as can be seen in Fig.~\ref{f1}b, where the three curves drawn in a)
collapse to a single curve under the time rescaling $t \to \kappa^2 t $.
In particular, the maximum concurrence is attained at times $t_0\approx 0.5\kappa^{-2}$. We also point
out that the revived concurrence has very small amplitude (approximately 15 times smaller than
the maximum concurrence)
and takes its maximum at $t_1\approx 2.1\kappa^{-2}$. Even though the amplitude of the revived concurrence is small as compared to $\kappa^2$, the graphs show that it is {\it independent} of $\kappa$, and hence our resonance dynamics does reveal concurrence revival.
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f2.eps}
\end{center}
\caption{{\small Energy conserving collective and local interaction $\lambda=\mu=0$.
a) Concurrence as a function of time for fixed collective interaction $\kappa = 0.01$
and different local interaction $\nu$ as indicated in the legend.
b) Variation of the maximum of concurrence as a function of the local
interaction strength $\nu$ for different collective interaction strengths $\kappa$ as indicated in the legend.
}}
\label{f2}
\end{figure}
When switching on the local energy conserving coupling, $\nu\neq 0$, we see
in Fig.~\ref{f2}a,
that the maximum of concurrence decreases with increasing $\nu$.
Therefore, the effect of a local coupling is to reduce the entanglement. It is also interesting
to study the dependence of the maximal value of the concurrence, $C_{\rm max}$, as a function of the energy-conserving interaction parameters. This is done in Fig.~\ref{f2}b, where $C_{\rm max}$ is plotted as a function of the local interaction $\nu$, for different fixed collective
couplings $\kappa$. The graphs show that as the local coupling $\nu$ is increased to the value of the collective coupling $\kappa$, $C_{\rm max}$ becomes zero. This means that if the local coupling exceeds the collective one, then there is no creation of concurrence. We may interpret this as a competition between the concurrence-reducing tendency of the local coupling (apart from very small revival effects) and the concurrence-creating tendency of the collective coupling (for not too long times). If the local coupling exceeds the collective one, then concurrence is prevented from building up.
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f3.eps}
\end{center}
\caption{{\small Energy conserving collective and local interaction $\lambda=\mu=0$.
Rescaled concurrence $C(\rho)/C_{max}$ as function of time for fixed local interaction $\nu = 0.005$
and different collective interaction $\kappa > \nu $ (as indicated in the legend) as a function
of the rescaled time $(\kappa^2 + \nu^2) t $.
}}
\label{f3}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f4.eps}
\end{center}
\caption{{\small Energy-exchanging collective and local interactions $\lambda=\mu\ne 0$.
Concurrence $C(\rho)$ as function of time for fixed energy-conserving
collective interactions $\kappa=0.02$, $\nu=0$
and different energy-exchanging couplings $\lambda$ as indicated in the legend.
Here we used $B_1 = 1$, $B_2 = 1.25$, and $\beta = 1$.
}}
\label{f4}
\end{figure}
Looking at Fig.~\ref{f2}, it is clear that the effect of the local coupling is not only
to decrease concurrence but also to induce a shift of time, similarly to the effect
of the collective coupling $\kappa$. Indeed, taking as a variable the rescaled
concurrence $C(\rho)/C_{max}$, one can see that the approximate scaling $(\kappa^2+\nu^2)t$ is at work, see Fig.~\ref{f3}.
{\it We conclude that both local and collective energy conserving interactions produce a cooperative time shift of the entanglement creation, but only the local interaction can destroy entanglement creation. There is no entanglement creation for $\nu > \kappa$.}
\medskip
Let us now consider an additional energy exchange coupling $\lambda, \mu \ne 0$.
Since these parameters appear in the resonance dynamics only in
the combination $\lambda^2+\mu^2$, see Appendix A, we set without
losing generality $\lambda = \mu$. We plot in Fig.~\ref{f4} the time evolution of
the concurrence, at fixed energy-conserving couplings $\kappa=0.02$ and $\nu=0$,
for different values of the energy exchange coupling $\lambda$.
In this case we have chosen $B_1=1$ which
corresponds to $\omega_0=\omega_1/2$, where $\omega_1$ is a
transition frequency of the first qubit. We also used the
conditions: $\sigma_g(B_1)=r_g(B_1)=1$, which lead to the
renormalization of the interaction constants. The relations between
$\sigma_g(B_2)$ and $\sigma_g(B_1)$, and $r_g(B_2)$ and $r_g(B_1)$
are discussed in Appendix B.
Figure \ref{f4} shows that the effect of the energy exchanging coupling is to shift
slightly the time where concurrence is maximal and, at the same time, to decrease the amplitude of concurrence for each fixed time. This feature is analogous to the effect of local energy-conserving interactions, as discussed above.
Unfortunately, it is quite difficult in this case to extract the
threshold values of $\lambda$ at which the creation of concurrence is prevented
for all times. The difficulty comes from the fact that for larger values of $\lambda$, the
concurrence is very small and the negative eigenvalues of order $O(\varkappa^2)$ do not allow
a reliable calculation.
This picture does not change much if a local energy-conserving interaction $\nu < \kappa$ is added. In Fig.~\ref{f5} we show
the time shift of the maximal concurrence,
$\Delta t = t_{max}(\lambda) - t_{max} (\lambda=0)$, as a function of the
energy-exchanging coupling $\lambda$ (a),
and the behavior of the maximal concurrence as a function of the same parameter
$\lambda$ for two different values of the local
coupling $\nu$ (b). It appears evident that the role played by the energy-exchange
coupling is very similar to that played by the local energy-conserving one.
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f5.eps}
\end{center}
\caption{{\small Energy-exchanging collective and local interaction $\lambda=\mu\ne 0$.
a) Time shift induced by energy-exchanging coupling, for the same
energy conserving collective coupling $\kappa=0.02$ and different local couplings $\nu$
as indicated in the legend. b) Decay of the maximal concurrence as a function of $\lambda$,
for the same cases as (a). Magnetic fields and temperature are the same as in Fig.~\ref{f4}.
}}
\label{f5}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\epsfxsize=85mm
\epsfbox{f6.eps}
\end{center}
\caption{{\small Energy-exchanging collective and local interaction $\lambda=\mu\ne 0$.
Rescaled concurrence $C(\rho)/C_{max}$ {\it vs } time $t$, for different
$\lambda$ values. Here, $\kappa = 0.02$ and $\nu=0$.
Magnetic fields and temperature are the same as in Fig.~\ref{f4}.
}}
\label{f6}
\end{figure}
Let us comment about concurrence revival. The effect of a collective
energy-conserving coupling consists of creating entanglement, destroying it
and creating it again but with a smaller amplitude. Generally speaking,
an energy-exchanging coupling, if extremely small, does not change this picture.
Nevertheless, it is important to stress that the damping effect the energy-exchange
coupling has on the concurrence amplitude is stronger on the revived
concurrence than on the initially created one. This is shown in
Fig.~\ref{f6}, where the renormalized concurrence $C(\rho)/C_{max}$ is plotted
for different $\lambda $ values.
For these parameter values, only a very small
coupling $\lambda \leq 0.001$ will allow revival of concurrence.
In the calculation of concurrence, the square roots of the eigenvalues of the matrix
$\xi(\rho)$ (\ref{nn60}) must be taken. As explained before, the non-positivity, to order $O(\varkappa^2)$,
of the density matrix $\rho$ carries over to the eigenvalues of the matrix
$\xi(\rho)$.
When this happens ($\nu_i < 0$), we simply set $\nu_i = 0$ in the numerical calculations. This yields
an approximate (order $O(\varkappa^2)$) concurrence which exhibits spurious effects,
especially for small time, when concurrence is small. These effects are particularly
evident in Fig.~\ref{f6}, for small time, where artificial oscillations occur, instead
of an expected smooth behavior. In contrast, the revival of entanglement revealed in Fig.~\ref{f6} varies smoothly in $\lambda$, indicating that this effect is not an artifact of the approximation.
\section{Conclusion}
We consider a system of two qubits interacting with local and
collective thermal quantum reservoirs. Each qubit is coupled to its
local reservoir by two channels, an energy-conserving and an
energy-exchange one. The qubits are collectively coupled to a third
reservoir, again through two channels. This is thus a versatile
model, describing local and collective, energy-conserving and
energy-exchange processes.
We present an approximate dynamics which describes the evolution of
the reduced density matrix for all times $t\geq 0$, modulo an error
term $O(\varkappa^2)$, where $\varkappa$ is the typical coupling
strength between a single qubit and a single reservoir. The error
term is controlled rigorously and for all times. The approximate
dynamics is Markovian and shows that different parts of the reduced
density matrix evolve together, but independently from other parts.
This partitioning of the density matrix into {\it clusters} induces
a classification of decoherence times -- the time-scales during
which a given cluster stays populated. We obtain explicitly the
decoherence and relaxation times and show that their leading
expressions (lowest nontrivial order in $\varkappa$) are independent
of the ultraviolet behaviour of the system, and in particular,
independent of any ultraviolet cutoff, artificially needed to make
the models mathematically well defined.
We obtain analytical estimates on entanglement death and
entanglement survival times for a class of initially entangled
qubit states, evolving under the full, not explicitly solvable
dynamics. We investigate numerically the phenomenon of entanglement
creation and show that the approximate dynamics, even though it is
Markovian, {\it does} reveal creation, sudden death and revival of
entanglement. We encounter in the numerical study a disadvantage of
the approximation, namely that it is not positivity preserving,
meaning that for small times, the approximate density matrix has
slightly negative eigenvalues.
The above-mentioned cluster-partitioning of the density matrix is
valid for general $N$-level systems coupled to reservoirs. We think
this clustering will play a useful and important role in the
analysis of quantum algorithms. Indeed, it allows one to separate
``significant" from ``insignificant" quantum effects, especially
when dealing with large quantum registers for performing quantum
algorithms. Depending on the algorithm, fast decay of some blocks of
the reduced density matrix elements can still be tolerable for
performing the algorithm with high fidelity.
We point out a further possible application of our method to
novel quantum measuring technologies based on superconducting
qubits. Using two superconducting qubits as measuring devices
together with the scheme considered in this paper will allow one to
extract not only the spectral density of noise, but also possible
quantum correlations imposed by the environment. Modern methods of
quantum state tomography will make it possible to resolve these issues.
\section{Introduction}
Research in natural language processing (NLP) has traditionally focused on developing models for English and a small set of other languages with
large amounts of data (see~Figure \ref{fig:stats}, bottom).
While the lack of data is generally cited as the key reason for the lack of progress in NLP for underrepresented languages \cite{Hu2020xtreme,Joshi2020}, we argue that another factor relates to the diversity and the lack of understanding of the linguistic characteristics of such languages.
Through the lens of the languages spoken in Indonesia, the world's second-most linguistically diverse country, we seek to illustrate
the challenges
in applying NLP technology to such a diverse pool of languages.
\begin{figure}[!ht]
\centering
\begin{subfigure}{0.99\linewidth}
\centering
\includegraphics[width=\linewidth]{images/local.pdf}
\end{subfigure}
\begin{subfigure}{0.99\linewidth}
\centering
\includegraphics[width=\linewidth]{images/paper_per_speaker.pdf}
\end{subfigure}
\caption{Following \citet{Joshi2020}, we compile the ACL Anthology to count the distribution of published works that mention languages spoken in Indonesia. \textbf{Top:} Distribution of papers over 20 years. \textbf{Bottom:} Number of papers per million speakers. We compare languages spoken in Europe, Asia, and Indonesia.
}
\label{fig:stats}
\vspace{-0.5cm}
\end{figure}
Indonesia is the 4th most populous nation globally, with 273 million people spread over 17,508 islands.
There are more than 700 languages spoken in Indonesia, equal to 10\% of the world's languages, second only to Papua New Guinea~\cite{ethnologue}. However, most of these languages are not well documented in the literature; many are not formally taught, and no established standard exists across speakers \cite{novitasari-etal-2020-cross}.
Many of them are decreasing in use, as Indonesian (\textit{Bahasa Indonesia}), the national language, is more frequently used as the primary language across the country.
This process may ultimately result in a monolingual society~\cite{cohn2014local}.
\begin{figure*}[!t]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\linewidth]{images/language_vitality.pdf}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=.95\linewidth]{images/ethnologue_count.pdf}
\end{minipage}
\caption{Distribution of 700+ languages spoken in Indonesia according to Ethnologue~\cite{ethnologue}. \textbf{Left:} Language vitality. \textbf{Right:} Speaker count.}
\label{fig:vitality}
\end{figure*}
Among more than 700 Indonesian local languages, many are threatened. 440 languages are listed as endangered and 12 as extinct according to data from Ethnologue~\cite{ethnologue}, as illustrated in Figure~\ref{fig:vitality}. \citet{anindyatri} found nearly half of a sample of 98 Indonesian local languages to be endangered, while \citet{Esch2022} observed that 71 of 151 Indonesian local languages have fewer than 100k speakers.
\begin{table}[!t]
\centering
\small
\begin{tabular}{lcr}
\toprule
\textbf{Language} & \textbf{ISO} & \textbf{\# Speakers }\\
\midrule
Indonesian & ind & 198 M \\
Javanese & jav & 84 M \\
Sundanese / Sunda & sun & 34 M \\
Madurese / Madura & mad & 7 M \\
Minangkabau & min & 6 M \\
Buginese & bug & 6 M \\
Betawi & bew & 5 M \\
Acehnese / Aceh & ace & 4 M \\
Banjar & bjn & 4 M \\
Balinese & ban & 3 M \\
Palembang Malay (Musi) & mus & 3 M \\
\bottomrule
\end{tabular}
\caption{The number of speakers for Indonesian and top-10 most spoken local languages in Indonesia \cite{ethnologue}.}
\label{tab:population}
\end{table}
Table~\ref{tab:population} lists the names of the 10 most spoken local languages in Indonesia~\cite{ethnologue}. Javanese and Sundanese are at the top with 84M and 34M speakers, respectively,
while Madurese, Minangkabau, and Buginese each have 6--7M speakers. Despite their large speaker populations, these local languages are poorly represented in the NLP literature.
Compared to Indonesian, the number of research papers mentioning these languages has barely increased over the past 20 years (Figure \ref{fig:stats}, top).
Furthermore, compared to their European counterparts, Indonesian languages are drastically understudied (Figure \ref{fig:stats}, bottom).
This is true even for Indonesian, which has nearly 200M speakers.
Language technology should be accessible to everyone in their native languages \cite{elra}, including Indonesians.
In the context of Indonesia, language technology research offers some benefits.
First, language technology is a potential peacemaking tool in a multi-ethnic country, helping Indonesians understand each other better and avoid the ethnic conflicts of the past~\cite{bertrand2004nationalism}. On a
larger
scale, language technology promotes language use~\cite{elra} and helps language preservation. Despite these benefits, following \citet{bird-2020-decolonising}, we recommend a careful assessment of individual usage scenarios of language technology, so they are implemented for the good of the local population.
For language technology to be useful in the Indonesian context, it
additionally has to account for the
dialects of local languages. Language dialects in Indonesia are influenced by the geographical location and regional culture of their speakers~\cite{vander2015dichotomy} and thus often differ substantially in morphology and vocabulary, posing challenges for NLP systems. In this paper, we provide an overview of the current state of NLP for Indonesian and Indonesia's hundreds of languages. We then discuss the challenges presented by those languages and demonstrate how they affect state-of-the-art systems in NLP. We finally provide recommendations for developing better NLP technology not only for the languages in Indonesia but also for other underrepresented languages.
\section{Background and Related Work \label{sec:background}}
\subsection{History and Taxonomy}
\begin{figure}[!th]
\centering
\includegraphics[width=\linewidth]{images/Austronesian_family_crop.png}
\caption{Map of Austronesian and Papuan languages in Indonesia.}
\label{fig:map}
\end{figure}
Indonesia is one of the richest countries globally in terms of linguistic diversity.
More than 400 of its languages belong to the Austronesian language family, while
the others are Papuan languages spoken in the eastern part of the country.
As shown in \figref{map}, the Austronesian languages in Indonesia belong to three main groups: Western-Malayo-Polynesian (WMP), Central-Malayo-Polynesian (CMP), and South-Halmahera-West-New-Guinea (SHWNG)~\cite{blust1980austronesian}.
WMP languages include Malay, Indonesian, Javanese, Sundanese, Balinese, and Minangkabau, among others. All languages listed in Table~\ref{tab:population} belong to this group.
The CMP group comprises the languages of the Lesser Sunda Islands from East Sumbawa (with Bimanese) eastward, as well as the languages of the central and southern Moluccas (including the Aru Islands and the Sula Archipelago).
The SHWNG group consists of languages of Halmahera and Cenderawasih Bay, and further-flung regions such as the Mamberamo River and the Raja Ampat Islands.
Meanwhile, the Papuan languages are mainly spoken in Papua, such as Dani, Asmat, Maybrat, and Sentani. Some Papuan languages are also spoken in Halmahera, Timor, and the Alor Archipelago~\cite{palmer2018languages,ross2005pronouns}.
Most Austronesian linguists and archaeologists agree that the original `homeland' of Austronesian languages must be sought in Taiwan and, prior to Taiwan, in coastal South China~\cite{adelaar:2005,bellwood2011cultures}.
In the second millennium BCE, the Austronesian people moved from Taiwan to the Philippines. From the Philippines, they moved southward to Borneo and Sulawesi. From Borneo, they migrated to Sumatra, the Malay Peninsula, Java, and even to Madagascar. From Sulawesi, they moved southward to the CMP area and eastward to the SHWNG area. From there, they migrated to Oceania and Polynesia, as far as New Zealand, Easter Island, and Hawaii~\cite{gray2000}. The people that lived in insular Southeast Asia, such as in the Philippines and Indonesia, before the arrival of Austronesians were Australo-Melanesians~\cite{bellwood2007prehistory}. Gradual assimilation with Austronesians occurred, although some pre-Austronesian groups still survive, such as Melanesian people in eastern Indonesia~\cite{ross2005pronouns,coupe2020asia}.
At the time of the arrival of the first Europeans, Malay had become the major language (lingua franca) of interethnic communication in Southeast Asia and beyond~\cite{steinhauer:2005,coupe2020asia}. It functioned as the language of trade and the language of Islam because Muslim merchants from India and the Middle East were the first to introduce the religion into the harbor towns of Indonesia. After the arrival of Europeans, Malay was used by the Portuguese and Dutch to spread Catholicism and Protestantism. When the Dutch extended their rule over areas outside Java in the nineteenth century, the importance of Malay increased, and thus, the first standardization of the spelling and grammar occurred in 1901, based on Classical Malay~\cite{abas_indonesian_1987,sneddon2003}. In 1928, the Second National Youth Congress participants proclaimed Malay (henceforth called Indonesian) as the unifying language of Indonesia. During World War II, the Japanese occupying forces forbade all use of Dutch in favor of Indonesian, which from then onward effectively became the new national language. From independence until the present, Indonesian has functioned as the primary language in education, mass media, and government activities. Many local language speakers are increasingly using Indonesian with their children because they believe it will aid them to attain a better education and career~\cite{klamer2018}.
\subsection{Efforts in Multilingual Research}
Recently, pretrained multilingual language models such as mBERT~\cite{devlin-etal-2019-bert}, mBART~\cite{liu2020mbart}, and mT5~\cite{xue-etal-2021-mt5} have been proposed. Their coverage, however, focuses on high-resource languages. Only mBERT and mT5 include Indonesian local languages, i.e., Javanese, Sundanese, and Minangkabau, but with comparatively little pretraining data.
Some multilingual datasets for question answering~\cite[TyDiQA;][]{clark-etal-2020-tydi}, common sense reasoning~\cite[XCOPA;][]{ponti-etal-2020-xcopa}, abstractive summarization~\cite{hasan-etal-2021-xl}, passage ranking~\cite[mMARCO;][]{bonifacio2021mmarco}, cross-lingual visual question answering~\cite[xGQA;][]{Pfeiffer2021xGQACV}, language and vision reasoning~\cite[MaRVL;][]{liu-etal-2021-visually}, paraphrasing~\cite[ParaCotta;][]{aji2021paracotta}, dialogue systems~\cite[XPersona \& BiToD;][]{lin-etal-2021-xpersona,lin2021bitod}, lexical normalization~\cite[MultiLexNorm;][]{van-der-goot-etal-2021-multilexnorm}, and machine translation \cite[FLORES-101;][]{flores1} include Indonesian but most others do not, and very few include Indonesian local languages. An exception is the weakly supervised named entity recognition dataset, WikiAnn~\cite{pan-etal-2017-cross}, which covers several Indonesian local languages, namely Acehnese, Javanese, Minangkabau, and Sundanese.
Parallel corpora including Indonesian local languages are: (i) CommonCrawl; (ii) Wikipedia parallel corpora like MediaWiki Translations;\footnote{\url{https://mediawiki.org/wiki/Content_translation}} and WikiMatrix~\cite{schwenk-etal-2021-wikimatrix} (iii) the Leipzig corpora~\cite{goldhahn-etal-2012-building}, which include Indonesian, Javanese, Sundanese, Minangkabau, Madurese, Acehnese, Buginese, Banjar, and Balinese; and (iv) JW-300~\cite{agic-vulic-2019-jw300}, which includes dozens of Indonesian local languages, e.g., Batak language groups, Javanese, Dayak language groups, and several languages in Nusa Tenggara. Recent studies, however, have raised concerns regarding the quality of such multilingual corpora for underrepresented languages~\cite{Caswell2022QualityAA}.
\subsection{Progress in Indonesian NLP}
NLP research on Indonesian has occurred across multiple topics, such as POS tagging~\cite{wicaksono2010hmmbp, dinakaramani_pos_tag}, NER~\cite{NER_indra,rachman2017ner,gunawan2018named}, sentiment analysis~\cite{aqsath2011sentiment, lunando2013indonesian, wicaksono2014automatically}, hate speech detection~\cite{alfina2017hate, sutejo2018indonesia}, topic classification~\cite{winata2015handling, kusumaningrum2016}, question answering~\cite{mahendra-etal-2008-extending,fikri2012cdqa}, machine translation~\cite{yulianti_mt, simon2013experiments, hermanto2015recurrent}, keyphrases extraction~\cite{saputra-etal-2018-keyphrases, trisna2020single}, morphological analysis~\cite{pisceldo2008two}, and speech recognition~\cite{lestari2006large,baskoro2008developing,zahra2009building}.
However, many of these studies either did not release the data or used non-standardized resources with a lack of documentation and open source code, making them extremely difficult to reproduce.
Recently, \citet{wilie-etal-2020-indonlu}, \citet{koto-etal-2020-indolem, koto-etal-2021-indobertweet}, and \citet{cahyawijaya-etal-2021-indonlg} collected Indonesian NLP resources as benchmark data.
Others have also begun to create standardized labeled data for Indonesian NLP, e.g. the works of {\setcitestyle{citesep={,}}\citet{kurniawan2018toward, guntara-etal-2020-benchmarking, koto-etal-2020-liputan6, khairunnisa-etal-2020-towards}}, and \citet{mahendra-etal-2021-indonli}. \setcitestyle{citesep={;}}
On the other hand, there has been very little work on local languages. Several works studied stemming (Sundanese~\cite{suryani2018rule}; Balinese~\cite{subali2019kombinasi}) and POS tagging \cite[Madurese;][]{dewi2020combination}. \citet{koto-koto-2020-towards} built an Indonesian--Minangkabau parallel corpus as well as sentiment analysis resources for Minangkabau. Other works developed machine translation systems between Indonesian and local languages, e.g., Sundanese~\cite{suryani2015sundamt}, Buginese~\cite{apriani2016pengaruh}, Dayak Kanayatn~\cite{hasbiansyah2016tuning}, and Sambas Malay~\cite{ningtyas2018penggunaan}.
{\setcitestyle{citesep={,}}\citet{tanaya2016dictionary, Tanaya-2018-word}} studied Javanese character segmentation in non-Latin script. \citet{safitri2016spoken} worked on spoken data language identification in Minangkabau, Sundanese, and Javanese, while \citet{azizah2021} developed end-to-end neural text-to-speech models for Indonesian, Sundanese, and Javanese. {\setcitestyle{citesep={,}}\citet{nasution2017generalized, nasution-2021-plan}} proposed an approach for bilingual lexicon induction and evaluated the approach on seven languages, i.e., Indonesian, Malay, Minangkabau, Palembang Malay, Banjar, Javanese, and Sundanese.
\citet{cahyawijaya-etal-2021-indonlg} established a machine translation benchmark in Sundanese and Javanese using Bible data. \citet{wibowo-etal-2021-indocollex} studied colloquial Indonesian, which is influenced by several local languages via morphological transformation, and \citet{Putri_etal_2021_abusive} worked on abusive language and hate speech detection on Twitter for five local languages, namely Javanese, Sundanese, Madurese, Minangkabau, and Musi.
\section{Challenges for Indonesian NLP}
\subsection{Limited Resources}
\label{sec:limited-res}
\begin{figure}[!th]
\centering
\includegraphics[width=0.95\linewidth]{images/wiki_vs_speaker.pdf}
\caption{
Relationship between the number of speakers and the size of data in Wikipedia for languages spoken in Europe, Asia, and Indonesia.
}
\label{fig:wiki_vs_speaker}
\vspace{-0.3cm}
\end{figure}
\paragraph{Monolingual Data}
Unlabeled corpora are crucial for building large language models, such as GPT-2~\cite{radford2019language} or BERT~\cite{devlin-etal-2019-bert}.
Available unlabeled corpora such as Indo4B \cite{wilie-etal-2020-indonlu} and Indo4B-Plus \cite{cahyawijaya-etal-2021-indonlg} mainly include data in Indonesian, with the latter containing $\approx$10\% of data in Javanese and Sundanese.
In comparison, in multilingual corpora such as CC--100 \cite{conneau-etal-2020-unsupervised}, Javanese and Sundanese data account for only 0.001\% and 0.002\% of the corpus size, respectively, while in mC4~\cite{xue-etal-2021-mt5}, there are only 0.6M Javanese and 0.3M Sundanese tokens out of a total of 6.3T tokens.
In addition, we measure data availability in Wikipedia compared to the number of speakers in Figure~\ref{fig:wiki_vs_speaker}.\footnote{The number of speakers is collected from Wikidata~\cite{vrandevcic2014wikidata}, from the \texttt{number of speakers (P1098)} property as of Nov 7th, 2021, while the size is collected from the 20211101 Wikipedia dump.} Much less data is available for the languages spoken in Indonesia, compared to European languages with similar numbers of speakers. For example, Wikipedia contains more than 3 GB of Italian articles but less than 50 MB of Javanese articles, despite both languages having a comparable number of speakers. Similarly, Sundanese has less than 25 MB of articles, whereas languages with comparable numbers of speakers have more than 1.5 GB of articles. Similar trends hold for most other Asian languages. Languages in Africa are even more underrepresented in terms of Wikipedia data (see Appendix \ref{appendix:wiki}).
Beyond the most widely spoken local languages, most other Indonesian local languages do not have a Wikipedia edition, in contrast to European languages with few speakers. It is very difficult to find alternative sources of high-quality text for the other local languages of Indonesia (such as news websites), as most such sources are written in Indonesian. Resources in long-tail languages are scarcer still due to their very low numbers of speakers. Moreover, most languages in the long tail are mainly used in a spoken context, making text data challenging to obtain. These statistics demonstrate that collecting unlabeled corpora for Indonesian local languages is extremely difficult. This makes it impractical to develop strong pretrained language models for these languages, which have been the foundation for many recent NLP systems.
\begin{table*}[ht!]
\centering
\small
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lcccccccc@{}}
\toprule
\textbf{English} & \textbf{Mudung Laut} & \textbf{Dusun Teluk} & \textbf{Mersam} & \textbf{Suo Suo} & \textbf{Teluk Kuali} & \textbf{Lubuk Telau} & \textbf{Bunga Tanjung} & \textbf{Pulau Aro} \\
\midrule
I/me & sayo & aku & \textipa{awaP} & sayo & kito, \textipa{awaP} & am$^b$o & ambo & ambo \\
You & kau, kamu & kau & ka$^d$n & kamu & kaan & kamu & \textipa{aN, kau, kayo} & \textipa{baPaN} \\
he/she & \textipa{dioP} & \textipa{dioP, {\textltailn}o} & \textipa{{\textltailn}o} & kau & \textipa{{\textltailn}o} & \textipa{{\textltailn}o} & \textipa{{\textltailn}o} & \textipa{i{\textltailn}o}\\
if & kalu & jiko, kalu & kalu & bilao & kalu & jiko & \textipa{koP} & kalu \\
one & satu & \textipa{sekoP} & \textipa{sekoP} & \textipa{sekoP} & \textipa{ci3P} & \textipa{sekoP} & \textipa{sekoP}, so & \textipa{sekoP} \\
\bottomrule
\end{tabular}
}
\caption{Lexical variation of Jambi Malay across different villages in Jambi~\citep{anderbeck2008malay}.}
\label{tab:jambi-word}
~\\
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}llllll@{}}
\toprule
\multirow{2}{*}{\textbf{English}} & \multirow{2}{*}{\textbf{Context}} & \multicolumn{3}{c}{\textbf{Ngoko}} & \textbf{Krama} \\ \cmidrule(l{2pt}r{2pt}){3-5}
\cmidrule(l{2pt}r{2pt}){6-6}
& & Western & Central & Eastern & Eastern \\
\midrule
I/me & I like to eat fried rice. & inyong, enyong & aku & aku & kulo \\
You & Where will you go? & rika, kowe, ko & kowe, siro, sampeyan & koen, awakmu, sampeyan & panjenengan \\
How & How do I read this? & priwe & piye & yo'opo & pripun \\
Why & Why is this door broken? & ngapa & ngopo & opo'o & punapa \\
Will & Where will you go? & arep & arep & kate, ate & badhe \\
Not/no & The calculation is not correct. & ora & ora & gak & mboten \\
\bottomrule
\end{tabular}
}
\caption{Lexical variations of Javanese dialects and styles across different regions of the Java island. Native speakers were asked to translate the words, given the context.}
\label{tab:jv-word}
\end{table*}
\paragraph{Labeled Data}
Most work on Indonesian NLP (see \textsection \ref{sec:background}) has not publicly released the data or models, limiting reproducibility. Although recent Indonesian NLP benchmarks are addressing this issue, they mostly focus on the Indonesian language (see Appendix~\ref{appendix:resource}).
Some widely spoken local languages such as Javanese, Sundanese, or Minangkabau have extremely small labeled datasets compared to Indonesian, while others have barely any.
The lack of such datasets makes NLP development for the local languages difficult.
However, constructing new labeled datasets is still challenging due to: (1) the lack of speakers of some languages; (2) the vast continuum of dialectal variation (see \textsection \ref{sec:dialect_style}); and (3) the absence of a writing standard in most local languages (see \textsection \ref{sec:non-standard}).
\subsection{Language Diversity}
The diversity of Indonesian languages is not only reflected in the large number of local languages but also the large number of dialects of these languages (\textsection \ref{sec:dialect_style}). Speakers of local languages also often mix languages in conversation, which makes colloquial Indonesian more diverse (\textsection \ref{sec:code-mixing}). In addition, some local languages are more commonly used in conversational contexts, so they do not have consistent writing forms in written media (\textsection \ref{sec:non-standard}).
\subsubsection{Regional Dialects and Style Differences} \label{sec:dialect_style}
Indonesian local languages often have multiple dialects, depending on the geographical location. A local language spoken in different locations may differ (i.e., show lexical variation) from one location to another, while still being categorized as the same language~\cite{fauzi2018dialect}. For example,
\citet{anderbeck2008malay} showed that villages across the Jambi province use different dialects of Jambi Malay. Similarly, \citet{kartikasari2018study} showed that Javanese spoken in different cities in central and eastern Java can have more than 50\% lexical variation, while~\citet{purwaningsih2017geografi} showed that Javanese across districts of Lamongan has up to 13\% lexical variation. Similar studies have been conducted on other languages, such as Balinese~\cite{maharani2018variasi} and Sasak~\cite{sarwadi2019lexical}.
Moreover, Indonesian and its local languages have multiple styles, even within the same dialect. One factor that affects style is the level of politeness and formality---similar to Japanese and other Asian languages~\cite{bond2016introduction}. More polite language is used when speaking to a person with a higher social position, especially to elders, seniors, and sometimes strangers. Different politeness levels manifest in the use of different honorifics and even different lexical terms.
To illustrate the distinctions between regional dialects and styles, we highlight common words and utterances across dialects and styles in Jambi Malay and Javanese in Tables \ref{tab:jambi-word} and \ref{tab:jv-word} respectively. For Jambi Malay, we sample the result from a prior work~\citep{anderbeck2008malay}.
For Javanese, we ask native speakers to translate basic words into three regional dialects: Western, Central, and Eastern Javanese, and two different styles: \textit{Ngoko} (standard, daily-use Javanese) and \textit{Krama} (polite Javanese, used to communicate to elders and those with higher social status). However, since contemporary \textit{Krama} Javanese is not very different among regions, we only consider \textit{Krama} from the Eastern speakers' perspective.
Jambi Malay has many dialects across villages. As shown in Table \ref{tab:jambi-word}, many common words are spoken differently across dialects and styles. Similarly, Javanese is also different across regions. Not every Javanese speaker understands \textit{Krama}, since its usage is very limited. Moreover, the number of Javanese speakers who can use \textit{Krama} is declining~\cite{cohn2014local}.\footnote{\textit{Krama} is used to speak formally (e.g., with older or respected people). However, people prefer to use Indonesian more in a formal situation. People who move from sub-urban areas to bigger cities tend to continue to use \textit{Ngoko} and thus also pass \textit{Ngoko} on to their children.} Examples from other languages are shown in Appendix~\ref{appendix:dialect}.
\begin{table}[!t]
\small
\centering
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{@{}c@{ ~ }c@{ ~ }c@{ ~ }c@{ ~ }c@{ ~ }c@{ ~ }c@{}}
\toprule
& \multicolumn{5}{c}{\textbf{Model}} \\ \midrule
\multirow{2}{*}{\textbf{Style}} &\multirow{2}{*}{\textbf{Region}} & \multicolumn{2}{c}{\textbf{langid.py}} & \multicolumn{2}{c}{\textbf{FastText}} & \textbf{CLD3} \\ \cmidrule(l{2pt}r{2pt}){3-4} \cmidrule(l{2pt}r{2pt}){5-6} \cmidrule(l{2pt}r{2pt}){7-7}
& & Top-1 & Top-3 & Top-1 & Top-3 & Top-1 \\
\midrule
\textit{Ngoko} & Western & 0.241 & 0.621 & 0.069 & 0.379 & 0.759 \\
\rowcolor{lime} \textit{Ngoko} & Central & 0.345 & 0.690 & 0.379 & 0.724 & 0.828 \\
\textit{Ngoko} & Eastern & 0.276 & 0.552 & 0.103 & 0.379 & 0.552 \\
\rowcolor{lime} \textit{Krama} & Eastern & 0.345 & 0.759 & 0.379 & 0.586 & 0.897 \\
\bottomrule
\end{tabular}
}
\caption{Language identification accuracy based on different Javanese dialects and styles. Systems do not perform equally well across dialects and styles.}
\label{tab:language-iden-jv}
\end{table}
\subsubsection*{Case Study in Javanese}
Dialectical and style differences pose a challenge to NLP systems. To explore the extent of this challenge, we conduct an experiment to test the robustness of NLP systems to variations in Javanese dialects. We ask native speakers\footnote{Our annotators are based in Banyumas, Jogjakarta, and Jember for Western, Central, and Eastern Javanese respectively. Using dialects from different cities might yield different results.} to translate 29 simple sentences into Javanese according to the specified dialect and style. We then evaluate several
language identification systems on those instances. Language identification is a core part of multilingual NLP
and a necessary step for collecting textual data in a language. Despite its importance, it is an open research area, particularly for underrepresented languages \cite{Hughes+:2006,Caswell2022QualityAA}.
We compare langid.py~\cite{lui2012langid}, FastText~\cite{joulin2017bag}, and CLD3.\footnote{\url{https://github.com/google/cld3}} The results can be seen in Table~\ref{tab:language-iden-jv}. In general, the language identification systems are more accurate in detecting Javanese text in the \textit{Ngoko}-Central dialect or in \textit{Krama}, since the systems were trained on Javanese Wikipedia data, which is written in either of these dialects and styles. If an NLP system can only detect certain dialects, then this limitation should be conveyed explicitly. Problems arise if we assume that a model works equally well across dialects. For example, in the case of language identification, if we use the model to collect datasets automatically, then dialects on which the model performs poorly will be underrepresented in the resulting data.
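The per-dialect scoring behind Table~\ref{tab:language-iden-jv} reduces to top-$k$ accuracy over each system's ranked language predictions. A minimal sketch follows; the ranked outputs below are hypothetical placeholders for illustration, not the actual outputs of langid.py, FastText, or CLD3.

```python
# Sketch of the per-dialect evaluation: each system returns a ranked list of
# language codes for a sentence, and we score top-k accuracy per dialect.
# The predictions below are hypothetical placeholders, not real system output.

def top_k_accuracy(ranked_predictions, gold, k):
    """Fraction of examples whose gold label appears in the top-k ranking."""
    hits = sum(1 for ranking in ranked_predictions if gold in ranking[:k])
    return hits / len(ranked_predictions)

# Hypothetical ranked outputs for 4 Javanese ("jv") sentences of one dialect.
preds = [
    ["id", "jv", "ms"],   # top-1 miss, top-3 hit
    ["jv", "id", "su"],   # top-1 hit
    ["ms", "id", "jv"],   # top-1 miss, top-3 hit
    ["id", "ms", "su"],   # miss entirely
]

top1 = top_k_accuracy(preds, "jv", k=1)  # 0.25
top3 = top_k_accuracy(preds, "jv", k=3)  # 0.75
```

Reporting both top-1 and top-3 is useful here because a system may rank the correct dialect-heavy text just below a related language (e.g., Indonesian or Malay).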
\begin{table}[!t]
\centering
\small
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{@{}p{0.44\linewidth} p{0.47\linewidth}@{}}
\toprule
\textbf{Colloquial Indonesian} & \textbf{Translation} \\
\midrule
Ada yang \textbf{\textcolor{orange}{nge}\textcolor{red}{tag}} foto \textbf{\textcolor{blue}{lawas}} di FB & Someone is tagging old photos in FB \\
\textbf{\textcolor{red}{Quote}}nya Andrew Ng ini relevan \textbf{\textcolor{orange}{banget}} & This Andrew Ng quote is very relevant \\
\textcolor{teal}{Bilo} kita pergi main lagi? & When will we go play again? \\
Ini \textbf{\textcolor{olive}{teh}} aksara jawa kenapa susah \textbf{\textcolor{orange}{banget}}? & Why is this Javanese script very difficult? \\
\bottomrule
\end{tabular}
}
\caption{Colloquial Indonesian code-mixing examples from social media. Color code: \textcolor{red}{English}, \textcolor{orange}{Betawinese}, \textcolor{blue}{Javanese}, \textcolor{teal}{Minangkabau}, \textcolor{olive}{Sundanese}, Indonesian.}
\label{tab:codemix}
\end{table}
\subsubsection{Code-Mixing} \label{sec:code-mixing}
Code-mixing is a phenomenon in which a speaker alternates between two or more languages in a conversation~\cite{sitaram2019survey,winata2018code,winata2019code,dogruoz-etal-2021-survey}. This phenomenon is common in Indonesian conversations~\cite{barik-etal-2019-normalization, johanes2020structuring, wibowo-etal-2021-indocollex}. In a conversational context, people sometimes mix their local languages with standard Indonesian, resulting in colloquial Indonesian~\cite{siregar2014code}. This colloquial style of Indonesian is used daily in speech and conversation and is common on social media~\cite{sutrisno2019beyond}. Some frequently used code-mixed words (especially on social media) are even intelligible to people who do not speak the original local languages. Interestingly, code-mixing also occurs in border areas where people are exposed to multiple languages and therefore mix them. For example, people in Jember (a regency in East Java) combine Javanese and Madurese in their daily conversation~\cite{haryono2012perubahan}.
Indonesian code-mixing not only occurs at the word level but also
at the morpheme level~\cite{winata2021multilingual}. For example, \textit{quotenya} (``his/her quote'', see Table~\ref{tab:codemix}) combines the English word \textit{quote} and the Indonesian suffix \textit{-nya}, which denotes possession; similarly, \textit{ngetag} combines the Betawinese prefix \textit{nge-} and the English word \textit{tag}. More examples can be found in Table~\ref{tab:codemix}.
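A toy illustration of why morpheme-level mixing complicates processing: even naive affix stripping requires language-specific affix inventories before the (possibly English) stem is exposed. The sketch below uses a tiny hand-picked affix list, which is an assumption for illustration only, not a real morphological analyzer.

```python
# Illustrative sketch of morpheme-level code-mixing: strip a small, hand-picked
# set of Indonesian/Betawinese affixes to expose the (possibly English) stem.
# The affix list and examples are for illustration only, not a real analyzer.

PREFIXES = ["nge-"]   # Betawinese verbal prefix
SUFFIXES = ["-nya"]   # Indonesian possessive/definite suffix

def segment(word):
    """Return (prefix, stem, suffix); empty strings where no affix matches."""
    prefix = suffix = ""
    for p in PREFIXES:
        bare = p.rstrip("-")
        if word.startswith(bare) and len(word) > len(bare):
            prefix, word = p, word[len(bare):]
            break
    for s in SUFFIXES:
        bare = s.lstrip("-")
        if word.endswith(bare) and len(word) > len(bare):
            suffix, word = s, word[:-len(bare)]
            break
    return prefix, word, suffix

segment("ngetag")    # ("nge-", "tag", "")
segment("quotenya")  # ("", "quote", "-nya")
```

In practice, such rules overgenerate (many native words happen to end in \textit{-nya}), which is one reason code-mixed morphology remains hard for subword tokenizers and taggers alike.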
\subsection{Orthography Variation} \label{sec:non-standard}
Many Indonesian local languages are mainly used in spoken settings and have no established standard orthography. Some local languages originally have their own archaic writing systems derived from the Jawi alphabet or Kawi script, and even though standard transliterations into the Roman alphabet exist for some (e.g., Javanese and Sundanese), these standards are not widely known or practiced~\cite{Diksi5265}. Hence, some words have multiple romanized orthographies that are mutually intelligible, as they are pronounced the same. Some examples can be seen in Table~\ref{tab:written-form}. Such variety of written forms is common among local languages in Indonesia. This variation leads to a significantly larger vocabulary size, especially for NLP systems that use word-based representations, and makes it challenging to constrain the representations of different spellings of the same word to be similar.
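One simple mitigation is to collapse known spelling variants to a canonical form before tokenization. A minimal sketch, assuming a small hand-built variant map drawn from the examples in Table~\ref{tab:written-form}; the map itself is illustrative, not an existing normalization resource.

```python
# Minimal sketch of collapsing romanized spelling variants to one canonical
# form before tokenization; the variant map is a tiny hand-built example
# based on attested spelling pairs, not a deployed normalization resource.

VARIANTS = {
    "opo": "apa",          # Javanese "what"
    "ono": "ana",          # Javanese "there is"
    "onok": "ana",
    "tyang": "tiang",      # Balinese "I/me"
    "punteun": "punten",   # Sundanese "please/sorry"
    "ngacay": "ngacai",    # Sundanese "salivating"
}

def normalize(tokens):
    """Map each token to its canonical spelling when a variant is known."""
    return [VARIANTS.get(t.lower(), t) for t in tokens]

normalize(["Opo", "onok", "punteun"])  # -> ["apa", "ana", "punten"]
```

A dictionary lookup like this only covers enumerated variants; handling unseen spellings would require phonologically informed normalization, since the variants share a pronunciation rather than a surface pattern.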
\begin{table}[!t]
\centering
\setlength\tabcolsep{1.5pt}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{l|cccc}
\toprule
\textbf{Language} & \textbf{Meaning} & \textbf{Written Variation} &
\textbf{IPA} \\
\midrule
Javanese & what & apa / opo & \textipa{/OpO/} \\
(Eastern-- & there is & ana / ono / onok & \textipa{/OnOP/} \\
\textit{Ngoko}) & you & kon / koen & \textipa{/kOn/} \\
\midrule
Balinese & yes & inggih / nggih & \textipa{/PNgih/} \\
(Alus--& I / me & tiang / tyang & \textipa{/tiaN/} \\
\textit{Singgih})& <greeting> & swastyastu / swastiastu & \textipa{/swastiastu/} \\
\midrule
Sundanese & please / sorry & punten / punteun & \textipa{/punt@n/} \\
(Badui--& red & beureum / berem & \textipa{/b@r1m/} \\
\textit{Loma}) & salivating & ngacai / ngacay & \textipa{/NacaI/} \\
\bottomrule
\end{tabular}
}
\caption{Written form variations in several local languages, confirmed by native speakers.}
\label{tab:written-form}
\end{table}
\subsection{Societal Challenges} \label{sec:societal-challenge}
Language evolves together with its speakers. A more widely used language may have a larger digital presence, which fosters written communication, while languages used only within small communities may emphasize the spoken form. Some languages are also declining, and their speakers may prefer to use Indonesian
rather than their local language. In contrast, there are isolated communities that use the local language daily and are less proficient in Indonesian~\cite{nurjanah2018pengembangan,antara-NTT}. These variations give rise to different requirements, and there is no single solution for all.
Technology and education are not well-distributed within the nation. Internet penetration in Indonesia reached 73.7\% in 2020 but is mainly concentrated in Java. Among non-Internet users, 39\% explain that they do not understand the technology, while 15\% state that they do not have a device to access the Internet.\footnote{The Indonesian Internet Providers Association (APJII) survey: \url{https://apjii.or.id/survei2019x}} In areas where the Internet is not seen as a basic need, introducing NLP technology may not be immediately relevant. At the same time, general NLP development within the nation faces difficulties due to a lack of funding, especially at universities outside Java. GPU servers are still scarce, even at top universities in the country.\footnote{For instance, we estimate the whole CS Faculty of the country's top university to have fewer than 10 GPUs.}
The dynamics of population movement in Indonesia also need to be taken into consideration. For example, urban communities transmigrate to remote areas for social purposes, such as teaching or becoming doctors for underdeveloped villages. Each of these situations might call for various new NLP technologies to be developed to facilitate better communication.
\section{Opportunities}
Based on the challenges for Indonesian NLP highlighted in the previous section, we formulate proposals for improving the state of Indonesian NLP research, as well as that of other underrepresented languages. Our proposals cover several aspects, including metadata documentation, potential research directions, and engagement with communities.
\subsection{Better Documentation}
In line with studies promoting proper data documentation for NLP research~\cite{bender2018data, rogers-etal-2021-just-think, alyafeai2021masader, mcmillan2022documenting}, we recommend the following
considerations.
\paragraph{Regional Dialect Metadata}
We have shown that a local language can have large variation depending on the region and the dialect. Therefore, we suggest adding regional dialect metadata to NLP datasets and models, not only for Indonesian but also for other languages. This is particularly important for languages with large dialectical differences. It also helps to clearly communicate NLP capabilities to stakeholders and end-users as it will help set an expectation of what dialects the systems can handle. Additionally, regional metadata can indirectly inform topics present in the data, especially for crawled data sources.
\paragraph{Style and Register Metadata}
Similarly, we also suggest adding style and register metadata. This metadata can capture the politeness level of the text, not only for Indonesian but also in other languages.
In addition, this metadata can be used to document the formality level of the text, so it may be useful for research on modeling style or style transfer.
\subsection{Potential Research Directions}
Among the most spoken local languages, a lot of research has been done on mainstream NLP tasks such as hate-speech detection, sentiment analysis, entity recognition, and machine translation. Some of this research has even been deployed in production by industry.
Many of the languages, however, are not widely spoken and remain under-explored. Focusing on these languages, we suggest the following future research directions.
\paragraph{Data-Efficient NLP}
Pretrained language models, which have taken the NLP world by storm, require an abundance of monolingual data.
However, data collection has been a long-standing problem for low-resource languages.
Therefore, we recommend more exploration into designing data-efficient approaches such as adaptation methods~\cite{artetxecross,aji2020neural, gururangan2020don,koto-etal-2021-indobertweet,kurniawan-etal-2021-ppt}, few-shot learning~\cite{winata2021language,madotto2021few, le2021many}, and learning from related languages \cite{Khanuja2021,khemchandani-etal-2021-exploiting}.
The goal of these methods is effective resource utilization, that is, to minimize the financial costs for computation and data collection as advocated by {\setcitestyle{citesep={,}}\citet{schwartz2020green,cahyawijaya2021greenformers}}, and \citet{nityasya2021costs}.\setcitestyle{citesep={;}}
\paragraph{Data Collection} Data collection efforts need to be commenced as soon as possible, despite all the challenges (\textsection \ref{sec:limited-res}). Here, we suggest collecting parallel data between Indonesian and each of the local languages for several reasons.
First, a lot of Indonesians are bilingual~\cite{koto-koto-2020-towards}, that is, they speak both Indonesian and their local language, which facilitates data collection. Moreover, the fact that the local languages have some vocabulary overlap with Indonesian (see Table~\ref{tab:vocab_wiki} in the Appendix) might help facilitate building translation systems with relatively little parallel data~\cite{nguyen2017transfer}.
Finally, having such parallel data, we can build translation systems for synthetic data generation.
In line with this approach, the effectiveness of models trained on synthetic translated data can be explored.
\paragraph{Compute-Efficient NLP}
The costly GPU requirement for current NLP models hinders adoption by local research institutions and industries. Instead of focusing on building yet another massive model, we suggest focusing on developing lightweight and fast neural architectures, for example through distillation~\cite{kim2019research, sanh2019distilbert, jiao2020tinybert}, model factorization~\cite{winata2019effectiveness} or model pruning~\cite{voita2019analyzing}.
We also recommend research on more efficient training mechanisms \cite{aji2017sparse, diskin2021distributed}. In addition, non-neural methods are still quite popular in Indonesia. Therefore, further research on the trade-off between the efficiency and quality of the models is also an interesting research direction.
\paragraph{Robustness to Code-mixing and Non-Standard Orthography}
Languages in Indonesia are prone to variations due to code-mixing and non-standard orthography, which occurs on the morpheme or even grapheme level. Models that are applied to Indonesian code-mixed data need to be able to learn morphologically faithful representations. Therefore, we recommend more explorations on methods derived from subword tokenization~\cite{gage1994bpe,kudo2018subword} and token-free models~\cite{gillick2016multilingual,tay2021charformer,xue2021byt5} to deal with this problem. This problem is also explored by~\citet{tan2021code} in an adversarial setting.
\paragraph{NLP Beyond Text}
For many Indonesian local languages that are rarely if ever written, speech is a more natural communication format. We thus recommend more attention on less text-focused research, such as spoken language understanding~\cite{chung2021splat,dmitriy2018slu}, speech recognition~\cite{besacier2014automatic,winata2020lrt,winata2020learning}, and multimodality~\cite{dai2020modality,dai-etal-2021-multimodal} in order to improve NLP for such languages.
\subsection{Engagement with Communities}
As discussed in \textsection \ref{sec:societal-challenge}, it is difficult to generalize a solution across local languages. We thus encourage the NLP community, such as the Indonesian Association of Computational Linguistics (INACL)\footnote{\url{https://inacl.id/inacl/}} to work more closely with native speakers and local communities.
Local communities who work on linguistics such as \textit{Polyglot Indonesia},\footnote{\url{http://polyglotindonesia.org}} \textit{Merajut Indonesia},\footnote{\url{https://merajutindonesia.id/}} and \textit{Masyarakat Linguistik Indonesia}\footnote{\url{https://www.mlindonesia.org/}} would be relevant collaborators to provide solutions and resources that support use cases benefiting the native speakers and communities of underrepresented languages. We advise the involvement of linguists, for example, to aid the language documentation process~\cite{anastasopoulos-etal-2020-endangered}. We also support open-science movements such as BigScience\footnote{\url{https://bigscience.huggingface.co/}} or ICLR CoSubmitting Summer\footnote{\url{https://blog.iclr.cc/2021/08/10/broadening-our-call-for-participation-to-iclr-2022/}} to help start collaborations and reduce the barrier to entry to NLP research.
\section{Conclusion}
In this paper, we highlight challenges in Indonesian NLP. Indonesia is one of the most populous countries and the second-most linguistically diverse country of the world, with over 700 local languages, yet Indonesian NLP is underrepresented and under-explored.
Based on the observed challenges, we also present recommendations to improve the situation, not only for Indonesian but also for other underrepresented languages.
\section*{Acknowledgments}
We are grateful to Alexander Gutkin and Ling-Yen Liao for valuable feedback on an earlier draft of this paper. We also thank the anonymous reviewers for their helpful feedback. Lastly, we acknowledge the support of Kata.ai in this work.
\section{Introduction}
In this note, using non-smooth critical point theory together with classical local invertibility results, we consider locally invertible locally Lipschitz functions $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$. We provide conditions for the global invertibility of $f$. This note aims at answering the question posed on p. 20 of \cite{gutu}, which suggests that results from \cite{SIW} should have their Lipschitz counterpart. The difficulties which arise in the non-smooth setting are described at the end of this section, and although we follow the pattern introduced in \cite{SIW}, i.e. we turn local invertibility into global invertibility, we do not have sufficient tools to do this directly. The local result on which we base our approach is as follows.
\begin{lemma}
\cite{CLARKE2}\label{lemma2}Let $D$ be an open subset of $\mathbb{R}^{n}$. If $f:D\rightarrow \mathbb{R}^{n}$ satisfies a Lipschitz condition in some neighbourhood of $x_{0}$ and $\partial f(x_{0})\subseteq \mathbb{R}^{n\times n}$ (where $\partial f(x_{0})$ denotes the generalized Jacobian of $f$ at the point $x_{0}$ in the sense of Clarke) is of maximal rank, then there exist neighbourhoods $U\subset D$ of $x_{0}$ and $V$ of $f(x_{0})$ and a Lipschitz function $g:V\rightarrow \mathbb{R}^{n}$ such that\newline
i) for every $u\in U$, $g(f(u))=u$, and\newline
ii) for every $v\in V$, $f(g(v))=v$.
\end{lemma}
The set $\partial f(x)\subseteq \mathbb{R}^{n\times n}$ being of maximal rank means that all matrices in $\partial f(x)$ are nonsingular, so it reflects the assumption that $\det [f^{\prime }(x)]\neq 0$ for every $x\in D$; compare with \cite{rad}, where this condition provides a local diffeomorphism for a differentiable mapping. The reason for using non-smooth critical point theory is as follows. We are going to apply the mountain pass technique, being inspired by the following result by Katriel
\cite{katriel}; see also Theorem 5.4 from \cite{jabri}:
\begin{theorem}
\label{MainTheo copy(2)}Let $X,$ $B$ be finite dimensional Euclidean spaces.
Assume that $f:X\rightarrow B$ is a $C^{1}$-mapping such that\newline
(a1) $f^{\prime }(x)$ is invertible for any $x\in X$;\newline
(a2) $\left\Vert f\left( x\right) \right\Vert \rightarrow \infty $ as $\left\Vert x\right\Vert \rightarrow \infty $;\newline
then $f$ is a diffeomorphism.
\end{theorem}
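As a simple one-dimensional illustration of this theorem (our own numerical sketch, not part of the cited result), take $f(x)=x^{3}+x$: here $f^{\prime }(x)=3x^{2}+1$ is invertible everywhere and $\left\vert f(x)\right\vert \rightarrow \infty $ as $\left\vert x\right\vert \rightarrow \infty $, so $f$ is a diffeomorphism of $\mathbb{R}$, and its inverse can be recovered by bisection:

```python
# f(x) = x^3 + x satisfies (a1): f'(x) = 3x^2 + 1 > 0 (invertible
# everywhere) and (a2): |f(x)| -> infinity, so by the theorem above
# it is a diffeomorphism of the real line.
def f(x):
    return x**3 + x

def f_inv(y, lo=-1e6, hi=1e6, tol=1e-12):
    """Invert the strictly increasing f by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for y in [-5.0, 0.0, 0.3, 17.2]:
    x = f_inv(y)
    assert abs(f(x) - y) < 1e-8   # f(f^{-1}(y)) = y
```

The bisection only uses strict monotonicity, which in this smooth one-dimensional case is exactly what (a1) and (a2) deliver.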
Theorem \ref{MainTheo copy(2)} is proved via a classical mountain pass theorem. We cannot use this in the setting of Lemma \ref{lemma2}, since the assumptions of the mountain pass theorem require the action functional to be $C^{1}$. However, the mountain pass approach in the non-smooth setting will provide us with a satisfactory option. Since we lack the chain rule formula in our case, we will have to restrict our consideration to some subclass of locally Lipschitz functions for which $\partial f(x)\subseteq \mathbb{R}^{n\times n}$ is of maximal rank. This is again in contrast with \cite{rad}, where any function $f$ differentiable on $D$ and such that $\det [f^{\prime }(x)]\neq 0$ for every $x\in D$ can be considered, with additional coercivity.
\section{Preliminaries}
A function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ is called locally Lipschitz continuous if to every $u\in \mathbb{R}^{n}$ there corresponds a neighborhood $V_{u}$ of $u$ and a constant $L_{u}\geq 0$ such that
\begin{equation*}
\left\Vert f(z)-f(w)\right\Vert \leq L_{u}\Vert z-w\Vert \quad \text{for all }z,w\in V_{u}\,.
\end{equation*}
If $u,z\in \mathbb{R}^{n}$, we write $f^{0}(u;z)$ for the generalized directional derivative of $f$ at the point $u$ along the direction $z$; i.e.,
\begin{equation*}
f^{0}(u;z):=\limsup_{w\rightarrow u,\,t\rightarrow 0^{+}}\frac{f(w+tz)-f(w)}{t}\,.
\end{equation*}
The generalized gradient of the function $f$ at $u$, denoted by $\partial f(u)$, is the set
\begin{equation*}
\partial f(u):=\{\xi \in \mathbb{R}^{n}:\langle \xi ,z\rangle \leq f^{0}(u;z)\;\text{for all }z\in \mathbb{R}^{n}\}.
\end{equation*}
The basic properties of the generalized directional derivative and the generalized gradient were studied in \cite{CLARKE2} and in \cite{ochalbook}.

Further, a point $u$ is called a (generalized) critical point of the locally Lipschitz continuous function $f$ if $0\in \partial f(u)$.
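For intuition, $f^{0}(u;z)$ can be estimated numerically by sampling the difference quotient over small $w-u$ and small $t>0$. The sketch below (our illustration, not part of the argument) does this for $f(x)=\left\vert x\right\vert $ at $u=0$, where $f^{0}(0;z)=\left\vert z\right\vert $, so that $\partial f(0)=[-1,1]$ and $0$ is a (generalized) critical point:

```python
# Numerical estimate of Clarke's generalized directional derivative
#   f^0(u; z) = limsup_{w -> u, t -> 0+} (f(w + t z) - f(w)) / t
# by brute-force sampling of w near u and small t > 0.
def gen_dir_deriv(f, u, z, h=1e-4, n=200):
    best = -float("inf")
    for i in range(-n, n + 1):
        w = u + i * h / n                  # sample points w near u
        for t in (h, h / 10, h / 100):     # sample t -> 0+
            best = max(best, (f(w + t * z) - f(w)) / t)
    return best

# For f(x) = |x| at u = 0: f^0(0; z) = |z|, hence <xi, z> <= |z| for
# all z gives the generalized gradient [-1, 1], which contains 0.
f = abs
assert abs(gen_dir_deriv(f, 0.0, 1.0) - 1.0) < 1e-6
assert abs(gen_dir_deriv(f, 0.0, -1.0) - 1.0) < 1e-6
```

The sampling only approximates the $\limsup$, but for this piecewise linear example it recovers the exact value.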
A locally Lipschitz continuous functional $J:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is said to fulfill the Palais-Smale condition $(\mathrm{PS})$ if every sequence $\{u_{n}\}$ in $\mathbb{R}^{n}$ such that $\{J(u_{n})\}$ is bounded and
\begin{equation*}
J^{0}(u_{n};u-u_{n})\geq -\varepsilon _{n}\Vert u-u_{n}\Vert
\end{equation*}
for all $u\in \mathbb{R}^{n}$, where $\varepsilon _{n}\rightarrow 0^{+}$, possesses a convergent subsequence.

Our main tool will be the following theorem.
\begin{theorem}
\label{MPT} \cite{MoVa}Let $J:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be a locally Lipschitz continuous functional satisfying the $(\mathrm{PS})$ condition. Assume that there exist $u_{1},u_{2}\in \mathbb{R}^{n}$, $u_{1}\neq u_{2}$, and $r\in (0,\left\Vert u_{2}-u_{1}\right\Vert )$ such that
\begin{equation*}
\inf \{J(u):\left\Vert u-u_{1}\right\Vert =r\}\geq \max \{J(u_{1}),J(u_{2})\},
\end{equation*}
and denote by $\Gamma $ the family of continuous paths $\gamma :[0,1]\rightarrow \mathbb{R}^{n}$ joining $u_{1}$ and $u_{2}$. Then
\begin{equation*}
c:=\underset{\gamma \in \Gamma }{\inf }\,\underset{s\in \lbrack 0,1]}{\max }\,J(\gamma (s))\geq \max \{J(u_{1}),J(u_{2})\}
\end{equation*}
is a critical value of $J$ and $K_{c}\backslash \{u_{1},u_{2}\}\neq \emptyset $, where $K_{c}$ is the set of critical points at the level $c$.
\end{theorem}
\section{Main result}
\begin{theorem}
\label{MainTheo} Assume that $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ is a locally Lipschitz mapping such that\newline
(b1) for any $y\in \mathbb{R}^{n}$ the functional $\varphi :\mathbb{R}^{n}\rightarrow \mathbb{R}$ defined by
\begin{equation*}
\varphi \left( x\right) =\frac{1}{2}\left\Vert f\left( x\right) -y\right\Vert ^{2}
\end{equation*}
satisfies the non-smooth Palais-Smale condition;\newline
(b2) for any $x\in \mathbb{R}^{n}$ the set $\partial f(x)\subseteq \mathbb{R}^{n\times n}$ is of maximal rank;\newline
(b3) for any $x\in \mathbb{R}^{n}$ the following chain rule holds:
\begin{equation*}
\partial \left( \frac{1}{2}\left\Vert f\left( x\right) -y\right\Vert ^{2}\right) \subset (f\left( x\right) -y)\circ \partial f(x).
\end{equation*}
Then $f$ is invertible on $\mathbb{R}^{n}$ and $f^{-1}$ is locally Lipschitz.
\end{theorem}
\begin{proof}
We follow the ideas used in the proof of the Main Theorem in \cite{SIW}, with necessary modifications due to the fact that we now use non-smooth critical point theory and a weaker type of differentiability. In view of Lemma \ref{lemma2}, (\textit{b2}) implies that $f$ is locally invertible. Thus it is sufficient to show that $f$ is onto and one to one.\bigskip

Let us fix any point $y\in \mathbb{R}^{n}$. The functional $\varphi $ is locally Lipschitz, bounded from below, and satisfies the non-smooth Palais-Smale condition. Thus it attains a minimum at some point $\overline{x}$, see Corollary 15.6 in \cite{jabri}, which necessarily satisfies the non-smooth Fermat rule, i.e. $0\in \partial \varphi \left( \overline{x}\right) $. By condition (b3) we obtain
\begin{equation*}
0\in \partial \left( \frac{1}{2}\left\Vert f\left( \overline{x}\right) -y\right\Vert ^{2}\right) \subset (f\left( \overline{x}\right) -y)\circ \partial f(\overline{x}).
\end{equation*}
Then $0\in (f\left( \overline{x}\right) -y)\circ \partial f(\overline{x})$. Since every matrix in $\partial f(\overline{x})$ is nonsingular by (b2), we see that $f\left( \overline{x}\right) -y=0$. Thus $f$ is surjective. \bigskip
Now we argue by contradiction that $f$ is one to one. Suppose there are $x_{1}$ and $x_{2}$, $x_{1}\neq x_{2}$, $x_{1},x_{2}\in \mathbb{R}^{n}$, such that $f\left( x_{1}\right) =f\left( x_{2}\right) =a\in \mathbb{R}^{n}$. We will apply Theorem \ref{MPT}. Thus we put $e=x_{1}-x_{2}$ and define the mapping $g:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ by the following formula:
\begin{equation*}
g\left( x\right) =f\left( x+x_{2}\right) -a\text{.}
\end{equation*}
Observe that $g\left( 0\right) =g\left( e\right) =0$. We define a locally Lipschitz functional $\psi :\mathbb{R}^{n}\rightarrow \mathbb{R}$ by
\begin{equation*}
\psi \left( x\right) =\frac{1}{2}\left\Vert g\left( x\right) \right\Vert ^{2}.
\end{equation*}
By (\textit{b1}) and by its definition, the functional $\psi $ satisfies the non-smooth Palais-Smale condition, and also $\psi \left( e\right) =\psi \left( 0\right) =0$. Fix $\rho >0$ such that $\rho <\left\Vert e\right\Vert $ and take any $x\in \partial \overline{B\left( 0,\rho \right) }$. By the Lebourg Mean Value Theorem, Proposition 3.36 in \cite{ochalbook}, there exist $z\in \left[ 0,x\right] $, where $\left[ 0,x\right] $ is the line segment connecting $0$ and $x$, and an element $\xi \in \partial f\left( z\right) $ such that
\begin{equation*}
g\left( 0\right) -g\left( x\right) =-g\left( x\right) =\left\langle \xi ,-x\right\rangle .
\end{equation*}
Let us fix such a $\xi \in \partial f\left( z\right) $. Since $\xi :\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ is an invertible linear operator, there exists a constant $\alpha _{\xi ,z}>0$ such that for all $x\in \mathbb{R}^{n}$
\begin{equation*}
\left\Vert \left\langle \xi ,x\right\rangle \right\Vert \geq \alpha _{\xi ,z}\left\Vert x\right\Vert .
\end{equation*}
Thus for all $x\in \mathbb{R}^{n}$
\begin{equation*}
\left\Vert g\left( x\right) \right\Vert \geq \alpha _{\xi ,z}\left\Vert x\right\Vert .
\end{equation*}
From the above we get
\begin{equation*}
\inf_{\left\Vert x\right\Vert =\rho }\psi (x)\geq \frac{1}{2}\left( \alpha _{\xi ,z}\right) ^{2}\rho ^{2}>0=\psi \left( e\right) =\psi \left( 0\right) .
\end{equation*}
Thus, by Theorem \ref{MPT} applied to $J=\psi $, the functional $\psi $ has a non-smooth critical point $v$ which is different from $0$ and $e$, since the corresponding critical value satisfies $\psi \left( v\right) \geq \inf_{\left\Vert x\right\Vert =\rho }\psi (x)>0$. On the other hand, since $v$ is a critical point, we get again by (b3) that
\begin{equation*}
0\in \partial \left( \frac{1}{2}\left\Vert f\left( v+x_{2}\right) -a\right\Vert ^{2}\right) \subset (f\left( v+x_{2}\right) -a)\circ \partial f(v+x_{2}).
\end{equation*}
This means that $f\left( v+x_{2}\right) -a=0$, contrary to the assumption. Thus we obtain a contradiction, which shows that $f$ is one to one.
\end{proof}
\begin{remark}
The only assumption which seems somewhat demanding is (b3). It must be assumed since there is no suitable chain rule formula for Lipschitz mappings when the outer function is continuously differentiable and the inner one is Lipschitz. In fact, the known chain rule works the other way round, i.e. the outer function is locally Lipschitz and the inner one differentiable. We see by the following example, however, that this assumption is not vacuous and there are functions which satisfy it. Let us take $f\left( x,y\right) =\left( 2x-\left\vert x\right\vert ,\left\vert y\right\vert -2y\right) $. Then $\left( 2x-\left\vert x\right\vert \right) ^{2}+\left( 2y-\left\vert y\right\vert \right) ^{2}\rightarrow \infty $ as $\left\Vert \left( x,y\right) \right\Vert \rightarrow \infty $. Note that $\left( 2x-\left\vert x\right\vert \right) ^{2}+\left( 2y-\left\vert y\right\vert \right) ^{2}$ is differentiable, so our assumption is fulfilled.
\end{remark}
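The map in the remark can also be checked directly: each component of $f(x,y)=(2x-\left\vert x\right\vert ,\left\vert y\right\vert -2y)$ is piecewise linear with slopes of a single sign, so $f$ has an explicit global inverse. The following numerical sketch (ours) confirms the round trip and the growth of $\left\Vert f(x,y)\right\Vert $:

```python
# f(x, y) = (2x - |x|, |y| - 2y) from the remark: piecewise linear,
# with an explicit global inverse, hence a bijection of R^2.
def f(x, y):
    return (2 * x - abs(x), abs(y) - 2 * y)

def f_inv(u, v):
    x = u if u >= 0 else u / 3.0     # first component has slopes 1 and 3
    y = -v if v <= 0 else -v / 3.0   # second component has slopes -1 and -3
    return (x, y)

pts = [(0.0, 0.0), (1.5, -2.0), (-3.0, 4.0), (-0.5, -0.5)]
for x, y in pts:
    u, v = f(x, y)
    assert f_inv(u, v) == (x, y)     # round trip is exact here

# coercivity: ||f(t, t)|| grows with |t|
norms = [sum(c * c for c in f(t, t)) for t in (1.0, 10.0, 100.0)]
assert norms[0] < norms[1] < norms[2]
```

The exact inverse is what Theorem \ref{MainTheo} guarantees abstractly; here it can simply be written down case by case.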
\section{Introduction}
Triboelectric nanogenerators (TENGs) have received
considerable attention recently as potential candidates for energy
scavenging~\cite{envsci17, id1, id2, id4}. These devices have been
shown to convert mechanical energy into electricity in
applications such as energy harvesting and self-powered sensors~\cite{id5,
id6, id7}. Furthermore, TENGs have many advantages over existing energy harvesting
technologies~\cite{id1, id2, id5, id7, id8}, such as low cost, simple
construction, relatively high power, flexibility and robustness.
The contact-mode triboelectric nanogenerator is the most
commonly used TENG architecture owing to its simplicity and output
performance~\cite{id8, id9}. Typically, it consists of two triboelectric
plates, at least one being of dielectric material, each attached to an
electrode. When the plates come into contact, one becomes
positively charged and the other, negatively. (Static electricity produced
by friction is a well-known example of the same effect.) We feel that it is timely
to discuss, from an applied mathematical point of view, the power produced by a TENG,
whose construction is described in detail in for example~\cite{envsci17, id4, id8}.
A related device, a piezo-electric generator designed to harvest energy
from the heartbeat, is described in~\cite{zhang}, but its mathematical
modelling, at least from the point of view we take here, is straightforward and
so of less interest.
In this paper, we consider the most common configuration --- two metal electrodes, each with
a layer of dielectric attached --- in order to assess its power output characteristics.
Our starting point is the ordinary differential equation (o.d.e.) that describes such a TENG
connected to a load resistance $R$. Our main assumption is that the TENG is
being driven periodically at a frequency $\omega$, that is to say, the
separation of the plates varies periodically with time. We then adopt a circuit theory approach, as
laid out in~\cite{wang_ode}, by modelling the system as the circuit in Figure~\ref{Matt}.
The circuit leads directly to a differential equation, and this forms the
basis for our study. The circuit and its mathematical description are both
straightforward, the only complication arising from the fact that the
capacitance, $C(t)$, is a periodic function of time: this is inescapable and is a direct
consequence of the periodically varying plate separation. It is this that generates the
time variation in both the capacitance and the input voltage, and
guarantees that their fundamental frequencies, $\omega$, are identical.
\begin{figure}[htbp]
\begin{centering}
\includegraphics*[width=4in]{cct.eps}
\caption{A circuit model of the TENG studied in this paper, in which the voltage
source $V(t)$ and the capacitance $C(t)$ are both periodic with the same
fundamental frequency $\omega$. This circuit is described by the
differential equation~\eqref{ode}.}
\label{Matt}
\end{centering}
\end{figure}
We are particularly interested in computing, both accurately and approximately, the mean
power delivered to the load as a function of the parameters of the system, and in order to
investigate this, we first take a perturbation theory approach. This leads to
two formulae for the power, one valid for small $R$ and the other for large
$R$. We also tackle the same problem via Fourier series, this approach showing
how the power generated is distributed over different harmonics, that is,
integer multiples of $\omega$. The mean (as opposed to, say, the peak) power
is especially amenable to calculation, and is also the most appropriate measure of the
effectiveness of the generator.
We should give here some justification for our analytical approach, which
takes up the bulk of the paper. After all, the mean power, current
waveforms and so on can also be computed numerically --- why, then, compute
them analytically? We offer several reasons for our approach. In general,
analytical results give deeper insight into the problem, and in this paper,
we derive several approximate expressions for the variables of interest,
and use these, for instance, to optimise the power output by means of simple arguments.
For example, we derive, by perturbation theory, a simple expression for the value
of $R$ that maximises the mean power. Contrast this with a purely numerical approach,
in which such an optimisation would have to be carried out for one set of parameters at a time.
Furthermore, for large $R$, the transient times are large --- a
numerical solution would have to be continued for long times in order to
ensure that the transient has decayed sufficiently. By contrast, our
analytical solution is set up specifically to correspond to the steady state (post-transient)
behaviour. The Fourier series approach, which we also discuss, immediately gives
insight into how rapidly the Fourier coefficients decrease in magnitude
with index. It also turns out to be possible to express these coefficients explicitly in terms of
Bessel functions.
The rest of the paper is organised as follows. We first derive the o.d.e., including the
periodic functions $V(t)$ and the reciprocal capacitance (elastance), $B(t) := 1/C(t)$.
Both of these depend on a function $I(x)$, which in turn comes from calculating the
electric field in the system. This was derived in~\cite{envsci17} for
square plates; for the sake of completeness, we give the derivation in the
slightly more general case of rectangular plates in Appendix~I.
On examining both $V(t)$ and $B(t)$ for practical values of the parameters,
we make simple approximations to them, which in turn leads to a simplified
version of the o.d.e. We then carry out the mean power calculations mentioned above,
in both the case of general periodic source voltage and elastance, and their
approximations, and follow this with some comparisons between numerical solutions
of the o.d.e.\ in both cases. This comparison shows the approximations to be
very good. We then discuss an approach to power computation
based on Fourier series, after which we draw some conclusions.
\section{The o.d.e.}
The o.d.e.\ in its original form
\begin{equation}
AR\dot{\sigma} + \underbrace{\frac{1}{\pi}\left(\frac{1}{\epsilon_1} +
\frac{1}{\epsilon_2}\right)\left\{I(x_1 + x_2 + z(t)) - I_0\right\}}_{G(t)}\,\sigma =
F_1(t) + F_2(t)
\label{ode_orig}
\end{equation}
was discussed in~\cite{envsci17}. Here,
$\sigma(t)$, to be solved for, is the surface charge density and $\dot\sigma$ is its time
derivative. Also, $A = wl = \alpha w^2$ is the area of the plates, $\alpha = l/w$ is their aspect ratio,
$R > 0$ is a fixed load resistance, $\epsilon_1$ and $\epsilon_2$ are the permittivities
of the dielectrics, with thickness $x_1$ and $x_2$ respectively;
and $\sigma_T$ is the triboelectric charge density, which is constant and a
property of the materials used. Furthermore, the excitation $z(t) =
z_0[1+\sin(\omega t +\phi)]$, where $z_0$ is the amplitude, $\phi$ is a fixed phase angle, and
$\omega = 2\pi f$ is the excitation frequency; and
$F_i = \frac{\sigma_T}{\pi\epsilon_i}\left\{I\left(x_i + z(t)\right) - I(x_i)\right\}$,
with $i = 1,2$, where $I(x)$ is defined in equation~(\ref{Idef}) in
Appendix~I. Finally, $I_0 = \lim_{x\rightarrow 0} I(x)$. The variable parameters of
interest are $R$ and $\omega$ --- we consider all the other parameters to be fixed.
Since $\sigma$ is a charge density, we interpret $q := A\sigma$ as a charge.
Hence, replacing $A\sigma$ in equation~(\ref{ode_orig}) with $q$, we have
$$
R\dot{q} + \frac{G(t)}{A} q = F_1(t) + F_2(t),
$$
where $G(t)$ is defined in equation~(\ref{ode_orig});
and it becomes clear that the function of time that multiplies $q$ can be
interpreted as the reciprocal of a time-dependent capacitance, provided that
this is defined as the ratio of the change in the charge in the system to the change
in potential difference (as opposed to the derivative of the charge with respect to potential
difference); and that the term on the right hand
side is a time-dependent voltage, $V(t) = F_1(t) + F_2(t)$. That is,
\begin{equation}
R\dot{q} + B(t) q = V(t)
\label{ode}
\end{equation}
where
\begin{equation}
B(t) = \frac{G(t)}{A} = \frac{1}{A\pi}\left(\frac{1}{\epsilon_1} + \frac{1}{\epsilon_2}\right)
\left\{I(x_1 + x_2 + z(t)) - I_0\right\}.
\label{Bdef}
\end{equation}
We claim that equation~(\ref{ode}) models the TENG and the rest of this paper
discusses its periodic solution, $q(t)$, from which the current, $i(t) := \dot{q}(t)$,
and much else, can be deduced.
\subsection{Practical parameter values}
Typical values for the parameters, taken from~\cite{envsci17}, are given in
Table~\ref{numvals}. Plots of the exact $C(t)$, $B(t)$ and $V(t)$ are given in Figure~\ref{CV} for these
parameter values. Note that, despite the fact that $z(t)$ contains only one
harmonic, $C(t)$ and $V(t)$ contain all harmonics owing to the nonlinear
function $I(\mbox{const.}+z(t))$ used in their definition. However, Figure~\ref{CV}
suggests that a good approximation might be obtained by only considering
the first harmonic.
\begin{table}[ht]
\centering
\begin{tabular}{lll} \hline
\multicolumn{3}{c} {\raisebox{2.2ex}{ }Parameter values for practical triboelectric
nanogenerator}\\ \hline
Name & Symbol & Numerical value\\ \hline
\raisebox{2.2ex}{}Permittivity of free space & $\epsilon_0$ & $8.854\times 10^{-12}$ Fm$^{-1}$\\ \hline
Permittivity of dielectric 1 & $\epsilon_1$ & $3.30\epsilon_0$\\
Permittivity of dielectric 2 & $\epsilon_2$ & $3.27\epsilon_0$\\
Triboelectric charge density & $\sigma_T$ & $4.8\times 10^{-5}$ Cm$^{-2}$\\
Thickness of dielectric 1 & $x_1$ & 200 $\mu$m\\
Thickness of dielectric 2 & $x_2$ & 20 $\mu$m\\ \hline
Default aspect ratio, $l/w$ & $\alpha$ & 1\\
Default plate dimensions & $l, w$ & $l = w = 5\times 10^{-2}$ m\\
Default plate area & $A$ & $2.5\times 10^{-3}$ m$^2$\\
Excitation amplitude \& phase & $z_0,\,\phi$ & resp.\ $1.0\times 10^{-3}$ m, $3\pi/2$\\
Elastance parameters & $B_0, B_1$ & resp.\ $1.6\times 10^{10}, 1.3\times 10^{10}$ F$^{-1}$\\
Drive voltage amplitude & $V_0$ & 1550 V\\ \hline
Excitation frequency & $f = \omega/2\pi$ & 0.1 -- $10^3$ Hz (nom.\ 1 Hz)\\
Load resistance & $R$ & $10^5$ -- $10^{15}\Omega$\\ \hline
\end{tabular}
\caption{Names, symbols and numerical values for the parameters for a practical TENG.
Note the very large range of the parameter $R$.}
\label{numvals}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics*[width=5in]{CV.eps}
\caption{The periodic functions of time in equation~(\ref{ode}), for
$\omega = 2\pi$ rad/s: $C(t)$, left; $B(t) = 1/C(t)$, middle; and $V(t)$, right.
None of the functions are approximated. Parameter values are from Table~\ref{numvals}.}
\label{CV}
\end{figure}
\subsection{The practical approximation}
Clearly, $I(\mathrm{constant} + z(t))$ is not sinusoidal, even
though $z(t)$ is: so strictly speaking, $B(t)$ and $V(t)$ should be
expanded as Fourier series, both with identical fundamental frequency $\omega$.
However, in the practical case considered here, $x_1$, $x_2$, $z_0$ and $I$ are such that good first
approximations for $B(t)$ and $V(t)$ are
\begin{equation}
B(t) = \sum_{k\in\mathbb Z}b_k \mathrm{e}^{\mathrm{i} k\omega t} \approx B_0 - B_1 \cos\omega t\qquad\qquad
V(t) = \sum_{k\in\mathbb Z}v_k \mathrm{e}^{\mathrm{i} k\omega t}\approx V_0(1-\cos\omega t),
\end{equation}
as suggested by Figure~\ref{CV}. In fact,
$\left|b_2/b_1\right| \approx 9.385\times 10^{-3},$
$\left|b_3/b_1\right| \approx 7.610\times 10^{-6}$ and
$\left|v_2/v_1\right| \approx 9.351\times 10^{-3}$,
$\left|v_3/v_1\right| \approx 6.898\times 10^{-6}.$
Thus, we study where appropriate both
the general o.d.e.\ with $B(t)$ and $V(t)$ subject to some mild conditions but
otherwise arbitrary, and also the approximate o.d.e.\
\begin{equation}
R\dot{q} + (B_0 - B_1\cos\omega t)q = V_0(1 - \cos\omega t).
\label{apode}
\end{equation}
We refer to equation~\eqref{apode} as the `practical approximation' in what follows.
A better approximation would take into account more terms in the Fourier
series expansions of $B(t)$ and $V(t)$, but in practice, for the parameter
values given in Table~\ref{numvals}, this approximation gives good results --- as we shall see.
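Equation \eqref{apode} is straightforward to integrate numerically. The sketch below (ours, with simple illustrative parameter values rather than those of Table~\ref{numvals}) uses a fourth-order Runge--Kutta step, runs well past the transient, and then estimates the mean delivered power $\left\langle R\dot{q}^{2}\right\rangle $ over one period:

```python
import math

# RK4 integration of R q' + (B0 - B1 cos wt) q = V0 (1 - cos wt),
# i.e. the 'practical approximation' (illustrative parameters),
# followed by a mean-power estimate <R i^2> over one period.
R, B0, B1, V0, w = 1.0, 2.0, 1.0, 1.0, 2.0 * math.pi
T0 = 2.0 * math.pi / w

def dq(t, q):          # i(t) = dq/dt, read off from the o.d.e. itself
    return (V0 * (1.0 - math.cos(w * t)) - (B0 - B1 * math.cos(w * t)) * q) / R

def rk4_step(t, q, h):
    k1 = dq(t, q)
    k2 = dq(t + h / 2, q + h * k1 / 2)
    k3 = dq(t + h / 2, q + h * k2 / 2)
    k4 = dq(t + h, q + h * k3)
    return q + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, t, q = 1e-4, 0.0, 0.0
while t < 20.0 * T0:   # run well past the transient (tau = R / B0)
    q = rk4_step(t, q, h)
    t += h

q_period_start, power, n, t_end = q, 0.0, 0, t + T0
while t < t_end:       # average R i^2 over one further period
    power += R * dq(t, q) ** 2
    q = rk4_step(t, q, h)
    t += h
    n += 1
mean_power = power / n  # steady-state mean power delivered to the load
```

Since the steady state is periodic, $q$ returns (almost exactly) to \texttt{q\_period\_start} after the averaging period, which serves as a built-in consistency check on the integration.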
\section{Analytical results}
\subsection{Unique periodic attractor}
\label{exist}
In this section, we show that, under conditions that must apply on physical
grounds, there is a unique periodic solution to the differential
equation~\eqref{ode}, and that solutions starting from any initial
condition are attracted to it. We can then talk about `the periodic
solution', knowing that this exists.
In a real system described by the differential equation~\eqref{ode},
on physical grounds alone, $B(t)$ and $V(t)$ must satisfy the Dirichlet
conditions~\cite{haykin},
and so both functions can be expanded in Fourier series. Furthermore, as
shown in the Appendix~I, $B(t) > 0$ for all $t$ and so the mean value of $B(t) > 0$.
We will use both these facts in what follows.
In practice, we will be interested only in periodic solutions to the
o.d.e.~\eqref{ode}, but in this section alone, we need to solve the o.d.e.\ for the complete
solution, which we call $q_c(t)$. The standard way to solve such an o.d.e.\ is by the
integrating factor method~\cite{piaggio}, which gives
\begin{equation}
q_c(t) = \exp\left\{-\frac{1}{R}\int_0^t B(t')\,\mathrm{d} t'\right\}
\left[\frac{1}{R}\int_0^t V(t') \exp\left\{\frac{1}{R}\int_0^{t'} B(t'')\,\mathrm{d} t''\right\}
\mathrm{d} t' + q_c(0)\right].
\label{intfac}
\end{equation}
Although the solution in this form is not obviously useful for direct
computation of, for instance, the mean power, it \textit{is} useful to show
that for any initial condition, $q_c(0)$, there is a unique periodic attractor. We now prove this.
We introduce the notation for the mean, $\mn{f}$, of any periodic function $f(t)$
with period $T_0$, which is
$$\langle f\rangle := \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f(t)\,\mathrm{d} t.$$
We then define $\widetilde{f}(t) := f(t) - \mn{f}$ as the zero-mean, time-varying part of
$f(t)$.
Since $B(t)$ and $V(t)$ both have the same period $T_0 = 2\pi/\omega$, all
the time-varying terms in equation~\eqref{intfac} have period $T_0$. Hence,
writing $B(t) = \mn{B} + \widetilde{B}(t)$,
we can make the following Fourier expansion:
$$\frac{V(t)}{R} \exp\left\{\frac{1}{R}\int_0^{t} B(t')\,\mathrm{d} t'\right\} =
\mathrm{e}^{\mn{B}t/R}\,\frac{V(t)}{R}\exp\left\{\frac{1}{R}\int_0^{t} \widetilde{B}(t')\,\mathrm{d} t'\right\} =
\mathrm{e}^{t/\tau}\sum_{k\in\mathbb Z} I_k \mathrm{e}^{\mathrm{i} k\omega t},$$
where $\tau = R/\mn{B} > 0$ since both $R$ and $\mn{B}$ are positive, and the Fourier
coefficients $I_k\in\mathbb C$ have dimensions of current. Using this in~\eqref{intfac} then gives
\begin{eqnarray*}
q_c(t) &=& \mathrm{e}^{-t/\tau}\,\exp\left\{-\frac{1}{R}\int_0^t \widetilde{B}(t')\,\mathrm{d} t'\right\}
\left[\int_0^t\sum_{k\in\mathbb Z} I_k \mathrm{e}^{(1+\mathrm{i} k\omega\tau)t'/\tau}\,\mathrm{d} t'+ q_c(0)\right]\\
&=& \exp\left\{-\frac{1}{R}\int_0^t \widetilde{B}(t')\,\mathrm{d} t'\right\}
\left[\sum_{k\in\mathbb Z}\frac{\tau I_k\mathrm{e}^{\mathrm{i} k\omega t}}{1+\mathrm{i} k\omega\tau}
+ \mathrm{e}^{-t/\tau}\left(q_c(0) - \sum_{k\in\mathbb Z}\frac{\tau I_k}{1+\mathrm{i} k\omega\tau}\right)\right].
\end{eqnarray*}
From this we deduce that
\begin{enumerate}
\item Since $\tau > 0$, as $t\rightarrow\infty$, for all initial conditions $q_c(0)$,
$q_c(t)$ tends to the period-$T_0$ function
$$q(t) = \exp\left\{-\frac{1}{R}\int_0^t \widetilde{B}(t')\,\mathrm{d} t'\right\}
\sum_{k\in\mathbb Z}\frac{\tau I_k\mathrm{e}^{\mathrm{i} k\omega t}}{1+\mathrm{i} k\omega\tau};$$
\item Any solution $q_c(t)$ approaches this periodic attractor
exponentially,\footnote{Although the rate of approach to the attractor will be very
slow for large $R$ (i.e. large $\tau$).}
that is, $|q_c(t) - q(t)| < c \mathrm{e}^{-t/\tau}$ for some positive constant $c$.
\item The choice $q_c(0) = \sum_{k\in\mathbb Z}\tau I_k/(1+\mathrm{i} k\omega\tau)$
puts the solution directly on the periodic attractor.
\end{enumerate}
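As an illustration of the attractor property (not a replacement for the proof above), the following sketch integrates the o.d.e.\ from two widely separated initial charges and confirms that the solutions coalesce. The function names and parameter values are hypothetical placeholders, not those of Table~\ref{numvals}.

```python
import numpy as np

# Illustrative parameters only: chosen so that B0 > B1 > 0, hence B(t) > 0.
B0, B1, V0 = 2.0, 1.0, 1.5
omega, R = 2 * np.pi, 0.5

def B(t): return B0 - B1 * np.cos(omega * t)
def V(t): return V0 * (1.0 - np.cos(omega * t))

def rk4_step(q, t, h):
    """One classical Runge-Kutta step for R*dq/dt = V(t) - B(t)*q."""
    f = lambda t, q: (V(t) - B(t) * q) / R
    k1 = f(t, q)
    k2 = f(t + h / 2, q + h * k1 / 2)
    k3 = f(t + h / 2, q + h * k2 / 2)
    k4 = f(t + h, q + h * k3)
    return q + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(q0, t_end, n_steps):
    q, t = q0, 0.0
    h = t_end / n_steps
    for _ in range(n_steps):
        q = rk4_step(q, t, h)
        t += h
    return q

# Two very different initial charges end up on the same periodic attractor.
qa = integrate(0.0, 20.0, 20000)
qb = integrate(10.0, 20.0, 20000)
print(abs(qa - qb))  # exponentially small, as the argument above predicts
```

Here $\tau = R/\mn{B} = 0.25$, so by $t = 20$ the transient has decayed by a factor $\mathrm{e}^{-80}$ and the two trajectories are numerically indistinguishable.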
\subsection{Perturbation series --- small $R$}
Although the previous section derives an exact expression for the complete
solution, $q_c(t)$, and the periodic solution, $q(t)$, these
expressions do not lend themselves to direct computation of the power, or
simple approximations to it. To circumvent
this, we estimate $q(t)$ using a perturbation theory approach.
Underlying this approach is the assumption that $B(t)$ and $V(t)$ are
both infinitely differentiable (which is clearly the case for the practical approximation).
Our starting point is to define the dimensionless parameter $\varepsilon :=
\omega R/\mn{B}$, which is small for small $R$. In terms of this, we
re-write the o.d.e.~\eqref{ode} as
\begin{equation}
\varepsilon\dot{q} + \frac{\omega}{\mn{B}}B(t)\, q = \frac{\omega}{\mn{B}} V(t).
\label{Rsode}
\end{equation}
We now set
\begin{equation}
q(t) = q_0(t) + \varepsilon q_1(t) + \varepsilon^2 q_2(t) + \ldots,
\label{pertser}
\end{equation}
and substituting this into~\eqref{Rsode}, and equating to zero the
coefficients of each power of $\varepsilon$, we find
$$\varepsilon^0:\;\; q_0 = \frac{V}{B},\;\;\; \varepsilon^1:\;\;\dot{q}_0 + \frac{\omega}{\mn{B}}\, B q_1 = 0,
\;\;\; \varepsilon^2:\;\;\dot{q}_1 + \frac{\omega}{\mn{B}}\, B q_2 = 0,\ldots,\;
\varepsilon^k:\;\;\dot{q}_{k-1} + \frac{\omega}{\mn{B}}B q_k = 0,$$
where for brevity we have dropped the argument $(t)$ for $B$, $V$ and $q_k$.
Hence, we have
$$q_0 = \frac{V}{B},\qquad
q_1 = -\frac{\mn{B}}{\omega B}\,\frac{\mathrm{d}}{\mathrm{d} t}\!\left(\frac{V}{B}\right)
= -\frac{\mn{B}}{\omega}\;\frac{B\dot{V} - \dot{B}V}{B^3},$$
$$q_2 = -\frac{\mn{B}}{\omega B}\,\dot{q}_1 =
\frac{\mn{B}^2}{\omega^2}\frac{1}{B}\;\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{1}{B}\frac{\mathrm{d}}{\mathrm{d}
t}\!\left(\frac{V}{B}\right)\right) =
\frac{\mn{B}^2}{\omega^2}\;\frac{B(B\ddot{V}-\ddot{B}V) -
3\dot{B}(B\dot{V}-\dot{B}V)}{B^5} $$
with obvious generalisations for $q_k$, $k > 2$. The above expressions for $q_k$ are
general, in that they apply for any infinitely differentiable functions $B, V$ provided
only that $B > 0,\,\forall t$.
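The recursion $q_k = -\bigl(\mn{B}/(\omega B)\bigr)\dot{q}_{k-1}$ and the quoted closed forms for $q_1$, $q_2$ can be checked symbolically; a sketch (with \texttt{Bbar} standing for $\mn{B}$):

```python
import sympy as sp

# Generic, infinitely differentiable B(t), V(t); no practical approximation yet.
t, omega, Bbar = sp.symbols('t omega Bbar', positive=True)
B = sp.Function('B')(t)
V = sp.Function('V')(t)

# The small-R recursion: q_k = -(<B>/(omega*B)) * d(q_{k-1})/dt.
q0 = V / B
q1 = -Bbar / (omega * B) * sp.diff(q0, t)
q2 = -Bbar / (omega * B) * sp.diff(q1, t)

# Closed forms as quoted in the text.
q1_text = -Bbar / omega * (B * sp.diff(V, t) - sp.diff(B, t) * V) / B**3
q2_text = (Bbar**2 / omega**2) * (B * (B * sp.diff(V, t, 2) - sp.diff(B, t, 2) * V)
           - 3 * sp.diff(B, t) * (B * sp.diff(V, t) - sp.diff(B, t) * V)) / B**5

print(sp.simplify(q1 - q1_text), sp.simplify(q2 - q2_text))  # both 0
```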
The power series expansion~\eqref{pertser} is very unlikely to converge. Even in
the case of constant $B$, it is easy to check that the periodic solution $q(t)$ is not analytic
around $\varepsilon = 0$ for a generic, analytic $V(t)$ containing all the harmonics. Additionally, the
presence of the function $B(t)$ will only complicate the situation. However, even
though the series expansion is not expected to be convergent, it is likely to be an asymptotic series,
and hence any truncation of it should provide a reliable approximation of the full solution, for
$\varepsilon$ small enough.
We can make further progress from this point if we use the practical approximation,
which is that $B(t) = B_0 - B_1\cos\omega t$ and $V(t) = V_0(1 - \cos\omega t)$,
where $B_0 = \mn{B} > B_1 > 0$. With these definitions of $B$ and $V$ in
force, we have that, for $k\in\mathbb N$, $q_{2k-1}$ are odd and $q_{2k}$ are
even functions of $t$; hence, $\dot{q}_{2k-1}$ are even and $\dot{q}_{2k}$ are odd.
The instantaneous power is $p(t) := \dot{q}(t)^2R$, and the mean power is defined as $\mn{p} := \mn{\dot{q}^2}R$. From the
perturbation series~\eqref{pertser}, we have
$$\mn{\dot{q}^2} = \mn{\dot{q}_0^2} + 2\varepsilon\mn{\dot{q}_0\dot{q}_1} +
\varepsilon^2(2\mn{\dot{q}_0\dot{q}_2} + \mn{\dot{q}_1^2}) + O(\varepsilon^3).$$
Now, by the definition of $\mn{f}$, we have $\mn{f} = 0$ whenever $f$ is an odd function of $t$. Thus,
using the parity of the functions $q_k$ and their first derivatives, we have that
$\mn{\dot{q}_0\dot{q}_1} = 0$, and likewise every $\mn{\dot{q}_j\dot{q}_k}$ with $j+k$ odd vanishes. Hence, writing $\mn{p}_s$ for the mean power at small $R$, and so at small $\varepsilon$, we have
$$\mn{p}_s = R\mn{\dot{q}_0^2} + R\varepsilon^2 \left(2\mn{\dot{q}_0\dot{q}_2} + \mn{\dot{q}_1^2}\right)
+ O\left(\varepsilon^4\right).$$
Since it is useful to know the dependence of $\mn{p}$ on $\omega$ as well
as $R$, we compute
$$\mn{\dot{q}_0^2} = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}\frac{(B\dot{V} - \dot{B}V)^2}{B^4}\,\mathrm{d} t =
\frac{\omega^2}{2\pi}\int_{-\pi}^{\pi} \frac{(B V' - B' V)^2}{B^4}\,\mathrm{d} x
:= \omega^2 a_1,$$
where we have substituted $x = \omega t$ and used a prime to denote
differentiation with respect to $x$.
In fact, this integral can be evaluated in closed form; it is
\begin{equation}
a_1 = \frac{V_0^2 B_0}{2\sqrt{B_0^2 - B_1^2}(B_0 + B_1)^2}.
\label{a1xp}
\end{equation}
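The closed form~\eqref{a1xp} can be spot-checked by quadrature: the integrand is smooth and periodic, so a uniform-grid mean is spectrally accurate. The parameter values below are illustrative placeholders, not those of Table~\ref{numvals}.

```python
import numpy as np

# Illustrative values only; requires B0 > B1 > 0.
B0, B1, V0 = 3.0, 2.0, 1.0

# Uniform grid over one period (endpoint excluded, so the mean of the
# samples equals the periodic trapezoidal rule (1/2pi) * integral).
x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
B  = B0 - B1 * np.cos(x)
Bp = B1 * np.sin(x)           # B'(x)
V  = V0 * (1.0 - np.cos(x))
Vp = V0 * np.sin(x)           # V'(x)

a1_quad   = np.mean((B * Vp - Bp * V)**2 / B**4)
a1_closed = V0**2 * B0 / (2 * np.sqrt(B0**2 - B1**2) * (B0 + B1)**2)
print(a1_quad, a1_closed)     # agree to many digits
```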
From our previous definitions of $q_0$, $q_1$ and $q_2$, we also have
$$\frac{2}{\omega^2 B_0^2}\mn{\dot{q}_0\dot{q}_2} = \frac{1}{\pi}\int_{-\pi}^\pi
\left(\frac{V}{B}\right)'\frac{\mathrm{d}}{\mathrm{d} x}\left\{\frac{1}{B}\frac{\mathrm{d}}{\mathrm{d} x}\left[\frac{1}{B}
\left(\frac{V}{B}\right)'\right]\right\}\,\mathrm{d} x$$
and
$$\frac{1}{\omega^2 B_0^2}\mn{\dot{q}_1^2} = \frac{1}{2\pi}
\int_{-\pi}^\pi\left\{\frac{\mathrm{d}}{\mathrm{d} x}\left[\frac{1}{B}\left(\frac{V}{B}\right)'\right]
\right\}^2\,\mathrm{d} x.$$
Defining $a_3$ as the sum of the right hand sides of these two expressions,
we can also determine that
$$a_3 = -\frac{V_0^2 B_0\left(8 B_0^4 + 44 B_0^2B_1^2 + 17 B_1^4\right)}
{16\sqrt{B_0^2 - B_1^2}(B_0-B_1)^3(B_0+B_1)^5},$$
which is negative, since $B_0 > B_1 > 0$.
Bearing in mind now that $\varepsilon = \omega R/B_0$, and that $a_1, a_3$ do not depend on
$\omega$ or $R$, we have that
\begin{eqnarray}
\mn{p}_s &=& a_1\omega^2 R + a_3\omega^4 R^3 + O\left(\omega^6 R^5\right)\nonumber \\
&=& 2.45018\times 10^{-15} \omega^2 R - 1.35505\times 10^{-33}\omega^4 R^3 + O\left(\omega^6
R^5\right)
\label{psmall}
\end{eqnarray}
where $a_1$ and $a_3$ have been evaluated using the values in Table~\ref{numvals}.
\subsection{Perturbation series --- large $R$}
In the case of large $R$, we compute a solution to the o.d.e.\ in a similar way
to the previous section, this time expanding as a power series
in the dimensionless, small parameter $\delta := \mn{B}/\omega R\; (= 1/\varepsilon)$. In
terms of $\delta$, the o.d.e.~\eqref{ode} becomes
\begin{equation}
\dot{q} = \delta\frac{\omega}{\mn{B}}\left[V(t) - q\,B(t)\right].
\label{Rlode}
\end{equation}
Again, we expand $q(t) = q_0(t) + \delta q_1(t) + \delta^2 q_2(t) + \delta^3 q_3(t) +\ldots$
and, substituting this into~\eqref{Rlode} and matching powers of $\delta$, we find
$$\delta^0:\;\; \dot{q}_0 = 0,\qquad
\delta^1:\;\;\dot{q}_1 = \frac{\omega}{\mn{B}}\left(V - q_0\,B\right),\qquad
\delta^2:\;\;\dot{q}_2 = -\frac{\omega}{\mn{B}}\, B\, q_1,\qquad
\delta^3:\;\;\dot{q}_3 = -\frac{\omega}{\mn{B}}\, B\, q_2$$
and so on.
The guiding principle for finding $q_k$ is that for each $k$, $q_k$ is periodic.
Hence, when $q_k$ is expressed as an integral, we must ensure that the
integrand has, in each case, zero mean (otherwise the integral will grow
linearly with $t$).
The equation $\dot{q}_0 = 0$ has solution $q_0 = c_0$ for $c_0$ an as yet
undetermined constant. Hence, we have $\dot{q}_1 = \omega(V - c_0 B)/\mn{B}$.
Recalling that $B(t) = \mn{B} + \widetilde{B}(t)$ and $V(t) = \mn{V} + \widetilde{V}(t)$,
we find
$$q_1(t) = \frac{\omega}{\mn{B}}\int_0^t\mn{V}-c_0\mn{B} +\widetilde{V}(t')
-c_0\widetilde{B}(t')\;\mathrm{d} t' + c_1,$$
where $c_1$ is a constant to be determined at the next stage. Furthermore,
in order for the integrand here to have zero mean, we require $c_0 = \mn{V}/\mn{B}$. Hence,
$$q_1(t) = \frac{\omega}{\mn{B}^2}\int_0^t \mn{B}\widetilde{V}(t') - \mn{V}\widetilde{B}(t')\,\mathrm{d} t'+c_1
\qquad\mbox{and}\qquad
\dot{q}_1(t) = \frac{\omega}{\mn{B}^2}\left(\mn{B}\widetilde{V}(t) - \mn{V}\widetilde{B}(t)\right).$$
The constant $c_1$ is found in an analogous way to $c_0$. Since
$\dot{q}_2(t) = -\omega B(t) q_1(t)/\mn{B}$, we have, using our expression for $q_1(t)$,
\begin{equation}
q_2(t) = -\frac{\omega}{\mn{B}}\int_0^t \left(\mn{B}+\widetilde{B}(t')\right)
\left\{\int_0^{t'}\frac{\omega}{\mn{B}^2}\left(\mn{B}\widetilde{V}(t'')
- \mn{V}\widetilde{B}(t'')\right)\,\mathrm{d} t'' +c_1\right\}\,\mathrm{d} t' + c_2,
\label{q2def}
\end{equation}
where $c_2$ is a new constant that can be determined at the next stage.
Reasoning as before, for $q_2(t)$ to be periodic, the integrand of the
integral w.r.t.\ $t'$ in
equation~\eqref{q2def} must have mean zero, which fixes $c_1$ by
$$c_1 = -\frac{\omega^2}{2\pi\mn{B}^3}\int_{-T_0/2}^{T_0/2}
\left(\mn{B}+\widetilde{B}(t')\right)\left\{\int_0^{t'}\left(\mn{B}\widetilde{V}(t'')
- \mn{V}\widetilde{B}(t'')\right)\,\mathrm{d} t''\right\}\,\mathrm{d} t'.$$
We can in principle continue in the same way to find $q_k,\, k>2$.
Using the practical approximation for $B(t)$ and $V(t)$, we find $q_0 =
V_0/B_0$, $c_1 = 0$ and so
$$q_1(t) = -\frac{V_0(B_0-B_1)}{B_0^2}\sin\omega t,\qquad
q_2(t) = \frac{V_0(B_0-B_1)}{4B_0^3}\left(B_1\cos 2\omega t - 4B_0\cos\omega t
+ 4B_0 - B_1\right) + c_2.$$
In what follows, we also need an expression for $\dot{q}_3$, which obeys
$\dot{q}_3 = -\omega B(t) q_2(t)/B_0$. Hence, we require a value for $c_2$,
which we again deduce by imposing the condition that
$$q_3 = -\frac{\omega}{B_0}\int_0^t \left(B_0 - B_1\cos\omega t'\right) \left\{K_1
\left(B_1 \cos 2\omega t' - 4B_0\cos\omega t' + 4B_0 - B_1\right) + c_2\right\}\,\mathrm{d} t' + c_3,$$
where $K_1 = V_0(B_0-B_1)/4B_0^3$,
is bounded as $t\rightarrow\infty$. Since $\mn{\cos\omega t} = \mn{\cos\omega t \cos 2\omega t} = 0$
but $\mn{\cos^2\omega t} = 1/2$, this condition gives
$$c_2 = -\frac{V_0(B_0 - B_1)(4B_0 + B_1)}{4B_0^3}$$
and so
$$q_3(t) = \frac{V_0(B_0-B_1)}{24B_0^4}\left(3(8B_0^2 - 3B_1^2)\sin\omega t
- 9B_0B_1\sin 2\omega t + B_1^2\sin 3\omega t\right) + c_3.$$
To find the mean power, we again need to compute $\mn{\dot{q}^2} =
\delta^2\mn{\dot{q}_1^2} + 2\delta^3 \mn{\dot{q}_1\dot{q}_2} + \delta^4\mn{2\dot{q}_1\dot{q}_3
+ \dot{q}_2^2} + O(\delta^6)$, where, as before, we observe that $\dot{q}_1(t)\dot{q}_2(t)$ is
an odd function of $t$ in the practical approximation, so its mean is zero.
From the above expressions for $q_1,\,q_2,\,q_3$, we find
$$\mn{\dot{q}_1^2} = \frac{1}{2}\left(\frac{V_0(B_0-B_1)}{B_0^2}\right)^2\omega^2,
\qquad
\mn{2\dot{q}_1\dot{q}_3 + \dot{q}_2^2} = -\frac{V_0^2(B_0-B_1)^3(B_0+B_1)}{2B_0^6}\omega^2.$$
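These two means can be verified numerically from the explicit expressions for $q_1$, $q_2$, $q_3$ above, by averaging over a uniform grid spanning one period. The parameter values are illustrative, not those of Table~\ref{numvals}.

```python
import numpy as np

# Illustrative values only; requires B0 > B1 > 0.
B0, B1, V0, omega = 3.0, 2.0, 1.0, 2 * np.pi
K = V0 * (B0 - B1)

# One full period, uniformly sampled (discrete means of trig polynomials
# are then exact up to rounding).
t = np.linspace(0.0, 2 * np.pi / omega, 1 << 16, endpoint=False)

# Time derivatives of q1, q2, q3 from the displayed formulae.
dq1 = -(K / B0**2) * omega * np.cos(omega * t)
dq2 = (K / (4 * B0**3)) * omega * (4 * B0 * np.sin(omega * t)
        - 2 * B1 * np.sin(2 * omega * t))
dq3 = (K / (24 * B0**4)) * omega * (3 * (8 * B0**2 - 3 * B1**2) * np.cos(omega * t)
        - 18 * B0 * B1 * np.cos(2 * omega * t) + 3 * B1**2 * np.cos(3 * omega * t))

lhs1 = np.mean(dq1**2)
rhs1 = 0.5 * (K / B0**2)**2 * omega**2
lhs2 = np.mean(2 * dq1 * dq3 + dq2**2)
rhs2 = -V0**2 * (B0 - B1)**3 * (B0 + B1) / (2 * B0**6) * omega**2
print(lhs1 - rhs1, lhs2 - rhs2)   # both ~ 0
```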
Defining the mean power for large $R$, $\mn{p}_l$, as $\mn{p}_l := R\mn{\dot{q}^2}$ we have
\begin{eqnarray}
\mn{p}_l &=& \frac{1}{2}\left(\frac{V_0(B_0-B_1)}{B_0}\right)^2 R^{-1} -
\frac{V_0^2(B_0-B_1)^3(B_0+B_1)}{2B_0^2} \omega^{-2} R^{-3}
+ O\left(\omega^{-4} R^{-5}\right)\nonumber \\
&=& 42231.4 R^{-1} -3.67414\times10^{24}\omega^{-2} R^{-3} + O\left(\omega^{-4} R^{-5}\right)
\label{plarge}
\end{eqnarray}
where we have used the parameter values in Table~\ref{numvals} to obtain the last
expression.
\subsection{Comparison with numerics}
\label{cwn}
\begin{figure}[htbp]
\centering
\includegraphics*[width=5.0in]{pwr2.eps}
\caption{Left: The mean power, computed directly from a numerically-obtained $i(t) = \dot{q}(t)$
waveform and not from Fourier series, versus $R$, using the approximate
(circles) and exact (crosses) expressions for $B(t)$ and $V(t)$. Also shown,
over appropriately restricted ranges of $R$, are the series/asymptotic expansions
for $\langle p\rangle_s$ and $\langle p\rangle_l$, from equations~\eqref{psmall} and~\eqref{plarge} respectively.
Right --- see section~\ref{mpar}: mean power from the numerical $i(t)$ as in the left-hand figure with
approximate $B, V$, (solid line), and also using the rational approximation $\mn{p(R)}$,
equation~\eqref{ratapp} (dashed line).}
\label{pwr}
\end{figure}
We now make two comparisons. The first, in Figure~\ref{pwr},
is a plot of the mean power, computed in four different ways:
\begin{itemize}
\item By solving the o.d.e.~\eqref{ode} numerically, using a good numerical algorithm
(the Livermore Stiff o.d.e.\ solver~\cite{lsode}), with the value of $q(0)$
computed by Fourier series,
using (a) the approximate $B(t)$, $V(t)$ functions (circles in Figure~\ref{pwr}) and
(b) the exact expressions for them (crosses);
\item By perturbation theory, as derived in the previous two subsections, with the dashed
line showing $\mn{p}_s$ for $R\in[10^5, 10^8]$, as given by equation~\eqref{psmall},
and the continuous line, $\mn{p}_l$ from equation~\eqref{plarge}, for
$R\in[3\times10^9, 10^{15}]$.
\end{itemize}
The agreement between the approximate and exact
$B$, $V$ is seen to be very good, as is that of the perturbation expressions (within
their ranges of applicability).
\begin{figure}[htbp]
\centering
\includegraphics*[width=5.0in]{icomp.eps}
\caption{The current, computed by solving the o.d.e.~\eqref{ode} numerically
(continuous lines) and using the perturbation series (dashed lines). On the left, $R =
10^7\,\Omega$ and the small $R$ series is used; on the right, $R = 10^{10}\,\Omega$
and the large $R$ series is used. The curves are sufficiently close that
they are almost indistinguishable.}
\label{icomp}
\end{figure}
The second comparison is between currents, computed numerically using
the exact $B(t)$, $V(t)$, and from perturbation theory up to and including
the terms $\dot{q}_3(t)$, both in the small and large $R$ cases
($\dot{q}_3$ is only needed for large $R$). See
Figure~\ref{icomp}, which compares the numerical solution with the small
$R$ perturbation solution ($R = 10^7\,\Omega$) and with the large $R$
perturbation solution ($R = 10^{10}\,\Omega$). The two curves are almost
exactly superimposed.
\subsection{Applications of the perturbation solutions}
\label{appert}
We now briefly discuss some applications of equations~\eqref{psmall} and~\eqref{plarge},
the mean power expressions for small and large $R$ respectively.
In general, these expressions between them give a good approximation to $\langle p\rangle$,
in a very simple form, for all $R$ except $10^8\leq R \leq 3\times 10^9$,
and this may in itself be useful. Furthermore, we have derived expressions
for $q(t)$ up to $O(\varepsilon^2)$ for small $R$ and $O(\delta^3)$ for large $R$,
where $\varepsilon = \omega R/\mn{B}$ and $\delta = 1/\varepsilon$. These approximations are
good, as can be seen from Figure~\ref{icomp}.
Less obviously, the perturbation series can also be used to estimate $R_{pk}$, the value of $R$
for which the peak mean power is obtained. Taking logarithms, and only including the first terms of
equations~\eqref{psmall} and~\eqref{plarge} respectively, we find
$$\ln\mn{p} = \ln\left(a_1\omega^2 R\right) + O\left(R^2\right)\qquad\mbox{and}\qquad
\ln\mn{p} = \ln\left(b_1 R^{-1}\right) + O\left(R^{-2}\right),$$
where $a_1$ and $b_1$ are given in~\eqref{a1xp} and~\eqref{plarge}
respectively. On a plot of $\ln\mn{p}$ versus $\ln R$, these are straight
lines, and their intersection point gives an estimate of $R_{pk}$.
This is
\begin{equation}
R_{pk}\approx \frac{1}{\omega}\sqrt{\frac{b_1}{a_1}} =
\frac{B_0}{\omega}\left(1 - \frac{B_1^2}{B_0^2}\right)^{5/4}
= \frac{4.152}{\omega}\;\mbox{G}\Omega.
\label{intersect_rpk}
\end{equation}
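For concreteness, the intersection estimate can be evaluated directly from the numerical coefficients already quoted in equations~\eqref{psmall} and~\eqref{plarge}; the frequency below is an illustrative choice.

```python
import math

# Leading coefficients quoted in (psmall) and (plarge).
a1 = 2.45018e-15      # small-R:  <p>_s ~ a1 * omega**2 * R
b1 = 42231.4          # large-R:  <p>_l ~ b1 / R
omega = 2 * math.pi   # illustrative choice, f = 1 Hz

# Intersection of the two leading-order straight lines on a log-log plot.
R_pk = math.sqrt(b1 / a1) / omega
print(R_pk / 1e9)     # in GOhm; omega * R_pk ~ 4.152e9 as in the text
```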
We postpone discussion of the accuracy of this until we have
a more accurate computation of $R_{pk}$ at the end of section~\ref{fccomp}.
Note, however, that this argument always overestimates the peak power.
\subsection{Mean power for all $R$?}
\label{mpar}
We end this section with an observation. Clearly, it would be useful to have
a simple formula for the mean power, $\mn{p(R)}$, valid for \textit{all} $R$, rather
than the restricted ranges accessible via perturbation theory.
A heuristic approach yields a non-rigorous result, which we now briefly discuss.
Looking at equations~\eqref{psmall} and~\eqref{plarge} suggests that these
expressions might be, respectively, the Maclaurin series and an asymptotic
expansion of a single odd function of $\omega R$
$$\omega\, F(\omega R) = \omega\,\frac{A_1\cdot \omega R + A_3\cdot \omega^3 R^3}
{1 + B_2\cdot\omega^2 R^2 + B_4\cdot\omega^4 R^4},$$
where $A_1$, $A_3$, $B_2$ and $B_4$ are constants to be found. By
definition, this function is a candidate for $\mn{p(R)}$. In fact, all $F$
does is to interpolate smoothly between the small and large $R$ regimes;
how it behaves for intermediate values of $R$ is beyond our control.
Expanding $\omega F(\omega R)$ in a Maclaurin series, we find
$$\mn{p}_s = A_1\omega^2 R + (A_3 -A_1B_2)\omega^4 R^3 + O(\omega^6 R^5)$$
whereas the asymptotic expansion, that is, the Maclaurin series for
$\omega F\left(\frac{1}{\omega R}\right)$ in powers of $\frac{1}{\omega R}$, is
$$\mn{p}_l = \frac{A_3}{B_4}\frac{1}{R} + \frac{A_1 B_4 - A_3 B_2}{B_4^2}
\frac{1}{\omega^2 R^3} + O(\omega^{-4} R^{-5}).$$
Matching the coefficients of the various powers of $R$ with
equations~\eqref{psmall} and~\eqref{plarge} gives four equations to solve
for $A_1$, $A_3$, $B_2$ and $B_4$, and doing so gives
\begin{equation}
\omega\,F(\omega R)
= \omega^2 R\frac{2.45018\times 10^{-15} + 2.99655\times 10^{-34}\omega^2 R^2}
{1 + 6.75329\times 10^{-19}\omega^2 R^2 + 7.09553\times 10^{-39}\omega^4 R^4}
\stackrel{?}{=}\mn{p(R)}.
\label{ratapp}
\end{equation}
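The coefficient matching can be reproduced in a few lines: substituting $A_1 = a_1$ and $B_4 = A_3/b_1$ into the two remaining conditions, and dividing through by $A_3$, leaves a linear equation for $A_3$. The sketch below uses the numerical series coefficients quoted above and recovers the constants of equation~\eqref{ratapp}.

```python
# Series coefficients quoted in (psmall) and (plarge).
a1, a3 = 2.45018e-15, -1.35505e-33   # small-R expansion
b1, b3 = 42231.4, -3.67414e24        # large-R expansion

# Matching conditions:
#   A1 = a1,  A3 - A1*B2 = a3,  A3/B4 = b1,  (A1*B4 - A3*B2)/B4**2 = b3.
# Eliminating B2 and B4 and dividing by A3 gives a linear equation for A3:
#   A1/b1 + a3/A1 = A3 * (b3/b1**2 + 1/A1).
A1 = a1
A3 = (A1 / b1 + a3 / A1) / (b3 / b1**2 + 1.0 / A1)
B2 = (A3 - a3) / A1
B4 = A3 / b1
print(A1, A3, B2, B4)   # close to the constants quoted in (ratapp)
```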
Figure~\ref{pwr}, right-hand side, shows a comparison between $\mn{p(R)}$ approximated in this
way, and a numerical computation. Solving $\partial\mn{p(R)}/\partial R = 0$ for $R$
gives one positive, one negative and four complex values of $R$. The
positive one is $R_{max} = 8.09\times 10^9/\omega = 1.29\times 10^9\Omega$ for $\omega
= 2\pi$, and $\mn{p(R_{max})} = 15.2\mu$W. Compare these with the
numerical computation in Figure~\ref{pwr}, which gives $R_{max} = 6.8\times
10^8\Omega$ and $\mn{p(R_{max})} = 18.2\mu$W.
\section{A Fourier series approach}
\label{FS}
An alternative approach to computing $q(t)$ is to use Fourier series:
having established in section~\ref{exist} that there is a unique attracting periodic
solution, the idea here is to expand it as a Fourier series. In this
section, we use the practical approximation for $B(t)$ and $V(t)$, although
the ideas could be applied for any $B$, $V$ of the same period. We start
by setting
\begin{equation}
q(t) = \sum_{k\in\mathbb Z}\alpha_k e^{\mathrm{i} k\omega t},
\label{fser}
\end{equation}
where $\alpha_k\in\mathbb C$, with $\alpha_{-k} = \alpha_k^*$, so
$\alpha_0\in\mathbb R$, as is $q(t)$ for real $t$.
Substituting this in the o.d.e., we find that the $\alpha_k$ must obey the following relations:
\begin{subequations}
\label{recrel}
\begin{align}
\label{rra}
\alpha_1 & = \eta\alpha_0 -\alpha_{-1}- 2Q_0,\\
\label{rrb}
\alpha_2 & = \eta(1+\mathrm{i}\varepsilon)\alpha_1 - \alpha_0 + Q_0,\\
\label{rrc}
\alpha_{k+1} & = \eta(1 + \mathrm{i}\varepsilon k)\alpha_k - \alpha_{k-1},\qquad |k| \geq 2,
\end{align}
\end{subequations}
where we have used the dimensionless real parameters $\varepsilon = \omega R/B_0$,
as before, and $\eta := 2B_0/B_1 > 0$. Also, $Q_0 := V_0/B_1$, and both
$Q_0$ and $\alpha_k$ have dimensions of charge. In what follows, we
sometimes need to distinguish between the two `exceptional equations',~\eqref{rra}
and~\eqref{rrb}, and the `general equation',~\eqref{rrc}, which is true for $|k| \geq 2$.
Here, there are just two exceptional equations because $B(t)$ and $V(t)$
are truncated at the first harmonic: the number of exceptional equations
grows linearly with the number of harmonics retained in $B(t)$ and $V(t)$.
There are two approaches to solving equations~\eqref{recrel}, and we explain these in the following two
subsections. In the first, we show an arithmetical solution
and in the second, we show how the same solution can be constructed using Bessel
functions, and we briefly discuss the asymptotic behaviour of this solution.
\subsection{Arithmetical solution}
\label{fccomp}
Here is a practical method for solving the difference equations~\eqref{recrel}
for $\alpha_k$, $k\in\mathbb Z$ and with $\alpha_{-k} = \alpha_k^*$.
There should be no free parameters in the solution, even though
solving~\eqref{recrel} is equivalent to solving the original o.d.e., whose
general solution contains one arbitrary constant. However, here
we seek only the periodic solution, $q(t)$, to the o.d.e., and not the
general solution, $q_c(t)$: only the latter contains an arbitrary constant.
The general solution to~\eqref{rrc}, which is of second order,
has two arbitrary parameters. One just sets the scale, since, if
$\alpha_k$ is a solution, then so is $\lambda\alpha_k$ for any $\lambda$.
In fact, this scaling is set by either one of~\eqref{rra},~\eqref{rrb}. However, we still need
one more relation in order to pin down the second arbitrary parameter. By
analogy with the usual procedure for solving Mathieu's equation~\cite{a&s}, this
is furnished by considering the ratio $\rho_k := \alpha_k/\alpha_{k-1}$.
For $k \geq 2$, we have
$$\rho_{k+1} = \eta(1+\mathrm{i}\varepsilon k) - \frac{1}{\rho_k} = d_k - \frac{1}{\rho_k},$$
where $d_k := \eta(1+\mathrm{i}\varepsilon k)$. Hence, $\rho_k = 1/(d_k - \rho_{k+1})$,
giving the continued fraction expansion
\begin{equation}
\rho_n = \frac{\alpha_n}{\alpha_{n-1}} = \frac{1}{d_n - \frac{1}{d_{n+1} - \frac{1}{d_{n+2}-\ldots}}}\,.
\label{cf}
\end{equation}
From equation~\eqref{cf}, we can now compute $\rho_3$, which
depends on $\eta$, $\varepsilon$ but not on $\alpha_k$. Then we use equations~\eqref{rra}
and~\eqref{rrb}, and equation~\eqref{rrc} with $k=2$, along with the definition
of $\rho_3$, to find $\alpha_0,\ldots, \alpha_3$. That is, we solve
\begin{equation}
2\operatorname{Re}\alpha_1 = \eta\alpha_0 - 2 Q_0,\;\;\;\alpha_2 = d_1\alpha_1 - \alpha_0 + Q_0,\;\;\;
\alpha_3 = d_2\alpha_2 - \alpha_1\;\;\;\mbox{and}\;\;\;\alpha_3 = \rho_3\alpha_2
\label{exeqs}
\end{equation}
for $\alpha_0,\ldots, \alpha_3$, obtaining
\begin{equation}
\alpha_0 = \frac{2Q_0(x+1)}{\eta + 2x},\;\;\;\alpha_1 = w(Q_0 - \alpha_0);
\label{exsol}
\end{equation}
then we use the second and third expressions in equation~\eqref{exeqs} to find $\alpha_2,\; \alpha_3$
in terms of $\alpha_0,\;\alpha_1$. Here, $w = (d_2 - \rho_3)/(1-d_1(d_2-\rho_3))$ and $x = \operatorname{Re} w$.
Knowing $\alpha_2$, $\alpha_3$, we can now compute $\alpha_k$ for $k\geq 4$
from the general equation --- but not in the obvious way,
that is, by finding $\alpha_4$, followed by $\alpha_5$ and so on, since
iteration in this direction is unstable: the values of $\alpha_k$
so obtained rapidly become inaccurate, even for moderate values of $k$, and
especially for large $R$. Instead, following section 19.28 in~\cite{a&s}, where an analogous
problem is solved, we iterate in the reverse direction: we fix a maximum value of $k$, $K$,
say, and invert the general
equation~\eqref{recrel} to give $\alpha_{k-1} = d_k\alpha_k - \alpha_{k+1}$.
We then use this to find, successively, $\alpha_{K-j}$, $j = 1, \ldots, K-2$,
each in terms of $\alpha_K,\,\alpha_{K+1}$, which are, for the moment, unknown. At stage
$j$, we will have an expression of the form $\alpha_{K-j} = u_j \alpha_K + v_j \alpha_{K+1}$,
where $u_j,\, v_j\in\mathbb C$. Now, for $j = K-3$ and $K-2$, we have
$\alpha_3 = u_{K-3}\alpha_K + v_{K-3}\alpha_{K+1}$ and
$\alpha_2 = u_{K-2}\alpha_K + v_{K-2}\alpha_{K+1}$, and, since $\alpha_2$
and $\alpha_3$ are known, we can solve these two equations for $\alpha_K$
and $\alpha_{K+1}$. With these now known, we can find $\alpha_k$, $k = 4, \ldots, K+1$,
and hence $q(t)$ (from equation~\eqref{fser}). In particular, $q(0)
\approx\sum_{|k|\leq K+1}\alpha_k$, which we used as our initial condition in
the numerical computation of $q(t)$ in section~\ref{cwn}.
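The procedure just described can be sketched as follows. The final step is a slight simplification of the two-parameter matching in the text: it fixes the trial backward solution by matching the known $\alpha_2$ alone, which determines the same minimal solution (a Miller-style normalisation). The parameter values passed in at the bottom are illustrative, not those of Table~\ref{numvals}.

```python
import numpy as np

def fourier_coeffs(eta, eps, Q0, K=60, depth=400):
    """Return alpha_0 .. alpha_{K+1} of the periodic solution q(t)."""
    d = lambda k: eta * (1.0 + 1j * eps * k)

    # Continued fraction (cf) for rho_3 = alpha_3/alpha_2, evaluated bottom-up.
    rho3 = 0j
    for n in range(depth, 2, -1):
        rho3 = 1.0 / (d(n) - rho3)

    # Exceptional equations (exeqs), solved as in (exsol).
    w = (d(2) - rho3) / (1.0 - d(1) * (d(2) - rho3))
    x = w.real
    a = np.zeros(K + 2, dtype=complex)
    a[0] = 2 * Q0 * (x + 1) / (eta + 2 * x)
    a[1] = w * (Q0 - a[0])
    a[2] = d(1) * a[1] - a[0] + Q0
    a[3] = rho3 * a[2]

    # Backward recursion alpha_{k-1} = d_k alpha_k - alpha_{k+1}: start from
    # trial values at the top, recur downwards, then rescale the trial
    # solution so that it matches the known alpha_2.
    t = np.zeros(K + 2, dtype=complex)
    t[K + 1], t[K] = 0.0, 1.0
    for k in range(K, 2, -1):
        t[k - 1] = d(k) * t[k] - t[k + 1]
    a[4:] = t[4:] * (a[2] / t[2])
    return a

# Illustrative parameters; eta = 2*B0/B1, eps = omega*R/B0, Q0 = V0/B1.
alpha = fourier_coeffs(eta=2.5, eps=0.2, Q0=1.0)
# The mean power then follows from 2 * omega**2 * R * sum_{k>=1} k**2 * |alpha_k|**2.
```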
\begin{figure}[htbp]
\centering
\includegraphics*[width=4.8in]{fcof_pwr.eps}
\caption{Left: A logarithmic plot of the energy in the $k$-th harmonic, $\log_{10}
2R\omega^2 k^2|\alpha_k|^2$, $k = 1,\ldots, 20$, computed as described
in section~\ref{fccomp}, for $R = 10^7, 10^9$ and $10^{10}\Omega$.
Right: Mean power as a function of $R$ computed from $2\omega^2 R\sum_{k=1}^{n_h} k^2|\alpha_k|^2$,
with $n_h = 30$ and $n_h = 3$. Compare with Figure~\ref{pwr}.}
\label{fcof_pwr}
\end{figure}
Figure~\ref{fcof_pwr} shows a logarithmic plot of the modulus of the first 21 Fourier
coefficients so obtained, in the cases $R = 10^7, 10^9$ and $10^{10}\Omega$.
For comparison, we compute the mean power $\mn{p(R)}$ by
(a) $2\omega^2 R\sum_k k^2|\alpha_k|^2$; (b) by solving the
o.d.e.~\eqref{ode} numerically, as was done for Figure~\ref{pwr}; and (c) by
using the approximate formulae~\eqref{psmall},~\eqref{plarge}, if they
apply. The agreement is very good --- see Table~\ref{pwr_comp}.
\begin{table}[h!]
\centering
\begin{tabular}{|l|c|c|c|}\hline
Method & $R=10^7\Omega$ & $R=10^9\Omega$ & $R=10^{10}\Omega$\\ \hline
(a) Sum of Fourier coefficients & $0.9652 \mu$W & $17.80 \mu$W & $4.132 \mu$W\\
(b) Numerical solution of o.d.e. & $0.9652 \mu$W & $17.80 \mu$W & $4.132 \mu$W\\
(c) Eq.~\eqref{psmall} or~\eqref{plarge} if valid & $0.9652 \mu$W & --- & $4.130 \mu$W\\\hline
\end{tabular}
\caption{Comparison of the mean power computed in three different ways, for
three different values of $R$. The first value in row (c) comes from
equation~\eqref{psmall}, the third from~\eqref{plarge}. Neither formula
applies for $R = 10^9\Omega$.}
\label{pwr_comp}
\end{table}
We now make a practical point. In general, the values of
$|\alpha_k|$ decrease very rapidly with $k$, the more so with increasing
$R$. In fact, to a good approximation, for $R\in [10^5, 10^{15}]\Omega$, we can use
\begin{equation}
\mn{p(R, n_h)} := 2\omega^2 R\sum_{k = 1}^{n_h} k^2|\alpha_k|^2,
\label{p_from_FS}
\end{equation}
for a small number of harmonics, $n_h$. For an accurate estimate of the mean power, we use $n_h = 30$.
However, even with $n_h = 3$, using equations~\eqref{exeqs} and~\eqref{exsol} to find $\alpha_1$,
$\alpha_2$ and $\alpha_3$, the largest relative error between $\mn{p(R, 3)}$ and
$\mn{p(R, 30)}$ is about 15\%; this largest relative error occurs for $R\in[10^5,
3\times 10^7]$ and is approximately constant over this range --- see
Figure~\ref{fcof_pwr}. For other values of $n_h$, we find the following maximum relative errors:
$n_h = 4:\;6\%$, $n_h = 5:\;2.5\%$, $n_h = 7:\;0.3\%$.
With an algorithm to compute the Fourier coefficients now in place, we can,
however, do more. For instance, we can study the effect of
$\omega$ on the power output, and we include in Figure~\ref{maxpwr} a plot of the peak mean power
obtained as $R$ varies, as a function of $\omega$. Specifically, we compute
the mean power, $\mn{p(R, n_h)}$, from~\eqref{p_from_FS} with $n_h = 30$, which is large
enough to ensure that the error is completely negligible, and then vary $R$ to find
the maximum value of the mean power, $\langle p(\omega)\rangle_{pk}$. That is, $\langle p(\omega)\rangle_{pk} := \max_{R> 0}\mn{p(R, 30)}$.
We then denote by $R_{pk}(\omega)$ the value of $R$ for which $\langle p(\omega)\rangle_{pk}$ is obtained.
A very strong linear trend is noticeable in both $\langle p(\omega)\rangle_{pk}$ and $R_{pk}(\omega)^{-1}$,
and using the data in Figure~\ref{maxpwr}, we find $\langle p(\omega)\rangle_{pk}\approx
2.92\omega\;\mu$W and $R_{pk}(\omega)\approx 4.33/\omega$~G$\Omega$.
The latter should be compared with equation~\eqref{intersect_rpk}, which
predicts that $R_{pk}(\omega)\approx 4.15/\omega$~G$\Omega$ for the parameter
values in Table~\ref{numvals} --- a very good agreement.
The fact that $R_{pk}(\omega)$ is proportional to $1/\omega$ suggests the
following way of understanding qualitatively the behaviour of the circuit in Figure~\ref{Matt}. If we
replace $C(t)$ by a constant capacitance $C_{\mathrm{eff}}$ and compute the power
transferred to load $R$, we find that it is a maximum when $R = R_{pk} =
1/(\omega C_{\mathrm{eff}})$. Now, from above, we have that $R_{pk}(\omega)\approx 4.33\times 10^9/\omega$,
suggesting that $C_{\mathrm{eff}}\approx 2.31\times 10^{-10}$~F. Looking now at Figure~\ref{CV},
left, we see that $C_{\mathrm{eff}}$ lies between the maximum and minimum values of
$C(t)$. It is similar to, but not equal to, the mean value of $C(t)$, which is
about $1.05\times 10^{-10}$~F, and it is clear that the actual value of
$C_{\mathrm{eff}}$ can only be computed from an accurate power computation, such as
that carried out here using Fourier series.
\begin{figure}[htbp]
\centering
\includegraphics*[width=4.8in]{maxpwr.eps}
\caption{Fourier-series-based power computations. Left: the peak mean power, $\langle p(\omega)\rangle_{pk}$, $\mu$W,
as a function of frequency $f = \omega/2\pi = 1, 2, 5, 10, 20$ and 50 Hz.
Right: $R_{pk}(\omega)^{-1}$, the value of load resistance at which this
peak mean power is obtained, plotted for the same values of $\omega$ (solid
line), alongside the estimate of $R_{pk}$ given in equation~\eqref{intersect_rpk}
(dashed line). The range of $R_{pk}$ itself, as opposed to its reciprocal, is
about $14$--$690$ M$\Omega$.}
\label{maxpwr}
\end{figure}
\subsection{A solution based on Bessel functions}
Along with equations~\eqref{recrel}, consider also
\begin{equation}
\beta_{k+1} + \beta_{k-1} = \eta(1+\mathrm{i}\varepsilon k)\beta_k, \qquad k\in\mathbb Z
\label{beteq}
\end{equation}
and
\begin{equation}
\gamma_{k+1} + \gamma_{k-1} = \eta(1-\mathrm{i}\varepsilon k)\gamma_k, \qquad k\in\mathbb Z.
\label{gameq}
\end{equation}
The Bessel function of the first kind~\cite{a&s}, $J_\nu(z)$, is defined for $\nu,\, z\in\mathbb C$,
with $\nu$ referred to as the order, and this function obeys
\begin{equation}
J_{\nu+1}(z) + J_{\nu-1}(z) = \frac{2\nu}{z}J_\nu(z).
\label{Ceq}
\end{equation}
In the light of this, solutions to equation~\eqref{beteq}
can be expressed in terms of a Bessel function of the appropriate (complex) order.
For this to work, we require that $\eta(1+\mathrm{i}\varepsilon k) = 2\nu/z$ for all $k$, where $z$ is
independent of $k$, and $\nu = k + \mathrm{i}\xi$ for $\xi\in\mathbb R$. Taking
these together, we see that $z\eta(1+\mathrm{i}\varepsilon k) = 2k + 2\mathrm{i}\xi$, and so $z$
must be purely imaginary. Hence $\nu = k -\mathrm{i}/\varepsilon$ and $z = -2\mathrm{i}/\eta\varepsilon$,
giving $\beta_k = C W_k$ where $W_k := J_{k-\mathrm{i}/\varepsilon}(-2\mathrm{i}/\eta\varepsilon)$ and $C$
is an arbitrary constant. Note
that $J_{\nu}(z)$ is bounded as $\hbox{Re}(\nu) \to +\infty$, as shown in
Appendix~II, Property~\ref{jbig}. Analogously, for any $k\in\mathbb Z$, equation~\eqref{gameq}
has the solution $\gamma_k=D Z_k$, where $D$ is an arbitrary constant and
$Z_k:=J_{k+\mathrm{i}/\varepsilon}(2\mathrm{i}/\eta \varepsilon)$.
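The complex-order recurrence can be verified numerically: mpmath's \texttt{besselj} accepts complex order and argument. The sketch below checks that $W_k = J_{k-\mathrm{i}/\varepsilon}(-2\mathrm{i}/\eta\varepsilon)$ satisfies~\eqref{beteq}; the $\eta$, $\varepsilon$ values are illustrative.

```python
import mpmath as mp

mp.mp.dps = 30                      # 30 working digits
eta, eps = mp.mpf(5) / 2, mp.mpf(1) / 5   # illustrative values
z = -2j / (eta * eps)               # the (purely imaginary) argument

def W(k):
    # W_k = J_{k - i/eps}(-2i/(eta*eps)), a Bessel function of complex order.
    return mp.besselj(k - 1j / eps, z)

# Residual of the three-term recurrence (beteq): W_{k+1} + W_{k-1}
# should equal eta*(1 + i*eps*k)*W_k.
max_rel = mp.mpf(0)
for k in range(1, 6):
    lhs = W(k + 1) + W(k - 1)
    rhs = eta * (1 + 1j * eps * k) * W(k)
    max_rel = max(max_rel, mp.fabs(lhs - rhs) / mp.fabs(rhs))
print(max_rel)   # vanishes to working precision
```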
Now set $\delta_k=\beta_k$ for $k\ge 1$ and $\delta_k=\gamma_{-k}$ for $k\le -1$.
Then, for $k\ge 2$, we have
$$ \delta_{k+1} + \delta_{k-1} = \beta_{k+1} + \beta_{k-1} = \eta \left( 1 + \mathrm{i}\varepsilon k \right) \beta_{k} =
\eta \left(1 + \mathrm{i}\varepsilon k \right) \delta_{k},$$
while, for $k\le -2$,
$$\delta_{k+1} + \delta_{k-1} = \gamma_{-(k+1)} + \gamma_{-(k-1)} = \gamma_{-k-1} + \gamma_{-k+1}
= \eta \left(1 + \mathrm{i}\varepsilon k \right) \gamma_{-k} = \eta \left( 1 + \mathrm{i} \varepsilon k \right) \delta_{k},$$
so that, comparing the last two equations with~\eqref{rrc}, we see that, if we set
$\alpha_k = \delta_{k}$ for all $|k| \ge 1$, we obtain a solution to~\eqref{rrc} depending on the
complex constants $C$ and $D$.
Now, Property~\ref{jstar} in Appendix~II states that $J_{\nu^*}(z^*) = \left( J_{\nu}(z) \right)^*$.
Hence, setting $D=C^*$, for all $k\ge 1$, we have
\begin{equation}
\alpha_{k}^* = \beta_k^* = C^* W_k^* = C^* \left(J_{k-\mathrm{i}/\varepsilon}(-2\mathrm{i}/\eta \varepsilon)\right)^* =
D J_{k+\mathrm{i}/\varepsilon}(2\mathrm{i}/\eta \varepsilon) = D Z_{k} = \gamma_{k} = \alpha_{-k}
\label{aka-k}
\end{equation}
so that $\alpha_{k}^*=\alpha_{-k}$ for all $k \neq 0$. So, if we require $q(t)$ to be real,
we are left with only one free parameter $C\in\mathbb C$.
To compute $C$, we go back to equations~\eqref{rra},~\eqref{rrb}. Solving
these for $\alpha_0$ gives
\begin{equation}
\alpha_0 = \frac{CW_1 + C^* W_1^* + 2Q_0}{\eta} = \eta(1+\mathrm{i}\varepsilon)CW_1 - C W_2 + Q_0,
\label{al0eq}
\end{equation}
where we have used $\alpha_i = CW_i$ for $i = 1, 2$. The right-hand equality can be rewritten
in the form $z_1 C + z_2 C^* = r$, where $z_1, z_2, C\in\mathbb C$ and
$r\in\mathbb R$. Specifically, $z_1 = W_1\left(1 - \eta^2(1+\mathrm{i}\varepsilon)\right)\!/\eta + W_2$,
$z_2 = W_1^*/\eta$ and $r = Q_0(1-2/\eta)$. By considering $z_1 C + z_2 C^* = r$ and its
complex conjugate, we can solve for $C$ to obtain $C = r(z_1^* - z_2)/\Delta$, where
$\Delta = |z_1|^2 - |z_2|^2$. Numerical evidence indicates that
$\Delta\in(0, 1)$ for all $R > 0$, so $C$ exists for all positive $R$.
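The elimination of $C^*$ just described is straightforward to implement; the sketch below (with hypothetical values of $z_1$, $z_2$, $r$, not those built from $W_1$, $W_2$ in the paper) checks that $C = r(z_1^* - z_2)/\Delta$ indeed solves $z_1 C + z_2 C^* = r$:

```python
# Hedged sketch: solve z1*C + z2*conj(C) = r (with r real) for C by pairing
# the equation with its complex conjugate, as in the text. In the paper,
# z1, z2 and r are built from W_1, W_2, eta and Q_0; the values here are assumed.
def solve_conjugate_linear(z1: complex, z2: complex, r: float) -> complex:
    delta = abs(z1) ** 2 - abs(z2) ** 2  # must be non-zero for a unique C
    return r * (z1.conjugate() - z2) / delta

z1, z2, r = 1.5 - 0.7j, 0.3 + 0.2j, 2.0  # hypothetical inputs
C = solve_conjugate_linear(z1, z2, r)
assert abs(z1 * C + z2 * C.conjugate() - r) < 1e-12
```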
The expression $\alpha_k = C J_{k-\mathrm{i}/\varepsilon}(-2\mathrm{i}/\eta\varepsilon)$, $k\geq 1$, with
$\alpha_0$ being given by~\eqref{al0eq}, is in principle good for any values of the
parameters. In practice, however, the method for computing $\alpha_k$ in
Section~\ref{fccomp} is often useful, especially for small $R$: small $R$ leads to small
$\varepsilon = \omega R/B_0$, which requires the evaluation of $J_{k-\mathrm{i}/\varepsilon}(z)$ for small $\varepsilon$,
and algorithms for this tend to be slow, especially when $|k-\mathrm{i}/\varepsilon|\approx |z|$.
One advantage of expressing the Fourier coefficients in terms of Bessel
functions is that we immediately see from our expression for
$\alpha_k$, along with~\eqref{mJas} in Appendix~II, that
$$|\alpha_k| \sim \left(\frac{e}{\eta\varepsilon}\right)^k \frac{1}{k^{k+\frac{1}{2}}},$$
so clearly the Fourier series for $q(t)$ converges for all non-zero $\varepsilon, \eta$.
\section{Conclusions and further work}
We have studied the periodically-excited triboelectric nanogenerator (TENG) from a mathematical
viewpoint. Our main aim has been to derive expressions for the mean power as a
function of load resistance $R$ and excitation frequency $\omega$, although
as a by-product, we also compute the current, from
which other quantities of interest can be derived.
The TENG has a single state variable, the charge, $q(t)$, and the time
evolution of $q(t)$ is described by the linear, first-order, non-autonomous o.d.e.\ in
equation~\eqref{ode}. This has a periodically-varying coefficient, the reciprocal
capacitance $B(t)$, which makes analysis of
the problem less straightforward than, say, the piezoelectric device discussed in~\cite{zhang},
the o.d.e.\ for which, while still non-autonomous, has constant coefficients.
After proving that the o.d.e.~\eqref{ode} has a unique periodic attractor, we
derive perturbation series for $q(t)$ in the cases
of small and large $R$, equations~\eqref{psmall} and~\eqref{plarge}
respectively. We give general expressions for the first few
coefficients in these series before using them to estimate the current, $i(t) = \dot{q}(t)$,
and the mean power, $R\times$[mean squared current over one period].
Comparison with numerics shows these expressions to be good for all $R$
except $R\in[10^8, 3\times 10^9]\Omega$ (approximately), these values
of $R$ being neither `small' nor `large' in the context of this problem.
However, by using a simple argument based on the intersection of straight lines,
the two perturbation series can be used to estimate $R_{pk}$, the value of
$R$ that maximises the mean power, and this approach results in a simple expression,
equation~\eqref{intersect_rpk}, for $R_{pk}(\omega)$.
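The intersection argument can be sketched as follows, assuming (purely for illustration; the actual coefficients come from the two perturbation series) that the leading terms give mean power $\approx aR$ for small $R$ and $\approx b/R$ for large $R$:

```python
# Illustrative sketch of the straight-line intersection estimate for R_pk.
# Assumption (not the paper's exact series): to leading order the mean power
# behaves like P ~ a*R for small R and P ~ b/R for large R, so on a log-log
# plot these are straight lines of slope +1 and -1, crossing at R_pk = sqrt(b/a).
import math

def r_peak(a: float, b: float) -> float:
    """Intersection of P = a*R with P = b/R (assumed leading-order forms)."""
    return math.sqrt(b / a)

a, b = 2.5e-12, 4.0e7  # hypothetical coefficients
R_pk = r_peak(a, b)
assert math.isclose(a * R_pk, b / R_pk, rel_tol=1e-12)
```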
Since the o.d.e.\ has a unique periodic solution $q(t)$, we then discuss its
Fourier expansion. We give two procedures to find the Fourier
coefficients, $\alpha_k$, for $k$ as large as desired. The first is an
algorithm to find $\alpha_k$, which requires only simple computing
machinery, and the second is an expression for $\alpha_k$ in terms of Bessel
functions of the first kind, $J_\nu(z)$. From $\alpha_k$, the mean
power, for any $R$ and $\omega$, can be computed to any degree of precision, as can $q(t)$ and $i(t)$
--- from the latter, the peak current and peak power can then be found if
desired. One important aspect of the connection with Bessel functions is that we
can deduce the behaviour of $\alpha_k$ for large $k$.
Our chief interest has been the mean power as a function of $R$ and
$\omega$ and we discuss how this may be estimated by a single expression,
valid for \textit{all} $R$. We propose two possibilities. The first is
heuristic and comes from an observation about the two
series, equations~\eqref{psmall} and~\eqref{plarge}: we show that if these
series are both expansions of the same rational function, then we can
compute an approximation to this function explicitly --- see equation~\eqref{ratapp}.
This estimate is simple to use but entirely heuristic, although it does compare
favourably with the exact result.
The second possibility comes from our study of the Fourier coefficients in
terms of Bessel functions, which shows that they decrease in magnitude
rapidly (as $1/k!$ in fact) with increasing index $k$. In practice, even the first three
coefficients are enough to give a reasonable power estimate (relative error
$\leq 15\%$) for all $R$.
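The passage from the Fourier coefficients to the mean power can be sketched via Parseval's theorem (the formula below is our reconstruction, not quoted from the paper; the coefficient values are made up): with $q(t)=\sum_k \alpha_k e^{\mathrm{i}k\omega t}$ and $\alpha_{-k}=\alpha_k^*$, the mean squared current is $\sum_{k\in\mathbb Z}(k\omega)^2|\alpha_k|^2$.

```python
# Hedged sketch: mean power from a few Fourier coefficients via Parseval.
# If q(t) = sum_k alpha_k e^{i k w t} with alpha_{-k} = conj(alpha_k), then
# i(t) = q'(t) has mean square sum_{k in Z} (k w)^2 |alpha_k|^2, so
# P = R * w^2 * sum_k k^2 |alpha_k|^2. Coefficient values below are invented.
import math

def mean_power(R: float, omega: float, alphas: list) -> float:
    """alphas[k] = alpha_k for k = 0..K; negative-k terms added by symmetry."""
    s = sum(k * k * abs(a) ** 2 for k, a in enumerate(alphas))
    return R * omega ** 2 * 2 * s  # factor 2 accounts for the k < 0 mirror terms

P = mean_power(1e6, 2 * math.pi * 5.0, [0.1, 2e-3 + 1e-3j, 5e-5j])
assert P > 0
```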
Clearly, the power output also depends on other parameters in the problem,
for example the thickness $x_1$, $x_2$ of the dielectrics, the excitation amplitude
$z_0$ and the triboelectric charge density $\sigma_T$. A study of the
effects of these parameters and others has recently been published in~\cite{aem}.
This work raises some interesting questions for further investigation. For
example, in the TENG context, the fundamental frequencies of $B(t)$ and $V(t)$ have to
be the same. What can we say about solutions to o.d.e.s like~\eqref{ode}
where this is not the case? What more can we say about the convergence of the
perturbation series? Can we say anything about the behavior of the
modulus of the Fourier coefficients in the case where $B(t)$ and/or $V(t)$
are not approximated as truncated Fourier series? What can we say in the
case of a load that is not purely resistive?
\section*{Introduction}
The use of discounting in optimal control and in the theory of Hamilton--Jacobi equations is both motivated by the models used and very useful from a theoretical point of view. In the models, the discount accounts for the lesser influence of events far away in time. From the theoretical point of view, the discount brings exponential terms $e^{-\lambda t}$ that allow one to consider infinite horizon problems, as the corresponding improper integrals converge; in Hamilton--Jacobi equations, it allows one to prove strong comparison principles (or to make some solution operators strongly contracting), which lead to powerful existence and uniqueness results.
To our knowledge, one of the first striking applications of this method was for homogenization purposes in the famous \cite{LPV}. The authors consider a periodic (in the first variable) coercive (in the second variable) Hamiltonian $H : \mathbb R^N \times \mathbb R^N \to \mathbb R$ and for $\lambda>0$ prove that there exists a unique periodic viscosity solution $u_\lambda : \mathbb R^N \to \mathbb R$ to
$$\lambda u(x) +H(x,D_x u) = 0.$$
They then prove that the $(u_\lambda)_\lambda$ are equi--Lipschitz and that $\lambda u_\lambda$ converges uniformly to a constant, $-c_H$, as $\lambda \to 0$. Using the Ascoli Theorem, the authors conclude that there exists a periodic viscosity solution to the cell problem
$$H(x,D_xu) = c_H.$$
This last point uses an extraction, and the problem of actual convergence of the family $u_\lambda$ was not tackled for quite some time. Following a first breakthrough in this direction \cite{IS}, a general convergence Theorem was established, under an additional convexity hypothesis, in \cite{DFIZ}. Since then there has been a huge activity on this problem and hence an important subsequent literature. Let us cite, amongst others, \cite{DFIZ2} for a discrete time version, \cite{ISico} for a non--compact version, \cite{PSnet} on networks, \cite{CCIZ,WYZ,Chen} for nonlinear discounting versions, \cite{I21,I22} for second order PDE versions, and \cite{DZsys} for weakly coupled systems, following ideas from \cite{DSZ} and then widely generalized in \cite{Isys1,Isys2}; finally, let us mention \cite{A12}, and references therein, for a more geometric interpretation of the convergence result.
Such convergence results also have limitations. All the previous works require some convexity assumptions, and a striking counterexample has been constructed in \cite{Zi} in the non--convex case. Moreover, all of them require that the discounting be increasing in some sense, and counterexamples to the convergence result appear in \cite{CCIZ} when this monotonicity is not strong enough.
The goal of this paper is to provide a setting with non--decreasing discounting where convergence still holds. More precisely, we study equations of the form
$$\lambda \alpha(x)u(x) +H(x,D_x u) = c_H,$$
where $\alpha$ is a nonnegative function that may vanish in some places. From an economic point of view, this may provide a model for settings in which interest rates depend on the space variable and are allowed to vanish in some places.
From a theoretical point of view, if $\alpha$ is everywhere positive, we will see in Proposition \ref{bb2} that the study reduces to the previous case. If, on the contrary, $\alpha$ is identically $0$, not much can be said. We require in this paper that $\alpha$ be positive on the Aubry set of $H$, which plays a major role in the study of the limiting equation $H(x,D_xu) = c_H$. Our result is then that the solutions $u_\lambda$ to the discounted equations converge as $\lambda \to 0$. To obtain such a result, the main difficulty is to obtain quantitative properties on the behavior of characteristic trajectories associated to $u_\lambda$. This is done in Proposition \ref{excursion}.
\subsection*{Organization of the paper}
In section \ref{setting}, we introduce the setting, the main hypotheses and objects of study and state precisely our main result.
In section \ref{generality}, we state well-known or folklore results for general Hamilton--Jacobi equations that we later apply to our particular setting.
In section \ref{sec:weakKAM} we recall main features and tools of weak KAM theory and Aubry--Mather theory. Those are at the core of the proof of the main result.
In section \ref{sec:deg} we study our degenerate discounted equations. In particular, we prove that they satisfy a strong comparison principle (Proposition \ref{comp-l}), so that there exists indeed a unique solution $u_\lambda$ for each fixed
$\lambda>0$. We then derive from classical weak KAM theory representation formulas of Lax--Oleinik type for $u_\lambda$ (Theorems \ref{representation} and \ref{representation-l}) and we establish the main technical lemma about minimizing trajectories for $u_\lambda$ (Proposition \ref{excursion}).
Finally, in section \ref{sec:proof} we prove the convergence result.
To conclude, in section \ref{formula} we establish an alternate expression for the limit $u_0$. For readers familiar with \cite{DFIZ}, this formula will not come as a surprise.
\subsection*{Acknowledgement} The authors wish to thank the anonymous referees for their help in improving the motivations and the presentation of the present work.
\section{Setting and main result}\label{setting}
Let $M$ be a closed connected smooth compact manifold. We denote by $TM$ and $T^*M $ respectively the tangent and cotangent bundles of $M$.
We endow $M$ with a Riemannian metric $g$ and denote by $d : M\times M \to \mathbb R$ the associated distance. As $M$ is compact, all such Riemannian metrics are equivalent and all our results are independent of this choice. If $x\in M$ we will denote by $\|\cdot \|_x$ the norm associated to $g$, either on $T_xM$ or on $T^*_x M$. We will denote by $\pi$ the canonical projections $TM\to M$ and $T^*M\to M$, $(x,v)\mapsto x$ and $(x,p)\mapsto x$ according to the context. If $A$ is any set and $f : A\to \mathbb R$ is a bounded function we denote by $\|f\|_\infty = \sup\limits_{x\in A} |f(x)|$ its sup norm.
Unless explicitly specified, when a sequence of functions is convergent, it will always be for the sup norm.
We will consider a Hamiltonian function $H : T^*M \to \mathbb R$ that we will always assume to be continuous. Moreover we will assume that
\begin{itemize}
\item[(H1)]\label{H1}(Convexity) For every $x\in M$ the function $H(x,\cdot) : T_x^*M \to \mathbb R$ is convex.
\item[(H2)]\label{H2}(Coercivity) $H(x,p) \to +\infty$ as $\|p\|_x \to +\infty$.
\end{itemize}
Note that thanks to the compactness of $M$ and the convexity of $H$, it is equivalent to require that coercivity holds pointwise for all $x\in M$ or uniformly on $M$.
We will at some point enforce (H2) by the stronger condition
\begin{itemize}
\item[(H2')]\label{H2'}(Superlinearity) $H(x,p)/\|p\|_x \to +\infty$ as $\|p\|_x \to +\infty$.
\end{itemize}
Again, this limit is equivalently verified pointwise or uniformly in $x$.
Associated to $H$ is the critical value $c_H\in \mathbb R$, it is the only real number such that the critical equation
\begin{equation}\label{HJ-crit}
H(x,D_x u) = c_H, \quad \quad x \in M \tag{HJ-crit},
\end{equation}
admits viscosity solutions\footnote{All notions and definitions related to viscosity will be given in the next section.}. An important object to study such solutions is the projected Aubry set $\mathcal{A} \subset M$ (that will be precisely defined later). It is closed, compact and non--empty. Moreover one of its fundamental properties is provided by the following theorem (\cite[Theorem 6.2]{FS05}):
\begin{Th}\label{strict}
There exists a continuous function $v : M\to \mathbb R$ that is a viscosity subsolution of \eqref{HJ-crit}, that is $C^{\infty}$ and strict on $M\setminus \mathcal{A}$ meaning that
$$\forall x\in M\setminus \mathcal{A} , \quad H(x,D_x v) < c_H.$$
\end{Th}
Finally, let us introduce the function we will use to discount the Hamilton--Jacobi equation: $\alpha : M\to \mathbb R$ is a continuous function that verifies
\begin{itemize}
\item[($\alpha$1)]\label{a1}(Non--negativity) The function $\alpha$ is nonnegative on $M$.
\item[($\alpha$2)]\label{a2}(Positivity) The function $\alpha$ is positive on $\mathcal{A}$.
\end{itemize}
The discounted equations we will be studying are the following: given a positive constant $\lambda >0$
\begin{equation}\label{HJ-l}
\lambda \alpha(x) u(x) + H(x,D_x u) = c_H, \quad \quad x \in M \tag{HJ-$\lambda$}.
\end{equation}
As we will see in Proposition \ref{comp-l}, given $\lambda >0$, the previous equation admits a unique viscosity solution $u_\lambda$. Our main theorem is the following:
\begin{Th}\label{main}
The family $(u_\lambda)_{\lambda >0}$ uniformly converges, as $\lambda \to 0$ to a function $u_0 : M\to \mathbb R$ that is a viscosity solution to
$$ H(x,D_x u_0) = c_H, \quad \quad x \in M.$$
\end{Th}
From the proof, we will actually establish two characterizations of the limit function $u_0$: the first in terms of critical subsolutions and Mather measures (Definition \ref{limit}), and the second involving Mather measures and the Peierls barrier (Definition \ref{limit2}).
\section{Generalities on Hamilton--Jacobi equations} \label{generality}
Equations \eqref{HJ-l} and \eqref{HJ-crit} fall into the scope of general Hamilton--Jacobi equations of the type
\begin{equation}\label{HJgen}
G\big(x,D_xu,u(x)\big) = 0, \quad \quad x \in M ,
\end{equation}
where $G : T^*M \times \mathbb R \to \mathbb R$ is continuous, convex in the last two variables, coercive in the second variable and non--decreasing in the last variable. Unless explicitly stated otherwise, we assume in this section that $G$ verifies the previous hypotheses. Let us state some well known facts about those. The general references for this section are \cite{bardi,barles,Fa,Sic}. Note that many results below are not stated with optimal hypotheses, but they will suffice for our needs.
\begin{df}\rm
A continuous function $u : M\to \mathbb R$ is a
\begin{itemize}
\item viscosity subsolution to \eqref{HJgen} if for every $C^1$ function $\varphi : M\to \mathbb R$ and every $x\in M$ such that $u-\varphi$ has a local maximum at $x$, $G\big(x,D_x\varphi,u(x)\big) \leqslant 0$;
\item viscosity supersolution to \eqref{HJgen} if for every $C^1$ function $\varphi : M\to \mathbb R$ and every $x\in M$ such that $u-\varphi$ has a local minimum at $x$, $G\big(x,D_x\varphi,u(x)\big) \geqslant 0$;
\item viscosity solution if it is both a viscosity subsolution and supersolution.
\end{itemize}
\end{df}
In this article, even if omitted, all functions considered will be continuous and all subsolutions, supersolutions or solutions will always be in the viscosity sense as above, hence the adjective will sometimes be omitted.
The next Proposition is referred to as stability in viscosity theory:
\begin{prop}\label{stab}
A uniform limit of subsolutions (resp. supersolutions, solutions) of \eqref{HJgen} is a subsolution (resp. supersolution, solution) of \eqref{HJgen}.
\end{prop}
It is also worth noticing that for $C^1$ functions, it is equivalent to be subsolution, supersolution, solution in the viscosity sense or in the classical sense.
We start by an important consequence of the coercivity assumption (\cite[Lemme 2.5]{barles}). Note that the proof being local, it adapts to our manifold setting in a straightforward manner (see also \cite[Theorem 7.5.2]{Fa}).
\begin{prop}\label{lip}
Let $u : M\to \mathbb R$ be a subsolution of \eqref{HJgen}. Then $u$ is automatically Lipschitz. A Lipschitz constant of $u$ is given by
$$K=\max\big \{ \|p\|_x , \ \ (x,p,r)\in \big(T^*M\times[-\|u\|_\infty, \|u\|_\infty]\big) \cap G^{-1}\big( (-\infty,0]\big)\big\}.$$
\end{prop}
As $G$ is also convex, we derive the following classical consequence (remembering that by Rademacher's theorem, Lipschitz functions are automatically differentiable almost everywhere):
\begin{prop}\label{aesub}
Let $u:M\to \mathbb R $ be a continuous function, then the following assertions are equivalent:
\begin{itemize}
\item $u$ is a subsolution of \eqref{HJgen};
\item $ u$ is Lipschitz and verifies the inequality $G\big(x,D_xu,u(x)\big) \leqslant 0$ for almost every $x\in M$.
\end{itemize}
\end{prop}
Let us now state some general properties on sub and supersolutions of \eqref{HJgen}. The first item is general, the last two rely on our convexity hypothesis.
\begin{prop}\label{viscositygen}
The following properties hold:
\begin{enumerate}
\item If $u\in C^0(M,\mathbb R)$ is a pointwise supremum (resp. infimum) of a family of subsolutions (resp. supersolutions) of \eqref{HJgen}, then $u$ is a subsolution (resp. supersolution) to \eqref{HJgen}.
\item If $u$ is a pointwise infimum of a family of equi--Lipschitz subsolutions of \eqref{HJgen}, then $u$ is itself a subsolution of \eqref{HJgen}.
\item If $u$ is a convex combination of a family of equi--Lipschitz subsolutions of \eqref{HJgen}, then $u$ is itself a subsolution of \eqref{HJgen}.
\end{enumerate}
\end{prop}
Here is an approximation theorem that we will use repeatedly. Its proof follows from standard mollification techniques. We will see how to obtain it from similar known results.
\begin{prop}\label{approx}
Let $G : T^*M\times \mathbb R \to \mathbb R$ be continuous, convex in $p$ and let $u : M\to \mathbb R$ be a Lipschitz subsolution of \eqref{HJgen}. Then for all $\varepsilon>0$ there exists a smooth $u_\varepsilon : M\to \mathbb R$ such that $G\big(x,D_xu_\varepsilon,u_\varepsilon(x)\big) \leqslant \varepsilon$ and $\|u-u_\varepsilon\|_\infty < \varepsilon$.
\end{prop}
\begin{proof}
Let $\varepsilon>0$. Let $K>0$ be a Lipschitz constant for $u$. If $\eta>0$ we may apply \cite[Theorem 8.1]{fatmad}, see also \cite[Lemma 2.2]{DFIZ} and \cite{FS05}, to the Hamiltonian $F(x,p)=G\big(x,p,u(x)\big)$ to obtain a function $v_\eta : M\to \mathbb R$ that is smooth, $2K$--Lipschitz, such that $\|u-v_\eta\|_\infty <\eta$ and $F(x,D_xv_\eta)<\eta$ for all $x\in M$.
It now follows from the uniform continuity of $G$ on the compact set $\{(x,p,r)\in T^*M\times \mathbb R , \ \ \|p\|_x\leqslant 2K ,\ \ |r|\leqslant \|u\|_\infty +1\}$ that for $\eta$ small enough, the function $v_\eta$ verifies the requirements of the proposition.
\end{proof}
We conclude by the most important result we will need which is a strong comparison principle. It is rather folklore (see \cite[Theorem 2.7]{barles} and the following discussion). Let us however provide elements of proof for the reader's convenience:
\begin{Th}\label{compG}
Let $G : T^*M \times \mathbb R \to \mathbb R$ be continuous, convex in the last two variables, coercive in the second variable and non--decreasing in the last variable. Assume that there exists a $C^1$ function $\varphi : M\to \mathbb R$ such that
$G\big(x,D_x\varphi ,\varphi(x)\big) < 0$, for all $x\in M$. Let $u : M\to \mathbb R$ and $v: M\to \mathbb R$ be respectively a subsolution and a supersolution of \eqref{HJgen}. Then $u\leqslant v$.
\end{Th}
\begin{proof}[sketch of proof]
If $\eta\in (0,1)$ we define $u_\eta = (1-\eta)u+\eta \varphi$. Note that thanks to Proposition \ref{lip}, the $u_\eta$ are equi--Lipschitz. We denote by $K>0$ a common Lipschitz constant. We will prove that $u_\eta\leqslant v$ for all $\eta\in (0,1)$ which proves the Theorem letting $\eta \to 0$.
We now fix $\eta\in (0,1)$ until the end of the proof. By compactness, there exists a positive constant $r>0$ such that $G\big(x,D_x\varphi ,\varphi(x)\big) < -r$, for all $x\in M$. Hence, thanks to the convexity hypothesis of $G$, $u_\eta$ is a subsolution of
$$G\big(x,D_xu_\eta,u_\eta(x)\big) \leqslant -\eta r, \qquad \qquad \qquad x\in M.$$
If $0<\varepsilon <\eta r$ we may apply Proposition \ref{approx} to find a smooth function $\tilde u_\varepsilon$ such that $\|u_\eta - \tilde u_\varepsilon \|_\infty \leqslant \varepsilon$ and
$$G\big(x,D_x\tilde u_\varepsilon,\tilde u_\varepsilon(x)\big) \leqslant -\eta r+\varepsilon <0, \qquad \qquad \qquad x\in M.$$
Let now $x_\varepsilon \in M$ be a maximum point of $\tilde u_\varepsilon - v$. We may see $\tilde u_\varepsilon$ as a test function for $v$ and the supersolution criterion yields that
$$G\big(x_\varepsilon, D_{x_\varepsilon} \tilde u_\varepsilon, v(x_\varepsilon)\big) \geqslant 0.$$
Moreover, as $G\big(x_\varepsilon,D_{x_\varepsilon}\tilde u_\varepsilon,\tilde u_\varepsilon(x_\varepsilon)\big) \leqslant -\eta r+\varepsilon <0$, we deduce that $\tilde u_\varepsilon (x_\varepsilon) -v(x_\varepsilon) \leqslant 0$ as $G$ is non--decreasing in the last variable. Finally, by the choice of $x_\varepsilon$ it follows that $\tilde u_\varepsilon\leqslant v$ on $M$ and letting $\varepsilon \to 0$ we conclude that $u_\eta\leqslant v$ hence the Theorem.
\end{proof}
\section{Classical weak KAM theory}\label{sec:weakKAM}
Good references for this section are \cite{FS05,Fa12}. Much of the present content is also recalled in \cite{DFIZ}.
\subsection{For coercive Hamiltonians}
We here study stationary equations associated to a continuous Hamiltonian function $H : T^*M\to \mathbb R$ that verifies hypotheses (H1) and (H2). If $c\in \mathbb R$ we consider the Hamilton--Jacobi equation
\begin{equation}\label{HJs}
H(x,D_x u) = c , \qquad \qquad x\in M.
\end{equation}
If $u$ is a $C^1$ function then it is a subsolution to \eqref{HJs} as soon as
$$c\geqslant \max \{H(x,D_x u),\ \ x\in M\}.$$
The equation \eqref{HJs} falls into the scope of the previous section hence all its results apply.
In particular, all subsolutions to \eqref{HJs} are automatically Lipschitz, with a common Lipschitz constant being
\begin{equation}\label{kappa}
\kappa_c = \max\big\{ \|p\|_x, \ \ (x,p)\in H^{-1}\big((-\infty, c ) \big)\big\}.
\end{equation}
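As an illustration (a standard example, not taken from this paper), this constant can be computed explicitly for a mechanical Hamiltonian:

```latex
% Illustration (standard mechanical example, not from the paper): for
% H(x,p) = \tfrac12\|p\|_x^2 + V(x) with V : M \to \mathbb{R} continuous and
% c > \min_M V, one has H(x,p) < c \iff \|p\|_x < \sqrt{2(c - V(x))}, so that
\kappa_c \;=\; \max_{x\in M}\sqrt{2\big(c-V(x)\big)} \;=\; \sqrt{2\big(c-\min_M V\big)} ,
% and every subsolution of H(x, D_x u) = c is \kappa_c--Lipschitz.
```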
This leads us to the important notion of critical value:
\begin{df}\rm
The critical value $c_H$ is the smallest constant $c\in \mathbb R$ for which \eqref{HJs} admits subsolutions.
\end{df}
It is not hard to see, using the Ascoli theorem and the stability of viscosity subsolutions, that such a minimum is actually attained. As already introduced, we will call critical equation the stationary Hamilton--Jacobi equation
\begin{equation}
H(x,D_x u) = c_H, \quad \quad x \in M \tag{HJ-crit}.
\end{equation}
We will also refer to as critical the subsolutions, supersolutions and solutions to \eqref{HJ-crit}.
Hence in the rest of the paper, a critical solution (resp. critical subsolution, critical supersolution) is a solution (resp. subsolution, supersolution) to \eqref{HJ-crit}.
Note finally that $c_H$ is the only constant for which \eqref{HJs} admits solutions, called critical solutions or weak KAM solutions.
As the set of subsolutions to \eqref{HJ-crit} is equi--Lipschitz, only the restriction of $H$ to a compact subset of $T^*M$ \big(namely $H^{-1}\big((-\infty,c_H)\big)$\big) is relevant. Hence it is not too restrictive to consider superlinear Hamiltonians, up to modifying the initial coercive Hamiltonian outside of this compact set.
\subsection{For superlinear Hamiltonians}\label{superlinear}
In the rest of this section, we will strengthen condition (H2) to condition (H2'), assuming that $H$ is moreover superlinear and still verifies (H1). This section is more dynamical in spirit. However, as the Hamiltonian is not smooth, there is no Hamiltonian flow at hand and other arguments are required (see also \cite{DS, DavZav} and the appendix in \cite{DZ15}).
We define the Lagrangian $L : TM\to \mathbb R$ through the Legendre transform:
$$\forall (x,v)\in TM, \quad L(x,v) = \max_{p\in T^*_xM} p(v)-H(x,p).$$
The Lagrangian $L$ enjoys similar properties as $H$. It is continuous and verifies
\begin{itemize}
\item[(L1)]\label{L1}(Convexity) For every $x\in M$ the function $L(x,\cdot) : T_xM \to \mathbb R$ is convex.
\item[(L2')]\label{L2'}(Superlinearity) $L(x,v)/\|v\|_x \to +\infty$ as $\|v\|_x \to +\infty$.
\end{itemize}
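As a concrete illustration (again the standard mechanical example, not taken from this paper), the Legendre transform can be computed in closed form:

```latex
% Standard mechanical example (illustration only): for
% H(x,p) = \tfrac12\|p\|_x^2 + V(x), the maximum of p(v) - H(x,p) over p
% is attained at p = g_x(v,\cdot), giving
L(x,v) \;=\; \max_{p\in T^*_xM}\Big[\,p(v) - \tfrac12\|p\|_x^2 - V(x)\Big]
\;=\; \tfrac12\|v\|_x^2 - V(x),
% which indeed verifies (L1) and (L2'). For this Hamiltonian it is classical
% that c_H = \max_M V and that the projected Aubry set is \{V = \max_M V\}.
```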
Some further properties are given by the proposition below:
\begin{prop}\label{fenchel}
The Hamiltonian and Lagrangian are linked by the following:
\begin{itemize}
\item For all $(x,v)\in TM$ and $p\in T^*_x M$, the inequality $L(x,v)+H(x,p) \geqslant p(v)$ holds and is called the Fenchel inequality.
\item Equality, $L(x,v_0)+H(x,p_0) = p_0(v_0)$, holds if and only if one of the two following equivalent properties holds:
\begin{itemize}
\item $p_0\in \partial_v L(x,v_0)$,
\item $v_0\in \partial_p H(x,p_0)$,
\end{itemize}
where we denote by $\partial_v L(x,v_0)$ \big(resp. $\partial_p H(x,p_0)$\big) the subdifferential (in the sense of convex analysis) of the convex function $v\mapsto L(x,v)$ at $v_0$ (resp. $p\mapsto H(x,p)$ at $p_0$).
\item The Hamiltonian can be recovered from the Lagrangian through the Legendre transform:
$$\forall (x,p)\in T^*M, \quad H(x,p) = \max_{v\in T_xM} p(v)-L(x,v).$$
\end{itemize}
\end{prop}
For every $t>0$ we define the action functional $h_t : M\times M\to \mathbb R$:
$$ h_t(x,y) = \inf\left\{ \int_{-t}^0 \Big[L\big(\gamma (s),\dot\gamma(s)\big) +c_H\Big]ds,\ \gamma\in AC([-t,0],M) , \gamma(-t)=x, \gamma(0)=y \right\}, $$
where $AC([-t,0],M)$ is the set of absolutely continuous curves from $[-t,0]$ to $M$.
Note that the infimum is actually a minimum that is reached by a Lipschitz curve by the Clarke-Vinter Theorem \cite[Theorem 16.18]{Cl13} or \cite{Cl7}.
We recall for later use a simple property of the action functional that follows from its definition:
\begin{prop}\label{inegtr}
The functions $h_t$ verify the following triangular inequality:
$$\forall (x,y,z,t,t')\in M\times M\times M\times (0,+\infty)\times (0,+\infty), \quad h_{t+t'}(x,z)\leqslant h_t(x,y)+h_{t'}(y,z).$$
\end{prop}
The following characterization holds (see \cite{DavZav, Fa,FS05}):
\begin{prop}\label{sub}
A function $w : M\to \mathbb R$ is a critical subsolution of \eqref{HJ-crit} if and only if
$$\forall (x,y)\in M\times M,\ \forall t>0,\quad w(y) - w(x) \leqslant h_t(x,y).$$
\end{prop}
The Peierls barrier is defined as follows
\begin{df}\label{Peierl}\rm
The Peierls barrier is the function $h : M\times M\to \mathbb R$:
$$\forall(x,y)\in M\times M,\quad h(x,y) = \liminf_{t\to+\infty} h_t(x,y).$$
\end{df}
By results of Fathi \cite{Fa1}, extended to a less regular setting in \cite{DS}, the liminf above is actually a limit as soon as $H$ is strictly convex.
Fundamental properties of $h$ are summed up below:
\begin{prop}\label{proph}
The Peierls barrier verifies the following properties:
\begin{enumerate}
\item It is finite valued and Lipschitz continuous.
\item If $w:M\to \mathbb R$ is a critical subsolution then
$$\forall (x,y)\in M\times M,\quad w(y) - w(x) \leqslant h(x,y).$$
\item For any $x\in M$, the function $h(x,\cdot)$ is a critical solution.
\item For any $y\in M$, the function $-h(\cdot,y)$ is a critical subsolution.
\end{enumerate}
\end{prop}
We may now give a proper definition of the projected Aubry set:
\begin{df}\label{aubrydef}\rm
The projected Aubry set is the closed and non--empty set
$$\mathcal A = \{x\in M,\ \ h(x,x)=0\}.$$
\end{df}
Let us recall the already introduced Theorem \ref{strict}
\begin{Th*}
There exists a function $v : M\to \mathbb R$ that is a subsolution of \eqref{HJ-crit}, that is $C^{\infty}$ and strict on $M\setminus \mathcal{A}$ meaning that
$$\forall x\in M\setminus \mathcal{A} , \quad H(x,D_x v) < c_H.$$
\end{Th*}
In particular, it is transparent from the previous Theorem and the definition of the critical value $c_H$ that $\mathcal{A}$ must be non--empty: otherwise, $v$ would be a strict critical subsolution on the whole of $M$, contradicting the minimality of $c_H$.
As the set of critical solutions is invariant by addition of constant, the critical equation cannot verify a comparison principle. However, the Aubry set allows to have a result in this direction:
\begin{Th}\label{compA}
If $u$ and $v$ are respectively a critical sub and supersolution such that $u\leqslant v$ on $\mathcal{A}$ then $u\leqslant v$ on the whole of $M$.
In particular, $\mathcal{A}$ is a uniqueness set for \eqref{HJ-crit} meaning that if two critical solutions coincide on $\mathcal{A}$ then they are equal.
\end{Th}
We finish this paragraph on further characterizations of critical solutions (see \cite{FS05,DavZav}):
\begin{Th}\label{weakKAM}
The following are equivalent:
\begin{enumerate}
\item The function $u: M\to \mathbb R$ is a critical solution.
\item The function $u$ is a critical subsolution and
$$\forall x\in M,\forall t>0, \exists y\in M,\quad u(x) = u(y) +h_t(y,x).$$
\item\label{calibr} The function $u$ is a critical subsolution and for all $x\in M$, there exists a Lipschitz curve $\gamma_x : (-\infty , 0]\to M$ such that $\gamma_x(0)=x$ and
$$\forall t>0, \quad u(x) = u\big(\gamma_x(-t)\big) + h_t\big(\gamma_x(-t),x\big).$$
\end{enumerate}
\end{Th}
Let us comment on this last point and make precise a Lipschitz constant for $\gamma_x$. Using the triangular inequality of Proposition \ref{inegtr} and the definition of $h_t$, one infers that,
for $s<t\leqslant 0$,
\begin{equation}\label{calib}
u\big(\gamma_x(t)\big) - u\big(\gamma_x(s)\big) = \int_s^t\Big[L\big(\gamma_x (\sigma),\dot\gamma_x(\sigma)\big) +c_H\Big]d\sigma.
\end{equation}
Recall that $u$ is Lipschitz with constant $\kappa_{c_H}$ given by \eqref{kappa}. Moreover, by superlinearity of $L$, the constant
$$A(\kappa_{c_H}) = \max\left\{ (\kappa_{c_H}+1)\|v\|_x - L(x,v)-c_H , \ \ (x,v)\in TM \right\} $$
is such that
$$\forall (x,v)\in TM,\quad L(x,v) +c_H\geqslant (\kappa_{c_H}+1)\|v\|_x -A(\kappa_{c_H}).$$
One then computes using \eqref{calib} that
\begin{multline*}
\kappa_{c_H}d\big(\gamma_x(t),\gamma_x(s)\big)
\geqslant \int_s^t\left[ (\kappa_{c_H}+1)\|\dot\gamma_x(\sigma)\|_{\gamma_x(\sigma)} -A(\kappa_{c_H}) \right]d \sigma \\
\geqslant (\kappa_{c_H}+1)d\big(\gamma_x(t),\gamma_x(s)\big) - (t-s)A(\kappa_{c_H}) .
\end{multline*}
Hence we conclude that $d\big(\gamma_x(t),\gamma_x(s)\big) \leqslant (t-s)A(\kappa_{c_H})$ and that $\gamma_x$ is indeed Lipschitz with constant $A(\kappa_{c_H})$.
\subsection{Mather measures}
We gather here information already present in \cite{DFIZ} and references therein.
Throughout the rest of this work, we will be dealing with Borel probability measures on $TM$. We will denote this set by $\mathbb P(TM)$.
We say that a sequence $(\mu_n)_{n\in \mathbb N}$ of Borel probability measures on $TM$ weakly converges to $\mu\in \mathbb P(TM)$ if for every continuous function $f : TM\to \mathbb R$ with compact support,
$$\int_{TM} f(x,v)d \mu_n(x,v) \to \int_{TM} f(x,v)d \mu(x,v).$$
If $\mu \in \mathbb P(TM)$, we will denote $\pi^*\mu$ its push forward on $M$ defined by
$\pi^*\mu (B) = \mu\big(\pi^{-1}(B)\big)$ for any Borel set $B\subset M$. Recall that $\pi : TM\to M$ is the canonical projection. Note that the following formula holds for every continuous function $f : M\to \mathbb R$:
$$\int_M f(x) d \pi^*\mu(x) = \int_{TM} f\circ \pi(x,v) d \mu(x,v).$$
We then introduce closed measures:
\begin{df}\label{closed}\rm
We say that $\mu\in \mathbb P(TM)$ is closed if
\begin{enumerate}
\item it has finite first moment: $\int_{TM} \|v\|_x d\mu(x,v)<+\infty$,
\item for every function $f\in C^1(M,\mathbb R)$, $\int_{TM}D_xf (v) d\mu(x,v)=0$.
\end{enumerate}
We will denote by $\mathbb P_0(TM)$ the set of closed measures on $TM$.
\end{df}
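Let us illustrate this definition with a classical example (it will not be used in the sequel): if $\gamma : \mathbb R\to M$ is a $1$--periodic Lipschitz loop, the measure $\mu_\gamma\in \mathbb P(TM)$ defined for $f\in C^0(TM,\mathbb R)$ by
$$\int_{TM} f d\mu_\gamma = \int_0^1 f\big(\gamma(s),\dot\gamma(s)\big) ds$$
is closed. Indeed, it has finite first moment as $\dot\gamma$ is bounded, and for every function $f\in C^1(M,\mathbb R)$,
$$\int_{TM}D_xf (v) d\mu_\gamma(x,v)=\int_0^1 \frac{d}{ds}\big(f\circ \gamma\big)(s) ds = f\big(\gamma(1)\big)-f\big(\gamma(0)\big)=0.$$
In particular, taking constant loops, every Dirac measure $\delta_{(x_0,0)}$ with $x_0\in M$ is closed.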
Let us recall the link between closed measures and the critical value $c_H$ (\cite[Theorem A.7.]{DFIZ}). Recall that in this section, $H$ and $L$ are convex and superlinear in the fibers:
\begin{Th}\label{minimizing}
The following holds:
$$\min_{\mu\in \mathbb P_0(TM)} \int_{TM}L(x,v)d\mu(x,v) = -c_H.$$
\end{Th}
Measures realizing the minimum in the previous expression are called minimizing measures, or Mather measures. We will denote by $\mathbb P_{\mathrm{min}}$ the set of Mather measures. Closely linked is the Mather set:
\begin{df}\label{mather}\rm
The Mather set is defined as
$$\widetilde{\mathcal M} = \bigcup_{\mu \in \mathbb P_{\mathrm{min}}}\mathrm{supp}(\mu)$$
where supp$(\mu)$ denotes the support of $\mu$.
The projected Mather set is $\mathcal M = \pi\big(\widetilde{\mathcal M}\big)$.
\end{df}
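\begin{oss}\rm
Let us illustrate these notions on the classical mechanical example (not needed in the sequel) where $L(x,v) = \frac12 \|v\|_x^2 - V(x)$ for some continuous function $V : M\to \mathbb R$. Every measure $\mu\in\mathbb P(TM)$ with finite first moment verifies
$$\int_{TM} L(x,v) d\mu(x,v) \geqslant -\max_M V,$$
while, if $x_0$ is a maximum point of $V$, the measure $\delta_{(x_0,0)}$ is closed (as $\int_{TM}D_xf(v)d\delta_{(x_0,0)}(x,v) = D_{x_0}f(0)=0$ for all $f\in C^1(M,\mathbb R)$) and verifies
$$\int_{TM} L(x,v) d\delta_{(x_0,0)}(x,v) = L(x_0,0) = -\max_M V.$$
Hence by Theorem \ref{minimizing}, $c_H = \max_M V$, the Dirac measures $\delta_{(x_0,0)}$ are Mather measures, and the Mather set $\widetilde{\mathcal M}$ contains $\big(\mathrm{argmax}\, V\big)\times\{0\}$.
\end{oss}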
It can be proved that the Mather set is compact and it is not empty as we will see in the last section.
We end with a classical result of weak KAM and Aubry--Mather theories. We provide a proof in this continuous setting for the sake of completeness:
\begin{prop}\label{inclusion}
The following inclusion holds: $\mathcal M\subset \mathcal{A}$.
\end{prop}
\begin{proof}
We argue by contradiction and assume that there exists $\mu\in \mathbb P_{\mathrm{min}}$ and $(x_0,v_0)\in \mathrm{supp}(\mu)$ such that $x_0\notin \mathcal{A}$. Let $O$ and $O'$ be open sets of $M$ such that $\overline O \cap \overline{O'}=\varnothing$, $x_0\in O$ and $\mathcal{A}\subset O'$. Let $w : M\to\mathbb R$ be the function given by Theorem \ref{strict} and let $\varepsilon>0$ be such that
$H(x,D_x w)<c_H-\varepsilon$ for $x\in O$. Let $\beta : M\to [0,+\infty)$ be a continuous function such that $\beta^{-1}(\{0\})= \mathcal{A}$, $O\subset \beta^{-1}(\{\varepsilon\})$ and
$$\forall x\in M\setminus \mathcal{A}, \quad H(x,D_x w)<c_H-\beta(x).$$
By assumption, $\mu\big(\pi^{-1}(O)\big)>0$ so let $\varepsilon'>0$ be such that $2\varepsilon' < \varepsilon \mu\big(\pi^{-1}(O)\big)$.
Finally, we apply Proposition \ref{approx} to the function $w$ and Hamiltonian $G(x,p)= H(x,p)-\beta(x)-c_H$ to obtain a $C^1$ function $w_{\varepsilon'}$ such that
$$\forall x\in M, \quad H(x,D_x w_{\varepsilon'})<c_H-\beta(x)+\varepsilon'.$$
We then use the Fenchel inequality (Proposition \ref{fenchel}) and the fact that $\mu$ is closed and minimizing to infer
\begin{align*}
0=\int_{TM} D_x w_{\varepsilon'} (v) d \mu(x,v) & \leqslant \int_{TM}\left[ H(x,D_x w_{\varepsilon'})+L(x,v)\right]d \mu(x,v) \\
&\leqslant \int_{TM} (c_H-\beta(x)+\varepsilon')d\mu(x,v) -c_H \\
&\leqslant \int_{\pi^{-1}(O)} -\beta(x) d\mu(x,v) + \int_{TM}(c_H+\varepsilon') d\mu(x,v) - c_H \\
&= -\varepsilon \mu\big(\pi^{-1}(O)\big) +\varepsilon' <0.
\end{align*}
This is absurd and proves the proposition.
\end{proof}
\section{The degenerate discounted equation}\label{sec:deg}
We now turn to \eqref{HJ-l} and study its properties.
\subsection{Generalities on the degenerate discounted equation}
In this section, unless stated otherwise, we fix a $\lambda>0$ and start with a comparison principle that follows from the previous results:
\begin{prop}\label{comp-l}
Equation \eqref{HJ-l} enjoys the two following properties:
\begin{enumerate}
\item \label{comp-1} If $u : M\to \mathbb R$ and $v: M\to \mathbb R$ are respectively a subsolution and a supersolution of \eqref{HJ-l} then $u\leqslant v$.
\item \label{existence} There exists a unique solution $u_\lambda$ to \eqref{HJ-l}.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate} \item
Let $w : M\to \mathbb R$ be the function given by Theorem \ref{strict}. The function $\tilde w = w-\|w\|_\infty-2\leqslant -2$ is then negative, $C^{\infty}$, and still a subsolution of \eqref{HJ-crit} that is strict on $M\setminus \mathcal{A}$, meaning that
$$\forall x\in M\setminus \mathcal{A} , \quad H(x,D_x \tilde w) < c_H.$$
Let $\delta = \frac12 \min\{\alpha(x) , \ \ x\in \mathcal{A}\}$. Then $\delta>0$ and $\mathcal{A} \subset \alpha^{-1}\big( (\delta,+\infty)\big)$.
Let $\eta>0$ be such that
$$\forall x\in \alpha^{-1}([0,\delta]) , \quad H(x,D_x \tilde w) < c_H-\eta.$$
Finally, let $\beta : M\to [0,+\infty)$ be a continuous function such that $\beta^{-1}(\{0\}) = \mathcal{A}$, $\beta = \eta$ on $\alpha^{-1}([0,\delta])$ and
$$\forall x\in M\setminus \mathcal{A} , \quad H(x,D_x \tilde w) < c_H-\beta(x).$$
If $0<\varepsilon<\min(1,\lambda\delta, \eta)$, by Proposition \ref{approx}, there exists a smooth $w_\varepsilon : M\to \mathbb R$ such that $\|\tilde w-w_\varepsilon\|_\infty<\varepsilon$ and
$$\forall x\in M , \quad H(x,D_x w_\varepsilon) \leqslant c_H-\beta(x)+\varepsilon.$$
It follows that if $x\in \alpha^{-1}([0,\delta])$ then
$$\lambda \alpha(x) w_\varepsilon (x) + H(x,D_x w_\varepsilon)\leqslant H(x,D_x w_\varepsilon)\leqslant c_H-\beta(x)+\varepsilon = c_H-\eta+\varepsilon <c_H.$$
On the other hand, if $x\in \alpha^{-1}\big( (\delta , +\infty)\big) $ then
$$\lambda \alpha(x) w_\varepsilon (x) + H(x,D_x w_\varepsilon)\leqslant -\lambda \delta + H(x,D_x w_\varepsilon)\leqslant -\lambda\delta +c_H + \varepsilon <c_H.$$
It follows that $w_\varepsilon$ is a smooth strict subsolution to \eqref{HJ-l} and we may directly apply Theorem \ref{compG}, which yields the result.
\item This is now a standard result in viscosity solutions theory (see \cite[Théorème 2.12, Théorème 2.14]{barles}). Uniqueness is a direct consequence of the first part. As for existence, it follows from Perron's method. Let $u_0 : M\to \mathbb R$ be a critical solution. One verifies that the function $\bar u = u_0+\|u_0\|_\infty \geqslant 0$ is a supersolution to \eqref{HJ-l} and that $\underline u = u_0-\|u_0\|_\infty \leqslant 0$ is a subsolution to \eqref{HJ-l}. By the first point any subsolution $u$ verifies $u\leqslant \bar u$. Moreover, the set $\mathcal S_\lambda$ of subsolutions $u$ to \eqref{HJ-l} such that $\underline u \leqslant u \leqslant \bar u$ is not empty as it contains $\underline u$. The set $\mathcal S_\lambda$ is made of equiLipschitz functions as they verify for almost every $x\in M$,
$$H(x,D_x u ) \leqslant c_H - \lambda\alpha(x) u(x)\leqslant c_H +\lambda \|\alpha\|_\infty \|\underline u\|_\infty. $$
A common Lipschitz constant for elements of $\mathcal S_\lambda$ is
$$\kappa_{ c_H +\lambda \|\alpha\|_\infty \|\underline u\|_\infty}= \max\big\{ \|p\|_x, \ \ (x,p)\in H^{-1}\big((-\infty, c_H +\lambda \|\alpha\|_\infty \|\underline u\|_\infty ) \big)\big\}.$$
Hence one verifies that the function $u_\lambda = \max\limits_{u\in \mathcal S_\lambda} u$ is well defined, Lipschitz and a solution to \eqref{HJ-l}.
\end{enumerate}
\end{proof}
Before the next and fundamental subsection let us give a property on the family of solutions:
\begin{lemma}\label{boundedLip}
The family $(u_\lambda)_{\lambda \in (0,1]}$ is equibounded and equiLipschitz.
\end{lemma}
\begin{proof}
We have already seen that $\underline u\leqslant u_\lambda \leqslant \bar u$ for all $\lambda >0$. As far as the equiLipschitz property is concerned, if $0<\lambda \leqslant 1$, a common Lipschitz constant is
$$\kappa_{ c_H + \|\alpha\|_\infty \|\underline u\|_\infty}= \max\big\{ \|p\|_x, \ \ (x,p)\in H^{-1}\big((-\infty, c_H + \|\alpha\|_\infty \|\underline u\|_\infty ) \big)\big\}.$$
\end{proof}
\subsection{Representation formula for $u_\lambda$}
Thanks to Lemma \ref{boundedLip} we know that for $\lambda\in (0,1]$ all the functions $u_\lambda$, as well as the critical solutions, are equiLipschitz. Hence only the knowledge of $H$ on the compact set
$$\{(x,p)\in T^*M , \quad \|p\|_x\leqslant \kappa_{ c_H + \|\alpha\|_\infty \|\underline u\|_\infty}\}$$
is relevant for our study. Therefore, up to modifying $H$ outside of this compact set, we assume without loss of generality, until the end of the article, that $H$ satisfies the superlinearity hypothesis (H2'). We will now apply the results of Section \ref{superlinear}.
We fix once more $\lambda\in(0,1]$ until the end of this section.
\begin{nota}\label{notation}
If $\gamma : I\to M$ is a continuous curve defined on an interval containing $0$ and if $s\in I$ we denote
$$A_\gamma (s) = \int_{0}^s \alpha\circ\gamma(\sigma) d\sigma.$$
\end{nota}
We start with a representation formula in the spirit of Theorem \ref{weakKAM}.
\begin{Th}\label{representation}
The function $u_\lambda $ verifies the following properties:
\begin{enumerate}
\item For all $x\in M$ and $t>0$,
\begin{multline}\label{LOl}
u_\lambda(x) = \min_{\substack{\gamma \in AC([-t,0],M )\\ \gamma(0)=x}} \Bigg \{ \exp\big(\lambda A_\gamma (-t) \big)u_\lambda\big(\gamma(-t)\big) \\
+\int_{-t}^0 \exp\big(\lambda A_\gamma (s) \big) \left[L\big(\gamma(s),\dot\gamma(s)\big)+c_H\right] ds\Bigg\}.
\end{multline}
\item For all $x\in M$ there exists a Lipschitz curve $\gamma_{\lambda,x} : (-\infty,0] \to M$ with $\gamma_{\lambda,x}(0)=x$ such that for all $t>0$,
\begin{multline}\label{LOl=}
u_\lambda(x) = \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) u_\lambda\big(\gamma_{\lambda,x}(-t)\big) \\
+\int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\left[ L\big(\gamma_{\lambda,x}(s),\dot\gamma_{\lambda,x}(s)\big)+c_H\right] ds.
\end{multline}
\end{enumerate}
\end{Th}
\begin{proof}
We will prove both points simultaneously. The function $u_\lambda $ is a solution to the Hamilton--Jacobi equation
$$H_\lambda(x,D_x u) =c_H, \quad \quad x \in M,$$
where $H_\lambda (x,p) = \lambda \alpha(x) u_\lambda(x) + H(x,p) $.
The Hamiltonian $H_\lambda$ satisfies hypotheses (H1) and (H2') hence by Proposition \ref{sub}, for all $x\in M$, $T>0$ and every absolutely continuous curve $\gamma : [-T,0]\to M$ such that $\gamma(0)=x$, we find that for $0\leqslant t\leqslant T$,
$$u_\lambda(x)\leqslant u_\lambda\big(\gamma(-t)\big)+ \int_{-t}^0 \Big[L_\lambda\big(\gamma (s),\dot\gamma(s)\big) +c_H\Big]ds,$$
where one easily computes that $L_\lambda$, the Lagrangian associated to $H_\lambda$, is
$$\forall (y,v)\in TM,\quad L_\lambda(y,v) = L(y,v)-\lambda \alpha(y) u_\lambda(y) .$$
Setting $f(t) =- u_\lambda\big(\gamma(-t)\big)$ one finds by a change of variables $s\mapsto -s$
$$f(t)\leqslant f(0) +\int_0^t \Big[L\big(\gamma (-s),\dot\gamma(-s)\big) +c_H+ \lambda \alpha\big(\gamma(-s)\big) f(s)\Big]ds. $$
Recalling that $\alpha\geqslant 0$ we may apply Gronwall's inequality (\cite[Lemma 2.1]{mis}), thus obtaining
\begin{multline*}
f(t)\leqslant \exp\left(\lambda \int_{0}^t \alpha\circ\gamma(-\sigma) d\sigma \right) f(0)\\
+\int_0^t \Big[L\big(\gamma (-s),\dot\gamma(-s)\big) +c_H \Big]\exp \left(\lambda \int_{s}^t \alpha\circ\gamma(-\sigma) d\sigma \right) ds.
\end{multline*}
After dividing by $\exp\left(\lambda \int_{0}^t \alpha\circ\gamma(-\sigma) d\sigma \right) $ and making the changes of variables $\sigma \to -\sigma $ and $s\to -s$ we find that
\begin{equation}\label{ssdeg}
u_\lambda(x) \leqslant \exp\big(\lambda A_\gamma (-t) \big)u_\lambda\big(\gamma(-t)\big) +\int_{-t}^0 \exp\big(\lambda A_\gamma (s) \big) \big[L\big(\gamma(s),\dot\gamma(s)\big)+c_H\big] ds.
\end{equation}
To finish the proof, we will directly establish the second point, thus proving that \eqref{LOl} is valid and that the right-hand side is indeed a minimum. We use here item \ref{calibr} of Theorem \ref{weakKAM} (with Hamiltonian $H_\lambda$) to obtain a Lipschitz curve $\gamma_{\lambda,x} : (-\infty , 0] \to M$ such that $\gamma_{\lambda,x}(0) = x$ and
\begin{multline*}
u_\lambda(x) - u_\lambda\big(\gamma_{\lambda,x}(-t)\big) = \int_{-t}^0\Big[L_\lambda\big(\gamma_{\lambda,x} (\sigma),\dot\gamma_{\lambda,x}(\sigma)\big) +c_H\Big]d\sigma \\
= \int_{-t}^0\Big[L\big(\gamma_{\lambda,x} (\sigma),\dot\gamma_{\lambda,x}(\sigma)\big) +c_H-\lambda \alpha\big(\gamma_{\lambda,x} (\sigma)\big) u_\lambda \big(\gamma_{\lambda,x} (\sigma)\big) \Big]d\sigma,
\end{multline*}
for all $t>0$.
Hence we find that for $t>0$,
$$g(t) =g(0) + \int_0^t h(s)g(s)ds+\int_0^t \ell(s)ds,$$
where $g(s) =- u_\lambda\big(\gamma_{\lambda,x}(-s)\big) $, $h(s) = \lambda \alpha\big(\gamma_{\lambda,x} (-s)\big)$ and
$$\ell(s) = L\big(\gamma_{\lambda,x} (-s),\dot\gamma_{\lambda,x}(-s)\big) +c_H.$$
We infer from this,
\footnote{The proof of this last fact follows exactly the classical proof of the Cauchy--Lipschitz Theorem. Fix $T>0$ and let $\mathcal F$ be the operator from $\big(C^0([0,T],\mathbb R),\|\cdot\|_\infty\big)$ to itself defined for $f\in C^0([0,T],\mathbb R)$ by
$$\forall t\in [0,T], \quad \mathcal F (f) (t) = g(0) + \int_0^t h(s)f(s)ds+\int_0^t \ell(s)ds.$$
The operator $\mathcal F$ is $T\|h\|_\infty$-Lipschitz, hence for $T$ small enough it is a contraction and thus has a unique fixed point. This proves local existence of a solution. As the existence time is uniform, the maximal solution is defined for all $t>0$. Finally, one checks by a direct computation that the given function is a solution.}
$$\forall t>0, \quad g(t) = g(0)\exp\left(\int_0^t h(s) ds\right) +\int_0^t \ell(s)\exp\left( \int_s^t h(\sigma)d\sigma \right) ds,$$
which is the announced formula.
\end{proof}
\begin{oss}\label{remsg}
Note that, if $0<s<t$, by subtracting \eqref{LOl=} applied to $s$ and $t$, one obtains
\begin{multline}\label{LOlst}
\exp\left(\lambda A_{\gamma_{\lambda,x}} (-s) \right)u_\lambda\big(\gamma_{\lambda,x}(-s)\big) =\exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) u_\lambda\big(\gamma_{\lambda,x}(-t)\big) \\
+\int_{-t}^{-s} \exp\left(\lambda A_{\gamma_{\lambda,x}} (\sigma) \right)\left[ L\big(\gamma_{\lambda,x}(\sigma),\dot\gamma_{\lambda,x}(\sigma)\big)+c_H\right] d\sigma.
\end{multline}
In particular, the curve $\gamma_{\lambda,x}(-s+\cdot)$ realizes equality in \eqref{LOl=} for the point $\gamma_{\lambda,x}(-s)$.
\end{oss}
We continue by giving important properties of the curves $\gamma_{\lambda,x}$ and their asymptotics.
\begin{prop}\label{equiLip}
The curves $\gamma_{\lambda,x}$ with $\lambda\in (0,1]$ and $x\in M$ are equiLipschitz.
\end{prop}
\begin{proof}
Recall that by construction, the curves $\gamma_{\lambda,x}$ verify for $s\leqslant t\leqslant 0$,
\begin{equation}\label{calibl}
u_\lambda\big(\gamma_{\lambda,x}(t)\big) - u_\lambda\big(\gamma_{\lambda,x}(s)\big) = \int_s^t\Big[L_\lambda\big(\gamma_{\lambda,x} (\sigma),\dot\gamma_{\lambda,x}(\sigma)\big) +c_H\Big]d\sigma,
\end{equation}
where $L_\lambda : (x,v)\mapsto L(x,v) - \lambda\alpha(x)u_\lambda(x)$ is the Lagrangian associated to the Hamiltonian $H_\lambda : (x,p)\mapsto H(x,p)+\lambda\alpha(x)u_\lambda(x)$.
Recall that the functions $(u_\lambda)_{\lambda\in (0,1]}$ are equiLipschitz, by Lemma \ref{boundedLip}, with a constant that we denote by $K>0$. We also assume, again by Lemma \ref{boundedLip}, that $K$ is large enough that $\|u_\lambda \|_\infty <K$ for $\lambda \in (0,1]$. Moreover, by superlinearity of $L$, the constant
$$A(K) = \max\left\{ (K+1)\|v\|_x - L(x,v)-c_H , \ \ (x,v)\in TM \right\} $$
is such that
$$\forall (x,v)\in TM,\quad L(x,v) +c_H\geqslant (K+1)\|v\|_x -A(K).$$
One then computes using \eqref{calibl} that
\begin{multline*}
Kd\big(\gamma_{\lambda,x}(t),\gamma_{\lambda,x}(s)\big)
\geqslant \int_s^t\left[ (K+1)\|\dot\gamma_{\lambda,x}(\sigma)\|_{\gamma_{\lambda,x}(\sigma)} -A(K) - \lambda \|\alpha\|_\infty \|u_\lambda\|_\infty \right]d \sigma \\
\geqslant (K+1)d\big(\gamma_{\lambda,x}(t),\gamma_{\lambda,x}(s)\big) - (t-s)\big(A(K)+K\|\alpha\|_\infty\big) .
\end{multline*}
Hence we conclude that $d\big(\gamma_{\lambda,x}(t),\gamma_{\lambda,x}(s)\big) \leqslant (t-s)\big(A(K)+K\|\alpha\|_\infty\big)$ and that $\gamma_{\lambda,x}$ is indeed Lipschitz with constant $\big(A(K)+K\|\alpha\|_\infty\big)$ that is independent of $x\in M$ and $\lambda\in (0,1]$.
\end{proof}
\begin{prop}\label{excursion}
There exist $T>0$ and $\varepsilon>0$ such that,
for all $x\in M$ and $\lambda \in (0,1]$, the curve $\gamma_{\lambda,x}$ given by the previous Theorem verifies
$$\forall t>0, \exists t'\in [t,t+T] , \quad \alpha \circ \gamma_{\lambda,x}(-t') >\varepsilon.$$
\end{prop}
\begin{proof}
We argue by contradiction and assume that for all $n>0$ there exists $x_n\in M$, $\lambda_n \in (0,1]$ and $t_n>0$ such that
$$\forall t\in [t_n,t_n+n] , \quad \alpha \circ \gamma_{\lambda_n,x_n}(-t) \leqslant \frac1n.$$
We introduce the notations $\gamma_n = \gamma_{\lambda_n,x_n}$ and $A_n = A_{ \gamma_{\lambda_n,x_n}}$ (with respect to Notation \ref{notation}).
Let us define probability measures $\mu_n$ on $TM$ by
\begin{align*}
\int_{TM} f d\mu_n& =C_n \int_{-(t_n+n)}^{-t_n} \exp\big(\lambda_n A_{n} (s) \big)f\big(\gamma_n(s),\dot\gamma_n(s)\big) ds \\
& =C'_n \int_{-(t_n+n)}^{-t_n} \exp\big(\lambda_n A_n (s) -\lambda_n A_n (-t_n) \big)f\big(\gamma_n(s),\dot\gamma_n(s)\big) ds ,
\end{align*}
for $f\in C^0(TM,\mathbb R)$, where $C_n = \big( \int_{-(t_n+n)}^{-t_n} \exp\big(\lambda_n A_n (s) \big)ds \big)^{-1}$ and
\begin{multline*}
C'_n = \left( \int_{-(t_n+n)}^{-t_n} \exp\big(\lambda_n A_n (s)-\lambda_n A_n (-t_n) \big)ds \right)^{-1} \\
= \left( \int_{-(t_n+n)}^{-t_n} \exp\left(-\lambda_n \int_{s}^{-t_n} \alpha \circ \gamma_n(\sigma) d\sigma \right)ds \right)^{-1}.
\end{multline*}
Note that our hypothesis implies that $C_n' \to 0$. Indeed,
$$\forall s\in [-t_n-n,-t_n],\quad \exp\left(-\lambda_n \int_{s}^{-t_n} \alpha \circ \gamma_n(\sigma) d\sigma \right)\geqslant \exp\left( \frac{\lambda_n}{n}(t_n+s)\right),
$$
so that
$$(C_n')^{-1} \geqslant \int_{-n}^0 \exp\left( \frac{\lambda_n s}{n}\right)ds = \frac{n}{\lambda_n}\big(1-\exp(-\lambda_n)\big) \geqslant n(1-e^{-1}),
$$
where the last inequality uses that $\lambda\mapsto \lambda^{-1}\big(1-\exp(-\lambda)\big)$ is nonincreasing and $\lambda_n\leqslant 1$.
As the $(\gamma_n)_n$ are equiLipschitz (Proposition \ref{equiLip}), the family $(\mu_n)_n$ is supported in a common compact subset of $TM$ and we may extract a weakly convergent subsequence $\mu_{k_n} \rightharpoonup \mu$. We will prove that $\mu $ is a Mather measure.
Let us introduce the notation $B_n(s) = \lambda_n A_n (s)-\lambda_n A_n (-t_n) $.
\textbf{The measure $\mu$ is closed: } let $f : M\to \mathbb R$ be a $C^1$ function; one computes, integrating by parts:
\begin{align*}
\int_{TM} D_xf(v) d\mu_n(x,v) &=C'_n \int_{-t_n-n}^{-t_n} \exp\big(B_n(s) \big)D_{\gamma_n(s)}f\big(\dot\gamma_n(s)\big) ds \\
&= C'_n \Big[ \exp\big(B_n(s)\big) f\big( \gamma_n(s) \big) \Big]_{-t_n-n}^{-t_n} \\
& \qquad - C'_n \int_{-t_n-n}^{-t_n} \frac{d}{ds} \Big(\exp\big(B_n(s) \big) \Big)f\big( \gamma_n(s) \big)ds.
\end{align*}
The first term goes to $0$ as $C_n' \to 0$ and the functions $s\mapsto \exp\big(B_n(s) \big)f\big( \gamma_n(s) \big)$ are equibounded for $s\leqslant 0$.
To handle the second term we infer
$$ \int_{-t_n-n}^{-t_n} \frac{d}{ds} \Big(\exp\big(B_n (s) \big) \Big)f\big( \gamma_n(s) \big)ds =
\int_{-t_n-n}^{-t_n} \lambda_n \alpha\big(\gamma_n(s)\big) \exp\big(B_n(s) \big) f\big( \gamma_n(s) \big)ds.
$$
Hence, recalling our hypothesis,
$$\left| \int_{-t_n-n}^{-t_n} \frac{d}{ds} \Big(\exp\big(B_n (s) \big) \Big)f\big( \gamma_n(s) \big)ds\right| \leqslant \frac{\lambda_n\|f\|_\infty}{n} \int_{-t_n-n}^{-t_n} \exp\big(B_n (s) \big) ds= \frac{\lambda_n\|f\|_\infty}{nC_n'},$$
it follows that
$$\lim_{n\to +\infty} C_n' \int_{-t_n-n}^{-t_n} \frac{d}{ds} \Big(\exp\big(B_n(s) \big) \Big)f\big( \gamma_n(s) \big)ds = 0.$$
Finally, taking the limit along the subsequence $k_n$, we conclude that
$$\int_{TM} D_xf(v) d\mu(x,v)=0,$$
hence $\mu $ is closed.
\textbf{The measure $\mu$ is minimizing: } indeed let us start from the equalities coming from Theorem \ref{representation} and Remark \ref{remsg}:
\begin{align*}
\int_{TM} L(x,v) d\mu_n(x,v) & = C'_n \int_{-t_n-n}^{-t_n} \exp\big(B_n(s) \big)L\big(\gamma_n(s),\dot\gamma_n(s)\big) ds \\
&=C_n'\Big(u_{\lambda_n}\big(\gamma_n(-t_n)\big)- \exp\big(B_n (-t_n-n) \big) u_{\lambda_n}\big(\gamma_n(-t_n-n)\big)\Big) \\
&\qquad -\int_{TM}c_H d\mu_n.
\end{align*}
Once more, the first term converges to $0$ as $C_n'\to 0$ and the functions $(u_\lambda)_{\lambda\in (0,1]}$ and $s\mapsto \exp\big(B_n(s)\big), s<0$ are equibounded (Lemma \ref{boundedLip}). Going along the subsequence $k_n$ we conclude that $\int_{TM}L(x,v)d\mu = -c_H$ as announced.
\textbf{The projected measure $\pi^*\mu$ is supported in $\alpha^{-1}(\{0\})$:} indeed, let $\varepsilon>0$ and let $\chi_\varepsilon$ be the indicator function of $\alpha^{-1}\big((\varepsilon,+\infty)\big) $. If $n$ is large enough that $1/n<\varepsilon$ then clearly
$$
\int_M \chi_\varepsilon d\pi^*\mu_n = C'_n \int_{-t_n-n}^{-t_n} \exp\big(B_n(s) \big)\chi_\varepsilon\big(\gamma_n(s)\big) ds=0.
$$
This is a contradiction as, by Proposition \ref{inclusion}, the support of $\pi^*\mu$ should be included in the projected Aubry set, on which $\alpha$ is positive. Hence we have proved the Proposition.
\end{proof}
As a first consequence we deduce
\begin{lemma}\label{courbe-aub}
Let $\lambda\in (0,1]$ and $x\in M$. Then $\int_{-\infty}^0 \alpha\big(\gamma_{\lambda,x} (s)\big)ds = +\infty$.
\end{lemma}
\begin{proof}
Thanks to the previous proposition there exists $\varepsilon>0$ and an increasing sequence $t_n\to +\infty$ such that
$ \alpha\big(\gamma_{\lambda,x}(-t_n)\big) \geqslant \varepsilon$, for all $n\in \mathbb N$.
Up to extracting, we may without loss of generality assume that $t_{n+1}-t_n \geqslant 2$ for all $n\in \mathbb N$. As $\gamma_{\lambda,x}$ is Lipschitz, there exists $1>\tau>0$ such that
$$\forall n\in \mathbb N, \forall h\in (-\tau,\tau),\quad \alpha\big(\gamma_{\lambda,x}(-t_n+h)\big) \geqslant \frac{\varepsilon}{2}.$$
It follows that
$$\int_{-\infty}^0 \alpha\big(\gamma_{\lambda,x} (s)\big)ds\geqslant \sum_{n=0}^{+\infty} \int_{-t_n-\tau}^{-t_n+\tau} \alpha\big(\gamma_{\lambda,x} (s)\big)ds \geqslant \tau\varepsilon \sum_{n=0}^{+\infty}1 = +\infty.$$
\end{proof}
As an immediate corollary, we obtain the most important result of this section:
\begin{Th}\label{representation-l}
Let $x\in M$ and $\gamma_{\lambda,x}$ be a curve given by Theorem \ref{representation}, then
\begin{equation}\label{LOl=+}
u_\lambda(x) =\int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\left[ L\big(\gamma_{\lambda,x}(s),\dot\gamma_{\lambda,x}(s)\big)+c_H\right] ds.
\end{equation}
\end{Th}
\begin{proof}
By Lemma \ref{courbe-aub}, $A_{\gamma_{\lambda,x}}(-t) \to -\infty$ as $t\to +\infty$. As $u_\lambda$ is bounded, the first term on the right-hand side of \eqref{LOl=} vanishes as $t\to +\infty$, which yields \eqref{LOl=+}.
\end{proof}
\section{The convergence result}\label{sec:proof}
We now have all the tools necessary to prove Theorem \ref{main}. The proof follows the original arguments of \cite{DFIZ}. We will start by proving the result in two easy cases before turning to the general case.
\subsection{Warm up: two baby cases}
Though we state it in our setting, the first result holds without the convexity assumption:
\begin{prop}\label{bb1}
Assume that constant functions are critical subsolutions. Then the family $(u_\lambda)_{\lambda>0}$ is nondecreasing as $\lambda \searrow 0$; in particular, it converges.
\end{prop}
\begin{proof}
As constants are subsolutions it follows that if $\lambda >0$,
$$\forall x\in M, \quad \lambda \alpha(x)\times 0 +H(x, D_x 0) \leqslant c_H.$$
It follows from the comparison principle (Proposition \ref{comp-l}) that $u_\lambda \geqslant 0$.
If now $0<\lambda< \lambda'$, we have in the viscosity sense
$$ \lambda \alpha(x) u_{\lambda'}(x) +H(x, D_x u_{\lambda'}) \leqslant \lambda' \alpha(x) u_{\lambda'}(x) +H(x, D_x u_{\lambda'}) = c_H.$$
Hence $u_{\lambda'}$ is a subsolution of \eqref{HJ-l} and again by the comparison principle, $u_{\lambda'}\leqslant u_\lambda$. The rest is an immediate consequence of Lemma \ref{boundedLip}.
\end{proof}
The second case reduces to \cite{DFIZ} and can be seen as a motivation for the rest of our work:
\begin{prop}\label{bb2}
Assume that $\alpha(x)>0$ on $M$, then the family $(u_\lambda)_{\lambda>0}$ converges as $\lambda\to 0$.
\end{prop}
\begin{proof}
If $\lambda>0$ then $u_\lambda$ is also a viscosity solution to
$$\lambda u_\lambda(x) +H_{\alpha} ( x, D_x u_\lambda) = 0 , \qquad x\in M$$
where $H_\alpha (x,p) = \alpha^{-1}(x) \big(H(x,p) - c_H\big)$ still verifies hypotheses (H1) and (H2). Moreover, any critical solution of $H(x, D_x u)=c_H$ is a solution of $H_\alpha(x,D_x u)=0$ hence the critical constant of $H_\alpha$ is $0$. The result is now a direct application to $H_\alpha$ of \cite[Theorem 1.1]{DFIZ}.
\end{proof}
\begin{oss}\rm
This second case is implied by our hypothesis ($\alpha2$) when the projected Aubry set verifies $\mathcal{A}=M$. This happens for instance in the presence of an exact KAM torus. If $\mathcal{A}=M$ then there is a unique weak KAM solution up to a constant and the selection problem reduces to selecting a particular constant. However, in a forthcoming work (\cite{CFZZ}), we will weaken this hypothesis ($\alpha2$). For example, in the case of an exact KAM torus on which the Hamiltonian flow is conjugated to an irrational rotation, the conclusions of the selection Theorem hold true for any nonnegative function $\alpha$ that is not identically $0$. To be more precise, in \cite{CFZZ} we will tackle degenerate discounted problems involving Hamiltonians depending nonlinearly on the value of the function. In contrast with the present work, the proofs are different as they do not rely on explicit representation formulas for the $u_\lambda$.
\end{oss}
\subsection{The general case}
As the family $(u_\lambda)_{\lambda \in (0,1]}$ is relatively compact, we only have to prove that there is only one accumulation point as $\lambda \to 0$. We start by establishing a constraint on such accumulation points:
\begin{prop}\label{constraints}
Let $(\lambda_n)_{n\in \mathbb N}$ be a decreasing sequence converging to $0$ such that $(u_{\lambda_n})_n$ uniformly converges to a function $v_0$. Then
\begin{equation}\label{inegconstraint}
\forall \mu\in \mathbb P_{\mathrm{min}}, \quad \int_{TM} \alpha(x)v_0(x) d\mu(x,v) \leqslant 0.
\end{equation}
\end{prop}
\begin{proof}
Let $\mu$ be a Mather measure and let $\lambda>0$. For $\varepsilon >0$, by applying Proposition \ref{approx} to the Hamiltonian $H(x,p)+\lambda\alpha(x) u_\lambda(x)- c_H$ we find a $C^1$ function $v_\varepsilon : M\to \mathbb R$ such that
$$\forall x\in M,\quad \lambda\alpha(x) u_\lambda(x) + H(x, D_x v_\varepsilon )-c_H \leqslant \varepsilon.$$
We then integrate this inequality and apply the Fenchel inequality and the properties of $\mu \in\mathbb P_{\mathrm{min}}$ to infer that
\begin{align*}
\int_{TM} \lambda\alpha(x) u_\lambda(x)d\mu(x,v) & = \int_{TM}\Big[ \lambda\alpha(x) u_\lambda(x) +D_x v_\varepsilon (v) -L(x,v)-c_H\Big]d\mu(x,v) \\
&\leqslant \int_{TM}\Big[ \lambda\alpha(x) u_\lambda(x) +H(x, D_x v_\varepsilon)-c_H\Big]d\mu(x,v) \\
&\leqslant \int_{TM}\varepsilon d\mu(x,v) = \varepsilon.
\end{align*}
As this holds for all $\varepsilon >0$ we conclude, after dividing by $\lambda >0$, that
$$ \int_{TM} \alpha(x)u_\lambda(x) d\mu(x,v) \leqslant 0.$$
Passing to the limit along the sequence $\lambda_n$, recalling that $u_{\lambda_n} \to v_0$ uniformly, we conclude that $ \int_{TM} \alpha(x)v_0(x) d\mu(x,v) \leqslant 0$ as was to be shown.
\end{proof}
\begin{df}\rm\label{limit}
Let $\S_0$ be the set of critical subsolutions $u$ to \eqref{HJ-crit} verifying the constraint
\begin{equation}\label{constraint}
\forall \mu\in \mathbb P_{\mathrm{min}}, \quad \int_{TM} \alpha(x)u(x) d\mu(x,v) \leqslant 0.
\end{equation}
We then define $u_0(x) = \sup\limits_{u\in \S_0} u(x)$.
\end{df}
By stability of the notion of viscosity solutions any accumulation point $v_0$ of $(u_\lambda)_\lambda$ is a critical solution, hence by Proposition \ref{constraints}, $v_0 \in \S_0$.
The function $u_0$ is well defined as functions in $\S_0$ are equiLipschitz and must take a nonpositive value. By Proposition \ref{viscositygen} $u_0$ is a critical subsolution. Moreover, if $v_0$ is as above, $v_0\leqslant u_0$. To prove the convergence result, we will establish the reverse inequality.
We will need the following
\begin{lemma}\label{chepalle}
Let $\lambda\in (0,1]$ and $x\in M$. Let $\gamma_{\lambda,x} : (-\infty , 0] \to M$ be given by Theorem \ref{representation}, then
$\int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds <+\infty$.
\end{lemma}
\begin{proof}
We use Proposition \ref{excursion} and consider $T>0$ and $\varepsilon>0$ such that
$$\forall t>0, \exists t'\in [t,t+T] , \quad \alpha \circ \gamma_{\lambda,x}(-t') >\varepsilon.$$
By induction we construct an increasing sequence $(t_n)_n$ such that $t_0\in [1,T+1]$, $2\leqslant t_{n+1}-t_n \leqslant T+2$ and $\alpha \circ \gamma_{\lambda,x}(-t_n) >\varepsilon$ for all $n\in \mathbb N$. As in Lemma \ref{courbe-aub},
using that $\gamma_{\lambda,x}$ is Lipschitz, there exists $1>\tau>0$ such that
$$\forall n\in \mathbb N, \forall h\in (-\tau,\tau),\quad \alpha\big(\gamma_{\lambda,x}(-t_n+h)\big) \geqslant \frac{\varepsilon}{2}.$$
Hence if $n\geqslant 0$,
$$A_{\gamma_{\lambda,x}} (-t_n-\tau) \leqslant -\sum_{k=0}^n \int_{-t_k-\tau}^{-t_k+\tau}\alpha\big(\gamma_{\lambda,x}(h)\big)dh \leqslant -(n+1)\tau \varepsilon,
$$
and using the monotonicity of $A_{\gamma_{\lambda,x}}$:
\begin{align}
\int_{-t_n-\tau}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds& = \int_{-t_0-\tau}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds+ \sum_{k=0}^{n-1} \int_{-t_{k+1}-\tau}^{-t_k-\tau} \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds \nonumber\\
&\leqslant \int_{-t_0-\tau}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds+ \sum_{k=0}^{n-1}(t_{k+1}-t_k) \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t_k-\tau) \right) \nonumber \\
&\leqslant \int_{-t_0-\tau}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds+ \sum_{k=0}^{n-1}(T+2) \exp\big(-\lambda (k+1)\tau\varepsilon\big) \nonumber\\
&\leqslant T+2+\frac{(T+2)\exp\big(-\lambda \tau\varepsilon\big)}{1-\exp\big(-\lambda \tau\varepsilon\big)} .\label{eqpalle}
\end{align}
Letting $n\to +\infty$ yields the result.
\end{proof}
For $x\in M$ and $\lambda\in (0,1]$, we fix $\gamma_{\lambda,x} : (-\infty , 0] \to M$ given by Theorem \ref{representation} and define the probability measure $\mu_x^\lambda$ on $TM$ by
$$\forall f\in C^0(TM,\mathbb R),\quad \int_{TM} f d\mu_x^\lambda = C_{\lambda,x}\int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)f\big(\gamma_{\lambda,x}(s),\dot\gamma_{\lambda,x}(s)\big) ds ,$$
where $(C_{\lambda,x})^{-1} = \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds$.
As a corollary of the proof of Lemma \ref{chepalle} we deduce
\begin{lemma}\label{pallebis}
The function $(\lambda,x)\mapsto \lambda (C_{\lambda,x})^{-1}$ is uniformly bounded as $\lambda \to 0$.
\end{lemma}
\begin{proof}
We have seen in \eqref{eqpalle} that there are constants $T$, $\varepsilon$ and $\tau$, independent of $x\in M$ and $\lambda\in (0,1]$, such that
$$ \lambda (C_{\lambda,x})^{-1} \leqslant \lambda \left( T+2+\frac{(T+2)\exp\big(-\lambda \tau\varepsilon\big)}{1-\exp\big(-\lambda \tau\varepsilon\big)} \right).$$
The right-hand side is bounded as $\lambda \to 0$ and, since $1-\exp\big(-\lambda \tau\varepsilon\big) = \lambda\tau\varepsilon + O(\lambda^2)$, converges to $(T+2)(\tau\varepsilon)^{-1}$.
\end{proof}
\begin{prop}\label{mesure-lim}
The measures of the family $(\mu_x^\lambda)_{\lambda\in (0,1]}$ have support included in a common compact subset of $TM$. Hence it is a relatively compact family in $\mathbb P(TM)$. Moreover if $\lambda_n \to 0$ is such that $(\mu_x^{\lambda_n})_{n\in \mathbb N}$ converges to $\mu$ then $\mu \in\mathbb P_{\mathrm{min}}$ is a Mather measure.
\end{prop}
\begin{proof}
The first part of the Proposition is a direct consequence of Proposition \ref{equiLip}.
We turn to the second part of the Proposition.
First note that $\lim\limits_{\lambda\to 0}C_{\lambda,x} = 0$. Indeed,
$$(C_{\lambda,x})^{-1} = \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds\geqslant \int_{-\infty}^0 \exp\left(\lambda \|\alpha\|_\infty s \right)ds=\frac{1}{\lambda\|\alpha\|_\infty}.$$
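This lower bound uses the explicit form of $A_{\gamma_{\lambda,x}}$ together with the boundedness of $\alpha\geqslant 0$: for all $s\leqslant 0$,
$$A_{\gamma_{\lambda,x}}(s) = -\int_s^0 \alpha\big(\gamma_{\lambda,x}(h)\big)\,dh \geqslant \|\alpha\|_\infty\, s,$$
hence $\exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\geqslant \exp\left(\lambda \|\alpha\|_\infty s \right)$ under the integral.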
{\bf The measure $\mu$ is closed:}
Let $f : M\to \mathbb R$ be a $C^1$ function. One computes, integrating by parts:
\begin{align*}
\int_{TM} D_xf(v) d\mu_{x}^\lambda(x,v) &=C_{\lambda,x} \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)D_{\gamma_{\lambda,x}(s)}f\big(\dot\gamma_{\lambda,x}(s)\big) ds \\
&= C_{\lambda,x} \Big[ \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) f\big( \gamma_{\lambda,x}(s) \big) \Big]_{-\infty}^0 \\
& \qquad - C_{\lambda,x} \int_{-\infty}^0 \frac{d}{ds} \Big(\exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \Big)f\big( \gamma_{\lambda,x}(s) \big)ds.
\end{align*}
The first term goes to $0$ as $\lim\limits_{\lambda\to 0}C_{\lambda,x} = 0$ and the function $s\mapsto \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) f\big( \gamma_{\lambda,x}(s) \big)$ is bounded.
To handle the second term we use that
$$ \frac{d}{ds} \Big(\exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\Big)=\lambda \alpha\big(\gamma_{\lambda,x}(s)\big) \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \geqslant 0.$$
Hence
\begin{align*}
\left| \int_{-\infty}^0 \frac{d}{ds} \Big(\exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \Big)f\big( \gamma_{\lambda,x}(s) \big)ds \right| & \leqslant
\int_{-\infty}^0 \lambda \alpha\big(\gamma_{\lambda,x}(s)\big) \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \big|f\big( \gamma_{\lambda,x}(s) \big)\big|ds \\
&\leqslant \lambda \|\alpha\|_\infty \|f\|_\infty \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) ds\\
&=\frac{ \lambda \|\alpha\|_\infty \|f\|_\infty}{C_{\lambda,x}} .
\end{align*}
It follows that
$$\lim_{\lambda\to 0} C_{\lambda,x} \int_{-\infty}^0 \frac{d}{ds} \Big(\exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \Big)f\big( \gamma_{\lambda,x}(s) \big)ds = 0.$$
Finally, taking the limit along the subsequence $\lambda_n$, we conclude that $\int_{TM} D_xf(v) d\mu(x,v)=0$ hence $\mu $ is closed.
\textbf{The measure $\mu$ is minimizing: } indeed let us start from the equalities coming from Theorem \ref{representation-l}:
\begin{align*}
\int_{TM} L(x,v) d\mu_x^\lambda(x,v) & = C_{\lambda,x} \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)L\big(\gamma_{\lambda,x}(s),\dot\gamma_{\lambda,x}(s)\big) ds \\
&=C_{\lambda,x}u_\lambda(x) -\int_{TM}c_H d\mu_x^\lambda.
\end{align*}
Once more, the first term converges to $0$, since $C_{\lambda,x}\to 0$ as $\lambda \to 0$ and the functions $(u_\lambda)_{\lambda\in (0,1]}$ are equibounded. Passing to the limit along the subsequence $\lambda_n$, we conclude that $\int_{TM}L(x,v)d\mu = -c_H$, as announced.
\end{proof}
\begin{lemma}\label{ineglambda}
Let $w$ be any critical subsolution. For any $\lambda \in (0,1]$ and $x\in M$
$$u_\lambda(x) \geqslant w(x) -\frac{\lambda}{C_{\lambda,x}}\int_{TM} \alpha(y)w(y)d\mu_\lambda^x(y,v).$$
\end{lemma}
\begin{proof}
Let $\varepsilon>0$ and let $w_\varepsilon \in C^1(M,\mathbb R)$ be given by Proposition \ref{approx}, such that $\|w-w_\varepsilon\|_\infty \leqslant \varepsilon$ and $H(y, D_yw_\varepsilon) \leqslant c_H+\varepsilon$ for all $y\in M$.
Let $t>0$. By Theorem \ref{representation} and the Fenchel inequality (Proposition \ref{fenchel}),
\begin{align*}
u_\lambda(x) &= \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) u_\lambda\big(\gamma_{\lambda,x}(-t)\big)+\int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\left[ L\big(\gamma_{\lambda,x}(s),\dot\gamma_{\lambda,x}(s)\big)+c_H\right] ds
\\
&\geqslant \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) u_\lambda\big(\gamma_{\lambda,x}(-t)\big) \\
&\qquad +\int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\left[ D_{\gamma_{\lambda,x}(s)}w_\varepsilon \big( \dot\gamma_{\lambda,x}(s)\big)- H\big(\gamma_{\lambda,x}(s),D_{\gamma_{\lambda,x}(s)}w_\varepsilon\big)+c_H\right] ds
\\
&\geqslant \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) u_\lambda\big(\gamma_{\lambda,x}(-t)\big) \\
&\qquad +\int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)\left[ D_{\gamma_{\lambda,x}(s)}w_\varepsilon \big( \dot\gamma_{\lambda,x}(s)\big) \right] ds
-\varepsilon \int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds
\\
&=w_\varepsilon(x) -\int_{-t}^0 \frac{d}{ds} \left[ \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \right]w_\varepsilon \big(\gamma_{\lambda,x}(s)\big)ds \\
&\qquad+ \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) \big[u_\lambda\big(\gamma_{\lambda,x}(-t)\big)-w_\varepsilon\big(\gamma_{\lambda,x}(-t)\big) \big]
-\varepsilon \int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds \\
&=w_\varepsilon(x) -\int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \lambda \alpha \big(\gamma_{\lambda,x}(s)\big)w_\varepsilon \big(\gamma_{\lambda,x}(s)\big)ds \\
&\qquad+ \exp\left(\lambda A_{\gamma_{\lambda,x}} (-t) \right) \big[u_\lambda\big(\gamma_{\lambda,x}(-t)\big)-w_\varepsilon\big(\gamma_{\lambda,x}(-t)\big) \big]
-\varepsilon \int_{-t}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds .
\end{align*}
Sending $t\to +\infty$ and recalling Lemmas \ref{courbe-aub} and \ref{chepalle} we obtain that
$$u_\lambda(x)\geqslant
w_\varepsilon(x) -\int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \lambda \alpha \big(\gamma_{\lambda,x}(s)\big)w_\varepsilon \big(\gamma_{\lambda,x}(s)\big)ds
-\varepsilon \int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right)ds .
$$
Finally, letting $\varepsilon \to 0$ yields
\begin{multline*}
u_\lambda(x)\geqslant
w(x) -\int_{-\infty}^0 \exp\left(\lambda A_{\gamma_{\lambda,x}} (s) \right) \lambda \alpha \big(\gamma_{\lambda,x}(s)\big)w\big(\gamma_{\lambda,x}(s)\big)ds \\
=w(x) -\frac{\lambda}{C_{\lambda,x}}\int_{TM} \alpha(y)w(y)d\mu_\lambda^x(y,v).
\end{multline*}
\end{proof}
We can now complete the proof of Theorem \ref{main}:
\begin{proof}[End of the proof of Theorem \ref{main}]
Recall that we have set $\S_0$ to be the set of critical subsolutions $u$ to \eqref{HJ-crit} verifying the constraint \eqref{constraint}:
$$\forall \mu\in \mathbb P_{\mathrm{min}}, \quad \int_{TM} \alpha(x)u(x) d\mu(x,v) \leqslant 0.$$
We then have defined $u_0(x) = \sup\limits_{u\in \S_0} u(x)$.
Let $(\lambda_n)_{n\in \mathbb N}$ be a decreasing sequence converging to $0$ such that $u_{\lambda_n}$ converges uniformly to a function $v_0$. We have established that $v_0\leqslant u_0$. Let us establish the reverse inequality.
Let $x\in M$. Up to taking a subsequence, we may assume that the family of measures $\mu_{\lambda_n}^x$ converges to a measure $\mu_x$ that is a Mather measure thanks to Proposition \ref{mesure-lim}. Let $w\in \S_0$. We know that $\lim\limits_{n\to +\infty } \int_{TM}\alpha(y) w(y)d \mu_{\lambda_n}^x(y,v) = \int_{TM} \alpha(y)w(y)d \mu_x \leqslant 0$, and thanks to Lemma \ref{pallebis} we infer that
$$\limsup_{n\to +\infty}\frac{\lambda_n}{C_{\lambda_n,x}}\int_{TM} \alpha(y)w(y)d\mu_{\lambda_n}^x(y,v) \leqslant 0.$$
Combining with Lemma \ref{ineglambda} we deduce that
$$v_0(x) \geqslant w(x) - \limsup_{n\to +\infty}\frac{\lambda_n}{C_{\lambda_n,x}}\int_{TM} \alpha(y)w(y)d\mu_{\lambda_n}^x(y,v)\geqslant w(x).$$
As the above inequality holds for all $w\in \S_0$, it follows that $v_0(x) \geqslant u_0(x)$, which concludes the proof since this is true for all $x\in M$.
\end{proof}
\section{An alternate formula for $u_0$}\label{formula}
We now establish another formula for the limit function $u_0$. Again, this closely follows Section 4 of \cite{DFIZ}.
\begin{df}\rm \label{limit2}
We define the function $\hat u_0 : M\to \mathbb R$ by
\begin{equation}\label{limitbis}
\forall x\in M, \quad \hat u_0(x) = \min_{\mu \in \mathbb P_{\mathrm{min}}} \frac{\int_{TM} \alpha(y)h(y,x) d\mu(y,v)}{\int_{TM} \alpha(y) d\mu(y,v)},
\end{equation}
where $h$ is the Peierls barrier (Definition \ref{Peierl}).
\end{df}
We aim at proving that $u_0 = \hat u_0$.
\begin{lemma}\label{critsub}
The function $\hat u_0$ is a critical subsolution.
\end{lemma}
\begin{proof}
Since $h$ is bounded and every $\mu\in \mathbb P_{\mathrm{min}}$ is supported on $\alpha^{-1}\big( (0,+\infty)\big)$ (Proposition \ref{inclusion}), the function $\hat u_0$ is well defined.
By Proposition \ref{proph} each function $h_y = h(y,\cdot)$ is a critical solution. Hence if $\mu\in \mathbb P_{\mathrm{min}} $ the function
$$h_\mu : x\mapsto \frac{\int_{TM} \alpha(y)h(y,x) d\mu(y,v)}{\int_{TM} \alpha(y) d\mu(y,v)},$$
is itself a critical subsolution, being a convex combination of critical subsolutions (Proposition \ref{viscositygen}).
Finally, again thanks to Proposition \ref{viscositygen}, $\hat u_0$ is a critical subsolution as a pointwise minimum of critical subsolutions (that are automatically equiLipschitz in this case).
\end{proof}
\begin{lemma}\label{inegu0}
For all $x\in M$, $u_0(x) \leqslant \hat u_0(x)$.
\end{lemma}
\begin{proof}
Let $x\in M$ and $\mu \in \mathbb P_{\mathrm{min}}$. Recall that, by Proposition \ref{constraints},
$$\int_{TM}\alpha(y) u_0(y) d\mu (y,v)\leqslant 0.$$
Moreover, by Proposition \ref{proph},
$$\forall y\in M, \quad u_0(x) \leqslant u_0(y) +h(y,x).$$
Multiply the above inequality by $\alpha(y)$ and integrate with respect to $\mu$ to obtain
\begin{align*}
u_0(x)\int_{TM} \alpha(y) d\mu(y,v)& \leqslant \int_{TM}\alpha(y) u_0(y) d\mu(y,v) + \int_{TM} \alpha(y) h(y, x)d\mu(y,v)\\
&\leqslant \int_{TM} \alpha(y) h(y, x)d\mu(y,v).
\end{align*}
Dividing by $\int_{TM} \alpha(y) d\mu(y,v)$ yields $u_0(x)\leqslant h_\mu (x)$ and taking the minimum over all $\mu \in \mathbb P_{\mathrm{min}}$ proves the lemma.
\end{proof}
\begin{Th}\label{egaliteu0}
For all $x\in M$, $ \hat u_0(x) = u_0(x)$.
\end{Th}
\begin{proof}
Let $y\in M$ and recall that the function $-h(\cdot , y)$ is a critical subsolution by Proposition \ref{proph}.
Clearly the function $w : M\to \mathbb R$ defined by
\begin{align*}
\forall x\in M, \quad w(x) &= -h(x,y) -\max_{\mu \in \mathbb P_{\mathrm{min}}}\frac{ \int_{TM} -\alpha(z) h(z, y)d\mu(z,v)}{\int_{TM} \alpha(z) d\mu(z,v) }\\
&= -h(x,y) +\min_{\mu \in \mathbb P_{\mathrm{min}}}\frac{ \int_{TM} \alpha(z) h(z, y)d\mu(z,v)}{\int_{TM} \alpha(z) d\mu(z,v) },
\end{align*}
is a critical subsolution verifying the constraint \eqref{constraint}: $w\in \S_0$.
It follows from the definition of $u_0$ that $u_0\geqslant w$.
Assume now that $y\in \mathcal{A}$. Evaluating at $x=y$ and recalling that $h(y,y)= 0$ by Definition \ref{aubrydef}, we obtain that
$$u_0(y) \geqslant \min_{\mu \in \mathbb P_{\mathrm{min}}}\frac{ \int_{TM} \alpha(z) h(z, y)d\mu(z,v)}{\int_{TM} \alpha(z) d\mu(z,v) }=\hat u_0(y).$$
As $u_0$ is a critical solution (hence a supersolution) and the inequality holds for all $y\in \mathcal{A}$, we conclude from Theorem \ref{compA} that the inequality $u_0\geqslant \hat u_0$ holds on all of $M$; together with Lemma \ref{inegu0}, this concludes the proof.
\end{proof}
As a last result we come back to one of the easy cases treated at the beginning of this section:
\begin{prop}
Assume that constant functions are critical subsolutions. Then
$$\forall x\in M,\quad u_0(x)=\min_{y\in \mathcal{A}} h(y,x).$$
\end{prop}
\begin{proof}
As constant functions are critical subsolutions, by Proposition \ref{proph}, we get that the Peierls barrier is nonnegative: $h\geqslant 0$.
Let us set $v : x\mapsto \min\limits_{y\in \mathcal{A}} h(y,x)$. By Proposition \ref{proph}, $v$ is the pointwise minimum of the critical solutions $h_y$, $y\in \mathcal{A}$. It follows from Proposition \ref{viscositygen} that $v$ is itself a critical solution.
As $h\geqslant 0$, each critical solution $h_y$ is also a supersolution of \eqref{HJ-l}. It follows from the comparison principle (Proposition \ref{comp-l}) and from the proof of Proposition \ref{bb1} that $0\leqslant u_\lambda \leqslant h_y$ for all $y\in M$. In particular, $0\leqslant u_\lambda \leqslant v$. Passing to the limit we obtain $0\leqslant u_0\leqslant v$.
To conclude we notice that if $y\in \mathcal{A}$, then $0\leqslant v(y)\leqslant h(y,y)= 0$. Hence both $u_0$ and $v$ are critical solutions that vanish on $\mathcal{A}$. By Theorem \ref{compA}, they are equal.
\end{proof}
As a concluding remark, it is interesting to notice that in this very particular case (when constants are critical subsolutions), the limiting function is actually independent of the function $\alpha$. Of course this is not true in general.
\section{Introduction}
Molecules are often used as tracers of the physical conditions of many astrophysical environments. Their presence and abundance, as well as their internal states, give strong constraints on the temperature, density, and also dynamics of the observed media. Observations are almost always supported by models, and as soon as the abundance of a new molecular species can be measured, new or improved chemical pathways are implemented in these models. Gas phase chemistry can explain the majority of the observed molecular abundances, but there are cases where interstellar dust grains are indispensable to astrochemistry. This is the case for H$_2$ formation (Gould \& Salpeter 1963; Hollenbach \& Salpeter 1970), and for a long time, solid state chemistry has been thought to be the solution to the open questions gas phase chemistry could not address. For example, it has been proposed that hydrogenation, leading to saturation of species (i.e., C $\rightarrow$ CH$_4$, O $\rightarrow$ H$_2$O, CO $\rightarrow$ CH$_3$OH), occurs on dust grains and subsequently is at the origin of part of the molecular complexity observed in space (Charnley et al. 2000).
Most astrochemical models that include solid-state chemistry (Hasegawa \& Herbst 1992; Charnley et al. 2000; Caselli et al. 2002; Cuppen \& Herbst 2007; Cazaux et al. 2010; Vasyunin et al. 2013; Reboussin et al. 2014 and references therein) refer to the article by Tielens \& Hagen (1982), a watershed paper that proposed diffusion-controlled reactions and accretion/desorption processes as the cornerstone of solid state chemistry.
Atoms and molecules can adsorb on a surface through two kinds of bonding (chemisorption or physisorption) depending on the physical-chemical properties of the surface (temperature and chemical structure) and of the adparticle, i.e., its energy and electronic structure (Oura et al. 2003).
At low surface temperatures where icy mantles can grow, physisorption processes involving Van der Waals interactions (10-400 meV) (Buckingham et al. 1988) are dominant. Physisorption concerns low surface temperatures ($<$ 100~K) and low particle energies ($<$ 50~meV), which are physical conditions that make physisorption hard to study experimentally and theoretically. Nevertheless in the last decade, an ever-growing number of works have dealt with a systematic study of the physics and chemistry of physisorbed species on cold non-metallic surfaces, namely graphite, silicates, and water ices (Pirronello et al. 1997; Ioppolo et al. 2008; Fillion et al. 2009; Goumans et al. 2010; Watanabe et al. 2010; Ward et al. 2011; Karssemeijer et al. 2012, 2014; Congiu et al. 2012; Thrower et al. 2014, and references therein).
Both astrochemical models and experimental studies reveal that solid state chemistry is governed by a desorption-diffusion competition.
In \figurename~\ref{fig:fig00} we show a sketch representing the desorption and diffusion processes and a diagram of the desorption energy E$_{des}$ and the diffusion energy E$_{dif}$ potentials.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{Fig00.png}
\caption{\textit{\textbf{($\alpha$)}}: Schematic diagram of a substrate (black circles) with lattice constant d. \textit{\textbf{($\beta$)}}: potential energy of adparticles on a surface in the x direction. \textit{\textbf{($\gamma$)}}: potential energy of adparticles on a surface along the z direction. Particle (a) represents an adparticle in an adsorption site, particle (b) is in a transition state between two adsorption sites, and particle (c) represents a species in gas phase.} \label{fig:fig00}
\end{figure}
If the desorption mechanism dominates, physisorbed reactive partners cannot increase the molecular complexity. Conversely, if diffusion mechanisms are preponderant, mobile atoms will be able to scan the surface and affect the abundance and variety of the species eventually created.
Diffusion and desorption barriers can be studied independently in the case of stable species (Mispelaer et al. 2013; Collings et al. 2004; Bisschop et al. 2006; Noble et al. 2012), but that is not the case for reactive species (i.e., atoms or radicals). In fact, the intrinsic mobility and reactivity of atoms govern the chemistry, but at the same time they hinder any direct measurement of the adsorption energy of atoms.
Experimental studies of reactive species were initially focused on the diffusion-desorption of H atoms (Manic\`o et al. 2001; Hornek{\ae}r et al. 2003; Amiaud et al. 2007). These works led to different results. The ambiguity probably originated in the use of an experimental technique (temperature-programmed desorption, TPD) that affects the mobility of atoms. For this reason, Matar et al. (2008) subsequently used the ability of H atoms to diffuse and react with a probe molecule (O$_2$) to study H mobility and obtain an effective diffusion barrier. In more sophisticated experiments, Hama et al. (2012) forced the desorption of H atoms by laser pulses and measured their residence time on different substrates to derive H mobility.
More recently, the diffusion of O atoms was investigated on different substrates experimentally and theoretically at low ($\sim$~6~K) (Minissale et al. 2013; Minissale et al. 2014a; Congiu et al. 2014; Lee\&Meuwly 2014) and high surface temperatures (50~K) (Minissale et al. 2015a).
Simultaneously, two experimental groups (Ward et al. 2011; Kimber et al. 2014; He et al. 2015) proposed a rather high value for the adsorption energy of O atoms (i.e., 1500--1800~K) on different substrates, consistently with theoretical estimations by Bergeron et al. (2008). On the other hand, no experimental data have existed so far that cover N-atom desorption and diffusion barriers.
The aim of this article is twofold: first, we propose an original method of directly measuring the adsorption energy of reactive species; second, we provide experimental values of the adsorption energies of O and N atoms on two surface analogues of astrophysical interest, i.e., compact amorphous solid water (ASW) ice and oxidized graphite. The ASW template mimics ice-coated grains in clouds with $A_v>3$, whereas oxidized graphite simulates grain surface conditions in thin clouds ($A_v<3$).
\section{Experimental}
Experiments were performed using the FORMOLISM set-up (Congiu et al. 2012), an ultra-high vacuum chamber (passivated for O atoms) coupled to a triply differentially pumped atomic beam aimed at a temperature-controlled sample holder. The substrate is a ZYA-grade HOPG (highly oriented pyrolytic graphite) slab, previously exposed to an O-atom beam (oxidized) to avoid surface changes during the experimental sequences.
Water ice films were grown on top of the HOPG substrate by spraying water vapour from a microchannel array doser located 2 cm in front of the surface. The water vapour was obtained from deionized water that had been purified by several freeze-pump-thaw cycles, carried out under vacuum. Amorphous solid water ice was dosed while the surface was held at a constant temperature of 110 K.
O(N) atoms were produced by dissociating O$_2$(N$_2$) gas in a quartz tube placed within a Surfatron cavity. The cavity can deliver a maximum microwave power of 200~W at 2.45~GHz. We calibrated the atomic/molecular fluxes as described in Amiaud et al. (2007) and Noble et al. (2012). We found $\phi_O$=(5$\pm$0.4)$\times$10$^{12}$ atoms cm$^{-2}$s$^{-1}$, $\phi_N$=(2$\pm$0.4)$\times$10$^{12}$ atoms cm$^{-2}$s$^{-1}$, and $\phi_{O_2, N_2}$=(3.0$\pm$0.3)$\times$10$^{12}$ molecules cm$^{-2}$s$^{-1}$.
The dissociated fraction $\mu$ is typically 75\% for O$_2$ and 30\% for N$_2$. We can study the electronic state composition of the beam particles by tuning the energy of the ionizing electrons of the QMS. This technique allows ground-state or excited atoms and molecules to be selectively detected, as described in Congiu et al. (2009). We determined that our oxygen beam did not contain O($^1$D) nor O$_2$(a$^1\Delta^{-}_{g}$) and was composed of only O($^3$P) and O$_2$(X$^3 \Sigma^{-}_{g}$) (Minissale et al. 2014a). The nitrogen beam did not contain N atoms in an excited state, while a fraction of N$_2$ molecules were found to be in ro-vibrationally excited states (Minissale et al. 2014c). However, this is not an issue because atoms will relax rapidly on the surface. The beam temperature is typically lower than 400 K both for O and N atoms.
\subsection{The TP-DED technique}
Experiments consisted in exposing the sample to the atomic beam while slowly reducing (17~mK/s) the sample temperature from 110~K to 10~K. The quadrupole mass spectrometer (QMS) was placed close to the surface and the species desorbing from the surface were measured continuously, as shown in \figurename~\ref{fig:fig0}.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{Fig0.png}
\caption{Schematic top and side views (photo) showing the QMS and the cold surface configuration. The DED and TP-DED methods are indicated by red arrows (incoming particles from the beam lines) and by the blue arrow, which represents the fraction of desorbing species that are detected during the exposure phase. Adapted from Minissale (2014).} \label{fig:fig0}
\end{figure}
Incoming atoms from the beam cannot enter the QMS head directly owing to the geometric configuration of our apparatus. Atoms can be scattered by the surface, desorb after thermalization, or react on the surface. The desorption flux is proportional to the mass signal measured in the experiments.
The method we introduce in this work is a combination of the King and Wells method, the TPD technique, and the during-exposure-desorption (DED) technique used to measure the chemical desorption efficiency (Dulieu et al. 2013; Minissale et al. 2014b). The resulting combined technique is temperature-programmed during-exposure desorption (TP-DED). The linearly decreasing temperature is of major importance: it guarantees that the surface is free of adsorbates at least in the high-temperature part of the experiment. In the low-temperature regime, by contrast, the surface density is less well constrained.
\subsection{Experimental results}
An example of TP-DED is shown in \figurename~\ref{fig:fig1}: the normalized QMS ion counts of mass 32 and mass 16 recorded during exposure of compact ASW to O atoms. Mass 32 and mass 16 correspond to O$_2$ and O detection, respectively. Because of cracking of molecules in the QMS head, the contribution of O$_2$ to mass 16 is weak and falls within the error bars. Signals are normalized with respect to the high-temperature-regime ion count.
\begin{figure}[t]
\centering
\includegraphics[width=8.6cm]{Fig1.png}
\caption{TP-DED normalized signals at mass 16 (red) and mass 32 (blue) from compact ASW ice during exposure to the O beam. TP-DEDs were performed with a ramp of $-17$~mK/s.} \label{fig:fig1}
\end{figure}
At high temperatures (above 68~K) we observe a plateau, which means that the number of atoms and molecules coming off the surface is constant. The plateau behaviour indicates that the desorption flux matches the incoming flux of particles. In other words, the residence time of O atoms on the surface is very short compared with the mean time between the arrivals of two atoms at the same adsorption site (about 300$\pm$25~s, Minissale et al. 2014a).
As long as the desorption rate is greater than the incoming flux, the accumulation of O atoms cannot occur. The desorption energy of O$_2$ molecules is considerably lower (Noble et al. 2012) than that of O atoms, therefore O$_2$ does not represent a high-coverage species at high temperatures.
During the TP-DED, the desorption rate decreases exponentially as the temperature is lowered. As soon as the accretion rate exceeds the desorption rate, the O-atom surface density increases. This partly compensates for the decrease in the desorption rate, but diffusion-reaction events then begin to dominate.
This is why we observe the drop in atom count. The O atoms no longer leave the surface, but instead diffuse and react.
This is corroborated by the spectra of mass 32, showing an increase in the O$_2$ signal between 68 and 54~K. The residence time of O$_2$ at 68~K on compact ice is about 25 ms (Noble et al. 2012). Therefore, at our experimental timescales ($\gg$~1~s), O$_2$ desorbs almost immediately after it forms. Below 54~K, the signal at mass 32 begins to go down. The decrease in O$_2$ signal may be due to desorption, but is more likely related to its transformation into O$_3$, which is an efficient process as long as the O$_2$ residence time is long enough to allow for O+O$_2$ encounters (Minissale et al. 2014a). Actually, O and O$_2$ may be simultaneously on the surface for an interval of about 1~K, corresponding to a length of time of one minute under our experimental conditions. That would lead to the formation of at most 0.12 layers (upper limit) of ozone if we assume that the O+O$_2$ efficiency is one.
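The residence times quoted above follow from the usual Arrhenius law for thermal desorption, $\tau(T)=\nu_0^{-1}\exp(E_{des}/T)$. The short Python sketch below illustrates this dependence; the attempt frequency and the O$_2$ binding energy are round-number assumptions (not the calibrated values of the references above), so only the relative growth of $\tau$ over a few kelvin, not the absolute times, should be taken at face value.

```python
import math

NU0 = 1e12           # attempt frequency (s^-1), standard assumed value
E_DES_O2 = 1200.0    # assumed O2 binding energy (K), order of magnitude only

def residence_time(temp_k, e_des=E_DES_O2, nu0=NU0):
    """Mean residence time before thermal desorption: 1 / (nu0 * exp(-E/T))."""
    return math.exp(e_des / temp_k) / nu0

# The exponential makes the residence time grow steeply as T drops:
for t in (68, 60, 54):
    print(f"T = {t} K  residence time ~ {residence_time(t):.2e} s")
```

With these assumed numbers, the residence time grows by roughly two orders of magnitude between 68 and 54~K, which is why O$_2$ switches from prompt desorption to staying on the surface long enough to react with O.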
At temperatures lower than 40~K, there is a second plateau for mass 16 that remains unchanged until the end of the cooling ramp at 10~K. The intensity of this low-temperature plateau lies between 10 and 15$\%$ of the normalized signal. A simple interpretation of this residual O-atom signal is that it is due to the fraction of atoms bouncing off the surface without sticking to it. This would indicate that the sticking coefficient of O atoms is higher than 85$\%$, which is consistent with what is expected. On one hand, sticking efficiencies of light species (H$_2$, HD, D$_2$) have been shown to depend clearly on their mass (Matar et al. 2010; Chaabouni et al. 2012); on the other hand, heavier molecules like water or O$_2$ have a sticking efficiency higher than 90$\%$ (Bisschop et al. 2006). It must be noted that our spectrometer is located close to the specular reflection angle opposite the beam line, so the scattered part of the beam is probably overestimated, as we have seen in the case of O$_2$ molecules. Nonetheless, assuming a conservative lower limit of 85$\%$ for the sticking efficiency of O atoms seems reasonable.
In the case of nitrogen, the presence of N$_2$ in the lower plateau region is probably of more importance because the desorption energy values of N$_2$ and N are rather close. The total exposure time between the two plateaus, which is where most of the information is found (e.g., atoms and molecules present on the surface at once), corresponds to an integrated exposure of around two layers.
Nevertheless, since most molecules desorb at the beginning of the signal drop, we can fairly assume that no more than a complete layer was formed on the surface. This explains our choice of an unusual temperature ramp: slow enough to ensure that a quasi-steady state was achieved at any time, but fast enough to ensure that no multilayer effects could occur.
\figurename~\ref{fig:fig2} displays the experimental and modeled TP-DED spectra obtained during irradiation of oxidized graphite with O atoms (panel $\alpha$) and of ASW with O atoms (panel $\beta$) and N atoms (panel $\gamma$). In each case, the trend of the experimental data follows the same scheme:
\begin{itemize}
\item a high-temperature plateau, i.e., desorption flux $=$ incoming beam of particles;
\item an intermediate region, where diffusion-reaction events consume atoms and the desorption flux is reduced with decreasing temperature;
\item a low-temperature plateau, where atoms no longer desorb from the surface (except for a small fraction bouncing off the surface without accommodating on it).
\end{itemize}
The differences in the three cases can be interpreted according to the position of high- and low-temperature plateaus. Experimental values of O atoms sent onto ASW show that the spectral features are shifted towards lower temperatures with respect to the results obtained on oxidized graphite. If we compare the curves of N and O obtained on ASW, we notice that the N plateaus come at lower temperatures.
\begin{figure*}[t]
\centering
\includegraphics[width=14cm]{Fig2b.png}
\caption{TP-DED normalized signal as a function of surface temperature (thermal ramp $= -17$~mK/s) for O atoms on oxidized graphite (panel $\alpha$), and for O atoms (panel $\beta$) and N atoms (panel $\gamma$) on compact ASW. The curves represent modeling results obtained with different combinations of E$_{des}$--E$_{dif}$ values, each treated as a free parameter.} \label{fig:fig2}
\end{figure*}
\section{Discussion}
It is possible to build a very simple model that reproduces the experimental curves, by computing the evolution of the atom population on the surface. The evolution of the surface density of species X is
\begin{equation}
\frac{dX}{dt}= \phi_X - X k_{X-des} - R(X,k_{X-dif})
\end{equation}
where $\phi_X$ is the flux of incoming atoms, $X$ is the surface density of atom X, and
$k_{X-des}$ is the desorption rate
\begin{equation}\label{eq:des}
k_{X-des}=\nu_0 e^{-E_{X-des}/k_bT_s}
\end{equation}
where $\nu_0$ is the vibration frequency typically assumed to be 10$^{12}$~s$^{-1}$, $T_s$ is the surface temperature, and E$_{X-des}$ represents the desorption energy. The resulting desorption flux $X\cdot k_{X-des}$ is proportional to the mass signal measured in the experiments.
Finally, R(X,k$_{X-dif}$) represents a series of terms accounting for reactions involving particle X (Minissale et al. 2014a, 2015a), where X can be O, O$_2$, or O$_3$, and k$_{X-dif}$ and E$_{X-dif}$ are the diffusion rate and diffusion energy of species X, respectively.
Since diffusion should be dominated by thermal hopping in the high-temperature range (30--70~K), we use an Arrhenius-type law (eq.~\ref{eq:des}) to model diffusion, with E$_{X-dif}$ in place of E$_{X-des}$. E$_{X-des}$ and E$_{X-dif}$ are the two free parameters of our model; the symbol X will be omitted whenever we do not refer to a specific species or when the species is clear from the context. As for O$_2$, O$_3$, and N$_2$, the diffusion and desorption energies were fixed; their values are shown in Table~\ref{tab:2}.
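To illustrate how the competition between the Arrhenius desorption rate (eq.~\ref{eq:des}) and diffusion-driven reactions produces the two plateaus, a minimal quasi-steady-state version of the model can be sketched in a few lines of Python. It keeps a single loss channel (O+O encounters via hopping) instead of the full reaction network R, and all numerical values (attempt frequency, site density, and the E$_{des}$--E$_{dif}$ pair) are illustrative assumptions chosen inside the acceptable bands, not the fitted values.

```python
import math

NU0 = 1e12          # attempt frequency (s^-1), standard assumed value
PHI = 5e12          # incoming O-atom flux (atoms cm^-2 s^-1)
N_SITES = 1e15      # adsorption site density (cm^-2), assumed
E_DES, E_DIF = 1500.0, 1000.0   # hypothetical barrier pair (K)

def rate(energy_k, temp_k):
    """Arrhenius rate (eq. 2): nu0 * exp(-E / T), energies in kelvin."""
    return NU0 * math.exp(-energy_k / temp_k)

def desorbing_fraction(temp_k):
    """Quasi-steady state of the rate equation with one loss channel:
    PHI = X*k_des + 2*k_dif*X**2/N_SITES, solved for X (positive root)."""
    k_des = rate(E_DES, temp_k)
    k_dif = rate(E_DIF, temp_k)
    a = 2.0 * k_dif / N_SITES
    b = k_des
    x = (-b + math.sqrt(b * b + 4.0 * a * PHI)) / (2.0 * a)
    return x * k_des / PHI   # fraction of the incoming flux that desorbs

if __name__ == "__main__":
    for t in (80, 70, 60, 50, 40):
        print(f"T = {t:2d} K  desorbing fraction = {desorbing_fraction(t):.3f}")
```

With these assumed barriers the desorbing fraction drops from close to 1 to nearly 0 over a couple of tens of kelvin, reproducing the qualitative shape of the TP-DED curves: a high-temperature plateau, a transition region, and a low-temperature plateau.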
\begin{table}[ht]
\centering
\caption{Diffusion and desorption energies on oxidized graphite and ASW ice for O$_2$, O$_3$, and N$_2$ used in the model. The E$_{dif}$/E$_{des}$ ratio was fixed to 0.7.}\label{tab:2}
\begin{tabular}{l | c | c}
\hline\hline
Species & Ox-Graph & ASW\\
& \multicolumn{2}{c}{ E$_{dif}$--E$_{des}$ (K)}\\
\hline
O$_2$& 890--1260$^{a,b}$ & 820--1160$^a$\\
O$_3$& 1470--2100$^b$ & 1260--1800$^c$\\
N$_2$& -- & 810--1150$^b$\\
\hline\hline
\end{tabular}
\scriptsize \\$^a$ Noble et al. 2012; $^b$ Minissale 2014; $^c$ Cuppen \& Herbst 2007.
\end{table}
The curves in \figurename~\ref{fig:fig2} represent our modeling results. We found that different pairs of E$_{des}$ and E$_{dif}$ lead to a good fit of the data. If E$_{des}$ is high, the plateaus tend to appear at high temperature, while if the E$_{des}$ value is reduced, the plateaus shift towards lower temperatures. E$_{dif}$ also affects the position of the plateaus, although more weakly: increasing E$_{dif}$ likewise shifts the plateaus towards high temperatures.
It is therefore possible to get a reasonable fit for several combinations of E$_{des}$ and E$_{dif}$.
\figurename~\ref{fig:fig3} shows the possible set of values of the E$_{des}$--E$_{dif}$ pairs that are able to reproduce our experimental values. In the E$_{des}$ {\it vs.} E$_{dif}$ graph, the coloured bands represent the regions where the
$\chi^2$ lies within 10$\%$ of its minimum. We discarded diffusion energies lower than 500~K for O atoms, since the thermal diffusion barrier should not fall below this value at low temperatures (Congiu et al. 2014). In the upper region of the bands, on the other hand, high values are systematically eliminated by the filter that discards fits of poor quality.
The bands indicating the acceptable couples of solutions are fairly elongated and almost linear. The desorption barrier is relatively more constrained than the diffusion barrier. The slope of the bands is close to 2 for O atoms (blue and grey bands): for an increase in 1~K of desorption energy, the diffusion energy rises by 2~K. Overall, E$_{dif}$/E$_{des}$ ratios are between 0.5 and 0.9, as shown by the dotted lines in \figurename~\ref{fig:fig3}.
This is the first experimental evidence that the value E$_{dif}$/E$_{des} \sim 0.5$, which is usually adopted in models of astrochemistry (Caselli et al. 2002; Cazaux \& Tielens 2004; Cuppen \& Herbst 2007), is a plausible choice. Our experiments also rule out values of E$_{dif}$/E$_{des}$ below 0.5 or above 0.9.
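The band selection can be sketched as a grid scan that keeps every (E$_{des}$, E$_{dif}$) pair whose $\chi^2$ lies within 10\% of the global minimum; the $\chi^2$ function below is a purely hypothetical stand-in for the actual model-data comparison, shaped to mimic the slope-2 valley of the O-atom bands:

```python
import numpy as np

def acceptable_pairs(chi2, e_des_grid, e_dif_grid, tol=0.10):
    """Return all (E_des, E_dif) pairs whose chi^2 is within `tol` of the minimum."""
    values = np.array([[chi2(ed, es) for es in e_dif_grid] for ed in e_des_grid])
    cutoff = values.min() * (1.0 + tol)
    return [(e_des_grid[i], e_dif_grid[j])
            for i, j in np.argwhere(values <= cutoff)]

# Hypothetical chi^2 with an elongated valley of slope ~2 (E_dif rises by ~2 K
# per 1 K of E_des), mimicking the shape of the bands in Fig. 3.
chi2_toy = lambda e_des, e_dif: 1.0 + ((e_dif - 2.0 * (e_des - 1000.0)) / 200.0) ** 2

band = acceptable_pairs(chi2_toy,
                        np.arange(1200.0, 1601.0, 50.0),
                        np.arange(200.0, 1401.0, 50.0))
```

The retained pairs trace out an elongated, nearly linear band: each accepted E$_{des}$ pins E$_{dif}$ to a narrow interval, exactly the behaviour described above.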
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{Fig3.png}
\caption{Diffusion energies {\it vs.} desorption energies obtained for O and N atoms (blue and green bands, respectively) on ASW ice, and O atoms (grey band) obtained on oxidized graphite. Dotted lines represent E$_{dif}$/E$_{des}$~=~0.5, 0.7, and 0.9.
Open symbols represent previous estimates of E$_{des}$ for O atoms: 1350~K calculated by Bergeron et al. (2008) on pyrene; 1455~K found by Ward \& Price (2011) on C$_2$H$_2$-coated graphite; 1840~K measured by Kimber et al. (2014) on propyne ice; 1764~K found by He et al. (2014) on amorphous silicate; 1660~K and 1850~K measured by He et al. (2015) on porous water ice and amorphous silicate, respectively.} \label{fig:fig3}
\end{figure}
Table 2 gives some examples of diffusion-desorption energy combinations corresponding to E$_{dif}$/E$_{des}$ ratios of 0.5, 0.7, and 0.9 obtained for O and N on the two substrates investigated in this work.
One can adopt --- according to an educated guess or a measured value of either E$_{des}$ or E$_{dif}$ --- any of the couples satisfying the fits of \figurename~\ref{fig:fig3}. These are not to be considered true error bars, because once a value of either E$_{des}$ or E$_{dif}$ has been chosen, the other is well constrained.
\begin{table}[ht]
\centering
\caption{Diffusion and desorption barriers obtained for O and N atoms corresponding to E$_{dif}$/E$_{des}$ ratios =~0.5, 0.7, 0.9.}\label{tab:1}
\begin{tabular}{l|c|c|c}
\hline\hline
E$_{dif}$/E$_{des}$ & 0.5 & 0.7 & 0.9\\
\hline
&\multicolumn{3}{c}{ E$_{dif}$--E$_{des}$ (K)} \\
\hline
O on Ox-Graph & $690-1380$ & $1100-1580$ & $1675-1860$ \\
O on ASW & $625-1250$ & $990-1410$ & $1520-1700$ \\
N on ASW & $320-640$ & $525-720$ & $790-880$ \\
\hline\hline
\end{tabular}
\end{table}
An O diffusion barrier in the range $690 - 1100$~K (0.5$<$E$_{dif}$/E$_{des}$$<$0.7) is perfectly consistent with what we found previously (values between 600 and 900~K) to reproduce the efficiency of the H$_2$CO+O reaction (Minissale et al. 2015a). Also, desorption energies for O are consistent with previous estimations (Bergeron et al. 2008; Ward et al. 2011; Kimber et al. 2014; He et al. 2015); He et al. (2015) found 1850 $\pm$~90 K on amorphous silicates and 1660 $\pm$~60 K on porous water ice (see open-symbol points in \figurename~\ref{fig:fig3}).
The values of E$_{des}$ we derived for O atoms on oxidized graphite cover the interval of values proposed by Ward et al. (2011) (1455$\pm$72~K) and Kimber et al. (2014), obtained on an analogous sample coated with C$_2$H$_2$. The theoretical value of 1350~K obtained for physisorption of O on pyrene by Bergeron et al. (2008) is also compatible with our results. Since Bergeron et al. (2008), Ward et al. (2011), and Kimber et al. (2014) do not suggest an explicit value for the diffusion energy, the corresponding E$_{dif}$ values displayed in \figurename~\ref{fig:fig3} were drawn assuming an E$_{dif}$/E$_{des}$ ratio of 0.7.
The good agreement of our fits with estimates of other authors, though achieved on different substrates, demonstrates the reliability of the new direct method we present. Moreover, we provide the first evaluation of E$_{des}$ and E$_{dif}$ for N atoms; they are lower than the ones found for O atoms. In fact, the same difference between the E$_{des}$ and E$_{dif}$ of N and O atoms appears if one considers the desorption and diffusion barriers of the molecular species N$_2$ and O$_2$. Oxygen atoms are generally more bound to the substrate than O$_2$ molecules, indicating that the polarizability is not the only parameter that governs the adsorption energy. The high electronegativity of O atoms could perhaps explain that behavior. Another explanation could be that O($^3$P) atoms have unpaired electrons that generate a quadrupole moment. In contrast, molecular and atomic nitrogen seem to have adsorption energies in the same energy range.
Both substrates used in this work are amorphous and possess a complex distribution of adsorption and diffusion barriers. However, the technique we used implies that the experiments are carried out under conditions of very low surface coverage, where diffusion leads the atoms to occupy the minima on the surface potential. Therefore, the desorption energies derived here are very likely related to the deepest sites, which means that we probed the high-energy end of the barrier distribution. That our modeling results fit the data is a clear indication that choosing unique values for desorption and diffusion is reasonable in our case, notwithstanding the existence of an energy distribution.
In astrochemical models, the diffusion of species is very often directly and simply assumed to be linked to the desorption energy. Because all the chemistry is ruled by diffusion/reaction and diffusion/desorption competitions, this strong hypothesis has deep consequences. Such an assumption is also due to the poor knowledge of the diffusion of molecules on a complex surface. Recently, Karssemeijer \& Cuppen (2014) presented calculations of the diffusion of CO and CO$_2$ on water ice surfaces. They find that the E$_{dif}$/E$_{des}$ ratio lies within the 0.3--0.4 range, rather than between 0.5 and 0.9 as we found in this work. We cannot directly compare their calculations with our work because, firstly, the two studies investigate different systems and, secondly, atoms and molecules are likely to have different diffusion properties. Moreover, as we mentioned earlier, we probably probe only the high-value tail of the distribution of binding energies, and also the high end of the diffusion barrier distribution if the two energies are directly coupled.
However, we can narrow the range of the E$_{dif}$/E$_{des}$ ratio by using indirect constraints on O-atom diffusion, derived from our previous study of the H$_2$CO+O system (Minissale et al. 2015a). In this case, experiments were performed on a substrate held at 55~K, corresponding to a thermal-hopping-dominated regime. The substrate was ASW ice or oxidized graphite, coated with H$_2$CO ice. In the H$_2$CO+O system, the key point is not the desorption vs diffusion competition, but diffusion vs reactivity. We succeeded in reproducing our data using an O-diffusion barrier ranging from 600~K to 900~K. If we choose the mean value (750~K) of this interval and use it to constrain the results of the present paper, we obtain E$_{dif}$-E$_{des}$ =~750-1320~K for ASW ice and 750-1420~K for oxidized graphite. The two corresponding E$_{dif}$/E$_{des}$ ratios are 0.57 and 0.53, respectively. Therefore, in the absence of other experimental evidence, we conclude that the most likely value to assign to the E$_{dif}$/E$_{des}$ ratio is 0.55. By using this value, in the case of N atoms, we find E$_{dif}$-E$_{des}$ =~400-720~K.
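The quoted ratios follow from the 750~K mean barrier by simple arithmetic, and 0.55 is essentially their mean:

```python
e_dif = 750.0                 # K, mean O-diffusion barrier (Minissale et al. 2015a)
ratio_asw = e_dif / 1320.0    # O on ASW ice
ratio_graph = e_dif / 1420.0  # O on oxidized graphite

print(round(ratio_asw, 2), round(ratio_graph, 2))          # 0.57 0.53
print(round((ratio_asw + ratio_graph) / 2.0, 2))           # 0.55
```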
We realize that the diffusion/desorption energy ratio of 0.55 we derive does not come at the end of an explicit demonstration; it is a conclusion we draw from the analysis of a series of converging facts. We believe that more evidence will become available in the future, for example thanks to extended calculations, and our proposed value will then be put to the test to verify whether it is plausible as far as atoms are concerned. It must be noted, however, that the E$_{dif}$/E$_{des}$ =~0.55 we propose is close to the reference value of current astrochemical models. Our findings show that the assumptions made in the present codes are meaningful and reasonable, at least for the cases of O and N atoms.
\section{Conclusions}
In this work, we present a new method (TP-DED) of deriving desorption and diffusion energies of reactive species. We validated our method by modeling the experimental values and comparing our results to previous calculations and measurements of O-atom desorption energies. For the first time, we provide an estimation of N-atom diffusion and desorption energies on ASW ice. These findings are of major interest to astrophysicists, because astrochemical models simulating solid-state physics and chemistry on dust grain surfaces can implement the new values of diffusion-desorption energies of O and N atoms in their codes and perhaps improve our understanding of the observed interstellar abundances.
We discussed the empirical relation between diffusion and desorption barriers and confirmed experimentally that 0.5 is a reasonable choice for diffusion-desorption barrier ratios of atoms, showing that the initial guess by Tielens \& Hagen (1982) was prophetic. We have proposed a value of 0.55, although there may be cases where E$_{dif}$/E$_{des}$ ratios ranging between 0.5 and 0.9 are a more realistic choice for atoms. As for molecules, the 0.3--0.4 range derived from calculations by Karssemeijer \& Cuppen (2014) has no reason to be questioned, and we would retain it among the possible E$_{dif}$/E$_{des}$ values characterizing molecular species. Here we proposed the combination E$_{dif}$-E$_{des}$ =~400-720~K for N atoms on icy mantles.
The 400~K diffusion energy of nitrogen atoms may have important implications for interstellar chemistry. In fact, it is low enough to allow for fast mobility at temperatures around 15~K, corresponding to a diffusion rate of more than one hop per second. N atoms thus might be able to scan a large section of the grain surface before any other accretion event occurs. On the other hand, the 720~K desorption energy is high enough to allow for a long residence time in the same temperature range. While the N+N reaction is unlikely on dust grains, the most probable reactions involving N atoms are N+H $\rightarrow$ NH --- as was shown experimentally by Fedoseev et al. (2015) --- and N+H$_2$ (which should have an activation barrier). As a final speculation, we would like to emphasize that the synthesis of NH on bare grain surfaces should lead to a high percentage (about 45\%) of NH radicals released in the gas phase upon formation through the chemical desorption process (Minissale et al 2015b).
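Both statements can be checked with the Arrhenius law of eq. (\ref{eq:des}); the sketch below is only a back-of-the-envelope verification, using the standard $\nu_0=10^{12}$~s$^{-1}$ attempt frequency:

```python
import math

NU0 = 1.0e12  # attempt frequency (s^-1)

def rate(barrier_K, temperature_K):
    """Arrhenius hopping/desorption rate (s^-1) for a barrier in kelvin."""
    return NU0 * math.exp(-barrier_K / temperature_K)

hop = rate(400.0, 15.0)  # N-atom diffusion at 15 K: a few hops per second
des = rate(720.0, 15.0)  # N-atom desorption at 15 K: negligible -> long residence
```

The hop rate comes out at a few s$^{-1}$, while the desorption rate is below $10^{-8}$~s$^{-1}$, i.e. residence times of years, consistent with the scenario above.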
\begin{acknowledgements}
We acknowledge the support of the national PCMI programme founded by the CNRS, and DIM ACAV of the Conseil Regional d'Ile de France. MM acknowledges LASSIE Seventh Framework Programme ITN under Grant Agreement No. 238258. We thank our colleagues at the LERMA and ISMO laboratories for fruitful discussions.
\end{acknowledgements}
\section{Introduction}
In recent years impressive progress has been achieved in fabricating,
functionalizing and controlling molecular junctions for charge transfer
[\onlinecite{reviews,cuniberti,scheer}] and as nano-machines
[\onlinecite{browne}].
This includes advanced techniques to link molecular structures to metallic
electrodes and gates, to design molecular rectifiers or to induce rotary
molecular motion. Particularly fascinating is the interplay of charge transfer
and conformational dynamics.
Theoretically, density functional calculations (DFT) in combination with
non-equilibrium Green's function techniques as well as model based descriptions
in terms of master equations have elucidated fundamental aspects of single
molecule contacts.
A particular challenge is to capture electromechanical properties of these
devices where injected charges drive internal molecular degrees of freedom
such as vibrational or rotational modes far from equilibrium which in turn
back-act on the transport channels. A well-established procedure is to
determine energy surfaces for these modes from the conformation of the
molecular backbone in or at least very close to equilibrium. Since couplings
between molecules and electrodes are typically weak and relevant
intra-molecular excitations are fast compared to displacements of the
skeleton of the entire structure, this approach has been successfully
applied to explain a variety of experimental observations
[\onlinecite{scheer}].
What happens, however, far from equilibrium is much less understood.
To address this issue we consider in this paper a minimal system consisting of
a conjugated organic molecule with two sub-units the relative orientation of
which determines its transfer properties when contacted to electrodes. A
typical realization is e.g.\ a biphenyl compound with the dihedral angle
between the two benzene rings acting as conformational degree of freedom. This set-up
has received considerable attention in recent experiments, see
e.g.\ [\onlinecite{dadosh,venka,mis1,mis2}]. So far, however, only situations close
to equilibrium (low voltage regime) and for very weak electrode-molecule
couplings have been addressed. Theoretical work has revealed the role of
vibrational excitations [\onlinecite{thoss}] and dynamical symmetry breaking
[\onlinecite{grifoni}] within master equation approaches. For compounds with a
fixed torsion mode, DFT calculations have revealed microscopic details of the conductance properties
[\onlinecite{xue,xia,wang}].
Here we focus on the complementary regime of strong non-equilibrium with a
rotational mode that is not frozen, but may approach a voltage dependent
optimal configuration in steady state. This scenario requires
electrode-molecule couplings large compared to vibrational but small compared
to electronic excitations on the molecule (details are given below). Then, a
Born-Oppenheimer approximation (BO) applies which provides voltage
dependent energy surfaces. Notably, stable molecular conformations can be
controlled by bias and gate voltage and are directly related to a robust
quantization of current transfer. This opens the possibility for realizations
of molecular machines [\onlinecite{browne,selden}] as e.g. switches and valves.
For charge transfer through molecular contacts, BOs in steady state have been discussed previously in several contexts. In [\onlinecite{nitzan}] a self-consistent treatment of a contact with a single electronic level coupled to a single harmonic intramolecular degree of freedom revealed the appearance of hysteresis and negative differential conductance. Green's function methods have been used in conjunction with a BO
to extract effective potential surfaces for nearly harmonic vibronic modes in [\onlinecite{greens}].
However, both treatments exploit that the driven dynamics of a {\em harmonic} oscillator is analytically known.
In the sequel, by contrast, we develop a methodology that accommodates substantial anharmonicity of the intramolecular modes and/or Coulomb interaction between excess charges on several molecular electronic levels.
\section{Model}
\begin{figure}
\epsfig{file=molsketch.eps, width=7cm}
\caption{Sketch of the biphenyl molecule showing the rotational angle $\phi$.
Connection to the electrodes is established via the thiole groups.}\label{sketch}
\end{figure}
The biphenyl molecule (Fig.~\ref{sketch}) consists of two benzene rings which
can be twisted by an angle $\phi$. Each of the rings may carry one excess electron
and the tunnel coupling between the left ring (L) and the right ring (R) depends on
this torsion angle, which constitutes the dominant intramolecular mode with respect to
transport properties. The total molecule Hamiltonian then reads [\onlinecite{thoss}]
\begin{eqnarray}
\label{molecule}
H_0&=&-\frac{1}{2 I}\frac{\partial^2}{\partial\phi^2}+V(\phi)
+T(\phi)(d_L^\dagger d_R+d_R^\dagger d_L)
\nonumber\\
&&+E_c\, (n_L+n_R)+U_0\, n_L n_R+\frac{V_S}{2}(n_L-n_R)\, ,
\end{eqnarray}
where the first line describes the dynamics of the torsion angle and tunneling
between the rings, i.e.,
\begin{equation}
V(\phi)=V_0\cos^2(2\phi)\, , \ T(\phi)=T_0\cos\phi\ ,
\end{equation}
and the second line captures the charging energy $E_c$ for individual excess
charges on the molecule and the Coulomb repulsion $U_0$ for double occupancy,
respectively.
Following microscopic calculations
[\onlinecite{xue}] we also incorporate a Stark energy
shift $V_S$ of the molecular sites due to an external electric field. It turns out
that this Stark shift has substantial impact on the molecular conformation and
transport properties.
Further we have the local electron densities $n_\alpha=d_\alpha^\dagger d_\alpha,
\alpha=L, R $, where operators $d_L^\dagger$ ($d_R^\dagger$) create excess
electrons on the left (right) subunit.
Crucial for the charge transfer across this structure is the separation of
energy scales with a small height $V_0=0.05{\rm eV}$ for the rotational barrier
of the neutral molecule, a large hopping element $T_0=0.5{\rm eV}$, and a large
inertial moment $I=20000 {\rm eV}^{-1}$ [\onlinecite{ssp1,ssp2}]. As it is
typical for conjugated organic molecules, orbitals are thus delocalized
throughout the molecular backbone. Accordingly, it is appropriate to introduce
the molecule's charge energy eigenstates as linear combinations of the left
and right localized states:
$d_+^\dagger=\cos\theta d^\dagger_L+\sin\theta d^\dagger_R$
and $d_-^\dagger=-\sin\theta d^\dagger_L+\cos\theta d^\dagger_R$.
The mixing angle $\theta\in[-\pi/2,\pi/2]$ depends on the torsional degree of freedom
\begin{equation}
\theta(\phi)=\arctan\left(\frac{T(\phi)}{\lambda(\phi)+\frac{V_S}{2}}\right)\,,
\label{thetaphi}
\end{equation}
with the eigenenergies of the one-electron Hamiltonian
$\lambda(\phi)=\sqrt{T(\phi)^2+V_S^2/4}$.
As one can see, for vanishing Stark shift we have $\theta=\pm\pi/4$
(the sign of $\theta$ jumps where the hopping matrix element crosses
zero, i.e.\ at $\phi=\pi/2 \ {\rm mod}\, \pi$), and the energy eigenstates become
the symmetric and antisymmetric combinations
of localized states, which couple symmetrically (except for a sign) to the right
and left leads. For large Stark shift we instead get $\theta=0$, and the
eigenstates become the localized ones, each of which couples to a single lead only.
In eigenstate representation the molecule Hamiltonian is
\begin{eqnarray}
H_0&=&-\frac{1}{2 I}\frac{\partial^2}{\partial\phi^2}+E_0(\phi)\,
|0\rangle\langle 0|+E_+(\phi)\, |+\rangle\langle +|\nonumber\\
&&+E_-(\phi)\, |-\rangle\langle -|+E_D(\phi)\, |D\rangle\langle D|\, .
\end{eqnarray}
Here, $|0\rangle$ denotes the neutral molecule, $|\pm\rangle$ the two
one-particle eigenstates, and $|D\rangle$ is the doubly occupied state.
The torsional dependent energies of these states are given by:
$E_0(\phi)=V(\phi)$, $E_\pm(\phi)=V(\phi)\pm \lambda(\phi)+E_c$, and
$E_D(\phi)=V(\phi)+2E_c+U_0$.
The molecular junction is now modeled by the Hamiltonian $H=H_0+H_I+H_L+H_R$
with $H_\alpha = \sum_k (\epsilon_{k\alpha} -\mu_\alpha) c_{k\alpha}^\dagger
c_{k\alpha}, \alpha\in\{L,R\}$ describing left (L) and right (R) lead,
respectively, as reservoirs of non-interacting quasi-particles with creation
(annihilation) operators $c_{k\alpha}^\dagger$ ($c_{k\alpha}$). The chemical
potentials $\mu_{L/R}=V_g\pm V_b/2$ are fixed by the bias voltage $V_b$ and the
gate voltage $V_g$. The Stark-shift varies with the bias according to
$V_S=\kappa\, V_b$ with a parameter $\kappa<1$. Coupling of the left (right)
lead to the left (right) ring is described by
\begin{eqnarray}
\label{coupling}
H_I&=&\gamma_L d_L^\dagger \psi_{L}+\gamma_R d_R^\dagger \psi_{R}+h.c.\nonumber\\
&=&\gamma_L(\cos\theta d_+^\dagger-\sin\theta d_-^\dagger)\psi_{L}
\nonumber\\
&&+\gamma_R(\sin\theta d_+^\dagger+\cos\theta d_-^\dagger)
\psi_{R}+h.c.\,
\end{eqnarray}
with $\psi_\alpha=\sum_k c_{\alpha, k}$.
Charge transfer into/out of the molecule is determined by rates
$\Gamma_\alpha=D_\alpha \gamma_\alpha^2/2\hbar$ where electrodes are taken in the
wide band limit with a density of states $D_\alpha$. Evidently, the coupling of the molecular states depends on the mixing angle $\theta$, which itself depends sensitively on the Stark field according to (\ref{thetaphi}). For instance,
the coupling of the $|+\rangle$ state to the left (right) lead is given by
$2\Gamma_L\cos^2(\theta)\quad [2\Gamma_R\sin^2(\theta)]$. As shown in
Fig.~\ref{coupfig}, for $\Gamma_L=\Gamma_R\equiv \Gamma$ these couplings
are identical only for vanishing electric field, i.e.\ $V_S=0$ and $\theta(\phi)=\pi/4$. For finite values of $V_S$ this symmetry is immediately broken, e.g.\ around $\phi=\pi/2 -\delta$ with small deviations $\delta$, one has
$\theta(\pi/2-\delta)\approx (T_0/V_S)\, \delta$ so that $2\Gamma \cos^2\theta\approx 2\Gamma\gg
2\Gamma\sin^2 \theta\approx 2\Gamma (T_0 \delta/V_S)^2$.
\begin{figure}
\epsfig{file=coupling.ps, angle=270, width=7cm}
\caption{Franck-Condon factor $\cos^2\theta(\phi)$ as a function of $\phi$ and
$V_S$. As described in the text, a finite electric field breaks the symmetry
of the one-electron eigenstates ($\theta(\phi)=\pi/4$) and induces a preference
for angles $\phi\approx\pi/2$ in the coupled system.}\label{coupfig}
\end{figure}
In the remainder we are particularly interested in the steady state current
$\langle I_\alpha\rangle = e {\rm Tr}\{W_0\dot N_\alpha({t\to \infty})\}
=e {\rm Tr}\{N_\alpha\dot{W}({t\to \infty})\}$ where
$N_\alpha=\sum_k c_{\alpha, k}^\dagger c_{\alpha, k}$ is the number operator in
lead $\alpha$ and $W(t)$ is the density operator of the junction.
With electrodes residing in thermal equilibrium the reduced density operator
$\rho(t)= {\rm Tr}_{\rm Leads}\{W(t)\}$ provides the relevant transport
information. A standard procedure is to derive from the Liouville-von Neumann
equation $\dot{\rho}(t)=(-i/\hbar){\rm Tr}_{\rm Leads}[H,W(t)]$ a master
equation by treating $H_I$ as a perturbation (weak coupling).
This way, upon employing the usual Born-Markov approximation one arrives at
$\dot{\rho}(t)=(-i/\hbar)[H_0,\rho(t)]+\hat{R}\rho(t)$ with the Redfield
tensor[\onlinecite{weiss,mitra}]
\begin{equation}
\hat{R}\rho(t)=-\frac{1}{\hbar^2}\int_0^\infty{\rm d}\tau{\rm Tr}_{\rm Leads}
[H_I,[H_I(-\tau),W(t)]]\,.
\label{redfield}
\end{equation}
Positive definite steady-state solutions are only obtained if a secular
approximation is applied to (\ref{redfield}) so that couplings between
populations (diagonal elements) and coherences (off-diagonal elements) of
{\em non-degenerate} states are dropped [\onlinecite{semigroups}]. The
corresponding Redfield tensor $R$ determines the steady state density due to
$R\rho_{\rm st}=0$. Further, from (\ref{redfield}) one has
\begin{equation}
\langle I_\alpha\rangle=-\frac{1}{\hbar}\lim_{t\to \infty}\int_0^\infty{\rm d}\tau
{\rm Tr}\{W(t)[I_\alpha,H_I(-\tau)]\}\label{ieq}
\end{equation}
with the current operator $I_\alpha=(i e/\hbar)[N_\alpha,H_I]=
(i e\gamma_\alpha/\hbar)
\sum_k( c_{k\alpha}^\dagger d_\alpha - d_\alpha^\dagger c_{k\alpha})$. Of
course, in steady state $I_R=-I_L$.
\section{Steady state observables}
For low voltages ($|V_b|, |V_g|<V_0$) and very
weak couplings, transport properties are restricted to the low energy sector.
Thus a representation of $\rho(t)$ in the basis $\{|0, m\rangle, |-, n\rangle\}$
is appropriate, with vibrational modes $\{|m\rangle, |n\rangle\}$ probing only
the vicinity of $\phi=0$ on surfaces $E_0(\phi), E_-(\phi)$, respectively
[\onlinecite{thoss,grifoni}] (see Fig.~\ref{eeff}).
In the regime of higher voltages, however, this scheme
poses severe difficulties. On surfaces $E_0, E_\pm$ the density of torsional
eigenstates (typical level spacings $\sqrt{V_0/I}\approx meV$) strongly
increases such that they should more accurately be described in terms of
coherent states. This necessitates the inclusion of coherences in
(\ref{redfield}) also for {\em non-degenerate} eigenstates which typically
leads to negative populations and thus to a breakdown of the perturbative
treatment. A separation of time scales between the sluggish motion of the
dihedral angle and a faster charge transfer through the contact suggests an
alternative approach, however, which avoids this deficiency. In the spirit of a
BO one first calculates steady-state solutions
$\rho_{\rm st}(\phi)$ for the electronic system at any {\em fixed} value of the
rotational angle and then uses this solution to extract an effective
steady-state potential for the torsional mode. Within the basis
$\{|0\rangle,|\pm\rangle, |D\rangle\}$ and for given $\phi$ corresponding
energies are {\em not} degenerate [except for $V_S=0$ at
$\phi=\frac{\pi}{2} \ {\rm mod}\, \pi$ (Fig.~\ref{eeff})].
A secular approximation applied for fixed $\phi$ to (\ref{redfield}) leads
thus in steady state to
$R (\rho_{00},\rho_{++},\rho_{--},\rho_{DD})_{\rm st}^{t}=0$ with
\begin{equation}
R=\left(\begin{array}{cccc}
\sigma_{00}&-\Sigma^{\rm out}_{+0}&-\Sigma^{\rm out}_{-0}&0\\
-\Sigma^{\rm in}_{+0}&\sigma_{++}&0&-\Sigma^{\rm out}_{D+}\\
-\Sigma^{\rm in}_{-0}&0&\sigma_{--}&-\Sigma^{\rm out}_{D-}\\
0&-\Sigma^{\rm in}_{D+}&-\Sigma^{\rm in}_{D-}&\sigma_{DD}
\end{array}
\right)\, .
\end{equation}
Here, transition rates read $\sigma_{00}=\Sigma^{\rm in}_{+0}
+\Sigma^{\rm in}_{-0}$, $\sigma_{\pm \pm}=\Sigma^{\rm out}_{\pm 0}
+\Sigma^{\rm in}_{D\pm}$, and $\sigma_{DD}=\Sigma^{\rm out}_{D+}
+\Sigma^{\rm out}_{D-}$, where incoming self-energies are given by
\begin{equation}
\label{rates}
\Sigma^{\rm in}_{ab}=\sum_\alpha
\Sigma^{\rm in}_{ab,\alpha}=\hbar\sum_\alpha
\Gamma_{ab,\alpha} f_\alpha[E_a(\phi)-E_b(\phi)]
\end{equation}
and outgoing ones by $\Sigma^{\rm out}_{ab}=\hbar\sum_\alpha
\Gamma_{ab,\alpha}-\Sigma^{\rm in}_{ab,\alpha}$.
Further, we have introduced Fermi distributions
$f_\alpha(E)=1/\{1+\exp[\beta(E-\mu_\alpha)]\}$ and
coupling rates:
$\Gamma_{ab,\alpha}=2\,\Gamma_\alpha|\langle a|d_\alpha+d^\dagger_\alpha|b\rangle|^2$.
As noted above, these coupling rates are proportional to $\sin^2\theta(\phi)$
and $\cos^2\theta(\phi)$, respectively, and reduce to $\Gamma_\alpha$ in
absence of a Stark-shift ($V_S=0: \theta=\pi/4$).
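The steady state $R\rho_{\rm st}=0$ together with the normalization ${\rm Tr}\,\rho=1$ can be solved by replacing one row of the (singular) rate matrix with the normalization condition; note that every column of $R$ sums to zero, since each outgoing rate of one state is an incoming rate of another. A minimal sketch, not tied to the specific self-energies of eq. (\ref{rates}):

```python
import numpy as np

def steady_state(R):
    """Solve R @ rho = 0 subject to sum(rho) = 1.

    R is the singular rate matrix (columns sum to zero by rate conservation),
    so one equation is redundant and can be traded for the normalization.
    """
    A = np.array(R, dtype=float)
    b = np.zeros(A.shape[0])
    A[-1, :] = 1.0   # replace last equation by the trace condition
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two-state toy example: loss rate 1 out of state 0, gain rate 2 from state 1.
rho = steady_state([[1.0, -2.0], [-1.0, 2.0]])   # -> [2/3, 1/3]

# The BOS of the text then follows as V_eff = sum_j E_j(phi) * rho_jj(phi).
```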
The angle dependent steady state density determines all relevant quantities
such as the Born-Oppenheimer surface (BOS) for the torsional mode
\begin{equation}
V_{\rm eff}(\phi)= \sum_{j\in\{0,+,-,D\}}E_j(\phi)\rho_{jj}(\phi)\,
\label{veff}
\end{equation}
(red line in Fig.~\ref{eeff})
and the angle dependent current through lead $\alpha$ [see (\ref{ieq})]
\begin{eqnarray}
\lefteqn{ I_\alpha(\phi)=}\nonumber\\
&&-\rho_{00}(\Sigma^{\rm in}_{+0,\alpha}+\Sigma^{\rm in}_{-0,\alpha})
+\rho_{++}(\Sigma^{\rm out}_{+0,\alpha}-\Sigma^{\rm in}_{D+,\alpha})\nonumber\\
&&+\rho_{--}(\Sigma^{\rm out}_{-0,\alpha}-\Sigma^{\rm in}_{D-,\alpha})
+\rho_{DD}(\Sigma^{\rm out}_{D+,\alpha}+\Sigma^{\rm out}_{D-,\alpha})\, .
\label{ieq2}
\end{eqnarray}
(red line in Fig.~\ref{iphifig}).
\begin{figure}
\epsfig{file=potplot_neu.ps, angle=270, width=6.5cm}
\caption{Energy surfaces of the bare molecular states $|0\rangle, |\pm\rangle$
and the BOS $V_{\rm eff}$ vs.\ the torsional angle $\phi$ for $V_b=0.5V$,
$V_g=0.5V$. In the upper panel the bare molecular eigenstates are used; in the lower one
a Stark shift of $V_S=0.3V_b$ between the left and right site is assumed.}\label{eeff}
\end{figure}
\begin{figure}
\epsfig{file=iphiplot_neu.eps, angle=270, width=6.5cm}
\caption{Angle dependent current and probability density of the $T=80$~K rotational
state. Parameters are the same as in Fig.~\ref{eeff}.}\label{iphifig}
\end{figure}
Eventually, the Schr\"odinger equation for the torsional degree of freedom with
the effective potential (\ref{veff}) is solved providing eigenfunctions
$\Psi_\nu(\phi)$ and energies $\epsilon_\nu$.
Note that these depend on the electronic steady state of the junction and thus
on bias and gate voltage as well as temperature.
Angle-averaged expectation values of observables $X(\phi)$ are obtained from
$\langle X(\phi)\rangle=\int{\rm d}\phi\sum_\nu\, X(\phi)\,
|\Psi_\nu(\phi)|^2\, \exp(-\beta\epsilon_\nu)/Z$ with partition function $Z$.
A thermal equilibrium state of the {\em torsional} sector is not in conflict
with steady-state solutions of the {\em electronic} sector far from
equilibrium due to the coupling of the former one to a heat bath environment
of residual vibronic modes. Note that this coupling in turn suppresses any possible bistability of the system,
e.g.\ in Fig.~\ref{eeff} (top) the bistability between the global
minimum situated at $\phi=0$ and the local one around
$\phi=\pi/2$.
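The angle averaging described above is a Boltzmann-weighted sum over torsional eigenstates; a minimal sketch, assuming the eigenfunctions $\Psi_\nu$ have already been obtained by solving the Schr\"odinger equation on $V_{\rm eff}$:

```python
import numpy as np

def thermal_average(X, phi, psi, eps, beta):
    """<X> = sum_nu [exp(-beta*eps_nu)/Z] * integral dphi X(phi) |Psi_nu(phi)|^2.

    phi: uniform angle grid; psi[nu]: eigenfunction sampled on phi (normalized
    so that sum |psi|^2 dphi = 1); eps[nu]: torsional eigenenergies.
    """
    dphi = phi[1] - phi[0]
    w = np.exp(-beta * (np.asarray(eps) - np.min(eps)))  # shifted for stability
    w /= w.sum()                                         # Boltzmann weights / Z
    dens = sum(wn * np.abs(pn) ** 2 for wn, pn in zip(w, psi))
    return float(np.sum(X(phi) * dens) * dphi)

# Sanity check: a single flat state on [0, 2*pi) gives <cos^2 phi> = 1/2.
```

Because the low-lying states are well localized around the minima of $V_{\rm eff}$, the thermal density $\sum_\nu w_\nu|\Psi_\nu|^2$ effectively samples $X(\phi)$ only near those minima.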
With typical energy barriers in $V_{\rm eff}$ of order $eV$ (Fig.~\ref{eeff}),
the mean angle $\langle \phi\rangle$ is dominated over a broad temperature
range mostly by torsional states well localized around the minima of
$V_{\rm eff}$ (green lines in Fig.~\ref{iphifig}).
Finite angle-averaged currents $\langle I(\phi)\rangle$ result
from sufficient overlap of angle dependent currents (\ref{ieq2}) with
these states.
Before we proceed, let us specify the constraints for the above scenario.
It is based on a time scale separation between electronic passage through the
molecule and torsional motion (adiabaticity) and on a separation between
electronic transition energies and electrode-molecule couplings
$\hbar\Gamma$ (perturbation theory). Specifically, this means
$\epsilon_\nu \ll \hbar\Gamma\ll E_+-E_-$ which should be accessible in
experimental set-ups due to $\epsilon_\nu\sim meV\ll E_+-E_-\sim eV$.
\section{Transport properties}
Let us now discuss the steady state properties in
more detail. We start with the case of a vanishing intramolecular field, $\kappa=0$.
In Fig.~\ref{phifig} (bottom) the twist angle displays stability plateaus
which reveal conformational changes with increasing bias and gate voltage,
respectively. In a central diamond (low bias voltage, $V_g>0$) the molecule
resides in an almost planar configuration ($\langle\phi\rangle/\pi\approx 0$),
while outside it is found mainly out of plane with
$\langle\phi\rangle/\pi\approx \frac{1}{4}$ or
$\langle\phi\rangle/\pi\approx \frac{1}{8}$. One additional plateau appears
with $\langle \phi\rangle/\pi \approx \frac{3}{16}$ when four transition
channels are accessible: $|0\rangle\to |\pm\rangle, |\pm\rangle\to |D\rangle$.
This step-wise snapping leaves its imprint on the current as seen in
Figs.~\ref{phifig} (top) and \ref{currentcut}. According to (\ref{ieq2}) and
(\ref{rates}) transport channels open when $eV$ exceeds transition energies
between molecular states which lie in the conduction window determined by $V_g$.
The current exhibits distinct plateaus with $\langle I\rangle/(e\Gamma)=0,
\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, 1$ which can be individually activated
by the gate voltage. An asymmetric response to $V_g$ is also seen with a
central Coulomb diamond shifted towards positive voltages
[\onlinecite{grifoni}].
\begin{figure}
\epsfig{file=colorplot_coulomb.ps, angle=270, width=6.5cm}
\caption{Net current in units of $e\Gamma$ (top) and mean torsional angle
(degrees, bottom) in steady state vs.\ gate voltage $V_g$ and bias voltage
$V_b$ (in units of $V$) with $V_S=0$.}\label{phifig}
\end{figure}
\begin{figure}
\epsfig{file=currentcut2.eps, width=7cm}
\caption{Current and equilibrium angle vs.\ $V_g$ at fixed $V_b=3V$, for $\kappa=0$.
Arrows denote the opening (closing) of transition channels.}\label{currentcut}
\end{figure}
In realistic junctions the presence of an intramolecular field (according to
[\onlinecite{xue}] we set $\kappa=0.3$) breaks the symmetry, thus opening a gap between
the $E_+$ and the $E_-$ surfaces (Fig.~\ref{eeff})
and decreasing the HOMO ($E_0$)--LUMO ($E_-$) separation [\onlinecite{xue}].
While for the mean twist angle and the net current (Fig.~\ref{phistarkfig})
the general appearance of the central diamond survives, new wedge-like structures
appear with $\langle\phi\rangle/\pi=1/2$ and thus
$\langle I\rangle\approx 0$ due to $T(\pi/2)=0$. Namely, for finite Stark shifts
the effective coupling between the leads and the one-particle eigenstates
becomes highly asymmetrical and peaked near $\phi=\pi/2$. This effect
is surprisingly robust and dominates the conduction properties even
at very low internal fields ($\kappa=0.01$). The fully symmetric situation for
$V_S=0$ thus turns out to be extremely unstable and not applicable to actual
junctions.
A cut through the $V_b$-$V_g$ plane at a fixed bias voltage (Fig.~\ref{cut})
reveals the details of the opening and closing of transitions between molecular
states upon sweeping $V_g$ from negative to positive values and the simultaneous
snapping of the dihedral angle. In contrast, previous treatments where the
torsion angle is considered to be fixed [\onlinecite{venka,xia,mis1,mis2}]
provide conductances $\propto \cos^2(\phi)$. Here, transitions between current
plateaus occur rather abruptly with simultaneous switchings of
the molecular backbone. Notably, this behavior is very insensitive to
temperature fluctuations; the system showed just the same plateaus (albeit
slightly rounded) for $T=300\,\mathrm{K}$.
Of particular interest are sequences $\langle I\rangle/e\Gamma\approx 1
\leftrightarrow 0 \leftrightarrow 1/2$ in the regime $V_g=0\ldots 4$ with
substantial conformational changes
$\langle\phi\rangle/\pi\approx 1/4 \leftrightarrow 1/2\leftrightarrow 0$.
\begin{figure}
\epsfig{file=colorplot_stark.ps, angle=270, width=6.5cm}
\caption{Same as Fig.~\ref{phifig} but with $V_S=0.3V_b$. The vertical line
at $V_b=3V$ marks the cut displayed in Fig.~\ref{cut}.}\label{phistarkfig}
\end{figure}
\begin{figure}
\epsfig{file=currentcut3.eps, angle=0, width=8cm}
\caption{Current $\langle I\rangle$ in units of $e\Gamma$ (solid) and average
twist angle $\langle \phi\rangle/\pi$ (dashed) at $V_b=3V$ vs.\ $V_g$ (in $V$).
To the left (right) of the maximal current arrows indicate the opening
(closing) of transition channels between molecular states
$|0\rangle, |\pm\rangle, |D\rangle$.}\label{cut}
\end{figure}
This could be exploited as a molecular switch (current on/off) with the
benefit of being insensitive against variations of temperature and internal
fields in contrast to alternative proposals. In another application
the voltage dependence of the molecular conformation could be exploited to
operate the biphenyl as a molecular motor [\onlinecite{selden}]. More
specifically, in a first version a slowly varying time periodic gate voltage
at a fixed bias voltage induces a valve-like behavior (e.g.\ open for
$\phi=0$, closed for $\phi=\pi/2$). The state of the rotor is directly read
off from the variations of the dc-current. In an extended set-up side-groups
are attached to each of the biphenyl rings to break the $\pi$-symmetry in an
additional static electric field. A time-dependent gate voltage may then
generate left- or right-handed rotations of the individual rings. This
situation will be explored in a subsequent work.
\section{Summary}
We have analyzed charge transfer through conjugated organic
molecules contacted to metallic electrodes where a single sluggish vibrational
degree of freedom strongly couples to excess charges. In the regime of higher
voltages and molecule-electrode couplings large compared to vibrational
excitations, but small compared to intramolecular electronic level spacings,
voltage dependent energy surfaces are determined. Bias and gate voltages allow
one to activate current plateaus that correspond to specific molecular
conformations under steady state conditions.
Applications may include non-linear junctions robust against thermal
fluctuations and molecular motors.
\acknowledgments{We thank M. Thoss and C. Timm for valuable discussions.
Financial support from the DFG through SFB569 is gratefully acknowledged.}
\section{Appendix}
\begin{table}[htb]
\renewcommand\thetable{A}
\centering
\caption{The performance (agreement and test accuracy) of previous methods under the soft-label and hard-label settings.
The average performance reduction for each dataset under the hard-label setting is reported in the last row.}
\resizebox{\textwidth}{!}{
\begin{tabular}{llcccccccc}
\hline
\multicolumn{2}{l}{\multirow{2}{*}{Method}} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{SVHN} & \multicolumn{2}{c}{Caltech256} & \multicolumn{2}{c}{CUBS200} \\
\cline{3-10}
\multicolumn{2}{l}{} & Agreement & Acc & Agreement & Acc & Agreement & Acc & Agreement & Acc \\
\hline
\multirow{2}{*}{KnockoffNets} & soft-label & 81.59\% & 80.03\% & 93.17\% & 92.14\% & 76.42\% & 74.42\% & 65.48\% & 59.15\% \\
& \cellcolor{mygray}hard-label & \cellcolor{mygray}75.32\%\tiny\textbf{-6.27\%} & \cellcolor{mygray}74.44\%\tiny\textbf{-5.59\%} & \cellcolor{mygray}85.00\%\tiny\textbf{-8.17\%} & \cellcolor{mygray}84.50\%\tiny\textbf{-7.64\%} & \cellcolor{mygray}57.64\%\tiny\textbf{-18.78\%} & \cellcolor{mygray}55.28\%\tiny\textbf{-19.14\%} & \cellcolor{mygray}30.01\%\tiny\textbf{-35.47\%} & \cellcolor{mygray}28.03\%\tiny\textbf{-31.12\%} \\
\hline
\multirow{2}{*}{ActiveThief(Entropy)} & soft-label & 81.61\% & 79.85\% & 92.79\% & 91.95\% & 77.38\% & 70.91\% & 68.12\% & 60.39\% \\
& \cellcolor{mygray}hard-label & \cellcolor{mygray}75.26\%\tiny\textbf{-6.35\%} & \cellcolor{mygray}74.21\%\tiny\textbf{-5.64\%} & \cellcolor{mygray}90.47\%\tiny\textbf{-2.32\%} & \cellcolor{mygray}89.85\%\tiny\textbf{-2.10\%} & \cellcolor{mygray}56.28\%\tiny\textbf{-21.10\%} & \cellcolor{mygray}54.14\%\tiny\textbf{-16.77\%} & \cellcolor{mygray}32.05\%\tiny\textbf{-36.07\%} & \cellcolor{mygray}29.43\%\tiny\textbf{-30.96\%} \\
\hline
\multirow{2}{*}{ActiveThief(k-Center)} & soft-label & 82.98\% & 81.42\% & 94.45\% & 93.62\% & 78.66\% & 72.20\% & 73.71\% & 65.34\% \\
& \cellcolor{mygray}hard-label & \cellcolor{mygray}75.71\%\tiny\textbf{-7.27\%} & \cellcolor{mygray}74.24\%\tiny\textbf{-7.18\%} & \cellcolor{mygray}81.45\%\tiny\textbf{-13.00\%} & \cellcolor{mygray}80.79\%\tiny\textbf{-12.83\%} & \cellcolor{mygray}61.19\%\tiny\textbf{-17.47\%} & \cellcolor{mygray}58.84\%\tiny\textbf{-13.36\%} & \cellcolor{mygray}37.68\%\tiny\textbf{-36.03\%} & \cellcolor{mygray}34.64\%\tiny\textbf{-30.70\%} \\
\hline
\multirow{2}{*}{ActiveThief(DFAL)} & soft-label & 80.42\% & 78.88\% & 91.41\% & 90.57\% & 64.56\% & 59.81\% & 53.24\% & 47.65\% \\
& \cellcolor{mygray}hard-label & \cellcolor{mygray}76.72\%\tiny\textbf{-3.70\%} & \cellcolor{mygray}75.62\%\tiny\textbf{-3.26\%} & \cellcolor{mygray}84.79\%\tiny\textbf{-6.62\%} & \cellcolor{mygray}84.17\%\tiny\textbf{-6.40\%} & \cellcolor{mygray}46.92\%\tiny\textbf{-17.64\%} & \cellcolor{mygray}44.91\%\tiny\textbf{-14.90\%} & \cellcolor{mygray}20.31\%\tiny\textbf{-32.93\%} & \cellcolor{mygray}18.69\%\tiny\textbf{-28.96\%} \\
\hline
\multirow{2}{*}{ActiveThief(DFAL+k-Center)} & soft-label & 82.05\% & 80.86\% & 93.03\% & 92.08\% & 67.27\% & 62.67\% & 61.39\% & 55.18\% \\
& \cellcolor{mygray}hard-label & \cellcolor{mygray}74.97\%\tiny\textbf{-7.08\%} & \cellcolor{mygray}73.98\%\tiny\textbf{-6.88\%} & \cellcolor{mygray}81.40\%\tiny\textbf{-11.63\%} & \cellcolor{mygray}80.86\%\tiny\textbf{-11.22\%} & \cellcolor{mygray}55.70\%\tiny\textbf{-11.57\%} & \cellcolor{mygray}53.69\%\tiny\textbf{-8.98\%} & \cellcolor{mygray}26.60\%\tiny\textbf{-34.79\%} & \cellcolor{mygray}24.42\%\tiny\textbf{-30.76\%} \\
\hline
Average difference & & -6.13\% & -5.71\% & -8.35\% & -8.04\% & -17.31\% & -14.63\% & -35.06\% & -30.50\% \\
\hline
\end{tabular}
}
\label{tab1}
\end{table}
\begin{table}[htb]
\renewcommand\thetable{B}
\centering
\caption{Test accuracy of our method and previous methods with different architectures on CIFAR10 dataset. The smaller the standard deviation (Std), the more stable the method.}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{5}{c}{Substitute's architecture} & \multirow{2}{*}{Std($\times10^{-2}$)$\downarrow$} \\
\cline{2-6}
& ResNet-34 & ResNet-18 & ResNet-50 & VGG-16 & DenseNet \\
\hline
KnockoffNets & 74.44\% & 77.12\% & 66.78\% & 53.52\% & 78.50\% & 9.22 \\
ActiveThief(k-Center) & 74.24\% & 72.90\% & 71.25\% & 35.56\% & 74.48\% & 15.11 \\
ActiveThief(Entropy) & 74.21\% & 78.77\% & 73.52\% & 37.88\% & \textbf{79.09\%} & 15.57 \\
Ours & \textbf{80.47\%} & \textbf{79.93\%} & \textbf{80.34\%} & \textbf{75.22\%} & 74.43\% & \textbf{2.68} \\
\hline
\end{tabular}
}
\label{tab:arc}
\end{table}
\subsection{Gap between hard-label and soft-label setting}
Here, we report the numerical results of previous methods under both the soft-label setting and the hard-label setting as a supplement to Fig.\,1.
To be consistent with the experiment section, the victim models we use are trained using a ResNet-34~\cite{he2016deep} architecture on four datasets: CIFAR10~\cite{krizhevsky2009learning}, SVHN~\cite{netzer2011reading}, Caltech256~\cite{griffin2007caltech}, and CUBS200~\cite{wah2011caltech}.
Their test accuracies are $91.56\%$, $96.45\%$, $78.40\%$, and $77.10\%$, respectively.
We use the 1.2M images without labels presented in the ILSVRC-2012 challenge~\cite{russakovsky2015imagenet} as the attack dataset.
We also adopt official source codes from the authors for a fair comparison.
As shown in Tab.\,\ref{tab1}, the performance of all previous methods degrades significantly on the four datasets in this scenario; the average losses, reported in the last row of Tab.\,\ref{tab1}, are 5.71\%, 8.04\%, 14.63\%, and 30.50\%, respectively.
The above results show that in the hard-label scenario, the previous model stealing methods are not effective enough.
\subsection{The influence of model architectures}
Instead of assuming that the substitute model and the victim one share the same architecture, we show the effect of different model architectures here on the CIFAR10 dataset.
Keeping ResNet-34 as the victim model, we choose the architecture of the substitute model from ResNet-34, ResNet-18, ResNet-50~\cite{he2016deep}, VGG-16~\cite{simonyan2014very}, and DenseNet~\cite{huang2017densely}. With the matching architecture included, we use the standard deviation across these choices to evaluate how strongly each method depends on the architecture. As shown in Tab.\,\ref{tab:arc}, the standard deviation of our method is about $1/6$ to $1/3$ of the others', which means that our method is less susceptible to the choice of model architecture. In real situations, the architecture of the victim model is often unknown; since our method is less affected by it, our method performs better in real-world attacks.
\begin{figure}[tb]
\renewcommand\thefigure{A}
\centering
\includegraphics[width=\textwidth]{imgs/tmp_1_cropped.pdf}
\caption{Additional visualized attention maps of the victim model and of substitute models at different training stages, obtained with Grad-CAM.
Along with the training stages, the attention map of the substitute model tends to fit the victim model's.}
\label{fig:cam_1}
\end{figure}
\subsection{The visualization of the attention alignment.}
As we point out in Section 3.1, the novel CAM-driven erasing strategy we designed not only digs out more class information, but also helps the substitute model align with the victim model's attention. As shown in Fig.\,\ref{fig:cam_1}, at the beginning, the substitute model learns the wrong attention map.
Along with the iterative training stages, the attention area of the substitute model tends to fit the victim model's, which conforms to our intention.
As~\cite{zagoruyko2016paying} stated, we transfer the victim's attention to the substitute model, which is one of the reasons why our method is so effective.
\section{Experiment}
\subsection{Experiment settings}
\textbf{Victim model.} The victim models we used (ResNet-34~\citep{he2016deep}) are trained on four datasets, namely, CIFAR10~\citep{krizhevsky2009learning}, SVHN~\citep{netzer2011reading}, Caltech256~\citep{griffin2007caltech}, and CUBS200~\citep{wah2011caltech}, and their test accuracies are $91.56\%$, $96.45\%$, $78.40\%$, and $77.10\%$, respectively.
All models are trained using the SGD optimizer with momentum (of 0.5) for 200 epochs with a base learning rate of 0.1 decayed by a factor of 0.1 every 30 epochs.
Following~\citep{orekondy19knockoff,pal2020activethief,zhou2020dast}, we use the same architecture for the substitute model and will analyze the impact of different architectures in the supplementary.
\textbf{Attack dataset.}
We use $1.2M$ images without labels from the ILSVRC-2012 challenge~\citep{russakovsky2015imagenet} as the attack dataset.
In a real attack scenario, the attacker may use pictures collected from the Internet, and the ILSVRC-2012 dataset can simulate this scenario well.
Note that we resize all images in the attack dataset to fit the size of the target datasets, which is similar to the existing setting \cite{orekondy19knockoff,pal2020activethief,zhou2020dast}.
\textbf{Training process.}
We use the SGD optimizer with momentum (of 0.9) for 200 epochs and a base learning rate of $0.02\times{\frac{batchsize}{128}}$ decayed by a factor of 0.1 every 60 epochs. The weight decay is set to $5\times10^{-4}$ for small datasets (CIFAR10~\citep{krizhevsky2009learning} and SVHN~\citep{netzer2011reading}) and $0$ for others.
We set up a query sequence $\{0.1\mathrm{K}, 0.2\mathrm{K}, 0.5\mathrm{K}, 0.8\mathrm{K}, 1\mathrm{K}, 2\mathrm{K}, 5\mathrm{K}, 10\mathrm{K}, 20\mathrm{K}, 30\mathrm{K}\}$ as the iterative maximum query budget, and stop the sampling stage whenever reaching the budget at each iteration.
\textbf{Baselines and evaluation metric.}
We mainly compare our method with KnockoffNets~\citep{orekondy19knockoff} and ActiveThief~\citep{pal2020activethief}.
Following~\citet{jagielski2020high}, we mainly report the test accuracy (Acc) as the evaluation metric.
As a supplement, we also report the \emph{Agreement} metric proposed by~\citet{pal2020activethief}, which counts how often the prediction of the substitute model matches the victim's.
\begin{table}[tb]
\centering
\caption{
The agreement and test accuracy (in \%) of each method under 30k queries. For our model, we report the average accuracy as well as the standard deviation computed over 5 runs.
(\textbf{Boldface}: the best value, \textit{italics}: the second best value.)
}
\resizebox{\textwidth}{!}{
\begin{tabular}{lcccccccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{SVHN} & \multicolumn{2}{c}{Caltech256} & \multicolumn{2}{c}{CUBS200} \\
\cline{2-9}
& Agreement & Acc & Agreement & Acc & Agreement & Acc & Agreement & Acc \\
\hline
KnockoffNets & 75.32 & 74.44 & 85.00 & 84.50 & 57.64 & 55.28 & 30.01 & 28.03 \\
ActiveThief(Entropy) & 75.26 & 74.21 & 90.47 & 89.85 & 56.28 & 54.14 & 32.05 & 29.43 \\
ActiveThief(k-Center) & 75.71 & 74.24 & 81.45 & 80.79 & 61.19 & 58.84 & 37.68 & 34.64 \\
ActiveThief(DFAL) & 76.72 & 75.62 & 84.79 & 84.17 & 46.92 & 44.91 & 20.31 & 18.69 \\
ActiveThief(DFAL+k-Center) & 74.97 & 73.98 & 81.40 & 80.86 & 55.70 & 53.69 & 26.60 & 24.42 \\
Ours+Random & \textbf{82.14}$\pm$0.16 & \textbf{80.47}$\pm$0.02 & \textbf{92.33}$\pm$0.47 & \textbf{91.57}$\pm$0.29 & \textit{62.15}$\pm$0.52 & \textit{59.91}$\pm$0.58 & \textit{38.28}$\pm$0.31 & \textit{35.24}$\pm$0.49 \\
Ours+k-Center & \textit{80.84}$\pm$0.21 & \textit{79.27}$\pm$0.15 & \textit{91.47}$\pm$0.09 & \textit{90.68}$\pm$0.14 & \textbf{65.12}$\pm$0.56 & \textbf{62.72}$\pm$0.57 & \textbf{46.69}$\pm$0.87 & \textbf{42.91}$\pm$0.46 \\
\hline
\end{tabular}
}
\label{tab1:result}
\end{table}
\subsection{Experiment results}
\begin{figure}[]
\centering
\subfigure[CIFAR10]{
\includegraphics[width=0.25\columnwidth]{imgs/CIFAR10_only.pdf}
}\hspace{-3.3mm}
\subfigure[SVHN]{
\includegraphics[width=0.25\columnwidth]{imgs/SVHN_only.pdf}
}\hspace{-3.3mm}
\subfigure[Caltech256]{
\includegraphics[width=0.25\columnwidth]{imgs/Caltech256_only.pdf}
}\hspace{-3.3mm}
\subfigure[CUBS200]{
\includegraphics[width=0.25\columnwidth]{imgs/CUBS200_only.pdf}
}
\caption{Curves of the test accuracy versus the number of queries.}
\label{fig:result}
\end{figure}
We first report the performance of our method compared with previous methods.
After that, we conduct ablation experiments to analyze the contribution of each module.
Finally, we also analyze the performance of our method when encountering defense methods and real-world online APIs.
More experiments (\emph{e.g.}, adversarial attack and overfitting analysis) can be found in our supplementary.
\textbf{Effectiveness of our method.} As shown in Tab.\,\ref{tab1:result}, the test accuracy and agreement of our method are consistently better than those of the previous methods.
We also plot the curves of the test accuracy versus the number of queries in Fig.\,\ref{fig:result}.
The performance of our method consistently outperforms other methods throughout the process.
Since our method does not conflict with previous sample selection strategies, it can be combined with them to further improve the performance of these attacks.
Here, we take the k-Center algorithm as an example.
Note that, with or without the sample selection strategy, our method beats the previous methods by a large margin.
Particularly, the test accuracies of our method are 4.85\%, 1.72\%, 3.88\%, and 8.27\% higher than the previous best method, respectively.
And the agreement metric shares similar results.
It is also interesting that it is less necessary to use the k-Center algorithm on datasets with a small number of classes (\emph{i.e.}, CIFAR10 and SVHN).
While for the datasets with a large number of classes, the k-Center algorithm can make the selected samples better cover each class and improve the effectiveness of the method.
\begin{table}[t]
\centering
\caption{Ability to evade the state-of-the-art defense method (adaptive misinformation) on the CIFAR10 dataset.
The larger the threshold, the stronger the defense but the lower the victim model's accuracy (a threshold of 0 means no defense).
Our method evades the defense best, and the self-KD part makes a great difference.
}
\begin{tabular}{lcccc}
\hline
\multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{4}{c}{Threshold} \\
\cline{2-5}
\multicolumn{1}{c}{} & 0 & 0.5 & 0.7 & 0.9 \\
\hline
KnockoffNets & 74.44\% & 74.13\% & 73.61\% & 54.98\% \\
ActiveThief(k-Center) & 74.24\% & 69.14\% & 59.78\% & 50.19\% \\
ActiveThief(Entropy) & 74.21\% & 71.61\% & 64.84\% & 51.07\% \\
Ours & \textbf{80.47\%} & \textbf{79.95\%} & \textbf{78.25\%} & \textbf{74.40\%} \\
Ours w/o self-KD & 79.02\% & 78.66\% & 73.61\% & 61.81\% \\
\hline
victim model & 91.56\% & 91.23\% & 89.10\% & 85.14\% \\
\hline
\end{tabular}
\label{tab:mis}
\end{table}
\textbf{Ability to evade the SOTA defense method.}
The SOTA perturbation-based defense method, adaptive misinformation~\citep{kariyappa2020defending},
introduces an Out-Of-Distribution (OOD) detection module based on the maximum predicted value
and punishes the OOD samples with a perturbed model $f^{\prime}(\cdot;\theta^{\prime})$.
The model ${f}^{\prime}(\cdot;{\theta}^{\prime})$ is trained with
$\arg \min_{{\theta}^{\prime}} \mathbb{E}_{(x,y)}[-\log(1-{f}^{\prime}(x;{\theta}^{\prime})_y)]$
to minimize the probability of the correct class.
Finally, the output will be:
\begin{equation}
\begin{aligned}
& y^{\prime} =(1-\alpha)f(x;\theta)+\alpha\,{f}^{\prime}(x;{\theta}^{\prime}), \\
\end{aligned}
\end{equation}
where $\alpha=1/(1+e^{\nu (\max f(x;\theta)-\tau)})$, with hyper-parameter $\nu$, is the coefficient controlling how much of the correct result is returned, and $\tau$ is the threshold used for OOD detection.
The model returns incorrect predictions for the OOD samples without having much impact on the in-distribution samples.
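To make the defense concrete, the blending of the two models' outputs can be sketched as follows (a minimal numpy sketch; the probability vectors and the values of $\nu$ and $\tau$ are illustrative, not the authors' settings):

```python
import numpy as np

def adaptive_misinformation_output(p_correct, p_misinform, nu=100.0, tau=0.5):
    """Blend the victim's output with the misinforming model's output.

    p_correct:    softmax output f(x; theta) of the victim model
    p_misinform:  softmax output f'(x; theta') of the perturbed model
    alpha -> 1 for low-confidence (likely OOD) queries, alpha -> 0 otherwise.
    """
    alpha = 1.0 / (1.0 + np.exp(nu * (np.max(p_correct) - tau)))
    return (1.0 - alpha) * p_correct + alpha * p_misinform

# In-distribution query: high max confidence, output stays close to f(x)
y_in = adaptive_misinformation_output(np.array([0.9, 0.05, 0.05]),
                                      np.array([0.0, 0.5, 0.5]))

# OOD query: low max confidence, output dominated by the perturbed model
y_ood = adaptive_misinformation_output(np.array([0.4, 0.3, 0.3]),
                                       np.array([0.0, 0.5, 0.5]))
```

The sigmoid switch leaves in-distribution answers essentially untouched while routing OOD queries to the misinformation model.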
We choose four values of the threshold $\tau$ to compare the effect of our method with that of the previous methods; a threshold value of $0$ means no defense. The results are shown in Tab.\,\ref{tab:mis}.
Compared with other methods, adaptive misinformation is almost invalid to our method.
Furthermore, we find that if we remove the self-KD in our method, the performance is greatly reduced.
We conclude that this is because adaptive misinformation adds noisy labels to the substitute model's training dataset, and self-KD alleviates the overfitting of the substitute model to this dataset, rendering the defense ineffective.
\begin{wrapfigure}[15]{r}{0.4\textwidth}
\centering
\vspace{-10pt}
\includegraphics[width=0.35\textwidth]{imgs/CUBS200_aba_cropped.pdf}
\caption{Ablation study on CUBS200 dataset for the contribution of the CAM-driven erasing and the self-KD in our method.}
\label{fig:ablation}
\end{wrapfigure}
\textbf{Ablation study.}
To evaluate the contribution of different modules in our method, we conduct the ablation study on CUBS200 dataset and plot the results in Fig.\,\ref{fig:ablation}.
If the CAM-driven erasing strategy is removed, the performance of our method will be greatly reduced, showing that it has an indispensable position in our method.
We also give some visual examples in Fig.\,\ref{fig:cam} to demonstrate that this strategy can help align the attention of two models.
As depicted in Fig.\,\ref{fig:cam}, at the beginning, the substitute model learns the wrong attention map.
Along with the iterative training stages, the attention area of the substitute model tends to fit the victim model's, which conforms to our intention.
We further remove the self-KD module to evaluate its performance.
It can be found from Fig.\,\ref{fig1} and Fig.\,\ref{fig:ablation} that the self-KD can improve the generalization of our method and further improve the performance.
\begin{wrapfigure}[12]{r}{0.4\textwidth}
\centering
\includegraphics[width=0.35\textwidth]{imgs/aws_only_cropped.pdf}
\caption{The experiment on AWS online API.}
\label{fig:aws}
\end{wrapfigure}
\textbf{Stealing functionality of a real-world API.}
We validate that our method is applicable to real-world APIs.
The AWS Marketplace is an online store that provides a variety of trained ML models for users; these models can only be accessed in a black-box setting.
We choose a popular model (waste classifier\,\footnote{\url{https://amzn.to/3nFvA54}}) as the victim model.
We use ILSVRC-2012 dataset as the attack dataset and choose another small public waste classifier dataset\,\footnote{\url{https://github.com/garythung/trashnet}}, containing $2,527$ images as the test dataset.
As shown in Fig.\,\ref{fig:aws}, the substitute model obtained by our method achieves $12.63\%$ and $7.32\%$ improvements in test accuracy compared with the two previous methods, which shows that our method has stronger practicality in the real world.
\begin{table}[tb]
\centering
\caption{Transferability of adversarial samples generated with PGD attack on the substitute models.}
\begin{tabular}{lccccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{5}{c}{Substitute's architecture} \\
\cline{2-6}
& ResNet-34 & ResNet-18 & ResNet-50 & VGG-16 & DenseNet \\
\hline
KnockoffNets & 57.85\% & 63.33\% & 52.04\% & 42.88\% & 60.77\% \\
ActiveThief(k-Center) & 57.44\% & 57.90\% & 57.01\% & 16.49\% & 60.72\% \\
ActiveThief(Entropy) & 63.56\% & 66.76\% & 58.19\% & 55.43\% & 62.05\% \\
Ours & \textbf{76.63\%} & \textbf{74.10\%} & \textbf{74.28\%} & \textbf{67.03\%} & \textbf{66.96\%} \\
\hline
\end{tabular}
\label{tab:adv}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{imgs/cam.pdf}
\caption{
The visualized attention maps of the victim model and of substitute models at different training stages, obtained with Grad-CAM.
Along with the training stages, the attention map of the substitute model tends to fit the victim model's.
}
\label{fig:cam}
\end{figure}
\textbf{Transferability of adversarial samples.}
Though with the dominant performance on a wide range of tasks, deep neural networks are shown to be vulnerable to imperceptible perturbations, \emph{i.e.}, adversarial examples~\citep{szegedy2013intriguing}. Since the model stealing attack can obtain a functionally similar substitute model,
some previous works (\emph{e.g.}, JBDA~\citep{papernot2017practical}, DaST~\citep{zhou2020dast} and ActiveThief~\citep{pal2020activethief}) used this substitute model to generate adversarial samples and then performed the transferable adversarial attack on the victim model.
We argue that a more similar substitute model leads to more successful adversarial attacks.
We test the transferability of adversarial samples on the test set of the CIFAR10 dataset.
Keeping the architecture of the victim model as the ResNet-34, we evaluate the attack success rate of adversarial samples generated from different substitute models (\emph{i.e.}, ResNet-34, ResNet-18, ResNet-50~\cite{he2016deep}, VGG-16~\cite{simonyan2014very}, DenseNet~\cite{huang2017densely}).
All adversarial samples are generated using Projected Gradient Descent (PGD) attack~\citep{madry2017towards} with maximum $L_\infty$-norm of perturbations as $8/255$. As shown in Tab.\,\ref{tab:adv}, the adversarial samples generated by our substitute models have stronger transferability in all substitute's architectures.
This again proves that our method is more practical in real-world scenarios.
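For reference, the $L_\infty$-bounded PGD loop can be sketched as follows (a numpy sketch with a toy linear-model gradient standing in for the substitute model; the step size and iteration count are illustrative choices):

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=8/255, alpha=2/255, steps=10):
    """L_inf-bounded PGD: step along the sign of the loss gradient,
    then project back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                     # loss gradient w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

# Toy stand-in for the substitute model: a linear margin loss whose
# input gradient is -y * w (purely illustrative).
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_attack(x, y=1, grad_fn=lambda x_, y_: -y_ * w)
```

In practice the gradient would come from backpropagation through the substitute model, and the resulting $x_{\mathrm{adv}}$ would be transferred to the black-box victim.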
\section{Method}
The overview of our proposed black-box dissector is shown in Fig.\,\ref{fig:overview}.
In addition to the conventional process (\emph{i.e.}, the transfer dataset $D_T$ constructing in step 1 and the substitute model training in the right),
we introduce two key modules: a CAM-driven erasing strategy (step 2.1) and a RE-based self-KD module (step 2.2).
\begin{wrapfigure}[20]{R}{0.6\textwidth}
\centering
\vspace{-19pt}
\includegraphics[width=0.6\textwidth]{imgs/anna.pdf}
\caption{An example from the ILSVRC-2012 dataset and its attention maps corresponding to the two most likely classes, ``Anna hummingbird" and ``Common yellowthroat", on the CUBS200-trained model.
The attention areas share a similar visual appearance with images of ``Anna hummingbird" and ``Common yellowthroat", respectively.}
\label{fig2}
\end{wrapfigure}
\subsection{A CAM-driven erasing strategy}
Since the lack of class similarity information degrades the performance of previous methods under the hard-label setting, we try to re-dig out such hidden information.
Take an example from the ILSVRC-2012 dataset for illustration, as shown in Fig.\,\ref{fig2}.
Querying the CUBS200 trained victim model with this image, we get two classes with the highest confidence score: ``Anna hummingbird" (0.1364) and ``Common yellowthroat" (0.1165), and show their corresponding attention map in the first column of Fig.\,\ref{fig2}.
From the attention maps, it is easy to see that the two different attention regions are responsible for the two different classes.
When training the substitute model with the hard label ``Anna hummingbird" and without the class similarity information, the model cannot learn from the area related to the ``Common yellowthroat" class, which means this area is wasted.
To re-dig out the information about the ``Common yellowthroat" class, we need to erase the impact of the ``Anna hummingbird" class.
To this end, a natural idea is to erase the response area corresponding to the hard label.
Since the victim model is a black-box model, we use the substitute model to approximately calculate the attention map instead.
Even if the attention map calculated by the substitute model is inaccurate and the victim model's prediction on the erased image does not change,
we can still align the attention maps of the two models by letting the substitute model learn from the original image and the erased one simultaneously.
The attention map is also a kind of supervision signal pushing two models to be similar~\cite{zagoruyko2016paying}.
To get the attention map, we utilize the Grad-CAM~\cite{selvaraju2017grad} in this paper.
With the input image $x \in [0,1]^{d}$ and the trained DNN $\mathcal{F} \colon [0,1]^{d} \mapsto \mathbb{R}^{N}$, we let $\alpha_{k}^{c}$ denote the weight of class $c$ corresponding to the $k$-th feature map,
and calculate it as $\alpha_{k}^{c} = \frac{1}{Z}\sum_{i}\sum_{j} \frac{\partial{\mathcal{F}(x)}^{c}} {\partial{A}^{k}_{ij}}, $ where $Z$ is the number of pixels in the feature map, ${\mathcal{F}(x)}^{c}$ is the score of class $c$ and ${A}^{k}_{ij}$ is the value of pixel at $(i,j)$ in the $k$-th feature map. After obtaining the weights corresponding to all feature maps, the final attention map can be obtained as $S^{c}_{\mathrm{Grad-CAM}} = \mathrm{ReLU} (\sum_{k} \alpha_{k}^{c} {A}^{k})$ via weighted summation.
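As a minimal illustration, the two steps above (spatial pooling of the gradients to obtain $\alpha_k^c$, then a ReLU-ed weighted sum) can be sketched in numpy, assuming the feature maps $A^k$ and their gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM map from one convolutional layer.

    feature_maps: (K, H, W) activations A^k
    gradients:    (K, H, W) gradients dF(x)^c / dA^k for target class c
    """
    # alpha_k^c: average the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))               # shape (K,)
    # weighted sum of the feature maps, followed by ReLU
    cam = np.einsum('k,khw->hw', weights, feature_maps)
    return np.maximum(cam, 0.0)

# Tiny example: the map with a positive pooled gradient dominates
A = np.stack([np.eye(4), np.ones((4, 4))])              # two 4x4 feature maps
dA = np.stack([np.ones((4, 4)), -0.5 * np.ones((4, 4))])
cam = grad_cam(A, dA)
```

Here the ReLU suppresses the regions that vote against the target class, so only positively contributing areas survive in the attention map.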
To erase the corresponding area, inspired by~\cite{zhong2020random}, we define a prior-driven erasing operation $\psi(I, P)$, shown in Alg.\,\ref{alg2}, which erases a rectangular region of the image $I$ with random values, where the center of the region is sampled according to the prior probability $P$.
The prior probability $P$ is of the same size as the input image and is used to determine the probability of different pixels being erased.
Here, we use the attention map from Grad-CAM as the prior.
Let $x \in [0,1]^{d}$ denote the input image from the transfer set and $S^{\mathop{\arg \max}\hat{f}(x)}_{\mathrm{Grad-CAM}}(x, \hat{f})$ denote the attention map of the substitute model $\hat{f}$.
This CAM-driven erasing operation can be represented as:
\begin{equation}
\psi\left(x, S^{\mathop{\arg\max} \hat{f}(x)}_{\mathrm{Grad-CAM}}(x, \hat{f})\right).
\end{equation}
We abbreviate it as $\psi(x, S(x,\hat{f}))$.
To alleviate the impact of inaccurate CAM caused by the difference between the substitute model and the victim one,
for each image, we perform this operation $N$ times ($\psi_i$ denotes the $i$-th erasing) and select the variant with the largest difference from the original label.
Such a data augmentation operation makes the erasing process more robust.
We use the cross-entropy between the new label and the original label to measure this difference, and we select the sample for which it is largest.
Formally, we define $\Pi(x)$ as the function to select the most different variation of image $x$:
\begin{equation}
\label{eq9}
\begin{aligned}
& \Pi(x) := \psi_{k}(x,S(x,\hat{f})), \\
\text{where } k := & \mathop{\arg\max}_{i \in [N]} - \sum_{j} {\phi\left(f\left(x\right)\right)}_{j} \cdot \log \left({\hat{f}\big(\psi_{i}(x,S(x,\hat{f}))\big)}_{j}\right) \\
= & \mathop{\arg\max}_{i \in [N]} - \log \left({\hat{f}\big(\psi_{i}(x, S(x,\hat{f}))\big)}_{\mathop{\arg\max} \phi\big(f(x)\big)} \right) \\
= & \mathop{\arg\min}_{i \in [N]} {\hat{f}\left(\psi_{i}(x, S(x,\hat{f}))\right)}_{\mathop{\arg\max} {\phi\big(f(x)\big)}} .
\end{aligned}
\end{equation}
Due to the limited number of queries, we cannot query the victim model for every erased image to obtain a new label.
Instead, we repeatedly choose the erased images with the highest substitute confidence until the budget is reached.
To measure the confidence of the model, we adopt the Maximum Softmax Probability (MSP) for its simplicity:
\begin{equation}
\label{eq10}
\begin{aligned}
& \mathop{\arg\max}_{x\sim{\mathcal{D}_{T}}} MSP\left(\hat{f}\left(\Pi\left(x\right)\right)\right) \\
= & \mathop{\arg\max}_{x\sim{\mathcal{D}_{T}}} {\hat{f}\left(\Pi\left(x\right)\right)}_{\mathop{\arg\max} {\hat{f}\left(\Pi\left(x\right)\right)}}, \\
\end{aligned}
\end{equation}
where $\mathcal{D}_{T}$ is the transfer set. The erased images selected in this way are the most likely to change the prediction class. Then, we query the victim model for the labels of these erased images and construct an erased sample set $D_E$.
Note that when the victim model's prediction on an erased image changes, our erasing method has indeed dug out information about other related classes in the sample.
When the prediction remains unchanged, it indicates that the attention of the substitute model and that of the victim are inconsistent.
Although the erased areas are then incorrect, training with these samples helps align the attention of the two models.
As stated in~\cite{zagoruyko2016paying}, attention alignment enables more effective KD.
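The selection in Eq.\,\ref{eq9} and the MSP ranking of Eq.\,\ref{eq10} can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the names \texttt{erase\_fn} and \texttt{substitute} stand for the CAM-driven erasing $\psi(\cdot, S(\cdot,\hat{f}))$ and the substitute model's softmax output, and are our own placeholders.

```python
import numpy as np

def select_most_different(x, victim_label, erase_fn, substitute, n_trials=4):
    # Erase x n_trials times and keep the variant on which the substitute
    # assigns the lowest probability to the victim's original label --
    # the argmin form in the last line of Eq. (9).
    variants = [erase_fn(x) for _ in range(n_trials)]
    probs = [substitute(v)[victim_label] for v in variants]
    return variants[int(np.argmin(probs))]

def msp(p):
    # Maximum softmax probability, used in Eq. (10) to rank which erased
    # images are worth spending query budget on.
    return float(np.max(p))
```

Samples with low confidence on the original class (low \texttt{probs}) but high MSP on some other class are exactly those most likely to flip the victim's prediction.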
\begin{algorithm}[!t]
\caption{Prior-driven Erasing Operation $\psi(I, P)$}
\label{alg2}
\LinesNumbered
\KwIn{Input image $I$, prior probability $P$, image size $W$ and $H$, area of image $S$, erasing area ratio range $s_l$ and $s_h$, erasing aspect ratio range $r_1$ and $r_2$.}
\KwOut{Erased image ${I}^{\prime}$.}
$S_e \leftarrow \mathrm{Rand}(s_l, s_h) \times S$,
$ r_e \leftarrow \mathrm{Rand}(r_1, r_2)$\footnotemark{} \\
$H_e \leftarrow \sqrt{S_e \times r_e}/2, W_e \leftarrow \sqrt{\frac{S_e}{r_e}}/2$ \\
$x_e, y_e$ sampled randomly according to $P$ \\
$I_e \leftarrow (x_e - W_e, y_e - H_e, x_e + W_e, y_e + H_e)$ \\
$I(I_e) \leftarrow \mathrm{Rand}(0, 255)$ \\
${I}^{\prime} \leftarrow I$ \\
\end{algorithm}
\footnotetext{$\mathrm{Rand}(a,b)$ returns an evenly distributed random real number in the range of $a$ to $b$.}
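A minimal NumPy sketch of the prior-driven erasing operation $\psi(I, P)$ of Alg.\,\ref{alg2}: the rectangle centre is sampled according to the prior map and the region is filled with random values. The hyper-parameter defaults below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def prior_erase(image, prior, s_l=0.02, s_h=0.4, r1=0.3, r2=3.3, rng=None):
    # Prior-driven erasing: sample area ratio and aspect ratio uniformly,
    # sample the rectangle centre (x_e, y_e) from the prior P, then fill
    # the rectangle with random noise, as in Alg. 2.
    rng = np.random.default_rng() if rng is None else rng
    H, W = image.shape[:2]
    S_e = rng.uniform(s_l, s_h) * H * W        # erased area S_e
    r_e = rng.uniform(r1, r2)                  # aspect ratio r_e
    H_e = int(np.sqrt(S_e * r_e) / 2)          # half-height
    W_e = int(np.sqrt(S_e / r_e) / 2)          # half-width
    p = prior.ravel() / prior.sum()            # centre ~ prior probability
    idx = rng.choice(H * W, p=p)
    y_e, x_e = divmod(idx, W)
    y0, y1 = max(0, y_e - H_e), min(H, y_e + H_e)
    x0, x1 = max(0, x_e - W_e), min(W, x_e + W_e)
    out = image.copy()
    out[y0:y1, x0:x1] = rng.uniform(0, 255, size=out[y0:y1, x0:x1].shape)
    return out
```

With a uniform prior this reduces to ordinary random erasing (as used in step 2.2); with a Grad-CAM prior it preferentially erases the attended region.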
\begin{algorithm}[t]
\caption{Black-box Dissector}
\label{alg1}
\LinesNumbered
\KwIn{Unlabeled pool $D_U$, victim model $f$, maximum number of queries $Q$.}
\KwOut{Substitute model $\hat{f}$.}
Initialize $q \leftarrow 0, D_T \leftarrow \varnothing, D_E \leftarrow \varnothing$ \\
\While{$q<Q$}{
\textbf{// Step 1} \\
Select samples from $D_U$ according to budget and query $f$ to update $D_T$ \\
$q = q + \text{budget}$ \\
$\mathcal{L}=\sum_{x \in D_T} \mathcal{L}^{\prime}\big(\phi(f(x)),\hat{f}(x)\big)$ \\
$\hat{f} \leftarrow update(\hat{f}, \mathcal{L})$ \\
\textbf{// A CAM-driven erasing strategy (step 2.1)} \\
Erase samples in $D_T$ according to Eq.\,\ref{eq9} \\
Choose samples from erased samples according to Eq.\,\ref{eq10} and budget \\
Query $f$ to get labels and update $D_E$ \\
$\mathcal{L}=\sum_{x \in D_T \cup D_E} \mathcal{L}^{\prime}\big(\phi(f(x)),\hat{f}(x)\big)$ \\
$\hat{f} \leftarrow update(\hat{f}, \mathcal{L})$ \\
$q = q + \text{budget}$ \\
\textbf{// A random-erasing-based self-KD (step 2.2)} \\
Select samples from $D_U$ \\
Get pseudo-labels according to Eq.\,\ref{eq11} and construct a pseudo-label set $D_P$ \\
$\mathcal{L}= \sum_{x \in D_T \cup D_E} \mathcal{L}^{\prime}\big(\phi(f(x)),\hat{f}(x)\big) + \sum_{x \in D_P} \mathcal{L}^{\prime}\big(y_p(x,\hat{f}) , \hat{f}(x)\big)$ \\
$\hat{f} \leftarrow update(\hat{f}, \mathcal{L})$ \\
}
\end{algorithm}
\subsection{A random-erasing-based self-KD module}
We also find that in training with limited hard-label OOD samples, the substitute model is likely to overfit the training set, which damages its generalization ability~\cite{kim2020self,zhang2016understanding}.
Therefore, based on the above erasing operation, we further design a simple RE-based self-KD method to improve the generalization ability of the substitute model.
Formally, let $x \in [0,1]^{d}$ denote the unlabeled input image. We perform the erasing operation with a uniform prior $U$ on it $N$ times, and then average the substitute's outputs on these erased images as the pseudo-label of the original image:
\begin{equation}
\label{eq11}
y_{p}(x,\hat{f}) = \frac{1}{N} \sum_{i=1}^N \hat{f}\big(\psi_{i}(x, U)\big).
\end{equation}
This is a type of consistency regularization, which enforces the model to have the same predictions for the perturbed images and enhances the generalization ability.
With Eq.\,\ref{eq11}, we construct a new soft pseudo-label set $D_P=\{\big(x, y_p(x,\hat{f})\big),\dots\}$.
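Eq.\,\ref{eq11} can be sketched in a few lines of NumPy. The names \texttt{substitute} and \texttt{random\_erase} are our placeholders for the substitute model's softmax output and the uniform-prior erasing $\psi(\cdot, U)$:

```python
import numpy as np

def pseudo_label(x, substitute, random_erase, n=4):
    # Eq. (11): average the substitute's outputs over n uniformly erased
    # copies of the unlabeled image x, giving a soft pseudo-label.
    return np.mean([substitute(random_erase(x)) for _ in range(n)], axis=0)
```

Averaging over perturbed copies yields a softened target, which acts as the consistency regularizer described above.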
With the transfer set $D_T$, the erased sample set $D_E$, and the pseudo-label set $D_P$, we train a new substitute model using the ensemble of the victim model and the previous substitute model as the teacher.
Our final objective function is:
\begin{equation}
\min \mathcal{L}= \min \big[ \sum_{x \in D_T \cup D_E} \mathcal{L}^{\prime}\big(\phi(f(x)),\hat{f}(x)\big) + \sum_{x \in D_P} \mathcal{L}^{\prime}\big(y_p(x,\hat{f}) , \hat{f}(x)\big) \big],
\end{equation}
where $\mathcal{L}^{\prime}$ can be any commonly used loss function, \emph{e.g.}, the cross-entropy loss.
To sum up, we build our method on the conventional model stealing pipeline (step 1) and propose a CAM-driven erasing strategy (step 2.1) and an RE-based self-KD module (step 2.2), unified by a novel erasing method.
The former digs out missing inter-class information and aligns attention, while the latter mitigates overfitting and enhances generalization.
We name the whole framework \textit{black-box dissector} and present its algorithmic details in Alg.\,\ref{alg1}.
\section{Conclusion}
We investigated the problem of model stealing attacks under the hard-label setting and pointed out why previous methods are not effective enough.
We presented a new method, termed \emph{black-box dissector}, which contains a CAM-driven erasing strategy and a RE-based self-KD module.
We showed its superiority on four widely-used datasets and verified the effectiveness of our method with defense methods, real-world APIs, and the downstream adversarial attack.
Although this paper focuses on image data, our method generalizes to other tasks as long as CAM and a similar erasing method are applicable,
\emph{e.g.}, saliency-based synonym replacement for NLP tasks~\cite{dong2021towards}.
We believe our method can be easily extended to other fields and inspire future researchers.
Model stealing attacks pose a threat to deployed machine learning models.
We hope this work draws attention to the protection of deployed models and, furthermore, sheds more light on attack mechanisms and prevention methods.
\section{Background and Notions}
\textbf{Model stealing attack} aims to find a substitute model $\hat{f} \colon [0,1]^d \mapsto \mathbb{R}^{N}$ that performs as similarly as possible to the black-box victim model $f \colon [0,1]^{d} \mapsto \mathbb{R}^{N}$ (with only outputs accessible).
\citet{papernot2017practical} first observed that online models could be stolen through multiple queries.
After that, due to the practical threat to real-world APIs, several studies paid attention to this problem and proposed many attack algorithms.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{imgs/overview.pdf}
\caption{Details of our proposed black-box dissector with a CAM-driven erasing strategy (step 2.1) and a RE-based self-KD module (step 2.2).
In step 2.1, the images in transfer set $D_T$ are erased according to the Grad-CAM, and we selected the erased images with the largest difference from the original images according to the substitute model's outputs.
In step 2.2, we randomly erase the unlabeled image $N$ times, and then average the outputs of the $N$ erased images by the substitute model as the pseudo-label.}
\label{fig:overview}
\end{figure}
These algorithms consist of two stages: 1) constructing a transfer dataset $D_T$ (step 1 in Fig.\,\ref{fig:overview}) and 2) training a substitute model.
The transfer dataset is constructed by data synthesis or data selection
and then fed into the victim model for labels.
Methods based on data synthesis~\citep{zhou2020dast,kariyappa2020maze,NEURIPS2020_e8d66338} adopt the GAN-based models to generate a virtual dataset.
The substitute model and the GAN are trained alternately on this virtual dataset by querying the victim model iteratively.
The data selection methods prepare an attack dataset as the data pool, and then sample the most informative data via machine learning algorithms, \emph{e.g.}, reinforcement learning~\citep{orekondy19knockoff} or active learning strategy~\citep{pal2020activethief}, uncertainty-based strategy~\citep{lewis1994sequential}, k-Center strategy~\citep{sener2018active}, and DFAL strategy~\citep{ducoffe2018adversarial}.
Considering that querying the victim model will be costly,
the attacker usually sets a budget on the number of the queries, so the size of the transfer dataset should be limited as well.
Previous methods assume the victim model returns the complete probability prediction $f(x)$, which is less practical.
In this paper, we focus on the more practical hard-label setting $\phi(f(x))$, where $\phi$ is the truncation function that discards the information contained in the victim's output and returns the corresponding one-hot vector:
\begin{equation}
{\phi(f(x))}_{i}:=
\begin{cases}
1 & \text{if } i = \mathop{\arg\max}_{n}{f(x)}_{n} \,; \\
0 & \text{otherwise} \,. \\
\end{cases}
\end{equation}
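The truncation $\phi$ is straightforward to realize in code; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def truncate(p):
    # phi: map the victim's probability vector to the one-hot vector of
    # its argmax class, discarding all inter-class similarity information.
    onehot = np.zeros_like(p)
    onehot[np.argmax(p)] = 1.0
    return onehot
```

Everything except the top class is zeroed out, which is exactly the information loss the hard-label setting imposes.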
With the transfer dataset, the substitute model is optimized by minimizing a loss function $\mathcal{L}$ (\emph{e.g.,} cross-entropy loss function):
\begin{equation}
\begin{cases}
\mathbb{E}_{x \sim{\mathcal{D}_{T}}}\big[\mathcal{L}\big(f(x),\hat{f}(x)\big)\big], &\text{for soft labels}; \\
\mathbb{E}_{x \sim{\mathcal{D}_{T}}}\big[\mathcal{L}\big(\phi(f(x)),\hat{f}(x)\big)\big], &\text{for hard labels}. \\
\end{cases}
\end{equation}
\textbf{Knowledge distillation} (KD) has been widely studied in machine learning~\citep{hinton2015distilling,anil2018large,furlanello2018born}, which transfers the knowledge from a teacher model to a student model.
Model stealing attacks can be regarded as a black-box KD problem where the victim model is the \textit{teacher} with only outputs accessible and the substitute model is the \textit{student}.
The main reason for the success of KD is the \textit{valuable information that defines a rich similarity structure over the data} in the probability prediction~\citep{hinton2015distilling}.
However, for the hard-label setting discussed in this paper, this valuable information is lost.
Inspired by KD, our method tries to dig out the hidden information in the data and models, and then transfers more knowledge to the substitute model.
\textbf{The erasing-based method}, \emph{e.g.}, random erasing (RE)~\cite{zhong2020random,devries2017cutout}, is currently one of the most widely used data augmentation methods; it generates training images with various levels of occlusion, thereby reducing the risk of overfitting and improving the robustness of the model. Inspired by RE, we design a prior-driven erasing operation that erases the area corresponding to the hard label in order to re-mine the missing information.
Integrable systems are widely investigated in $(1+1)$ dimensions,
where one of the dimensions stands for the time evolution variable
and the other one stands for the space variable. The space
variable is usually considered on continuous intervals, or both on
integer values and on $\mathbb{R}$ \cite{MB} or on $\mathbb{K}_q$
intervals \cite{frenkel,Adler}. In order to embed the study of
integrable systems into a more general unifying framework, one of
the possible approaches is to construct the integrable systems on
time scales. Here the space variable is considered on any time
scale where $\mathbb{R}$, $\hslash\mathbb{Z}$, $\mathbb{K}_q$ are special
cases. The first step in this direction was taken in \cite{G-G-S}, where the Gelfand-Dickey approach
\cite{gelfand,bookdickey} was extended in order to construct
integrable nonlinear evolutionary equations on any time scale.
Another unifying approach is to formulate different types of
discrete dynamics on $\mathbb{R}$. Some contribution in this
direction was made recently in \cite{bgss}.
The main goal of this work is to present a theory for the systematic
construction of $(1+1)$-dimensional integrable systems on time
scales in the frame of the $R$-matrix formalism. By an integrable
system, we mean such a system which has an infinite-hierarchy of
mutually commuting symmetries. The $R$-matrix formalism is one of the
most effective and systematic methods of constructing integrable
systems \cite{reyman,semenov}. This formalism originated from
the pioneering article \cite{gelfand} by Gelfand and Dickey,
who constructed the soliton systems of KdV type. The crucial point
of the $R$-matrix formalism is that the construction of integrable
systems proceeds from the Lax equations on appropriate Lie
algebras \cite{reyman,semenov}. The simplest $R$-matrices can be
constructed by a decomposition of a given Lie algebra into two
Lie subalgebras. We refer to \cite{semenov,bookdickey,MB} for
abstract formalism of classical $R$-matrices on Lie algebras.
This paper is organized as follows: In the next section, we give a
brief review of the time scale calculus. In the third section, we
define the $\delta$-differentiation operator and formulate the Leibniz
rule for this operator. We introduce the Lie algebra as an algebra
of $\delta$-differential operators equipped with the commutator,
decompose it into two Lie subalgebras and construct the simplest
$R$-matrix on this algebra. We present the appropriate Lax
operators for infinite-field cases and the admissible finite-field
restrictions generating consistent Lax hierarchies. In the $\mathbb{T}=\mathbb{R}$
case, or in the continuous limit of some special time scales, we
observe that the algebra of $\delta$-differential operators turns
out to be the algebra of pseudo-differential operators. Next, we
formulate and prove a key property of the algebra of $\delta$-differential
operators. This property allows us to obtain natural constraints
which are fulfilled by finite field restrictions. Therefore, the
source of the constraints, obtained in the Burgers equations and KdV
hierarchy on time scales in \cite{G-G-S}, is established. We end
up this section with the construction of the recursion operators
by means of the method presented in \cite{Gurses}. In the fourth
section, we illustrate two infinite-field integrable hierarchies
on time scales which are difference counterparts of
Kadomtsev-Petviashvili (KP) and modified Kadomtsev-Petviashvili
(mKP) hierarchies. In the last section, we present finite-field
restrictions which are difference counterparts of
Ablowitz-Kaup-Newell-Segur (AKNS) and Kaup-Broer (KB) hierarchies
with their recursion operators.
\section{Preliminaries}
In this section, we give a brief introduction to the concept of
time scale. We refer to \cite{boh1,boh2} for the basic definitions
and general theory of time scale. What we mean by a time scale
$\mathbb{T}$, is an arbitrary nonempty closed subset of real numbers.
The time scale calculus was introduced by Aulbach and Hilger
\cite{ah,hil} in order to unify all possible intervals on the real
line $\mathbb{R}$, like continuous (whole) $\mathbb{R}$, discrete
$\mathbb{Z}$, and $q$-discrete $\mathbb{K}_q$ $({\mathbb
K}_{q}=\,q^{\mathbb Z}\cup \{0\} \equiv \{ q^{k}: k \in {\mathbb
Z}\} \cup \{0\}$, where $q\neq 1$ is a fixed real number)
intervals. For the definition of the derivative in time scales, we use
\textit{forward} and \textit{backward jump operators} which are
defined as follows.
\begin{definition}
For $x \in \mathbb{T}$, the forward jump operator $\sigma: {\mathbb
T}\rightarrow {\mathbb T}$ is defined by
\begin{equation}
\sigma(x)=\inf\, \{ y \in {\mathbb T}: y> x\},
\end{equation}
while the backward jump operator $\rho: {\mathbb T} \rightarrow
{\mathbb T}$ is defined by
\begin{equation}
\rho(x)=\sup\, \{ y \in {\mathbb T}: y < x\}.
\end{equation}
We set in addition $\sigma(\max\mathbb{T}) = \max \mathbb{T}$ if there
exists a finite $\max\mathbb{T}$, and $\rho(\min\mathbb{T}) = \min \mathbb{T}$
if there exists a finite $\min\mathbb{T}$.
The jump operators $\sigma$ and $\rho$ allow the classification
of points in a time scale in the following way: $x$ is called right
dense, right scattered, left dense, left scattered, dense and
isolated if $\sigma(x)=x, \ \sigma(x)>x, \ \rho(x)=x, \ \rho(x)<x,
\ \sigma(x)=\rho(x)=x$ and $\rho(x)<x<\sigma(x)$, respectively.
Moreover, we define the graininess functions $\mu,\, \nu :
{\mathbb T} \rightarrow [0,\infty)$ as follows
\begin{equation}
\mu(x)=\sigma (x)-x, \quad\nu(x)=x-\rho(x),\quad \mbox{for all}~ x
\in {\mathbb T}.
\end{equation}\end{definition}
In literature, ${\mathbb T}^{\kappa}$ denotes a set
consisting of ${\mathbb T}$ except for a possible left-scattered
maximal point while ${\mathbb T}_{\kappa}$ stands for a set of
points of ${\mathbb T}$ except for a possible right-scattered
minimal point.
\begin{definition}\label{derivative}
Let $f:\mathbb{T} \to \mathbb{R}$ be a function on a time scale
$\mathbb{T}$. For $x\in\mathbb{T}^\kappa$, delta derivative of $f$,
denoted by $\Delta f$, is defined as
\begin{equation}\label{del}
\Delta f(x) = \lim_{s\to x} \frac{f(\sigma (x))-f(s)}{\sigma
(x)-s},\qquad s\in\mathbb{T},
\end{equation}
while for $x\in \mathbb{T}_\kappa$, $\nabla$-derivative of $f$,
denoted by $\nabla f$, is defined as
\begin{equation}\label{nab}
\nabla f(x) = \lim_{s\to x} \frac{f(s)-f(\rho(x))}{s-\rho
(x)},\qquad s\in\mathbb{T},
\end{equation}
provided that the limits exist. A function $f:{\mathbb T}
\rightarrow {\mathbb R}$ is said to be $\Delta$-smooth
($\nabla$-smooth) if it is infinitely $\Delta$-differentiable
($\nabla$-differentiable).
\end{definition}
\begin{remark}Let $f:\mathbb{T}\to\mathbb{R}$ be $\Delta$-differentiable on
$\mathbb{T}^\kappa$. If $x$ is right-scattered, then the
definition \eqref{del} turns out to be
\begin{equation*} \Delta f(x)= \frac{f(\sigma(x))-f(x)}{\mu(x)},
\end{equation*}
while if $x$ is right-dense, \eqref{del} implies that
\begin{equation*}
\Delta f(x)=\lim_{s \rightarrow x}\, \frac{f(x)-f(s)}{x-s},\qquad
s\in\mathbb{T}.
\end{equation*}
Similarly, let $f:\mathbb{T}\to\mathbb{R}$ be
$\nabla$-differentiable on $\mathbb{T}_\kappa$. If $x$ is
left-scattered, then the definition \eqref{nab} turns out to be
\begin{equation*}
\nabla f(x)= \frac{f(x)-f(\rho(x))}{\nu(x)},
\end{equation*} while if $x$ is left-dense, \eqref{nab} yields as
\begin{equation*}
\nabla f(x)=\lim_{s \rightarrow x}\, \frac{f(x)-f(s)}{x-s},\qquad
s\in\mathbb{T}.
\end{equation*}
\end{remark}
In order to be more precise, we present $\Delta$ and $\nabla$
derivatives for some special time scales. If ${\mathbb T}={\mathbb
R}$, then $\Delta$- and $\nabla$-derivatives become ordinary
derivatives, i.e.
\begin{equation*}
\Delta f(x) = \nabla f(x) = \frac{df(x)}{dx}.
\end{equation*}
If ${\mathbb T}=\hslash\mathbb{Z}$, then
\begin{equation*}
\Delta f(x)= \frac{f(x+\hslash)-f(x)}{\hslash}~~~~ \mbox{and}~~~~\nabla
f(x)= \frac{f(x)-f(x-\hslash)}{\hslash}.
\end{equation*}
If ${\mathbb T}={\mathbb K}_{q}$, then
\begin{equation*}
\Delta f(x)= \frac{f(qx)-f(x)}{(q-1)x} ~~~\mbox{and}~~~\nabla
f(x)= \frac{f(x)-f(q^{-1}\,x)}{(1-q^{-1})x},
\end{equation*}
for all $x \ne 0$, and
\begin{equation*}
\Delta f(0)=\nabla f(0)=\lim_{s \rightarrow 0} \frac{f(s)-f(0)}{s},\quad s\in\mathbb{K}_q,
\end{equation*}
provided that this limit exists.
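The two special cases above are easy to check numerically; a short Python sketch (our own illustration, with $f(x)=x^2$ used below as a test function):

```python
def delta_hz(f, x, h):
    # Delta derivative on T = h*Z: the forward difference with step h.
    return (f(x + h) - f(x)) / h

def delta_q(f, x, q):
    # Delta derivative on T = K_q at x != 0: the q-difference quotient.
    return (f(q * x) - f(x)) / ((q - 1) * x)
```

For $f(x)=x^2$ these give $\Delta f(x) = 2x+h$ and $\Delta f(x)=(q+1)x$ respectively, both of which tend to the ordinary derivative $2x$ as $h\to 0$ or $q\to 1$, matching the continuous limits \eqref{c1} and \eqref{c2} below.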
As an important property of $\Delta$-differentiation on $\mathbb
T$, we give the product rule. If $f,g:\mathbb{T} \to \mathbb{R}$
are $\Delta$-differentiable functions at $x\in \mathbb
T^{\kappa}$, then their product is also $\Delta$-differentiable
and the following Leibniz-like rule holds
\begin{equation}\label{leib}
\begin{split}
\Delta(f g)(x) &= g(x)\Delta f(x) + f(\sigma(x))\Delta g(x)\\
&= f(x)\Delta g(x) + g(\sigma(x))\Delta f(x).
\end{split}
\end{equation}
Besides, if $f$ is $\Delta$-smooth function, then
\begin{equation}\label{rel}
f(\sigma(x)) = f(x) + \mu(x)\Delta f(x).
\end{equation}
If $x\in\mathbb{T}$ is right-dense, then $\mu(x)=0$ and the relation
\eqref{rel} is trivial.
\begin{definition}
A time scale ${\mathbb T}$ is regular if both of the following two
conditions are satisfied:
\begin{itemize}
\item[(i)] $\sigma(\rho(x))=x$ for all $x\in\mathbb{T}$,
\item[(ii)] $\rho(\sigma(x))=x$ for all $x\in\mathbb{T}$.
\end{itemize}
\end{definition}
Set $x_*=\min\mathbb{T}$ if there exists a finite $\min\mathbb{T}$, and set
$x_* = -\infty$ otherwise. Also set $x^*=\max\mathbb{T}$ if there
exists a finite $\max\mathbb{T}$, and set $x^* = \infty$ otherwise.
\begin{proposition}\emph{\cite{G-G-S}}
A time scale is regular if and only if the following two
conditions hold:
\begin{itemize}
\item[(i)] the point $x_*=\min\mathbb{T}$ is right dense and the point $x^*=\max\mathbb{T}$
is left-dense;
\item[(ii)] each point of $\mathbb{T}\setminus \{x_*,x^*\}$ is either two-sided
dense or two-sided scattered.
\end{itemize}
\end{proposition}
In particular $\mathbb{R}, \hslash\mathbb{Z}$ ($\hslash\neq 0$) and
$\mathbb{K}_q$ are regular time scales, as are $[0,1]$ and $[-1,0]
\cup \{1/k:k\in\mathbb{N}\} \cup \{k/(k+1):k\in\mathbb{N}\} \cup
[1,2]$.
Throughout this work, let $\mathbb{T}$ be a regular time scale. By
$\Delta$, we denote the delta-differentiation operator which
assigns each $\Delta$-differentiable function
$f:\mathbb{T}\to\mathbb{R}$ to its delta-derivative $\Delta(f)$,
defined by
\begin{equation}
[\Delta(f)](x)=\Delta f(x), \quad\mbox{for}\quad x \in {\mathbb
T}^{\kappa}.
\end{equation}
The {\it shift operator} $E$ is defined by the formula
\begin{equation}
(Ef)(x)=f(\sigma(x)),\qquad x \in {\mathbb T}.
\end{equation}
The inverse $E^{-1}$ is defined by
\begin{equation}
(E^{-1}\,f)(x)=f(\sigma^{-1}(x))=f(\rho(x)),
\end{equation}
for all $x\in\mathbb{T}$. Note that $E^{-1}$ exists only in the case of
regular time scales and that in general $E$ and $E^{-1}$ do not
commute with $\Delta$ and $\nabla$ operators.
\begin{proposition}\emph{\cite{aticiguseinov}}
Let $\mathbb T$ be a regular time scale.
\begin{itemize}
\item[(i)] If $f: {\mathbb T} \rightarrow {\mathbb R}$ is a
$\Delta$-smooth function on $\mathbb T^{\kappa}$, then $f$ is
$\nabla$-smooth and for all $x\in \mathbb{T}_{\kappa}$,
\begin{equation}\label{nabladelta}
\nabla f(x)= E^{-1}\Delta f(x).
\end{equation}
\item[(ii)] If $f: {\mathbb T} \rightarrow {\mathbb R}$ is a
$\nabla$-smooth function on $\mathbb T_{\kappa}$, then $f$ is
$\Delta$-smooth and for all $x\in \mathbb{T}^{\kappa}$,
\begin{equation}\label{deltanabla}
\Delta f(x)= E\nabla f(x).
\end{equation}
\end{itemize}
\end{proposition}
Thus the properties of $\Delta$- and $\nabla$-smoothness for
functions on regular time scales are equivalent.
In some special cases, by properly introducing the deformation
parameter, it is possible to consider a continuous limit of a time
scale. For instance, the continuous limit of $\hslash\mathbb{Z}$ is the whole
real line $\mathbb{R}$, i.e.
\begin{equation}\label{c1}
\begin{CD}
\mathbb{T} = \hslash\mathbb{Z} @>\hslash\rightarrow 0>> \mathbb{T}=\mathbb{R};
\end{CD}
\end{equation}
and the continuous limit of ${\mathbb K}_q$ is the closed half line
$\mathbb{R}_{+}\cup\{0\}$, thus
\begin{equation}\label{c2}
\begin{CD}
\mathbb{T} = {\mathbb K}_q @>q\rightarrow 1>> \mathbb{T}=\mathbb{R}_{+}\cup\{0\}.
\end{CD}
\end{equation}
For more about the calculus on time scales we refer the readers to
\cite{boh1,boh2}.
\section{Algebra of $\delta$-differential operators}
\subsection{Basic notions}
In this section, we deal with the algebra of $\delta$-differential
operators defined on a regular time scale $\mathbb{T}$. We denote the
delta differentiation operator by $\delta$ instead of $\Delta$,
for convenience in the operational relations. The operator $\delta
f$ which is a composition of $\delta$ and $f$, where $f:
\mathbb{T}\rightarrow {\mathbb R}$, is introduced as follows
\begin{equation}\label{delta}
\delta f:=\Delta f+E(f) \delta, \quad \forall f.
\end{equation}
Note that the definition \eqref{delta} is consistent with the
Leibniz-like rule on time scales \eqref{leib}.
\begin{theorem}
The Leibniz rule on time scales for the operator $\delta$ is given
as follows.
\begin{itemize}
\item[(i)] For $n\geqslant 0$:
\begin{equation}
\delta^{n}
f=\sum_{k=0}^n\quad\sum_{i_1+i_2+...+i_{k+1}=n-k}(\Delta^{i_{k+1}}E\Delta^{i_{k}}E...\Delta^{i_{2}}E\Delta^{i_{1}})
f\delta^k,\label{leibnizpositive}
\end{equation}
where $i_\gamma \geqslant 0$ for all $\gamma=1,2,\ldots,k+1$. Here the
formula includes all possible strings containing $n-k$ times
$\Delta$ and $k$ times $E$.
\item[(ii)] For $n<0$:
\begin{equation}
\delta^{n}
f=\sum_{k=-n}^\infty\quad\sum_{i_1+i_2+...+i_{k+n+1}=k}(-1)^{k+n}(E^{-i_{k+n+1}}\Delta
E^{-i_{k+n}}\Delta...E^{-i_{2}}\Delta E^{-i_{1}})
f\delta^{-k},\label{leibniznegative}
\end{equation}
where $i_\gamma > 0$ for all $\gamma=1,2,\ldots,k+n+1$ (with $k+n+1>0$). Here the
formula includes all possible strings containing $k+n+1$ times $E$
and $k+n$ times $\Delta$.
\end{itemize}
\end{theorem}
The above theorem is a straightforward consequence of definition
\eqref{delta}. Note that $\delta^{-1} f$ has the form of the
formal series
\begin{equation}
\delta^{-1}f=\sum_{k=0}^{\infty}(-1)^{k}((E^{-1} \Delta)^k
E^{-1})f\delta^{-k-1},\label{leibniz2}
\end{equation}
which was previously given in \cite{G-G-S}, in terms of $\nabla$.
Thus \eqref{leibniznegative} is the appropriate generalization of
\eqref{leibniz2}.
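For concreteness, writing out the first few terms of \eqref{leibniz2} gives

```latex
\begin{equation*}
\delta^{-1} f = E^{-1}(f)\,\delta^{-1}
 - \bigl(E^{-1}\Delta E^{-1}\bigr)(f)\,\delta^{-2}
 + \bigl((E^{-1}\Delta)^{2} E^{-1}\bigr)(f)\,\delta^{-3} - \ldots.
\end{equation*}
```

On $\mathbb{T}=\mathbb{R}$, where $E^{-1}=\mathrm{id}$ and $\Delta = d/dx$, this reduces to the familiar pseudo-differential expansion $\partial^{-1}f = f\partial^{-1} - f_x\partial^{-2} + f_{xx}\partial^{-3} - \ldots$.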
\subsection{Classical $R$-matrix formalism}
In order to construct integrable hierarchies of mutually commuting
vector fields on time scales, we deal with a systematic method, so-called {\it the classical
$R$-matrix formalism} \cite{semenov,bookdickey,MB}, presented in the following
scheme.
Let $\mathcal{G}$ be an algebra, with some associative
multiplication operation, over a commutative field $\mathbb{K}$ of
complex or real numbers, based on an additional bilinear product
given by a Lie bracket $[\cdot,\cdot]:\mathcal{G}\times\mathcal{G}\to\mathcal{G}$,
which is skew-symmetric and satisfies the Jacobi identity.
\begin{definition}\label{Rmatrix}
A linear map $R:\mathcal{G}\to\mathcal{G}$ such that the bracket
\begin{equation}
[a,b]_R:=[Ra,b]+[a,Rb],\label{rmatrix}
\end{equation}
is a second Lie bracket on $\mathcal{G}$, is called the classical
$R$-matrix.
\end{definition}
Skew-symmetry of \eqref{rmatrix} is obvious. When one checks the
Jacobi identity of \eqref{rmatrix}, it can be clearly deduced that
a sufficient condition for $R$ to be a classical $R$-matrix is
\begin{equation}\label{yangbaxter}
[Ra,Rb]-R[a,b]_R+\alpha[a,b]=0,
\end{equation}
where $\alpha\in\mathbb{K}$, called the \textit{Yang-Baxter
equation} YB$(\alpha)$. There are only two relevant cases of
YB$(\alpha)$, namely $\alpha\neq 0$ and $\alpha=0$, as Yang-Baxter
equations for $\alpha\neq 0$ are equivalent and can be
reparametrized.
Additionally, assume that the Lie bracket is a derivation of
multiplication in $\mathcal{G}$, i.e. the relation
\begin{equation}\label{Rmatrixleibniz}
[a,bc]=b[a,c]+[a,b]c\qquad a,b,c\in\mathcal{G}
\end{equation}
holds. If the Lie bracket is given by the commutator, i.e. $[a,b]=
ab-ba$, the condition \eqref{Rmatrixleibniz} is satisfied
automatically, since $\mathcal{G}$ is associative.
\begin{proposition}\label{proposition}
Let $\mathcal{G}$ be a Lie algebra fulfilling all the above assumptions and
$R$ be the classical $R$-matrix satisfying the Yang-Baxter
equation, YB$(\alpha)$. Then the power functions $L^n$ on $\mathcal{G}$,
$L\in\mathcal{G}$ and $n\in\mathbb{Z}_+$, generate the so-called Lax
hierarchy
\begin{equation}\label{vectorfield}
\frac{dL}{dt_n} = \brac{R(L^n),L},
\end{equation}
of pairwise commuting vector fields on $\mathcal{G}$. Here, $t_n$'s are
related evolution parameters. We additionally assume that $R$
commutes with derivatives with respect to these evolution
parameters.
\end{proposition}
\begin{proof}
It is clear that the power functions on $\mathcal{G}$ are well defined.
Then
\begin{align*}
(L_{t_m})_{t_n}-(L_{t_n})_{t_m} &= [RL^m,L]_{t_n}-[RL^n,L]_{t_m}\\
&=[(RL^m)_{t_n}-(RL^n)_{t_m},L]+[RL^m,[RL^n,L]]-[RL^n,[RL^m,L]]\\
&=[(RL^m)_{t_n}-(RL^n)_{t_m}+[RL^m,RL^n],L].
\end{align*}
Hence, the vector fields \eqref{vectorfield} mutually commute if
the so-called {\it zero-curvature} (or Zakharov-Shabat) {\it
equations}
\begin{equation*
(RL^m)_{t_n}-(RL^n)_{t_m}+[RL^m,RL^n]=0,
\end{equation*}
are satisfied. From \eqref{vectorfield} and by the Leibniz rule
\eqref{Rmatrixleibniz} we have that $(L^m)_{t_n} = [RL^n,L^m]$.
Using Yang-Baxter equation for $R$ and the fact that $R$ commutes
with $\partial_{t_n}$, we deduce
\begin{align*}
R(L^m)_{t_n} &- R(L^n)_{t_m} + [RL^m,RL^n] =\\
&= R[RL^n, L^m] - R[RL^m,L^n] + [RL^m,RL^n]\\
&= [RL^m,RL^n]-R[L^m,L^n]_R = - \alpha [L^m,L^n] = 0.
\end{align*}
Hence, the vector fields pairwise commute.
\end{proof}
In practice the powers of Lax operators in \eqref{vectorfield} are
fractional. Notice that, the Yang-Baxter equation is a sufficient
condition for mutual commutation of vector fields
\eqref{vectorfield}, but not necessary. Thus choosing an algebra
$\mathcal{G}$ properly, the Lax hierarchy yields abstract integrable
systems. In practice, the element $L$ of $\mathcal{G}$ must be
appropriately chosen, in such a way that the evolution systems
\eqref{vectorfield} are consistent on the subspace of $\mathcal{G}$.
\subsection{Classical $R$-matrix on time-scales}
We introduce the algebra $\mathcal{G}$ as an algebra of formal
Laurent series of (pseudo-) $\delta$-differential operators
equipped with the commutator, and define its decomposition such as:
\begin{equation}\label{algebra}
\mathcal{G}= \mathcal{G}_{\geqslant k}\oplus \mathcal{G}_{< k}= \{ \sum_{i\geqslant k}u_{i}(x)\delta^{i}\}\oplus
\{\sum_{i<k}u_{i}(x)\delta^{i}\},
\end{equation}
where $u_{i}:\mathbb{T}\rightarrow \mathbb{K}$ are $\Delta$-smooth functions. The
subspaces $\mathcal{G}_{\geqslant k}$, $\mathcal{G}_{< k}$ are
closed Lie subalgebras of $\mathcal{G}$ only if $k=0,1$. Thus, we
define the classical $R$-matrix in the following form
\begin{equation}\label{classicalrmatrix}
R:=\frac{1}{2}(P_{\geqslant k}-P_{<k})\qquad k=0,1,
\end{equation}
where $P_{\geqslant k}$ and $P_{<k}$ are the projections onto
$\mathcal{G}_{\geqslant k}$ and $\mathcal{G}_{<k}$, respectively. Since the
classical $R$-matrices \eqref{classicalrmatrix} are defined
through the projections onto Lie subalgebras, they satisfy the Yang-Baxter equation
\eqref{yangbaxter} for $\alpha=\frac{1}{4}$.
Let $L\in\mathcal{G}$ be given in the form
\begin{equation}\label{laxo}
L=u_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0+u_{-1}\delta^{-1}+\ldots,
\end{equation}
where $u_i$ are dynamical fields depending additionally on
the evolution parameters $t_n$. Thus, the Lax hierarchy
\eqref{vectorfield}, based on \eqref{classicalrmatrix} and in
general generated by fractional powers of $L$, turns out to be
\begin{equation}\label{laxequation}
\frac{dL}{dt_n}=\brac{\bra{L^\frac{n}{N}}_{\geqslant
k},L}=-\brac{\bra{L^\frac{n}{N}}_{<k},L}\qquad k=0,1\qquad
n\in\mathbb{Z}_{+}.
\end{equation}
Proposition \ref{proposition} implies that the hierarchy
\eqref{laxequation} is infinite hierarchy of mutually commuting
vector fields and represents $(1+1)$-dimensional integrable
differential-difference systems on a time scale $\mathbb{T}$, including the
time variables $t_n$ and space variable $x\in\mathbb{T}$.
Analyzing \eqref{laxequation} for $L$ given by \eqref{laxo}, in
the case of $k=0$, one finds that $(u_N)_t=0$ and $(u_{N-1})_t =
\mu(\ldots)$ (see also Remark \ref{remark1}). Similarly for $k=1$,
we have $(u_N)_t= \mu(\ldots)$ (see also Remark \ref{remark2}).
Hence, the appropriate Lax operators, yielding consistent Lax
hierarchies \eqref{laxequation}, are in the following form:
\begin{align}
\label{l1} k=0:\qquad & L=c_N\delta^N+\tilde{u}_{N-1}\delta^{N-1}+\ldots+u_1\delta^1+u_0+u_{-1}\delta^{-1}+\ldots\\
\label{l2} k=1:\qquad &
L=\tilde{u}_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta^1+u_0+u_{-1}\delta^{-1}+\ldots,
\end{align}
where $c_N$ is a time-independent field and fields
$\tilde{u}_{N-1},\tilde{u}_N$ are time-independent for dense
$x\in\mathbb{T}$, as at these points $\mu = 0$. This is the reason why
they are distinguished by a tilde mark.
Nevertheless, we are interested in finite-field integrable systems
on time scales. Thus, in order to work with a finite number of
fields, we should impose some restrictions on \eqref{l1} and
\eqref{l2} in such a way that the commutator on the right-hand side of
the Lax equation \eqref{laxequation} does not produce terms not
contained in the left-hand side of the Lax equation. To be more
precise, the left- and right-hand sides of \eqref{laxequation} must span the
same subspace of $\mathcal{G}$. For this purpose, in the case of
$k=0$, one finds the general admissible form of finite-field Lax
operator given by
\begin{equation}\label{finiterestriction0}
L = c_N\delta^N+\tilde{u}_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0+\sum_s\psi_s\delta^{-1}\varphi_s,
\end{equation}
with further restriction
\begin{equation}\label{finiterestriction01}
L = c_N\delta^N+\tilde{u}_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0.
\end{equation}
In the case of $k=1$, the general admissible Lax operator has the form
\begin{equation}\label{finiterestriction1}
L = \tilde{u}_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0+\delta^{-1}u_{-1}
+\sum_s\psi_s\delta^{-1}\varphi_s,
\end{equation}
and further restrictions are
\begin{align}
\label{finiterestriction10} L &= \tilde{u}_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0+\delta^{-1}u_{-1}\\
\label{finiterestriction11} L &= \tilde{u}_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta+u_0\\
\label{finiterestriction12} L &=
\tilde{u}_N\delta^N+u_{N-1}\delta^{N-1}+\ldots+u_1\delta.
\end{align}
In the above Lax operators $c_N$ is a time-independent field for all
$x\in\mathbb{T}$ and $\tilde{u}_{N-1}, \tilde{u}_N$ are
time-independent at dense points from a time scale. We assume also
that the sum $\sum_s$ is finite.
In general, for an arbitrary regular time scale $\mathbb{T}$, the Lax
hierarchies \eqref{laxequation} represent hierarchies of
soliton-like integrable difference systems. For instance, when
$\mathbb{T} = \hslash\mathbb{Z}$ or ${\mathbb K}_q$, the hierarchies \eqref{laxequation} are
those of lattice and $q$-deformed (discrete) soliton-like
systems, respectively. In particular, for the case of $\mathbb{T}=\mathbb{R}$,
i.e. the continuous time scale on the whole $\mathbb{R}$, the Lax
hierarchies are those of field soliton systems. In some
cases, field soliton systems can also be obtained from the
continuous limit of integrable systems on time scales (see
\eqref{c1} and \eqref{c2}).
In the continuous time scale, the algebra of $\delta$-differential
operators \eqref{algebra} turns out to be the algebra of
pseudo-differential operators
\begin{equation}\label{algebrapartial}
\mathcal{G}=\mathcal{G}_{\geqslant k}\oplus \mathcal{G}_{< k}= \{ \sum_{i\geqslant
k}
u_{i}(x)\partial^{i}\}\oplus \{ \sum_{i< k}
u_{i}(x)\partial^{i}\},
\end{equation}
where $\partial$ is such that $\partial u = \partial_x u + u\partial=u_x+u\partial$. The
above decomposition is valid only if $k=0,1$ and $2$. Thus, in the
general theory of integrable systems on time scales, we lose one
case in contrast to the ordinary soliton systems constructed by means of
pseudo-differential operators. This follows from the fact that,
for $k=2$, \eqref{algebra} does not decompose into Lie subalgebras
for an arbitrary time scale. For appropriate Lax operators, finite
field restrictions and more information about the algebra of
pseudo-differential operators, we refer the reader to
\cite{oevel,oevelsrt,bookdickey,MB}. Note that the fields $\psi_s$
and $\varphi_s$ in \eqref{finiterestriction0} and
\eqref{finiterestriction1} are special dynamical fields in the case of
the algebra of pseudo-differential operators. They are the
so-called source terms, as $\psi_s$ and $\varphi_s$ are
eigenfunctions and adjoint-eigenfunctions, respectively, of the
Lax hierarchy \eqref{laxequation} \cite{oevelsrt}.
It turns out that there are constraints between dynamical fields
of the admissible finite-field Lax restrictions
\eqref{finiterestriction0}-\eqref{finiterestriction12} fulfilling
\eqref{laxequation}. We give these constraints in the following
theorem, which is a consequence of the property of the algebra of
$\delta$-differential operators. This property is illustrated in
the following lemma.
\begin{lemma}\label{lemma1}
Consider the equality
\begin{equation}\label{l1r}
\delta^r F=\sum_{i=0}^{r}C_i\delta^{r-i},\qquad r>0.
\end{equation}
Then the following relation
\begin{equation}\label{lemma1result}
\sum_{i=0}^r(-\mu)^iC_i=F
\end{equation}
is valid.
\end{lemma}
\begin{proof}
We make use of induction. Assume that \eqref{lemma1result} holds
for $r$. Then
\begin{align}
\delta^{r+1}F = \delta^r(EF)\delta + \delta^r\Delta F = \sum_{i=0}^r A_i \delta^{r-i+1} +\sum_{i=0}^r B_i \delta^{r-i} = \sum_{i=0}^{r+1} C_i \delta^{r+1-i}.
\end{align}
By the assumption we have $\sum_{i=0}^r(-\mu)^iA_i = EF$ and
$\sum_{i=0}^r(-\mu)^iB_i = \Delta F$. Hence
\begin{align}
\sum_{i=0}^{r+1}(-\mu)^iC_i = \sum_{i=0}^r(-\mu)^{i+1}B_i + \sum_{i=0}^r(-\mu)^iA_i
= -\mu\Delta F + EF = F.
\end{align}
\end{proof}
Let us explain the source of Lemma \ref{lemma1}. Consider the
equality
\begin{equation}\label{rr}
A = \sum_{i\geqslant 0}a_i\delta^i =0,
\end{equation}
where the sum is finite, and $A$ is purely $\delta$-differential
operator. We expand $A$ with respect to the shift operator $\mathcal{E}$:
$\mathcal{E} u = E(u) \mathcal{E}$. From the relation \eqref{rel} we have
\begin{equation}\label{rell}
\mathcal{E} = 1 + \mu\delta.
\end{equation}
The equality from Lemma \ref{lemma1} is trivially satisfied for
dense $x\in\mathbb{T}$, since in this case $\mu=0$. Thus, it is enough
to consider the remaining points of a time scale, so assume that
$\mu\neq 0$. Hence, from \eqref{rell}, we have the formula
\begin{equation}
\delta = {\mu}^{-1}\mathcal{E} - {\mu}^{-1}\label{abc}.
\end{equation}
Thus, using \eqref{abc} the relation \eqref{rr} can be rewritten
as
\begin{equation}
A = \sum_{i}a'_i\mathcal{E}^i =0.
\end{equation}
Obviously, it must hold for terms of all orders. The equality for
the zero-order terms, i.e. $a'_0=0$, can be simply obtained by
replacing $\delta$ with $-{\mu}^{-1}$ in \eqref{rr}. The same
substitution in \eqref{l1r} allows us to find
\begin{equation}
(-\mu)^{-r} F=\sum_{i=0}^{r}C_i (-\mu)^{-r+i},
\end{equation}
which is equivalent to \eqref{lemma1result}.
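As a quick sanity check (our numerical sketch, not part of the original derivation), the relation \eqref{lemma1result} can be verified on the time scale $\mathbb{T}=h\mathbb{Z}$, where $Ef(x)=f(x+h)$, $\Delta f(x)=(f(x+h)-f(x))/h$ and $\mu\equiv h$:

```python
# Verify Lemma 1 on T = hZ: expand delta^r F = sum_i C_i delta^(r-i)
# and check sum_i (-mu)^i C_i = F pointwise.  An operator is stored as
# {power: coefficient function}; left multiplication by delta uses the
# rule  delta∘c = (E c) delta + (Delta c).
h = 0.5

def E(f):
    return lambda x: f(x + h)

def Delta(f):
    return lambda x: (f(x + h) - f(x)) / h

def add(f, g):
    return lambda x: f(x) + g(x)

def delta_times(op):
    """Left-multiply the operator {j: coeff} by delta."""
    zero = lambda x: 0.0
    out = {}
    for j, c in op.items():
        out[j + 1] = add(out.get(j + 1, zero), E(c))
        out[j] = add(out.get(j, zero), Delta(c))
    return out

F = lambda x: x**3 - 2.0 * x
r = 4
op = {0: F}
for _ in range(r):              # op = delta^r ∘ F
    op = delta_times(op)

x0 = 1.3                        # C_i is the coefficient of delta^(r-i)
lhs = sum((-h) ** i * op[r - i](x0) for i in range(r + 1))
print(abs(lhs - F(x0)) < 1e-9)  # True
```

The same check passes for any sample point, order $r$, and step $h$, since the computation only uses the composition rule of the algebra.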
The above procedure can be extended also to operators $A$ that are
not purely $\delta$-differential and contain finitely many terms
with $\delta^{-1},\delta^{-2},\ldots$. As an illustration consider
the equality
\begin{equation}
[A\delta^r,\psi\delta^{-1}\varphi] =
\sum_{i=0}^{r-1}C_i\delta^{r-1-i} + \hat{C}_r\delta^{-1}\varphi +
\psi\delta^{-1}C_r.
\end{equation}
The above equality is well defined, as it follows
immediately from the definition and the properties of the $\delta$
operator. Replacing $\delta$ with $-\mu^{-1}$, the commutator
vanishes, and we have
\begin{align}
&0 = \sum_{i=0}^{r-1}C_i(-\mu)^{-r+1+i} + \hat{C}_r(-\mu)\varphi + \psi(-\mu)C_r
\quad\Longleftrightarrow\\
&\sum_{i=0}^{r-1}(-\mu)^iC_i + (-\mu)^r(\hat{C}_r\varphi + \psi C_r) = 0.
\end{align}
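As a concrete illustration (a direct computation of ours, using $\delta\psi=(E\psi)\delta+\Delta\psi$ and $\varphi\delta=\delta E^{-1}\varphi-\Delta E^{-1}\varphi$), take $A=1$ and $r=1$:
\begin{equation*}
[\delta,\psi\delta^{-1}\varphi]
= (E\psi)\varphi-\psi E^{-1}\varphi
+(\Delta\psi)\delta^{-1}\varphi
+\psi\delta^{-1}(\Delta E^{-1}\varphi),
\end{equation*}
so that $C_0=(E\psi)\varphi-\psi E^{-1}\varphi$, $\hat{C}_1=\Delta\psi$ and $C_1=\Delta E^{-1}\varphi$. Using $E\psi=\psi+\mu\Delta\psi$ and $E^{-1}\varphi=\varphi-\mu\Delta E^{-1}\varphi$, one checks directly that $C_0-\mu(\hat{C}_1\varphi+\psi C_1)=0$, in agreement with the relation above.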
A straightforward consequence of such behavior of
$\delta$-differential operators is the next theorem.
\begin{theorem}\label{theorem2}\hfill
\begin{itemize}
\item[(i)] The case $k=0$. The constraint between dynamical fields of \eqref{finiterestriction0}, generating Lax hierarchy \eqref{laxequation},
has the form
\begin{equation}\label{con1}
\begin{split}
&(-\mu)^{N-1}\diff{\tilde{u}_{N-1}}{t_n} + \sum_{i=0}^{N-2}(-\mu)^i\diff{u_i}{t_n} - \mu\sum_s\diff{(\psi_s\varphi_s)}{t_n} = 0\\
&\qquad\Longrightarrow\qquad
(-\mu)^{N-1}\tilde{u}_{N-1} + \sum_{i=0}^{N-2}(-\mu)^i u_i - \mu\sum_s \psi_s\varphi_s = a_n,
\end{split}
\end{equation}
where $n\in\mathbb{Z}_+$ and $a_n$ is a time-independent function.
\item[(ii)] The case $k=1$. The constraint between dynamical fields of \eqref{finiterestriction1}, generating \eqref{laxequation}, has the form
\begin{equation}\label{con2}
\begin{split}
& (-\mu)^{N}\diff{\tilde{u}_{N}}{t_n} + \sum_{i=-1}^{N-1}(-\mu)^i\diff{u_i}{t_n} - \mu\sum_s\diff{(\psi_s\varphi_s)}{t_n} = 0\\
&\qquad\Longrightarrow\qquad
(-\mu)^{N}\tilde{u}_{N} + \sum_{i=-1}^{N-1}(-\mu)^i u_i - \mu\sum_s \psi_s\varphi_s = a_n,
\end{split}
\end{equation}
where $n\in\mathbb{Z}_+$ and $a_n$ is a time-independent function.
\end{itemize}
\end{theorem}
\begin{proof}
We already know that Lax operators \eqref{finiterestriction0} and
\eqref{finiterestriction1} generate consistent Lax hierarchies
\eqref{laxequation}. Thus, the right-hand side of
\eqref{laxequation} can be represented in the form of $L_{t_n}$.
Replacing $\delta$ with $-\mu^{-1}$ in \eqref{laxequation}, we have
\begin{equation}
\left . L_{t_n} \right |_{\delta=-\mu^{-1}} =
\left . [(L^{\frac{n}{N}})_{\geqslant k},L] \right |_{\delta=-\mu^{-1}} = 0.
\end{equation}
Hence, the constraints \eqref{con1} and \eqref{con2} follow.
\end{proof}
The above theorem can be generalized to further restrictions. As a
consequence, the constraints \eqref{con1} or \eqref{con2},
with a fixed common value of all $a_n$, are valid for the whole Lax
hierarchy \eqref{laxequation}.
\subsection{Recursion operators}
One of the characteristic features of integrable systems possessing
an infinite hierarchy of mutually commuting symmetries is the
existence of a recursion operator \cite{Olver,MB}. A recursion
operator of a given system is an operator with the property that,
when it acts on one symmetry of the system under consideration, it
produces another symmetry. G\"urses \emph{et al.} \cite{Gurses}
presented a general and very efficient method of constructing recursion operators for Lax hierarchies.
Among others, the authors illustrated the method by applying it to
finite-field reductions of the KP hierarchy. In \cite{Blaszak} the
method was applied to the reductions of modified KP hierarchy as
well as to the lattice systems. Our further considerations are based
on the scheme from \cite{Gurses} and \cite{Blaszak}.
The recursion operator $\Phi$ has the following property:
\begin{equation*}
\Phi(L_{t_{n}})=L_{t_{n+N}},\qquad n\in\mathbb{Z}_{+},
\end{equation*}
and hence it allows reconstruction of the whole hierarchy
\eqref{laxequation} when applied to the first $(N-1)$ symmetries.
\begin{lemma}\label{recl}\hfill
\begin{itemize}
\item[(i)] The case $k=0$. Let the Lax operator be given in the general form \eqref{finiterestriction0}.
Then, the recursion operator of the related Lax hierarchy can be
constructed solving
\begin{equation}\label{rec}
L_{t_{n+N}} = L_{t_n}L + [R, L]
\end{equation}
with the remainder in the form
\begin{equation}\label{rem1}
R = a_{N-1}\delta^{N-1} + \cdots + a_0 + \sum_s a_{-1,s}\delta^{-1}\varphi_s,
\end{equation}
where $N$ is the highest order of $L$.
\item[(ii)] The case $k=1$. Similarly for the Lax operator \eqref{finiterestriction1}, the recursion operator can be constructed from \eqref{rec} with
\begin{equation}\label{rem2}
R = a_N\delta^N + \cdots + a_0 + \sum_s a_{-1,s}\delta^{-1}\varphi_s.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
Consider the case $k=0$. Then for \eqref{finiterestriction0} we
have
\begin{align*}
(L^\frac{n+N}{N})_{\geqslant 0} &= ((L^\frac{n}{N})_{\geqslant 0}L)_{\geqslant 0} + ((L^\frac{n}{N})_{< 0}L)_{\geqslant 0}\\
&= (L^\frac{n}{N})_{\geqslant 0}L - \sum_s[(L^\frac{n}{N})_{\geqslant 0}\psi_s]_0\delta^{-1}\varphi_s +
((L^\frac{n}{N})_{< 0}L)_{\geqslant 0}\\
&= (L^\frac{n}{N})_{\geqslant 0}L + R,
\end{align*}
where $[\sum_i a_i\delta^i]_0 = a_0$ and $R$ is given by
\eqref{rem1}. Similarly for $k=1$, we have
\begin{align*}
(L^\frac{n+N}{N})_{\geqslant 1} &= ((L^\frac{n}{N})_{\geqslant 1}L)_{\geqslant 1} + ((L^\frac{n}{N})_{< 1}L)_{\geqslant 1}\\
&= (L^\frac{n}{N})_{\geqslant 1}L - [(L^\frac{n}{N})_{\geqslant 1}L]_0 -
\sum_s[(L^\frac{n}{N})_{\geqslant 0}\psi_s]_0\delta^{-1}\varphi_s + ((L^\frac{n}{N})_{< 1}L)_{\geqslant 1}\\
&= (L^\frac{n}{N})_{\geqslant 1}L + R,
\end{align*}
where $R$ has the form \eqref{rem2}. Thus, in both cases
\eqref{rec} follows from \eqref{laxequation}. Hence we can extract
the recursion operator from \eqref{rec}.
\end{proof}
Note that in general, recursion operators on time scales are
non-local. This means that they contain non-local terms with
$\Delta^{-1}$ being the formal inverse of the $\Delta$ operator. However,
such recursion operators acting on an appropriate domain
produce only local hierarchies.
\section{Infinite-field integrable systems on time scales}
\subsection{Difference KP, $k=0$:}
Consider the following infinite-field Lax operator
\begin{equation}\label{kplax}
L=\delta+\tilde{u}_0+\sum_{i\geqslant 1}u_i\delta^{-i},
\end{equation}
which generates the Lax hierarchy \eqref{laxequation} as the
difference counterpart of the Kadomtsev-Petviashvili (KP)
hierarchy.
For $\displaystyle{(L)_{\geqslant 0}=\delta+\tilde{u}_0}$, the
first flow is given by
\begin{equation}\label{inf1ugeneral}
\begin{split}
\frac{d\tilde{u}_0}{dt_1} &= \mu\Delta
u_1\\
\frac{du_i}{dt_1} &=
\sum_{k=0}^{i-1}(-1)^{k+1}u_{i-k}\sum_{j_1+j_2+\ldots+j_{k+1}=i}
(E^{-j_{k+1}}\Delta E^{-j_{k}}\Delta\ldots E^{-j_{2}}\Delta
E^{-j_{1}})\tilde{u}_0\\
&\qquad +\mu\Delta u_{i+1}+\Delta
u_{i}+u_{i}\tilde{u}_0\qquad\forall i>0,
\end{split}
\end{equation}
where $j_\gamma >0$ for all $\gamma\geqslant 1$.
For $(L^2)_{\geqslant 0}=\delta^2+\xi\delta+\eta$, where
\begin{equation}
\xi:=E\tilde{u}_0+\tilde{u}_0\qquad \eta:=\Delta
\tilde{u}_0+\tilde{u}_0^2+u_1+Eu_1,
\end{equation}
one calculates the second flow
\begin{equation}\label{inf2ugeneral}
\begin{split}
\frac{d\tilde{u}_0}{dt_2} &= \mu\Delta(E+1)u_2+\mu\Delta(\Delta u_1+u_1\tilde{u}_0+u_1E^{-1}\tilde{u}_0)\\
\frac{du_i}{dt_2} &=
\sum_{k=-1}^{i-1}(-1)^{k+2}u_{i-k}\sum_{j_1+j_2+\ldots+j_{k+2}=i+1}
(E^{-j_{k+2}}\Delta E^{-j_{k+1}}\Delta\ldots E^{-j_{2}}\Delta E^{-j_{1}})\xi\\
&\qquad
+\sum_{k=0}^{i-1}(-1)^{k+1}u_{i-k}\sum_{j_1+j_2+\ldots+j_{k+1}=i}
(E^{-j_{k+1}}\Delta E^{-j_{k}}\Delta\ldots E^{-j_{2}}\Delta E^{-j_{1}})\eta\\
&\qquad +\Delta^2 u_i+(E\Delta+\Delta
E)u_{i+1}+\mu\Delta(E+1)u_{i+2}+ \xi(\Delta u_i+Eu_{i+1})+ \eta
u_i,
\end{split}
\end{equation}
where $j_\gamma > 0$ for all $\gamma\geqslant1$. \\
The simplest case in $(2+1)$ dimensions: We
rewrite the first two members of the first flow by setting
$\tilde{u}_0=w$ and $t_1=y$ and the first member of the second flow by
setting $t_2=t$. Since $E$ and $\Delta$ do not commute, note that
in the calculations up to the last step, we use $E-1$ instead of
$\mu\Delta$, to avoid confusion.
\begin{eqnarray}
w_{y}&=&(E-1)u_1,\label{inf1w}\\
u_{1,y}&=&(E-1)u_2+\Delta u_1+
u_1(1-E^{-1})(w),\label{inf1u1y}\\
w_{t}&=&(E^2-1)u_2+(E-1)(\Delta u_1+
u_1w+u_1E^{-1}(w))\label{inf2ugeneralb}
\end{eqnarray}
Applying $E+1$ to \eqref{inf1u1y} from the left yields:
\begin{equation} (E^2-1)u_2=(E+1)u_{1,y}-(E+1)\Delta u_{1}-(E-1)u_{1}(1-E^{-1})w.\label{inf1wxx} \end{equation}
Applying $(E-1)$ to \eqref{inf2ugeneralb} from the left and
substituting \eqref{inf1w} and \eqref{inf1wxx} into the new
derived equation we finally obtain the $(2+1)$-dimensional one-field
system of the form
\begin{equation}\mu\Delta w_t=(E+1)w_{yy}-2\Delta w_y+2\mu\Delta(ww_y),\label{inf2ugeneralc}\end{equation}
which does not have a continuous counterpart. For the case of
$\mathbb{T}=h{\mathbb Z}$, one can show that \eqref{inf2ugeneralc} is
equivalent to the $(2+1)$-dimensional Toda lattice system.
The difference analogue of the one-field continuous KP equation is
too complicated to write down explicitly.
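The elimination leading to \eqref{inf2ugeneralc} can be checked numerically (our sketch) on $\mathbb{T}=h\mathbb{Z}$, where $\mu=h$ and $E-1=\mu\Delta$: choose arbitrary grid data $w,u_1,u_2$, define $w_y$, $u_{1,y}$, $w_t$ through \eqref{inf1w}--\eqref{inf2ugeneralb} and $w_{yy}=(E-1)u_{1,y}$, and test \eqref{inf2ugeneralc} pointwise:

```python
# Check the one-field (2+1)-dimensional system on T = hZ (periodic grid).
import numpy as np

h, n = 0.5, 64
rng = np.random.default_rng(0)
w, u1, u2 = (rng.standard_normal(n) for _ in range(3))

E = lambda f, k=1: np.roll(f, -k)     # E^k f(x) = f(x + k h)
D = lambda f: (E(f) - f) / h          # delta derivative; mu*D = E - 1

wy  = E(u1) - u1                                   # w_y,  eq. (inf1w)
u1y = (E(u2) - u2) + D(u1) + u1 * (w - E(w, -1))   # u_{1,y}
g   = D(u1) + u1 * w + u1 * E(w, -1)
wt  = (E(u2, 2) - u2) + (E(g) - g)                 # w_t
wyy = E(u1y) - u1y                                 # w_{yy} = (E-1) u_{1,y}

lhs = E(wt) - wt                                   # mu * Delta w_t
rhs = (E(wyy) + wyy) - 2 * D(wy) + 2 * (E(w * wy) - w * wy)
print(np.allclose(lhs, rhs))  # True
```

Since only shift identities are used, the equality holds exactly (up to round-off) for any random data, confirming the algebraic elimination.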
\begin{remark}\label{remark1}
Here we want to illustrate the behavior of $\tilde{u}_0$ in all symmetries of the difference KP hierarchy. Let
$\displaystyle{(L^n)_{< 0}=\sum_{i\geqslant1}
v^{(n)}_i\delta^{-i}}$; then, by the right-hand side of the Lax equation
\eqref{laxequation}, we obtain the first members of all flows
\begin{eqnarray}
\frac{d\tilde{u}_0}{dt_n}=\mu\Delta v^{(n)}_1.
\end{eqnarray}
Thus $\tilde{u}_0$ is time-independent for dense $x\in \mathbb{T}$
since $\mu=0$. Hence when $\mathbb{T}=\mathbb{R}$, $\tilde{u}_{0}$
appears to be a constant.
\end{remark}
In the $\mathbb{T}=\mathbb{R}$ case, or in the continuous limit of some special
time scales, with $\tilde{u}_0=0$, the Lax operator \eqref{kplax}
turns out to be a Laurent series of pseudo-differential operators
\begin{equation}
L=\partial+\sum_{i\geqslant 1}u_{i}\partial^{-i}.
\end{equation}
Moreover, the first flow \eqref{inf1ugeneral} turns out to be
exactly the first flow of the KP system
\begin{eqnarray}\frac{du_i}{dt_1}=u_{i,x},\qquad\forall i\geqslant
1\label{kp1}
\end{eqnarray}
while the second flow \eqref{inf2ugeneral} becomes exactly the
second flow of the KP system
\begin{equation}\label{kp2}
\frac{du_i}{dt_2} =
(u_i)_{2x}+2(u_{i+1})_x+2\sum_{k=1}^{i-1}(-1)^{k+1}\binom{i-1}{k}
u_{i-k}(u_1)_{kx}\qquad\forall i\geqslant 1.
\end{equation}
\subsection{Difference mKP, $k=1$:}
Consider the Lax operator of the form
\begin{equation}\label{mkplax}
L=\tilde{u}_{-1}\delta+\sum_{i\geqslant 0}u_{i}\delta^{-i}
\end{equation}
which generates the difference counterpart of the modified
Kadomtsev-Petviashvili (mKP) hierarchy.
Then, $(L)_{\geqslant 1}=\tilde{u}_{-1}\delta$ implies the first
flow
\begin{equation}\label{inf3generala}
\begin{split}
\frac{d\tilde{u}_{-1}}{dt_1} &= \mu \tilde{u}_{-1}\Delta u_0\\
\frac{du_i}{dt_1} &=
\sum_{k=-1}^{i-1}(-1)^{k+2}u_{i-k}\sum_{j_1+j_2+\dots+j_{k+2}=i+1}(E^{-j_{k+2}}\Delta
E^{-j_{k+1}}\Delta\ldots E^{-j_{2}}\Delta E^{-j_{1}})\tilde{u}_{-1}\\
&\qquad + \tilde{u}_{-1}Eu_{i+1}+\tilde{u}_{-1}\Delta u_i\qquad
\forall i\geqslant 0,
\end{split}
\end{equation}
where $j_\gamma>0$, $\gamma=1,2,\ldots,k+2$.
Next, for $(L^2)_{\geqslant 1}=\xi\delta^2+\eta\delta$, where
\begin{equation}
\xi:=\tilde{u}_{-1}E\tilde{u}_{-1},\qquad
\eta:=\tilde{u}_{-1}\Delta
\tilde{u}_{-1}+\tilde{u}_{-1}Eu_{0}+u_0\tilde{u}_{-1},
\end{equation}
we have the second flow as follows
\begin{equation}\label{inf4general}
\begin{split}
\frac{d\tilde{u}_{-1}}{dt_2} &= \xi(E\Delta u_0+E^2(u_1)) + \mu \tilde{u}_{-1}\Delta u_0^2
- u_1E^{-1}\xi-\tilde{u}_{-1}^2\Delta u_0\\
\frac{du_i}{dt_2} &=
\sum_{k=-2}^{i-1}(-1)^{k+3}u_{i-k}\sum_{j_1+j_2+\ldots+j_{k+3}=i+2}
(E^{-j_{k+3}}\Delta E^{-j_{k+2}}\Delta\ldots\Delta E^{-j_{1}})\xi\\
&\qquad+\sum_{k=-1}^{i-1}(-1)^{k+2}u_{i-k}\sum_{j_1+j_2+\ldots+j_{k+2}=i+1}
(E^{-j_{k+2}}\Delta E^{-j_{k+1}}\Delta\ldots \Delta
E^{-j_{1}})\eta\\
&\qquad+\xi(\Delta^2 u_i+(E\Delta+\Delta
E)u_{i+1}+E^2u_{i+2})+\eta(\Delta u_{i}+Eu_{i+1}),
\end{split}
\end{equation}
where $i\geqslant 0$ and $j_\gamma > 0$ for all $\gamma\geqslant
1$.
\begin{remark}\label{remark2} Similarly, in order to illustrate the behavior of $\tilde{u}_{-1}$
in all symmetries of the difference mKP hierarchy let us consider
$\displaystyle{(L^n)_{<1}=\sum_{i\geqslant
0}v^{(n)}_i\delta^{-i}}$. Then we obtain the first members of all
flows
\begin{eqnarray}
\frac{d\tilde{u}_{-1}}{dt_n}=\mu\tilde{u}_{-1}\Delta v^{(n)}_0.
\end{eqnarray}
Thus $\tilde{u}_{-1}$ is time-independent for dense $x\in \mathbb{T}$. Hence when $\mathbb{T}=\mathbb{R}$,
$\tilde{u}_{-1}$ appears to be a constant.
\end{remark}
In the $\mathbb{T}=\mathbb{R}$ case, or in the continuous limit of some special
time scales, with $\tilde{u}_{-1}=1$, the Lax operator
\eqref{mkplax} turns out to be the pseudo-differential operator
\begin{equation}
L=\partial+\sum_{i\geqslant 0}u_{i}\partial^{-i}.
\end{equation}
Furthermore, the system of equations \eqref{inf3generala} is
exactly the first flow of the mKP system
\begin{eqnarray}\frac{du_i}{dt_1}=u_{i,x},\qquad\forall
i\geqslant 0,\label{mkp1}
\end{eqnarray}
while the second flow \eqref{inf4general} turns out to be the
second flow of the mKP system
\begin{equation}\label{mkp2}
\begin{split}
\frac{du_i}{dt_2} &= (u_i)_{2x}+2(u_{i+1})_{x} + 2u_0(u_i)_x + 2u_0u_{i+1}\\
&\qquad+ 2\sum_{k=0}^{i}(-1)^{k+1}\binom{i}{k}
u_{i+1-k}(u_0)_{kx}\qquad\forall i\geqslant 0.
\end{split}
\end{equation}
\section{Finite-field integrable systems on time scales}
\subsection{Difference AKNS, $k=0$:}
Let the Lax operator \eqref{finiterestriction0} for $N=1$ and
$c_1=1$ be of the form
\begin{equation}\label{lax1}
L=\delta+\tilde{u}+\psi\delta^{-1}\varphi.
\end{equation}
The constraint \eqref{con1} between fields, with $a_n=0$, becomes
\begin{equation}\label{rel1}
\tilde{u} = \mu \psi\varphi.
\end{equation}
For $(L)_{\geqslant 0}=\delta+\tilde{u}$, one finds the first flow
\begin{equation}\label{akns1}
\begin{split}
\frac{d\tilde{u}}{dt_1} &= \mu\Delta(\psi E^{-1}\varphi),\\
\frac{d\psi}{dt_1} &= \tilde{u}\psi+\Delta\psi,\\
\frac{d\varphi}{dt_1} &= -\tilde{u}\varphi+\Delta E^{-1}\varphi.
\end{split}
\end{equation}
Eliminating the field $\tilde{u}$ by means of \eqref{rel1}, we have
\begin{equation}\label{akns11}
\begin{split}
\frac{d\psi}{dt_1}&=\mu\psi^2\varphi+\Delta\psi,\\
\frac{d\varphi}{dt_1}&=-\mu\varphi^2\psi+\Delta E^{-1}\varphi.
\end{split}
\end{equation}
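As a consistency check (our numerical sketch on $\mathbb{T}=h\mathbb{Z}$, with arbitrary grid data), one can verify that the first flow \eqref{akns1} preserves the constraint \eqref{rel1}, i.e. $\frac{d}{dt_1}\bra{\tilde{u}-\mu\psi\varphi}=0$:

```python
# Constraint preservation for the difference AKNS first flow on T = hZ.
import numpy as np

h, n = 0.5, 64
rng = np.random.default_rng(1)
psi, phi = rng.standard_normal(n), rng.standard_normal(n)

E = lambda f, k=1: np.roll(f, -k)     # E^k f(x) = f(x + k h)
D = lambda f: (E(f) - f) / h          # delta derivative

u = h * psi * phi                     # constraint (rel1): u~ = mu psi phi

ut   = h * D(psi * E(phi, -1))        # d(u~)/dt_1  from (akns1)
psit = u * psi + D(psi)               # d(psi)/dt_1
phit = -u * phi + D(E(phi, -1))       # d(phi)/dt_1, with Delta E^{-1} phi

# the constraint is preserved: d(u~)/dt_1 = mu * d(psi*phi)/dt_1
print(np.allclose(ut, h * (psit * phi + psi * phit)))  # True
```

The equality is exact here, reflecting the product rule $\Delta(fg)=(Ef)\Delta g+(\Delta f)g$ used in the derivation.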
Next we calculate $\displaystyle{(L^2)_{\geqslant
0}=\delta^2+\xi\delta+\eta}$ where
\begin{equation}
\xi:=(E+1)\tilde{u},\quad \eta:=\Delta
\tilde{u}+\tilde{u}^2+\varphi E(\psi)+\psi E^{-1}(\varphi).
\end{equation}
Thus, the second flow takes the form
\begin{equation}\label{akns2}
\begin{split}
\frac{d\tilde{u}}{dt_2}&=\mu\Delta\brac{\Delta (\psi E^{-1}(\varphi))+\psi E^{-1}(\tilde{u}\varphi)+\tilde{u}\psi E^{-1}\varphi}-\mu\Delta(E+1)\psi E^{-1}\Delta E^{-1}(\varphi)\\
\frac{d\psi}{dt_2}&=\psi\eta+\xi\Delta\psi+\Delta^2\psi\\
\frac{d\varphi}{dt_2}&=-\varphi\eta+\Delta
E^{-1}(\xi\varphi)-(\Delta E^{-1})^2\varphi .
\end{split}
\end{equation}
By the use of the constraint \eqref{rel1}, the second flow can be
written as
\begin{equation}\label{akns21}
\begin{split}
\frac{d\psi}{dt_2}&=\psi(\Delta
\mu\psi\varphi+(\mu\psi\varphi)^2+\varphi E(\psi)+\psi
E^{-1}(\varphi))+
(E+1)\mu\psi\varphi\Delta\psi+\Delta^2\psi,\\
\frac{d\varphi}{dt_2}&=-\varphi(\Delta
\mu\psi\varphi+(\mu\psi\varphi)^2+\varphi E(\psi)+\psi
E^{-1}(\varphi))+\Delta E^{-1}(\varphi(E+1)\mu\psi\varphi)-(\Delta
E^{-1})^2\varphi .
\end{split}
\end{equation}
In order to obtain the recursion operator one finds that for the
Lax operator \eqref{lax1} the appropriate remainder \eqref{rem1}
has the form
\begin{equation}
R = \Delta^{-1}\bra{{\mu}^{-1}\tilde{u}_{t_n}} - \psi_{t_n}\delta^{-1}\varphi.
\end{equation}
Then, \eqref{rec} implies the following recursion formula:
\begin{equation}
\pmatrx{\tilde{u}\\ \psi\\ \varphi}_{t_{n+1}} =
\pmatrx{\tilde{u}-{\mu}^{-1} & \varphi E & \psi E^{-1}\\ \psi+\psi\Delta^{-1}{\mu}^{-1} &
\Delta+\tilde{u}+\psi\Delta^{-1}\varphi & \psi\Delta^{-1}\psi\\
-\varphi\Delta^{-1}{\mu}^{-1} & -\varphi E\Delta^{-1}\varphi &
\tilde{u} - \Delta E^{-1} - \varphi E\Delta^{-1}\psi}\pmatrx{\tilde{u}\\ \psi\\ \varphi}_{t_n}
\end{equation}
valid for isolated points $x\in\mathbb{T}$, i.e. when $\mu\neq 0$. For
dense points one must use its reduction by the constraint
\eqref{rel1}:
\begin{equation}\label{red}
\pmatrx{\psi\\ \varphi}_{t_{n+1}} =
\pmatrx{\Delta+\tilde{u}+\mu\psi\varphi + 2\psi\Delta^{-1}\varphi & \mu\psi^2+2\psi\Delta^{-1}\psi\\
-\varphi(E+1)\Delta^{-1}\varphi &
\tilde{u} - \Delta E^{-1} - \varphi(E+1)\Delta^{-1}\psi}\pmatrx{\psi\\ \varphi}_{t_n},
\end{equation}
where $\tilde{u}$ is given by \eqref{rel1}.
In the $\mathbb{T}=\mathbb{R}$ case, or in the continuous limit of some special
time scales, with the choice $\tilde{u}=0$, the Lax operator
\eqref{lax1} takes the form $L=\partial + \psi\partial^{-1}\varphi$. Then,
the continuous limits of \eqref{akns1} and \eqref{akns2}
respectively, imply that the first flow is the translational
symmetry
\begin{equation}
\begin{split}
\frac{d\psi}{dt_1}&=\psi_x\\
\frac{d\varphi}{dt_1}&= \varphi_x
\end{split}
\end{equation}
and the first non-trivial equation from the hierarchy is the AKNS
equation
\begin{equation}
\begin{split}
\frac{d\psi}{dt_2}&=\psi_{xx}+2\psi^2\varphi,\\
\frac{d\varphi}{dt_2}&=-\varphi_{xx}-2\varphi^2\psi .
\end{split}
\end{equation}
For that special case the recursion formula \eqref{red} is of the
following form:
\begin{equation}
\pmatrx{\psi\\ \varphi}_{t_{n+1}} =
\pmatrx{\partial_x+2\psi\partial_x^{-1}\varphi & 2\psi\partial_x^{-1}\psi\\
-2\varphi\partial_x^{-1}\varphi &
- \partial_x - 2\varphi\partial_x^{-1}\psi}\pmatrx{\psi\\ \varphi}_{t_n}.
\end{equation}
\subsection{Difference Kaup-Broer, $k=1$:}
From the admissible finite-field restrictions
\eqref{finiterestriction1}, we consider the following simplest Lax
operator
\begin{equation}\label{lax2}
L=\tilde{u}\delta + v + \delta^{-1}w.
\end{equation}
The constraint \eqref{con2}, with $a_n=1$, implies
\begin{equation}\label{rel2}
\tilde{u} = 1+\mu v-\mu^2w.
\end{equation}
Then, for $(L)_{\geqslant 1}=\tilde{u}\delta$, the first flow is
given as
\begin{equation}\label{1u}
\begin{split}
\frac{d\tilde{u}}{dt_1}&=\mu \tilde{u}\Delta v,\\
\frac{dv}{dt_1}&=\tilde{u}\Delta v+\mu \Delta E^{-1}(\tilde{u}w),\\
\frac{dw}{dt_1}&=\Delta E^{-1}(\tilde{u}w).
\end{split}
\end{equation}
By the constraint \eqref{rel2} one can rewrite the first flow as
\begin{equation}
\begin{split}\label{1u1}
\frac{dv}{dt_1}&=(\mu v-\mu^2w)\Delta v+\mu \Delta E^{-1}(w(\mu v-\mu^2w)),\\
\frac{dw}{dt_1}&=\Delta E^{-1}\bra{\mu vw -\mu^2w^2}.
\end{split}
\end{equation}
Next, we calculate $\displaystyle{(L^2)_{\geqslant
1}=\xi\delta^2+\eta\delta}$, where
\begin{equation}
\xi:=\tilde{u}E\tilde{u},\quad \eta:=\tilde{u}\Delta
\tilde{u}+\tilde{u}Ev+v\tilde{u},
\end{equation}
that yields the second flow
\begin{equation}\label{2u}
\begin{split}
\frac{d\tilde{u}}{dt_2}&=\mu \tilde{u}\Delta (E^{-1}+1)\tilde{u}w+\mu \tilde{u}\Delta v^2+\mu \tilde{u} \Delta (\tilde{u}\Delta v),\\
\frac{dv}{dt_2}&=\xi(\Delta^2v+\Delta w)+\mu\Delta
E^{-1}(w\eta)+E^{-1}\Delta E^{-1}(w\xi)+\eta \Delta v,\\
\frac{dw}{dt_2}&=-\Delta E^{-1}\Delta E^{-1}(w\xi)+\Delta
E^{-1}(w\eta).
\end{split}
\end{equation}
One can rewrite the above system by reducing it with the constraint,
but the final equation has a complicated form.
For the Lax operator \eqref{lax2} the appropriate remainder
\eqref{rem2} is given by
\begin{equation}
R = \tilde{u}\Delta^{-1}(\mu \tilde{u})^{-1}\tilde{u}_{t_n}\delta - v_{t_n} - \Delta^{-1}w_{t_n} .
\end{equation}
Hence, from \eqref{rec}, we have the following recursion formula,
valid when $\mu\neq 0$:
\begin{equation}
\pmatrx{\tilde{u}\\ v\\ w}_{t_{n+1}} =
\pmatrx{R_{\tilde{u}\tilde{u}} & \tilde{u}E & \mu \tilde{u}\\ R_{v\tilde{u}} & v + \tilde{u}\Delta & (1+E^{-1})\tilde{u}\\
R_{w\tilde{u}} & w & -\Delta E^{-1}\tilde{u} + v - \mu w}
\pmatrx{\tilde{u}\\ v\\ w}_{t_n},
\end{equation}
where
\begin{equation}
\begin{split}
R_{\tilde{u}\tilde{u}} &= E(v) -\mu^{-1}\tilde{u} + \mu \tilde{u}\Delta(v) \Delta^{-1}(\mu \tilde{u})^{-1} \\
R_{v\tilde{u}} &= \Delta(v) + w + \tilde{u} \Delta(v) \Delta^{-1}(\mu \tilde{u})^{-1} + (1-E^{-1}) \tilde{u}w
\Delta^{-1}(\mu \tilde{u})^{-1}\\
R_{w\tilde{u}} &= \Delta E^{-1}\tilde{u}w\Delta^{-1}(\mu \tilde{u})^{-1}.
\end{split}
\end{equation}
Its reduction by the constraint \eqref{rel2} is
\begin{equation}\label{red2}
\pmatrx{v\\ w}_{t_{n+1}} =
\pmatrx{v + \tilde{u}\Delta + R_{v\tilde{u}}\mu & (1+E^{-1})\tilde{u}-R_{v\tilde{u}}\mu^2\\
w + R_{w\tilde{u}}\mu & -\Delta E^{-1}\tilde{u} + v - \mu w - R_{w\tilde{u}}\mu^2}
\pmatrx{v\\ w}_{t_n},
\end{equation}
with $\tilde{u}$ given by \eqref{rel2}.
In the case of $\mathbb{T}=\mathbb{R}$, or in the continuous limit of some
special time scales, with the choice $\tilde{u}=1$, the Lax
operator \eqref{lax2} takes the form $L=\partial + v + \partial^{-1}w$. Then,
similarly, the continuous limit yields the first flow
\begin{equation}
\begin{split}
\frac{dv}{dt_1}&=v_{x},\\
\frac{dw}{dt_1}&=w_{x},
\end{split}
\end{equation}
and the first non-trivial equation from the hierarchy is the
Kaup-Broer equation
\begin{equation}
\begin{split}
\frac{dv}{dt_2}&=v_{2x}+2w_{x}+2vv_{x},\\
\frac{dw}{dt_2}&=-w_{2x}+2(vw)_x.
\end{split}
\end{equation}
For such special cases, the recursion formula \eqref{red2} turns
out to be
\begin{equation}
\pmatrx{v\\ w}_{t_{n+1}} =
\pmatrx{\partial_x + v + v_x\partial_x^{-1}& 2\\
w + \partial_x w\partial_x^{-1} & -\partial_x + v}\pmatrx{v\\ w}_{t_n}.
\end{equation}
\section{Acknowledgments}
This work is partially supported by the Scientific and Technical
Research Council of Turkey and MNiSW research grant N N202 404933.
Recognizing actions and activities in videos is a long-studied problem in computer vision \cite{bobick2001recognition,haritaoglu2000w,bregler1997learning}. An action is defined as a short-duration movement such as jumping, throwing, or kicking. In contrast, activities are more complex. An activity has a beginning, triggered by an action or an event, involves multiple actions, and has an end, marked by another action or event. For example, an activity like ``assembling furniture'' could start with unpacking boxes, continue by putting different parts together and end when the furniture is ready. Since videos can be arbitrarily long, they may contain multiple activities and therefore temporal localization is needed. Detecting human activities in videos has several applications: content-based video retrieval for web search engines, reducing the effort required to browse through lengthy videos, monitoring suspicious activity in video surveillance, etc. While localizing objects in images is an extensively studied problem, localizing activities has received less attention. This is primarily because performing localization in videos is computationally expensive \cite{escorcia2016daps} and well-annotated large datasets \cite{caba2015activitynet} were unavailable until recently.
Current object detection pipelines have three major components: proposal generation, object classification and bounding box refinement \cite{ren2015faster}. In \cite{escorcia2016daps, shou2016action} this pipeline was adopted for deep-learning-based action detection as well. In \cite{escorcia2016daps}, an LSTM is used to embed a long video into a single feature vector, which is then used to score different segment proposals in the video. While an LSTM is effective for capturing local context in a video \cite{Singh_2016_CVPR}, learning to predict the start and end positions of all activity segments using the hidden state of an LSTM is challenging. In fact, in our experiments we show that even a pre-defined set of proposals at multiple scales obtains better recall than the temporal segments predicted by an LSTM on the ActivityNet dataset.
In \cite{shou2016action}, a ranker was learned on multiple segments of a video based on overlap with ground truth segments. However, a feature representation which does not integrate information from a larger temporal scale than the proposal lacks sufficient information to predict whether a proposal is a good candidate or not. For example, in Figure \ref{fig:demo}, the red and green solid segments are two proposals which are both completely included within an activity. While the red segment is a good candidate, the green one is not. So, although a single-scale representation of a segment captures sufficient information for recognition, it is inadequate for detection. To capture information for predicting activity boundaries, we propose to explicitly sample features both at the scale of the proposal and at a higher scale while ranking proposals. We experimentally demonstrate that this has a significant impact on performance when ranking temporal activity proposals.
\begin{figure*}
\center
\includegraphics[width=0.8\linewidth]{figures/final}
\caption{Given a video, a two stream network is used to extract features. A pair-wise sampling layer samples features at two different resolutions to construct the feature representation for a proposal. This pairwise sampling helps to obtain a better proposal ranking. A typical sliding window approach (Green line box) can miss the context boundary information when it lies inside the activity. However, the proposed pairwise sampling with a larger context window (Red line box) will capture such information and yield better proposal ranking. These pair-wise features are then input to a ranker which selects proposals for classification. The green boxes on the left represent K different proposals which are placed uniformly in a video.}
\label{fig:demo}
\end{figure*}
By placing proposals at equal intervals in a video which span multiple temporal scales, we construct a set of proposals which are then ranked using features sampled from a pair of scales. A temporal convolution network is applied over these features to learn background and foreground probabilities. The top ranked proposals are then input to a classification network which assigns individual class probabilities to each segment proposal.
\section{Related Work}
Wang and Schmid \cite{wang2011action} introduced Dense Trajectories (DT), which have been widely applied in various video recognition algorithms. For trimmed activity recognition, extracting dense trajectories and encoding them using Fisher Vectors has been widely used \cite{atmosukarto2012trajectory, wang2013action, jiang2014thumos, heilbron2014camera, peng2016bag, wang2016improving}. For action detection, \cite{yuan2016temporal} constructed a pyramid of score distribution features (PSDF) as a representation for ranking segments of a video in a dense-trajectories based pipeline. However, for large datasets, these methods require significant computational resources to extract the features and to build the feature representation from them. Because deep learning based methods provide better accuracy with much less computation, hand-crafted features have become less popular.
For object detection in images, proposals are a critical element for obtaining efficient and accurate detections \cite{russakovsky2015imagenet, ren2015faster}. Motivated by this approach, Jain et al. \cite{jain2014action} introduced action proposals, which extend object proposals to videos. For spatio-temporal localization of actions, multiple methods use spatio-temporal region proposals \cite{gkioxari2015finding,oneata2014spatio,gemert2015apt,yu2015fast}. However, these methods are typically applied to datasets containing short videos, and hence the major focus has been on spatial localization rather than temporal localization. Moreover, spatio-temporal localization requires training data containing frame level bounding box annotations. For many applications, simply labeling the action boundaries in the video is sufficient, which is a significantly less cumbersome annotation task.
Very recently, studies focusing on temporal segments which contain human actions have been introduced \cite{mettes2015bag, caba2016fast, shou2016action, ma2016learning, Singh_2016_CVPR}. Similar to grouping techniques for retrieving object proposals, Heilbron et al. \cite{caba2016fast} used a sparse dictionary to encode discriminative information for a set of action classes. Mettes et al. \cite{mettes2015bag} introduced a fragment hierarchy based on semantic visual similarity of contiguous frames by hierarchical clustering, which was later used to efficiently encode temporal segments in unseen videos. In \cite{Singh_2016_CVPR}, a multi-stream RNN was employed along with tracking to generate frame level predictions to which simple grouping was applied at multiple detection thresholds for obtaining detections.
Methods using category-independent classifiers to obtain many segments in a long video are more closely related to our approach. For example, Shou et al. \cite{shou2016action} exploit three segment-based 3D ConvNets: a proposal network for identifying candidate clips that may contain actions, a classification network for learning a classification model, and a localization network for fine-tuning the learned classification network to localize each action instance. Escorcia et al. \cite{escorcia2016daps} introduce Deep Action Proposals (DAPs) and use an LSTM to encode information in a fixed clip (512 frames) of a video. After encoding information in the video clip, the LSTM scores K (64) predefined start and end positions in that clip. The start and end positions are selected based on statistics of the video dataset. We show that our method performs better than global representations like LSTMs, which create a single feature representation for all scales in a video, for localization of activities. In contemporary work, Shou et al. \cite{cdc_shou_cvpr17} proposed a convolutional-de-convolutional (CDC) network by combining temporal upsampling and spatial downsampling for activity detection. Such an architecture helps in precise localization of activity boundaries. We show that the activity proposals generated by our method can further improve CDC's performance.
Context has been widely used in various computer vision algorithms. For example, it helps in tasks like object detection \cite{gidaris2015object}, semantic segmentation \cite{mottaghi2014role}, referring expressions \cite{yu2016modeling} etc. In videos it has been used for action and activity recognition \cite{hasan2015context,wu2011action}. However, for temporal localization of activities, existing methods do not employ temporal context, which we show is critical for solving this problem.
\section{Approach}
Given a video $\mathcal{V}$ consisting of $T$ frames, our Temporal Context Network (TCN) generates a ranked list of segments $s_1, s_2, ..., s_N$, each associated with a score. Each segment $s_j$ is a tuple $(t_b, t_e)$, where $t_b$ and $t_e$ denote the beginning and end of the segment. For each frame, we compute a $D$-dimensional feature vector representation which is generated using a deep neural network. An overview of our method is shown in Figure \ref{fig:arch}.
\subsection{Proposal Generation}
Our goal in this step is to use a small number of proposals to obtain high recall. First, we employ a temporal sliding window of a fixed length of $L$ frames with 50\% overlap. Suppose each video $\mathcal{V}$ has $M$ window positions. For each window at position $i$ ($i \in [1, M]$), its duration is specified as a tuple $(b_i, e_i)$, where $b_i$ and $e_i$ denote the beginning and end of the segment. We then generate $K$ proposal segments (at $K$ different scales) at each position $i$. For $k \in [1, K]$, the segments are denoted by $(b_i^k, e_i^k)$. The duration of each segment, $L_k$, increases as a power of two, i.e., $L_{k+1} = 2L_k$. This allows us to cover all candidate locations that are likely to contain activities of interest, and we refer to them as activity proposals, $P=\{(b_i^k, e_i^k)\}_{i=1, k=1}^{M, K}$. Figure \ref{fig:demo} illustrates temporal proposal generation. When a proposal segment extends beyond the boundary of a video, we use zero-padding.
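To make the anchor placement concrete, the following sketch enumerates the multi-scale proposals. The function name, the centering of each scale on the base window, and keeping out-of-range segments (to be zero-padded later) are illustrative assumptions rather than the paper's exact implementation.

```python
def generate_proposals(num_frames, L, K):
    """Slide a base window of L frames with 50% overlap; at each window
    position place K proposals whose lengths double with each scale
    (L_1 = L, L_{k+1} = 2 * L_k), centered on the base window."""
    proposals = []
    stride = L // 2                       # 50% overlap between positions
    for start in range(0, num_frames, stride):
        center = start + L // 2           # midpoint of the base window
        for k in range(K):
            half = (L * 2 ** k) // 2      # half-length of scale k
            # segments may extend past the video; zero-padding is assumed
            proposals.append((center - half, center + half))
    return proposals
```

For example, a 16-frame video with $L=4$ and $K=2$ yields 16 candidate segments of lengths 4 and 8.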
\subsection{Context Feature Representation}
We next construct a feature representation for ranking proposals. We use all the features $\mathcal{F} = \{f_1, f_2, ..., f_m\}$ of the untrimmed video as a feature representation for the video. For the $k^{th}$ proposal at window position $i$ ($P_{i,k}$), we uniformly sample from $\mathcal{F}$ to obtain a $D$-dimensional feature representation $Z_{i,k} = \{z_1, z_2, ..., z_n\}$. Here, $n$ is the number of features which are sampled from each segment. To capture temporal context, we again uniformly sample features from $\mathcal{F}$, but this time from $P_{i,k+1}$ --- the proposal at the next scale, centered at the same position. Note that we do not perform average or max-pooling but instead sample a fixed number of frames regardless of the duration of $P_{i,k}$.
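A minimal sketch of this pair-wise sampling is given below, assuming the video features are stored as a $(T, D)$ array. The function names are hypothetical, and edge-clipping stands in for the zero-padding used at video boundaries.

```python
import numpy as np

def sample_pair(features, begin, end, n, context_scale=2.0):
    """Draw n uniformly spaced feature rows from the proposal (begin, end)
    and n more from a window `context_scale` times longer, centered at
    the same position, then stack them into a (2n, D) representation."""
    T = len(features)

    def uniform(b, e):
        idx = np.linspace(b, e - 1, n).round().astype(int)
        return features[np.clip(idx, 0, T - 1)]   # clip instead of zero-pad

    center = (begin + end) / 2.0
    half = context_scale * (end - begin) / 2.0
    inner = uniform(begin, end)                              # proposal scale
    outer = uniform(int(center - half), int(center + half))  # context scale
    return np.concatenate([inner, outer], axis=0)
```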
Logically, a proposal can fall into one of four categories:
\begin{itemize}
\item It is disjoint from a ground-truth interval, in which case the label at the next (larger) scale is irrelevant.
\item It includes a ground-truth interval and the next-scale has partial overlap with that ground truth interval.
\item It is included in a ground-truth interval and the next level has significant overlap with the background (i.e., it is larger than the ground truth interval).
\item It is included in a ground-truth interval and so is the next level.
\end{itemize}
A representation which only considers features inside a proposal cannot distinguish between the last two cases. Hence, whenever a proposal lies inside an activity interval, it is not possible to determine where the activity ends by only considering the features inside the proposal. Therefore, using a context based representation is critical for temporal localization of activities. Additionally, based on how much background the current and next scales cover, it becomes possible to determine whether a proposal is a good candidate.
\begin{figure*}
\center
\includegraphics[width=0.7\linewidth]{figures/figure_2}
\caption{Temporal Context Network applies a two stream CNN on a video for obtaining an intermediate feature representation.}
\label{fig:arch}
\end{figure*}
\subsection{Sampling and Temporal Convolution}
To train the proposal network, we assign labels to proposals based on their overlap with the ground truth, as follows:
\begin{equation}
Label(S_j) =
\begin{cases}
1, & iou(S_j, GT) > 0.7 \\
0, & iou(S_j, GT) < 0.3 \\
\end{cases}
\end{equation}
where $iou(\cdot)$ is intersection over union overlap and $GT$ is a ground truth interval. During training, we construct a mini batch with 1024 proposals with a positive to negative ratio of 1:1.
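The labeling rule above can be sketched as follows. The treatment of proposals whose best overlap falls between 0.3 and 0.7 (returned as None and not sampled here) is an assumption, since the equation leaves that band undefined.

```python
def temporal_iou(seg, gt):
    """Intersection over union of two temporal segments (b, e)."""
    inter = max(0.0, min(seg[1], gt[1]) - max(seg[0], gt[0]))
    union = (seg[1] - seg[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def assign_label(seg, ground_truths, hi=0.7, lo=0.3):
    """1 for a positive proposal, 0 for background, None for the
    ambiguous band that is not sampled into the mini batch."""
    best = max((temporal_iou(seg, gt) for gt in ground_truths), default=0.0)
    if best > hi:
        return 1
    if best < lo:
        return 0
    return None
```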
Given a pair of features $Z_{i,k}$, $Z_{i,k+1}$ from two consecutive scales, we apply temporal convolution to the features sampled from each temporal scale separately, as shown in Figure \ref{fig:arch}. A temporal convolutional neural network \cite{kang2016object} enforces temporal consistency and obtains consistent performance improvements over still-image detections. To aggregate information across scales, we concatenate the two features to obtain a fixed dimensional representation. Finally, two fully connected layers are used to capture context information across scales. A two-way Softmax layer followed by a cross-entropy loss is used at the end to map the predictions to labels (proposal or not).
\subsection{Classification}
Given a proposal with a high score, we need to predict its action class. We use bilinear pooling, computing the outer product of each segment feature and pooling over all of them to obtain the bilinear matrix $bilinear(\cdot)$. Given features $\hat{Z} = [z_1, z_2, ... z_l]$ within a proposal, we conduct bilinear pooling as follows:
\begin{equation}
bilinear(\hat{Z}) = \sum_{i=1}^{l}\hat{Z}_{i}^T \hat{Z}_{i}
\end{equation}
For classification, we pool all $l$ features which are inside the segment and do not perform any temporal sampling. We pass the vectorized bilinear feature $x = bilinear(\hat{Z})$ through a mapping function with signed square root and $l^2$ normalization \cite{ifv}:
\begin{equation}
\phi(x) = \frac{sign(x)\sqrt{x}}{||sign(x)\sqrt{x}||_2}
\end{equation}
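Combining the two equations above, the classification feature can be sketched as below (NumPy, with hypothetical names). The paper applies this to deep features; a toy array suffices to illustrate the shapes.

```python
import numpy as np

def bilinear_feature(Z):
    """Sum of outer products z_i z_i^T over the l features in a proposal,
    vectorized, then passed through the signed square root and l2
    normalization of the mapping phi(x) above."""
    B = sum(np.outer(z, z) for z in Z)        # D x D bilinear matrix
    x = B.reshape(-1)                         # vectorize
    x = np.sign(x) * np.sqrt(np.abs(x))       # signed square root
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x        # l2 normalization
```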
We finally apply a fully connected layer and use a 201-way (200 action classes plus background) Softmax layer at the end to predict class labels. We again use the cross entropy loss function for training. During training, we sample 1024 proposals to construct a mini batch. To balance training, 64 samples are selected as background in each mini-batch. For assigning labels to video segments, we use the same function which is used for generating proposals,
\begin{equation}
Label(S_j) =
\begin{cases}
lb, & iou(S_j, GT) > 0.7 \\
0, & iou(S_j, GT) < 0.3 \\
\end{cases}
\end{equation}
where $iou(\cdot)$ is intersection over union overlap, $GT$ is ground truth and $lb$ is the most dominant class within proposal $S_j$. We use this classifier for the ActivityNet dataset, but it can be replaced with other classifiers as well.
\begin{figure*}[t]
\center
\includegraphics[width=0.9\linewidth]{figures/figure_6-1}
\caption{Performance of our proposal ranker on ActivityNet validation set. (a) The Recall vs IoU for pyramid proposal anchors; (b) The Recall vs IoU for our ranker at 1, 5, 20 proposals; (c) Recall vs number of proposals for our ranker at IoU 0.5, 0.75 and 0.95}
\label{fig:ranker}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.9\linewidth]{figures/figure_6-3}
\caption{The effectiveness of the context-based proposal ranker is shown in these plots. The Recall vs IoU plots show ranker performance at 1, 5, and 20 proposals, with and without context, on the ActivityNet validation set}
\label{fig:withpair}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.9\linewidth]{figures/figure_6-2}
\caption{Comparison of the ranker performance using different relative scales for context-based proposals on the ActivityNet validation set}
\label{fig:scale}
\end{figure*}
\section{Experiments}
In this section, we provide analysis of our proposed temporal context network. We perform experiments on the ActivityNet and THUMOS14 datasets.
\subsection{Implementation details}
We implement the network based on a customized Caffe repository with a Python interface. All evaluation experiments are performed on a workstation with a Titan X (Maxwell) GPU. We initialize our network with pre-trained TSN models \cite{wang2016temporal} and fine-tune them on both action labels and foreground/background labels to capture ``actionness'' and ``backgroundness''. We then concatenate these together as high-level feature input to our proposal ranker and classifier. For the proposal ranker, we use temporal convolution with a kernel size of 5 and a stride of 1, followed by a ReLU activation and average pooling with size 3 and stride 1. The temporal convolution responses are then concatenated and mapped to a fully connected layer with 500 hidden units, which is used to predict the proposal score. To evaluate our method on the detection task, we generate the top K proposals (K is set to 20; we apply non-maximum suppression with a threshold of 0.45 to filter out similar proposals) and classify them separately. While classifying proposals, we also fuse two global video-level priors, using ImageNet shuffle features \cite{ImagenetShuffle} and ``actionness'' features, to further improve classification performance, as shown in \cite{SinghC16}. We also perform an ablation study for the different components of classification. For training the proposal network, we use a learning rate of 0.1. For the classification network, we set the learning rate to 0.001. In both cases, we use a momentum of 0.9 and a weight decay of 5e-5.
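The non-maximum suppression step mentioned above can be sketched for 1-D temporal segments as follows. Greedy suppression by score is a standard choice and an assumption here, as the paper only states the 0.45 threshold.

```python
def temporal_nms(proposals, scores, thresh=0.45):
    """Keep the highest-scoring segment, drop any remaining segment whose
    temporal IoU with it exceeds the threshold, and repeat."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(proposals)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(proposals[i], proposals[j]) <= thresh]
    return keep
```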
\subsection{ActivityNet Dataset}
ActivityNet \cite{caba2015activitynet} is a recently released dataset which contains 203 distinct action classes and a total of 849 hours of videos collected from YouTube. It consists of both trimmed and untrimmed videos. Each trimmed video contains a specific action with annotated segments. Untrimmed videos contain one or more activities with background involved. On average, each activity category has 137 untrimmed videos. Each video on average has 1.41 activities which are annotated with beginning and end points. This benchmark is designed for three applications: untrimmed video classification, trimmed activity classification, and untrimmed activity detection. Here, we evaluate our performance on the detection task in untrimmed videos. We use the mean average precision (mAP) averaged over multiple overlap thresholds to evaluate detection performance. Since test labels of ActivityNet are not released, we perform ablation studies on the validation data and test our full model on the evaluation server.
\textbf{Proposal anchors} We sample pair-wise proposals within a temporal pyramid. In Figure \ref{fig:ranker}(a), we present the recall for pyramid proposal anchors with different numbers of levels on the ActivityNet validation set. This figure shows the theoretical best recall one can obtain using such a pyramid. Notice that even a 4-level pyramid with 64 proposals in total already achieves better coverage than the baseline provided in the challenge, which uses 90 proposals. This allows our proposal ranker to achieve high recall with a low number of proposals.
\textbf{Performance of our ranker} We evaluate our ranker with different numbers of proposals. Figure \ref{fig:ranker}(b) shows the average recall at various overlap thresholds with top 1, top 5 and top 20 proposals. Even when using {\em one} proposal, our ranker outperforms the ActivityNet proposal baseline by a significant margin when the overlap threshold is greater than 0.5. With top 20 proposals, our ranker can squeeze out most of the performance from pyramid proposal anchors. We also evaluate the performance of our ranker by measuring recall as the number of proposals varies (shown in Figure \ref{fig:ranker}(c)). Recall at IoU 0.5 increases to 90\% with just 20 proposals. At higher IoU, increasing the number of proposals does not increase recall significantly.
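The recall curves in Figure \ref{fig:ranker} measure, for each IoU threshold, the fraction of ground-truth segments covered by at least one of the top-$k$ proposals. A sketch of that metric (with hypothetical names) is:

```python
def recall_at_k(ranked_proposals, ground_truths, k, iou_thresh):
    """Fraction of ground-truth segments matched (IoU above threshold)
    by at least one of the top-k ranked proposals."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    top = ranked_proposals[:k]
    hit = sum(1 for gt in ground_truths
              if any(iou(p, gt) >= iou_thresh for p in top))
    return hit / len(ground_truths) if ground_truths else 0.0
```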
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c}
\hline
& mAP@.5 & mAP@.75 & mAP@.95 \\
\hline
without context & 15.91 & 3.11 & 0.13 \\
with context & 36.17 & 21.12 & 3.89 \\
\hline
\end{tabular}
\caption{Evaluation of detection performance with and without context on the ActivityNet validation set}
\label{tab:withpair}
\end{table}
\textbf{Effectiveness of temporal context}
We contend that temporal context for ranking proposals is critical for localization. To evaluate this claim, we conduct several experiments. In Figure \ref{fig:withpair}, we compare the performance of the ranker with and without temporal context. Using only the best proposal, without context, the recall drops significantly at high IoU (IoU $>$ 0.5). This shows that for precise localization of boundaries, temporal context is critical. Using top 5 and top 20 proposals, without context, the recall is marginally worse. This is expected because as the number of proposals increases, there is a higher likelihood of one having a good overlap with a ground-truth. Therefore, recall results using a single proposal are most informative. We also compute detection metrics on the ActivityNet validation set to evaluate the influence of context. Table \ref{tab:withpair} also shows that detection mAP is much higher when using the ranker with context based proposals. These experiments demonstrate the effectiveness of our method.
\textbf{Varying context window for ranking proposals}
Another important component for ranking proposals is the scale of the context features which are associated with a proposal. Consider a case in which a proposal is contained within the ground-truth interval. If the context scale is large, the ranker may not be able to distinguish between good and bad proposals, as it always sees a significant amount of background. If the scale is small, there may not be enough context to determine whether the proposal is contained within the ground truth or not. Therefore, we conduct an experimental study varying the scale of the context features while ranking proposals. In Figure \ref{fig:scale}, we observe that performance improves up to a scale of 2. We evaluate the performance of the ranker at different scales on the ActivityNet validation set. In Table \ref{tab:scale} we show the impact of varying temporal context at different overlap thresholds, which validates our claim that adding too much temporal context hurts performance, but not using context at all reduces performance by a much larger margin. For example, changing the scale from 2 to 3 only drops performance by 3\%, but changing it from 1.5 to 1 decreases mAP@.5 and mAP@.75 by 15\% and 12\% respectively.
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c}
\hline
Context Scale & mAP@.5 & mAP@.75 & mAP@.95 \\
\hline
1 & 15.91 & 3.11 & 0.13 \\
1.5 & 30.51 & 15.56 & 2.23 \\
2 & 36.17 & 21.12 & 3.89 \\
2.5 & 36.04 & 17.08 & 0.92 \\
3 & 33.29 & 14.35 & 1.03 \\
\hline
\end{tabular}
\caption{Impact of varying temporal context at different overlap thresholds on ActivityNet validation set}
\label{tab:scale}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c}
\hline
\#Proposal/Video & mAP@.5 & mAP@.75 & mAP@.95 \\
\hline
1 & 25.70 & 16.08 & 2.80 \\
5 & 34.13 & 20.72 & 3.89 \\
10 & 35.52 & 21.02 & 3.89 \\
20 & 36.17 & 21.12 & 3.89 \\
50 & 36.44 & 21.15 & 3.90 \\
\hline
\end{tabular}
\caption{Impact of the number of proposals on mAP on the ActivityNet validation set}
\label{tab:topk}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{3}{c|}{Components} & mAP@.5 & mAP@.75 & mAP@.95 \\
\multicolumn{1}{c}{B.} & \multicolumn{1}{c}{F.} & \multicolumn{1}{c}{G.} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{} \\
\hline
\checkmark & \checkmark & \checkmark & 36.17 & 21.12 & 3.89 \\
\checkmark & \checkmark & $\times$ & 33.83 & 20.05 & 3.77\\
\checkmark & $\times$ & $\times$ & 30.31 & 17.80 & 2.82\\
$\times$ & $\times$ & $\times$ & 26.35 & 15.27 & 2.66\\
\hline
\end{tabular}
\caption{Ablation study for detection performance using top 20 proposals on the ActivityNet validation set. B - Bilinear, F - Flow, G - Global prior}
\label{tab:abl}
\end{table}
\textbf{Influence of number of proposals}
We also evaluate the influence of the number of proposals on detection performance. Table \ref{tab:topk} shows that our method does not require a large number of proposals to reach its highest mAP. This demonstrates the advantages of both our proposal ranker and classifier.
\textbf{Ablation study}
We conduct a series of ablation studies to evaluate the importance of each component used in our classification model. Table \ref{tab:abl} considers three components: ``B'' stands for ``using bilinear pooling'', ``F'' stands for ``using flow'' and ``G'' stands for ``using global priors''. We can see from the table that each component plays a significant role in improving performance.
\begin{table}[]
\centering
\small
\begin{tabular}{c|c|c|c|c}
\hline
\multicolumn{5}{c}{Evaluation Server} \\
\hline
Method & mAP@.5 & mAP@.75 & mAP@.95 & Average \\
\hline
QCIS\cite{WangUTS} & 42.48 & 2.88 & 0.06 & 14.62\\
UPC\cite{Montes_2016_NIPSWS} & 22.37 & 14.88 & 4.45 & 14.81\\
UMD\cite{Singh_2016_CVPR} & 28.67 & 17.78 & 2.88 & 17.68\\
Oxford\cite{SinghC16} & 36.40 & 11.05 & 0.14 & 17.83\\
\hline
\textbf{Ours} & \textbf{37.49} & \textbf{23.47} & \textbf{4.47} & \textbf{23.58} \\
\hline
\end{tabular}
\caption{Comparison with state-of-the-art methods on the ActivityNet evaluation server using top 20 proposals}
\label{tab:stoa}
\end{table}
\textbf{Comparison with state-of-the-art}
We compare our method with the state-of-the-art methods \cite{WangUTS, Montes_2016_NIPSWS, Singh_2016_CVPR, SinghC16} submitted during the CVPR 2016 challenge. We submit our results to the evaluation server to measure performance on the test set. At 0.5 overlap, our method is only worse than \cite{WangUTS}. However, that approach was optimized for 0.5 overlap, and its performance degrades significantly (to 2.88\% at 0.75 overlap) when mAP at 0.75 or 0.95 overlap is measured. Even though frame-level predictions using a bi-directional LSTM are used in \cite{Singh_2016_CVPR}, our performance is better when mAP is measured at 0.75 overlap. This is because \cite{Singh_2016_CVPR} only performs simple grouping of contiguous segments obtained at multiple detection thresholds, instead of using a proposal based approach. Hence, it is likely to perform worse on longer action segments.
\subsection{The THUMOS14 Dataset}
We also evaluate our framework on the THUMOS14 dataset \cite{jiang2014thumos}, which contains 20 action categories from sports. The validation set contains 1010 untrimmed videos, of which 200 contain positive samples. The testing set contains 1574 untrimmed videos, of which only 213 have action instances. We exclude the remaining background videos from our experiments.
Note that solutions for action and activity detection may differ in general, as activities can be very long (minutes) while actions last just a few seconds. Due to their long duration, evaluation at high overlap (e.g., 0.8) makes sense for activities, but not for actions. Nevertheless, we also train our proposed framework on the validation set of THUMOS14 and test on the testing set. Our model also outperforms state-of-the-art methods on proposal metrics by a significant margin, which shows the good generalization ability of our approach.
\textbf{Performance of our ranker}
Our proposal ranker outperforms existing algorithms such as SCNN \cite{scnn_shou_wang_chang_cvpr16} and DAPs \cite{escorcia2016daps}. We report proposal performance both as average recall computed over IoU thresholds from 0.5 to 1 with a step of 0.05 (Table \ref{tab:thumos_proposal1}) and as recall at IoU 0.5 (Table \ref{tab:thumos_proposal2}), using 10, 50, 100, and 500 proposals. Our proposal ranker performs consistently better than previous methods, especially when using a small number of proposals.
In Table \ref{tab:thumos_proposal3}, it is clear that the proposal ranker's performance improves significantly when using a pair of context windows as input. Hence, it is important to use context features for localization in videos, which has been largely ignored in previous state-of-the-art activity detection methods.
\textbf{Comparison with state-of-the-art}
Using off-the-shelf classifiers and our proposals, we also demonstrate a noticeable improvement in detection performance on THUMOS14. Here, we compare our temporal context network with DAPs \cite{escorcia2016daps}, PSDF \cite{psdf_cvpr16}, FG \cite{fg_cvpr16}, SCNN \cite{scnn_shou_wang_chang_cvpr16} and CDC \cite{cdc_shou_cvpr17}. We replace the S-CNN proposals originally used in CDC with our proposals. For scoring the detections in CDC, we multiply our proposal scores with CDC's classification scores. We show that our proposals further benefit CDC and improve detection performance consistently at different overlap thresholds.
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method}& \multicolumn{4}{c}{Avg.Recall [0.5:0.05:1]} \\
& \multicolumn{1}{c}{@10}& \multicolumn{1}{c}{@50}& \multicolumn{1}{c}{@100}& \multicolumn{1}{c}{@500}\\
\hline
DAPs& 3.0& 11.7& 20.1& 46.7\\
SCNN& 5.5& 16.6& 24.8& 48.3\\
Ours& 7.7& 20.5& 29.6& 49.2\\
\hline
\end{tabular}
\caption{Average Recall from IoU 0.5 to 1 with step size 0.05 for our proposals and other methods on the THUMOS14 testing set}
\label{tab:thumos_proposal1}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method}& \multicolumn{4}{c}{Recall(IoU=0.5)}\\
&\multicolumn{1}{c}{@10}& \multicolumn{1}{c}{@50}& \multicolumn{1}{c}{@100}& \multicolumn{1}{c}{@500}\\
\hline
DAPs& 8.4& 29.2& 46.9& 85.5\\
SCNN& 13.0& 35.2& 49.6& 84.1\\
Ours& 17.1& 42.8& 59.8& 88.7\\
\hline
\end{tabular}
\caption{Recall evaluation at IoU 0.5 between our proposals and state-of-the-art methods on THUMOS14 testing set}
\label{tab:thumos_proposal2}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{c|c|c}
\hline
Method& Avg.Recall@100& mAP@0.5\\
\hline
Ours w/o Context& 22.5& 20.5\\
Ours w/ Context& 29.6& 25.6\\
\hline
\end{tabular}
\caption{Evaluation of proposal and detection performance with and without context on the THUMOS14 testing set}
\label{tab:thumos_proposal3}
\end{table}
\begin{table}[]
\centering
\small
\begin{tabular}{c|c|c|c|c}
\hline
Method& mAP@.4& mAP@.5& mAP@.6& mAP@.7\\
\hline
DAPs\cite{escorcia2016daps}& ----& 13.9& ----& ----\\
FG\cite{fg_cvpr16}& 26.4& 17.1& ----& ----\\
PSDF\cite{psdf_cvpr16}& 26.1& 18.8& ----& ----\\
SCNN\cite{scnn_shou_wang_chang_cvpr16}& 28.7& 19.0& ----& ----\\
SCNN+CDC\cite{cdc_shou_cvpr17}& 29.4& 23.3& 13.1& 7.9\\
\hline
\textbf{Ours}+CDC& \textbf{33.3}& \textbf{25.6}& \textbf{15.9}& \textbf{9.0}\\
\hline
\end{tabular}
\caption{Performance of state-of-the-art detectors on the THUMOS14 testing set}
\label{tab:thumos_detection}
\end{table}
\begin{figure*}[t]
\center
\includegraphics[width=0.85\linewidth]{figures/figure_5.png}
\caption{Visualization of the top 5 ranking results; blue bars denote the ground truth while green bars represent proposals.}
\label{fig:viz}
\end{figure*}
\section{Qualitative Results}
We show some qualitative results for TCN, with and without context. Note that only the top 5 proposals are shown. The ground truth is shown in blue while predictions are shown in green. It is evident that when context is not used, multiple proposals lie inside or just at the boundary of ground-truth intervals. Therefore, although their locations are near the actual interval, their boundaries are inaccurate. Hence, when detection metrics are computed, these nearby detections are marked as false positives, leading to a drop in average precision. When context is used, the proposal boundaries are significantly more accurate.
\section{Conclusion}
We demonstrated that temporal context is helpful for localizing activities in videos. We analyzed the impact of temporal proposals by studying their recall characteristics at multiple overlap thresholds, and varied the context window to study the importance of temporal context for localization. Finally, we demonstrated state-of-the-art performance on two challenging public datasets.
\section*{Acknowledgement}
\vspace*{-0.1cm}
The authors acknowledge the University of Maryland supercomputing resources \url{http://www.it.umd.edu/hpcc} made available for conducting the research reported in this paper.
{\small
\bibliographystyle{ieee}
\setcounter{equation}{0}
Shape optimization has undergone intensive development, especially in the last quarter of the previous century, and we quote several monographs devoted to this subject: Pironneau
\cite{Pironneau1984},
Haslinger and Neittaanm\"aki \cite{Haslinger1996},
Sokolowski and Zolesio \cite{Sokolowski1992},
Delfour and Zolesio \cite{Delfour2001},
Neittaanm\"aki, Sprekels and Tiba \cite{NS_Tiba2006},
Bucur and Buttazzo \cite{Bucur2005},
Henrot and Pierre \cite{Henrot2005}, and Allaire \cite{A}, where more details on the history of the subject and comprehensive references can be found.
It is to be noted that, in general, just certain variants
of boundary variations are taken into account, while
topological variations of the unknown domains are
frequently not investigated.
A typical example of a shape optimization problem, defined
on a given family of domains $\Omega \in \mathcal{O}$ (in general, it is assumed that $\Omega \subset D$, a
prescribed bounded domain), has
the following structure:
\begin{eqnarray}
\min_{\Omega\in\mathcal{O}} \int_\Lambda j\left(\mathbf{x}, y_\Omega(\mathbf{x})\right)d\mathbf{x},
\label{1.1}\\
A\, y_\Omega = f \hbox{ in }\Omega,
\label{1.2}\\
B\,y_\Omega = 0 \hbox{ on }\partial\Omega
\label{1.3}
\end{eqnarray}
where $\Lambda$ may be $\Omega$, some fixed given
subdomain $E\subset \Omega$, or $\partial\Omega$;
$B$ is some boundary operator expressing the boundary
condition, $A$ is some differential operator,
$f\in L^p(D)$, $p>2$, is given, and $j(\cdot,\cdot)$ is
a Carath\'eodory function.
Further constraints on the
unknown domains $\Omega$ or on the state $y_\Omega$, and more general cost
functionals, may be taken into account. Regularity assumptions
on $\Omega\in\mathcal{O}$, on $j(\cdot,\cdot)$, and other
hypotheses will be imposed as the need arises.
Many geometric optimization
problems arise in mechanics: minimize the thickness, the
volume, the stresses, etc., in a plate, a beam, a curved rod in
dimension three, an arch, a shell. Due to the
formulation of the mechanical models, the geometric
characteristics of the object (thickness, curvature)
enter as coefficients in the governing differential system.
Consequently, such geometric optimization problems take the
form of an optimal control problem in a given domain,
with the control acting in the coefficients. See \cite{bst}, \cite{alst},
\cite{NS_Tiba2006} Ch VI,
where detailed presentations, including numerical
examples, may be found.
In fact, general shape optimization problems
(\ref{1.1})-(\ref{1.3}) have a structure similar to that of optimal
control problems, the difference being that the minimization
parameter is the unknown geometry itself, $\Omega\in\mathcal{O}$.
It is natural to ask for a method that
reduces or approximates general optimal design problems to or via optimal
control theory, and some examples already appear in the
classical monograph of Pironneau \cite{Pironneau1984}.
In the case of Dirichlet boundary conditions several
approaches have been developed \cite{NP_Tiba2009}, \cite{N_Tiba2012}, \cite{MT2019},
\cite{MT2019a} allowing both shape
and topology optimization. Essential ingredients are
functional variations that combine both aspects and the
recent implicit parametrization method based on the
representation of the geometry via iterated Hamiltonian
systems \cite{Tiba2013},\cite{N_Tiba2015},\cite{Tiba2018},\cite{Tiba2018a}.
It turns out that this approach is very general and
we show here that it works in the case of Neumann boundary
conditions as well. This remains true for the Robin boundary conditions, nonlinear equations,
etc., but we do not examine now such questions.
The methodology is of fixed domain type and it has important
advantages at the numerical level: it avoids remeshing and recomputing the mass
matrix in
each iteration of the algorithm. Related ideas are also applicable in free boundary problems,
see \cite{HMT2016}, \cite{HMT2018}, and in optimization and control \cite{Tiba2020}.
Concerning topological variations, we underline that the well known level set method
\cite{OF}, \cite{OS}, \cite{A}, \cite{MAJ} is essentially different from our approach.
In our method, while we also use level functions, no Hamilton-Jacobi equation is needed
and simple ordinary differential Hamiltonian systems can handle the unknown geometry
and its variations. We work in dimension two, $D \subset \mathbb{R}^2$, since the
important periodicity argument is based on the Poincar\'e-Bendixson theorem
\cite{Hirsch2014}, \cite{Pon}, and certain related developments.
This is a case of interest in shape optimization.
The paper is organized as follows. In the next Section, we
collect some preliminaries and we give the precise
formulation of the problem. Both distributed and boundary
observations are taken into account. In Section 3 we
introduce the fixed domain approximation process as an
optimal control problem, we prove a general approximation property under very weak
conditions and we also obtain some error estimates. As a corollary of the employed
methods, an existence result is proved as well. Section 4 is devoted to the
differentiability properties of our approach, that give the basis for numerical
algorithms of gradient type. A key technical development is the proof of the
differentiability of the period in Hamiltonian systems, with respect to functional
variations. Discretization and numerical examples are discussed in the last two Sections.
\section{Problem formulation and preliminaries}
\setcounter{equation}{0}
Let $\mathcal{O}$ be a given family of open, connected sets,
$\Omega \subset D$, not necessarily simply connected, where
$D\subset\mathbb{R}^2$ is a bounded domain and $\Omega$,
$D$ have both $\mathcal{C}^{1,1}$ boundaries.
In each $\Omega\in \mathcal{O}$, we consider the Neumann
boundary value problem
\begin{eqnarray}
-\Delta y_\Omega + y_\Omega= f \hbox{ in }\Omega,
\label{2.1}\\
\frac{\partial y_\Omega}{\partial n} = 0 \hbox{ on }
\partial\Omega,
\label{2.2}
\end{eqnarray}
where $f\in L^p(D)$, $p>2$, is given. It is known that
(\ref{2.1}), (\ref{2.2}) has a unique solution
$y_\Omega\in W^{2,p}(\Omega)$; more general elliptic operators
may be taken into account in (\ref{2.1}), or the regularity
conditions on the boundary may be relaxed; see Grisvard
\cite{Grisvard1985}.
Here, it is important to work in $\mathbb{R}^2$ since
Poincar\'e-Bendixson type arguments are essential in the
proof of the global existence result for the Hamiltonian
system (\ref{2.9})-(\ref{2.11}) that is introduced in the sequel for the description
of the unknown geometries.
In fact, all the other arguments to be used in this work
are valid in arbitrary dimension, where iterated Hamiltonian
systems are necessary for the description of the geometry
and their solution is local \cite{Tiba2018}.
We associate to the system (\ref{2.1}), (\ref{2.2}) a cost
functional that combines distributed and boundary observation
(the necessary regularity conditions are detailed in the sequel):
\begin{equation}\label{2.3}
\min_{\Omega\in\mathcal{O}}
\left\{
\int_E J\left(\mathbf{x}, y_\Omega(\mathbf{x})\right)
d\mathbf{x}
+\int_{\partial\Omega}
j\left(\mathbf{x}, y_\Omega(\mathbf{x})\right)d\sigma
\right\},
\end{equation}
where $E\subset\subset D$ is a given subdomain such that
$E\subset\Omega$ for any $\Omega\in\mathcal{O}$ and
$J(\cdot,\cdot)$, $j(\cdot,\cdot)$ are Carath\'eodory
functions. More restrictions (for instance, on the state
$y_\Omega$) may be added to the shape optimization problem
(\ref{2.1})-(\ref{2.3}), denoted by $(\mathcal{P})$.
Further assumptions will be formulated as the need arises.
The approach based on functional variations \cite{N_Tiba2012}, \cite{NP_Tiba2009},
\cite{Tiba2018a} assumes
that the family of admissible domains $\mathcal{O}$ is
obtained starting from a family
$\mathcal{F}\subset\mathcal{C}(\overline{D})$ of level
functions via the relation:
\begin{equation}\label{2.4}
\Omega=\Omega_g=int\left\{ \mathbf{x}\in D;\ g(\mathbf{x})
\leq 0\right\},\quad g\in \mathcal{F}.
\end{equation}
While $\Omega_g$ defined in (\ref{2.4}) is an open set and
may have many connected components, the domain $\Omega_g$
that we use in the sequel is the component that contains
$E$. This is possible if we assume
\begin{equation}\label{2.5}
g(\mathbf{x})\leq 0,\quad \forall \mathbf{x}\in E,
\quad \forall g \in \mathcal{F}.
\end{equation}
Another variant, possible to be used in the definition of
the domain $\Omega_g$, is to assume that
\begin{equation}\label{2.6}
\mathbf{x}_0\in \partial\Omega_g,
\quad \forall g \in \mathcal{F}
\end{equation}
for some $\mathbf{x}_0\in D\setminus\overline{E}$, given. One has to impose on
the family $\mathcal{F}$ the simple constraint
\begin{equation}\label{2,5}
g(\mathbf{x}_0) = 0,
\quad \forall g \in \mathcal{F}.
\end{equation}
In this context, it is important to consider the closed
bounded set:
\begin{equation}\label{2.7}
G=\left\{ \mathbf{x}\in D;\ g(\mathbf{x})=0\right\}
\end{equation}
associated to any $g \in \mathcal{F}$.
If $\mathcal{F}\subset\mathcal{C}(\overline{D})$ without
further conditions, then $meas(G)>0$ is possible.
We further assume, see \cite{Tiba2018a}, that
$\mathcal{F}\subset\mathcal{C}^1(\overline{D})$ and
\begin{equation}\label{2.8}
|\nabla g( \mathbf{x})| > 0,\quad \forall\mathbf{x}\in G,
\quad\forall g \in \mathcal{F}.
\end{equation}
Then, by (\ref{2.6})-(\ref{2.8}) and the implicit function
theorem, we get $G=\partial\Omega_g$ and the Hamiltonian
system
\begin{eqnarray}
z_1^\prime(t) & = & -\frac{\partial g}{\partial x_2}\left(z_1(t),z_2(t)\right),\quad t\in I_g,
\label{2.9}\\
z_2^\prime(t) & = & \frac{\partial g}{\partial x_1}\left(z_1(t),z_2(t)\right),\quad t\in I_g,
\label{2.10}\\
\left(z_1(0),z_2(0)\right)& = & \mathbf{x}_0
\in \partial\Omega_g,
\label{2.11}
\end{eqnarray}
where $I_g$ is the local existence interval for
(\ref{2.9})-(\ref{2.11}), gives a local parametrization of
$\partial\Omega_g$ around $\mathbf{x}_0$, \cite{Tiba2013}.
The solution is unique due to the Hamiltonian structure \cite{Tiba2018}.
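As a minimal numerical illustration (the level function below is our own choice, not an example from the paper), one can integrate (\ref{2.9})-(\ref{2.11}) directly: since $g$ is a first integral of the Hamiltonian field, the computed trajectory never leaves $G$ and traces the boundary $\partial\Omega_g$.

```python
import math

# Sketch: trace the boundary of Omega_g via the Hamiltonian system
# (2.9)-(2.11) for the hand-picked level function g(x) = x1^2 + x2^2 - 1
# with x0 = (1, 0); here G is the unit circle and the exact period is pi.

def grad_g(x1, x2):
    return (2.0 * x1, 2.0 * x2)

def rhs(z):
    g1, g2 = grad_g(z[0], z[1])
    return (-g2, g1)              # (2.9)-(2.10): rotate grad g by +90 degrees

def rk4(f, s, h):
    k1 = f(s)
    k2 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = f(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

z, h = (1.0, 0.0), 1e-3           # x0 lies on G, since g(1, 0) = 0
drift = 0.0
for _ in range(int(math.pi / h)): # integrate over one (known) period
    z = rk4(rhs, z, h)
    drift = max(drift, abs(z[0] ** 2 + z[1] ** 2 - 1.0))
# g is conserved along the flow: the trajectory stays on G, and after
# time T = pi it has returned (numerically) to x0.
print(drift, z)
```

The conservation of $g$ along the flow is exactly what makes (\ref{2.9})-(\ref{2.11}) a parametrization of the zero level set.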
We also assume that
\begin{equation}\label{2.12}
g( \mathbf{x}) > 0,\quad \forall\mathbf{x}\in \partial D,
\quad\forall g \in \mathcal{F}
\end{equation}
which ensures that $G\cap \partial D=\emptyset$ for
$g \in \mathcal{F}$.
Notice that the family $\mathcal{O}$ of domains defined by
(\ref{2.4})-(\ref{2.5}) is very rich: the domains may be
multiply connected, and this is one reason
why the above approach combines boundary and topological
variations in shape optimization.
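To make this richness concrete, here is a small grid-based check (the annulus level function below is our own illustrative choice): a single smooth level function $g$ already encodes a multiply connected admissible domain.

```python
from collections import deque

# For g(x) = -(|x|^2 - 1)(4 - |x|^2), the set Omega_g = {g < 0} is the
# annulus 1 < |x| < 2: connected, but not simply connected.  We count
# 4-connected grid components of {g < 0} and of its complement in
# D = (-3, 3)^2; the complement splits into the hole and the exterior.

def g(x1, x2):
    r2 = x1 * x1 + x2 * x2
    return -(r2 - 1.0) * (4.0 - r2)

def components(pred, n=200, lo=-3.0, hi=3.0):
    """Number of 4-connected grid components of the set {pred true}."""
    h = (hi - lo) / n
    inside = [[pred(lo + i * h, lo + j * h) for j in range(n)]
              for i in range(n)]
    seen = [[False] * n for _ in range(n)]
    count = 0
    for i in range(n):
        for j in range(n):
            if inside[i][j] and not seen[i][j]:
                count += 1
                queue = deque([(i, j)])
                seen[i][j] = True
                while queue:          # breadth-first flood fill
                    a, b = queue.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if 0 <= u < n and 0 <= v < n \
                                and inside[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
    return count

n_omega = components(lambda a, b: g(a, b) < 0)    # Omega_g itself
n_compl = components(lambda a, b: g(a, b) >= 0)   # hole + exterior
print(n_omega, n_compl)
```

The sublevel set is one connected domain, while its complement has two components, i.e.\ $\Omega_g$ has a hole: topology comes for free from the functional description (\ref{2.4}).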
Moreover, under hypothesis (\ref{2.8}), we get
$\partial\Omega_g$ of class $\mathcal{C}^1$ and more
regularity can be obtained if more regularity is imposed
on $\mathcal{F}$. This ensures the previously mentioned
regularity properties for the solution of
(\ref{2.1}), (\ref{2.2}) and the cost (\ref{2.3}) and
its approximation (in the next section), are well defined.
It is proved in \cite{Tiba2018a}, that hypotheses (\ref{2.8}) and
(\ref{2.12}) are sufficient for the global existence
in (\ref{2.9})-(\ref{2.11}).
\begin{theorem}\label{theo:2.1}
For any $\mathbf{x}_0\in D\setminus E$, the solution of
(\ref{2.9})-(\ref{2.11}) is periodic and $I_g$ may
be chosen as its period, $I_g = [0,T_g]$.
\end{theorem}
Namely, the limit cycle situation from the
Poincar\'e-Bendixson theory is not possible here. If
$\partial\Omega_g$ is not connected, its complete
description may be obtained via
(\ref{2.9})-(\ref{2.11}), by choosing an initial condition
on each component. Another crucial property proved in \cite{Tiba2018a} is
\begin{theorem}\label{theo:2.2}
Under the above hypotheses, the compact set $G$ has
a finite number of connected components, for any fixed
$g \in \mathcal{F}$.
\end{theorem}
\noindent
Clearly, the number of connected components may be
unbounded over the whole family $\mathcal{F}$.
\section{Approximation and existence}
\setcounter{equation}{0}
The approximation of shape optimization problems via
cost penalization was introduced in \cite{Tiba2018a} and further
developed in \cite{MT2019}. The idea is to penalize the
boundary condition on the unknown domains. This is
possible due to the Hamiltonian representation of the
unknown geometries, Thm. \ref{theo:2.1} and Thm.
\ref{theo:2.2}. We use here a penalization variant that
has good differentiability properties and is formulated as
an optimal control problem ($\epsilon>0$):
\begin{eqnarray}
&&\min_{g,u}
\left\{
\int_{E} J\left(\mathbf{x},y(\mathbf{x})\right)
d\mathbf{x}
+
\int_{I_g}
j\left(\mathbf{z}(t), y(\mathbf{z}(t))\right)
\sqrt{ (z_1^\prime(t))^2 + (z_2^\prime(t))^2} dt
\right.
\nonumber\\
&&
\left.
+\frac{1}{\epsilon}
\int_{I_g}
\left[
\nabla y(z_1(t),z_2(t))\cdot
\frac{\nabla g(z_1(t),z_2(t)) }{
|\nabla g(z_1(t),z_2(t)) |}
\right]^2
\sqrt{ (z_1^\prime(t))^2 + (z_2^\prime(t))^2} dt
\right\}
\label{3.1}
\end{eqnarray}
subject to
\begin{eqnarray}
-\Delta y + y& = & f+ g_+^2 u,\quad\hbox{in } D,
\label{3.2}\\
y & = & 0,\quad\hbox{on } \partial D,
\label{3.3}
\end{eqnarray}
and (\ref{2.5}).
Above $\mathbf{z}(t)=(z_1(t),z_2(t))$ is the solution of
(\ref{2.9})-(\ref{2.11}), the state $y\in W^{2,p}(D)
\cap H_0^1(D)$ from (\ref{3.2}), (\ref{3.3}) clearly
depends on $g \in \mathcal{F}$ and $u$ is measurable
such that $g_+^2 u \in L^p(D)$, $p>2$.
In dimension 2, we have $y\in \mathcal{C}^1(\overline{D})$
by the Sobolev theorem and all the terms in (\ref{3.1})
make sense.
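The factor $g_+^2$ in (\ref{3.2}) is what keeps the extended problem smooth: the map $\mathbf{x}\mapsto\max(g(\mathbf{x}),0)^2$ is of class $\mathcal{C}^1$ across $G$, with gradient $2g_+\nabla g$, which vanishes on $G$. A quick finite-difference sketch (the level function and the test point are our own choices) illustrates this:

```python
# Check that g_+^2 = max(g, 0)^2 is differentiable across G, with
# derivative 2 g_+ dg/dx1, hence zero derivative on G itself.
# Hand-picked test data: g(x) = x1^2 + x2^2 - 1 at the point (1, 0) in G.
g = lambda x1, x2: x1 * x1 + x2 * x2 - 1.0
gp2 = lambda x1, x2: max(g(x1, x2), 0.0) ** 2

eps = 1e-6
fd = (gp2(1.0 + eps, 0.0) - gp2(1.0 - eps, 0.0)) / (2.0 * eps)
exact = 2.0 * max(g(1.0, 0.0), 0.0) * 2.0 * 1.0   # 2 g_+ dg/dx1 = 0 here
print(fd, exact)
```

The centered difference matches the formula $2g_+\partial_1 g=0$ at the interface, while $\max(g,0)$ alone would only be Lipschitz there.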
The penalization term in (\ref{3.1}) is a detailed
formula for
$$
\int_{\partial \Omega_g}
\left|
\frac{\partial y}{\partial n}
\right|^2 d\sigma
$$
based on the Hamiltonian representation
(\ref{2.9})-(\ref{2.11}) of $\partial \Omega_g$ and the fact
that the unit normal to $\partial \Omega_g=G$ is given by
$\frac{\nabla g(z_1(t),z_2(t)) }{
|\nabla g(z_1(t),z_2(t)) |}$ in $(z_1(t),z_2(t))\in \partial
\Omega_g$ and it is well defined due to (\ref{2.8}). In case
$\partial\Omega_g$ has several connected components (their
number is finite by Thm. \ref{theo:2.2}) then the penalization
term is replaced by a finite sum of similar terms, with some
initial condition in (\ref{2.9})-(\ref{2.11}) fixed on each
component.
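As a sanity check of this formula, one can evaluate the parametrized boundary integral numerically on hand-picked test data of our own: the level function $g(\mathbf{x})=x_1^2+x_2^2-1$, whose Hamiltonian flow is $\mathbf{z}(t)=(\cos 2t,\sin 2t)$ with period $T=\pi$, and a fictitious "state" $y(\mathbf{x})=x_1$ that does not come from (\ref{3.2}). The exact value of $\int_{\partial\Omega_g}|\partial y/\partial n|^2\,d\sigma$ over the unit circle is then $\pi$:

```python
import math

# Evaluate the penalization term of (3.1) along z(t) = (cos 2t, sin 2t),
# the exact solution of (2.9)-(2.11) for g(x) = |x|^2 - 1, x0 = (1, 0),
# with period T = pi.  The test function y(x) = x1 gives dy/dn = cos(2t)
# on the unit circle, so the exact boundary integral equals pi.

def grad_g(z):
    return (2.0 * z[0], 2.0 * z[1])

def grad_y(z):
    return (1.0, 0.0)

def penalization_integral(n=20000):
    T, total = math.pi, 0.0
    for k in range(n):
        t = (k + 0.5) * T / n                        # midpoint rule
        z = (math.cos(2.0 * t), math.sin(2.0 * t))
        g1, g2 = grad_g(z)
        norm = math.hypot(g1, g2)
        dydn = (grad_y(z)[0] * g1 + grad_y(z)[1] * g2) / norm
        speed = math.hypot(-2.0 * math.sin(2.0 * t),
                           2.0 * math.cos(2.0 * t))  # |z'(t)|
        total += dydn ** 2 * speed * (T / n)
    return total

val = penalization_integral()
print(val)   # converges to pi
```

The arc-length factor $|\mathbf{z}^\prime(t)|$ is what converts the $t$-integral over $[0,T]$ into the surface measure $d\sigma$ on $\partial\Omega_g$.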
It is to be noticed that, in the ``extended'' equation
(\ref{3.2}), (\ref{3.3}), we have Dirichlet boundary conditions,
while the original state system (\ref{2.1}), (\ref{2.2}) is a
Neumann boundary value problem. It turns out that the
approximation properties of (\ref{3.1})-(\ref{3.3}) remain valid
even with this change of boundary conditions and we want to
stress this property. In fact, it is also easier to work
with (\ref{3.3}) in the finite element discretization, in the
next sections.
\begin{proposition}\label{prop:3.1}
Let $J(\cdot,\cdot)$ and $j(\cdot,\cdot)$ be
Carath\'eodory functions on $D\times\mathbb{R}$, bounded from
below by a constant and let
$\mathcal{F}\subset \mathcal{C}^2(\overline{D})$ satisfy
(\ref{2.8}), (\ref{2.12}). Denote by $[y_n^\epsilon ,g_n^\epsilon ,u_n^\epsilon]$ a minimizing sequence in the penalized problem (\ref{3.1})-(\ref{3.3}), (\ref{2.5}). Then, on a subsequence
denoted by $n(m)$, the pairs
$[\Omega_{g_{n(m)}^\epsilon}, y_{n(m)}^\epsilon]$ (not necessarily
admissible) give a minimizing cost in (\ref{2.3}),
satisfy (\ref{2.1}), and (\ref{2.2}) is valid with a perturbation
of order $\epsilon^{1/2}$.
\end{proposition}
\noindent
\textbf{Proof.}
The proof follows the ideas from \cite{Tiba2018a}, \cite{MT2019}.
Let $[y_{g_m}, g_m]\in W^{2,p}(\Omega_{g_{m}})\times
\mathcal{F}$ be a minimizing sequence for the problem
(\ref{2.1})-(\ref{2.5}). Here, $\partial \Omega_{g_{m}}$ is
$\mathcal{C}^2$ and this ensures the regularity
$y_{g_{m}}\in W^{2,p}(\Omega_{g_{m}})$ due to $f\in L^p(D)$.
There is $\widetilde{y}_{g_{m}}\in
W^{2,p}(D\setminus \overline{\Omega}_{g_{m}})$, not unique,
such that $\widetilde{y}_{g_{m}}=y_{g_m}$ on
$\partial \Omega_{g_{m}}$,
$\frac{\partial \widetilde{y}_{g_{m}}}{\partial \mathbf{n}}
= \frac{\partial y_{g_{m}}}{\partial \mathbf{n}} = 0$
on $\partial \Omega_{g_{m}}$, $\widetilde{y}_{g_{m}}=0$ on
$\partial D$.
We define an admissible control in (\ref{3.2}) by
\begin{equation}\label{3.4}
u_{g_{m}} =
-\frac{\Delta \widetilde{y}_{g_{m}} + f -\widetilde{y}_{g_{m}}}
{(g_m)_+^2},
\quad\hbox{in } D\setminus \overline{\Omega}_{g_{m}},
\end{equation}
and zero otherwise.
We infer by (\ref{3.4}) that $(g_m)_+^2 u_{g_m}$ is in
$L^p(D)$ and $g_m$, $u_{g_{m}}$ is an admissible
control pair for the penalized problem
(\ref{3.1})-(\ref{3.3}), (\ref{2.5}). Moreover,
the corresponding state in (\ref{3.2}) is obtained by
concatenation of $y_{g_{m}}$ and $\widetilde{y}_{g_{m}}$ and
the corresponding penalization term in (\ref{3.1}) is null.
That is, the corresponding costs in (\ref{3.1}) and in (\ref{2.3})
are the same. This construction is also valid in the case when
$\Omega_{g_{m}}$ is not simply connected.
We obtain
\begin{eqnarray}
&&
\int_{E} J\left(\mathbf{x},y_{n(m)}^\epsilon(\mathbf{x})\right)
d\mathbf{x}
+
\int_{I_{g_{n(m)}}}
j\left(\mathbf{z}_{n(m)}(t),
y_{n(m)}^\epsilon(\mathbf{z}_{n(m)}(t))\right)
| \mathbf{z}_{n(m)}^\prime(t) | dt
\nonumber\\
&&
+\frac{1}{\epsilon}
\int_{I_{g_{n(m)}}}
\left[
\nabla y_{n(m)}^\epsilon(\mathbf{z}_{n(m)}(t))\cdot
\frac{\nabla g_{n(m)}^\epsilon(\mathbf{z}_{n(m)}(t)) }{
|\nabla g_{n(m)}^\epsilon(\mathbf{z}_{n(m)}(t)) |}
\right]^2
| \mathbf{z}_{n(m)}^\prime(t) | dt
\nonumber\\
&\leq &
\int_{E} J\left(\mathbf{x},y_m(\mathbf{x})\right)
d\mathbf{x}
+\int_{\partial\Omega_{g_m}} j\left(\mathbf{x},y_m(\mathbf{x})\right)
d\sigma \rightarrow \inf (\mathcal{P})
\label{3.5}
\end{eqnarray}
for $m\rightarrow\infty$.
In (\ref{3.5}), the index $n(m)$ is chosen large enough for
the inequality to hold, and $\mathbf{z}_n$ is the
solution of (\ref{2.9})-(\ref{2.11}) associated to
$g_n^\epsilon$ (for simplicity, we do not write
$\mathbf{z}_n^\epsilon$).
Since $J\left(\cdot,\cdot\right)$ and
$j\left(\cdot,\cdot\right)$ are bounded from below by
constants, from (\ref{3.5}), we get the boundedness of the
penalization term on the subsequence $n(m)$. This yields
the last statement of Proposition \ref{prop:3.1}, on
$\partial\Omega_{g_{n(m)}^\epsilon}$.
As $\left(g_{n(m)}^\epsilon\right)_+$ is null in
$\Omega_{g_{n(m)}^\epsilon}$, we see that (\ref{2.1}) is
satisfied in $\Omega_{g_{n(m)}^\epsilon}$, due to (\ref{3.2}).
The minimizing property of the sequence
$\left[ \Omega_{g_{n(m)}^\epsilon}, y_{n(m)}^\epsilon\right]$ in
the original cost (\ref{2.3}) is again an obvious
consequence of (\ref{3.5}), by the positivity of the
penalization term(s).\quad$\Box$
\noindent
By the Weierstrass theorem, there is $m_g > 0$ such that (\ref{2.8}) becomes
\begin{equation}\label{3,6}
|\nabla g( \mathbf{x})| \geq m_g,\quad \forall\mathbf{x}\in G,
\quad\forall g \in \mathcal{F}.
\end{equation}
\noindent
In order to strengthen the approximation property in Proposition \ref{prop:3.1},
we impose that $\mathcal{F} $ is bounded in $\mathcal{C}^2 (\overline{D})$ and we
require uniformity in (\ref{2.8}), (\ref{3,6}), where $m > 0$ is some given constant:
\begin{equation}\label{3,7}
|\nabla g( \mathbf{x})| \geq m,\quad \forall\mathbf{x}\in G,
\quad\forall g \in \mathcal{F}.
\end{equation}
\noindent
Notice that neither (\ref{3,7}) nor the boundedness of $\mathcal{F}$ modifies
the topological characteristics of the family of admissible domains
$\Omega_g, \; g \in \mathcal{F}$.
We denote by $y_{n, \epsilon}$ the solution of (\ref{2.1}), (\ref{2.2})
in $\Omega_{g_n^\epsilon}$.
\begin{proposition}\label{prop:3,2}
Under the above assumptions, there is an absolute constant $C > 0$ such that
$$
|y_{n, \epsilon} - y_n^\epsilon |_{H^1 (\Omega_{g_n^\epsilon})} \leq C \epsilon^{1/4}.
$$
\end{proposition}
\noindent
\textbf{Proof.}
We take the difference of the equations (\ref{2.1}) in $\Omega_{g_n^\epsilon}$
corresponding to $y_{n, \epsilon}, \; y_n^\epsilon$ and we multiply by
$y_{n, \epsilon} - y_n^\epsilon$. Then, we get:
$$
|y_{n, \epsilon} - y_n^\epsilon |^2_{H^1 (\Omega_{g_n^\epsilon})} = -\int_{\partial \Omega_{g_n^\epsilon}}
(\frac{\partial y_n^\epsilon}{\partial n})
(y_{n, \epsilon} - y_n^\epsilon) d\sigma \leq c\epsilon^{1/2}
|y_{n, \epsilon} - y_n^\epsilon|_{L^2(\partial \Omega_{g_n^\epsilon})},
$$
\noindent
where $c > 0$ is an absolute constant corresponding to the evaluation
of the penalization term in (\ref{3.1}), from the last statement
in Proposition \ref{prop:3.1}.
By (\ref{3,7}) and Green's formula, we have:
\begin{eqnarray*}
&&
m|y_{n, \epsilon} - y_n^\epsilon|^2_{L^2(\partial \Omega_{g_n^\epsilon})}
\leq \int_{\partial \Omega_{g_n^\epsilon}} |y_{n, \epsilon} - y_n^\epsilon|^2 \;
|\nabla g_n^\epsilon |d \sigma
=\int_{\partial \Omega_{g_n^\epsilon}} |y_{n, \epsilon} - y_n^\epsilon|^2
\nabla g_n^\epsilon \cdot \nu_\epsilon d \sigma \\
&&
\leq \int_{\Omega_{g_n^\epsilon}} |y_{n, \epsilon} - y_n^\epsilon|^2 |\Delta g_n^\epsilon|dx
+ 2\int_{\Omega_{g_n^\epsilon}} |y_{n, \epsilon} - y_n^\epsilon|
|\nabla(y_{n, \epsilon} - y_n^\epsilon) \cdot \nabla g_n^\epsilon|dx\\
&&
\leq M[|y_{n, \epsilon} - y_n^\epsilon|^2_{L^2(\Omega_{g_n^\epsilon})}
+ |y_{n, \epsilon} - y_n^\epsilon|_{L^2(\Omega_{g_n^\epsilon})}
|\nabla(y_{n, \epsilon} - y_n^\epsilon)|_{L^2(\Omega_{g_n^\epsilon})}]
\leq M[|y_{n, \epsilon} - y_n^\epsilon|^2_{L^2(\Omega_{g_n^\epsilon})} \\
&&
+ \epsilon^{1/2} |\nabla(y_{n, \epsilon} - y_n^\epsilon)|^2_{L^2(\Omega_{g_n^\epsilon})}
+ \epsilon^{-1/2}|y_{n, \epsilon} - y_n^\epsilon|^2_{L^2(\Omega_{g_n^\epsilon})} ],
\end{eqnarray*}
\noindent
where we also use the binomial inequality (with the same $\epsilon$ as
in Proposition \ref{prop:3.1}) together with the boundedness of
$\mathcal{F}$ in $\mathcal{C}^2 (\overline{D})$. The notation
$\nu_\epsilon$ is the normal to the domain $\Omega_{g_n^\epsilon}$.
\noindent
Combining the above two inequalities, we end the proof.\quad$\Box$
\begin{remark}\label{rem:3.1}
We note the very weak hypotheses on the cost functional in
Proposition \ref{prop:3.1}. Together with Proposition \ref{prop:3,2},
the justification for the use of the
control problem (\ref{3.1})-(\ref{3.3}), (\ref{2.5}) in
the approximation of $(\mathcal{P})$, is obtained.
A detailed study of the convergence properties when
$\epsilon \rightarrow 0$, for a distributed cost functional,
is performed in \cite{Tiba2018a}.
\end{remark}
\begin{corollary}
Under assumption (\ref{3,7}) and the boundedness of $\mathcal{F}$ in
$\mathcal{C}^1 (\overline{D})$, the shape optimization problem has
at least one optimal solution $\Omega^*$.
\end{corollary}
\noindent
\textbf{Proof.}
Condition (\ref{3,7}) allows us to apply the implicit function theorem
around any point $(x,y) \in G$ and to obtain the local representation
of $G$ via some function $y = y(x)$. In particular, taking also into
account the boundedness of $\mathcal{F}$ in $\mathcal{C}^1 (\overline{D})$,
it follows that $ y^\prime (x) = - \frac{g_x(x,y(x))}{g_y (x,y(x))}$ is bounded,
uniformly with respect to the family of admissible domains, under
appropriate choices of the local axes. This allows the application
of well known existence results due to Chenais (see \cite{Pironneau1984},
Ch. 3.3) and ends the proof.\quad$\Box$
\section{Directional derivative}
\setcounter{equation}{0}
We consider now functional variations $g+\lambda r$, $u+\lambda v$,
$r \in \mathcal{F}$, $\lambda \in \mathbb{R}$, $v\in L^p(D)$.
In the sequel, we shall take into account the condition
(\ref{2.6}), (\ref{2,5}) for $g$, $r$ in the identification of the corresponding domains
from (\ref{2.4}). This is also necessary in
(\ref{2.9})-(\ref{2.11}) and at the numerical level it is
very easy to implement (finding some $\mathbf{x}_0$ amounts to solving $g(\mathbf{x}) = 0$, which is a standard routine, and to using (\ref{2.9})-(\ref{2.11}) to identify such initial conditions on each connected component of $G$ by elimination; see \cite{MT2019} for other details). Notice that the perturbations of $u$ are always admissible, since we impose no constraints on $u$, and the perturbations of $g$ satisfy (\ref{2,5}), (\ref{2.8}), (\ref{2.12}) for $|\lambda|$ small enough (depending on $g$).
We denote by $y_\lambda\in W^{2,p}(D)$,
$\mathbf{z}_\lambda\in \mathcal{C}^1(\mathbb{R})$ the solutions
of (\ref{3.2}), (\ref{3.3}) and (\ref{2.9})-(\ref{2.11})
corresponding to the above variations, respectively. From
the previous section, we know that $\mathbf{z}_\lambda$ is
periodic with some period $T_\lambda>0$ and we take its
definition interval to be $[0,T_\lambda]$. In \cite{MT2019}, it is
proved under conditions (\ref{2.8}), (\ref{2.12}), that
$T_\lambda\rightarrow T$ as $\lambda\rightarrow 0$, where $T$
is the period of $\mathbf{z}$, i.e. $I_g=[0,T]$.
\begin{proposition}\label{prop:3.2}
The system in variations corresponding to (\ref{3.2}),
(\ref{3.3}),
(\ref{2.9})-(\ref{2.11}) is:
\begin{eqnarray}
-\Delta q + q & = & g_+^2 v + 2g_+ u\,r,\quad\hbox{in } D,
\label{3.6}\\
q & = & 0,\quad\hbox{on } \partial D,
\label{3.7}\\
w_1^\prime& = & -\nabla\partial_2 g(\mathbf{z})\cdot \mathbf{w}
-\partial_2 r(\mathbf{z}),\quad\hbox{in } [0,T],
\label{3.8}\\
w_2^\prime& = & \nabla\partial_1 g(\mathbf{z})\cdot \mathbf{w}
+\partial_1 r(\mathbf{z}),\quad\hbox{in } [0,T],
\label{3.9}\\
w_1(0)& = &0,\ w_2(0) = 0,
\label{3.10}
\end{eqnarray}
where $q=\lim_{\lambda\rightarrow 0}\frac{y_\lambda-y}{\lambda}$,
$\mathbf{w}=[w_1,w_2]=\lim_{\lambda\rightarrow 0}
\frac{\mathbf{z}_{\lambda} - \mathbf{z}}{\lambda}$
and the limits exist in $W^{2,p}(D)$,
respectively $\mathcal{C}^1([0,T])$.
\end{proposition}
\noindent
\textbf{Proof.}
This is based on standard techniques in the calculus
of variations and we quote \cite{MT2019} where relevant arguments can
be found.\quad $\Box$
\begin{proposition}\label{prop:3.3}
Under the above assumptions, we have:
$$
\lim_{\lambda\rightarrow 0}\frac{T_\lambda-T}{\lambda}
=-\frac{w_2(T)}{z_2^\prime (T)}
$$
if $z_2^\prime (T) \neq 0$.
\end{proposition}
\noindent
\textbf{Proof.}
Clearly $\nabla(g+\lambda r)\neq 0$ on $G_\lambda$ if
$|\lambda|$ is small. Then, by the perturbed variant of
(\ref{2.9})-(\ref{2.11}), it yields
$|z_1^{\lambda \prime}(T_\lambda)|
+ |z_2^{\lambda \prime}(T_\lambda)| >0$ and,
similarly, $|z_1^\prime (T)| + |z_2^\prime (T)| >0$, due to
(\ref{2.8}).
We choose here $z_2^\prime (T)\neq 0$ and, consequently,
$z_2^{\lambda \prime}(T_\lambda)\neq 0$, for $\lambda$ ``small''.
Then ${z}_2^\lambda$ is invertible on some interval
$[T-\alpha,T+\beta]$ with $\alpha, \beta >0$, small, not depending on $\lambda$,
(and similarly around 0 due to the periodicity property).
This is due to $\mathbf{z}_{\lambda}\rightarrow \mathbf{z}$
in $\mathcal{C}^1([0,2T])^2$ and $T_\lambda \rightarrow T$.
We have $\mathbf{z}_{\lambda}(T_\lambda)=\mathbf{x}_0$ and
it yields:
\begin{equation}\label{3.11}
T_\lambda=(z_2^\lambda)^{-1}(x_0^2).
\end{equation}
We denote $x_0^\lambda=z_2(T_\lambda)\rightarrow x_0^2$
as $\lambda\rightarrow 0$. We may write
\begin{equation}\label{3.12}
\frac{T_\lambda-T}{\lambda}=
\frac{(z_2^\lambda)^{-1}(x_0^2)-(z_2)^{-1}(x_0^2)}{\lambda}
=\frac{(z_2)^{-1}(x_0^\lambda)-(z_2)^{-1}(x_0^2)}{\lambda}.
\end{equation}
By (\ref{3.11}), (\ref{3.12}) we get
$$
\frac{T_\lambda-T}{\lambda}=
\frac{(z_2)^{-1}(x_0^\lambda)-(z_2)^{-1}(x_0^2)}{x_0^\lambda-x_0^2}
\frac{z_2(T_\lambda)-z_2^\lambda(T_\lambda)}{\lambda}.
$$
Passing to the limit in the above relation and using
Proposition \ref{prop:3.2}, we end the proof.
\quad $\Box$
\begin{remark}\label{rem:3.2}
If $z_1^\prime (T)\neq 0$, the limit is
$-\frac{w_1(T)}{z_1^\prime (T)}$. In general, we denote this limit by
$\theta(g,r)$. The last condition in Proposition \ref{prop:3.3} is a
consequence of (\ref{2.8}).
\end{remark}
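The derivative formula can also be verified numerically on an explicitly solvable case (all the data below, $g(\mathbf{x})=|\mathbf{x}|^2-1$, $r=g$, $\mathbf{x}_0=(1,0)$, are our own test choices, not examples from the paper): for $g+\lambda r=(1+\lambda)g$ the Hamiltonian field of (\ref{2.9})-(\ref{2.10}) is simply scaled by $1+\lambda$, so $T_\lambda=\pi/(1+\lambda)$ and the derivative of the period at $\lambda=0$ is $-\pi$; integrating the system in variations (\ref{3.8})-(\ref{3.10}) alongside reproduces this value via $-w_2(T)/z_2^\prime(T)$.

```python
import math

# Check theta(g, r) = -w2(T)/z2'(T) on hand-picked data: g(x) = |x|^2 - 1,
# r = g, x0 = (1, 0).  Then z(t) = (cos 2t, sin 2t), T = pi, and
# T_lambda = pi/(1 + lambda), so dT/dlambda at 0 equals -pi.

def rk4(f, s, h):
    k1 = f(s)
    k2 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = f(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def first_return(f, s0, h=1e-3):
    """Time and state of the first upward crossing of z2 = 0."""
    s, t = s0, 0.0
    while True:
        prev, s, t = s, rk4(f, s, h), t + h
        if prev[1] < 0.0 <= s[1]:
            frac = -prev[1] / (s[1] - prev[1])     # linear interpolation
            state = tuple(a + frac * (b - a) for a, b in zip(prev, s))
            return t - h + frac * h, state

# (2.9)-(2.10) coupled with the system in variations (3.8)-(3.10):
# here w1' = -2 w2 - 2 z2,  w2' = 2 w1 + 2 z1,  w(0) = 0.
def coupled(s):
    z1, z2, w1, w2 = s
    return (-2 * z2, 2 * z1, -2 * w2 - 2 * z2, 2 * w1 + 2 * z1)

T, (z1T, z2T, w1T, w2T) = first_return(coupled, (1.0, 0.0, 0.0, 0.0))
theta = -w2T / (2.0 * z1T)                         # z2'(T) = 2 z1(T)

lam = 1e-4                                         # finite-difference check
Tlam, _ = first_return(lambda s: (-2 * (1 + lam) * s[1],
                                  2 * (1 + lam) * s[0]), (1.0, 0.0))
fd = (Tlam - T) / lam
print(theta, fd)   # both close to -pi
```

Both the formula of Proposition \ref{prop:3.3} and the direct finite difference of the computed periods agree with the exact value $-\pi$.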
To study the differentiability properties of the penalized
cost function (\ref{3.1}), we also assume
$f\in W^{1,p}(D)$, $\partial D$ is in $\mathcal{C}^{2,1}$
and $\mathcal{F}\subset \mathcal{C}^2(\overline{D})$.
We get that $g_+^2\in W^{1,\infty}(D)$ and
$g_+^2u\in W^{1,p}(D)$ if $u\in W^{1,p}(D)$ and the solution of
(\ref{3.2}), (\ref{3.3}) satisfies $y\in W^{3,p}(D)
\subset \mathcal{C}^2(\overline{D})$.
\begin{proposition}\label{prop:3.4}
Under the above conditions, assume that
$J(\mathbf{x},\cdot)$ is in $\mathcal{C}^1(\mathbb{R})$ and $j(\cdot,\cdot)$ is in
$\mathcal{C}^1(\mathbb{R}^3)$. Then, the directional derivative
of (\ref{3.1}), in the direction
$[v,r]\in W^{1,p}(D) \times \mathcal{F}$, is given by:
\begin{eqnarray}
&&
\theta(g,r)\left[
j(\mathbf{x}_0,y(\mathbf{x}_0))
+\left|\frac{\partial y}{\partial \mathbf{n}}(\mathbf{x}_0) \right|^2
\right] | \nabla g(\mathbf{x}_0)|
+\int_E \partial_2 J(\mathbf{x},y(\mathbf{x})) q(\mathbf{x})d\mathbf{x}
\nonumber\\
&+&
\int_0^T
\nabla_1 j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\cdot \mathbf{w}(t)|\mathbf{z}^\prime(t)| dt
\nonumber\\
&+&
\int_0^T
\partial_2 j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\left[
\nabla y(\mathbf{z}(t))\cdot \mathbf{w}(t)
+q(\mathbf{z}(t))
\right]
|\mathbf{z}^\prime(t)| dt
\nonumber\\
&+&
\int_0^T
j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\frac{\mathbf{z}^\prime(t)\cdot \mathbf{w}^\prime(t)}
{|\mathbf{z}^\prime(t)|}
dt
\nonumber\\
&+&\frac{2}{\epsilon}
\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|^2}
\nabla r(\mathbf{z}(t))\cdot\nabla y(\mathbf{z}(t))
|\mathbf{z}^\prime(t)|
dt
\nonumber\\
&+&\frac{2}{\epsilon}
\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\left[
\left(H\,y(\mathbf{z}(t))\right)\mathbf{w}(t)
+ \nabla q(\mathbf{z}(t))
\right]
\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
|\mathbf{z}^\prime(t)|
dt
\nonumber\\
&+&\frac{2}{\epsilon}
\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\nabla y(\mathbf{z}(t))\cdot
\left[
\frac{\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)}
{|\nabla g(\mathbf{z}(t))|}
\right.
\nonumber\\
&&-\left.
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|^3}
\left(
\nabla g(\mathbf{z}(t))\cdot \nabla r(\mathbf{z}(t))
+\nabla g(\mathbf{z}(t))
\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)
\right)
\right]|\mathbf{z}^\prime(t)|
dt
\nonumber\\
&+&\frac{1}{\epsilon}
\int_0^T
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\right]^2
\frac{\mathbf{z}^\prime(t)\cdot \mathbf{w}^\prime(t)}
{|\mathbf{z}^\prime(t)|}
dt .
\label{3.13}
\end{eqnarray}
\end{proposition}
The notations are explained in the proof.
\noindent
\textbf{Proof.}
We compute
\begin{eqnarray*}
&&
\lim_{\lambda\rightarrow 0} \frac{1}{\lambda}
\left\{
\int_E J(\mathbf{x},y_\lambda(\mathbf{x})) d\mathbf{x}
+\int_0^{T_\lambda}
j\left(\mathbf{z}_\lambda(t),y_\lambda(\mathbf{z}_\lambda(t))\right)
|\mathbf{z}_\lambda^\prime(t)|
dt
\right.
\nonumber\\
&&
+\frac{1}{\epsilon}
\int_0^{T_\lambda}
\left[
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
\right]^2
|\mathbf{z}_\lambda^\prime(t)|
dt
-\int_E J(\mathbf{x},y(\mathbf{x})) d\mathbf{x}
\nonumber\\
&&
\left.
-\int_0^T
j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
|\mathbf{z}^\prime(t)|
dt
-\frac{1}{\epsilon}
\int_0^T
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\right]^2
|\mathbf{z}^\prime(t)|
dt
\right\} .
\end{eqnarray*}
Applying Proposition \ref{prop:3.2}, (\ref{3.6}),
(\ref{3.7}), and the differentiability hypotheses on $J$,
$j$, we get:
\begin{equation}\label{3.14}
\frac{1}{\lambda}
\left[
\int_E J(\mathbf{x},y_\lambda(\mathbf{x})) d\mathbf{x}
-\int_E J(\mathbf{x},y(\mathbf{x})) d\mathbf{x}
\right]
\rightarrow
\int_E
\partial_2 J(\mathbf{x},y(\mathbf{x}))q(\mathbf{x})
d\mathbf{x}.
\end{equation}
We discuss now the term:
\begin{eqnarray}
&&\frac{1}{\lambda}
\int_T^{T_\lambda}
j\left(\mathbf{z}_\lambda(t),y_\lambda(\mathbf{z}_\lambda(t))\right)
|\mathbf{z}_\lambda^\prime(t)|
dt
=\frac{T_\lambda-T}{\lambda}
j\left(\mathbf{z}_\lambda(\tau_\lambda),
y_\lambda(\mathbf{z}_\lambda(\tau_\lambda))\right)
|\mathbf{z}_\lambda^\prime(\tau_\lambda)|
\nonumber\\
&&\rightarrow
\theta(g,r)j(\mathbf{x}_0,y(\mathbf{x}_0))
|\mathbf{z}^\prime(T)|
=\theta(g,r)j(\mathbf{x}_0,y(\mathbf{x}_0))
|\nabla g(\mathbf{x}_0) |,
\label{3.15}
\end{eqnarray}
due to (\ref{2.9})-(\ref{2.11}) and Remark \ref{rem:3.2}.
Here $\tau_\lambda$ is some intermediate point in the
interval $[T, T_\lambda]$, depending on $\lambda$, $g$, $r$,
$j$, etc. We also use Thm. \ref{theo:2.1} and
$T_\lambda \rightarrow T$.
Similarly, we consider the term:
\begin{eqnarray}
&&\frac{1}{\lambda}
\int_T^{T_\lambda}
\left[
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
\right]^2
|\mathbf{z}_\lambda^\prime(t)|
dt
\nonumber\\
&\rightarrow&
\theta(g,r)\left[
\nabla y(\mathbf{x}_0)\cdot
\frac{\nabla g(\mathbf{x}_0)}
{|\nabla g(\mathbf{x}_0)|}
\right]^2
|\nabla g(\mathbf{x}_0) |
=\theta(g,r)
\left| \frac{\partial y}
{\partial \mathbf{n}}(\mathbf{x}_0)\right|^2
|\nabla g(\mathbf{x}_0) | .
\label{3.16}
\end{eqnarray}
In the last two limits, the regularity properties of
$y$, $\mathbf{z}$, $y_\lambda$, $\mathbf{z}_\lambda$ also play
a key role.
Next, we investigate the last term:
\begin{eqnarray*}
&&
\frac{1}{\lambda}
\left\{
\int_0^{T}
j\left(\mathbf{z}_\lambda(t),y_\lambda(\mathbf{z}_\lambda(t))\right)
|\mathbf{z}_\lambda^\prime(t)|
dt
\right.
\nonumber\\
&&
+
\frac{1}{\epsilon}
\int_0^{T}
\left[
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
\right]^2
|\mathbf{z}_\lambda^\prime(t)|
dt
\nonumber\\
&&
\left.
-\int_0^{T}
j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
|\mathbf{z}^\prime(t)|
dt
-\frac{1}{\epsilon}
\int_0^{T}
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
\right]^2
|\mathbf{z}^\prime(t)|
dt
\right\}
\end{eqnarray*}
Clearly, the terms containing $j(\cdot,\cdot)$ give
the limit:
\begin{eqnarray}
&&\int_0^{T}
\left[ \nabla_1 j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\cdot \mathbf{w}(t)
+\partial_2 j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\nabla y(\mathbf{z}(t)) \cdot \mathbf{w}(t)
\right]
|\mathbf{z}^\prime(t)|
dt
\nonumber\\
&+&
\int_0^{T}
\left[
\partial_2 j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
q(\mathbf{z}(t))
|\mathbf{z}^\prime(t)|
+j\left(\mathbf{z}(t),y(\mathbf{z}(t))\right)
\frac{\mathbf{z}^\prime(t)\cdot \mathbf{w}^\prime(t)}
{|\mathbf{z}^\prime(t)|}
\right]
dt
\label{3.17}
\end{eqnarray}
where $\nabla_1 j$ is the gradient of $j(\cdot,\cdot)$
with respect to the two components of $\mathbf{z}$, and
$\partial_2 j$ is the partial derivative with respect to $y$;
the other quantities are defined in (\ref{3.6})-(\ref{3.10}).
Let us consider now the two terms corresponding to the
penalization of Neumann boundary condition.
We insert suitable intermediate terms and compute step by step:
\begin{eqnarray}
&&\frac{1}{\lambda}\int_0^{T}
\left\{
\left[
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
\right]^2
-\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
\right]^2
\right\}
|\mathbf{z}_\lambda^\prime(t)| dt
\nonumber\\
&=&\frac{1}{\lambda}
\int_0^{T}
S\left[
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
-\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
\right]
|\mathbf{z}_\lambda^\prime(t)| dt
\nonumber\\
&=&
\int_0^{T}
S\frac{\nabla r(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
\cdot
\nabla y_\lambda(\mathbf{z}_\lambda(t))
|\mathbf{z}_\lambda^\prime(t)| dt
\nonumber\\
&&
+\int_0^{T}
S\frac{\nabla y_\lambda(\mathbf{z}_\lambda(t))
-\nabla y(\mathbf{z}(t))}{\lambda}
\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
|\mathbf{z}_\lambda^\prime(t)| dt
\nonumber\\
&&
+\frac{1}{\lambda}
\int_0^{T}
S\left[\nabla
y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla g(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
-\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
\right]
|\mathbf{z}_\lambda^\prime(t)| dt
\nonumber\\
&=&I+II+III
\label{3.18}
\end{eqnarray}
where $S$ is the sum
$$
\nabla y_\lambda(\mathbf{z}_\lambda(t))\cdot
\frac{\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))}
{|\nabla (g+\lambda r)(\mathbf{z}_\lambda(t))|}
+\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|} .
$$
We have:
\begin{eqnarray*}
\lim_{\lambda \rightarrow 0} I & = &
2\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\frac{\nabla r(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\cdot\nabla y(\mathbf{z}(t))
|\mathbf{z}^\prime(t)|
dt
\\
\lim_{\lambda \rightarrow 0} II & = &
2\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\left[
\left(H\,y(\mathbf{z}(t))\right)\mathbf{w}(t) + \nabla q(\mathbf{z}(t))
\right]
\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
|\mathbf{z}^\prime(t)|
dt,
\end{eqnarray*}
where $H\,y$ is the Hessian matrix of
$y\in\mathcal{C}^2(\overline{D})$.
Concerning part $III$, we get:
\begin{eqnarray*}
\lim_{\lambda \rightarrow 0} III & = &
2\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
|\mathbf{z}^\prime(t)|
\nabla y(\mathbf{z}(t))\cdot
\left[
\frac{\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)}
{|\nabla g(\mathbf{z}(t))|}
\right.
\\
&&
\left.
-\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|^2}
\left(
\frac{\nabla g(\mathbf{z}(t))\cdot \nabla r(\mathbf{z}(t))}
{|\nabla g(\mathbf{z}(t))|}
+\frac{\nabla g(\mathbf{z}(t))\cdot
\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)}
{|\nabla g(\mathbf{z}(t))|}
\right)
\right]
dt
\\
&=&
2\int_0^T
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\nabla y(\mathbf{z}(t))\cdot
\left[
\frac{\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)}
{|\nabla g(\mathbf{z}(t))|}
\right.
\\
&&
-\left.
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|^3}
\left(
\nabla g(\mathbf{z}(t))\cdot \nabla r(\mathbf{z}(t))
+\nabla g(\mathbf{z}(t))
\left(H\,g(\mathbf{z}(t))\right)\mathbf{w}(t)
\right)
\right]|\mathbf{z}^\prime(t)|
dt
\end{eqnarray*}
Finally, the remaining term converges as follows:
\begin{eqnarray}
&&\int_0^T
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\right]^2
\frac{|\mathbf{z}_\lambda^\prime(t)| - |\mathbf{z}^\prime(t)|}
{\lambda}
dt
\nonumber\\
&\rightarrow &
\int_0^T
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t))}{|\nabla g(\mathbf{z}(t))|}
\right]^2
\frac{\mathbf{z}^\prime(t)\cdot \mathbf{w}^\prime(t)}
{|\mathbf{z}^\prime(t)|}
dt
\label{3.19}
\end{eqnarray}
Summing up relations (\ref{3.14})-(\ref{3.19}), we
finish the proof of (\ref{3.13}).
\quad$\Box$
\section{Finite element descent directions}
\setcounter{equation}{0}
We use the piecewise cubic finite element $\mathbb{P}_3$ on
$\mathcal{T}_h$, a triangulation of $D$.
We define
$$
\mathbb{W}_h=\{ \varphi_h\in \mathcal{C}(\overline{D});
\ {\varphi_h}_{|T} \in \mathbb{P}_3(T),\ \forall T \in \mathcal{T}_h \}
$$
of dimension $n=\mathrm{card}(I)$ ($I$ being the set of nodes of
$\mathcal{T}_h$) and
$$
\mathbb{V}_h=\{ \varphi_h\in \mathbb{W}_h;
\ \varphi_h = 0\hbox{ on }\partial D \},
$$
of dimension $n_0=\mathrm{card}(I_0)$ ($I_0$ being the set of nodes of
$\mathcal{T}_h$ not on $\partial D$), which are finite element approximations
of the Hilbert spaces $\mathbb{W}=H^1(D)$, $\mathbb{V}=H^1_0(D)$, respectively.
The parametrization function $g$ is approximated by the finite element function
$g_h\in \mathbb{W}_h$, $g_h(\mathbf{x})=\sum_{i\in I} G_i \phi_i(\mathbf{x})$,
where $G=(G_i)_{i\in I} \in \mathbb{R}^n$ is a real vector and $\{\phi_i\}_{i\in I}$ is
the nodal basis of $\mathbb{W}_h$. Similarly, we denote
$u_h\in \mathbb{W}_h$, $y_h\in \mathbb{V}_h$ and the associated vectors
$U=(U_i)_{i\in I}\in\mathbb{R}^n$ and $Y=(Y_j)_{j\in I_0}\in\mathbb{R}^{n_0}$ for the
discretization of the control and the state, respectively.
For the control term $u_h$, one can also employ lower order finite elements,
like continuous piecewise linear $\mathbb{P}_1$
or piecewise constant $\mathbb{P}_0$. See \cite{Ciarlet2002}, \cite{Raviart2004}
for a discussion of finite element spaces.
Here, we consider (\ref{2.1}) with the nonhomogeneous
boundary condition $\frac{\partial y_\Omega}{\partial \mathbf{n}}=\delta$
on $\partial \Omega$, with $\delta$ some given function in $H^1 (D)$.
The objective function (\ref{3.1}) is taken of the form
\begin{eqnarray}
\min_{g,u} \mathcal{J}(g,u)&=&
\left\{
\int_{E} J\left(\mathbf{x},y(\mathbf{x})\right)
d\mathbf{x}
+
\int_{I_g}
j\left(\mathbf{z}(t), y(\mathbf{z}(t))\right)
|\mathbf{z}^\prime(t)| dt
\right.
\nonumber\\
&&
\left.
+\frac{1}{\epsilon}
\int_{I_g}
\left[
\nabla y(\mathbf{z}(t))\cdot
\frac{\nabla g(\mathbf{z}(t)) }{
|\nabla g(\mathbf{z}(t)) |}
-\delta(\mathbf{z}(t))
\right]^2
|\mathbf{z}^\prime(t)| dt
\right\} .
\label{5.1}
\end{eqnarray}
We denote the first term of (\ref{5.1}) by
$$
t_1=\int_{E} J\left(\mathbf{x},y(\mathbf{x})\right)
d\mathbf{x}.
$$
The second and the third terms of (\ref{5.1}) can be rewritten as integrals on
$\partial \Omega_g$, more precisely
\begin{eqnarray*}
t_2 &=& \int_{\partial \Omega_g}
j\left(s, y(s)\right) ds \\
t_3 &=& \frac{1}{\epsilon}
\int_{\partial \Omega_g}
\left[
\nabla y(s)\cdot
\frac{\nabla g(s) }{
|\nabla g(s) |}
-\delta(s)
\right]^2 ds .
\end{eqnarray*}
We employ the software FreeFem++ \cite{freefem++}; these terms
can be computed with the command
\texttt{int1d(Th,levelset=gh)(\dots)}.
We use the general descent direction method
$$
(G^{k+1},U^{k+1})=(G^k,U^k)+\lambda_k (R^k,V^k),
$$
where $\lambda_k >0$ is obtained via some line search
$$
\lambda_k \in \arg\min_{\lambda >0}
\mathcal{J}\left((G^k,U^k)+\lambda (R^k,V^k)\right)
$$
and $(R^k,V^k)$ is a descent direction, i.e. $d\mathcal{J}_{(G^k,U^k)}(R^k,V^k) <0$.
For $E \neq \emptyset$, a projection is necessary in order to get (\ref{2.4}).
The algorithm stops if
$| \mathcal{J}(G^{k+1},U^{k+1}) - \mathcal{J}(G^k,U^k)| < tol$ or
$d\mathcal{J}_{(G^k,U^k)}(R^k,V^k)=0$. Other choices are possible, see \cite{Ciarlet2018}
for details on such algorithms.
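The generic descent iteration above can be sketched in a few lines. The following is a hedged illustration only (Python with NumPy), where \texttt{J} and \texttt{direction} are hypothetical stand-ins for the discrete cost $\mathcal{J}$ and a descent direction, and the continuous line search is approximated over a finite geometric grid of step lengths:

```python
import numpy as np

def descent(x0, J, direction, rho=0.8, n_steps=30, tol=1e-6, max_iter=100):
    """Generic descent method with a geometric-grid line search (a sketch).

    J         : objective, maps a parameter vector to a float (hypothetical)
    direction : maps the current iterate to a descent direction (hypothetical)
    The step length is chosen among {rho**i : 0 <= i < n_steps}.
    """
    x = np.asarray(x0, dtype=float)
    Jx = J(x)
    for _ in range(max_iter):
        d = direction(x)
        # evaluate J on the finite grid of step lengths
        lambdas = rho ** np.arange(n_steps)
        vals = [J(x + lam * d) for lam in lambdas]
        i_best = int(np.argmin(vals))
        if vals[i_best] >= Jx:      # no smaller value found in this direction
            break
        x = x + lambdas[i_best] * d
        if abs(Jx - vals[i_best]) < tol:   # |J_{k+1} - J_k| < tol
            Jx = vals[i_best]
            break
        Jx = vals[i_best]
    return x, Jx
```

In practice the direction comes from the adjoint system and the cost is evaluated by finite elements; the sketch only mirrors the control flow of the algorithm.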
Since the approximating state system (\ref{3.2}), (\ref{3.3}) is similar
to \cite{MT2019}, we apply here a similar discretization technique of
the gradient (\ref{3.13}). In the following, we shall
use descent directions based on the discrete simplified adjoint system:
find $p_h\in \mathbb{V}_h$ such that
\begin{eqnarray}
&&\int_D \nabla \varphi_h \cdot \nabla p_h d\mathbf{x}
+\int_D \varphi_h p_h d\mathbf{x}
=
\int_{E} \partial_2 J\left(\mathbf{x},y_h(\mathbf{x})\right)
\varphi_h(\mathbf{x})
d\mathbf{x}
\nonumber\\
&&
+\int_{\partial \Omega_{g_h}} \partial_2 j\left(s, y_h(s)\right)
\varphi_h(s) ds
\nonumber\\
&&
+\frac{2}{\epsilon}
\int_{\partial \Omega_{g_h}}
\left(
\nabla y_h(s)\cdot
\frac{\nabla g_h(s) }{
|\nabla g_h(s) |}
-\delta_h(s)
\right)
\nabla \varphi_h(s) \cdot
\frac{\nabla g_h(s) }{
|\nabla g_h(s) |}
ds
\label{5.2}
\end{eqnarray}
for all $\varphi_h \in \mathbb{V}_h$.
The right-hand side of (\ref{5.2}) contains exactly the terms that multiply $q$
in the gradient (\ref{3.13}), and $\delta_h(s)$ is a continuous piecewise
linear $\mathbb{P}_1$ discretization of $\delta(s)$ in $D$.
\begin{proposition}\label{prop:5.1}
Given $g_h,u_h\in\mathbb{W}_h$ and the variations $r_h,v_h\in\mathbb{W}_h$,
let $y_h\in\mathbb{V}_h$ be the
finite element solution of
(\ref{3.2}), (\ref{3.3}), let
$q_h\in\mathbb{V}_h$ be the finite element solution of (\ref{3.6}),
(\ref{3.7}) depending on $r_h,\ v_h$
and let $p_h\in\mathbb{V}_h$ be the solution of (\ref{5.2}).
Then
\begin{eqnarray}
&&\int_{E} \partial_2 J\left(\mathbf{x},y_h(\mathbf{x})\right)
q_h(\mathbf{x})
d\mathbf{x}
+\int_{\partial \Omega_{g_h}} \partial_2 j\left(s, y_h(s)\right)
q_h(s) ds
\nonumber\\
&&
+\frac{2}{\epsilon}
\int_{\partial \Omega_{g_h}}
\left(
\nabla y_h(s)\cdot
\frac{\nabla g_h(s) }{
|\nabla g_h(s) |}
-\delta_h(s)
\right)
\nabla q_h(s) \cdot
\frac{\nabla g_h(s) }{
|\nabla g_h(s) |}
ds
\leq 0
\label{5.3}
\end{eqnarray}
if we choose:\\
i) $r_h=-p_hu_h$ and $v_h=-p_h$ or\\
ii) $r_h=-\widetilde{d}_h$ and $v_h=-p_h$ where
$\widetilde{d}_h \in\mathbb{W}_h$ is the solution of
\begin{eqnarray}
&&\int_D \nabla \widetilde{d}_h \cdot \nabla \varphi_h d\mathbf{x}
+\int_D \widetilde{d}_h \varphi_h d\mathbf{x}
=
\int_D 2(g_h)_+ u_h p_h
\varphi_h
d\mathbf{x}
\label{5.4}
\end{eqnarray}
for all $\varphi_h \in \mathbb{W}_h$.
\end{proposition}
\noindent
\textbf{Proof.}
Putting $\varphi_h=q_h$ in (\ref{5.2}) and multiplying (\ref{3.6}) by $p_h$,
integrating by parts over $D$ and using (\ref{3.7}),
we get that the left hand side of (\ref{5.3}) is equal to:
\begin{eqnarray*}
\int_D (g_h)_+^2 v_h p_h d\mathbf{x}
+ \int_D 2(g_h)_+ u_h r_h p_h d\mathbf{x}.
\end{eqnarray*}
For $v_h=-p_h$, we have
$$
\int_D (g_h)_+^2 v_hp_hd\mathbf{x}
=-\int_D (g_h)_+^2 p_h^2 d\mathbf{x}
\leq 0.
$$
If $(g_h)_+ p_h$ is not null, then the above inequality is strict.\\
Case i). For $r_h=-p_h u_h$, we have
$$
\int_D 2(g_h)_+ u_h r_h p_h d\mathbf{x}
=-\int_D 2(g_h)_+ (u_h p_h)^2 d\mathbf{x}
\leq 0.
$$
Case ii). For $r_h=-\widetilde{d}_h$, we have
\begin{eqnarray*}
\int_D 2(g_h)_+ u_h r_h p_h d\mathbf{x}
&=&-\int_D 2(g_h)_+ u_h p_h \widetilde{d}_h d\mathbf{x}\\
&=&-\int_D \nabla \widetilde{d}_h \cdot \nabla \widetilde{d}_h d\mathbf{x}
-\int_D \widetilde{d}_h \widetilde{d}_h d\mathbf{x}
\leq 0.
\end{eqnarray*}
The second equality is obtained by putting
$\varphi_h =\widetilde{d}_h$ in (\ref{5.4}).
If $(g_h)_+ p_h$ is not null, then the inequality (\ref{5.3}) is strict.
This ends the proof.
\quad$\Box$
\begin{remark}\label{rem:5.1}
Due to the strongly nonconvex character of shape optimization problems,
descent algorithms in general find only a local minimum point of the penalized problem.
The penalization term may remain nonzero, that is, the
constraint (\ref{2.2}) may be violated. However, the above methodology offers
a systematic and general approximation procedure that can be applied in many
examples and produces relevant results. Both topological and boundary
variations are performed simultaneously.
\end{remark}
\section{Numerical tests}
\setcounter{equation}{0}
\medskip
\textbf{Example 1.}
We choose
$D=]-3,3[\times ]-3,3[$,
$y_d(x_1,x_2)=x_1^2 + x_2^2 -1^2$, $f(\mathbf{x})=-4+y_d(\mathbf{x})$
and the tracking type cost
$j(\mathbf{x})=\frac{1}{2}\left(y(\mathbf{x})-y_d(\mathbf{x})\right)^2$.
We fix $\delta=2$ for the non homogeneous Neumann
boundary condition.
We consider first the case $E=\emptyset$ and $J = 0$, with the numerical parameters:
$\epsilon=0.5$, the mesh of $D$ has 73786 triangles and 37254 vertices
and the tolerance parameter for the stopping test is $tol=10^{-6}$.
The initial domain is the disk of center $(0,0)$ and radius $2.5$
with a circular hole of center $(-1,-1)$ and radius $0.5$. The corresponding
$g_0(x_1,x_2)$ is given by
$$
\max\left(
(x_1)^2 + (x_2)^2 -2.5^2,
-(x_1+1)^2 - (x_2+1)^2 +0.5^2
\right) .
$$
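For illustration, the level-set function $g_0$ above can be coded directly. The following minimal sketch (Python with NumPy) is a hypothetical helper, not part of the paper's FreeFem++ implementation:

```python
import numpy as np

def g0(x1, x2):
    """Level-set function of the initial domain in Example 1:
    disk of radius 2.5 centered at (0,0) minus a circular hole of
    radius 0.5 centered at (-1,-1).  Omega = {x : g0(x) < 0}."""
    outer = x1**2 + x2**2 - 2.5**2               # < 0 inside the big disk
    hole = -(x1 + 1)**2 - (x2 + 1)**2 + 0.5**2   # > 0 inside the hole
    return np.maximum(outer, hole)
```

The maximum of the two level sets is negative exactly on the disk-with-hole domain.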
The initial guess for the control is $u_0=0$.
We use the descent direction
given by the Proposition \ref{prop:5.1}, case ii) and
the algorithm stops after 3 iterations.
For the stopping test, we have computed only the left-hand side of (\ref{5.3}) and we
replaced $d\mathcal{J}_{(G^k,U^k)}(R^k,V^k)=0$ by:
there are no smaller values than $\mathcal{J}(G^k,U^k)$ in the direction
$(R^k,V^k)$ for $\lambda \in \{\rho^i; i\in \mathbb{N},\ 0 \leq i < 30 \}$,
with $\rho=0.8$.
We can observe in Figure \ref{fig:ex1_xi} the evolution of the
domain (both boundary and topological changes) and in
Table \ref{tab:ex1_J} the corresponding values of the
objective function. For $u_0=0$, we get $g_1=g_0$, but we have, for the cost functional,
$\mathcal{J}_1 < \mathcal{J}_0$, since there is minimization with respect to the control $u$.
We do not plot in Figure \ref{fig:ex1_xi} the domain for $k=1$ because it
is the same as for
$k=0$, but there is a column in Table \ref{tab:ex1_J} corresponding to $k=1$,
showing the evolution of the penalized cost.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4.5cm]{ex1_b_xi_0.pdf}
\
\includegraphics[width=4.5cm]{ex1_b_xi_1_14.pdf}
\
\includegraphics[width=4.5cm]{ex1_b_xi_1_13.pdf}\\
\includegraphics[width=4.5cm]{ex1_b_xi_1_11.pdf}
\
\includegraphics[width=4.5cm]{ex1_b_xi_1_7.pdf}
\
\includegraphics[width=4.5cm]{ex1_b_xi_1_9.pdf}
\end{center}
\caption{Example 1. Initial domain $k=0$ (top, left), intermediate domains
during the line-search after $k=1$
and the final domain $k=2$ (bottom, right).
\label{fig:ex1_xi}}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
iteration & k=0 & k=1 & & & & & k=2 \\ \hline
$t_2$ & 220.87 & 171.13 & 155.60 & 149.47 & 129.19 & 67.60 & 90.50 \\ \hline
$t_3$ & 35.50 & 34.63 & 40.12 & 38.06 & 32.10 & 54.75 & 18.30 \\ \hline
$\mathcal{J}$ & 291.89 & 240.39 & 235.85 & 225.60 & 193.39 & 177.12 & 127.11 \\ \hline
\end{tabular}
\end{center}
\caption{Example 1. The computed objective function
$\mathcal{J}=t_2+\frac{1}{\epsilon}t_3$.
The columns 4, 5, 6, 7 correspond to the
intermediate configurations
obtained during the line-search after $k=1$. The descent property holds only for the total cost, shown on the last line.}
\label{tab:ex1_J}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
iteration & k=0 & & & & & k=2 \\ \hline
$t_2$ & 96.39 & 74.76 & 79.98 & 253.41& 46.59 & 56.62 \\ \hline
\end{tabular}
\end{center}
\caption{Example 1. The values of $t_2$ for the finite element solution
of (\ref{2.1})-(\ref{2.2}) in the domains presented in Figure \ref{fig:ex1_xi}.}
\label{tab:ex1_t2}
\end{table}
For the solution of the elliptic problem (\ref{2.1})-(\ref{2.2})
in the computed domains $\Omega_g$, we obtain in fact the best value
$t_2=46.59$ (see Table \ref{tab:ex1_t2}), which is considerably better
than $t_2=67.60$ obtained for the
solution of (\ref{3.2})-(\ref{3.3}) in $D$, in the corresponding iteration
of the algorithm. This is due to the value of the penalization term $t_3$,
which remains ``far'' from zero.
Such situations are frequent in penalization approaches for nonconvex
minimization problems.
\clearpage
\medskip
\noindent
\textbf{Example 2.}
We study now a case with $E\neq\emptyset$. The $D$, $y_d$, $f$, $\delta$
are the same as in Example 1. The observation domain $E$ is the disk of center
$(0,0)$ and radius $0.5$ and we take
$J(\mathbf{x})=\frac{1}{2}\left(y(\mathbf{x})-y_d(\mathbf{x})\right)^2$
and $j=0$. We fix $\epsilon=0.9$ and the other
numerical parameters are the same as in Example 1. Such a choice of a ``big''
penalization parameter (similar to the previous example) has the consequence
that the constraint (\ref{2.2})
is significantly relaxed, which allows a large choice of descent
directions.
For $g_0(x_1,x_2)$, given by
$$
\max\left(
(x_1+0.8)^2 + (x_2+0.8)^2 -1.8^2,
-(x_1+0.8)^2 - (x_2+0.8)^2 +0.6^2
\right)
$$
we obtain as initial domain the ring of center $(-0.8,-0.8)$, exterior
radius $1.8$ and interior radius $0.6$.
In order to enforce the restriction (\ref{2.5}) during the algorithm, we use
the descent direction method
with projection, see \cite{Ciarlet2018}.
The descent direction is given by the Proposition \ref{prop:5.1}, case ii)
and the projection is computed as follows:
$\Pi(g)=g_E$ in $E$ and $\Pi(g)=g$ outside $E$,
where $g_E\in \mathcal{F}$ is such that $g_E(\mathbf{x})<0$ if and only if
$\mathbf{x}\in E$. In our test, $g_E(x_1,x_2)= (x_1)^2 + (x_2)^2 -0.5^2$.
The line search, with projection only for the parametrization function, is
$$
\lambda_k \in \arg\min_{\lambda >0}
\mathcal{J}\left( \Pi(G^k+\lambda R^k), U^k+\lambda V^k \right)
$$
and the next iteration is defined by
$$
G^{k+1}=\Pi(G^k+\lambda_k R^k),\quad U^{k+1}= U^k+\lambda_k V^k.
$$
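The projection $\Pi$ described above can be sketched as follows (Python with NumPy; the array-based representation of $g$ and the \texttt{radius} parameter are assumptions for illustration):

```python
import numpy as np

def project(g, x1, x2, radius=0.5):
    """Projection Pi from Example 2 (a sketch): at points inside the
    observation disk E of the given radius, replace g by
    g_E(x) = |x|^2 - radius^2, which is negative exactly on E;
    keep g unchanged outside E."""
    g_E = x1**2 + x2**2 - radius**2
    in_E = g_E < 0                    # True exactly for points of E
    return np.where(in_E, g_E, g)
```

This guarantees that the projected level-set function is negative on all of $E$, so that $E\subset \Omega_{\Pi(g)}$.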
\medskip
The initial guess for the control is $u_0=1$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4.5cm]{ex2_xi_0.pdf}
\
\includegraphics[width=4.5cm]{ex2_b_xi_0_17.pdf}
\
\includegraphics[width=4.5cm]{ex2_b_xi_0_13.pdf}\\
\includegraphics[width=4.5cm]{ex2_b_xi_0_12.pdf}
\
\includegraphics[width=4.5cm]{ex2_b_xi_1.pdf}
\
\includegraphics[width=4.5cm]{ex2_b_xi_2.pdf}
\end{center}
\caption{Example 2. Domain for $k=0$ (top, left), intermediate domains
during the line-search after $k=0$,
domain for $k=1$ (bottom, middle)
and the final domain for $k=2$ (bottom, right).
\label{fig:ex2_b_xi}}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|r|r|r|r|r|r|}
\hline
iteration & k=0 & & & & k=1 & k=2 \\ \hline
$t_1$ & 8.03 & 6.01 & 4.00 & 3.37 & 0.35 & 0.54 \\ \hline
$t_3$ & 234.91 & 218.34 & 204.47 & 198.08 & 193.56 & 57.42 \\ \hline
$\mathcal{J}$ & 269.05 & 248.62 & 231.20 & 223.46 & 215.42 & 64.35 \\ \hline
\end{tabular}
\end{center}
\caption{Example 2. The computed objective function
$\mathcal{J}=t_1+\frac{1}{\epsilon}t_3$.
The columns 3, 4, 5 correspond to the
intermediate configurations
obtained during the line-search after $k=0$.}
\label{tab:ex2_b_J}
\end{table}
The domain evolution is presented in Figure \ref{fig:ex2_b_xi}
and the corresponding values of the
objective function are in Table \ref{tab:ex2_b_J}.
For the finite element solution of (\ref{2.1})-(\ref{2.2})
in the domains presented in Figure \ref{fig:ex2_b_xi}, we have reported $t_1$
in Table \ref{tab:ex2_b_t1}. Due to the low value of the initial cost, we
notice oscillations of $t_1$ around this value, and the minimal cost is attained
already in the first step of the line search. The interpretation of the
penalization term is similar to that in the previous example.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|r|r|r|r|r|r|}
\hline
iteration & k=0 & & & & k=1 & k=2 \\ \hline
$t_1$ & 0.099& 0.00053& 0.11& 0.27& 0.51& 0.49 \\ \hline
\end{tabular}
\end{center}
\caption{Example 2. The values of $t_1$ for the finite element solution
of (\ref{2.1})-(\ref{2.2}) in the domains presented in Figure \ref{fig:ex2_b_xi}.}
\label{tab:ex2_b_t1}
\end{table}
\clearpage
\section{Introduction}
In this note, we are interested in four theorems on spherical point configurations.
The first of these theorems is concerned with orthonormal representation of graphs.
The notion of orthonormal representation of a graph was introduced by Lov\'{a}sz in his study of Shannon capacity of graphs \cite{lov79}.
For a detailed discussion of orthonormal representations see the recent book \cite{lov19}.
Parsons and Pisanski \cite{pp89} introduced the following notion of orthonormal representation, which is slightly different from that of Lov\'{a}sz
\footnote{In Lov\'{a}sz's definition,
the inner product $(p^i)^Tp^j$ is unrestricted if nodes $i$ and $j$ are adjacent.}.
Let $G$ be a simple graph with nodes $1,\ldots,n$. An {\em orthonormal representation } of $G$ is a mapping of the nodes of $G$ to
unit vectors $p^1,\ldots,p^n$ in Euclidean $r$-space $\mathbb{R}^r$ such that $(p^i)^Tp^j$ is negative or zero
depending on whether nodes $i$ and $j$ are adjacent or not.
The smallest dimension $r$ necessary for such a representation is denoted by $d(G)$. It is easy to see that
$d(G) \geq \alpha(G)$, where $\alpha(G)$ is the independence number of $G$: the images of an independent set are pairwise orthogonal unit vectors, hence linearly independent.
\v{S}i\v{n}ajov\'{a} proved the following.
\begin{thm}[\v{S}i\v{n}ajov\'{a} \cite{sin91}] \label{thmsin}
Let $G$ be a simple graph on $n$ nodes and let $k$ be the number of its nontrivial connected components, i.e., those connected components
with at least $2$ nodes. Then
$d(G) = n - k$.
\end{thm}
The remaining theorems are concerned with the dispersion problem.
The {\em dispersion problem} is the problem of maximizing, over all $n$-point configurations
on the unit ($r-1$)-sphere in $\mathbb{R}^r$, the minimum distance between any two points.
The dispersion problem has applications in sphere packing and spherical designs \cite{wyx18}.
Davenport and Haj\'{o}s \cite{dh51} and Rankin \cite{ran55} provided solutions of this problem for the case $n=r+2$.
Rankin \cite{ran55}, also, provided a solution for the case $n=2r$.
Before presenting Rankin's two theorems, we need the following definition.
The {\em regular $r$-crosspolytope} is the convex hull of the union of $r$ mutually orthogonal line segments of length 2
and intersecting at their common midpoint. That is, the regular $r$-crosspolytope is the convex hull of $\{\pm e^1, \pm e^2, \ldots, \pm e^r\}$,
where $e^i$ is the $i$th standard unit vector in $\mathbb{R}^r$.
\begin{thm}[Rankin \cite{ran55}] \label{thmran1}
Let $p$ be an $n$-point configuration on the unit ($r-1$)-sphere in $\mathbb{R}^r$. If $n=r+2$, then two points
of $p$ are at a distance of at most $\sqrt{2}$ from each other.
\end{thm}
\begin{thm}[Rankin \cite{ran55}] \label{thmran2}
Let $p$ be an $n$-point configuration on the unit ($r-1$)-sphere in $\mathbb{R}^r$. If $n=2r$ and the distance
between any two points of $p$ is $\geq \sqrt{2}$, then $p$ is unique, up to a rigid motion, and the points of $p$ are the vertices
of the regular $r$-crosspolytope.
\end{thm}
Kuperberg \cite{kup07} generalized Rankin's result to all $n$: $r+2 \leq n \leq 2r$.
\begin{thm}[Kuperberg \cite{kup07}] \label{thmkup1}
Let $p$ be an $n$-point configuration on the unit sphere in $\mathbb{R}^r$ such that $2 \leq n- r \leq r$.
If the minimum distance between any two points of $p$ is at least $\sqrt{2}$, then
$\mathbb{R}^r$ can be split into the orthogonal product $\prod_{i=1}^{n-r} L_i$ of $n-r$ subspaces of $\mathbb{R}^r$
such that $L_i$ contains exactly $r_i + 1$ points of $p$, where $r_i$ is the dimension of $L_i$.
\end{thm}
In this note, we present simple linear algebraic proofs of \v{S}i\v{n}ajov\'{a}'s, Rankin's and
Kuperberg's theorems, based on
spherical Euclidean distance matrices (EDMs) and the Perron-Frobenius theorem.
These proofs are given in Sections 3, 4 and 5, respectively, while the necessary background material is given in Section 2.
\subsection{Notation}
We collect here the notation used in this note. $e_n$ and $E_n$ denote, respectively, the vector of all 1's in $\mathbb{R}^n$
and the matrix of all 1's of order $n$. $I_n$ denotes the identity matrix of order $n$. $e^i_n$ denotes the $i$th column of $I_n$.
The subscript $n$ in $e_n$, $E_n$, $I_n$ and $e^i_n$ will be omitted if the dimension is clear from the context.
For a matrix $A$, we denote the vector consisting of the diagonal entries of $A$ by $\mathrm{diag}\,(A)$. Also,
for a real symmetric matrix $A$, we denote by $\lambda_{\max}(A)$ and $m(\lambda_{\max}(A))$, respectively,
the largest eigenvalue of $A$ and its multiplicity. The zero vector or the zero matrix of the appropriate dimension
is denoted by ${\bf 0} $.
PSD stands for positive semidefinite.
Finally, $E(G)$ denotes the edge set of a simple graph $G$.
\section{Preliminaries}
In this section we present the necessary background concerning EDMs and more specifically spherical EDMs.
For a comprehensive treatment of EDMs see the monograph \cite{alfm18}.
An $n \times n$ matrix $D=(d_{ij})$ is said to be an EDM
if there exist points $p^1,\ldots,p^n$ in some Euclidean space such that
\[
d_{ij}= || p^i - p^j ||^2 \mbox{ for all } i,j=1,\ldots, n,
\]
where $||x||$ denotes the Euclidean norm of $x$, i.e., $||x|| = \sqrt{x^Tx}$.
$p^1,\ldots,p^n$ are called the {\em generating points} of $D$ and the dimension of their affine span is
called the {\em embedding dimension} of $D$.
If the embedding dimension of an $n \times n$ EDM $D$ is $n-1$, then we refer to $D$ as the EDM of a simplex.
For example, let $E$ and $I$ denote respectively the matrix of all 1's and the identity matrix. Then
the EDM $D = \gamma(E - I)$, where $\gamma$ is a positive scalar, is
the EDM of a {\em regular simplex}. An EDM $D$ is said to be {\em spherical} if its generating points lie on a sphere.
A {\em unit} spherical EDM is a spherical EDM whose generating points lie on a sphere of radius $\rho = 1$.
Let $e$ denote the vector of all 1's in $\mathbb{R}^n$ and let $s$ be a vector in $\mathbb{R}^n$ such that $e^Ts= 1$.
The following theorem is a well-known characterization of EDMs \cite{sch35,yh38, gow85,cri88}.
\begin{thm} \label{thmEDM}
Let $D$ be an $n \times n$ real symmetric matrix whose diagonal entries are all 0's. Then $D$ is an EDM
if and only if
\begin{equation} \label{defBs}
B= -\frac{1}{2} (I - e s^T) D (I - s e^T )
\end{equation}
is positive semidefinite (PSD), in which case, the embedding dimension of $D$ is given by rank($B$).
\end{thm}
That is, $D$ is an EDM iff it is negative semidefinite on $e^{\perp}$, the orthogonal complement of $e$ in $\mathbb{R}^n$.
It can be easily shown that $B$ as defined in Equation (\ref{defBs}) is a Gram matrix of the generating points of $D$,
or a Gram matrix of $D$ for short.
Let $B$ be a Gram matrix of an EDM $D$ with rank $r$. Then $B$ is PSD and hence $B=PP^T$ for some $n \times r$ matrix $P$. Consequently,
$p^1, \ldots, p^n$, the generating points of $D$, are given by the rows of $P$.
As a result, $P$ is called a {\em configuration matrix} of $D$.
It should be noted that Equation (\ref{defBs}) implies that $Bs={\bf 0} $ and hence $P^Ts={\bf 0} $; that is
\begin{equation} \label{eqnsps}
\sum_{i=1}^n s_i p^i = {\bf 0} .
\end{equation}
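Theorem \ref{thmEDM} is easy to verify numerically. The following sketch (Python with NumPy; an illustration only, not part of the note) builds $B$ for a given EDM $D$ and a vector $s$ with $e^Ts=1$:

```python
import numpy as np

def gram_from_edm(D, s):
    """Gram matrix B = -(1/2)(I - e s^T) D (I - s e^T) of Theorem 2.1,
    for any vector s with e^T s = 1.  Note (I - s e^T) = (I - e s^T)^T."""
    n = D.shape[0]
    e = np.ones((n, 1))
    s = np.asarray(s, dtype=float).reshape(n, 1)
    Q = np.eye(n) - e @ s.T
    return -0.5 * Q @ D @ Q.T
```

One can then check that $B$ is PSD, that its rank equals the embedding dimension, and that $d_{ij}=B_{ii}+B_{jj}-2B_{ij}$ recovers $D$.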
It is well known \cite{gow85} that if $D$ is a nonzero EDM, then $e$ lies in the column space of $D$, i.e.,
there exists a vector $w$ such that
\begin{equation} \label{defw}
Dw = e.
\end{equation}
It is also well known that if $D$ is an $n \times n$ EDM of a simplex, i.e., if the embedding dimension of $D$ is $n-1$,
then $D$ is spherical and nonsingular.
Among the many different characterizations of spherical EDMs, the one
given in the following theorem is the most relevant for our purpose.
\begin{thm}[\cite{gow82,gow85}] \label{thmgow}
Let $D$ be an EDM and let $Dw =e$. Then $D$ is spherical if and only if
$e^Tw >0$, in which case, $\rho$, the radius of the sphere containing the generating points of $D$, is given by
\begin{equation}
\rho = \left( \frac{1}{2 e^T w } \right)^{1/2}.
\end{equation}
\end{thm}
As an example consider $D= \gamma (E_n-I_n)$, the EDM of a regular simplex. Then
$w= e / (\gamma (n-1))$ and thus its generating points lie on a sphere of radius
$\rho = \sqrt{\gamma (n-1)/(2n)}$. Consequently, the $n \times n$ unit spherical EDM of a regular simplex is given by
\[
D = \frac{2n}{n-1} (E_n - I_n).
\]
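Theorem \ref{thmgow} and the regular-simplex example can be checked numerically. A sketch (Python with NumPy), under the assumption that $D$ is a spherical EDM so that $Dw=e$ is solvable and $e^Tw>0$:

```python
import numpy as np

def sphere_radius(D):
    """Radius of the circumscribing sphere of a spherical EDM
    (Theorem 2.2): solve D w = e (in the least-squares sense, since
    e lies in the column space of D) and return rho = (1/(2 e^T w))^(1/2)."""
    n = D.shape[0]
    e = np.ones(n)
    w, *_ = np.linalg.lstsq(D, e, rcond=None)
    return (1.0 / (2.0 * (e @ w))) ** 0.5
```

For $D=\gamma(E_n-I_n)$ this reproduces $\rho=\sqrt{\gamma(n-1)/(2n)}$, and the scaled simplex EDM $\frac{2n}{n-1}(E_n-I_n)$ indeed has $\rho=1$.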
A vector $x$ is {\em positive}, denoted by $x > {\bf 0} $, if each of its entries is positive.
Similarly, a matrix $A$ is {\em positive} ({\em nonnegative}), denoted by $A > {\bf 0} $ ($A \geq {\bf 0} $), if each of its entries
is $> 0$ ($\geq 0$).
An $n \times n$ nonnegative matrix $A$ is said to be {\em reducible} if $A$ is the $1 \times 1$ zero matrix or
if $n \geq 2$ and there exists a permutation matrix $Q$ such that
\[
Q A Q^T = \left[ \begin{array}{cc} A_{11} & A_{12} \\
{\bf 0} & A_{22} \end{array} \right],
\]
where $A_{11}$ and $A_{22}$ are square matrices.
It easily follows from the definition that if $A$ is a nonnegative symmetric reducible matrix of order $n \geq 2$,
then there exists a permutation matrix $Q$ such that
$Q A Q^T$ is a block diagonal matrix with at least two blocks, each block being either irreducible or the $1 \times 1$ zero matrix.
A nonnegative matrix that is not reducible is {\em irreducible}.
It is well known that an $n \times n$ nonnegative matrix $A$ is irreducible if and only if $(I + A)^{n-1}> {\bf 0} $.
Moreover, if $A$ is the adjacency matrix of a simple graph $G$, then $A$ is irreducible if and only if $G$ is connected.
We will need the following fact from the celebrated Perron-Frobenius theorem: If $A$ is a nonnegative irreducible matrix, then
the largest eigenvalue of $A$, $\lambda_{\max}(A)$, is positive with multiplicity $m(\lambda_{\max}(A)) = 1$ and
the eigenvector associated with $\lambda_{\max}(A)$ is positive.
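The irreducibility criterion $(I+A)^{n-1}>{\bf 0} $ can be coded directly. A minimal sketch (Python with NumPy; an illustration only):

```python
import numpy as np

def is_irreducible(A):
    """Irreducibility test for a nonnegative n x n matrix:
    A is irreducible iff (I + A)^(n-1) has no zero entry."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + A, n - 1)
    return bool(np.all(M > 0))
```

For an adjacency matrix this returns True exactly when the graph is connected, and the Perron-Frobenius facts (simple positive largest eigenvalue, positive eigenvector) can then be observed numerically.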
\section{Proof of \v{S}i\v{n}ajov\'{a} Theorem}
A connected component of a graph $G$ is said to be {\em nontrivial} if it consists of at least 2 nodes.
In other words, isolated nodes are trivial connected components of $G$.
Now let $p^i$ and $p^j$ be two unit vectors. Then, clearly, $(p^i)^Tp^j = 0$ if and only if $||p^i-p^j||^2 = 2$ and
$(p^i)^Tp^j < 0$ if and only if $||p^i-p^j||^2 > 2$. As a result,
Theorem \ref{thmsin} can be stated in the language of EDMs as follows.
\begin{thm}[\v{S}i\v{n}ajov\'{a} \cite{sin91}] \label{thmsin2}
Let $G$ be a simple graph on $n$ nodes and let $k$ be the number of its nontrivial connected components.
Then there exists a unit spherical EDM $D=(d_{ij})$ of embedding dimension $r= n-k$ such that
\begin{equation} \label{eqdijsin}
d_{ij} \left\{ \begin{array}{ll} > 2 \mbox{ iff } \{i,j\} \in E(G), \\
= 2 \mbox{ iff } \{i,j\} \not \in E(G), \end{array} \right.
\end{equation}
where $E(G)$ denotes the edge set of $G$.
Furthermore, there does not exist a unit spherical EDM of embedding dimension $r \leq n-k-1$ that satisfies (\ref{eqdijsin}).
\end{thm}
Before proving Theorem \ref{thmsin2}, we first prove the following lemma.
\begin{lem} \label{lemBDel}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$ and let $Dw=e$.
Let $D = 2 (E-I) + 2 \Delta$. Then
$\lambda_{\max} (\Delta) = 1$ and $w$ is an eigenvector associated with $\lambda_{\max}(\Delta)$. Moreover,
$r=n - m(\lambda_{\max}(\Delta))$, where $m(\lambda_{\max}(\Delta))$ denotes the multiplicity of $\lambda_{\max}(\Delta)$.
\end{lem}
\begin{proof}
By Theorem \ref{thmgow}, $2 e^T w = 1$. Thus by setting $s = 2 w$ in Equation (\ref{defBs}),
it follows that the corresponding Gram matrix of $D$ is
\begin{equation}
B = E - \frac{1}{2}D = I - \Delta.
\end{equation}
Hence $\lambda_{\max}(\Delta) \leq 1$ since $B$ is PSD.
On the other hand, $Bw={\bf 0} $ implies that
\begin{equation}
\Delta w = w.
\end{equation}
Hence $\lambda_{\max}(\Delta) \geq 1$ and consequently $\lambda_{\max}(\Delta) = 1$.
As a result, $r = \mathrm{rank}\, (B)= n - m(\lambda_{\max}(\Delta))$.
\end{proof}
Now we are ready to prove Theorem \ref{thmsin2}.
\begin{proof}[Proof of Theorem \ref{thmsin2}]
Let $A$ denote the adjacency matrix of $G$.
Then there exists a permutation matrix $Q$ such that
\begin{equation} \label{QAQT}
Q A Q^T = \left[ \begin{array}{cccc} A^1 & & & \\
& \ddots & & \\
& & A^{k} & \\
& & & {\bf 0} \end{array} \right],
\end{equation}
where $A^1, \ldots, A^k$ denote the adjacency matrices of the nontrivial connected components of $G$.
Hence, $A^1, \ldots, A^k$ are irreducible nonnegative matrices of orders $\geq 2$.
Therefore, by the Perron-Frobenius theorem, $m(\lambda_{\max}(A^1))=\cdots = m(\lambda_{\max}(A^k)) =1$.
For $i=1,\ldots,k$, let $\xi^i$ denote the eigenvector of $A^i$ associated with $\lambda_{\max}(A^i)$ and
let $\Delta^i = A^i/\lambda_{\max}(A^i)$. Further,
let
\[
\Delta = \left[ \begin{array}{cccc} \Delta^1 & & & \\
& \ddots & & \\
& & \Delta^{k} & \\
& & & {\bf 0} \end{array} \right], \;\;
\xi = \left[ \begin{array}{c} \xi^1 \\ \vdots \\ \xi^{k} \\ {\bf 0} \end{array} \right] \mbox{ and } w = \frac{\xi}{2 e^T \xi}.
\]
Then, obviously, $\Delta_{ij} > 0$ if and only if $\{i,j\} \in E(G)$ and $\Delta_{ij} = 0$ if and only if
$i=j$ or $\{i,j\} \not \in E(G)$. Also, it is equally obvious that
$\lambda_{\max}(\Delta)=1$, $m(\lambda_{\max}(\Delta))=k$ and $\Delta w = w$.
Let $D = 2(E-I)+2 \Delta$. Then $Dw= e$ since $2 e^Tw = 1$. Now if we let $s=2 w$ in Equation (\ref{defBs}), then
\[
B = -\frac{1}{2} (I-e s^T) D (I-se^T) = E - \frac{1}{2} D = I - \Delta
\]
is PSD and of rank $n-k$. As a result, by Theorems \ref{thmEDM} and \ref{thmgow},
$D$ is a unit spherical EDM of embedding dimension $r=n-k$ that satisfies (\ref{eqdijsin}).
To complete the proof, let $r$ be the embedding dimension of any unit spherical EDM $D$ that satisfies (\ref{eqdijsin}).
Let $\Delta = D/2+I-E$ and, without loss of generality, assume that $\Delta$ is block diagonal. Thus $\Delta$ has $k$ irreducible nonnegative diagonal blocks,
each associated with a nontrivial connected component of $G$. Now it follows from Lemma \ref{lemBDel} that
$\lambda_{\max}(\Delta) = 1$ and $r = n - m(\lambda_{\max}(\Delta))$. Consequently, $r \leq n-k$ since the contribution from each irreducible
diagonal block of $\Delta$ to $m(\lambda_{\max}(\Delta))$ is at most 1.
\end{proof}
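The construction in this proof is easy to check numerically. The following sketch (Python/NumPy, added for illustration; the graph, a 3-node path plus an edge plus an isolated node, is invented and not from the text) assembles $\Delta$, $w$ and $D=2(E-I)+2\Delta$ from the adjacency matrices of the nontrivial components and verifies that $Dw=e$, that $B=I-\Delta$ is PSD, and that the embedding dimension is $n-k$:

```python
import numpy as np

def sinajova_edm(adj_blocks, n):
    """Assemble Delta, w and D = 2(E-I) + 2*Delta from the adjacency
    matrices of the k nontrivial connected components, padding with
    zeros for isolated nodes."""
    Delta = np.zeros((n, n))
    xi = np.zeros(n)
    pos = 0
    for A in adj_blocks:
        m = A.shape[0]
        lam, vecs = np.linalg.eigh(A)
        Delta[pos:pos+m, pos:pos+m] = A / lam[-1]    # Delta^i = A^i / lambda_max(A^i)
        xi[pos:pos+m] = np.abs(vecs[:, -1])          # Perron eigenvector, > 0
        pos += m
    D = 2 * (np.ones((n, n)) - np.eye(n)) + 2 * Delta
    w = xi / (2 * xi.sum())                          # so that 2 e^T w = 1
    return D, Delta, w

# illustrative graph: a 3-node path, an edge, and one isolated node (n=6, k=2)
path3 = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
edge = np.array([[0., 1], [1, 0]])
D, Delta, w = sinajova_edm([path3, edge], 6)

B = np.eye(6) - Delta                                # Gram matrix B = I - Delta
assert np.allclose(D @ w, np.ones(6))                # D w = e
assert np.linalg.eigvalsh(B).min() > -1e-12          # B is PSD
assert np.linalg.matrix_rank(B) == 6 - 2             # embedding dimension n - k
```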
\section{Proof of Rankin's Theorems}
Theorems \ref{thmran1} and \ref{thmran2} can be stated in the language of EDMs as follows.
\begin{thm}[Rankin \cite{ran55}] \label{thmran12}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$.
If $n=r+2$, then at least one off-diagonal entry of $D$ is $\leq 2$.
\end{thm}
\begin{thm}[Rankin \cite{ran55}] \label{thmran22}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$.
If $n=2r$ and if each off-diagonal entry of $D$ is $\geq 2$, then there exists a permutation matrix $Q$ such that
\begin{equation} \label{eqnEDMcp}
QDQ^T = \left[ \begin{array}{cccc} 4(E_2-I_2) & 2E_2 & \cdots & 2E_2 \\
2 E_2 & 4(E_2-I_2) & \cdots & 2E_2 \\
\vdots & \vdots & \ddots & \vdots \\
2E_2 & \cdots & 2E_2 & 4(E_2-I_2) \end{array} \right],
\end{equation}
where $E_2$, $I_2$ are, respectively, the matrix of all 1's and the identity matrix of order $2$.
\end{thm}
It should be noted that the RHS of Equation (\ref{eqnEDMcp}) is the EDM of the regular $r$-crosspolytope.
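As a numerical illustration (Python/NumPy, not part of the original argument), one can assemble the matrix on the RHS of Equation (\ref{eqnEDMcp}) for, say, $r=4$, and confirm that it is a unit spherical EDM of embedding dimension $r$ with all off-diagonal entries $\geq 2$, realized by the crosspolytope vertices $\pm e^i$:

```python
import numpy as np

r = 4
n = 2 * r
E2, I2 = np.ones((2, 2)), np.eye(2)
# diagonal blocks 4(E_2 - I_2), off-diagonal blocks 2 E_2
D = 2 * np.kron(np.ones((r, r)), E2) + 2 * np.kron(np.eye(r), E2 - 2 * I2)

w = np.ones(n) / (2 * n)                       # Dw = e and 2 e^T w = 1
assert np.allclose(D @ w, np.ones(n))          # D is unit spherical
assert D[~np.eye(n, dtype=bool)].min() >= 2    # every off-diagonal entry >= 2
B = np.ones((n, n)) - D / 2                    # Gram matrix with s = 2w
assert np.linalg.matrix_rank(B) == r           # embedding dimension r = n/2

# the vertices +/- e_i of the regular r-crosspolytope realize D
P = np.zeros((n, r))
for i in range(r):
    P[2 * i, i], P[2 * i + 1, i] = 1.0, -1.0
G = P @ P.T
sqd = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
assert np.allclose(sqd, D)
```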
As was mentioned earlier, Theorems \ref{thmran1} and \ref{thmran2} are special cases of Theorem \ref{thmkup1} which we prove
in the next section. However, in this section, we present an independent proof of Theorem \ref{thmran1} after we have proved
the following lemma which will be needed in the sequel.
\begin{lem} \label{lemsimplex}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$ and assume that each off-diagonal entry of $D$ is $\geq 2$.
Let $D = 2 (E-I) + 2 \Delta$ and let $Dw=e$.
If $\Delta$ is irreducible, then $r = n-1$, i.e.,
$D$ is the EDM of a simplex, and $w > {\bf 0} $.
\end{lem}
\begin{proof}
Clearly, $\Delta \geq {\bf 0} $. Thus, it follows from Lemma \ref{lemBDel} and the Perron-Frobenius theorem
that $\lambda_{\max}(\Delta)=1$, $m(\lambda_{\max}(\Delta))= 1$ and $w > {\bf 0} $.
Consequently, $r = \mathrm{rank}\,(B) = n-1$.
\end{proof}
Now Theorem \ref{thmran12} is an immediate corollary of Lemma \ref{lemsimplex}.
\begin{proof}[Proof of Theorem \ref{thmran12}]
Let $\Delta = D/2 + I - E$ and
assume, by way of contradiction, that each off-diagonal entry of $D$ is $> 2$. Then each off-diagonal entry of $\Delta$ is $> 0$.
Hence, $I + \Delta > 0$ and thus $\Delta$ is irreducible. Consequently, by Lemma \ref{lemsimplex},
the embedding dimension of $D$ is $r=n-1$, which contradicts the assumption that $r=n-2$.
\end{proof}
\section{Proof of Kuperberg's Theorem}
Theorem \ref{thmkup1} can be stated in the language of EDMs as follows.
\begin{thm}[Kuperberg \cite{kup07}] \label{thmkup2}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$,
where $2 \leq n- r \leq r$. If each off-diagonal entry of $D$ is $\geq 2$,
then there exists a permutation matrix $Q$ such that
\[
Q D Q^T = \left[ \begin{array}{cccc} D^1 & 2E & \cdots & 2E \\
2E & D^2 & \cdots & 2E \\
\vdots & \vdots & \ddots & \vdots \\
2E & \cdots & 2E & D^{n-r} \end{array} \right],
\]
where $D^1, \ldots, D^{n-r}$ are unit spherical EDMs of simplices; and
$E$ is the matrix of all 1's of the appropriate dimension.
\end{thm}
Two remarks are in order here.
First, as shown in \cite{kup07}, if $n=r+2$, then Theorem \ref{thmkup2} reduces to Rankin's Theorem \ref{thmran12}. This follows since
if $D$ has an off-diagonal entry $< 2$, then there is nothing to prove. On the other hand, if every off-diagonal entry of $D$
is $\geq 2$, then Theorem \ref{thmkup2} implies that there is a permutation matrix $Q$ such that
$Q D Q^T = \left[ \begin{array}{cc} D^1 & 2E \\ 2E & D^2 \end{array} \right]$. Hence, at least one of the off-diagonal entries of $D$
is $2$ since $2E$ is a submatrix of $Q D Q^T$.
Second, also, as shown in \cite{kup07}, if $n=2r$, i.e., if $n-r = r$, then Theorem \ref{thmkup2} reduces to Rankin's Theorem \ref{thmran22}.
This follows since in this case, each of the submatrices $D^1,\ldots,D^r$ in Theorem \ref{thmkup2} is of order $2$, and thus
$D^1=\cdots=D^r= 4(E_2-I_2)$. Therefore,
the configuration, in this case, is that of the regular $r$-crosspolytope since
the matrix $Q D Q^T$ in Theorem \ref{thmkup2} reduces to that in Theorem \ref{thmran22}.
Before presenting the proof of Theorem \ref{thmkup2}, we need the following lemma which extends Lemma \ref{lemsimplex}
to the case where $\Delta$ is padded with zero rows and columns.
\begin{lem} \label{lemsimplex2}
Let $D$ be an $n \times n$ unit spherical EDM of embedding dimension $r$ and assume that each off-diagonal entry of $D$ is $\geq 2$.
Let $ D = 2(E-I) + 2 \tilde{\Delta}$ and let $D \tilde{w} =e$.
If $\tilde{\Delta} = \left[ \begin{array}{cc} \Delta & {\bf 0} \\ {\bf 0} & {\bf 0} \end{array} \right]$,
where $\Delta$ is irreducible, then $r=n-1$,
i.e., $D$ is the EDM of a simplex, and
$\tilde{w} = \frac{1}{2 e^T \xi} \left[ \begin{array}{c} \xi \\ {\bf 0} \end{array} \right]$, where $\Delta \xi = \xi$ and $\xi > {\bf 0} $.
\end{lem}
\begin{proof}
The proof is similar to that of Lemma \ref{lemsimplex}.
\end{proof}
Now we are ready to prove Theorem \ref{thmkup2}.
\begin{proof}[Proof of Theorem \ref{thmkup2}]
Let $D= 2(E-I) + 2 \Delta $ and thus $\Delta \geq {\bf 0} $ and $\mathrm{diag}\,(\Delta) = {\bf 0} $.
Since the embedding dimension of $D$ is $r$, it follows from Lemma \ref{lemBDel}
that $\lambda_{\max}(\Delta) = 1$ with multiplicity $m(\lambda_{\max}(\Delta))= n-r \geq 2$.
Therefore, by the Perron-Frobenius theorem, $\Delta$ is reducible and thus there exists a permutation matrix $Q$
such that
\begin{equation} \label{QDelQT}
Q \Delta Q^T = \left[ \begin{array}{ccc} \Delta^1 & & \\
& \ddots & \\
& & \Delta^{n-r} \end{array} \right] \mbox{ or }
\left[ \begin{array}{cccc} \Delta^1 & & & \\
& \ddots & & \\
& & \Delta^{n-r} & \\
& & & {\bf 0} \end{array} \right],
\end{equation}
where $\Delta^1, \ldots, \Delta^{n-r}$ are irreducible and thus $\lambda_{\max}(\Delta^1) = \cdots = \lambda_{\max}(\Delta^{n-r})= 1$.
For $i=1,\ldots,n-r$, let $\xi^i$ denote the eigenvector of $\Delta^i$ associated with $\lambda_{\max}(\Delta^i)$.
Therefore, by the Perron-Frobenius theorem $\xi^i > {\bf 0} $ since $\Delta^i$ is irreducible.
Next, we consider the two cases of $Q \Delta Q^T$ in Equation (\ref{QDelQT}) separately.
In the first case, all diagonal blocks of $\Delta$ are irreducible.
Assume that, for $i=1,\ldots,n-r$, $\Delta^i$ is of order $n_i$ where $\sum_{i=1}^{n-r} n_i = n$. Then
$n_i \geq 2$ since $\mathrm{diag}\,(\Delta^i)={\bf 0} $.
Let $D^i = 2(E_{n_i} - I_{n_i}) + 2 \Delta^i$ for $i=1, \ldots,n-r$.
Then $D^1, \ldots , D^{n-r}$ are EDMs since they are principal submatrices of $D$.
Moreover, let $w^i = \xi^i / (2 e^T_{n_i} \xi^i) $. Then $D^i w^i = e_{n_i}$ and $w^i > {\bf 0} $. Consequently,
$D^1, \ldots, D^{n-r}$ are unit spherical EDMs.
Therefore, it follows from Lemma \ref{lemsimplex}
that each of $D^1, \ldots, D^{n-r}$ is the EDM of a simplex.
It is worth pointing out that Equation (\ref{eqnsps}) implies that, for each $i=1,\ldots,n-r$,
the origin ${\bf 0} $ lies in the relative interior \cite{mus19} of the convex hull of the generating points of $D^i$ since $w^i > {\bf 0} $.
In the second case, let
$\tilde{\Delta}^{n-r} = \left[ \begin{array}{cc} \Delta^{n-r} & \\ & {\bf 0} \end{array} \right]$.
Then, similar to the first case, $D^1, \ldots, D^{n-r-1}$ are unit spherical EDMs of simplices and
the origin ${\bf 0} $ lies in the relative interior of the convex hull of the generating points of each of the EDMs $D^1, \ldots, D^{n-r-1}$.
On the other hand, let $D^{n-r} = 2(E-I) + 2 \tilde{\Delta}^{n-r}$ and let
\[
\tilde{w}^{n-r} = \frac{1}{2 e^T \xi^{n-r} } \left[ \begin{array}{c} \xi^{n-r} \\ {\bf 0} \end{array} \right].
\]
Then $\tilde{\Delta}^{n-r} \tilde{w}^{n-r} = \tilde{w}^{n-r}$ and $D^{n-r} \tilde{w}^{n-r} = e$.
Hence, $D^{n-r}$ is a unit spherical EDM and hence, by Lemma \ref{lemsimplex2}, $D^{n-r}$ is the EDM of a simplex.
However, unlike $D^1, \ldots, D^{n-r-1}$,
the origin lies on the relative boundary of the convex hull of the generating points of $D^{n-r}$.
\end{proof}
Finally, we should point out that in the second case of Equation (\ref{QDelQT}), i.e., if $Q \Delta Q^T$ has, say $s$, zero rows (and columns),
then we chose above to define $\tilde{\Delta}^{n-r}$ by appending these $s$ zero rows and columns to $\Delta^{n-r}$.
In fact, we could have appended any number of these zero rows and columns to any of $\Delta^1, \ldots, \Delta^{n-r}$.
As an illustration of the theorems of \v{S}i\v{n}ajov\'{a} and Kuperberg, consider the following example.
\begin{exa}
Let $G$ be the simple graph on the nodes $1,\ldots,5$ and with edge set $E(G)=\{ \; \{1,2\}, \{3,4\} \;\}$.
Hence, $G$ has two nontrivial connected components and one isolated node.
To illustrate \v{S}i\v{n}ajov\'{a}'s Theorem, let
\begin{equation}
\Delta = A = \left[ \begin{array}{ccccc} 0 & 1 & & & \\
1 & 0 & & & \\
& & 0 & 1 & \\
& & 1 & 0 & \\
& & & & 0 \end{array} \right]
= \left[ \begin{array}{ccc} \Delta^1 & & \\
& \Delta^{2} & \\
& & {\bf 0} \end{array} \right],
\end{equation}
where $A$ is the adjacency matrix of $G$. Then $D=2(E-I)+2\Delta$ is a unit spherical EDM of embedding dimension $3$ that satisfies
(\ref{eqdijsin}). Moreover, an orthonormal representation of $G$ is given by
$p^1 = e^1$, $p^2 = - e^1$, $p^3 = e^2$, $p^4 = - e^2$ and $p^5 = e^3$,
where $e^i$ is the $i$th standard unit vector in $\mathbb{R}^3$.
To illustrate Kuperberg's Theorem,
first, define $\tilde{\Delta}^{2} = \left[ \begin{array}{cc} \Delta^{2} & \\ & {\bf 0} \end{array} \right]$.
Then $\mathbb{R}^3$ can be split into two orthogonal subspaces $L_1$ and $L_2$, where
$L_1$ consists of the $x$-axis and contains points $p^1$ and $p^2$; while $L_2$ consists of the $y$--$z$ plane and contains points $p^3$, $p^4$ and $p^5$.
Notice that the origin is in the relative interior of the convex hull of $p^1$ and $p^2$, while the origin lies on the relative
boundary of the convex hull of $p^3, p^4$ and $p^5$.
On the other hand, if we define $\tilde{\Delta}^{1} = \left[ \begin{array}{cc} \Delta^{1} & \\ & {\bf 0} \end{array} \right]$,
then, in this case, the subspace $L_1$ consists of the $x$--$z$ plane and contains points $p^1,p^2$ and $p^5$,
while $L_2$ consists of the $y$-axis and contains points $p^3$ and $p^4$.
\end{exa}
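The example can be verified directly; the following sketch (Python/NumPy, added for illustration) checks that $D=2(E-I)+2\Delta$ has embedding dimension $3$ and that the stated orthonormal representation realizes $D$ on the unit sphere:

```python
import numpy as np

Delta = np.zeros((5, 5))
Delta[0, 1] = Delta[1, 0] = 1.0     # edge {1,2}
Delta[2, 3] = Delta[3, 2] = 1.0     # edge {3,4}
D = 2 * (np.ones((5, 5)) - np.eye(5)) + 2 * Delta
B = np.eye(5) - Delta
assert np.linalg.matrix_rank(B) == 3          # embedding dimension 3

# the orthonormal representation p1=e1, p2=-e1, p3=e2, p4=-e2, p5=e3
P = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1]], float)
G = P @ P.T
sqd = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
assert np.allclose(sqd, D)                    # unit vectors realize D exactly
```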
\noindent{\bf \large Acknowledgements} I would like to thank Marton Nasz\'{o}di for bringing to my attention,
after this note was first posted on the arXiv,
the reference [10], where the Perron-Frobenius theorem is used to prove Rankin's theorem.
\bibliographystyle{plain}
\section{Introduction}
In the Skyrme model of nuclear physics, both pions and nucleons are
represented by a single scalar SU(2) group valued field, $U(x)$.
Pions occur as field quanta, while baryons are instead represented as
topological solitons. The classical Skyrme theory, with a simple
quantisation of the spin and isospin collective modes, provides a
description of nucleons and the $\Delta$ resonance in modest agreement
with experiment \cite{ANW,AN}.
Applying the Skyrme model to larger nuclei and to nuclear matter is an
even more interesting proposition, since with no additional free
parameters one could compare the theory with the binding
energies and gamma ray spectra of all nuclei. There has been
progress in understanding the structure of Skyrme multisolitons
\cite{Battye}, \cite{Braaten}, but it is clear that unless
quantum fluctuations about the static solutions are included, there is
little chance of success. For real nuclei the relative kinetic
energies of the nucleons in the ground state are large, so a
quantisation at least of a number of degrees of freedom equal to the
number of nucleon coordinates is essential. Note that the simple
collective coordinate quantisations of spin and isospin \cite{ANW,AN}
include effects of order $\hbar^2$, while ignoring
effects of order $\hbar$.
Recently there has been some progress in quantisation, based upon
Manton's notion of representing low energy solitonic excitations as
motion on a finite dimensional space of moduli. A study of
the deuteron by Leese et. al. \cite{Leese} using an instanton
approximation for the field configurations, gave encouraging agreement
with experimental properties, but only included two of the four
expected vibrational modes of the deuteron. Walet \cite{Walet}
extended this treatment to estimate all vibrational frequencies for
the deuteron and triton, again using the instanton approximation.
In this Letter we follow a different track. We directly compute the
low energy normal modes, finding their frequencies and the
representations they lie in under the static soliton's symmetry group.
The frequencies and representations provide a coordinate-independent
description of the configuration space around the static solution; in
the harmonic approximation they determine the quantum vibrational
energy levels. Our results provide new insight to the moduli space
approach, since the representations for the lowest frequency modes
turn out to be just those expected from a recently understood
approximate correspondence between
Skyrmions and BPS monopoles \cite{Manpri}. Furthermore, the success of
these calculations allows one to contemplate going beyond the moduli
space approximation and performing a full semiclassical quantisation
of the field theory. This is an attractive goal, since the Skyrme theory
could then be incorporated in the framework of chiral effective
Lagrangians \cite{Gasser}, allowing a unified treatment of mesons,
baryons and higher nuclei.
\section{Method}
Since the SU(2) manifold is a 3-sphere, we represent it
in terms of a scalar field $\phi\
\in\ {\bf R}^4$, with $\phi^a\phi^a = 1$. In terms of this field, the
Skyrme Lagrangian density is
\begin{eqnarray}
&{\cal L} = {1\over 2} \partial_\mu\phi\cdot\partial^\mu\phi+
\omega^2_\pi\phi^1 + \lambda(\phi\cdot \phi - 1) + {}
&\nonumber\\
& {1\over 4}\bigl\{\(\partial_\mu\phi\cdot \partial_\nu\phi\)
\(\partial^\mu\phi\cdot \partial^\nu\phi\)
- \(\partial_\mu\phi\cdot\partial^\mu\phi\)\(\partial_\nu\phi\cdot\partial^\nu\phi\)\bigr\}&
\labeq{lagrangian}\end{eqnarray}
with $\lambda$ a Lagrange multiplier field.
Here, length and time are in units
of ${2\over e F_\pi}$, energy in units of ${F_\pi\over 2 e}$.
In these rescaled units, the only
remaining parameter is $\omega_\pi = {2 m_\pi \over e F_\pi}$, the
oscillation frequency for the homogeneous pion field.
For the most part we have adopted the `standard'
value \cite{AN} of $\omega_\pi =
.526$, although we have also performed calculations at
twice this value.
The mixed space and time
derivative terms in the Skyrme Lagrangian make
numerical solution in general difficult. Isolating the time derivative terms
one has
\begin{equation}
{\cal L} = {1\over 2}\bigdot{\phi^a} K^{ab}(\partial_i\phi)\bigdot{\phi^b} -
V(\phi,\partial_i\phi)
+ \lambda(\phi \cdot \phi - 1)\ ,
\labeq{condensed}
\end{equation}
where $K^{ab}= \delta^{ab} (1+(\partial_i\phi)^2) - \partial_i\phi^a \partial_i\phi^b$
is a local inertia matrix, and
$V(\phi,\partial_i\phi)$ is the potential. In general, $K^{ab}$ is time dependent,
but for small perturbations around a static solution $\phi_{\rm st}({\mbox{\boldmath $x$}})$
we write
$\phi(\mbox{\boldmath $x$},t) =
\phi_{\rm st}(\mbox{\boldmath $x$}) + \epsilon (\mbox{\boldmath $x$},t)$, $\epsilon \ll 1$
and to second order
in $\epsilon$ the Lagrangian is
\begin{eqnarray}
{\cal L} &=& {1 \over 2} \bigdot{\epsilon^a}K^{ab}\(\partial\phi_{\rm st}\)
\bigdot{\epsilon^b} - V(\phi,\partial_i\phi)
+ \lambda(\phi \cdot \phi - 1)
\labeq{continuousapprox}
\end{eqnarray}
This Lagrangian leads to classical field equations:
\begin{equation}
K^{ab}(\partial\phi_{\rm st})\bigddot{\phi^a} = \partial_i\(\partial V\over \partial \phi^b_{\ ,i}\)
- {\partial V \over \partial \phi^b} + \lambda\phi^b \ .
\end{equation}
where the matrix $K^{ab}({\mbox{\boldmath $x$}})$ is
taken to be its value at the static classical solution.
Equation (4) closely approximates the Skyrme equations for fields
near a static classical solution, precisely the desired regime for
studying soliton normal modes.
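As a side check (illustrative Python/NumPy; the gradients below are random stand-ins for $\partial_i\phi_{\rm st}$), the inertia matrix $K^{ab}$ is always symmetric and positive definite with eigenvalues bounded below by $1$, since the subtracted matrix $\sum_i \partial_i\phi\,\partial_i\phi^T$ is PSD with trace $(\partial_i\phi)^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
dphi = rng.normal(size=(3, 4))              # stand-in for d_i phi_st^a, i=1..3, a=1..4
grad2 = np.einsum('ia,ia->', dphi, dphi)    # (d_i phi)^2
K = (1 + grad2) * np.eye(4) - np.einsum('ia,ib->ab', dphi, dphi)
eig = np.linalg.eigvalsh(K)
assert np.allclose(K, K.T)                  # symmetric
assert eig.min() >= 1 - 1e-12               # K >= I, so the kinetic term is well posed
```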
In order to numerically solve the field equations, we discretise the
action, \eqn{continuousapprox}, using a diagonal differencing scheme
for the four spatial derivative terms, achieving a high degree of
locality and second order accuracy in both spatial and time steps. The
numerical code conserves energy and baryon number to within 1 part in
$10^5$ over the course of extremely long (50k timestep) runs.
Periodic boundary conditions are used.
We first create the appropriate minimal
energy static solution by straightforward time evolution from
four-Skyrmion initial conditions.
A simple relaxation procedure sets the field momenta to
zero each time the kinetic energy reaches a maximum. We find
the fields
rapidly converge on the minimum energy configuration.
$K^{ab}$ is set equal to $\delta^{ab}$ in this part of the calculation,
since this does not affect the final static solution.
Next we slightly perturb the fields and
evolve them forward again, but now using the full inertia
matrix $K^{ab}(\partial\phi_{\rm st})$. The evolving field is
\begin{equation}
\phi(\mbox{\boldmath $x$},t) = \phi_{\rm st}(\mbox{{\boldmath $x$}}) +
\sum_{\rm modes} \epsilon_n\delta_n(\mbox{\boldmath $x$}){\rm cos} (\omega_n t)
+ {\bf O}(\epsilon^2) \
\labeq{timebehaviour}
\end{equation}
where the functions
$\delta_n(\mbox{\boldmath $x$})\in {\bf R}^4$, obeying
$\delta_n(\mbox{\boldmath $x$})\cdot\phi_{\rm st}(\mbox{\boldmath $x$})=0$, are the normal
modes, each excited with amplitude $\epsilon_n$. The normal mode frequencies
$\omega_n$ are found by Fourier transforming $\phi(\mbox{\boldmath $x$},t)$
with respect to time
at any point in the box, and plotting the resulting power spectrum.
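As an illustration of this frequency-extraction step, the toy sketch below (Python/NumPy; the signal content, time step and run length are invented, not the actual lattice data) recovers the frequencies of a two-mode signal from the peaks of its power spectrum:

```python
import numpy as np

dt, nsteps = 0.05, 8192
t = dt * np.arange(nsteps)
signal = 0.7 * np.cos(0.367 * t) + 0.3 * np.cos(0.605 * t)  # two "normal modes"

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = 2 * np.pi * np.fft.rfftfreq(nsteps, d=dt)           # angular frequencies

# the two tallest interior peaks recover the mode frequencies
peaks = [i for i in range(1, len(power) - 1)
         if power[i] > power[i - 1] and power[i] > power[i + 1]]
peaks.sort(key=lambda i: power[i], reverse=True)
found = sorted(freqs[i] for i in peaks[:2])
assert np.allclose(found, [0.367, 0.605], atol=2 * np.pi / (nsteps * dt))
```

The frequency resolution is set by the run length, $\Delta\omega = 2\pi/(N\,\Delta t)$, which is why long runs are needed to separate nearby modes.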
The space of perturbations has a useful inner product
\begin{equation}
\langle\delta_1 |\delta_2\rangle = \int_{\rm box} \delta_1^a(\mbox{\boldmath $x$}) K^{ab}(\mbox{\boldmath $x$})\delta_2^b(\mbox{\boldmath $x$})\ {\rm d}^3 \mbox{\boldmath $x$},
\labeq{innerproduct}\end{equation}
which is zero for normal modes $\delta_1$ and $\delta_2$
if $\omega_1 \neq \omega_2$. The inner product allows one to determine
the degeneracies of the normal mode frequencies and
the representations in which the modes transform under the soliton's
symmetry group.
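A minimal sketch of the degeneracy count (Python/NumPy; the lattice size, the local inertia matrices and the modes are random stand-ins, not the actual field data): the degeneracy of a frequency is the rank of the Gram matrix of inner products among the linear combinations of modes obtained from different runs or symmetry copies:

```python
import numpy as np

rng = np.random.default_rng(0)
npts = 200
# a positive-definite stand-in for the local inertia matrix K^{ab}(x)
A = rng.normal(size=(npts, 4, 4))
K = np.eye(4) + 0.01 * np.einsum('xab,xcb->xac', A, A)

def inner(d1, d2):
    """discrete version of <d1|d2> = integral of d1^a K^{ab} d2^b"""
    return np.einsum('xa,xab,xb->', d1, K, d2)

# three independent underlying modes, seen as five linear combinations
# (as produced by different perturbed runs or symmetry copies)
basis = [rng.normal(size=(npts, 4)) for _ in range(3)]
combos = [sum(rng.normal() * b for b in basis) for _ in range(5)]
gram = np.array([[inner(u, v) for v in combos] for u in combos])
degeneracy = np.linalg.matrix_rank(gram, tol=1e-8 * np.abs(gram).max())
assert degeneracy == 3
```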
\section{Results for the B=4 soliton}\vskip-.4em
We have applied this technique to the case of the Skyrme soliton with
baryon number four: the $\alpha$ particle. The $\alpha$ particle
provides an especially simple case for quantisation, because the
ground state possesses zero angular momentum and isospin. The static
soliton has cubic symmetry; its energy and baryon number density
concentrate along the edges of a cube
\cite{Braaten}. The full 48
dimensional cubic group of symmetries $O_h$ (for notation see
\cite{Hamermesh}) is generated by 90 degree and 120
degree rotations, and parity $I$. After
an appropriate global isospin rotation, the action
of these group elements on spatial coordinates and
pion fields ($\phi^a = (\sigma, \vec{\pi})$) is
as follows \cite{LeeseMan}:
\begin{eqnarray}
C_4: \quad (\pi^1,\pi^2,\pi^3)(x,y,z) &\rightarrow& (-\pi^2,-\pi^1,-\pi^3)(-y,x,z)\nonumber\\
C_3: \quad (\pi^1,\pi^2,\pi^3)(x,y,z) &\rightarrow& (\pi^2,\pi^3,\pi^1)(y,z,x)\nonumber\\
I: \quad (\pi^1,\pi^2,\pi^3)(x,y,z) &\rightarrow& (\tilde{\pi}^1,\tilde{\pi}^2
,\tilde{\pi}^3)(-x,-y,-z)\nonumber
\end{eqnarray}
with $\tilde{\pi}^1={1\over 3} (\pi^1-2\pi^2-2\pi^3)$ etc.
From this it is straightforward to check that a homogeneous pion field
falls into the two dimensional representation $E^+$ and the
one dimensional representation $A_2^-$, where superscripts indicate
parity.
\begin{figure}
\centerline{\psfig{file=spectrum.eps,width=3.in}}
\caption{Fourier power spectrum for perturbations around a
B=4 soliton. Frequency
is in Skyrme units, power scale is arbitrary. Note that the frequency of
homogeneous pion oscillations is $\omega_\pi = .526$ in
these units.}
\labfig{spectrum}
\end{figure}
A typical power spectrum of the perturbations is shown in Figure
\fig{spectrum}. Spectra at different sites and for different field
components show the same peaks, but with differing heights. Once the
normal mode frequencies $\omega_n$ are identified, maps of the normal
modes $\delta_n(\mbox{\boldmath $x$})$ may be constructed by performing discrete
Fourier sums on each component of the field as it evolves. For
degenerate modes, each set of perturbed initial conditions gives a
different linear combination $\sum_i \epsilon_i \delta_i(\mbox{\boldmath $x$})$ of
modes with the same frequency. Other linear
combinations
may also be produced by applying the symmetries of the static soliton ${\cal S}_1,
{\cal S}_2, ...$ to a given mode $\delta$. The degeneracy of a given
frequency is found by computing the rank of the matrix of inner products
between the different linear combinations of degenerate modes produced
in these ways. Once the degeneracy is determined, a complete
orthonormal basis of modes with this frequency can be constructed.
The character of any $O_h$ group element can then be
computed as a trace. These characters were within $\pm .001$
of an integer value, and interpretation was unambiguous.
Each peak in Figure \fig{spectrum} marks the frequency of a normal
mode. The lowest peak (at $\omega=.07$) is the rotational zero mode,
shifted to nonzero frequency by finite size effects, effectively through
interactions with image solitons one box length away.
There are also
several peaks corresponding to relatively delocalised modes which we
interpret as pion radiation.
Two (at $\omega =
.545,.587$) correspond to the lowest radiation modes, homogeneous
away from the soliton, whose threefold
degeneracy is split
into $E^++A_2^-$ by the presence of the soliton.
The first radiation mode at nonzero wavenumber is
at $\omega =.908$.
The remaining modes are the true
vibrational excitations of the $\alpha$ particle. Somewhat
fortuitously, the box size was small enough that the lowest
inhomogeneous radiation mode has a frequency above the highest
vibrational mode.
Four widely separated $B=1$ Skyrmions have 24 zero modes, corresponding
to 3 translations and 3 isorotations each.
As they combine to form the
B=4 soliton, 9 of them (3 translations, 3 rotations and 3
isorotations) remain as zero modes of the new system.
One might expect the remaining 15 modes to survive as
low energy vibrational modes. This is one fewer than we find: there is an
additional breather mode.
The vibrational modes distort the $B=4$ soliton as
illustrated in Figure \fig{bigplot}, and explained in the Table. The
modes naturally divide into two sets. The lower nine vibrational
modes consist of deformations which, roughly, involve
incompressible flow of the baryon charge. In contrast,
the higher seven vibrational modes
all have a `breathing' character, in which local baryon charge expands
or contracts to occupy a greater or lesser volume. The breather
itself is simply a rescaling of the size of the soliton, with
consequent change in density. The next mode up involves breathing
motion of a dipole character, and the one above that of a
quadrupole nature.
Remarkably, the vibrational
modes below the breather
fall into representations corresponding to those for
small zero-mode deformations of the BPS 4-monopole solution
\cite{Manpri}. The same
phenomenon occurs in the deuteron
\cite{deutpaper}, and it will be straightforward to check
the $B=3,5,6,7$ solitons using the methods described here.
Qualitative similarities between
Skyrme multisolitons and BPS multimonopoles
and their scattering dynamics have been
noted before \cite{Battye1},\cite{Battye}. Our finding suggests
a connection between the lowest energy Skyrmion vibrational modes
and the multimonopole moduli spaces.
If it holds
for higher nuclei, there will be $4B-7$ such modes.
It would be very interesting to interpret this
number in terms of individual nucleon degrees of freedom (presumably
translations and spin/isospin).
We have investigated the effect of doubling the parameter $\omega_\pi$
on the spectrum of modes. All modes move up in frequency. The
nine lowest modes move up by 15-25 per cent, the breathing modes
by 30-45 per cent,
and the homogeneous pion modes roughly double in frequency.
This has important phenomenological
consequences.
The lowest lying vibrational level for the
real $\alpha$ particle is at 20.1 MeV, whereas for $\omega_\pi = .526$
our lowest lying frequency is at $0.7 \omega_\pi$, or 94 MeV. For
$\omega_\pi= 1.052$ this is improved to 63 MeV. Whilst finite volume
and grid spacing effects are present at a level of a few per cent in
these results, we can safely conclude that a fit between the Skyrme
model and nuclear gamma ray spectra will require parameters different
than those usually used. It is interesting that a study of a single
Skyrmion's breather mode, attempting to identify it with the `Roper
resonance', also found that large values of $\omega_\pi \sim 1.5$ were
required \cite{Breit}. It remains to be seen whether such large
values of $\omega_\pi$ can be accommodated in the theory.
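The conversion behind these numbers can be sketched as follows (assuming a pion mass of about $135\,$MeV: since the homogeneous pion mode sits at $\omega_\pi$ in Skyrme units and at $m_\pi$ in physical units, the ratio $m_\pi/\omega_\pi$ fixes the energy scale):

```python
# converting a mode frequency from Skyrme units to MeV
m_pi = 135.0                      # MeV (assumed pion mass)
omega_pi = 0.526
omega_lowest = 0.367              # lowest vibrational mode, Skyrme units
E = omega_lowest * m_pi / omega_pi
print(round(E), "MeV")            # -> 94 MeV, i.e. about 0.7 * omega_pi
```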
We thank Richard Battye, Guy Moore, Conor Houghton, Paul Sutcliffe
and especially Nick Manton for helpful discussions. We also
acknowledge the Pittsburgh Supercomputing Center grant \#AST9G3P for
CRAY T3D supercomputer time.
\newcommand\sfig[1]{\psfig{file=#1,width=1.135in}}
\begin{figure*}[t]
\begin{tabular}{ccccc}
Static $B=4$ soliton, $\phi_{\rm st}$ & &
$\phi_{\rm st}$ $+$, $-$ $\delta_{.367}^1$ & &
$\phi_{\rm st}$ $+$, $-$ $\delta_{.367}^2$
\\
\sfig{sh.soliton.eps} & &
\hbox{\sfig{sh.soliton+om2.eps}\hfil\sfig{sh.soliton-om2.eps}} & &
\hbox{\sfig{sh.sol-om2p2b.eps}\hfil\sfig{sh.sol+om2p2b.eps}} \\
& \\
$\phi_{\rm st}$ $+$, $-$ $\delta_{.405}$ & &
$\phi_{\rm st}$ $+$, $-$ $\delta_{.419}$ & &
$\phi_{\rm st}$ $+$, $-$ $\delta_{.513}$
\\
\hbox{\sfig{sh.soliton+om3.eps}\hfil\sfig{sh.soliton-om3.eps}} & &
\hbox{\sfig{sh.soliton+om4.eps}\hfil\sfig{sh.soliton-om4.eps}} & &
\hbox{\sfig{sh.soliton+om5.eps}\hfil\sfig{sh.soliton-om5.eps}} \\
& \\
$\phi_{\rm st}$ $+$, $-$ $\delta_{.605}$
& & $\phi_{\rm st}$ $+$, $-$ $\delta_{.655}$ & & $\phi_{\rm st}$ $+$, $-$ $\delta_{.738}$ \\
\hbox{\sfig{sh.soliton+om13.eps}\hfil\sfig{sh.soliton-om13.eps}}& &\hbox{\sfig{sh.sol+om14b.eps}\hfil\sfig{sh.sol-om14b.eps}}& &
\hbox{\sfig{sh.soliton+om15.eps}\hfil\sfig{sh.soliton-om15.eps}} \\
& \\
\end{tabular}
\caption{Contours of constant baryon density for the B=4
soliton, combined with its normal modes, as indexed by their
frequencies in Figure \protect\fig{spectrum}. These modes were studied
in a box of size $8{\times}8{\times}8$, with a grid spacing $\Delta_x
= {1 \over 8}$ in Skyrme units. For comparison, the soliton itself is
a cube roughly $2{\times}2{\times}2$ in these units. For the case $\omega=.367$
two orthogonal modes of the degenerate multiplet are shown,
in other cases a single mode only is shown.}
\labfig{bigplot}
\end{figure*}
\begin{table*}[t]
\begin{tabular}{|lccp{4.5in}|}
\bf Frequency & \bf Degeneracy & \bf Symmetry & \bf Description \\
\hline\hline
.07 & 3 &$F_1^{+}$&
Rotations of the soliton. This is a zero mode broken
by the finite box size. \\ \hline
.367 & 2 &$E^{+}$& Lowest
vibrational modes. One mode, $\delta^1$, alternately
pulls the
$B=4$ cube into two $B=2$ donuts in two perpendicular
directions.
The orthogonal mode, $\delta^2$, pulls it into
four $B=1$ edges one way and
two $B=2$ donuts the other.
\\ \hline
.405 & 1 & $A_2^{-}$ & The corners of the cube make two interlacing
tetrahedra. This mode pulls one tetrahedron out into four $B=1$ corners,
pushing
the other one in. \\ \hline
.419 & 3 &$F_2^{+}$ & Deform two opposite faces of the cube into rhombuses.
\\ \hline
.513 & 3 &$F_2^{-}$& Deform the cube by pulling two opposite edges on one
face, and the two perpendicular edges on the opposite face.
This takes the cube to four $B=1$ edges.
\\ \hline
.545 & 2 &$E^{+}$ & Two of the pion $k=0$ modes.
\\ \hline
.587 & 1 & $A_2^{-}$ & The remaining pion $k=0$ mode, with tetrahedral symmetry. \\ \hline
.605 & 1 & $A_1^{+}$ & The breathing mode, with the full cubic symmetry of the soliton. \\ \hline
.655 & 3 &$F_2^{-}$ & One face of the cube inflates, while the opposite face
deflates. \\ \hline
.738 & 3 &$F_2^{+}$ & One pair of diagonally opposite edges inflates, the parallel pair deflates. \\ \hline
.908 & 3 & & The lowest nonzero ($k=1,0,0$) pion radiative mode.
\end{tabular}
\caption{Description of the Skyrme field $B=4$ normal modes marked in Figure 1.
The notation of Hamermesh \protect\cite{Hamermesh} is used for the
representations of the cubic group $O_h$;
superscripts denote parity.}
\label{modetable}
\end{table*}
\section{Introduction}
The quasinormal modes (QNMs) and quasinormal frequencies (QNFs)
\cite{Regge:1957td,Zerilli:1971wd, Kokkotas:1999bd, Nollert:1999ji, Konoplya:2011qq}
have recently acquired great interest due
to the detection of gravitational waves \cite{Abbott:2016blz}.
Although the detected signal is consistent with Einstein gravity \cite{TheLIGOScientific:2016src}, there are possibilities for alternative theories of gravity due to the large uncertainties in the mass and angular momenta of the ringing black hole \cite{Konoplya:2016pmh}. The QNMs and QNFs give
information about the stability of matter fields that
evolve perturbatively in the exterior region of a black hole without backreacting on the
metric. Also, the QNMs are characterized by a spectrum that is independent of the initial conditions of the perturbation and depends on the black hole parameters, on the probe field parameters, and on the fundamental constants of the system. The QNM infinite discrete spectrum consists of complex frequencies, $\omega=\omega_R+i \omega_I$, in which the real part $\omega_R$ determines the oscillation timescale of the modes, while the imaginary part $\omega_I$ determines their exponential decay timescale (for a review of QNMs see \cite{Kokkotas:1999bd, Berti:2009kk}).
The QNFs have been calculated by means of numerical and analytical techniques; some well known numerical methods are the Mashhoon method, the Chandrasekhar-Detweiler method, the WKB method, the Frobenius method, the method of continued fractions, the Nollert method, and the asymptotic iteration method (AIM) and its improved version, among others. In the case of a massless probe scalar field it was found that for the Schwarzschild and Kerr black hole backgrounds the longest-lived modes are always the ones with lower angular number $\ell$. This is expected in a physical system, because the more energetic modes with high angular number $\ell$ would have faster decay rates. In the case of a massive probe scalar field it was found \cite{Konoplya:2004wg, Konoplya:2006br,Dolan:2007mj, Tattersall:2018nve}, at least for the overtone $n = 0$, that if the scalar field is light, then the longest-lived quasinormal modes are those with a high angular number $\ell$, whereas for a heavy scalar field the longest-lived modes are those with a low angular number $\ell$.
This behaviour can be understood because for the case of massive scalar field even if its mass is small its fluctuations can maintain the quasinormal modes to live longer even if the angular number $\ell$ is large. This anomalous behaviour is depending on whether the mass of the scalar field exceeds a critical value or not. This anomalous decay rate for small mass scale of the scalar field was recently discussed in \cite{Lagos:2020oek}.
Extensive study of QNMs of black holes in
asymptotically flat spacetimes has been performed over the last few decades, mainly due to the potential astrophysical interest. Considering the case when the black hole is
immersed in an expanding universe, the QNMs of black holes in de Sitter (dS) space have been investigated \cite{deSitter_1,deSitter_2}.
The AdS/CFT correspondence \cite{Maldacena:1997re,Aharony:1999ti} stimulated the interest in calculating the QNMs and QNFs of black holes in anti-de Sitter (AdS) spacetimes. It was shown in \cite{Horowitz:1999jd} that this principle leads to a correspondence of the QNMs of the gravity bulk to the decay of perturbations in the dual conformal field theory.
The aim of this work is to study the propagation of scalar fields in the Schwarzschild-dS and Schwarzschild-AdS black hole backgrounds in order to see if there is an anomalous decay rate of quasinormal modes. We carry out this study by using the pseudospectral Chebyshev method \cite{Boyd}
which is an effective method to find high overtone modes \cite{Finazzo:2016psx,Gonzalez:2017shu,Gonzalez:2018xrq,Becar:2019hwk,Aragon:2020qdc}. The gravitational QNMs of the Schwarzschild-de Sitter black hole were studied in \cite{ Mellor:1989ac, Otsuki, Moss:2001ga}. The QNMs for this geometry were calculated in \cite{Zhidenko:2003wq} by using the sixth order WKB formula and the approximation by the P\"oschl--Teller potential. Also, it was shown that all the frequencies have a negative imaginary part, which means that the propagation of a scalar field is stable in this background. The presence of the cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay, and high overtones were studied in Ref. \cite{Konoplya:2004uk}. Also, a novel infinite set of purely imaginary modes was found in \cite{Jansen:2017oag}, which, depending on the black hole mass, may even be the dominant mode.
In the case of a massless scalar field in the background of a Schwarzschild-dS black hole we find two types of QNMs, the complex modes and the purely imaginary ones. These modes behave differently as the cosmological constant changes. For the complex modes, all the frequencies have a negative imaginary part, which means that the propagation of the scalar field is stable in this background; however, the presence of a larger cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay. On the contrary, for the purely imaginary modes we find that an increasing cosmological constant leads to a faster decay, opposite to the behaviour of the complex QNFs.
In the case of a massive scalar field in the background of a Schwarzschild-dS black hole we find that the imaginary part of these frequencies has an anomalous behaviour, i.e., the QNFs either grow or decay when the angular harmonic number $\ell$ increases, depending on whether the mass of the scalar field is smaller or larger than a critical mass. We also find that as the value of the cosmological constant increases, the value of the critical mass also increases. As we will discuss in the following, the mass of the scalar field redefines the cosmological constant to $\Lambda_{eff}$, and at the critical value of the scalar field mass at which the anomalous behaviour appears, $\Lambda_{eff}$ goes to zero. In the case of the Schwarzschild-AdS black hole background we find no anomalous behaviour of the QNMs, i.e., there is a faster decay when the mass of the scalar field increases and when the angular harmonic number $\ell$ decreases. We also show that in the case of a massive scalar field the purely imaginary quasinormal frequencies acquire a real part which depends on the scalar field mass.
The manuscript is organized as follows: In Sec. \ref{QNM}, we study the stability of the scalar field by calculating numerically the QNFs of massless and massive scalar perturbations in the Schwarzschild-dS and Schwarzschild-AdS black hole backgrounds by using the pseudospectral Chebyshev method. We conclude in Sec. \ref{conclusion}.
\section{Scalar perturbations}
\label{QNM}
The Schwarzschild-(dS)AdS black holes
are maximally symmetric solutions of the equations of motion that arise from the action
\begin{equation}
S=\frac{1}{16\pi G}\int d^4x\sqrt{-g}(R-2\Lambda)\,,
\end{equation}
where $G$ is the Newton constant, $R$ is the Ricci scalar and $\Lambda$ the cosmological constant. The Schwarzschild-dS and Schwarzschild-AdS black holes are described by the metric
\begin{equation}
ds^2=f(r)dt^2-\frac{dr^2}{f(r)}-r^2(d\theta^2+\sin^2\theta d\phi^2)\,,
\label{metric}
\end{equation}
where $f(r)=1-\frac{2M}{r}-\frac{\Lambda r^2}{3}$, $M$ is the black hole mass, $\Lambda > 0$ in the metric represents the Schwarzschild-dS black hole, while $\Lambda < 0$ represents the Schwarzschild-AdS black hole. In Fig. \ref{function} we plot the behavior of $f(r)$, where we observe that for Schwarzschild-dS black holes (left figure) the difference between the event horizon $r_H$ and the cosmological horizon $r_{\Lambda}$ decreases when the cosmological constant increases, while for the Schwarzschild-AdS black holes (right figure) there is one event horizon that decreases when the absolute value of the cosmological constant increases.
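The horizon radii quoted in the caption of Fig. \ref{function} can be reproduced directly: for $\Lambda \neq 0$, $f(r)=0$ is equivalent to the cubic $r^3-(3/\Lambda)r+6M/\Lambda=0$, whose positive real roots are the horizons. The sketch below is our own numerical illustration (the function name is ours, not part of the original analysis):

```python
import numpy as np

def horizons(M, Lam):
    """Positive real roots of f(r) = 1 - 2M/r - Lam*r**2/3 = 0.

    Multiplying f(r) = 0 by -3r/Lam (Lam != 0) gives the cubic
    r**3 - (3/Lam)*r + 6*M/Lam = 0, whose positive roots are the
    event horizon and, for Lam > 0, the cosmological horizon.
    """
    roots = np.roots([1.0, 0.0, -3.0 / Lam, 6.0 * M / Lam])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)

# Schwarzschild-dS with M = 1, Lam = 0.02: r_H ~ 2.06, r_Lam ~ 11.09
rH, rLam = horizons(1.0, 0.02)
# Schwarzschild-AdS with M = 1, Lam = -0.02: a single horizon, r_H ~ 1.95
(rAdS,) = horizons(1.0, -0.02)
```

Note how, consistently with the figure, the dS case yields two positive roots while the AdS case yields only one.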
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\textwidth]{function.pdf}
\includegraphics[width=0.4\textwidth]{functionAdS.pdf}
\end{center}
\caption{The behaviour of $f(r)$ with $M=1$, and different values of the cosmological constant. Left figure for Schwarzschild-dS black holes with $\Lambda = 0.02$ ($r_H= 2.06, r_{\Lambda}=11.09$), $\Lambda = 0.04$ ($r_H= 2.13, r_{\Lambda}=7.40$), $\Lambda = 0.09$ ($r_H= 2.43, r_{\Lambda}=4.16$), and $\Lambda = 0.11$ ($r_H= 2.84, r_{\Lambda}=3.19$). Right figure for Schwarzschild-AdS black holes with $\Lambda = -0.02$ ($r_H= 1.95$), $\Lambda = -0.04$ ($r_H= 1.91$), $\Lambda = -0.09$ ($r_H= 1.82$), and $\Lambda = -0.11$ ($r_H= 1.79$).}
\label{function}
\end{figure}
The QNMs of scalar perturbations in the background of the metric (\ref{metric})
are given by the scalar field solution of the Klein-Gordon equation
\begin{equation}
\frac{1}{\sqrt{-g}}\partial _{\mu }\left( \sqrt{-g}g^{\mu \nu }\partial_{\nu } \varphi \right) =-m^{2}\varphi \,, \label{KGNM}
\end{equation}%
with suitable boundary conditions for a black hole geometry. In the above expression $m$ is the mass
of the scalar field $\varphi $. Now, by means of the following ansatz
\begin{equation}
\varphi =e^{-i\omega t} R(r) Y(\Omega) \,,\label{wave}
\end{equation}%
the Klein-Gordon equation reduces to
\begin{equation}
\frac{1}{r^2}\frac{d}{dr}\left(r^2 f(r)\frac{dR}{dr}\right)+\left(\frac{\omega^2}{f(r)}+\frac{\kappa}{r^2}-m^{2}\right) R(r)=0\,, \label{radial}
\end{equation}%
where we defined $\kappa=-\ell (\ell+1)$, with $\ell=0,1,2,...$, which represents the eigenvalue of the Laplacian on the two-sphere and $\ell$ is the multipole number.
Now, defining $R(r)=\frac{F(r)}{r}$
and by using the tortoise coordinate $r^*$ given by
$dr^*=\frac{dr}{f(r)}$,
the Klein-Gordon equation can be written as a one-dimensional Schr\"{o}dinger-like equation
\begin{equation}\label{ggg}
\frac{d^{2}F(r^*)}{dr^{*2}}-V_{eff}(r)F(r^*)=-\omega^{2}F(r^*)\,,
\end{equation}
with an effective potential $V_{eff}(r)$, which, written parametrically in terms of the tortoise coordinate as $V_{eff}(r^*)$, is given by
\begin{equation}\label{pot}
V_{eff}(r)=-\frac{f(r)}{r^2} \left(\kappa - m^2 r^2-f^\prime(r)r\right)~.
\end{equation}
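The positivity of the barrier between the horizons, discussed below for the figures, is easy to verify numerically from Eq. (\ref{pot}). The following sketch is our own illustration (function names are ours) for $\Lambda=0.04$, $\ell=1$, $m=0$:

```python
import numpy as np

M, Lam = 1.0, 0.04

def f(r):
    # metric function f(r) = 1 - 2M/r - Lam r^2/3
    return 1.0 - 2.0 * M / r - Lam * r**2 / 3.0

def fp(r):
    # f'(r)
    return 2.0 * M / r**2 - 2.0 * Lam * r / 3.0

def V_eff(r, ell, m):
    # V_eff = -(f/r^2) (kappa - m^2 r^2 - f' r), with kappa = -ell(ell+1)
    kappa = -ell * (ell + 1)
    return -f(r) / r**2 * (kappa - m**2 * r**2 - fp(r) * r)

# sample strictly between the horizons r_H ~ 2.13 and r_Lam ~ 7.40
r = np.linspace(2.2, 7.3, 400)
barrier_is_positive = np.all(V_eff(r, ell=1, m=0.0) > 0)
```

As expected, the potential vanishes at the horizons (where $f(r)=0$) and is positive in between for this choice of parameters.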
In Fig. \ref{Potential1} we plot the effective potential for massless scalar fields in the background of Schwarzschild-dS black holes and in Fig. \ref{Potential11} we plot a small zone for $\Lambda=0.11$, where we can observe that the effective potential is positive in the zone between the event horizon and the cosmological horizon.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.42\textwidth]{VE3.pdf}
\includegraphics[width=0.42\textwidth]{VE4.pdf}
\end{center}
\caption{The behaviour of the effective potential $V_{eff}$ for massless scalar fields as a function of $r$ for different values of the parameter $\ell$ with $M=1$. Left figure for $\Lambda=0.04$ and right figure for $\Lambda=0.11$.} \label{Potential1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.42\textwidth]{VE11.pdf}
\end{center}
\caption{The behaviour of the effective potential $V_{eff}$ for massless scalar fields as a function of $r$ for different values of the parameter $\ell$ with $M=1$, and $\Lambda=0.11$, given in a small zone.} \label{Potential11}
\end{figure}
Also, in Fig. \ref{Potential2} we plot the effective potential for massive scalar fields and in Fig. \ref{Potential22} we plot a small zone for $\Lambda=0.11$, where we can observe that the effective potential is positive between the horizons.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.42\textwidth]{VE1.pdf}
\includegraphics[width=0.42\textwidth]{VE2.pdf}
\end{center}
\caption{The behaviour of the effective potential $V_{eff}$ for massive scalar fields ($m=0.4$) as a function of $r$ for different values of the parameter $\ell$ with $M=1$. Left figure for $\Lambda=0.04$ and right figure for $\Lambda=0.11$.} \label{Potential2}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{VE22.pdf}
\end{center}
\caption{The behaviour of the effective potential $V_{eff}$ for massive scalar fields ($m=0.4$) as a function of $r$ for different values of the parameter $\ell$ with $M=1$, and $\Lambda=0.11$, given in a small zone.} \label{Potential22}
\end{figure}
In Fig. \ref{PotentialAdS}, we plot the effective potential for massless scalar fields in the background of a Schwarzschild-AdS black hole, which is positive outside the event horizon.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{PotentialAdS.pdf}
\end{center}
\caption{The behaviour of the effective potential $V_{eff}$ for massless scalar fields as a function of $r$ for different values of the parameter $\ell=0,10,20,30$ with $M=1$, and $\Lambda=-0.04$.} \label{PotentialAdS}
\end{figure}
\newpage
\subsection{Numerical analysis: Schwarzschild-de Sitter black holes}
Now, in order to compute the QNFs, we will solve numerically the differential equation (\ref{radial}) by using the pseudospectral Chebyshev method, see for instance \cite{Boyd}. First, it is convenient to perform a change of variable in order to limit the values of the radial coordinate to the range $[0,1]$. Thus, we define the change of variable $y=(r-r_H)/(r_{\Lambda}-r_H)$. So, the event horizon is located at $y=0$ and the cosmological horizon at $y=1$.
Also, the radial equation (\ref{radial}) becomes
\begin{equation} \label{rad}
f(y) R''(y) + \left( \frac{2 \left( r_{\Lambda}- r_H \right) f(y)}{r_H+\left( r_{\Lambda}-r_H \right) y } + f'(y) \right) R'(y)+ \left( r_{\Lambda}-r_H \right)^2 \left( \frac{\omega^2}{f(y)}- \frac{ \ell(\ell+1)}{\left( r_H + \left( r_{\Lambda}-r_H \right)y \right)^2} -m^2 \right) R(y)=0\,.
\end{equation}
In the vicinity of the horizon ($y \rightarrow 0$) the function $R(y)$ behaves as
\begin{equation}
R(y)=C_1 e^{-\frac{i \omega \left( r_{\Lambda}-r_H \right)}{f'(0)} \ln{y}}+C_2 e^{\frac{i \omega \left( r_{\Lambda}-r_H \right)}{f'(0)} \ln{y}} \,.
\end{equation}
Here, the first term represents an ingoing wave and the second represents an outgoing wave near the black hole horizon.
So, imposing the requirement of only ingoing waves at the horizon, we fix $C_2=0$. On the other hand, at the cosmological horizon the function $R(y)$ behaves as
\begin{equation}
R(y)= D_1 e^{-\frac{i \omega \left( r_{\Lambda}-r_H \right)}{f'(1)} \ln(1-y)}+D_2 e^{\frac{i \omega \left( r_{\Lambda}-r_H \right)}{f'(1)} \ln(1-y)} \,.
\end{equation}
Here, the first term represents an outgoing wave and the second represents an ingoing wave near the cosmological horizon. So, imposing the requirement of only ingoing waves on the cosmological horizon requires $D_1=0$.
Taking the behaviour of the scalar field at the event and cosmological horizons we define the following ansatz
\begin{equation}
R(y)= e^{-\frac{i \omega \left( r_{\Lambda}-r_H \right)}{f'(0)} \ln{y}} e^{-\frac{i \omega \left( r_{\Lambda}-r_H \right) }{f'(1)} \ln(1-y)} F(y) \,.
\end{equation}
Then, by inserting the above ansatz for $R(y)$ in Eq. (\ref{rad}), it is possible to obtain an equation for the function $F(y)$. The solution for the function $F(y)$ is assumed to be a finite linear combination of the Chebyshev polynomials, and it is inserted in the differential equation for $F(y)$. Also, the interval $[0,1]$ is discretized at the Chebyshev collocation points. Then, the differential equation is evaluated at each collocation point. So, a system of algebraic equations is obtained, and it corresponds to a generalized eigenvalue problem, which is solved numerically to obtain the QNFs ($\omega$). In Table \ref{Table1} we show some fundamental QNFs, in order to check the correctness and accuracy of the numerical technique used. Also, we show the relative error, which is defined by
\begin{equation}
\label{ERe}
\epsilon_{Re(\omega)} =\frac{\mid Re(\omega_1)- Re(\omega_0)\mid}{\mid Re(\omega_0)\mid}\cdot 100\%
\end{equation}
\begin{equation}
\label{EIm}
\epsilon_{Im(\omega)} =\frac{\mid Im(\omega_1)- Im(\omega_0)\mid}{\mid Im(\omega_0)\mid} \cdot 100\%
\end{equation}
where $\omega_1$ corresponds to the result from \cite{Zhidenko:2003wq}, and $\omega_0$ denotes our result. The complex QNFs for this geometry were determined in Ref. \cite{Zhidenko:2003wq} by using the WKB and P\"oschl--Teller methods. We can observe that the error does not exceed $0.37\%$ when we compare our results with the WKB method and $2.198\%$ with the P\"oschl--Teller method. As observed, all the frequencies have a negative imaginary part, which means that the propagation of the scalar field is stable in this background. Also, we observe that the presence of a bigger cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay.
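The pseudospectral collocation machinery used for these results can be illustrated on a toy problem with a known spectrum. The sketch below is our own illustration, not the authors' code: it builds the standard Chebyshev differentiation matrix and recovers the Dirichlet eigenvalues $\lambda_n=(n\pi/2)^2$ of $-u''=\lambda u$ on $[-1,1]$; the QNF computation replaces this toy operator by the eigenvalue problem for $F(y)$ described above.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1],
    following the classic spectral-collocation construction."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # negative-sum trick for the diagonal
    return D, x

# Toy eigenvalue problem: -u'' = lam * u, u(-1) = u(1) = 0,
# exact spectrum lam_n = (n pi / 2)^2, n = 1, 2, ...
N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]          # second derivative with Dirichlet conditions
lam = np.sort(np.linalg.eigvals(-D2).real)
```

The lowest eigenvalues converge spectrally fast, which is why relatively small collocation grids suffice for accurate QNFs.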
\begin{table}[ht]
\caption{Complex quasinormal frequencies ($n=0$) for massless scalar fields with $\ell=1$ in the background of Schwarzschild-de Sitter black holes. The values of $\omega_{WKB}$ and $\omega_{P-T}$ appear in Ref. \cite{Zhidenko:2003wq}.
}
\label{Table1}\centering
\begin{tabular}{ | c | c | c | c | c | c |c | c |}
\hline
$\Lambda$ & $\omega_{WKB}$ & $\omega_{P-T}$ & $\omega $ & $\epsilon_{Re(\omega)} (WKB) $ & $\epsilon_ {Im(\omega)} (WKB) $ & $\epsilon_{Re(\omega)} (P-T)$ & $\epsilon_{Im(\omega)} (P-T)$\\ \hline
$0.02$ & $0.2603 - 0.0911i$ & $ 0.263 - 0.093i$ &
$0.2603 - 0.0910 i$ & $0.000$ & $0.110$ & $1.037$ & $2.198$\\
$0.04$ & $0.2247 - 0.0821i$ & $0.226-0.083i$ & $0.2247 - 0.0821 i$ & $0.045$ & $0.122$ & $0.623$ & $1.220 $\\
$0.06$ & $0.1854 - 0.0701i$ & $0.187 - 0.071i$ & $0.1854 - 0.0701 i$ & $0.054$ & $0.143$ & $0.917$ & $1.429$\\
$0.08$ & $0.1404 - 0.0542i$ & $0.141 - 0.055i$ & $0.1404 - 0.0540 i$ & $0.000$ & $0.370$ & $0.427$ & $1.852$\\
$0.09$ & $0.11392 - 0.04397i$ & $0.1147 - 0.0443i$ & $0.11400 - 0.04388 i$ & $0.070$ & $0.205$ & $0.614$ & $0.957$\\
$0.10$ & $0.08156 - 0.03121i$ & $0.0819 - 0.0315i$ & $0.08159 - 0.03123 i$ & $0.037$ & $0.064$ & $0.380 $ & $0.865$\\
$0.11$ & $0.02549 - 0.00965i$ & $0.02550 - 0.00967i$ & $0.02549 - 0.00965 i$ & $0.000$ & $0.000$ & $0.039$ & $0.207$\\ \hline
\end{tabular}%
\end{table}
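The error columns of Table \ref{Table1} follow directly from Eqs. (\ref{ERe}) and (\ref{EIm}) (taking the modulus in the denominator). As a quick check, our own snippet below reproduces the $\Lambda=0.02$ row:

```python
def rel_err_percent(a, b):
    """|a - b| / |b| * 100: relative error of a with respect to the reference b."""
    return abs(a - b) / abs(b) * 100.0

# Lambda = 0.02 row: omega_WKB = 0.2603 - 0.0911i, omega_PT = 0.263 - 0.093i,
# versus our omega_0 = 0.2603 - 0.0910i
eps_re = rel_err_percent(0.2603, 0.2603)      # 0.000 (WKB, real part)
eps_im = rel_err_percent(-0.0911, -0.0910)    # ~0.110 (WKB, imaginary part)
eps_im_pt = rel_err_percent(-0.093, -0.0910)  # ~2.198 (P-T, imaginary part)
```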
On the other hand, in \cite{Jansen:2017oag} another branch of purely imaginary QNFs was found for this geometry by using the pseudospectral Chebyshev method, with the metric expressed in Eddington-Finkelstein coordinates. Here, we consider the coordinates given by the metric, Eq. (\ref{metric}), along with the change of variables $y=(r-r_H)/(r_{\Lambda}-r_H)$. In what follows, we will show that these quasinormal frequencies acquire a real part which depends on the scalar field mass. Now, in order to check the correctness and accuracy of the numerical techniques used, we show the purely imaginary fundamental QNFs in Table \ref{Table2}, where the relative error vanishes.
As observed, all the frequencies are negative, which means that the propagation of the scalar field is stable in this background. However, an increase of the cosmological constant leads to a faster decay, contrary to the behaviour of the complex QNFs. Also, it was shown that, depending on the black hole mass, these modes may even be the dominant ones \cite{Jansen:2017oag}.
\begin{table}[ht]
\caption{Purely imaginary quasinormal frequencies ($n=0$) for massless scalar fields with $\ell=1$ in the background of Schwarzschild-de Sitter black holes with $M=1$. The values of
$\omega_{Im}$ appear in Ref. \cite{Jansen:2017oag}.}
\label{Table2}\centering
\begin{tabular}{ | c | c | c |}
\hline
$\Lambda$ & $ \omega_{Im}$ & $\omega$ \\ \hline
$0.02$ &
$-0.081565496i$ & $-0.081565496 i$ \\
$0.04$ & $-0.11524810i$ & $-0.11524810 i
$ \\
$0.06$ & $-0.14100253i$ & $ -0.14100253 i
$ \\
$0.08$ & $-0.16268011i$ & $-0.16268011i $ \\
$0.09$ & $-0.17249210i$ & $ -0.17249210i$ \\
$0.10$ & $-0.18177480i$ & $-0.18177480 i $ \\
$0.11$ & $-0.19057630i$ & $-0.19057630 i$ \\\hline
\end{tabular}
\end{table}
Now, in order to show the existence of the anomalous decay rate of quasinormal modes, we plot in Fig. \ref{F1} the behaviour of the complex fundamental QNFs, for different values of the parameter $\ell$ and different values of the mass $m$ of the scalar field. The numerical values are in Appendix \ref{tables}, Table \ref{T1}. It is possible to observe that the imaginary part of these frequencies has an anomalous behaviour, i.e., the QNFs either grow or decay when the angular harmonic number $\ell$ increases, depending on whether the mass of the scalar field is smaller or larger than the critical mass, where $Im(\omega)_{\ell}=Im(\omega)_{\ell+1}$. Also, the behaviour of the real and imaginary parts of the QNFs is smooth, and there is a slower decay of the mode when the mass of the scalar field increases.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.42\textwidth]{PlotsNDn0.pdf}
\includegraphics[width=0.42\textwidth]{AnomalousRealn0.pdf}
\end{center}
\caption{The behaviour of fundamental ($n=0$) $Im(\omega)$ (left panel) and $Re(\omega)$ (right panel) as a function of the scalar field mass $m$ for different values of the parameter $\ell=0,1,2,10,20,30$, with $M=1$, and $\Lambda=0.04$.}
\label{F1}
\end{figure}
In order to show the same anomalous behaviour for other overtone numbers, we plot in Fig. \ref{F11} the imaginary and real parts of the complex QNFs. Note that the critical mass value increases when the overtone number $n$ increases, for $\ell \geq n$. The numerical values are in Appendix \ref{tables}, Table \ref{T3}.
\begin{figure} [h]
\begin{center}
\includegraphics[width=0.42\textwidth]{ABN1.pdf}
\includegraphics[width=0.42\textwidth]{Re2.pdf}
\end{center}
\caption{The behaviour of the quasinormal frequencies ($n=1$) $Im(\omega)$ (left panel) and $Re(\omega)$ (right panel) as a function of the scalar field mass $m$ for different values of the parameter $\ell=1,2,10,20,30 \ge n$, with $M=1$, and $\Lambda=0.04$.}
\label{F11}
\end{figure}
The behaviour of the other branch is shown in Fig. \ref{F2}. We can observe that this branch acquires a real part depending on the scalar field mass, see Table \ref{T2}. Thus, as observed, all the frequencies are negative, which means that the propagation of the scalar field is stable in this background.
Moreover, we can observe a faster decay when the parameter $\ell$ increases, and a faster decay when the scalar field mass increases until the QNFs acquire a real part, after which the decay is stabilized. Also, the real part increases when the scalar field mass increases.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.44\textwidth]{Im2.pdf}
\includegraphics[width=0.44\textwidth]{ReB.pdf}
\end{center}
\caption{The behaviour of fundamental ($n=0$) $Im(\omega)$ (left panel) and $Re(\omega)$ (right panel) as a function of the scalar field mass $m$ for different values of the parameter $\ell=0,1,2$, with $M=1$, and $\Lambda=0.04$.}
\label{F2}
\end{figure}
However, there are other behaviours when we consider higher overtone numbers, see Fig. \ref{F9}, where we plot the behaviour of the imaginary part of the QNFs as a function of the scalar field mass for different overtone numbers and $\ell=0$, while Fig. \ref{Real11} shows the real part. In these figures we can recognize two branches: for zero mass, a branch of complex QNFs given by the black curves, and a purely imaginary branch given by the blue dashed curves. Also, we observe the behaviour of the branches when the scalar field mass increases. We see that the black curves remain complex for all the values of $m$ considered. Interestingly, the purely imaginary QNFs for zero mass can combine, yielding complex QNFs, given by the continuous coloured curves, when the mass increases, and then they split into purely imaginary QNFs which combine into new complex QNFs. As we have mentioned, it was shown that, depending on the black hole mass, the purely imaginary branch may even be the dominant mode \cite{Jansen:2017oag}. Here, we can observe that, for a fixed value of the black hole mass, the purely imaginary QNFs can be dominant depending on the scalar field mass and the angular harmonic numbers. Note that for the fundamental QNFs $n=0$ and a small scalar field mass the dominant branch is the purely imaginary one; however, for a scalar field mass $m \geq 0.15$ the dominant branch is the complex one.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{Im.pdf}
\end{center}
\caption{The behaviour of the imaginary part of the quasinormal frequencies $Im(\omega)$ as a function of the scalar field mass $m$ for different overtone numbers, with $\ell=0$, $M=1$, and $\Lambda=0.04$.}
\label{F9}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{Re.pdf}
\end{center}
\caption{The behaviour of the real part of the quasinormal frequencies $Re(\omega)$ as a function of the scalar field mass $m$ for different overtone numbers, with $\ell=0$, $M=1$, and $\Lambda=0.04$.}
\label{Real11}
\end{figure}
Now, in order to see the influence of the cosmological constant on the critical mass, we plot in Fig. \ref{F100} the behaviour of the complex fundamental QNFs, for different values of the parameter $\ell$ and different values of the mass $m$ of the scalar field, but for a cosmological constant greater than in the previous case, $\Lambda=0.11$. The numerical values are in Appendix \ref{Tables2}, Table \ref{T6}. We can observe that for a greater cosmological constant the value of the critical mass increases.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{L011N0.pdf}\\
\includegraphics[width=0.5\textwidth]{ReL011N0.pdf}
\end{center}
\caption{The behaviour of fundamental ($n=0$) $Im(\omega)$ (top panel) and $Re(\omega)$ (bottom panel) as a function of the scalar field mass $m$ for different values of the parameters $\ell=0,1,2,10,20,30$, with $M=1$, and $\Lambda=0.11$.}
\label{F100}
\end{figure}
\subsection{Numerical analysis: Schwarzschild-AdS black holes}
It is convenient to compare our result with those of \cite{Horowitz:1999jd}, so we express the mass $M$ as a function of the event horizon $r_H$
\begin{equation}
M=-\frac{\Lambda r_H^3}{6}+ \frac{r_H}{2}\,,
\end{equation}
where the cosmological constant is taken as $\Lambda=-\frac{3}{R^2}$, with $R$ being the AdS radius. Now, under the change of variable $y=1-r_H/r$ the radial equation (\ref{radial}) becomes
\begin{equation} \label{rad}
(1-y)^4 f(y) R''(y) +(1-y)^4 f'(y)R'(y) + \left( \frac{\omega^2 r_H^2}{f(y)}- \ell(\ell+1) (1-y)^2 -m^2 r_H^2 \right) R(y)=0\, ,
\end{equation}
where the prime denotes the derivative with respect to $y$. In the new coordinate the event horizon is located at $y=0$ and spatial infinity at $y=1$. In the neighborhood of the horizon ($y \rightarrow 0$) the function $R(y)$ behaves as
\begin{equation}
R(y)=C_1 e^{-\frac{i \omega r_H}{f'(0)} \ln{y}}+C_2 e^{\frac{i \omega r_H}{f'(0)} \ln{y}} \,,
\end{equation}
where the first term represents an ingoing wave and the second represents an outgoing wave near the black hole horizon. Imposing the requirement of only ingoing waves on the horizon, we fix $C_2=0$. Also, at infinity the function $R(y)$ behaves as
\begin{equation}
R(y)= D_1 (1-y)^{\frac{3}{2} + \sqrt{\left(\frac{3}{2} \right)^2 + m^2 R^2}}+ D_2 (1-y)^{\frac{3}{2} - \sqrt{\left(\frac{3}{2} \right)^2 + m^2 R^2}} \,.
\end{equation}
So, imposing that the scalar field vanishes at infinity requires $D_2=0$. Therefore, by considering the behaviour of the scalar field at the event horizon and at infinity, it is possible to define the following ansatz
\begin{equation}
R(y)= e^{-\frac{i \omega r_H}{f'(0)} \ln{y}} (1-y)^{\frac{3}{2} + \sqrt{\left(\frac{3}{2} \right)^2 + m^2 R^2}} F(y) \,.
\end{equation}
Then, by inserting this last expression in Eq. (\ref{rad}) we obtain an equation for the function $F(y)$, which we solve numerically employing the pseudospectral Chebyshev method, as in the previous case. Now, in order to check our results, we can see that for $r_H/R=10$ and massless scalar fields we recover the QNF found in Ref. \cite{Horowitz:1999jd}, see Appendix \ref{C}, Table \ref{T7}; the QNFs were also studied in Ref. \cite{Chan:1996yk}. Also, in order to see if there is an anomalous decay rate of quasinormal modes, we plot in Fig. \ref{F1B} the behaviour of the fundamental QNFs, for different values of the parameter $\ell$ and different values of the mass $m$ of the scalar field. The numerical values are in Appendix \ref{C}, Table \ref{T7}. We can observe that the anomalous behaviour of the QNMs is not present in Schwarzschild-AdS black holes, for the cases considered. Also, they present a faster decay when the scalar field mass increases and when the angular harmonic number $\ell$ decreases. The frequency of the oscillations increases slightly when the scalar field mass increases and also when the angular harmonic number $\ell$ decreases.
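As a sanity check on the boundary behaviour used in the ansatz, the falloff exponents $\frac{3}{2}\pm\sqrt{(3/2)^2+m^2R^2}$ satisfy the familiar AdS$_4$ relation $\Delta(\Delta-3)=m^2R^2$. A quick numerical verification (our own sketch, with a hypothetical function name):

```python
import math

def ads_exponents(m, R):
    """Falloff exponents Delta_pm = 3/2 +/- sqrt(9/4 + m^2 R^2) of the two
    independent solutions (1 - y)^Delta at the AdS boundary (y -> 1)."""
    s = math.sqrt(2.25 + (m * R) ** 2)
    return 1.5 + s, 1.5 - s

dp, dm = ads_exponents(0.0, 1.0)   # massless case: exponents 3 and 0
d1, d2 = ads_exponents(0.4, 1.0)   # massive case; both solve Delta(Delta-3) = m^2 R^2
```

For $m=0$ the non-normalizable mode is constant at the boundary, which is why the condition $D_2=0$ is needed even in the massless case.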
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{AdSIm.pdf}
\includegraphics[width=0.45\textwidth]{AdSRe.pdf}
\end{center}
\caption{The behaviour of fundamental ($n=0$) $-Im(\omega)R$ (left figure) and $Re(\omega)$ (right figure) as a function of the scalar field mass $m$ for different values of the parameter $\ell=0,1,2,10,20,30$, with $r_H/R=10$.}
\label{F1B}
\end{figure}
\section{Conclusions}
\label{conclusion}
In this work, we considered the Schwarzschild-dS and Schwarzschild-AdS black holes as backgrounds and studied the propagation of massive scalar fields via their QNFs, computed with the pseudospectral Chebyshev method, in order to determine whether there is an anomalous decay behaviour of the QNMs, as was observed in the asymptotically flat Schwarzschild black hole background.
The QNMs in the background of a Schwarzschild-dS black hole are characterized by one branch of complex QNFs and another branch consisting of purely imaginary QNFs. The purely imaginary QNFs arise for small scalar field mass and eventually this branch acquires a real part; it is worth mentioning that, to our knowledge, this is the first time that this behaviour has been reported. All the frequencies have a negative imaginary part, which means that the propagation of the scalar field is stable in this background. For the complex branch, the presence of the cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay \cite{Zhidenko:2003wq}. We showed that for the fundamental QNFs there is a slower decay rate when the mass of the scalar field increases, for a fixed angular harmonic number $\ell$.
Furthermore, we showed the existence of an anomalous decay rate of QNMs, i.e., the absolute values of the imaginary parts of the QNFs decay when the angular harmonic numbers increase if the mass of the scalar field is smaller than a critical mass; on the contrary, they grow when the angular harmonic numbers increase if the mass of the scalar field is larger than the critical mass. The critical mass also increases with the overtone number $n$, for $\ell \geq n$.
We also showed that the effect of the cosmological constant is to shift the values of the critical masses, i.e., when the cosmological constant increases the value of the critical mass also increases. It is worth mentioning here that the critical mass is an interesting quantity, because it shows that it is possible to have a scalar field with a critical mass whose decay rate does not depend on the angular harmonic number $\ell$; however, its frequency of oscillation does depend on $\ell$, increasing when $\ell$ increases.
It is interesting to note that, although the spacetime is asymptotically dS and the boundary conditions are imposed at the event horizon and at the cosmological horizon, the effective potential tends to $-\Lambda(3m^2-2\Lambda)r^2/9$ at infinity for $\ell=0$, and it can diverge positively, diverge negatively, or vanish; specifically, it vanishes for $m_c=\sqrt{2\Lambda/3}$. So, for $\Lambda=0.04$ and a scalar field with mass $m=m_c\approx 0.163$, and also for $\Lambda=0.11$ and a scalar field with mass $m=m_c\approx 0.278$, the effective potential vanishes at infinity. These are the critical masses we have considered with $n=0$. So, for a scalar field with critical mass and $\ell=0$ the effective potential at infinity is not divergent. Also, for $\ell\neq 0$ and $m=m_c$, the effective potential tends to a negative constant at infinity, given by $-\ell (\ell+1) \Lambda /3$, and the scalar field does not probe such a divergence.
Another simple way to understand the appearance of the anomalous behaviour of the QNMs in the Schwarzschild-dS black hole background is to define an effective cosmological constant through the relation $\Lambda_{eff}=\Lambda(3m^2-2\Lambda)$. Then, as we already discussed, the critical mass of the scalar field for the anomalous behaviour and the corresponding value of the cosmological constant satisfy the relation $m_c=\sqrt{2\Lambda/3}$. But then the effective cosmological constant becomes $\Lambda_{eff}=0$, leading to the anomalous behaviour of the QNMs at that critical mass. The physical picture behind this is that there is a specific critical scale of the scalar field that cancels out the scale introduced by the cosmological constant.
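This cancellation is elementary to verify. The snippet below (our own illustration; function names are ours) confirms that $\Lambda_{eff}=\Lambda(3m^2-2\Lambda)$ vanishes exactly at $m_c=\sqrt{2\Lambda/3}$, which for $\Lambda=0.04$ reproduces the quoted critical mass $m_c\approx 0.163$:

```python
import math

def critical_mass(Lam):
    """Scalar-field mass at which Lambda_eff = Lam*(3 m^2 - 2 Lam) vanishes."""
    return math.sqrt(2.0 * Lam / 3.0)

def lam_eff(m, Lam):
    # effective cosmological constant induced by the scalar-field mass
    return Lam * (3.0 * m**2 - 2.0 * Lam)

mc = critical_mass(0.04)   # ~0.163 for Lambda = 0.04
```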
For the other branch, it was shown that, depending on the black hole mass, it may even be the dominant mode \cite{Jansen:2017oag}. We found that for a fixed value of the black hole mass, the purely imaginary QNFs can be dominant depending on the scalar field mass and the angular harmonic numbers. Also, a faster decay is observed when the parameter $\ell$ increases, as well as when the scalar field mass increases, until the QNFs acquire a real part, after which the decay is stabilized and the frequency of the oscillations increases with the scalar field mass. Furthermore, we showed that this branch does not present an anomalous behaviour of the QNFs, for the range of scalar field masses analyzed.
In the case of a Schwarzschild-AdS black hole background we have shown that the QNMs of massive scalar fields do not present an anomalous behaviour. In this case, and according to the previous analysis, the effective potential at infinity always diverges, due to the fact that the cosmological constant is negative, and consequently the scalar field can probe the divergence of the effective potential at infinity.
Also, the modes present a faster decay when the scalar field mass increases and when the angular harmonic number $\ell$ decreases, and the frequency of the oscillations increases slightly when the scalar field mass increases. Therefore, the presence of the anomalous behaviour depends on the curvature at infinity, i.e., such anomalous behaviour is possible in asymptotically flat and in asymptotically dS spacetimes, and it is not present in asymptotically AdS spacetimes. The anomalous behaviour could also depend on whether the scalar field probes the divergence of the effective potential at infinity, even though the boundary conditions are imposed at a different point. It is worth mentioning that for a Schwarzschild black hole the effective potential tends to $m^2$ at infinity, so the scalar field does not probe any divergence, and consequently the anomalous behaviour of the QNMs can be observed.
It would be interesting to extend this work to the case in which the background black hole is charged, and to study the behaviour of the QNMs in this background and in different asymptotic spacetimes. If, for example, there is an anomalous QNM decay for massive scalar perturbations in the background of a Reissner-Nordstr\"om black hole, then this behaviour of the QNMs may have important consequences for the Strong Cosmic Censorship \cite{Cardoso:2017soq,Destounis:2019omd}.
\acknowledgments
P. A. G. acknowledges the hospitality of the Universidad de La Serena where part of this work was undertaken. Y.V. acknowledges support by the Direcci\'on de Investigaci\'on y Desarrollo de la Universidad de La Serena, Grant No. PR18142.
AUTORI:G. Cariolaro, {\em Member IEEE}, T. Erseghe, N. Laurenti, G. Pierobon
TITOLO:Exact Spectral Analysis of Single-$h$ and Multi-$h$ CPM Signals through PAM decomposition and Matrix Series Evaluation
\subsection{The basic decomposition for multi--$h$ CPM}
The phase of the multi--$h$ CPM signal \e(IN4) in the interval $\C(I)_n=[nT,nT+T)$ can be written in the form
$$
\alpha(t) = 2\pi \left[
\sum_{m=-\infty}^{n-L} a_m h_m \,\frac{1}{2}
+ \sum_{m=n-L+1}^n a_m h_m \,\varphi(t-mT)
\right]
\,.
\e(TT2)
$$
Then
$$
v(t) = \sigma_{n-L} \prod_{i=0}^{L-1} e^{ja_{n-i} h_{n-i} \varphi(t-(n-i)T)}
\vq t \in \C(I)_n
\e(TT2bis)
$$
where
$$
\sigma_{n-L}=\prod_{m=-\infty}^{n-L} J_m^{a_m}
\q \hb{with} \q J_m=e^{j\pi h_m}
\e(K4)
$$
plays the role of a {\em state}, which is renewed according to the relation
$$
\sigma_{m+1}=\sigma_m J_{m+1}^{a_{m+1}}\;.
\e(K6)
$$
Comparison with the single--$h$ case \bibl(Cariolaro06) shows that the BD takes the form
$$
v(t)=\sum_n \BB(q)_n(t-nT)\,\BB(b)_n
\e(K8)
$$
where
$$
\BB(b)_n
= \sigma_{n-L} \,\Big[\delta_{\BB(\scriptstyle a)_n,\BB(\scriptstyle\alpha)}\Big]
_{\BB(\scriptstyle\alpha)\in\C(A)_M^L}
\z(a)
$$
is a {\em word sequence} (column vectors of length $N=M^L$), and
$$
\BB(q)_n(t)
= \eta_T(t) \left[ \prod_{i=0}^{L-1}
\exp \left( j2\pi h_{n-i} \alpha_i \varphi(t+iT) \right) \right]
_{\BB(\scriptstyle\alpha)\in\C(A)_M^L}
\z(b)
$$
constitutes an interpolating filter bank (row vectors of length $N$). Here, $\eta_T(t)$ is an indicator function active over the interval $[0,T)$, $\delta$ is a vector generalization of the Kronecker delta function, $\BB(a)_n=(a_{n-L+1},\ldots,a_n)$ collects the input data over a window of length $L$, $\BB(\alpha)=(\alpha_0,\alpha_1,\ldots,\alpha_{L-1}) \in \C(A)_M^L$, and $N$ is the cardinality of $\C(A)_M^L$.
The difference with respect to the single--$h$ CPM is in the renewal law \e(K4), which is periodically time--dependent (PTI), and in the pulses $\BB(q)_n(t)$, which are PTI with respect to the time $n$. This is shown in \Figg(MH30)0, where the non--linear device produces the symbols $\BB(b)_n$ from the input data and the bank of interpolators produces the PAM waveforms.\Figg(MH30)2
We now discuss the nature of the ``state'' $\sigma_m$, whose renewal law is given by \e(K6). We suppose that {\co all the modulation indexes are rational} and we write them in the form $h_i=r_i/p$, where $p$ is the least common denominator of the fractions. As an example \bibl(Perrins06), if $h_0=\fract{1}{4}=\fract{4}{16}$, $h_1=\fract{5}{16}$, $h_2=\fract{8}{16}$, $h_3=\fract{5}{8}=\fract{10}{16}$, the integer $p$ is $16$. Then we can see that
$$
\sigma_m \in \left\{ 1,W_{2p},W_{2p}^2,\ldots,W_{2p}^{2p-1}\right\}
\eqq \C(W)_{2p}
\;,\; W_{2p}=e^{j2\pi/(2p)}\;.
\e(K10)
$$
It will be convenient to replace $\sigma_m$ by an equivalent state $z_m$ such that
$$
\sigma_m=W_{2p}^{z_m}
\e(K11)
$$
which takes the values in the integer set
$$
z_m \in \{ 0,1,\ldots,2p-1 \} \eqq \C(N)_{2p}\;.
\e(K12)
$$
The renewal law for $z_m$ is obtained by rewriting \e(K6) using \e(K11), namely
$$
W_{2p}^{z_{m+1}}
= W_{2p}^{z_m} \, e^{j\pi r_{m+1} a_{m+1}/p} = W_{2p}^{z_m+r_{m+1} a_{m+1}}\;.
$$
Hence
$$
z_{m+1} = (z_m+r_{m+1} a_{m+1})_{2p}
\e(K13)
$$
where $(\;)_{2p}$ denotes modulo $2p$.
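The renewal law \e(K13) is straightforward to simulate. The following Python sketch (illustrative: it uses the index set $r=(4,5,8,10)$, $p=16$ of the example above together with an arbitrary $M=4$ input sequence) checks that the state always remains in $\C(N)_{2p}$:

```python
# Sketch of the phase-state renewal law z_{m+1} = (z_m + r_{m+1} a_{m+1}) mod 2p
# for the modulation-index set h = (4/16, 5/16, 8/16, 10/16) of the example,
# i.e. r = (4, 5, 8, 10) and p = 16 (illustrative values).
r = [4, 5, 8, 10]                    # normalized indexes r_n, period N_h = 4
p = 16
data = [1, -1, 3, -3, 1, 1, -1, 3]   # arbitrary M = 4 input symbols a_{m+1}

z = 0                                # initial state z_0
states = [z]
for m, a in enumerate(data):         # a plays the role of a_{m+1}
    z = (z + r[(m + 1) % len(r)] * a) % (2 * p)   # r_{m+1} is periodic in m
    states.append(z)

print(states)                        # e.g. [0, 5, 29, 27, 15, 20, 28, 18, 30]
assert all(0 <= s < 2 * p for s in states)        # state stays in N_{2p}
```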
In terms of $z_m$ the word sequence becomes
$$
\eqalign{
\BB(b)_n
&= W_{2p}^{z_{n-L}} \Big[ \delta_{\BB(\scriptstyle a)_n,\BB(\scriptstyle\alpha)} \Big]
_{\BB(\scriptstyle\alpha)\in\C(A)_M^L}\cr
&= W_{2p}^{z_{n-L}} \left[ \delta_{a_{n-L+1},\alpha_0} \cdots
\delta_{a_n,\alpha_{L-1}} \right]_{\BB(\scriptstyle\alpha)\in\C(A)_M^L}\;.
}
\e(K15pre)
$$
Now, the vector $\BB(b)_n$ of length $N=M^L$ can be conveniently written by the Kronecker product%
\footnote
Given two matrices $\B(A)=||a_{mn}||$ and $\B(B)=||b_{ij}||$
of arbitrary dimensions $M_A\times N_A$ and $M_B\times N_B$,
the {\it Kronecker product} $\B(A)\times \B(B)$ is defined by
$$
\B(A)\times \B(B) = \qmatrix{%
a_{1 1}\, \B(B) &\ldots & a_{\scriptscriptstyle 1 N_A}\, \B(B) \cr\VS{-3}
\vdots & &\vdots \cr\VS{-9}
a_{\scriptscriptstyle M_A 1}\, \B(B) &\ldots &a_{\scriptscriptstyle M_A N_A}\, \B(B)}
$$
and is a matrix of dimensions $(M_A M_B)\times(N_A N_B)$.
The $L$th \emph{Kronecker power} of a matrix $\B(A)$ is defined as
$\B(A)^{\times L} = \B(A) \times \B(A) \times \cdots \times \B(A) \q
\hbox{($L$ factors)}$. The following property
relates the Kronecker product to the ordinary matrix product
(\emph{mixed--product law}) $(\B(A)\times\B(B)) \: (\B(C)\times \B(D)) = (\B(A)\,\B(C)) \times (\B(B)\,\B(D))$
and more generally $(\B(A)_1 \times \cdots \times \B(A)_n)\:
(\B(B)_1 \times \cdots \times \B(B)_n) = (\B(A)_1\,\B(B)_1)
\times \cdots \times (\B(A)_n\,\B(B)_n)
\pu$
If $\B(A)$ and $\B(B)$ are invertible
square matrices, $(\B(A)\times\B(B))^{-1} = \B(A)^{-1} \times \B(B)^{-1} \pu
$
} $\otimes$. Specifically,
\Proposition(A1) Let $\BB(w)_{a_n}=[\delta_{a_n,\alpha}]_{\alpha\in\C(A)_M}$ be the indicator vector of $a_n$ (a column vector of size $M$). Then
$$
\BB(b)_n
= W_{2p}^{z_{n-L}} \BB(w)_{a_{n-L+1}} \otimes \cdots
\otimes \BB(w)_{a_{n-1}} \otimes \BB(w)_{a_n} \;.~\q\Box
\e(K15)
$$
For instance, for $L=1$ and $M=4$ we have
$$
\BB(b)_n
= W_{2p}^{z_{n-1}} \BB(w)_{a_{n}}
$$
where
$$
\eqalign{ &
\BB(w)_{-3}=\qmatrix{1 & 0 & 0 & 0 \cr}^T \vq
\BB(w)_{-1}=\qmatrix{0 & 1 & 0 & 0 \cr}^T \cr &
\BB(w)_{+1}=\qmatrix{0 & 0 & 1 & 0 \cr}^T \vq
\BB(w)_{+3}=\qmatrix{0 & 0 & 0 & 1 \cr}^T\;.
}
$$
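\Proposition(A1) can be checked numerically. The following Python/numpy sketch (illustrative values $L=2$, $M=4$, $p=16$; not part of the derivation) builds $\BB(b)_n$ as in \e(K15) and verifies that it is a length-$M^L$ column vector with a single nonzero entry of unit modulus:

```python
# Numerical check of the Kronecker form (K15) of the word b_n:
#   b_n = W_{2p}^{z_{n-L}} w_{a_{n-L+1}} (x) ... (x) w_{a_n}
# for illustrative values L = 2, M = 4, alphabet {-3,-1,+1,+3}, p = 16.
import numpy as np

alphabet = [-3, -1, 1, 3]
M, L, p = len(alphabet), 2, 16
W2p = np.exp(1j * 2 * np.pi / (2 * p))

def w(a):
    """Indicator (column) vector of the symbol a over the alphabet."""
    v = np.zeros((M, 1))
    v[alphabet.index(a)] = 1.0
    return v

z = 7                                # some phase state z_{n-L}
a_window = (1, -3)                   # (a_{n-L+1}, a_n)
b = W2p**z * np.kron(w(a_window[0]), w(a_window[1]))

assert b.shape == (M**L, 1)          # word length N = M^L
assert np.count_nonzero(b) == 1      # exactly one nonzero entry ...
assert np.isclose(np.abs(b).max(), 1.0)   # ... of unit modulus W_{2p}^z
```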
\subsection{Formulation of the sequential machine}
We now formalize the non--linear device of \fig(MH30) as a PTI SM (see \bibl(Cariolaro06) for the single--$h$ case), that is as a quintuple
$$
\C(M)_{\hbt{CPM}}=(\C(A),\C(B),\C(S),\BB(g),\BB(h))
\e(K13bis)
$$
where $\C(A)$ is the input alphabet, $\C(B)$ is the output alphabet, $\C(S)$ is the state alphabet, $\BB(g)$ is the state--update function and $\BB(h)$ is the output function, with
$$
\left\{\eqalign{
&\BB(s)_{n+1} = \BB(g)\Big(\BB(s)_{n},\BB(a)_{n},n\Big)\;,\qquad \BB(a)_{n}\in\C(A)\,,\;\BB(s)_{n},\BB(s)_{n+1}\in\C(S)\cr
&\BB(b)_{n} = \BB(h)\Big(\BB(s)_{n},\BB(a)_{n},n\Big)\;,\qquad\BB(b)_{n}\in\C(B)\,.\cr
}\right.
\e(K13tris)
$$
Note that, in general, the state update function and the output function in \e(K13tris) depend on $n$. When the dependence on $n$ can be dropped we will talk of a stationary update/output function, and when the dependence is periodic on $n$ we will talk of a PTI update/output function.
We begin by making the SM explicit in a specific case, letting $L=3$. The input is simply identified as $\BB(a)_{n}=a_n$, with alphabet $\C(A)=\C(A)_M$. We then choose a valid state $\BB(s)_{n}$, observing that $\sigma_m$ (or $z_m$) alone does not provide sufficient information for determining \e(K15). Hence, by inspection of \e(K15), we let the state be
$$
\BB(s)_n
= \left( s_n^{(0)},s_n^{(1)},s_n^{(2)}\right)
= \left( z_{n-3},a_{n-2},a_{n-1}\right)
\e(K14)
$$
as in the single--$h$ case. Then, from \e(K13), the state update function is expressed as
$$
\cases{
s_{n+1}^{(0)}
= z_{n-2}
= \left( z_{n-3} +r_{n-2} a_{n-2} \right)_{2p}
= \left( s_n^{(0)}+r_{n-2} s_n^{(1)}\right)_{2p} & \cr
s_{n+1}^{(1)}
= a_{n-1}
= s_n^{(2)} & \cr
s_{n+1}^{(2)}
= a_n\;. & \cr
}
\e(K16)
$$
Therefore the state--transition has the form $\BB(s)_{n+1}=\BB(g)(\BB(s)_n,a_n,n)$ whose dependence on $n$ is due to \e(K16), where $r_{n-2}$ depends on $n$. Moreover, the dependence is periodic in $n$ because of the periodicity of $r_n$. From \e(K14) we see that the state set is $\C(S)=\C(N)_{2p} \times \C(A)_M^{2}$. Finally, the output function is defined by \e(K15), which is explicitly
$$
\eqalign{
\BB(b)_n
& = W_{2p}^{z_{n-3}} \BB(w)_{a_{n-2}} \otimes \BB(w)_{a_{n-1}}
\otimes \BB(w)_{a_n} \cr
& = W_{2p}^{s_n^{(0)}} \BB(w)_{s_n^{(1)}} \otimes \BB(w)_{s_n^{(2)}}
\otimes \BB(w)_{a_n} \;.
}
\e(K17)
$$
This relation has the form $\BB(b)_n=\BB(h) \left( \BB(s)_n,a_n \right)$, where $\BB(h)(\cdot)$ is not time dependent.
In the general case we have
\Proposition(P7) In a multi--$h$ CPM with alphabet size $M$, memory $L$ and rational modulation indexes $h_n=r_n/p$, $n=0,1,\ldots,N_h-1$, the states of the SM $\C(M)_{\hbt{CPM}}$ are defined by
$$
\BB(s)_n
= \left( s_n^{(0)},s_n^{(1)},\ldots,s_n^{(L-1)}\right)
= \left( z_{n-L},a_{n-L+1},\ldots,a_{n-1}\right)
\e(TT6)
$$
and the state set is
$$
\C(S)=\C(N)_{2p} \times \C(A)_M^{L-1}\;.
\e(TT8)
$$
The state--transition function $\BB(s)_{n+1}=\BB(g)(\BB(s)_n,a_n,n)$, periodic in $n$ with period $N_h$, is defined by
$$
\cases{
s_{n+1}^{(0)}=\left( s_n^{(0)}+r_{n-L+1} s_n^{(1)}\right)_{2p} & \cr
s_{n+1}^{(1)}=s_n^{(2)} & \cr
\q\vdots & \cr
s_{n+1}^{(L-2)}=s_n^{(L-1)} & \cr
s_{n+1}^{(L-1)}=a_n\;. & \cr
}
\e(K20)
$$
The output function $\BB(b)_n=\BB(h) \left( \BB(s)_n,a_n\right)$ is given by
$$
\BB(b)_n = W_{2p}^{s_n^{(0)}} \BB(w)_{s_n^{(1)}} \otimes \cdots \otimes \BB(w)_{s_n^{(L-1)}} \otimes \BB(w)_{a_n}\;.~\q\Box
\e(TT10)
$$
The complexity of the SM $\C(M)_{\hbt{CPM}}$ is essentially determined by the cardinality of the state set, given by
$$
I=|\C(S)|=2p \, M^{L-1}\;.
\e(TT12)
$$
The input alphabet is $\C(A)=\C(A)_M$. The length of the output words $\BB(b)_n$ is
$$
|\C(A)_M^L|=M^L \eqq N
\e(TT14)
$$
and their structure is
$$
\eqalign{
& W_{2p}^{s_n^{(0)}} \qmatrix{1 & 0 & 0 & \cdots & 0 & 0 \cr}\cr
& W_{2p}^{s_n^{(0)}} \qmatrix{0 & 1 & 0 & \cdots & 0 & 0 \cr}\cr
& \q \vdots \cr
& W_{2p}^{s_n^{(0)}} \qmatrix{0 & 0 & 0 & \cdots & 0 & 1 \cr}\cr}
$$
so that the output alphabet $\C(B)$ is a subset of $\C(W)_{2p}^{N}$.
\subsection{Statistical description for spectral analysis}
The target is the evaluation of the {\em average PSD} $\overline{R}_v(f)$ of the multi--$h$ CPM signal $v(t)$. In this analysis the following random processes are involved: the input data $a_n=a(nT)$, the state sequence $s_n=s(nT)$, the word sequence $\BB(b)_n=\BB(b)(nT)$, and the CPM signal $v(t)$, $t \in \M(R)$.
The assumptions on which we base our analysis are four, namely
\IT 1) The input alphabet is an $M$-ary alphabet $\C(A)_M$, with $M$ even, containing odd symbols.
\IT 2) The input data $\{a_n\}$ are {\em stationary} and {\em statistically independent}, with given probabilities $q_{\alpha} = \P[a_n=\alpha]$, $\alpha \in \C(A)_M$.
\IT 3) The modulation indexes $h_n$ are rational, namely $h_n=r_n/p$, $n=0,1,\ldots,N_h-1$, $p$ being the common denominator of the fractions and $r_n$ being integers.
\IT 4) The modulation index sequence $\{h_n\}$ is PTI of period $N_h$.
\ET
As a consequence of 4), the sequential machine $\C(M)_{\hbt{CPM}}$ of \proposition(P7) and the filter bank of \ee(K8)a are PTI. This makes all the processes $\BB(s)_n$, $\BB(b)_n$, and $v(t)$ cyclostationary (\Figg(MH20)0).\Figg(MH20)2
As we shall see, the cyclostationarity period $T_c$ depends not only on the period $N_h$ of the modulation indexes, but also on their {\co parity}. In fact, the cyclostationarity period is $T_c=N_c T$ with
$$
N_c=\cases{ N_h & if $\sum_{n=0}^{N_h-1} r_n$ is even \cr
2N_h & if $\sum_{n=0}^{N_h-1} r_n$ is odd. \cr} \e(HZ3)
$$
Specifically, for a single--$h$ CPM with even $r_0$ we have $N_c=1$, while for an odd $r_0$ it is $N_c=2$.
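The parity rule \e(HZ3) is a one-line computation; a minimal Python sketch (the sample index sets are illustrative):

```python
# Cyclostationarity period N_c of (HZ3): N_c = N_h if the sum of the
# normalized indexes r_n over one period is even, and 2*N_h otherwise.
def cyclo_period(r):
    N_h = len(r)
    return N_h if sum(r) % 2 == 0 else 2 * N_h

assert cyclo_period([4, 5, 8, 10]) == 8   # sum 27 odd  -> N_c = 2*N_h
assert cyclo_period([4, 8]) == 2          # sum 12 even -> N_c = N_h
assert cyclo_period([2]) == 1             # single-h, r_0 even -> N_c = 1
assert cyclo_period([3]) == 2             # single-h, r_0 odd  -> N_c = 2
```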
The fundamental result for the statistical description of the above random processes is that, as a consequence of 4), the state sequence $\BB(s)_n$ is a {\co non--homogeneous} (i.e., non--stationary) Markov chain. Then, its full statistical specification can be obtained as a straightforward generalization of the results available for homogeneous Markov chains, providing the state absolute probabilities
$$
p_n(\BB(i))=\P[\BB(s)_n=\BB(i)] \vq \BB(i) \in \C(S)
\ee(HZ8)a
$$
and the state transition probabilities
$$
\pi_n(\BB(i),\BB(j))=\P[\BB(s)_{n+1}=\BB(i) | \BB(s)_n=\BB(j)]
\vq \BB(i),\BB(j) \in \C(S)\;.
\z(b)
$$
Both $p_n(\BB(i))$ and $\pi_n(\BB(i),\BB(j))$ are periodic of period $N_c$ in $n$. They are customarily collected in matrix form under the name of, respectively, the state absolute probability vector (APV) $\BB(p)_n$ and the transition probability matrix (TPM) $\BB(\pi)_n$ defined as
$$
\BB(\pi)_n = \Big[\pi_n(\BB(i),\BB(j))\Big]
_{\BB(\scriptstyle i),\BB(\scriptstyle j)\in\C(S)}\vq
\BB(p)_n = \Big[p_n(\BB(i))\Big]
_{\BB(\scriptstyle i)\in\C(S)}\;.
\e(HZ8bis)
$$
\subsection{Evaluation of the transition probability matrix}
For the evaluation of the TPM we can use the same technique of \bibl(CarTron74) with the modification concerning the time--dependence.
\Proposition(M4) Let $\BB(e)_{\alpha,n} = [ e
_{\alpha,n}(\BB(i),\BB(j))] _{\BB(\scriptstyle i),\BB(\scriptstyle j)\in\C(S)}$ be $I \times I$ binary matrices defined by the state--transition function $\BB(s)_{n+1}=\BB(g)_n(\BB(s)_n,\alpha)$ according to
$$
e_{\alpha,n}(\BB(i),\BB(j))=\cases{
1 & if $\BB(i)=\BB(g)_n(\BB(j),\alpha)$ \cr
0 & if $\BB(i)\neq \BB(g)_n(\BB(j),\alpha)$ \cr}
=\delta_{\BB(\scriptstyle i)\,,\,\BB(\scriptstyle g)_n(\BB(\scriptstyle j), \alpha)}\;.
\e(TT20)
$$
We will call $\BB(e)_{\alpha,n}$ conditional transition matrices, since they express state transitions under the condition $a_n=\alpha$. Then
$$
\BB(\pi)_n
= \sum_{\alpha \in \C(A)_M} q_{\alpha} \, \BB(e)_{\alpha,n}\,.~\q\Box
\e(TT22)
$$
Now, by use of \e(TT20) and the state--update function \e(K20), the conditional transition matrices can be written explicitly as
$$
\eqalign{
\BB(e)_{\alpha,n} & = \Big[
\delta_{\BB(\scriptstyle i)\,,
\,\BB(\scriptstyle g)_n(\BB(\scriptstyle j), \alpha)}
\Big] _{\BB(\scriptstyle i),\BB(\scriptstyle j)\in\C(S)} \cr
& = \Big[
\delta_{i_0,(j_0+r_{n-L+1}j_1)_{2p}}\,
\delta_{i_1,j_2}\,\ldots\,\delta_{i_{L-2},j_{L-1}}
\,\delta_{i_{L-1},\alpha}
\Big] _{i_0,j_0\in \C(N)_{2p},\, i_1,j_1,\ldots,i_{L-1},j_{L-1}\in\C(A)_M}
}
\e(TT24)
$$
where $\BB(i)=[i_0 \,\cdots\, i_{L-1} ]^T$ and similarly for $\BB(j)$. The product of functions in \e(TT24) permits formulating the matrix $\BB(e)_{\alpha,n}$ in compact form through a Kronecker product by following a procedure similar to that leading to \e(K15).
By preliminarily defining the single--step cyclic shift matrix
$$
\BB(D)_{2p} \eqq \qmatrix{ & & & 1\cr 1 & & & \cr & \ddots & & \cr & & 1 & }
\qq \hbox{matrix $2p\times2p$}\;,
\e(TT26a)
$$
which is a square matrix obtained by cyclically shifting the main diagonal to the left by $1$ position, we then have (see Appendix~A for a proof)
\Proposition(M4bis) The $I\times I$ conditional transition matrices $\BB(e)_{\alpha,n}$ for $L\ge2$ are given by
$$
\BB(e)_{\alpha,n} = \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}\,
\otimes\,\BB(w)_{\beta}^T\,
\otimes\,\BB(I)_{M^{L-2}}\,
\otimes\,\BB(w)_{\alpha}
\e(TT30)
$$
As a consequence, and by exploiting \e(TT22), the TPMs $\BB(\pi)_n$ are given by
$$
\BB(\pi)_n = \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}\,
\otimes\,\BB(w)_{\beta}^T\,
\otimes\,\BB(I)_{M^{L-2}}\,
\otimes\,\BB(q)\;.
\e(TT32)
$$
The case $L=1$ must be treated separately. We have
$$
\BB(e)_{\alpha,n} =\BB(D)_{2p}^{r_{n-L+1}\alpha}\;,\qq
\BB(\pi)_n = \sum_{\beta\in\C(A)_M} q_\beta\,\BB(D)_{2p}^{r_{n-L+1}\beta} \q~\Box
\e(TT32bis)
$$
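The Kronecker constructions \e(TT30) and \e(TT32) can be verified numerically. The following Python/numpy sketch (illustrative values $L=2$, $M=2$, $p=4$, $r_{n-L+1}=3$, uniform $\BB(q)$) builds $\BB(e)_{\alpha,n}$ and $\BB(\pi)_n$ and checks that they are column--stochastic, consistent with the update $\BB(p)_{n+1}=\BB(\pi)_n\,\BB(p)_n$:

```python
# Conditional transition matrices (TT30) and TPM (TT32) via Kronecker
# products, for illustrative values L = 2, M = 2, p = 4, r = 3, q uniform.
import numpy as np

alphabet = [-1, 1]                      # M = 2 symbols
M, L, p, r = len(alphabet), 2, 4, 3
q = [0.5, 0.5]                          # symbol probabilities q_alpha

# single-step cyclic shift matrix D_{2p} of (TT26a): D @ e_z = e_{(z+1) mod 2p}
D = np.roll(np.eye(2 * p), 1, axis=0)

def w(a):
    """Indicator column vector of symbol a over the alphabet."""
    v = np.zeros((M, 1))
    v[alphabet.index(a)] = 1.0
    return v

def e_matrix(alpha):
    """e_{alpha,n} of (TT30); for L = 2 the middle factor I_{M^(L-2)}
    is the scalar 1 and is omitted."""
    return sum(np.kron(np.kron(np.linalg.matrix_power(D, (r * b) % (2 * p)),
                               w(b).T),
                       w(alpha))
               for b in alphabet)

pi = sum(qa * e_matrix(a) for qa, a in zip(q, alphabet))   # (TT22)

I = 2 * p * M**(L - 1)                  # state-set cardinality (TT12)
assert pi.shape == (I, I)
# every column of e_{alpha,n} contains a single 1 (deterministic transition),
# hence pi_n is column-stochastic
assert all(np.allclose(e_matrix(a).sum(axis=0), 1.0) for a in alphabet)
assert np.allclose(pi.sum(axis=0), 1.0)
```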
\subsection{Evaluation of the absolute probability vector}
With respect to the APV $\BB(p)_n$, the following update relation holds from the definition of $\BB(p)_n$ and $\BB(\pi)_n$,
$$
\BB(p)_{n+1} = \BB(\pi)_n\,\BB(p)_n\;.
\e(TT34)
$$
We are interested in identifying the particular APV that is invariant under the update \e(TT34), that is
$$
\BB(p)_{n} = \BB(\pi)_n\,\BB(p)_n\;,
$$
providing information on the limiting probabilities $\BB(p)_\infty$. Note that the APV we are looking for is an eigenvector of $\BB(\pi)_n$ with unit eigenvalue. As proved in Appendix~B, the vector of interest is
$$
\BB(p)_n = \frac1{2p}\,\BB(1)_{2p}\,\otimes\,
\underbrace{\BB(q)\,\otimes\, \cdots\,\otimes\,\BB(q)}_{L-1}\;.
\e(TT40)
$$
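The invariance of \e(TT40) can also be checked by brute force, building $\BB(\pi)_n$ directly from the state--update rule \e(K20) rather than from the Kronecker formulas (Python/numpy sketch, illustrative values $L=2$, $M=2$, $p=4$, $r_{n-L+1}=3$ and a non-uniform $\BB(q)$):

```python
# Brute-force check that (TT40) is invariant under the update (TT34):
# pi_n is built directly from the state-update rule (K20).
import numpy as np
from itertools import product

alphabet = [-1, 1]
M, L, p, r = 2, 2, 4, 3                 # illustrative parameters
q = {-1: 0.3, 1: 0.7}                   # non-uniform symbol probabilities

# states s_n = (z_{n-L}, a_{n-1}) for L = 2, ordered z-major as in (TT40)
states = list(product(range(2 * p), alphabet))
index = {s: k for k, s in enumerate(states)}
I = len(states)                         # 2p * M^(L-1) = 16

pi = np.zeros((I, I))
for (z, a1) in states:
    for a in alphabet:                  # input a_n = a with probability q[a]
        nxt = ((z + r * a1) % (2 * p), a)    # update rule (K20)
        pi[index[nxt], index[(z, a1)]] += q[a]

# candidate invariant APV (TT40): (1/2p) 1_{2p} (x) q
p_inv = np.array([q[a1] / (2 * p) for (_, a1) in states])
assert np.isclose(p_inv.sum(), 1.0)
assert np.allclose(pi @ p_inv, p_inv)   # invariant under (TT34)
```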
\Paragrafo(CL){Classification of the Markov Chain}
\subsection{Periodicity and reducibility properties}
We have already seen that the TPMs $\BB(\pi)_n$ are periodic with period $N_h$. This makes the Markov chain itself PTI. In addition, the Markov chain may be {\em reducible}, that is, it may not be possible to reach every state from every state or, equivalently, some of the states may not communicate with each other. In the specific case of CPM, reducibility follows from the rationale explained below, based upon the renewal law \e(K13).
Recalling that $a_{m+1}$ is an odd number, we have the following scheme
\begin{small}
$$
\eqalign{
\hb{$r_{m+1}$ even} \q & \cases{\hb{$z_m$ even} & \hb{$\arr$ \q $z_{m+1}$ even} \cr
\hb{$z_m$ odd} & \hb{$\arr$ \q $z_{m+1}$ odd} \cr}
\cr
\hb{$r_{m+1}$ odd} \q & \cases{\hb{$z_m$ even} & \hb{$\arr$ \q $z_{m+1}$ odd} \cr
\hb{$z_m$ odd} & \hb{$\arr$ \q $z_{m+1}$ even} \cr} \cr} $$
\end{small}
Now, for convenience, we partition the factor $\C(N)_{2p}$ of the state set $\C(S)=\C(N)_{2p} \times \C(A)_M^{L-1}$ into its even and odd parts
$$
\C(N)_{2p}(+)=\{0,2,\ldots,2p-2\} \vq
\C(N)_{2p}(-)=\{1,3,\ldots,2p-1\} \;.
\e(K22)
$$
As a consequence, we arrange the matrix and vector indexes by displaying the even integers $\C(N)_{2p}(+)$ first, followed by the odd integers $\C(N)_{2p}(-)$, and find that the newly arranged matrices $\tilde{\BB(e)}_{\alpha,n}$ and $\tilde{\BB(\pi)}_n$ take a {\co block diagonal} form or a {\co block antidiagonal form}.
As an example, with $L=1$, $p=4$, $M=2$ and $r_{n-L+1}=3$, we find that $\tilde{\BB(\pi)}_n$ has the block antidiagonal form
\begin{small}
$$
\begin{array}{c|cccc|cccc}
& 0 & 2 & 4 & \multicolumn{1}{c}{6} & 1 & 3 & 5 & 7 \cr
\hline
0 & & & & & 0 & q_{-1}& q_1 & 0 \cr
2 & & & & & 0 & 0 & q_{-1}& q_1 \cr
4 & & & & & q_1 & 0 & 0 & q_{-1}\cr
6 & & & & & q_{-1}& q_1 & 0 & 0 \cr
\cline{2-9}
1 & 0 & 0 & q_{-1}& q_1 & & & & \cr
3 & q_1 & 0 & 0 & q_{-1}& & & & \cr
5 & q_{-1}& q_1 & 0 & 0 & & & & \cr
7 & 0 & q_{-1}& q_1 & 0 & & & & \cr
\hline
\end{array}
$$
\end{small}
while for $r_{n-L+1}=2$ we obtain the block diagonal form
\begin{small}
$$
\begin{array}{c|cccc|cccc}
& 0 & 2 & 4 & \multicolumn{1}{c}{6} & 1 & 3 & 5 & 7 \cr
\hline
0 & 0 & q_{-1}& 0 & q_1 & & & & \cr
2 & q_1 & 0 & q_{-1}& 0 & & & & \cr
4 & 0 & q_1 & 0 & q_{-1}& & & & \cr
6 & q_{-1}& 0 & q_1 & 0 & & & & \cr
\cline{2-9}
1 & & & & & 0 & q_{-1}& 0 & q_1 \cr
3 & & & & & q_1 & 0 & q_{-1}& 0 \cr
5 & & & & & 0 & q_1 & 0 & q_{-1}\cr
7 & & & & & q_{-1}& 0 & q_1 & 0 \cr
\hline
\end{array}
$$
\end{small}
Note that the submatrices are equal or closely related to one another by simple circular shifts of the row (or column) order.
The result can be generalized by inspection of \e(K20) and \ee(HZ8)b. Specifically, for $\tilde{\BB(\pi)}_n$ we obtain the structure
$$
\tilde{\BB(\pi)}_n = \cases{
\qmatrix{
\BB(F)_n & \B(0) \cr
\B(0) & \BB(F)_n \cr} & $r_{n-L+1}$ even \cr\VS3
\qmatrix{
\B(0) & \BB(D)_{I_0}^{I_0/p}\,\BB(G)_n \cr
\BB(G)_n & \B(0) \cr} & $r_{n-L+1}$ odd. \cr}
\e(K27)
$$
where $\BB(F)_n$ and $\BB(G)_n$ are $I_0\times I_0$ matrices with
$$
I_0 = \frac12\,I = p\,M^{L-1}
\e(K27b)
$$
Note that all submatrices are themselves TPMs.
Moreover, the submatrices can be easily formulated as Kronecker products by recalling \e(TT32). For $r_{n-L+1}$ even we have
$$
\BB(F)_n = \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}(+|+)\,
\otimes\,\BB(w)_{\beta}^T\,
\otimes\,\BB(I)_{M^{L-2}}\,
\otimes\,\BB(q)
$$
with $\BB(D)_{2p}^{r_{n-L+1}\beta}(+|+)$ the square matrix collecting the samples of $\BB(D)_{2p}^{r_{n-L+1}\beta}$ at even rows and even columns, while for $r_{n-L+1}$ odd it is
$$
\BB(G)_n = \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}(-|+)\,
\otimes\,\BB(w)_{\beta}^T\,
\otimes\,\BB(I)_{M^{L-2}}\,
\otimes\,\BB(q)
$$
with $\BB(D)_{2p}^{r_{n-L+1}\beta}(-|+)$ the square matrix collecting the samples of $\BB(D)_{2p}^{r_{n-L+1}\beta}$ at odd rows and even columns. Note also that the correction term in \e(K27) can be written as
$$
\BB(D)_{I_0}^{I_0/p} = \BB(D)_p\,\otimes\,\BB(I)_{M^{L-1}}\;,
\e(K27c)
$$
which restores the true modulo $2p$ operation when transiting from odd to even states. Incidentally, the following equivalences hold
$$
\BB(D)_{I_0}^{I_0/p}\,\BB(F)_n = \BB(F)_n\,\BB(D)_{I_0}^{I_0/p}\vq
\BB(D)_{I_0}^{I_0/p}\,\BB(G)_n = \BB(G)_n\,\BB(D)_{I_0}^{I_0/p}\;.
\e(K27d)
$$
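The block structure \e(K27) and the permutation property \e(K27d) can be verified on a small case (Python/numpy sketch, illustrative values $L=1$, $p=4$, $M=2$, uniform $\BB(q)$):

```python
# Check of (K27): with states reordered even-z first (K22), pi_n is block
# diagonal for r even and block antidiagonal for r odd; also check (K27d).
import numpy as np

p, alphabet = 4, [-1, 1]
q = {-1: 0.5, 1: 0.5}

order = list(range(0, 2 * p, 2)) + list(range(1, 2 * p, 2))  # evens, then odds
pos = {z: k for k, z in enumerate(order)}

def tpm(r):
    """Reordered TPM of the L = 1 chain z -> (z + r*a) mod 2p."""
    pi = np.zeros((2 * p, 2 * p))
    for z in range(2 * p):
        for a in alphabet:
            pi[pos[(z + r * a) % (2 * p)], pos[z]] += q[a]
    return pi

I0 = p                                  # I0 = I/2 = p M^(L-1) with L = 1
D = np.roll(np.eye(I0), 1, axis=0)      # D_{I0}^{I0/p}, here I0/p = 1

pi_e = tpm(2)                           # r even: block diagonal (K27)
assert not pi_e[:I0, I0:].any() and not pi_e[I0:, :I0].any()

pi_o = tpm(3)                           # r odd: block antidiagonal (K27)
assert not pi_o[:I0, :I0].any() and not pi_o[I0:, I0:].any()
G = pi_o[I0:, :I0]
assert np.allclose(pi_o[:I0, I0:], D @ G)   # top-right block is D * G_n

F = pi_e[:I0, :I0]                      # permutation property (K27d)
assert np.allclose(D @ F, F @ D) and np.allclose(D @ G, G @ D)
```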
By now decomposing the state set $\C(S)=\C(N)_{2p} \times \C(A)_M^{L-1}$ into the subsets
$$
\C(S)(\pm)=\C(N)_{2p}(\pm) \times \C(A)_M^{L-1}\;,
$$
we can further illustrate the meaning of matrices in \e(K27) by the following graph:
\begin{small}
$$
\eqalign{
\hb{$r_{n-L+1}$ even} \q &
\cases{\hb{$\B(s)_n\in\C(S)(+)$} & $\stackrel{\BB(F)_n}{\Arr}$
\q$\B(s)_n\in\C(S)(+)$ \cr
\hb{$\B(s)_n\in\C(S)(-)$} & $\stackrel{\BB(F)_n}{\Arr}$
\q$\B(s)_n\in\C(S)(-)$ \cr}
\cr
\hb{$r_{n-L+1}$ odd} \q &
\cases{\hb{$\B(s)_n\in\C(S)(+)$} & $\stackrel{\BB(G)_n}{\Arr}$
\q$\B(s)_n\in\C(S)(-)$ \cr
\hb{$\B(s)_n\in\C(S)(-)$} & $\stackrel{\BB(D)_{I_0}^{I_0/p}\,\BB(G)_n}{\Arr}$
\q$\B(s)_n\in\C(S)(+)$ \cr}
}
\e(K44)
$$
\end{small}
An equivalent illustration is given in \Figg(MH202)0.\Figg(MH202)2
We see that, with $r_{n-L+1}$ even, the state sets $\C(S)(\pm)$ do not communicate: starting with $\BB(s)_n \in \C(S)(+)$ the next state $\BB(s)_{n+1}$ is again in $\C(S)(+)$, and this transition is governed by the TPM $\BB(F)_n$. Analogously, starting with $\B(s)_n \in \C(S)(-)$ the next state is $\BB(s)_{n+1} \in \C(S)(-)$ with TPM $\BB(F)_n$. No transition is possible from $\C(S)(+)$ into $\C(S)(-)$ and from $\C(S)(-)$ into $\C(S)(+)$. For $r_{n-L+1}$ odd, the state sets $\C(S)(\pm)$ do communicate deterministically: starting with $\BB(s)_n \in \C(S)(+)$ the next state is $\BB(s)_{n+1} \in \C(S)(-)$ with TPM $\BB(G)_n$, etc.
\subsection{Concluding remarks on Markov chain evaluation}
At this point we realize the fundamental role of the sequence $r_{n-L+1}$ of the {\co normalized} (to $2p$) modulation indexes. We first introduce three examples and then treat the general case.
\Example(H2) Let $L=1$, $N_h=3$, $r_0$ even, $r_1$ and $r_2$ odd. We consider the state sequence $\BB(s)_n$ starting from $n=0$. If $\BB(s)_0 \in \C(S)(+)$, the transition is $\BB(s)_1 \in \C(S)(+)$ with TPM $\BB(F)_0$, next the transitions are $\BB(s)_2 \in \C(S)(-)$ with TPM $\BB(G)_1$ and $\BB(s)_3 \in \C(S)(+)$ with TPM $\BB(D)_{I_0}^{I_0/p}\,\BB(G)_2$, etc. So we find the TPM sequence
$$
\BB(F)_0\vl \BB(G)_1\vl \BB(D)_{I_0}^{I_0/p}\,\BB(G)_2\vl
\BB(F)_0\vl \BB(G)_1\vl \BB(D)_{I_0}^{I_0/p}\,\BB(G)_2\vl \BB(F)_0\vl \ldots
$$
as illustrated at the top of \Figg(MH136)0.\Figg(MH136)2
Analogously, if $\BB(s)_0 \in \C(S)(-)$ we find the complementary sequence illustrated at the bottom of \fig(MH136). In both cases the period is $N_c=N_h=3$.
\Example(H3) Now let $L=1$ and $N_h=3$, and suppose $r_0$ and $r_2$ even, $r_1$ odd. Depending on the state $\BB(s)_0$, we find the two trajectories illustrated in \Figg(MH137)0. In both cases the period is $N_c=2N_h=6$.\Figg(MH137)2
\Example(H4) Let $L=1$, $N_h=2$, $r_0$ be even, and $r_1$ be odd. If $\B(s)_0 \in \C(S)(+)$ the sequence of TPMs is illustrated in \Figg(MH138)0 with period $N_c=2 N_h=4$.\Figg(MH138)2
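The rule \e(K44) behind these examples is easy to mechanize. The following Python sketch (the labels $F_n$, $G_n$, $DG_n$ are shorthands for the TPMs $\BB(F)_n$, $\BB(G)_n$, $\BB(D)_{I_0}^{I_0/p}\,\BB(G)_n$) reproduces the TPM sequences of the examples:

```python
# Generate the per-step TPM labels from the parity of the r-sequence and
# the starting class, following rule (K44).
def tpm_labels(r_period, start, steps):
    labels, cls = [], start
    for n in range(steps):
        k = n % len(r_period)
        if r_period[k] % 2 == 0:
            labels.append('F%d' % k)            # class unchanged
        elif cls == '+':
            labels.append('G%d' % k); cls = '-'  # S(+) -> S(-) via G_n
        else:
            labels.append('DG%d' % k); cls = '+' # S(-) -> S(+) via D*G_n
    return labels

# Example (H2): r_0 even, r_1 and r_2 odd, s_0 in S(+)
assert tpm_labels([2, 3, 3], '+', 6) == ['F0', 'G1', 'DG2',
                                         'F0', 'G1', 'DG2']    # period 3
# Example (H3): r_0 and r_2 even, r_1 odd, s_0 in S(+)
assert tpm_labels([2, 3, 2], '+', 6) == ['F0', 'G1', 'F2',
                                         'F0', 'DG1', 'F2']    # period 6
```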
From the above examples and from the general rules \e(K44) (see also \fig(MH202)), we see that, starting from $n=0$, the Markov chain
$$
\BB(s)_0 \vq \BB(s)_1 \vq \BB(s)_2 \,, \ldots
$$
has two distinct classes of trajectories $\C(T)(\pm)$ which never communicate with each other. The first class $\C(T)(+)$ is determined by the condition $\BB(s)_0 \in \C(S)(+)$ and the second class $\C(T)(-)$ by the condition $\BB(s)_0 \in \C(S)(-)$. We can extend these considerations to the bilateral chain
$$
\ldots \vq \BB(s)_{-2} \vq \BB(s)_{-1} \vq \BB(s)_0 \vq \BB(s)_1 \vq \BB(s)_2 \,, \ldots
$$
which may be a candidate for stationarity or cyclostationarity.
In the underlying probability space, we have to operate under one of the conditions\footnote{In practice, the condition is determined by the remote history of the modulator.}
$$
\C(C)_+: \BB(s)_0 \in \C(S)(+) \vq \C(C)_-: \BB(s)_0 \in \C(S)(-)
$$
and correspondingly we find two distinct classes of trajectories:
$$
\{ \BB(s)_n \} \in \C(T)(+) \q \hb{and} \q \{ \BB(s)_n \} \in \C(T)(-)\;.
$$
We claim that both classes can be modeled by a non--homogeneous irreducible Markov chain with TPMs $\BB(\pi)_n(+)$ and $\BB(\pi)_n(-)$, respectively. The cardinality of these conditional chains is $p M^{L-1}$, that is, half the cardinality $2p M^{L-1}$ of the original unconditional Markov chain.\footnote{The novelty (to be clarified) is that the state set of each conditioned Markov chain is not unique, but changes between $\C(S)(+)$ and $\C(S)(-)$ depending on the modulation indexes. However, $\C(T)(+)$ and $\C(T)(-)$ are isomorphic.}
The TPMs $\BB(\pi)_n(\pm)$ are obtained from the TPM $\BB(\pi)_n$ of the unconditioned Markov chain with the block partition and the rules seen above and now summarized.
\Proposition(P15) Let $\BB(\pi)_n$ be the TPM of the unconditioned Markov chain, which has the form \e(K27) and period $N_h$. The period $N_c$ of $\BB(\pi)_n(\pm)$ depends on the sequence of the normalized modulation indexes $r_0,r_1,\ldots,r_{N_h-1}$, namely
$$
N_c=\cases{
N_h & if the number of the $r_n$ odd in a period $N_h$ is even \cr
2N_h & if the number of the $r_n$ odd in a period $N_h$ is odd. \cr}
$$
The $\BB(\pi)_n(+)$ in a period starts with $\BB(\pi)_0(+)=\BB(F)_0$ if $r_{-L+1}$ is even ($\BB(s)_1 \in \C(S)(+)$), and with $\BB(\pi)_0(+)=\BB(G)_0$ if $r_{-L+1}$ is odd ($\BB(s)_1 \in \C(S)(-)$). The rest is obtained recursively with the rule \e(K44). Analogously $\BB(\pi)_n(-)$ starts with $\BB(\pi)_0(-)=\BB(F)_0$ if $r_{-L+1}$ is even ($\BB(s)_1 \in \C(S)(-)$), and with $\BB(\pi)_0(-)=\BB(D)_{I_0}^{I_0/p}\,\BB(G)_0$ if $r_{-L+1}$ is odd ($\BB(s)_1 \in \C(S)(+)$), and is obtained with the same rules.~$\Box$
\Paragrafo(PD){Alternative Representation of CPM Modulator}
A promising approach to closed form PSD evaluation is given by reinterpreting the CPM modulator through the {\em polyphase decomposition} (PD) or serial-to-parallel conversion (S/P) of \Figg(MH22)0, where the input symbol sequence $\{a_n\}$ is turned into the word sequence
$$
\BB(x)_n = \Big[a_{nN_c}, a_{nN_c+1},\cdots, a_{nN_c+N_c-1} \Big]
\e(PD2)
$$
which is itself a stationary sequence with alphabet $\C(A)_M^{N_c}$.\Figg(MH22)2
The subsequent SM generating the output words $\{\BB(y)_n\}$ can then be formulated from an equivalent of \e(TT2), with the only difference that the observation interval of interest now has length $T_c$ and takes the form $\C(I)_n=[nT_c,nT_c+T_c)$. We then have
$$
\alpha(t) = 2\pi \left[
\sum_{m=-\infty}^{nN_c-L} a_m h_m \,\frac{1}{2}
+ \sum_{m=nN_c-L+1}^{nN_c+N_c-1} a_m h_m \,\varphi(t-mT)
\right]
\vq t \in \C(I)_n\;,
\e(PD4)
$$
so that the counterpart to \e(K8) becomes
$$
v(t)=\sum_n \BB(\phi)(t-nT_c)\,\BB(y)_n
\e(PD6)
$$
where
$$
\eqalign{&
\BB(y)_n
= \sigma_{nN_c-L}
\,\Big[\delta_{\BB(\scriptstyle a)_n,\BB(\scriptstyle\alpha)}\Big]
_{\BB(\scriptstyle\alpha)\in\C(A)_M^{L+N_c-1}}
\cr &
\BB(\phi)(t)
= \eta_{T_c}(t) \left[ \prod_{i=0}^{L-1}
\exp \left( j2\pi h_{nN_c-i} \alpha_i \varphi(t+iT) \right) \;
\prod_{i=1}^{N_c-1}
\exp \left( j2\pi h_{nN_c+i} \beta_i \varphi(t-iT) \right)\right]
_{\BB(\scriptstyle\alpha)\in\C(A)_M^{L+N_c-1}}
}
\z(a)
$$
Here, $\eta_{T_c}(t)$ is an indicator function active over the interval $[0,T_c)$, $\BB(a)_n=(a_{nN_c-L+1}, \ldots, a_{nN_c+N_c-1})$ collects the input data over a window of length $N_c+L-1$, $\BB(\alpha)=(\alpha_0,\alpha_1, \ldots,\alpha_{L-1},\beta_1,\ldots, \beta_{N_c-1}) \in \C(A)_M^{L+N_c-1}$, and $N_0= M^{L+N_c-1}$ is the cardinality of $\C(A)_M^{L+N_c-1}$. So, consistently with \e(K8), $\BB(y)_n$ is a column vector of length $N_0$, while $\BB(\phi)(t)$ is a row vector of the same length.
Note that, unlike \e(K8), $\BB(\phi)(t)$ is independent of $n$. Moreover, the second of \ee(PD6)a clearly shows the newly required component driven by the symbols $\beta_i$. The formalization of a proper SM for \e(PD6) is now immediate by exploiting the results of \proposition(P7).
\Proposition(P7bis) In a multi--$h$ CPM with alphabet size $M$, memory $L$, rational modulation indexes $h_n=r_n/p$, $n=0,1,\ldots,N_h-1$, and time-invariance period $N_c$, the states of the time invariant SM $\C(M)_{\hbt{CPM}}^{{\rm (p)}}$ (where p stands for parallel) are defined by
$$
\BB(\sigma)_n
= \left( \sigma_n^{(0)},\sigma_n^{(1)},\ldots,\sigma_n^{(L-1)}\right)
= \left( z_{nN_c-L},a_{nN_c-L+1},\ldots,a_{nN_c-1}\right)
\e(PD8)
$$
with a one-to-one relation to the PTI states \e(TT6) given by the sampling relation
$$
\BB(\sigma)_n = \BB(s)_{nN_c}\;.
\e(PD10)
$$
The output function $\BB(y)_n=\BB(h) \left( \BB(\sigma)_n,\BB(x)_n\right)$ is given by
$$
\BB(y)_n = W_{2p}^{\sigma_n^{(0)}} \BB(w)_{\sigma_n^{(1)}} \otimes \cdots \otimes \BB(w)_{\sigma_n^{(L-1)}} \otimes \BB(w)_{x_n^{(0)}} \cdots
\otimes \BB(w)_{x_n^{(N_c-1)}}\;.~\q\Box
\e(PD12)
$$
According to the ordering in \e(PD12) we can further attempt an equivalent Kronecker formulation of the interpolating filter bank $\BB(\phi)(t)$ as
$$
\BB(\phi)(t) = \BB(\phi)_{-L+1}(t) \otimes \cdots \otimes \BB(\phi)_{N_c-1}(t)
\e(PD14)
$$
where
$$
\BB(\phi)_i(t) = \eta_{T_c}(t) \sum_{\alpha\in\C(A)_M} \BB(w)_\alpha
\exp \left( j2\pi h_i \alpha \varphi(t-iT) \right)
\z(a)
$$
In addition, by exploiting the sampling relationship \e(PD10), the statistical description of the newly introduced SM immediately follows from \proposition(M4) and \proposition(M4bis), and from general properties of Markov chains. We have
\Proposition(M4ter) The $I\times I$ conditional transition matrices $\BB(E)_{\BB(\scriptstyle \alpha),n}$ for $\BB(\alpha)= [\alpha_0, \ldots, \alpha_{N_c-1}]^T$ and $L\ge2$ are defined through the matrix product
$$
\BB(E_\alpha)
= \BB(e)_{\alpha_{N_c-1},nN_c+N_c-1} \,\cdots\,
\BB(e)_{\alpha_1,nN_c+1}\,\BB(e)_{\alpha_0,nN_c}
\e(PD20)
$$
with $\BB(e)_{\alpha,n}$ as defined in \e(TT30). In turn, the TPM $\BB(\Pi)$ is built as the TPM matrix product
$$
\BB(\Pi) = \BB(\pi)_{nN_c+N_c-1}\,\cdots\,\BB(\pi)_{nN_c+1}\, \BB(\pi)_{nN_c}
\e(PD22)
$$
where $\BB(\pi)_n$ is given by \e(TT32). Both are independent of $n$. Observe that, for $L=1$, the expressions \e(TT32bis) must be used in place of \e(TT30) and \e(TT32).~$\Box$
As a consequence of \e(K27) and \proposition(P15), we also have
\Proposition(P15bis) By using the state ordering \e(K22), the TPM matrix $\BB(\Pi)$ takes the block diagonal form
$$
\tilde{\BB(\Pi)} = \qmatrix{
\BB(\Pi)(+) & \B(0) \cr
\B(0) & \BB(\Pi)(-) \cr}
\e(PD24)
$$
where $\BB(\Pi)(+)$ contains the samples of $\BB(\Pi)$ at even rows and even columns, while $\BB(\Pi)(-)$ contains the samples of $\BB(\Pi)$ at odd rows and odd columns. Moreover
$$
\BB(\Pi)(\pm) = \BB(\pi)_{N_c-1}(\pm)\,\cdots\,\BB(\pi)_1(\pm)
\,\BB(\pi)_0(\pm)\vq
\BB(\Pi)(+) = \BB(\Pi)(-)
\e(PD26)
$$
the second equivalence being assured by the permutation property \e(K27d).~$\Box$
Incidentally, an equivalent to \proposition(P15bis) holds for state-transition matrices $\BB(E_\alpha)$, for which we have
$$
\tilde{\BB(E)}\BB(_\alpha) = \qmatrix{
\BB(E_\alpha)(+) & \B(0) \cr
\B(0) & \BB(E_\alpha)(-) \cr}\vq
\BB(E_\alpha)(+)=\BB(E_\alpha)(-)\;.
\e(PD26bis)
$$
As a remark, we observe that the product formulations \e(PD20), \e(PD22), and \e(PD26) could be restated through more direct expressions. However, doing so gives little additional insight and is of no direct interest for spectral evaluation.
Finally, the APVs of interest, separately for the even and odd trajectories $\C(T)(+)$ and $\C(T)(-)$, can be formulated starting from \e(TT40) to give
$$
\overinf{\BB(p)} = \BB(\Pi)(\pm)\,\overinf{\BB(p)}\vq
\overinf{\BB(p)} = \frac1{p}\,\BB(1)_{p}\,\otimes\,
\underbrace{\BB(q)\,\otimes\, \cdots\,\otimes\,\BB(q)}_{L-1}\;
\e(PD28)
$$
Incidentally, since both trajectories $\C(T)(\pm)$ identify irreducible Markov chains, the following limit property is known to hold (e.g. see \bibl(Gantmacher))
$$
\overinf{\BB(\Pi)}(\pm) \eqq \lim_{k\rightarrow\infty}\BB(\Pi)(\pm)^k
= \overinf{\BB(p)}\,\BB(1)_{I/2}^T
\e(PD30)
$$
that is, the limit probability of being in a given state is independent of the initial state.
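The limit \e(PD30) is easily checked numerically: for any irreducible, aperiodic chain, the powers of the TPM (column-stochastic in the convention $\overinf{\BB(p)}=\BB(\Pi)\,\overinf{\BB(p)}$ used here) collapse onto the rank-one matrix $\overinf{\BB(p)}\,\BB(1)^T$. A toy sketch (the matrix below is illustrative, not one of the CPM matrices):

```python
import numpy as np

# Toy column-stochastic TPM (columns sum to 1), irreducible and aperiodic
Pi = np.array([[0.90, 0.2, 0.3],
               [0.05, 0.7, 0.3],
               [0.05, 0.1, 0.4]])

# Stationary vector: eigenvector of eigenvalue 1, normalized to sum to 1
vals, vecs = np.linalg.eig(Pi)
p = np.real(vecs[:, np.argmax(np.real(vals))])
p = p / p.sum()

# Powers of Pi converge to the rank-one matrix p 1^T: every column tends to p,
# i.e., the limit distribution does not depend on the starting state
Pi_inf = np.linalg.matrix_power(Pi, 200)
assert np.allclose(Pi_inf, np.outer(p, np.ones(3)), atol=1e-10)
```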
\Paragrafo(SE){Closed Form Spectral Evaluation}
\subsection{Generalities}
We follow the PD approach depicted in \fig(MH22), which is simpler and permits the direct use of the theory of \bibl(CarTron74). In fact, the reformulation of the SM in a time-invariant form removes the PTI on the output sequence $\{\BB(y)_n\}$ which, unlike $\{\BB(b)_n\}$, is stationary. Once we have evaluated the correlation
$$
\BB(r_y)(kT_c) = \E[\BB(y)_m\,\BB(y)_{m-k}^*]
\e(SE2)
$$
and the corresponding PSD
$$
\BB(R_y)(f) = T_c \sum_k \BB(r_y)(kT_c)\,e^{-j2\pi fkT_c}
\e(SE4)
$$
the output (average) PSD is simply given by
$$
\overline{R}_v(f) = \BB(\Phi)(f)\, \BB(R_y)(f)\,\BB(\Phi)^*(f)
\e(SE6)
$$
with $\BB(\Phi)(f)=\int_{-\infty}^{+\infty}\BB(\phi)(t)e^{-j2\pi ft}dt$ the Fourier transform of $\BB(\phi)(t)$. The result \e(SE6) is a straightforward generalization of a widely known property of the interpolating filter to an interpolating filter bank, whose proof can be found in \bibl().
A fundamental role is played by the possible {\co presence of lines} (delta functions) in the SDs. In general, $\BB(R_y)(f)$ is given by the sum of a series, which is not convergent. The technique to handle this divergence is the separation of the correlation $\BB(r_y)(kT_c)$ into a {\co continuous} part, $\BB(r)_{\BB(\scriptstyle y)}^{(c)}(kT_c)$, and a {\co discrete} part, $\BB(r)_{\BB(\scriptstyle y)}^{(d)}(kT_c)$. The latter is related to the asymptotic behavior of the correlation, and ultimately to the state transition probabilities \e(PD28). The PSDs will henceforth be explicitly written as
$$
\BB(R_y)(f) = \BB(R)_{\BB(\scriptstyle y)}^{(c)}(f)
+ \BB(R)_{\BB(\scriptstyle y)}^{(d)}(f)
\e(SE10)
$$
and
$$
\overline{R}_v(f) = \overline{R}_v^{(c)}(f)
+ \overline{R}_v^{(d)}(f)\;.
\e(SE12)
$$
We shall see that, when present, the spectral lines occur at frequencies that are multiples of $F_c=1/T_c$.
\subsection{New notation}
We preliminarily need to establish some notation according to \bibl(CarTron74). Let the output symbols $\BB(y)_n$ be organized in matrices such that $\BB(Y_x)$ collects (in its columns) the output words corresponding to the input word $\BB(x)$. Evidently, since $I$ is the number of states, there are $I$ such words, and $\BB(Y_x)$ is thus an $N\times I$ matrix. By exploiting the first of \ee(PD6)a and \e(PD12), in Kronecker form it is (see Appendix~C)
$$
\BB(Y_x) = \BB(V)_{2p} \,\otimes\,
\BB(I)_{M^{L-1}}\,\otimes\,
\BB(w)_{x_0}\,\otimes\,\cdots\,\otimes\,\BB(w)_{x_{N_c-1}}
\e(SE20)
$$
where
$$
\BB(V)_{2p} = [W_{2p}^j]_{j\in\C(N)_{2p}}
= [1, W_{2p}, W_{2p}^2,\cdots, W_{2p}^{2p-1}]\;.
\z(a)
$$
is a row vector. To discriminate between trajectories $\C(T)(\pm)$, we can further introduce the notation
$$
\BB(Y_x)(+) = \BB(V)_{p} \,\otimes\,
\BB(I)_{M^{L-1}}\,\otimes\,
\BB(w)_{x_0}\,\otimes\,\cdots\,\otimes\,\BB(w)_{x_{N_c-1}} \vq
\BB(Y_x)(-) = W_{2p}\,\BB(Y_x)(+) \;.
\e(SE22)
$$
\subsection{Presence of spectral lines in $\BB(R_y)$}
Spectral lines depend on the limit behavior of the Markov chain; specifically, they are related to the limit mean value of $\BB(y)_n$, namely
$$
\BB(m_y) = \lim_{n\rightarrow\infty} \E[\BB(y)_n]\;,
\e(SE14)
$$
where APVs \e(PD28) hold. Now, the mean \e(SE14) can be evaluated as
$$
\BB(m_y)(\pm) = \sum_{\BB(\scriptstyle x)} \P[\BB(x)_\infty = \BB(x)]
\BB(Y_x)(\pm)\,\overinf{\BB(p)}
\e(SE14bis)
$$
from which we have the following general result (the proof is reported in Appendix~D)
\Proposition(SP2)
When $p>1$, no spectral lines occur in CPM and we have
$$
\BB(R)_{\BB(\scriptstyle y)}^{(d)}(f) = \BB(0)
\vq p>1\;.
\ee(SE24)a
$$
Conversely, when $p=1$, that is, for integer modulation factors, spectral lines are found and we have
$$
\BB(R_y)^{(d)}(f) = \BB(r_y)^{(d)} \sum_k \delta(f-kF_c)\vq p=1
\z(b)
$$
where
$$
\BB(r_y)^{(d)} = \underbrace{[\BB(q\,q)^*]\,\otimes\,\cdots\,\otimes\,[\BB(q\,q)^*]}_{N_c+L-1}
\e(SE34)
$$
and $F_c=1/T_c$.~$\Box$
We note that \proposition(SP2) is in accordance with the results of the literature on single-$h$ CPM \bibl(Comm81).
\subsection{Continuous spectrum evaluation of $\BB(R_y)$}
In order to evaluate the continuous part of the spectrum, we first need to make the correlation $\BB(r_y)$ in \e(SE2) more explicit, following the development in \bibl(CarTron74).
For the value at $nT_c=0$ (autocorrelation) we have (see (36) in \bibl(CarTron74))
$$
\BB(r_y)(0) = \E[\BB(y)_m\,\BB(y)_m^*]
= \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]
\,\BB(Y_x)(\pm)\,{\rm diag}(\overinf{\BB(p)})\,\BB(Y_x)^*(\pm)\;.
\e(CS1)
$$
from which, by exploiting the Kronecker product expressions \e(PD28) and \e(SE22), and after some algebra, we obtain
$$
\BB(r_y)(0) = \underbrace{{\rm diag}(\BB(q))\,\otimes\,\cdots
\,\otimes\,{\rm diag}(\BB(q))}_{N_c+L-1}\;.
\e(CS2)
$$
For the values at $n\neq0$, it is better to distinguish between positive and negative values. For $n>0$ we have (see (37)-(39) in \bibl(CarTron74))
$$
\BB(r_y)(nT_c) = \BB(C)_2(\pm)\,\BB(\Pi)(\pm)^{n-1}\,\BB(C)_1(\pm)\vq n>0
\e(CS4)
$$
where
$$
\eqalign{
\BB(C)_1(\pm) &= \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]
\BB(E_x)(\pm)\,{\rm diag}(\overinf{\BB(p)})\,\BB(Y_x)^*(\pm)\cr
\BB(C)_2(\pm) &= \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]
\BB(Y_x)(\pm)\;.
}
\z(a)
$$
Instead, for $n<0$ we can exploit the relation
$$
\BB(r_y)(-nT_c) = \BB(r_y)^*(nT_c) \;.
\e(CS5)
$$
Observe that the correlation samples of the two trajectories coincide, since $\BB(E_x)(+)=\BB(E_x)(-)$ and $\BB(\Pi)(+)=\BB(\Pi)(-)$ (see \e(PD26) and \e(PD26bis)), while the $W_{2p}$ difference between $\BB(Y_x)(+)$ and $\BB(Y_x)(-)$ (see \e(SE22)) is removed by the complex conjugation in $\BB(C)_1$.
In addition, we must take into account the limit value $\BB(r_y)^{(d)}$, which (when not null) is responsible for the spectral lines. According to (40) in \bibl(CarTron74) we can write
$$
\BB(r_y)^{(d)} = \BB(m_y)(\pm)\,\BB(m_y)^*(\pm)
= \BB(C)_2(\pm)\,\overinf{\BB(\Pi)}\,\BB(C)_1(\pm)\;.
\e(CS7)
$$
As the reader can verify by exploiting \e(PD30), this expression is consistent with the formulation in \e(SE34).
Now, the continuous spectrum becomes
$$
\eqalign{
\BB(R_y)^{(c)}(f)
& = T_c\,\sum_n \Big(\BB(r_y)(nT_c) - \BB(r_y)^{(d)}\Big)
\,e^{-j2\pi fnT_c}\cr
}
\e(CS10)
$$
and, by then relying on the symmetry in \e(CS5) and on the results of \bibl(CarTron74), the PSD can be written as
$$
\eqalign{
\BB(R_y)^{(c)}(f)
& = T_c \,\Big(\BB(r_y)(0) - \BB(r_y)^{(d)}\Big) + \null\cr
& \hspace*{-8mm} \null +
2T_c\,\Re\left[\BB(C)_2(\pm)\,\Big(\BB(I)_{I_0}-\overinf{\BB(\Pi)}\Big)\,
\Big(e^{j2\pi fT_c}\BB(I)_{I_0}-
\BB(\Pi)(\pm)+\overinf{\BB(\Pi)}\Big)^{-1}\,\BB(C)_1(\pm)\right]\;.
}
\e(CS12)
$$
Note that this is a closed-form expression, unlike the results available in the CPM literature (e.g., see \bibl(Wilson81), \bibl(Ilyin04)) where the PSD is evaluated through direct numerical evaluation of the series \e(CS10).
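The structure of \e(CS12) can be verified on a toy example. The sketch below uses a scalar two-state Markov output (hypothetical matrices, with $T_c=1$ and the column-stochastic convention $\overinf{\BB(p)}=\BB(\Pi)\,\overinf{\BB(p)}$, standing in for $\BB(\Pi)$, $\overinf{\BB(p)}$ and the output map), and compares the resolvent-based closed form with a direct truncation of the series \e(CS10):

```python
import numpy as np

# Toy 2-state chain, column-stochastic (Pi @ p = p), scalar output y(state)
Pi = np.array([[0.8, 0.4],
               [0.2, 0.6]])
p  = np.array([2/3, 1/3])           # stationary vector
y  = np.array([1.0, -1.0])          # output value attached to each state
D  = np.diag(p)

r0 = y @ D @ y                      # r(0) = E[y^2]
rd = (y @ p) ** 2                   # discrete (limit) part of the correlation
Pi_inf = np.outer(p, np.ones(2))    # rank-one limit matrix

def psd_closed(f):
    """Continuous PSD via the resolvent, in the spirit of the closed form above."""
    z = np.exp(2j * np.pi * f)
    S = y @ (Pi - Pi_inf) @ np.linalg.inv(z * np.eye(2) - Pi + Pi_inf) @ D @ y
    return (r0 - rd) + 2 * np.real(S)

def psd_series(f, K=200):
    """Direct truncation of the correlation series."""
    acc = r0 - rd
    for k in range(1, K):
        rk = y @ np.linalg.matrix_power(Pi, k) @ D @ y
        acc += 2 * np.real((rk - rd) * np.exp(-2j * np.pi * f * k))
    return acc

f = 0.17
assert abs(psd_closed(f) - psd_series(f)) < 1e-10
```

The key identity is $\BB(\Pi)^k-\overinf{\BB(\Pi)}=(\BB(\Pi)-\overinf{\BB(\Pi)})^k$ for $k\ge1$, which turns the series into a geometric one and hence into a single matrix inversion.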
We also underline that the matrix inversion in \e(CS12) can be circumvented. The details can be found in \bibl(CarTron74); here we report the final result. In particular, the inversion of an $I\times I$ matrix $(\lambda\B(I)-\B(F))$, with $\lambda=e^{j2\pi fT_c}$, can be performed via
$$
(\lambda\B(I)-\B(F))^{-1} = \frac{\sum_{k=0}^{I-1}\BB(G)_k\,\lambda^{I-1-k}}{d(\lambda)}\vq
\BB(G)_k = \sum_{m=0}^k d_{k-m}\,\BB(F)^m
$$
where
$$
d(x) = \det\Big(x\B(I)-\B(F)\Big) = \sum_{k=0}^I d_k\,x^{I-k}\;.
$$
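Numerically, the inversion-free formula can be checked as follows (a sketch, with an arbitrary matrix: `np.poly` returns the coefficients $d_k$ of the characteristic polynomial $d(\lambda)=\det(\lambda\B(I)-\B(F))$, with $d_0=1$):

```python
import numpy as np

rng = np.random.default_rng(0)
I = 4
F = rng.standard_normal((I, I))

# Coefficients d_k of the characteristic polynomial, highest power first (d_0 = 1)
d = np.poly(F)

# G_k = sum_{m=0}^{k} d_{k-m} F^m, as in the formula above
G = [sum(d[k - m] * np.linalg.matrix_power(F, m) for m in range(k + 1))
     for k in range(I)]

lam = 2.7 + 0.3j                    # any lambda that is not an eigenvalue of F
lhs = np.linalg.inv(lam * np.eye(I) - F)
rhs = sum(G[k] * lam ** (I - 1 - k) for k in range(I)) / np.polyval(d, lam)
assert np.allclose(lhs, rhs)
```

In the PSD computation one can thus precompute the $\BB(G)_k$ once and evaluate the resolvent at each frequency through polynomial evaluations only.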
\subsection{Power spectral density of $v(t)$}
The derivation of the power spectral density of $v(t)$ can now be straightforwardly obtained by use of \e(SE6). For ease of computational evaluation, here we introduce the matrices
$$
\BB(V_x)(f) = \BB(\Phi)(f)\,\BB(Y_x)(+)
\e(AD2)
$$
which are row matrices of dimension $1\times I_0$, namely $\BB(V_x)(f) = [V\BB(_{x,s})(f)]_{\BB(\scriptstyle s)\in \C(S)(+)}$, whose element $V\BB(_{x,s})(f)$ can be obtained from (see also \ee(PD6)a)
$$
\eqalign{
v\BB(_{x,s})(t) & = \eta_{T_c}(t)\; W_{2p}^{s^{(0)}}\,\prod_{i=1}^{L-1}
\exp \left( j2\pi \,h_{nN_c-L+i} \,s^{(i)} \,\varphi(t+(L-i)T) \right) \;\cdot\null\cr
& \qquad\null\cdot\;
\prod_{i=0}^{N_c-1}
\exp \left( j2\pi \, h_{nN_c+i} \, x^{(i)} \varphi(t-iT) \right)
}
\z(a)
$$
through an ordinary Fourier transform, that is $v\BB(_{x,s})(t)\arrf V\BB(_{x,s})(f)$.
So, in the general case of non-integer modulation factors ($p>1$) from \e(CS1) and \e(CS12) we obtain
$$
\eqalign{
\overline{R}_v^{(d)}(f) & = 0 \cr
\overline{R}_v^{(c)}(f)
& = T_c\, K_0(f) +
2T_c\,\Re\left[\BB(K)_2(f)\,
\Big(\BB(I)_{I_0}-\overinf{\BB(\Pi)}\Big)\,
\Big(e^{j2\pi fT_c}\BB(I)_{I_0}-
\BB(\Pi)(\pm)+\overinf{\BB(\Pi)}\Big)^{-1}\,\BB(K)_1(f)
\right]
}
\e(AD4)
$$
where
$$
\eqalign{
K_0(f)
& = \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]
\BB(V_x)(f)\,{\rm diag}(\overinf{\BB(p)})\,\BB(V_x)^*(f) \cr
\BB(K)_1(f)
& = \BB(C)_1(+)\,\BB(\Phi)^*(f)
= \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]
\BB(E_x)(+)\,{\rm diag}(\overinf{\BB(p)})\,\BB(V_x)^*(f) \cr
\BB(K)_2 (f)
& = \BB(\Phi)(f)\,\BB(C)_2(+)
= \sum_{\BB(\scriptstyle x)} \P[\BB(x)_m=\BB(x)]\BB(V_x)(f)
}
\z(a)
$$
In the very special case of integer modulation factors ($p=1$), we recall \e(SE34) to write the spectral lines as
$$
\overline{R}_v^{(d)}(f)
= \sum_k |\BB(K)_2(kF_c)\,\overinf{\BB(p)}|^2 \delta(f-kF_c)
\e(AD6)
$$
while the continuous part of the spectrum in \e(AD4) needs the additive correction term
$$
\Delta\overline{R}_v^{(d)}(f)
= -|\BB(K)_2(f)\,\overinf{\BB(p)}|^2
\e(AD7)
$$
according to \e(CS12).
\Paragrafo(EX){Examples}
In this section we give some examples of PSDs evaluated through \e(AD4). We begin by showing some results for {\em full response} CPM signals (i.e., with $L=1$) in \Figg(EX2)0. All plots refer to {\em continuous phase frequency shift keying} (CPFSK) signaling employing the phase response
$$
\varphi(t) = \cases{0&, $t<0$\cr \frac{t}{2LT}&, $0\le t< LT$\cr\frac12&, $t\ge LT$}
\e(PR2)
$$
and for different choices of the multi-$h$ sequence factors. Examples were taken from \bibl(Wilson81) and \bibl(Mazur81).\Figg(EX2)2
A further example of full response signaling taken from \bibl(Wilson81) is shown in \Figg(EX4)0, where CPFSK is compared to {\em raised cosine} (RC) signaling
$$
\varphi(t) = \cases{0&, $t<0$\cr \frac{t}{2LT}-\frac1{4\pi}\,\sin\left(2\pi\,\frac{t}{LT}\right)&,
$0\le t< LT$\cr\frac12&, $t\ge LT$}
\e(PR4)
$$
\Figg(EX4)2
A final example is given for the partial response multi-$h$ CPM signaling formats of \bibl(Perrins05) in \Figg(EX6)0. The figure shows two RC formats, and one binary {\em Gaussian minimum shift keying} (GMSK) format with $L=4$, having a phase response
$$
\varphi(t) = \hat{\Phi}\Big(K\,(\fract tT-\fract32)\Big)
- \hat{\Phi}\Big(K\,(\fract tT-\fract52)\Big)
\vq \hat{\Phi}(a)=a\,\Phi(a)+\frac1{\sqrt{2\pi}}e^{-\frac12a^2}
\e(PR6)
$$
where $K=\pi/(2\sqrt{\ln2})$, and $\Phi(a)=\frac1{\sqrt{2\pi}}\int_{-\infty}^ae^{-\frac12t^2}dt$ is the normalized Gaussian cumulative distribution function.
\Figg(EX6)2
\Paragrafo(CO){Conclusions}
\section*{Acknowledgements}
\clearpage
\Appendix
\subsection{Sketch of the proof of \proposition(M4bis)}
We proceed step by step. For $L=1$ we have
$$
\Big[
\delta_{i_0,(j_0+r_n\alpha)_{2p}}
\Big] _{i_0,j_0\in \C(N)_{2p}} = \BB(D)_{2p}^{r_n\alpha}
$$
which is a square matrix obtained by cyclically shifting the main diagonal to the left by $r_n\alpha$ positions. The result can be obtained by evaluating the $(r_n\alpha)$-th power of the single-step cyclic shift matrix $\BB(D)_{2p}=\BB(D)_{2p}^1$. This corresponds to \e(TT32bis).
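As a quick illustration (toy sizes, not tied to the CPM parameters), $\BB(D)_{2p}^{s}$ can be built either directly from the Kronecker-delta definition or as the $s$-th power of the single-step shift $\BB(D)_{2p}$:

```python
import numpy as np

n = 6                      # plays the role of 2p
s = 4                      # plays the role of r_n * alpha (mod 2p)

# Direct construction: entry (i, j) = delta_{i, (j+s) mod n}
D_s = np.array([[1.0 if i == (j + s) % n else 0.0 for j in range(n)]
                for i in range(n)])

# Single-step cyclic shift and its s-th power
D1 = np.array([[1.0 if i == (j + 1) % n else 0.0 for j in range(n)]
               for i in range(n)])
assert np.allclose(D_s, np.linalg.matrix_power(D1, s))

# D_s maps the one-hot vector e_j to e_{(j+s) mod n}
e = np.zeros(n); e[1] = 1.0
assert (D_s @ e)[(1 + s) % n] == 1.0
```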
For $L=2$, we have
$$
\Big[
\delta_{i_0,(j_0+r_{n-1}j_1)_{2p}}\,
\delta_{i_1,\alpha}
\Big] _{i_0,j_0\in \C(N)_{2p},\, i_1,j_1\in\C(A)_M}
= \Big[
\delta_{i_0,(j_0+r_{n-1}j_1)_{2p}}
\Big] _{i_0,j_0\in \C(N)_{2p},\,j_1\in\C(A)_M} \otimes \Big[
\delta_{i_1,\alpha}
\Big] _{i_1\in\C(A)_M}
$$
where the first term can be made more explicit as
$$
\eqalign{
\Big[
\delta_{i_0,(j_0+r_{n-1}j_1)_{2p}}
\Big] _{i_0,j_0\in \C(N)_{2p},\,j_1\in\C(A)_M}
& = \sum_{\beta\in\C(A)_M}\Big[
\delta_{i_0,(j_0+r_{n-1}\beta)_{2p}}\,\delta_{\beta,j_1}
\Big] _{i_0,j_0\in \C(N)_{2p},\,j_1\in\C(A)_M} \cr
& = \sum_{\beta\in\C(A)_M}\Big[
\delta_{i_0,(j_0+r_{n-1}\beta)_{2p}}
\Big] _{i_0,j_0\in \C(N)_{2p}}\,\otimes\,\Big[
\delta_{\beta,j_1}
\Big] _{j_1\in\C(A)_M} \cr
}
$$
to obtain
$$
\BB(e)_{\alpha,n} = \sum_{\beta\in\C(A)_M}
\underbrace{\BB(D)_{2p}^{r_{n-1}\beta}}_{2p\times 2p}\,
\otimes\,\underbrace{\BB(w)_{\beta}^T}_{1\times M}\,
\otimes\,\underbrace{\BB(w)_{\alpha}}_{M\times 1}\;.
$$
The situation is similar for $L=3$, where
$$
\Big[
\delta_{i_0,(j_0+r_{n-2}j_1)_{2p}}
\Big] _{i_0,j_0\in \C(N)_{2p},\,j_1\in\C(A)_M}
\otimes \Big[
\delta_{i_1,j_2}
\Big] _{i_1,j_2\in\C(A)_M}
\otimes \Big[
\delta_{i_2,\alpha}
\Big] _{i_2\in\C(A)_M}
$$
with the central matrix being an identity. The general result is thus
$$
\BB(e)_{\alpha,n} = \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}\,
\otimes\,\BB(w)_{\beta}^T\,
\otimes\,\underbrace{\BB(I)_{M}\,\otimes\,\cdots\,\otimes\,\BB(I)_{M}}_{L-2}\,
\otimes\,\BB(w)_{\alpha}
$$
where the Kronecker product of $L-2$ occurrences of $\BB(I)_{M}$ is simply $\BB(I)_{M^{L-2}}$. As a consequence, \e(TT30) is valid.
\subsection{Sketch of the proof of \e(TT40)}
Since $\BB(\pi)_n$ is expressed as a Kronecker product in \e(TT32), we look for an eigenvector with the Kronecker structure
$$
\BB(p)_{n} = \underbrace{\BB(u)_0}_{2p\times1} \otimes
\underbrace{\BB(u)_1}_{M\times1} \otimes \cdots\otimes
\underbrace{\BB(u)_{L-1}}_{M\times1}
$$
providing the set of equations
$$
\left\{\eqalign{
& \left(\sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}\,\otimes\,\BB(w)_{\beta}^T \right)
\left(\BB(u)_0\otimes\BB(u)_1\right)
= \BB(u)_0 \cr
& \BB(I)_{M}\BB(u)_2 = \BB(u)_1 \cr
& \vdots \cr
& \BB(I)_{M}\BB(u)_{L-1} = \BB(u)_{L-2} \cr
& \BB(q) = \BB(u)_{L-1} \cr
}\right.
$$
By solving the system we immediately obtain $\BB(u)_1=\cdots=\BB(u)_{L-1}=\BB(q)$, plus the remaining equation
$$
\sum_{\beta\in\C(A)_M}
\left( \BB(D)_{2p}^{r_{n-L+1}\beta}\BB(u)_0\right)\,\otimes\,
\left(\BB(w)_{\beta}^T\BB(q)\right)
= \sum_{\beta\in\C(A)_M}
\BB(D)_{2p}^{r_{n-L+1}\beta}\BB(u)_0\;q_\beta = \BB(u)_0
$$
which is solved by $\BB(u)_0=\frac1{2p}\,\BB(1)_{2p}$, where $\BB(1)_{2p}$ is a column vector of length $2p$ with all entries set to $1$. Note that the solution is valid independently of $r_{n-L+1}$, and provides \e(TT40).
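The reason the solution does not depend on $r_{n-L+1}$ is that every $\BB(D)_{2p}^{r\beta}$ is a permutation matrix, and permutations leave the uniform vector unchanged; the weights $q_\beta$ then simply sum to one. A numerical check (toy sizes and an arbitrary probability vector, chosen only for illustration):

```python
import numpy as np

twop, M, r = 6, 3, 5                         # toy values of 2p, M, r_{n-L+1}
q = np.array([0.5, 0.3, 0.2])                # any probability vector over A_M

# Single-step cyclic shift matrix D_{2p}
D1 = np.array([[1.0 if i == (j + 1) % twop else 0.0 for j in range(twop)]
               for i in range(twop)])
u0 = np.ones(twop) / twop                    # candidate eigenvector (uniform)

# sum_beta q_beta D^{r beta} u0 = u0, since each D^{r beta} permutes entries
lhs = sum(q[b] * np.linalg.matrix_power(D1, (r * b) % twop) @ u0
          for b in range(M))
assert np.allclose(lhs, u0)
```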
\subsection{Validity of \e(SE20)}
By exploiting the first of \ee(PD6)a and \e(PD12), the matrix $\BB(Y_x)$ is
$$
\eqalign{
\BB(Y_x)
& = \Big[Y\BB(_x)(\BB(i),\BB(j))\Big]
_{\BB(\scriptstyle i)\in\C(A)^{N_c+L-1}, \BB(\scriptstyle j)\in\C(S)}\cr
& =\Big[W_{2p}^{j_0}\,\delta_{i_0,j_1}\,\cdots\,\delta_{i_{L-2},j_{L-1}}\,
\delta_{x_0,i_{L-1}}\,\cdots,\delta_{x_{N_c-1},i_{N_c+L-2}}\Big]
_{i_0,\ldots,i_{N_c+L-2}\in\C(A)_M, j_0\in\C(N)_{2p},
j_1,\ldots,j_{L-1}\in\C(A)_M}\cr
}
$$
where $\BB(i)=[i_0,\ldots,i_{N_c+L-2}]^T$ and $\BB(j)=[j_0,\ldots,j_{L-1}]^T$. It is now straightforward to see that the equivalent Kronecker form is given by \e(SE20).
\subsection{Sketch of the proof of \proposition(SP2)}
For $p>1$, we exploit the Kronecker products \e(PD28) and \e(SE22) in \e(SE14bis), and observe that $\BB(V)_p\,\BB(1)_p=0$. We then immediately obtain $\BB(m_y)(\pm) = \BB(0)$, thus ensuring that {\em no} spectral lines are found, as stated in \ee(SE24)a.
Instead, for $p=1$ we have $\BB(V)_1\,\BB(1)_1=1\cdot1=1$, in which case it is easy to show that
$$
\BB(m_y)(+) = \underbrace{\BB(q)\,\otimes\,\cdots\,\otimes\,\BB(q)}_{N_c+L-1}
\vq \BB(m_y)(-) = W_{2p}\,\BB(m_y)(+)\;.
$$
Then, from the equivalence
$$
\eqalign{
\BB(r_y)^{(d)}(nT_c)
& = \BB(m_y)(+)\,\BB(m_y)^*(+)
= \BB(m_y)(-)\,\BB(m_y)^*(-) \cr
& = \underbrace{[\BB(q\,q)^*]\,\otimes\,\cdots\,\otimes\,[\BB(q\,q)^*]}_{N_c+L-1}
}
$$
we straightforwardly obtain \ee(SE24)b, by recalling \e(SE4).
\section{Quantum fluctuations of the gravitational field}
\indent
Gravity deals with the frame in which everything takes place,
i.e., with spacetime. We are used to putting everything into
spacetime, so that we can name and handle events. General
relativity made spacetime dynamical but the relations between
different events were still sharply defined. Because of quantum
mechanics, in such a dynamical frame, objects became fuzzy;
exact locations were substituted by probability amplitudes of
finding an object in a given region of space at a given instant
of time. Spacetime undergoes the quantum fluctuations of the
other interactions and, even more, introduces its own
fluctuations, thus becoming an active agent in the theory. The
quantum theory of gravity suffers from problems (see, e.g. Refs.
\cite{is93,is97}) that have remained unsolved for many years and
that originate, in part, in the lack of a fixed immutable
spacetime background.
A quantum uncertainty in the position of a particle implies an
uncertainty in its momentum and, therefore, due to the
gravity-energy universal interaction, would also imply an
uncertainty in the geometry, which in turn would introduce an
additional uncertainty in the position of the particle. The geometry
would thus be subject to quantum fluctuations that would
constitute the spacetime foam and that should be of the same
order as the geometry itself at the Planck scale. This would
give rise to a minimum length \cite{ga95} beyond which the
geometrical properties of spacetime would be lost, while on
larger scales it would look smooth and with a well-defined
metric structure. The key ingredients for the appearance of this
minimum length are quantum mechanics, special relativity, which
is essential for the unification of all kinds of energy via the
finiteness of the speed of light, and a theory of gravity, i.e.,
a theory that accounts for the active response of spacetime to
the presence of energy (general relativity, Sakharov's
elasticity \cite{sa67,mt73}, strings\ldots). Thus, the existence
of a lower bound to any output of a position measurement, seems
to be a model-independent feature of quantum gravity. In fact,
different approaches to this theory lead to this result
\cite{ga95}.
The Planck length $\ell_*$ might play a role analogous to the speed
of light in special relativity. In this theory, there is no
physics beyond this speed limit and its existence may be
inferred through the relativistic corrections to the Newtonian
behavior. This would mean that a quantum theory of gravity could
be constructed only on ``this side of Planck's border'' as
pointed out by Markov \cite{ma80,ma81} (as quoted in Ref.
\cite{bt88}). In fact, the analogy between quantum gravity and
special relativity seems to be quite close: in the latter you
can accelerate forever even though you will never reach the
speed of light; in the former, given a coordinate frame, you can
reduce the coordinate distance between two events as much as you
want even though the proper distance between them will never
decrease beyond Planck length (see Ref. \cite{ga95}, and
references therein). This uncertainty relation $\Delta x\geq
\ell_*$ also bears a close resemblance to the role of $\hbar$ in
quantum mechanics: no matter which variables are used, it is not
possible to have an action $S$ smaller than $\hbar$
\cite{me91,mensky98,me92}.
Based on the work by Bohr and Rosenfeld
\cite{br33,ro63,br50,ro55} (see e.g. Ref. \cite{he54}) for the
electromagnetic field, Peres and Rosen \cite{pr60} and then
DeWitt \cite{de62} carefully analyzed the measurement of the
gravitational field and the possible sources of uncertainty (see
also Refs. \cite{bt88,re58,bt82,tr85}). Their analysis was
carried out in the weak-field approximation (the magnitude of
the Riemann tensor remains small for any finite domain) although
the features under study can be seen to have more fundamental
significance. This approximation imposes a limitation on the
bodies that generate and suffer the gravitational field, which
does not appear in the case of an electromagnetic field. The
main reason for this is that, in this case, the relevant
quantity that is involved in the uncertainty relations is the
ratio between the charge and the mass of the test body, and this
quantity can be made arbitrarily small. This is certainly not
the case for gravitational interactions, since the equivalence
principle precisely fixes the corresponding ratio between
gravitational mass and inertial mass, and therefore it is not
possible to make it arbitrarily small. Let us go into more
detail in the comparison between the electromagnetic and the
gravitational fields as far as uncertainties in the measurement
are concerned and see how it naturally leads to a minimum volume
of the measurement domain.
The measurement of the gravitational field can be studied from
the point of view of continuous measurements
\cite{me91,mensky98,me92}, which we briefly summarize in what
follows (throughout this work we set $\hbar=c=1$, so that the
only dimensional constant is Planck's length
$\ell_*=\sqrt{\mbox{\small G}}$).
\subsection{Continuous measurements}
Assume that we continuously measure an observable $Q$, with\-in
the framework of ordinary quantum mechanics. Let us call $\Delta
q$ the uncertainty of our measurement device. This means that,
as a result of our measurement, we will obtain an output
$\alpha$ that will consist of the result $q(t)$ and any other
within the range $(q-\Delta q, q+\Delta q)$. The probability
amplitude for an output $\alpha$ can be written in terms of path
integrals \cite{me91,mensky98} $A[\alpha]=\int_\alpha {\cal D} x
e^{iS}$, where $\alpha$ denotes not only the output but also the
set of trajectories in configuration space that lead to it. For
a given uncertainty $\Delta q$, the set $\alpha$ is fully
characterized by its central value $q$. We are particularly
interested in studying the shape of the probability amplitude
$A$. More precisely, we will pay special attention to its width
$\Delta
\alpha$ \cite{me91,mensky98,me92}.
There are two different regimes of measurement, classical and
quantum, depending on whether the uncertainty of the measuring
device is large or small. The classical regime of measurement
will be accomplished if $\Delta q$ is large enough. In this
regime, the width of the probability amplitude $\Delta \alpha$
can be seen to be proportional to the uncertainty $\Delta q$.
Also, the uncertainty in the action can be estimated to be
$\Delta S\gtrsim 1$. The quantum regime of measurement occurs
when $\Delta q$ is very small. Now the width of the probability
amplitude is $\Delta
\alpha\sim 1/\Delta q$. The uncertainty in the action is also
greater than unity in this case.
Thus, in any regime of measurement, the action uncertainty will
be greater than unity. In view of this discussion, the width
$\Delta \alpha$ of the probability amplitude will achieve its
minimum value, i.e., the measurement will be optimized, for
uncertainties in the measurement device $\Delta q$ that are
neither too large nor too small. When this minimum nonvanishing
value is achieved, the uncertainty in the action is also
minimized and set equal to one. The limitation on the accuracy
of any continuous measurement is, of course, an expression of
Heisenberg's uncertainty principle. Since we are talking about
measuring trajectories in some sense, a resolution limit should
appear, expressing the fact that position and momentum cannot be
measured simultaneously with infinite accuracy. In the classical
regime of measurement, the accuracy is limited by the intrinsic
uncertainty of the measuring device. On the other hand, when
very accurate devices are employed, quantum fluctuations of the
measuring apparatus affect the measured system and the final
accuracy is also affected. The maximum accuracy is obtained when
a compromise is achieved between keeping the classical
uncertainty low and keeping quantum fluctuations small.
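Schematically (a heuristic summary in natural units, not part of the original derivation), the two regimes combine as
```latex
$$
\Delta\alpha(\Delta q)\;\sim\;
\max\!\left(\Delta q,\ \frac{1}{\Delta q}\right)\;\geq\;1\,,
$$
```
so that the width is minimized when the classical and quantum contributions balance, at $\Delta q\sim 1$, where the action uncertainty bound $\Delta S\gtrsim 1$ is saturated.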
\subsection{Measuring the gravitational field}
This discussion
bears a close resemblance to the case of quantum gravity
concerning the existence of a minimum length, where there exists
a balance between the Heisenberg contribution $1/\Delta p$ to
the uncertainty in the position and the active response of
gravity to the presence of energy that produces an uncertainty
$\Delta x\gtrsim \ell_*^2\Delta p$. Actually, any measurement of
the gravitational field is not only extended in time, but also
extended in space. These measurements are made by determining
the change in the momentum of a test body of a given size. That
measurements of the gravitational field have to be extended in
spacetime, i.e., that they have to be continuous, is due to the
dynamical nature of this field. Before analyzing the
gravitational field, let us first briefly discuss the
electromagnetic field whose measurement can also be regarded as
continuous.
In the case of an electromagnetic field, the action has the form
$S=\int d^4x F^2$, where $F$ is the electromagnetic field
strength. Then, the action uncertainty principle $\Delta S
\gtrsim 1$ implies that $\Delta (F^2)\ l^4\gtrsim 1$, which can
be conveniently written as $\Delta F\ l^3\gtrsim q/(Flq)$, where
$l$ is the linear size of the test body and $q$ is its electric
charge. Here, we have already made the assumption that the
quantum fluctuations of the test body are negligible, i.e., that
its size $l$ is larger than its Compton wavelength $1/m$, where
$m$ is its rest mass. $Flq$ is just the electromagnetic energy
of the test body. If we impose the condition that the
electromagnetic energy of the test body be smaller than its rest
mass $m$, the uncertainty relation above becomes in this case
$\Delta F\ l^3\gtrsim q/m$. The conditions $l\gtrsim 1/m$ and
$l\gtrsim q^2/m$ that we have imposed on the test body and that
can be summarized by saying that it must be classical from both
the quantum and the relativistic point of view are the
reflection of the following assumptions: the measurement of the
field averaged over a spacetime region, whose linear dimensions
and time duration are determined by $l$, is performed by
determining the initial and final momentum of a uniformly
charged test body; the time interval required for the momentum
measurement is small compared to $l$; any back-reaction can be
neglected if the mass of the test body is sufficiently high; and
finally, the borders of the test body are separated by a
spacelike interval.
Let us now consider \cite{me92} a measurement of the scalar
curvature averaged over a spacetime region of linear dimension
$l$, given by the resolution of the measuring device (the test
body). The action is $S=\ell_*^{-2}\int d^4x\sqrt{-g}R$, where
the integral is extended over the spacetime region under
consideration, so that it can be written as $S=\ell_*^{-2} R
l^4$, $R$ being now the average curvature. The action
uncertainty principle $\Delta S\gtrsim 1$ gives the uncertainty
relation for the curvature $\Delta R\ l^4\gtrsim \ell_*^2$,
which translates into the uncertainty relation $\Delta \Gamma\
l^3\gtrsim \ell_*^2$ for the connection $\Gamma$, or in terms of
the metric tensor, $\Delta g\ l^2\gtrsim \ell_*^2$. The left
hand side of this relation can be interpreted as the uncertainty
in the proper separation between the borders of the region that
we are measuring, so that it states the minimum position
uncertainty relation $\Delta x\gtrsim{\rm
max}(l,\ell_*^2/l)\gtrsim \ell_*$. It is worth noting that it is
the concurrence of the three fundamental constants of nature
$\hbar$, $c$ (which have already been set equal to 1), and
$\mbox{\small G}$ that leads to a resolution limit. If any of
them is dropped then this resolution limit disappears.
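Note that, whatever the size $l$ of the measured region, the larger of the two contributions is bounded from below by the Planck length, since
```latex
$$
\max\left(l,\ \frac{\ell_*^2}{l}\right)\;\geq\;
\sqrt{\,l\cdot\frac{\ell_*^2}{l}\,}\;=\;\ell_*\,,
$$
```
with equality precisely at $l=\ell_*$.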
We see from the uncertainty relation for the electromagnetic
field that an infinite accuracy can be achieved if an
appropriate test body is used. This is not the case for the
gravitational interaction. Indeed, the role of $F$ is now played
by $\Gamma/\ell_*$, where $\Gamma$ is the connection, and the
role of $q$ is played by $\ell_* m$. It is worth noting
\cite{pr60} that by virtue of the equivalence principle, active
gravitational mass, passive gravitational mass and energy (rest
mass in the Newtonian limit) are all equal, and hence, for the
gravitational interaction, the ratio $q/m$ is the universal
constant $\ell_*$. The two requirements of Bohr and Rosenfeld
are now $l\gtrsim 1/m$ and $l\gtrsim \ell_*^2 m$ so that
$l\gtrsim \ell_*$. This means that the test body should not be a
black hole, i.e., its gravitational radius should not exceed its
size, and that both its mass and linear dimensions should be
larger than Planck's mass and length, respectively. As in the
electromagnetic case, Bohr and Rosenfeld requirements can be
simply stated as follows: the test body must behave classically
from the points of view of quantum mechanics, special relativity
and gravitation. Otherwise, the interactions between the test
body and the object under study would make this distinction (the
test body on the one hand and the system under study on the
other) unclear as happens in ordinary quantum mechanics: the
measurement device must be classical or it is useless as a
measuring apparatus. In this sense, within the context of
quantum gravity, Planck's scale establishes the border between
the measuring device and the system that is being measured.
We can see that the problem of measuring the gravitational
field, i.e., the structure of spacetime, can be traced back to
the fact that any such measurement is nonlocal, i.e. the
measurement device is aware of what is happening at different
points of spacetime and takes them into account. In other words,
the measurement device averages over a spacetime region. The
equivalence principle also plays a fundamental role: the
measurement device cannot decouple from the measured system and
back reaction is unavoidable.
\subsection{Vacuum fluctuations}
One should expect not only
fluctuations of the gravitational field owing to the quantum
nature of other fields and measuring devices but also owing to
the quantum features of the gravitational field itself. As
happens for any other field, in quantum gravity there will exist
vacuum fluctuations that provide another piece of uncertainty in
the gravitational field strength. These can also be computed by means of
the action uncertainty principle. Indeed, in the above analyses,
we have only considered first order terms in the uncertainty
because it was assumed that there was a nonvanishing classical
field that we wanted to measure. However, in the case of vacuum,
the field vanishes and higher order terms are necessary. Let us
discuss this issue for the electromagnetic case first. The
uncertainty in the action can be calculated as $\Delta
S=S[F+\Delta F]-S[F]$, so that $\Delta S=\int d^4x [2 F\Delta
F+(\Delta F)^2]$. The action uncertainty principle then yields
the relation $\Delta F\ l^2 \gtrsim -Fl^2+\sqrt{(Fl^2)^2+1}$. In
the already studied limit of large electromagnetic field (or
very large regions) $Fl^2\gg 1$, the uncertainty relation for
the field becomes $\Delta F\ l^3\gtrsim 1/(Fl)\gtrsim q/m$
obtained above. On the other hand, the limit of vanishing
electromagnetic field (or very small regions of observation)
$Fl^2\ll1$ provides the vacuum fluctuations of the
electromagnetic field $\Delta F\ l^2\gtrsim 1$.
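The quadratic relation above and its two limits can be verified symbolically. The shorthand $x=Fl^2$, $y=\Delta F\,l^2$, the saturation of the action uncertainty principle at $\Delta S\sim 1$ (with $\hbar=1$), and the dropping of all factors of order 1 are conventions of this sketch, not part of the argument itself:

```python
import sympy as sp

# Shorthand (ours): x = F l^2, y = DeltaF l^2, units with hbar = 1
x = sp.symbols('x', positive=True)
y = sp.symbols('y', real=True)

# Saturating the action uncertainty principle: Delta S ~ l^4 (2 F dF + dF^2) ~ 1
roots = sp.solve(sp.Eq(2*x*y + y**2, 1), y)
sol = [s for s in roots if s.subs(x, 1) > 0][0]
assert sp.simplify(sol - (-x + sp.sqrt(x**2 + 1))) == 0

# Strong-field limit F l^2 >> 1: DeltaF l^2 ~ 1/(2 F l^2), i.e. DeltaF l^3 ~ 1/(F l)
strong = sp.limit(2*x*sol, x, sp.oo)
assert strong == 1

# Vacuum limit F l^2 -> 0: DeltaF l^2 >~ 1, the vacuum fluctuation of the field
assert sol.subs(x, 0) == 1
```

The two asserted limits are precisely the strong-field and vacuum regimes quoted in the text, up to factors of order 1.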
In the gravitational case, the situation is similar. The
gravitational action can be qualitatively written in terms of the
connection $\Gamma$ as $S=\ell_*^{-2}\int d^4x(\partial \Gamma+\Gamma^2)$
so that the uncertainty in the action has the form
\begin{equation}
\Delta S
\sim \ell_*^{-2}[\Delta \Gamma\ l^3+\Gamma\Delta \Gamma\ l^4
+ (\Delta \Gamma)^2l^4]\,.
\end{equation}
It is easy to argue that $\Gamma l$ must be at most of order 1
so that the contribution of the second term is qualitatively
equivalent to that of the first one. Indeed, $\Gamma l$
is the gravitational potential which
is given by $\Gamma l =\Gamma_{\rm ext}l(1-\Gamma
l)$, $\Gamma_{\rm ext}$ being the external gravitational field.
The last term is just an expression of the equivalence
principle, according to which, any kind of energy, including the
gravitational one, also generates a gravitational field. Thus,
$\Gamma l=\Gamma_{\rm ext}l/(1+\Gamma_{\rm
ext}l)$ which is always smaller than one. The action uncertainty
principle then implies that $\Delta \Gamma\ l^2 \gtrsim -l
+\sqrt{l^2+\ell_*^2}$ and that, in terms of the metric tensor,
\begin{equation}
\Delta g \gtrsim -1 +\sqrt{1+\ell_*^2/l^2}\,.
\end{equation}
For test bodies much larger than Planck size, i.e., for $l\gg
\ell_*$, this uncertainty relation becomes the already obtained
$\Delta g\gtrsim \ell_*^2/l^2$, valid for classical test bodies.
However, for spacetime regions of very small size, close to
Planck length ($l\lesssim \ell_*$), this uncertainty relation
acquires the form $\Delta g\gtrsim \ell_*/l$. This uncertainty
in the gravitational field comes from the vacuum fluctuations of
spacetime itself and not from the disturbances introduced by
measuring devices with $l\gg \ell_*$
\cite{mt73,wh55,wh57,wh62,wh64,wh68}. For alternative
derivations of this uncertainty relation see, e.g., Refs.
\cite{mt73,visser96}.
We then see that proper distances have an uncertainty
$\sqrt{\Delta g l^2}$ that approaches Planck length for very
small (Planck scale) separations thus suggesting that Planck
length represents a lower bound to any distance measurement. At
the Planck scale, the gravitational field uncertainty is of
order 1, i.e., the fluctuations are as large as the geometry
itself. This is indicating that the low-energy theory that we
have been using breaks down at the Planck scale and that a full
theory of quantum gravity is necessary to study such regime.
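The two regimes of the metric uncertainty bound discussed above can be checked numerically; the function below just evaluates the lower bound $\Delta g\gtrsim -1+\sqrt{1+\ell_*^2/l^2}$ in units $\ell_*=1$, a choice made for this sketch only:

```python
import math

def delta_g(l, lp=1.0):
    # Lower bound Delta g >= -1 + sqrt(1 + lp^2 / l^2) on the metric uncertainty
    return -1.0 + math.sqrt(1.0 + (lp / l) ** 2)

# Classical regime l >> lp: Delta g ~ lp^2 / (2 l^2), i.e. of order lp^2 / l^2
l = 1.0e4
assert abs(delta_g(l) / (0.5 * (1.0 / l) ** 2) - 1.0) < 1e-6

# Sub-Planckian regime l << lp: Delta g ~ lp / l, much larger than 1
l = 1.0e-4
assert abs(delta_g(l) / (1.0 / l) - 1.0) < 1e-3

# At l = lp the fluctuations are of order 1: the geometry fluctuates as much
# as its own magnitude and the low-energy description breaks down
assert 0.4 < delta_g(1.0) < 0.5
```

For $l\gg\ell_*$ the bound reproduces the classical-test-body result $\Delta g\gtrsim \ell_*^2/l^2$ up to a factor of order 1, while at the Planck scale the fluctuations become comparable to the geometry itself.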
\section{Spacetime foam}
\indent
In his work ``On the hypotheses which lie at the bases of
geometry'' \cite{riemann73}, written more than a century ago,
Riemann already noticed that ``[\ldots]. If this independence of
bodies from position does not exist, we cannot draw conclusions
from metric relations of the great, to those of the infinitely
small; in that case the curvature at each point may have an
arbitrary value in three directions, provided that the total
curvature of every measurable portion of space does not differ
sensibly from zero. Still more complicated relations may exist
if we no longer suppose the linear element expressible as the
square root of a quadratic differential. Now it seems that the
empirical notions on which the metrical determinations of space
are founded, the notion of a solid body and of a ray of light,
cease to be valid for the infinitely small. We are therefore
quite at liberty to suppose that the metric relations of space
in the infinitely small do not conform to the hypotheses of
geometry; and we ought in fact to suppose it, if we can thereby
obtain a simpler explanation of phenomena.''
In the middle of this century, Weyl \cite{we49} took these ideas
a bit further and envisaged (multiply connected) topological
structures of ever-increasing complexity as possible
constituents of the physical description of surfaces. He wrote
in this respect \cite{we49} ``A more detailed scrutiny of a
surface might disclose that what we had considered an elementary
piece in reality has tiny handles attached to it which change
the connectivity character of the piece, and that a microscope
of ever greater magnification would reveal ever new topological
complications of this type, {\it ad infinitum}.''
A few years later, Wheeler described this topological complexity
of spacetime at small length scales as the foamlike structure of
spacetime \cite{wh57}. According to Wheeler
\cite{mt73,wh55,wh57,wh62,wh64,mw57}, at the Planck scale, the
fluctuations of the geometry are so large and involve such high
energy densities that gravitational collapse should be
continuously taking place and being undone at that scale. Because of
this perpetuity and ubiquity of Planck scale gravitational
collapse, it should dominate Planck scale physics. In this
continuously changing scenario, there is no reason to believe
that spacetime topology remains fixed and predetermined. Rather,
it seems natural to accept that the topology of spacetime is
also subject to quantum fluctuations that change all its
properties. Therefore, this scenario, in which spacetime is
endowed with a foamlike structure at the Planck scale, seems to
be a natural ingredient of the yet-to-be-built quantum theory of
gravity. Furthermore, from the functional integration point of
view \cite{mi57}, in quantum gravity all histories contribute
and, among them, it seems unnatural not to consider
nontrivial topologies, just as one considers nontrivial geometries
\cite{wh55,wh57,misner60} (see, however, Ref. \cite{dewitt84}).
On the other hand, it has been shown \cite{horowitz90} that
there exist solutions to the equations of general relativity on
manifolds that present topology changes. In these solutions, the
metric is degenerate on a set of measure zero but the curvature
remains finite. This means that allowing degenerate metrics
amounts to opening a doorway to classical topology change.
Furthermore, despite the difficulties of finding an appropriate
interpretation for these degenerate metrics in the classical
Lorentzian theory, they will naturally enter the path integral
formulation of quantum gravity. This is therefore an indication
that topology change should be taken into account in any quantum
theory of gravity \cite{horowitz90} (for an alternative
description of topology change within the framework of
noncommutative geometry, see Refs. \cite{madore98,mangano98}).
Adopting a picture in which spacetime topology depends on the
scale on which one performs the observations, we would conclude
that there would be a trivial topology on large length scales
but more and more complicated topologies as we approach the
Planck scale.
Spacetime foam may have important effects in low-energy physics.
Indeed, the complicated topological structure may provide
mechanisms for explaining the vanishing of the cosmological
constant \cite{coleman88b,1ca97,2ca97} and for fixing all the
constants of nature \cite{coleman88b,ha90} (for a recent
proposal for deriving the electroweak coupling constant from
spacetime foam see Ref. \cite{rosales99}). Spacetime foam may
also induce loss of quantum coherence \cite{ha82} and may well
imply the existence of an additional source of uncertainty.
Related to this, it might produce frequency-dependent energy
shifts \cite{1ga98,2ga98,3ga98} that would slightly alter the
dispersion relations for the different low-energy fields.
Finally, spacetime foam has been proposed as a mechanism for
regulating both the ultraviolet \cite{crane86} (see also Refs.
\cite{de57,le83,pa85,pa88}) and the infrared \cite{magnon88}
behavior of quantum field theory.
It is well-known that it is not possible to classify all
four-dimensional topologies \cite{ma58,ha78} and, consequently,
all the possible components of spacetime foam.
With the purpose of exemplifying the richness and complexity
of the vacuum of quantum gravity, in what follows, we will
briefly discuss a few different kinds of fluctuations encompassed by
spacetime foam, where the
word fluctuations will just denote spacetime configurations that
contribute most to the gravitational path integral \cite{wh57}:
simply connected nontrivial topologies, multiply connected
topologies with trivial second homology group (i.e. with
vanishing second Betti number), spacetimes with a
nontrivial causal structure, i.e. with closed timelike curves,
in a bounded region, and, finally, nonorientable tunnels.
Hawking \cite{ha78} argued that the dominant contribution to the
quantum gravitational path integral over metrics and topologies
should come from topologies whose Euler characteristic
$\chi_{\scriptscriptstyle\rm E}$ was approximately given by the spacetime
volume in Planck units, i.e., from topologies with
$\chi_{\scriptscriptstyle\rm E}\sim (l/\ell_*)^4$. In this analysis, he
restricted himself to compact simply-connected manifolds with negative
cosmological constant $\lambda$. The choice of compact manifolds
obeys a normalization condition similar to introducing a box
of finite volume in nongravitational physics. The cosmological
constant is introduced for this purpose, and it is taken negative
because saddle-point Euclidean metrics with high Euler
characteristic and positive $\lambda$ do not seem to exist, so
that positive-$\lambda$ configurations will not contribute
significantly to the Euclidean path integral. Finally, simple
connectedness can be justified by noting that multiply-connected
compact manifolds can be unwrapped by going to the universal
covering manifold which, although noncompact, can be made
compact with little cost in the action. He then concluded that,
among these manifolds, the dominant topology is $S^2\times S^2$
\cite{ha96} which has an associated second Betti number
$B_2=\chi_{\scriptscriptstyle\rm E}-2=2$. These results are based on the
semiclassical approximation and, as such, should be treated with
some caution.
Compact simply-connected bubbles with the topology $S^2\times
S^2$ can be interpreted as closed loops of virtual black holes
\cite{ha96} if one realizes \cite{gi86} that the process of
creation of a pair of real charged black holes accelerating away
from each other in a spacetime which is asymptotic to $\Re^4$ is
provided by the Ernst solution \cite{er76}. This solution has
the topology $S^2\times S^2$ minus a point (which is sent to
infinity) and this topology is the topological sum of the bubble
$S^2\times S^2$ plus $\Re^4$. Virtual black holes will not obey
classical equations of motion but will appear as quantum
fluctuations of spacetime and thus will become part of the
spacetime foam. As a consequence, one can conclude that the
dominant contribution to the path integral over compact
simply-connected topologies would be given by a gas of virtual
black holes with a density of the order of one virtual black
hole per Planck volume.
A similar analysis within the context of quantum conformal
gravity has been performed by Strominger \cite{st84} with the
conclusion that the quantum gravitational vacuum indeed has a
very involved structure at the Planck scale, with a
proliferation of nontrivial compact topologies.
Carlip \cite{1ca97,2ca97} has studied the influence of the
cosmological constant $\lambda$ on the sum over topologies. It
should be stressed that this cosmological constant is not
related to the observed cosmological constant \cite{ha78}.
Rather, it is introduced as a source term of the form
$\ell_*^{-2}\lambda V$, where $V$ is the spacetime volume, added
to the vacuum gravitational action $\ell_*^{-2}\int R\sqrt g$.
In the semiclassical approximation, this sum is dominated by the
saddle-points, which are Einstein metrics. The classical
Euclidean action for these metrics has the form $\tilde
v/(\ell_*^2\lambda)$, where, up to irrelevant numerical factors,
$\tilde v=\lambda^2V$ is the normalized spacetime volume of the
manifold and is independent of $\lambda$. In fact, $\tilde v$
characterizes the topology of the manifold. For instance, for
hyperbolic manifolds it can be identified with the Euler
characteristic. Carlip has shown that, in the semiclassical
approximation, the behavior of the density of topologies, which
counts the number of manifolds with a given value for $\tilde
v$, crucially depends on the sign of the cosmological constant.
For negative values of $\lambda$, the partition function
receives relevant contributions from spacetimes with arbitrarily
complicated topology, so that processes that could be expected
to contribute to the vacuum energy might produce more and more
complicated spacetime topologies, as we briefly discuss in what
follows, thus providing a mechanism for the vanishing of the
cosmological constant. The Euclidean path integral in the
semiclassical approximation can be written as
\begin{equation}
Z[\lambda] =\sum_{\tilde{v}}\rho(\tilde{v}) e^{\tilde{v}
/(\ell_*^{2}\lambda)}\,,
\end{equation}
where $\rho(\tilde{v})$ is a density of topologies. It can be
argued that for negative $\lambda$, the density of topologies
$\rho(\tilde{v})$ grows with the topological complexity $\tilde
v$ at least as $\rho(\tilde{v})\gtrsim
\exp(\tilde{v}\ln \tilde{v})$, i.e., it is
superexponential \cite{1ca97,2ca97}. Then, after introducing an
infrared cutoff to ensure the convergence of the sum above, the
topologies that will contribute most to $Z[\lambda]$ will lie
around some maximum value of the topological complexity
$\tilde{v}_{\rm max}$. The true cosmological constant $\Lambda$,
obtained from the microcanonical ensemble, is in this case
\begin{equation}
-\frac{1}{\Lambda
\ell_*^2}=\left.\frac{\partial\ln\rho(\tilde{v})}
{\partial\tilde{v}}\right|_{\tilde{v}_{\rm max}}\gtrsim
1+\ln\tilde{v}_{\rm max}
\end{equation}
and the ``topological capacity''
\begin{equation}
c_V=-\frac{1}{\Lambda^2
\ell_*^4}\left.\left(\frac{\partial^2\ln\rho(\tilde{v})}
{\partial\tilde{v}^2}\right)^{-1}\right|_{\tilde{v}_{\rm
max}}=-\ell_*^{-2}\left(
\frac{\partial\Lambda}{\partial\tilde v_{\rm max}}\right)^{-1}
\lesssim -\tilde{v}_{\rm max}(1+\ln\tilde{v}_{\rm max})\,,
\end{equation}
where these quantities have been defined by analogy with the
thermodynamical temperature and heat capacity, respectively. In
this analogy, $-\Lambda$ plays the role of temperature while the
topological complexity $\tilde v$ is analogous to the energy.
According to this picture, the behavior of spacetime foam would
be analogous to a thermodynamical system with negative heat
capacity, in which, as we put energy into the system, a greater
and greater proportion of it is employed in the exponential
production of new states rather than in increasing the energy of
already existing states. Similarly, since the topological
capacity is negative, which is a consequence of the
superexponential density of topologies, the microcanonical
cosmological constant will approach a vanishing value as the
maximum topological complexity $\tilde{v}_{\rm max}$ approaches
infinity. We then see that this process, which could be expected
to increase the vacuum energy $|\Lambda|$, actually contributes
to decrease it, until it approaches the smallest value
$|\Lambda|=0$. The case of positive $\lambda$ presents a
different behavior. The topological complexity has a finite
maximum value, namely, that of the four-sphere $\tilde v_{\rm
max}=\chi_{\scriptscriptstyle\rm E}^{\rm max}=2$ and the density of topologies
$\rho(\tilde{v})$ increases as $\tilde{v}$ decreases.
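Carlip's mechanism can be illustrated with the minimal model density $\rho(\tilde v)=\exp(\tilde v\ln\tilde v)$, i.e., the superexponential lower bound taken as saturated; this particular choice, and the units $\ell_*=1$, are illustrative assumptions of the sketch below:

```python
import sympy as sp

v = sp.symbols('v', positive=True)   # topological complexity (normalized volume)
lnrho = v * sp.log(v)                # model density: rho ~ exp(v ln v)

# Microcanonical cosmological constant: -1/(Lambda lp^2) = d ln(rho)/dv = 1 + ln(v)
dlnrho = sp.diff(lnrho, v)
assert sp.simplify(dlnrho - (1 + sp.log(v))) == 0
Lambda = -1 / dlnrho                 # in units lp = 1

# Lambda is negative and approaches zero as v_max -> infinity
assert Lambda.subs(v, 100).evalf() < 0
assert sp.limit(Lambda, v, sp.oo) == 0

# Topological capacity c_V = -(dLambda/dv)^{-1} = -v (1 + ln v)^2 < 0
cV = -1 / sp.diff(Lambda, v)
assert sp.simplify(cV + v * (1 + sp.log(v))**2) == 0
assert cV.subs(v, 100).evalf() < 0
```

For this model density the capacity $c_V=-\tilde v(1+\ln\tilde v)^2$ indeed satisfies the bound $c_V\lesssim -\tilde v_{\rm max}(1+\ln\tilde v_{\rm max})$ quoted in the text, and $\Lambda\to 0^-$ as the maximum complexity grows without bound.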
The superexponential lower bound to the density of topologies
given above receives the main contribution from multiply
connected manifolds, among which, Euclidean wormholes
\cite{ha90b,ha90c} have deserved much attention during the last
decade (see, e.g., Ref. \cite{barcelo98}). Wormholes are four-dimensional
spacetime handles that have vanishing second Betti number, while
the first Betti number provides the number of handles. They
were regarded as a possible mechanism for the complete
evaporation of black holes
\cite{ha87,polchinski94,gonzalez91c,cavaglia96}. An evaporating
black hole would have a wormhole attached to it and this
wormhole would transport the information that had fallen into
the black hole to another, quite possibly far away, region of
spacetime. More recently, Hawking \cite{ha96} has proposed an
alternative scenario in which black holes, at the end of their
evaporation process, will have a very small size and will
eventually dilute in the sea of virtual black holes that form
part of spacetime foam. Wormholes also constitute the main
ingredient in Coleman's proposal for explaining the vanishing of
the cosmological constant and for fixing all the constants of
nature \cite{coleman88b,ha90} (see also Ref. \cite{unruh89}).
Wormholes have been studied in the so-called dilute gas
approximation in which wormhole ends are far apart from each
other. It should be noted, however, that, although the semiclassical
approximation probably ceases to be valid at the Planck scale,
it gives a clear indication that one should expect a topological
density of one wormhole per unit four-volume, i.e., the first
Betti number $B_1$ should be approximately equal to the
spacetime volume $B_1\sim V$ at the Planck scale. Multiply
connected topology fluctuations may suffer instabilities
against uncontrolled growth both in Euclidean quantum gravity
\cite{klebanov89,fischler89,polchinski89} (see however Ref.
\cite{coleman89}) and in the Lorentzian sector
\cite{redmount93,redmount94}. These instabilities might place
serious limitations on the kind of multiply connected topologies
encompassed by spacetime foam.
One should also expect other configurations with nontrivial
causal structure to contribute to spacetime foam. For instance,
quantum time machines \cite{go97,go98} have recently been
proposed as possible components of spacetime foam. From the
semiclassical point of view, most of the hitherto proposed time
machines \cite{1mt88,2mt88} are unstable because quantum vacuum
fluctuations generate divergences in the stress-energy tensor,
i.e., are subject to the chronology protection conjecture
\cite{ha92,cassidy98} (for a beautiful and detailed report on
time machines see Ref. \cite{visser96}). However, quantum time
machines \cite{go97,go98} confined to small spacetime regions,
for which the chronology protection conjecture does not apply
\cite{lg98}, are likely to occur within the realm of spacetime
foam, where strong causality violations or even the absence of a
causal structure are expected. We have in fact argued that the
spacetime metric undergoes quantum fluctuations of order 1 at
the Planck scale. Since the slope of the light cone is
determined by the speed of light obtained from
$ds^2=g_{\mu\nu}dx^\mu dx^\nu=0$, the uncertainty in the metric
will also introduce an uncertainty in the slope of the light
cone of order 1 at the Planck scale so that the notion of
causality is completely lost.
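The last statement can be made quantitative with a toy perturbed line element $ds^2=-(1+\Delta g)\,dt^2+dx^2$, a hypothetical one-dimensional illustration rather than any solution discussed in the text: the null condition gives a light-cone slope $dt/dx=1/\sqrt{1+\Delta g}$, which shifts by order 1 when $\Delta g\sim 1$:

```python
import math

def cone_slope(delta_g):
    # Null condition for ds^2 = -(1 + delta_g) dt^2 + dx^2 = 0  ->  slope dt/dx
    return 1.0 / math.sqrt(1.0 + delta_g)

# Large regions, l >> lp: the metric uncertainty is tiny and so is the slope shift
assert abs(cone_slope(1.0e-6) - 1.0) < 1.0e-6

# Planck scale, delta_g ~ 1: the light-cone slope shifts by order 1, so
# which events are causally connected becomes ambiguous
assert abs(cone_slope(1.0) - 1.0) > 0.25
```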
As happens with the causal structure, orientability is likely to
be lost at the Planck scale \cite{friedman88,gonzalez98}, where
the lack of an arbitrarily high resolution would blur the
distinction between the two sides of any surface. Therefore,
nonorientable topologies can be regarded as additional
configurations that may well be present in spacetime foam and
thus contribute to the vacuum structure of quantum gravity.
Indeed, quantum mechanically stable nonorientable spacetime
tunnels that connect two asymptotically flat regions with the
topology of a Klein bottle can be constructed \cite{gonzalez98}
as a generalization of modified Misner space \cite{go97,go98}.
The presence of quantum time machines or nonorientable tunnels
in spacetime amounts to the existence of Planck-size regions in
which violations of the weak energy condition occur. Although
from the classical point of view, the weak energy condition
seems to be preserved, it is well-known (see, e.g., Ref.
\cite{visser96}) that quantum effects may well involve such
exotic types of energy.
\section{Loss of quantum coherence}
\indent
The quantum structure of spacetime would be relevant at energies
close to Planck scale and one could expect that the quantum
gravitational virtual processes that constitute the spacetime
foam could not be described without knowing the details of the
theory of quantum gravity. However, the gravitational nature of
spacetime fluctuations provides a mechanism for studying the
effects of these virtual processes in the low-energy physics.
Indeed, virtual gravitational collapse and topology change would
forbid a proper definition of time at the Planck scale. More
explicitly, in the presence of horizons, closed timelike curves,
topology changes, etc., any Hamiltonian vector field that
represents time evolution outside the fluctuation would vanish
at points inside the fluctuation. This means that it would not
be possible to describe the evolution by means of a Hamiltonian
unitary flow from an initial to a final state and, consequently,
quantum coherence would be lost. These effects and their order
of magnitude would not depend on the detailed structure of the
fluctuations but rather on their existence and global
properties. In general, the regions in which the asymptotically
timelike Hamiltonian vector fields vanish are associated with
infinite redshift surfaces and, consequently, these small
spacetime regions would behave as magnifiers of Planck length
scales transforming them into low-energy modes as seen from
outside the fluctuations \cite{1pa98,2pa98}. Therefore,
spacetime foam and the related lower bound to spacetime
uncertainties would leave their imprint, which may not be too
small, in low-energy physics and low-energy experiments would
effectively suffer a nonvanishing uncertainty coming from this
lack of resolution in spacetime measurements. In this situation,
loss of quantum coherence would be almost unavoidable
\cite{ha82}.
The idea that the quantum gravitational fluctuations contained
in spacetime foam could lead to a loss of quantum coherence was
put forward by Hawking and collaborators \cite{ha82,hp79,hp80}.
This proposal was based in part on the thermal character of the
emission predicted for evaporating black holes
\cite{hawking75,wald75,hawking76}. If loss of coherence occurs
in macroscopic black holes, it seems reasonable to conclude that
the small black holes that are continuously being created and
annihilated everywhere within spacetime foam will also induce
loss of quantum coherence \cite{ha82,hawking76}. On the other
hand, the scattering of low-energy fields by
topologically nontrivial configurations ($S^2\times S^2$, $K^3$
and $CP^2$ bubbles) leads to the conclusion that pure states turn
into a partly incoherent mixture upon evolution in these
nontrivial backgrounds under certain simplifying assumptions
and, consequently, that quantum coherence is lost.
They made explicit calculations for specific asymptotically flat
spacetimes with nontrivial simply-connected topologies
\cite{ha82,ha96,hp79,hp80,hawking84,warner82,hr97} or causal
structure \cite{ha95} which showed that it was not possible to
separate the complex-time graphs for the obtained Lorentzian
Green functions into two disconnected parts. More explicitly,
the Euclidean Green functions obtained in these backgrounds mix
positive and negative frequencies when the analytic continuation
to Lorentzian signature is performed, since the Green functions
develop extra acausal singularities. This situation is analogous
to that in black hole physics where Lorentzian Green functions
show periodic poles in imaginary time \cite{hp80}. Although
these calculations were performed in a finite dimensional
approximation to metrics of given topology, the contributions of
these extra singularities can be determined by dimensional
analysis and therefore they seem to be characteristic of each
topology and hold for any metric in them \cite{hp80}. In
contrast, Gross \cite{gross84} calculated scattering amplitudes
in specific four-dimensional solutions that can be interpreted
as three-dimensional Kaluza-Klein instantons and concluded that
there was no loss of quantum coherence in such models. Hawking
\cite{hawking84} in turn replied to this criticism that the
solutions used by Gross were special cases in the sense that the
associated three-dimensional Kaluza-Klein instantons were flat
and therefore topologically trivial. He further argued, with
examples, that solutions with topologically nontrivial
three-dimensional instantons can be constructed and that they lead
to a nonunitary evolution.
\subsection{Superscattering operator}
Let us consider a scattering
process in an asymptotically flat spacetime with nontrivial
topology. If we denote the density matrices at the far past and
far future by $\rho_-$ and $\rho_+$, respectively, there will be
a superscattering operator $\$ $ that relates both of them
$\rho_+=\$\cdot\rho_-$, i.e., that provides the evolution
between the two asymptotically flat regions across the
nontrivial topology fluctuation \cite{ha82}. Let $|0_\pm\rangle$
represent the vacuum at each region and $\{ |A_\pm\rangle\}$ a
basis of the Fock space, so that we can write
$|A_\pm\rangle=\Upsilon_{\pm A}^\dag |0_\pm\rangle$, where
$\Upsilon^A$ is a string of annihilation operators and,
consequently, $\Upsilon^\dag_A$ is a string of creation
operators. The density matrices $\rho_\pm$ can then be written
as
\begin{equation}
\rho_\pm=\sum_{AB}\rho_{\pm\;B}^{\;A}|A_\pm\rangle\langle B_\pm|=
\rho_{\pm\;B}^{\;A}\Upsilon_{\pm A}^\dag|0_\pm\rangle\langle
0_\pm|\Upsilon^B_\pm\,,
\end{equation}
where a sum over repeated indices is assumed.
The density matrices at both asymptotic regions can then be
related by noting that the density matrix at the far future
$\rho_+$ is given by the expectation values in the far-past
state $\rho_-$ of a complete set of future operators built out
of creation and annihilation operators, namely,
\begin{equation}
\rho_{+\;D}^{\;C}={\rm tr}(\Upsilon_{+D}^\dag \Upsilon^C_+\rho_-)=
\rho_{-\;B}^{\;A}\langle 0_-|\Upsilon^B_-\Upsilon_{+D}^\dag
\Upsilon^C_+\Upsilon_{-A}^\dag|0_-\rangle\,.
\end{equation}
Therefore, the superscattering matrix
$\$^{C\;\;\;\;\;B}_{\;\;DA}\equiv
\langle 0_-|\Upsilon^B_-\Upsilon_{+D}^\dag \Upsilon^C_+
\Upsilon_{-A}^\dag|0_-\rangle$, relates the density
matrices in both asymptotic regions, i.e., $\rho_{+\;D}^{\;C}=
\$^{C\;\;\;\;\;B}_{\;\;DA} \rho_{-\;B}^{\;A}$.
Note that the superscattering matrix
$\$^{C\;\;\;\;\;B}_{\;\;DA}$ is Hermitian in both pairs of
indices $CD$ and $AB$ to ensure that the Hermiticity of the
density matrix is preserved. Also, the conservation of
probability, i.e., ${\rm tr}(\rho_\pm)=1$, implies that
$\$^{C\;\;\;\;\;B}_{\;\;CA}=\delta_A^{\;\;B}$.
The relation between this superscattering operator and the Green
functions discussed above is easily obtained if we write the
annihilation operators $a_\pm(k)$ that form $\Upsilon^A_\pm$ at
each asymptotic region in terms of the corresponding field
operators. For instance, in the case of a complex scalar field,
this expression (up to numerical normalization factors) has the
well-known form
\begin{equation}
a_\pm(k)=-i\int_{\Sigma_\pm} d\Sigma^\mu(x)
e^{-ikx}\stackrel{\leftrightarrow}{\nabla}_\mu\phi(x)\,,
\end{equation}
where $\Sigma_\pm$ represent spacelike surfaces in the infinite
past and future.
We now introduce the identity operator $1=\sum_n
|n\rangle\langle n|$, with $|n\rangle$ being energy eigenstates,
in the expression for $\$$
\begin{equation}
\$^{C\;\;\;\;\;B}_{\;\;DA}= \sum_n \langle
0_-|\Upsilon^B_-\Upsilon_{+D}^\dag
|n\rangle\langle n|\Upsilon^C_+
\Upsilon_{-A}^\dag|0_-\rangle
\end{equation}
and note that the only state that can contribute, if energy is to
be conserved, is that with zero energy, $n=0$. If
spacetime is globally hyperbolic, so that asymptotic
completeness holds, there is a one-to-one map between states at
any spacetime region, in particular, between the vacua
$|0\rangle$ and $|0_+\rangle$. Therefore, the only contribution
from $1=\sum_n |n\rangle\langle n|$ can be regarded as coming
from $|0_+\rangle\langle 0_+|$:
\begin{equation}
\$^{C\;\;\;\;\;B}_{\;\;DA}= \langle
0_-|\Upsilon^B_-\Upsilon_{+D}^\dag
|0_+\rangle\langle 0_+|\Upsilon^C_+
\Upsilon_{-A}^\dag|0_-\rangle\,.
\end{equation}
In this case, the superscattering operator factorizes into two
unitary factors:
\begin{equation}
\$^{C\;\;\;\;\;B}_{\;\;DA}=S^C_{\;\;A} S^{*\;\;B}_{\;D}\,,
\end{equation}
with $S^C_{\;\;A}=\langle 0_+|\Upsilon^C_+
\Upsilon_{-A}^\dag|0_-\rangle =\langle
C_+|A_-\rangle$. Note that the scattering matrix $S$ is indeed
unitary, i.e., $S^C_{\;\;A} S^{*\;\;B}_{\;C}=
\sum_C\langle B_-|C_+\rangle\langle C_+|A_-\rangle
= \delta_A^{\;\;B}$ by virtue of the condition of conservation
of probability. The factorizability of the superscattering
operator $\$$ always implies unitary evolution for the density
matrix. Indeed, if the superscattering operator can be
factorized as $\$\cdot\rho=S\rho S^\dag$ for some scattering
operator $S$, then conservation of probability, which amounts to
requiring that ${\rm tr}(\$\cdot\rho)=1$ provided that ${\rm
tr}(\rho)=1$, implies that
\begin{equation}
1={\rm tr}(\$\cdot\rho)={\rm tr}(S\rho S^\dag)={\rm tr}(\rho
S^\dag S)
\end{equation}
and therefore $S^\dag S=1$, i.e., the scattering operator $S$ is
unitary. In this case, the operator $\$$ also implies a
unitary evolution for the density matrix since it preserves
${\rm tr}(\rho^2)$:
\begin{equation}
{\rm tr} (\rho_+^2)={\rm tr}[(\$ \cdot\rho_-)
(\$\cdot\rho_-)]={\rm tr} (S\rho_-S^\dag S\rho_- S^\dag)= {\rm
tr}(S\rho_-^2 S^\dag)={\rm tr}(\rho_-^2)\,.
\end{equation}
If, on the other hand, we cannot guarantee that states at
different spacetime regions are one-to-one related, then the
zero energy state $|0\rangle$ will not correspond in general to
the zero energy state $|0_+\rangle$ and the superscattering
operator will not admit a factorized form: $\$\cdot\rho\neq
S\rho S^\dag$. When the superscattering operator does not
satisfy the factorization condition, the evolution does not
preserve ${\rm tr}(\rho^2)$ in general and quantum coherence is
lost. This can be seen explicitly in the analysis below.
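Before turning to that analysis, the contrast between factorizable and nonfactorizable evolutions can already be seen in a small numerical experiment; the particular two-level density matrix and the choice of dephasing in a preferred basis as a stand-in for a nonfactorizable $\$ $ are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def purity(r):
    # tr(rho^2): equals 1 for pure states, smaller for mixed ones
    return np.trace(r @ r).real

# A partly coherent two-level density matrix (Hermitian, trace 1)
rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)

# Factorizable case: $.rho = S rho S^dag with S = exp(iH) unitary
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.exp(1j * w)) @ V.conj().T
rho_fact = S @ rho @ S.conj().T
assert abs(np.trace(rho_fact).real - 1.0) < 1e-10   # probability conserved
assert abs(purity(rho_fact) - purity(rho)) < 1e-10  # tr(rho^2) preserved

# Nonfactorizable stand-in: dephasing in a preferred basis (off-diagonals lost)
rho_nonfact = np.diag(np.diag(rho))
assert abs(np.trace(rho_nonfact).real - 1.0) < 1e-10  # still trace preserving
assert purity(rho_nonfact) < purity(rho)              # but coherence is lost
```

Both maps preserve the trace, i.e., probability, but only the factorized one preserves ${\rm tr}(\rho^2)$; the dephasing map turns the partly coherent state into a more mixed one.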
\subsection{Quasilocal superscattering}
Let us assume that the dynamics that underlies a superscattering
operator $\$$ is quasilocal. By quasilocal we mean that any
possible effect leading to a nonfactorizable superscattering
operator is confined to a spacetime region whose size $r$ is
much smaller than the characteristic spacetime size $l$ of the
low-energy fields, i.e., we will assume that $r/l\ll 1$. Then,
the superscattering equation $\rho_+=\$ \cdot
\rho_-$ can be obtained by integrating a differential equation
of the form $\dot\rho(t)=L(t)\cdot
\rho(t)$, where $L(t)$ is a linear operator \cite{el84}.
Furthermore, it can be shown that $L(t)$ can be generally
written as \cite{bs84}
\begin{eqnarray}
L\cdot\rho\!\!\!&=&\!\!\!-i\big[ H_0,\rho\big]-\frac{1}{2}
h_{\alpha\beta} (Q^\beta Q^\alpha\rho+\rho Q^\beta Q^\alpha-2
Q^\alpha\rho Q^\beta)
\nonumber\\
\!\!\!&=&\!\!\!-i\big[ H_0,\rho\big]-\frac{i}{2}{\rm Im
}(h_{\alpha\beta})\big[Q^\alpha,\big[Q^\beta, \rho\big]_+\big]
-\frac{1}{2}{\rm
Re}(h_{\alpha\beta})\big[Q^\alpha,\big[Q^\beta,\rho\big]\big]\,,
\end{eqnarray}
where $H_0$ and $Q^\alpha$ form a complete set of Hermitian
matrices, $Q^\alpha$ have been chosen to be orthogonal, i.e.,
${\rm tr}(Q^\alpha Q^\beta)=\delta^{\alpha\beta}$, and
$h_{\alpha\beta}$ is a Hermitian matrix. A sufficient, but not
necessary, condition for having a decreasing value of ${\rm
tr}(\rho^2)$ and, consequently, loss of coherence is that
$h_{\alpha\beta}$ be real and positive. As a simple example, we
can consider the case in which we have only one operator $Q$.
Then,
\begin{equation}
\frac{d}{dt}{\rm tr}(\rho^2)=-{\rm tr}(\rho^2Q^2-\rho Q\rho Q)\,.
\end{equation}
If we diagonalize the density matrix and denote by
$\{|i\rangle\}$ the preferred basis in which $\rho$ is diagonal,
so that $\rho=\sum_i p_i|i\rangle\langle i|$, this equation becomes
\begin{equation}
\frac{d}{dt}{\rm tr}(\rho^2)=-\sum_{ij} p_i |Q_{ij}|^2(p_i-p_j)
= -\sum_{i>j}|Q_{ij}|^2(p_i-p_j)^2\,,
\end{equation}
where $Q_{ij}=\langle i|Q|j\rangle$. We then see that, provided
that $Q$ is not diagonal in the basis $\{|i\rangle\}$,
$\frac{d}{dt}{\rm tr}(\rho^2)<0$, except for very specific
states, such as the obvious case $p_i=p_j$ for all $i,j$, which
has maximum entropy.
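The identity above can be checked numerically. The following sketch (our own illustration; the probabilities $p_i$ and the Hermitian operator $Q$ are randomly generated and not tied to any physical model) verifies that ${\rm tr}(\rho^2Q^2-\rho Q\rho Q)=\sum_{i>j}|Q_{ij}|^2(p_i-p_j)^2\geq 0$ for a density matrix diagonal in the chosen basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random probabilities p_i and a density matrix diagonal in this basis
p = rng.random(n)
p /= p.sum()
rho = np.diag(p).astype(complex)
# random Hermitian operator Q, generically non-diagonal in that basis
A = rng.random((n, n)) + 1j * rng.random((n, n))
Q = (A + A.conj().T) / 2

lhs = np.trace(rho @ rho @ Q @ Q - rho @ Q @ rho @ Q).real
rhs = sum(abs(Q[i, j])**2 * (p[i] - p[j])**2
          for i in range(n) for j in range(i))
assert np.isclose(lhs, rhs)   # the two expressions for the purity loss agree
assert lhs >= 0               # so tr(rho^2) cannot increase
```

The second assertion makes the monotonic loss of coherence explicit: the right-hand side is a sum of squares, so it can only vanish when $Q$ is diagonal in the preferred basis or when the populations are degenerate.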
There has been an interesting debate on the possible violations
of energy and momentum conservation or locality in processes
that do not lead to a factorizable $\$$ matrix. According to
Gross \cite{gross84} and Ellis {\it et al.} \cite{el84}, a
nonfactorizable $\$$ matrix allows for continuous symmetries
whose associated generators are not conserved. In other words,
``invariance principles are no longer equivalent to conservation
laws'' \cite{el84}.
Let us illustrate this issue with the simple example \cite{el84}
of two spin-1/2 particles in a state described by the density
matrix $\rho_-=\frac{1}{4}(1-\vec s_1\cdot\vec s_2)$, where $\vec
s_{1,2}$ are the spin vectors of particles 1 and 2,
respectively. This density matrix represents a pure state since
${\rm tr}(\rho_-^2)=1$. In fact, the two particles are in a
rotationally invariant pure state with vanishing total spin.
Assume that the final state can be obtained by a superscattering
operator $\$$. Then, $\rho_+=\$\cdot\rho_-$ must have the
form $\rho_+=\frac{1}{4}(1-\beta\vec s_1\cdot\vec s_2)$ for it to
conserve probability, ${\rm tr}(\rho_+)=1$, and be rotationally
invariant. Furthermore, since ${\rm tr}(\rho_+^2)=
(1+3\beta^2)/4\leq1$, we must have $\beta^2\leq 1$, the equality
holding only when $\rho_+$ is a pure state. The initial state is
such that ${\rm tr}[(\vec s_1+ \vec s_2)^2\rho_-]=-1$, which
means that, in any given direction, there is initially a perfect
anticorrelation between the spin of the two particles, so that
the total spin vanishes. However, for the final state, ${\rm
tr}[(\vec s_1+\vec s_2)^2\rho_+]=1-\beta$. We then see that,
despite the rotational invariance of the states and the
evolution, we will not obtain total anticorrelation in the final
state and, hence, spin conservation, unless $\beta=1$, i.e.,
unless quantum coherence is preserved.
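The structure of this example can be verified directly. In the sketch below we adopt Pauli-matrix conventions for the spin vectors (a choice of ours; with it the total-spin expectation value comes out as $6(1-\beta)$ rather than $1-\beta$, but the key feature, its vanishing only for $\beta=1$, is convention independent), and we check the purity formula ${\rm tr}(\rho_+^2)=(1+3\beta^2)/4$:

```python
import numpy as np

# Pauli matrices for each spin-1/2 particle
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
s1 = [np.kron(s, I2) for s in (sx, sy, sz)]
s2 = [np.kron(I2, s) for s in (sx, sy, sz)]
s1s2 = sum(a @ b for a, b in zip(s1, s2))
# total-spin-squared operator (s1 + s2)^2
S2 = sum((a + b) @ (a + b) for a, b in zip(s1, s2))

def rho(beta):
    return (np.eye(4) - beta * s1s2) / 4

# the initial state (beta = 1) is the pure singlet: tr(rho^2) = 1
assert np.isclose(np.trace(rho(1) @ rho(1)).real, 1)

for beta in (1.0, 0.5, 0.0):
    r = rho(beta)
    # purity formula tr(rho^2) = (1 + 3 beta^2)/4
    assert np.isclose(np.trace(r @ r).real, (1 + 3 * beta**2) / 4)
    # total-spin expectation vanishes only for beta = 1
    assert np.isclose(np.trace(S2 @ r).real, 6 * (1 - beta))
```

As the assertions show, any loss of purity ($\beta<1$) necessarily spoils the perfect spin anticorrelation even though every state involved is rotationally invariant.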
In particular, these authors \cite{gross84,el84} argued that
energy and momentum conservation does not follow from Poincar\'{e}
invariance. However, energy and momentum conservation is a
consequence of the field equations in the asymptotic regions
\cite{hawking84}. This issue also arises when the evolution of
the density matrix is obtained by a differential equation whose
integral leads to a nonfactorizable $\$$ operator. If this
equation is assumed to be local on scales a bit larger than
Planck length, then there appears a conflict between this
pretended locality on the one hand and energy and momentum
conservation on the other \cite{bs84}. This violation of energy
and momentum conservation comes from the high-energy modes,
whose characteristic evolution times are of the same order as the
size of the nontrivial topology region. Again, the existence of
asymptotic regions would enforce this conservation and this can
be effectively achieved if the propagating fields are regarded
as low-energy ones and, therefore, with characteristic size $l$
much larger than the size $r$ of the fluctuation. Furthermore, Unruh
and Wald \cite{uw95} analyzed simple non-Markovian toy models
that lose quantum coherence and argued that conservation of
energy and momentum need not be in conflict with causality and
locality, in contrast with the claims of Ref. \cite{bs84} (see
also Refs. \cite{srednicki93,liu93}). Therefore, these topology
fluctuations can be regarded as nonlocal in the length scale
$r$, since, within this scale, the unitary $S$-matrix diagrams
will be mixed (thus leading to a nonfactorizable $\$$ matrix),
while from the low-energy point of view, the fluctuations are
confined in a very small region so that they can be described as
local effective interactions in a master differential equation
as above. This relation will be the subject of the next two
sections.
\section{Quantum bath}
\indent
Spacetime foam contains, according to the scenario above, highly
nontrivial topological or causal configurations, which will
introduce additional features in the description of the
evolution of low-energy fields as compared with topologically
trivial, globally hyperbolic manifolds. The analogy with fields
propagating in a finite-temperature environment is compelling.
Actually, despite the different conceptual and physical origin
of the fluctuations, we will see that the effects of these two
systems are not that different.
In order to build an effective theory that accounts for the
propagation of low-energy fields in a foamlike spacetime, we
will replace the spacetime foam, in which a minimum length may
exist because the notion of distance is not valid at such
scales, with a fixed background with low-energy fields living
on it. We will perform a 3+1 foliation of the effective
spacetime that, for simplicity, will be regarded as flat, $t$
denoting the time parameter and $x$ the spatial coordinates. The
gravitational fluctuations and the minimum length present in the
original spacetime foam will be modeled by means of nonlocal
interactions that relate spacetime points that are sufficiently
close in the effective background, where a well-defined notion
of distance exists \cite{1ga98,2ga98,3ga98} (for related ideas
see also Refs. \cite{martin98,martin98b} and for a review on
stochastic gravity see Ref. \cite{hu99}). Furthermore, these
nonlocal interactions will be described in terms of local
interactions as follows. Let $\{h_i[\phi;t]\}$ be a basis of
local gauge-invariant interactions at the spacetime point
$(x,t)$ made out of factors of the form
$\ell_*^{2n(1+s)-4}\left[\phi(x,t)\right]^{2n}$, $\phi$ being
the low-energy field strength of spin $s$. As a notational
convention, each index $i$ implies a dependence on the spatial
position $x$ by default; whenever the index $i$ does not carry
an implicit spatial dependence, it will appear underlined
${\underline{i}}$. Also, any contraction of indices (except for
underlined ones) will entail an integral over spatial positions.
\subsection{Influence functional}
The low-energy density matrix
$\rho[\phi,\varphi;t]$ at time $t$ in the field
representation can be generally related to the density matrix at
$t=0$ by
\begin{equation}
\rho[\phi,\varphi; t]=\int D\phi' D\varphi'
\$[\phi,\varphi;t|\phi',\varphi';0] \rho[\phi',\varphi';0]\,,
\end{equation}
which we will write in the compact form $\rho(t)=\$(t)\cdot
\rho(0)$. Here $\$(t)$ is the propagator for the density matrix
and $D\phi\equiv\prod_x d\phi(x,t)$. This propagator has the form
\begin{equation}
\$[\phi,\varphi;t|\phi',\varphi';0]=\int {\cal D}\phi {\cal
D}\varphi e^{i\{S_0[\phi;t]-S_0[\varphi;t]\}}{\cal
F}[\phi,\varphi;t]\,,
\end{equation}
where ${\cal F}[\phi,\varphi;t]$ is the so-called influence
functional \cite{fv63,fh65,ca83}, ${\cal D}\phi\equiv\prod_{x,s}
d\phi(x,s)$, and these path integrals are
performed over paths $\phi(s)$, $\varphi(s)$ that at the
end points match the values $\phi$, $\varphi$ at $s=t$ and
$\phi'$, $\varphi'$ at $s=0$. The influence functional ${\cal
F}[\phi,\varphi;t]$ contains all the information about the
interaction of the low-energy fields with spacetime foam. Let us
now introduce another functional ${\cal W}[\phi,\varphi;t]$ that
we will call influence action and such that ${\cal
F}[\phi,\varphi;t]=\exp{\cal W}[\phi,\varphi;t]$. If the
influence action ${\cal W}[\phi,\varphi;t]$ were equal to
zero, then we would have unitary evolution provided by a
factorized superscattering matrix. However, ${\cal W}$ does not
vanish in the presence of gravitational fluctuations and, in
fact, the nonlocal effective interactions will be modeled by
terms in ${\cal W}$ that follow the pattern
\begin{equation}
\int dt_1\cdots dt_N \upsilon^{i_1\cdots i_N}(t_1\ldots
t_N)h_{i_1}[\phi;t_1]\cdots h_{i_N}[\phi;t_N]\,.
\end{equation}
Here, $\upsilon^{i_1\cdots i_N}(t_1\ldots t_N)$ are
dimensionless complex functions that vanish for relative
spacetime distances larger than the length scale $r$ of the
gravitational fluctuations. If the gravitational fluctuations
are smooth in the sense that they only involve trivial
topologies or contain no horizons, the coefficients
$\upsilon^{i_1\cdots i_N}(t_1\ldots t_N)$ will be $N$-point
propagators which, as such, will have infinitely long tails and
the size of the gravitational fluctuations will be effectively
infinite. In other words, we would be dealing with a local
theory written in a nonstandard way. The gravitational origin of
these fluctuations eliminates these long tails because of the
presence of gravitational collapse and topology change. This
means that, for instance, virtual black holes \cite{ha96} will
appear and disappear and horizons will be present throughout. As
Padmanabhan \cite{1pa98,2pa98} has also argued, horizons induce
nonlocal interactions of finite range since the Planckian
degrees of freedom will be magnified by the horizon (because of
an infinite redshift factor) thus giving rise to low-energy
interactions as seen from outside the gravitational fluctuation.
Virtual black holes represent one kind of component of spacetime
foam that, because of the horizons and their nontrivial topology,
will induce nonlocal interactions; most probably, other
fluctuations with complicated topology will warp spacetime in a
similar way and the same magnification process will also take
place.
The coefficients $\upsilon^{i_1\cdots i_N}(t_1\ldots t_N)$ can
depend only on relative positions and not on the location of the
gravitational fluctuation itself. The physical reason for this is
conservation of energy and momentum: the fluctuations do not
carry energy, momentum, or gauge charges. Thus, diffeomorphism
invariance is preserved, at least at low-energy scales. One
should not expect that at the Planck scale this invariance still
holds. However, this violation of energy-momentum conservation is
safely kept within Planck scale limits \cite{uw95}, where the
processes will no longer be Markovian.
Finally, the coefficients $\upsilon^{i_1\cdots i_N}(t_1\ldots
t_N)$ will contain a factor $[e^{-S(r)/2}]^N$, $S(r)$ being the
Euclidean action of the gravitational fluctuation, which is of
the order $(r/\ell_*)^2$. This is just an expression of the idea
that inside large fluctuations, interactions that involve a
large number of spacetime points are strongly suppressed. As the
size of the fluctuation decreases, the probability for events in
which three or more spacetime points are correlated increases,
in close analogy with the kinetic theory of gases: the higher
the density of molecules in the gas, the more probable is that a
large number of molecules collide at the same point. The
expansion parameter in this example is typically the density of
molecules. In our case, the natural expansion parameter is the
transition amplitude. It is given by the square root of the
two-point transition probability which in the semiclassical
approximation is of the form $e^{-S(r)}$.
Thus the $N$-local interaction term in ${\cal W}$ will be of
order $[e^{-S(r)/2}]^N$. In the weak-coupling approximation,
i.e., up to second order in the expansion parameter, the
trilocal and higher effective interactions do not contribute.
The terms corresponding to $N=0,1$ are local and can be absorbed
in the bare action (note that the coefficient $\upsilon$ is
constant and that the coefficients $\upsilon^{i_1}(t_1)$ cannot
depend on spacetime positions because of diffeomorphism
invariance). Consequently, we can write the influence action
${\cal W}$ as a bilocal whose most general form is \cite{fh65}
\begin{eqnarray}
{\cal W}[\phi,\varphi;t] \!\!\!&=&\!\!\! -\frac{1}{2}\int_0^t
ds\int_0^s ds'\{h_i[\phi;s]-h_i[\varphi;s]\}
\nonumber\\
\!\!\!&&\!\!\!\times \{\upsilon^{ij}(s-s')h_j[\phi;s']-
\upsilon^{ij}(s-s')^*h_j[\varphi;s']\}\,,
\end{eqnarray}
where we have renamed $\upsilon^{ij}(s,s')$ as
$\upsilon^{ij}(s-s')$, and without loss of generality we have
set $s>s'$. This complex coefficient is Hermitian in the pair of
indices $ij$ and depends on the spatial positions
$x_{\underline{i}}$ and $x_{\underline{j}}$ only through the
relative distance $|x_{\underline{i}}-x_{\underline{j}}|$. It is
of order $e^{-S(r)}$ and is concentrated within a spacetime
region of size $r$.
Let us now decompose $\upsilon^{ij}(\tau)$ in terms of its real
and imaginary parts as
\begin{equation}
\upsilon^{ij}(\tau) = c^{ij}(\tau)+i\dot f^{ij}(\tau)\,,
\end{equation}
where $c^{ij}(\tau)$ and $f^{ij}(\tau)$ are real and symmetric,
and the overdot denotes time derivative. The imaginary part is
antisymmetric in the exchange of $i,\tau$ and $j,-\tau$ and has
been written as a time derivative for convenience, since this
choice does not involve any restriction. The $f$ term can then
be integrated by parts to obtain
\begin{eqnarray}
{\cal W}[\phi,\varphi;t]\!\!\!&=&\!\!\! -\frac{1}{2}\int_0^t ds
\int_0^sds' c^{ij}(s-s')\{h_i[\phi;s]-h_i[\varphi;s]\}
\{h_j[\phi;s']- h_j[\varphi;s']\}
\nonumber\\
\!\!\!&&\!\!\!-\frac{i}{2}\int_0^t ds\int_0^s ds'
f^{ij}(s-s')\{h_i[\phi;s]-h_i[\varphi;s]\} \{\dot
h_j[\phi;s']+\dot h_j[\varphi;s']\}\,.
\end{eqnarray}
In this integration, we have ignored surface terms that
contribute, at most, to a finite renormalization of the bare
low-energy Hamiltonian.
The functions $f^{ij}(\tau)$ and $c^{ij}(\tau)$ characterize
spacetime foam in our effective description but, under fairly
general assumptions, the characterization can be carried out by
a smaller set of independent functions. In what follows we will
simplify this set. With this aim, we first write $f^{ij}(\tau)$
and $c^{ij}(\tau)$ in terms of their spectral counterparts
$\tilde f^{\underline{i}\underline{j}}(\omega)$ and $\tilde
c^{\underline{i}\underline{j}}(\omega)$. Lorentz invariance and
spatial homogeneity imply that $f^{ij}(\tau)$ and
$c^{ij}(\tau)$ must have the form
\begin{eqnarray}
f^{ij} (\tau)\!\!\!&=&\!\!\!\int_0^\infty d\omega \tilde
f^{\underline{i}\underline{j}}(\omega)
8\pi \frac{\sin(\omega
|x_{\underline{i}}-x_{\underline{j}}|)} {\omega
|x_{\underline{i}}-x_{\underline{j}}|} \cos(\omega\tau)\,,
\\
c^{ij} (\tau)\!\!\!&=&\!\!\!\int_0^\infty d\omega \tilde
c^{\underline{i}\underline{j}}(\omega) 8\pi \frac{\sin(\omega
|x_{\underline{i}}-x_{\underline{j}}|)} {\omega
|x_{\underline{i}}-x_{\underline{j}}|} \cos(\omega\tau)\,,
\end{eqnarray}
for some real functions $\tilde
f^{\underline{i}\underline{j}}(\omega)$ and $\tilde
c^{\underline{i}\underline{j}}(\omega)$. It seems reasonable to
assume a kind of equanimity principle by which spacetime foam
produces interactions whose intensity does not depend on the
specific pair of interactions $h_i$, $h_j$ but only on their
independent components for each mode, i.e., that the spectral
interaction factorizes into products of functions
$\chi^{\underline{i}}(\omega)$:
\begin{eqnarray}
\tilde f^{\underline{i}\underline{j}}(\omega)\!\!\!&=&\!\!\!
\chi^{\underline{i}}(\omega)\chi^{\underline{j}}(\omega)\,,
\\
\tilde c^{\underline{i}\underline{j}}(\omega)\!\!\!&=&\!\!\! g(\omega)
\chi^{\underline{i}}(\omega)\chi^{\underline{j}}(\omega)\,,
\end{eqnarray}
where $g(\omega)$ is a function that, together with
$\chi^{\underline{i}}(\omega)$, fully characterizes spacetime
foam under these assumptions.
Then, $f^{ij}(\tau)$ and $c^{ij}(\tau)$ can be written as
\begin{eqnarray}
f^{ij} (\tau)\!\!\!&=&\!\!\!\int_0^\infty d\omega G^{ij}(\omega)
\cos(\omega\tau)\,,
\label{fij}\\
c^{ij}(\tau)\!\!\! &=&\!\!\!\int_0^\infty d\omega g(\omega)
G^{ij}(\omega)\cos(\omega\tau)\,,
\label{cij}
\end{eqnarray}
with
\begin{equation}
G^{ij}(\omega)=8\pi \frac{\sin(\omega
|x_{\underline{i}}-x_{\underline{j}}|)} {\omega
|x_{\underline{i}}-x_{\underline{j}}|}
\chi^{\underline{i}}(\omega)\chi^{\underline{j}}(\omega)\,.
\end{equation}
The functions $\chi^{\underline{i}}(\omega)$ can be interpreted
as the spectral effective couplings between spacetime foam and
low-energy fields. Since $\upsilon^{ij}(\tau)$ is of order
$e^{-S(r)}$ and is concentrated in a region of linear size $r$,
the couplings $\chi^{\underline{i}}(\omega)$ will have
dimensions of length, will be of order $e^{-S(r)/2}r$, and will
induce a significant interaction for all frequencies $\omega$ up
to the natural cutoff $r^{-1}$. On the other hand, the function
$g(\omega)$ has dimensions of inverse length and must be of
order $r^{-1}$. Actually, this function must be almost flat in
the frequency range $(0,r^{-1})$ to ensure that all the modes
contribute significantly to all bilocal interactions. As we will
see, the function $g(\omega)$ also admits a straightforward
interpretation in terms of the mean occupation number for the
mode of frequency $\omega$.
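The geometric factor $\sin(\omega|x_{\underline{i}}-x_{\underline{j}}|)/(\omega|x_{\underline{i}}-x_{\underline{j}}|)$ appearing in $G^{ij}(\omega)$ is simply the average of a plane wave of frequency $\omega$ over all propagation directions. A short numerical check (the particular values of the frequency and of the separation are arbitrary, chosen only for illustration):

```python
import numpy as np

# Averaging cos(omega * d * u) over u = cos(theta) in [-1, 1], i.e.,
# over all directions of the wave vector, gives sin(omega d)/(omega d),
# with d the spatial separation |x_i - x_j|.
omega, d = 2.3, 1.7                     # arbitrary frequency and separation
u = np.linspace(-1.0, 1.0, 200001)
f = np.cos(omega * d * u)
# trapezoidal rule for the integral, divided by the interval length 2
avg = np.sum((f[1:] + f[:-1]) / 2 * np.diff(u)) / 2
exact = np.sin(omega * d) / (omega * d)
assert abs(avg - exact) < 1e-8
```

This is why the spatial dependence of $f^{ij}$ and $c^{ij}$ enters only through the relative distance, as required by homogeneity and isotropy.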
Once we have computed the influence functional $\mathcal{F}$, it
is possible to obtain the master equation that governs the
evolution of the density matrix of the low-energy fields,
although we will not follow this procedure here. We postpone the
derivation of the full master equation until the next section.
The bilocal effective interaction does not lead to a unitary
evolution. The reason for this is that it is not sufficient to
know the fields and their time derivatives at an instant of time
in order to know their values at a later time: we need to know
the history of the system, at least for a time $r$. There exist
different trajectories that arrive at a given configuration
$(\phi,\dot\phi)$. The future evolution depends on these past
trajectories and not only on the values of $\phi$ and $\dot
\phi$ at that instant of time. Therefore, the system cannot
possess a well-defined Hamiltonian vector field and suffers from
an intrinsic loss of predictability \cite{ew89}.
This can be easily seen if we restrict to the case in which
$f^{ij}(\tau)$ vanishes, i.e.,
$\upsilon^{ij}(\tau)=c^{ij}(\tau)$. Then, the influence
functional ${\cal F}_{\rm c}$ is the characteristic functional
of a Gaussian probability functional distribution, i.e., it can
be written as
\begin{equation}
{\cal F}_{\rm c}[\phi,\varphi;t]=\int {\cal D}\alpha
e^{-\frac{1}{2} \int_0^t ds \int_0^sds^\prime
\gamma_{ij}(s-s^\prime)\alpha^i(s)\alpha^j(s^\prime)}
e^{i\int_0^t ds \alpha^i(s)\{ h_i[\phi;s]- h_i[\varphi;s]\}}\,.
\end{equation}
Here, the continuous matrix $\gamma_{ij}(s-s^\prime)$ is the
inverse of $c^{ij}(s-s^\prime)$, i.e.,
\begin{equation}
\int ds^{\prime\prime}\gamma_{ik}(s-s^{\prime\prime})
c^{kj}(s^{\prime\prime}-s^\prime)=\delta_i^j\delta(s-s^\prime)\,.
\end{equation}
Then, in this case, the propagator $\$(t)$ has the form
\begin{equation}
\$(t)=\int{\cal D}\alpha P[\alpha] \$_\alpha(t)\,,
\end{equation}
where ${\$}_\alpha(t)$ is just a factorizable propagator
associated with unitary evolution governed by the action
$S_0+\int\alpha^i h_i$ and
\begin{equation}
P[\alpha]=e^{-\frac{1}{2} \int_0^t ds\int_0^s ds^\prime
\gamma_{ij}(s-s^\prime)\alpha^i(s)\alpha^j(s^\prime)}\,.
\end{equation}
Therefore, $\$(t)$ is just the average with Gaussian weight
$P[\alpha]$ of the unitary propagator $\$_\alpha(t)$.
Note that
the quadratic character of the distribution for the fields
$\alpha^i$ is a consequence of the weak-coupling approximation,
which keeps only the bilocal term in the action. Higher-order
terms would introduce deviations from this noise distribution.
The nonunitary nature of the bilocal interaction has been
encoded inside the fields $\alpha^i$, so that, when insisting on
writing the system in terms of unitary evolution operators, an
additional sum over the part of the system that is unknown
naturally appears. Note also that we have a different field
$\alpha^i$ for each kind of interaction $h_i$. Thus, we have
transferred the nonlocality of the low-energy field $\phi$ to
the set of fields $\alpha^i$, which are nontrivially coupled to
it and that represent spacetime foam.
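The statement that $\$(t)$ is a Gaussian average of unitary propagators rests on the characteristic-functional identity $\langle e^{i\alpha^i x_i}\rangle = e^{-\frac{1}{2}x_i c^{ij}x_j}$ for Gaussian noise with covariance $c^{ij}$. A discretized Monte Carlo sketch of this identity (the covariance matrix and the test vector below are arbitrary stand-ins for $c^{ij}(s-s')$ and $h_i[\phi]-h_i[\varphi]$, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# fixed positive-definite covariance, discrete analogue of c^{ij}(s-s')
C = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
x = np.array([0.3, -0.4, 0.2])   # stands in for h_i[phi] - h_i[varphi]
# Monte Carlo average of the random phases exp(i alpha . x)
alpha = rng.multivariate_normal(np.zeros(3), C, size=400000)
mc = np.mean(np.exp(1j * alpha @ x))
exact = np.exp(-0.5 * x @ C @ x)
assert abs(mc - exact) < 5e-3    # characteristic-functional identity
assert abs(mc) < 1.0             # averaging unitary phases suppresses coherence
```

The second assertion is the mechanism at work in the text: each noise realization evolves the state unitarily, but the average over realizations damps the phases and hence the purity.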
\subsection{Semiclassical diffusion}
We can see that the limit
of vanishing $f^{ij}(\tau)$, with nonzero $c^{ij}(\tau)$ (and
therefore real $\upsilon^{ij}(\tau)$), is a kind of
semiclassical approximation since, in this limit, one ignores
the quantum nature of the gravitational fluctuations. Indeed,
the fields $\alpha^i$ represent spacetime foam but, as we have
seen, the path integral for the whole system does not contain
any trace of the dynamical character of the fields $\alpha^i$.
It just contains a Gaussian probability distribution for them.
The path integral above can then be interpreted as a Gaussian
average over the classical noise sources $\alpha^i$.
Classicality here means that we can keep the sources $\alpha^i$
fixed, ignoring the noise commutation relations, and, at the end
of the calculations, we just average over them.
The low-energy density matrix $\rho$ then satisfies the
following master equation \cite{1ga98,2ga98,3ga98}
\begin{equation}
\dot \rho= -i \big[ H_0,\rho\big]- \int_0^\infty d\tau c^{ij}(\tau)
\big[h_{i},\big[h_{j}^{\scriptscriptstyle\rm I}(-\tau),\rho\big]\big]\,,
\end{equation}
where $h_j^{\scriptscriptstyle\rm I}(-\tau)=e^{-iH_0\tau}h_je^{iH_0\tau}$.
Since $e^{iH_0\tau}=1+O(\tau/l)$, the final form of the master
equation for a low-energy system subject to gravitational
fluctuations treated as a classical environment and at zeroth
order in $r/l$ (the effect of higher order terms in $r/l$ will
be thoroughly studied together with the quantum effects) is
\begin{equation}
\dot \rho= -i \big[ H_0,\rho\big]- \int_0^\infty d\tau c^{ij}(\tau)
\big[h_{i},\big[h_{j},\rho\big]\big]
\end{equation}
(for similar approaches yielding this type of master equation
see also Refs. \cite{bs84,diosi87,percival95}).
The first term gives the low-energy Hamiltonian evolution that
would also be present in the absence of fluctuations. The second
term is a diffusion term which will be responsible for the loss
of coherence (and the subsequent increase of entropy). It is a
direct consequence of the foamlike structure of spacetime and
the related existence of a minimum length. Note that there is no
dissipation term. This term is usually present in order to
preserve the commutation relations under time evolution.
However, we have considered the classical noise limit, i.e., the
noise $\alpha$ has been considered as a classical source and the
commutation relations are automatically preserved. We will see
that the dissipation term, apart from being of quantum origin,
is $r/l$ times smaller than the diffusion term and we have only
considered the zeroth order approximation in $r/l$.
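The diffusive character of this master equation can be illustrated with a minimal two-level toy model of our own: we take a single interaction operator $h\propto\sigma_z$ and lump the integrated kernel $\int d\tau\, c(\tau)$ into a constant rate $c$, both of which are simplifying assumptions rather than ingredients of the text. The off-diagonal elements of $\rho$ then decay exponentially while the populations, and the trace, are untouched:

```python
import numpy as np

# toy master equation: drho/dt = -i[H, rho] - c [Q, [Q, rho]]
sz = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * sz            # free low-energy Hamiltonian
Q = sz                  # single interaction operator
c = 0.3                 # effective diffusion coefficient (assumed constant)

def comm(A, B):
    return A @ B - B @ A

rho = 0.5 * np.ones((2, 2), dtype=complex)   # pure state |+><+|
dt, steps = 1e-4, 10000                      # Euler evolution to t = 1
for _ in range(steps):
    rho = rho + dt * (-1j * comm(H, rho) - c * comm(Q, comm(Q, rho)))

t = dt * steps
# for Q = sigma_z the coherences decay as e^{-4 c t};
# populations and trace are exactly preserved
assert np.isclose(abs(rho[0, 1]), 0.5 * np.exp(-4 * c * t), atol=1e-3)
assert np.isclose(rho[0, 0].real, 0.5)
assert np.isclose(np.trace(rho).real, 1.0)
```

As in the text, there is no dissipation term here: the populations never relax, only the quantum coherence between them is destroyed.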
The characteristic decoherence time $\tau_{d}$ induced by the
diffusion term can be easily calculated. Indeed, the interaction
Hamiltonian density $h_i$ is of order
$\ell_*^{-4}(\ell_*/l)^{2n_{\underline{i}}(1+s_{\underline{i}})}$
and $c^{ij}(\tau)$ is of order $e^{-S(r)}$. Furthermore, the
diffusion term contains one integral over time and two integrals
over spatial positions. The integral over time and the one over
relative spatial positions provide a factor $r^4$, since
$c^{ij}(\tau)$ is different from zero only in a spacetime region
of size $r^4$, and the remaining integral over global spatial
positions provides a factor $l^3$, the typical low-energy
spatial volume. Putting everything together, we see that the
diffusion term is of order
$l^{-1}\epsilon^2\sum_{{\underline{i}}{\underline{j}}}
(\ell_*/l)^{\eta_{\underline{i}}+\eta_{\underline{j}}}$, with
$\eta_{\underline{i}}=2n_{\underline{i}}(1+s_{\underline{i}})-2$
and $\epsilon=e^{-S(r)/2}(r/\ell_*)^2$. This quantity defines
the inverse of the decoherence time $\tau_d$. Therefore, the
ratio between the decoherence time $\tau_d$ and the low-energy
length scale $l$ is
\begin{equation}
\tau_d/l\sim
\epsilon^{-2}\bigg[\sum_{{\underline{i}}{\underline{j}}}
(\ell_*/l)^{\eta_{\underline{i}}+\eta_{\underline{j}}}
\bigg]^{-1}\,.
\end{equation}
Because of the exponential factor in $\epsilon$, only the
gravitational fluctuations whose size is very close to Planck
length will give a sufficiently small decoherence time. Slightly
larger fluctuations will have a very small effect on the
unitarity of the effective theory. For the interaction term that
corresponds to the mass of a scalar field, the parameter $\eta$
vanishes and, consequently, $\tau_d/l\sim
\epsilon^{-2}$. Thus, the scalar mass term will lose coherence
faster than any other interaction. Indeed, for higher spins
and/or powers of the field strength, $\eta\geq 1$ and therefore
$\tau_d/l$ increases by powers of $l/\ell_*$. For instance, the
next relevant decoherence time corresponds to the scalar-fermion
interaction term $\phi^2\bar\psi\psi$, which has an associated
decoherence ratio $\tau_d/l\sim \epsilon^{-2}l/\ell_*$. We see
that the decoherence time for the mass of scalars is independent
of the low-energy length scale and, for gravitational
fluctuations of size close to Planck length, $\epsilon$ may
not be too small, so that scalar masses may lose coherence
fairly fast, maybe in a few times the typical evolution scale. Hawking
has argued \cite{ha96} that this might be the reason for not
observing the Higgs particle. Higher-power and/or higher-spin
interactions will lose coherence much more slowly but, for
sufficiently high energies $l^{-1}$ (although still much smaller
than the gravitational fluctuation energy $r^{-1}$), the
decoherence time may be small enough. This means that quantum fields will
lose coherence faster for higher-energy regimes. Hawking has
also suggested that loss of quantum coherence might be
responsible for the vanishing of the $\theta$ angle in quantum
chromodynamics \cite{ha96}.
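The extreme sensitivity of the decoherence time to the fluctuation size can be made explicit with a few numbers. The sketch below takes, as in the text, $S(r)\sim(r/\ell_*)^2$ and the scalar-mass case $\eta=0$, so that $\tau_d/l\sim\epsilon^{-2}$; the particular sizes sampled are arbitrary:

```python
import numpy as np

def tau_ratio(x):
    """Decoherence ratio tau_d/l ~ eps^{-2} for the scalar mass term,
    with eps = exp(-S/2) (r/l_*)^2, S = (r/l_*)^2, and x = r/l_*."""
    eps = np.exp(-x**2 / 2) * x**2
    return eps**-2

for x in (1.0, 2.0, 3.0, 5.0):
    print(f"r/l_* = {x:3.1f}  ->  tau_d/l ~ {tau_ratio(x):.3e}")

# Planck-sized fluctuations decohere fast; slightly larger ones do not
assert np.isclose(tau_ratio(1.0), np.e)       # eps^{-2} = e at r = l_*
assert tau_ratio(5.0) / tau_ratio(1.0) > 1e6  # steep growth with size
```

For $r=\ell_*$ the ratio is of order one, while already at $r=5\ell_*$ the exponential suppression in $\epsilon$ makes the decoherence time many orders of magnitude longer than the low-energy scale, in line with the claim above.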
\subsection{Spacetime foam as a quantum bath}
As we have briefly mentioned before, considering that the
coefficients $\upsilon^{ij}$ are real amounts to ignoring the
quantum dynamical nature of spacetime foam, paying attention
only to its statistical properties. In what follows, we will
study these quantum effects and show that spacetime foam can be
effectively described in terms of a quantum thermal bath with a
nearly Planckian temperature that has a weak interaction with
low-energy fields. As a consequence, other effects, apart from
loss of coherence, such as Lamb and Stark transition-frequency
shifts, and quantum damping, characteristic of systems in a
quantum environment \cite{ga91,ca93}, naturally appear as
low-energy predictions of this model \cite{1ga98,2ga98,3ga98}.
Let us consider a Hamiltonian of the
form
\begin{equation}
H=H_0+H_{\rm int}+H_{\rm b}\,.
\end{equation}
$H_0$ is the bare Hamiltonian that represents the low-energy
fields and $H_{\rm b}$ is the Hamiltonian of a bath that, for
simplicity, will be represented by a real massless scalar field.
The interaction Hamiltonian will be of the form $H_{\rm
int}=\xi^i h_i$, where the noise operators $\xi^i$ are given by
\begin{equation}
\xi^{\underline{i}}(x,t)= \int dx'
\chi^{{\underline{i}}}(x-x')p(x',t)\,.
\end{equation}
Here, $p(x,t)$ is the momentum of the bath scalar field whose
mode decomposition has the form
\begin{equation}
p(x,t)=i\int dk \sqrt \omega [ a^\dag(k) e^{i(\omega t-k
x)}-a(k) e^{-i(\omega t-k x)}]\,,
\end{equation}
$\omega=\sqrt{k^2}$, and $a$ and $a^\dag$ are, respectively, the
annihilation and creation operators associated with the bath;
$\chi^{\underline{i}}(y)$ represent the couplings between the
low-energy field and the bath in the position representation.
Since we are trying to construct a model for spacetime foam, we
will assume that the couplings $\chi^{{\underline{i}}}(y)$ are
concentrated in a region of radius $r$ and that they are
determined by the spectral couplings
$\chi^{{\underline{i}}}(\omega)$ introduced before:
\begin{equation}
\chi^{\underline{i}}(y)=\int \frac{dk}{\omega}
\chi^{{\underline{i}}}(\omega) \cos (ky)\,.
\end{equation}
The influence functional in this case has the form \cite{fh65}
\begin{equation}
{\cal F}[\phi,\varphi;t]=\int Dq' DQ' \rho_{\rm b}[q',Q';0]
\int{\cal D} q {\cal D} Q e^{i\{S_{\rm b}[q;t]-S_{\rm b}[Q;t]\}}
e^{i\{S_{\rm int}[\phi,q;t]-S_{\rm int}[\varphi,Q;t]\}}\,,
\end{equation}
where these path integrals are performed over paths $q(s)$ and
$Q(s)$ that match the values $q'$ and $Q'$ at the initial time,
and $S_{\rm b}$ is the action of the bath.
If we assume that the bath is in a stationary, homogeneous, and
isotropic state, this influence functional can be computed to
yield an influence action ${\cal W}$ of the form discussed
above. Furthermore, for a thermal state with temperature $T\sim
1/r$, the function $g(\omega)$ has the form
\begin{equation}
g(\omega)=\omega[N(\omega)+1/2]\,,
\end{equation}
where $N(\omega)=[\exp(\omega/T)-1]^{-1}$ is the mean occupation
number of the quantum thermal bath corresponding to the
frequency $\omega$. Recall that the functions $G^{ij}(\omega)$
and, hence, $f^{ij}(\tau)$ are uniquely determined by the
couplings $\chi^{\underline{i}}(\omega)$. In particular, they
are completely independent of the state of the bath or the
system. All the relevant information about the bath is encoded
in the function $g(\omega)$.
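The thermal form of $g(\omega)$ can be checked against the flatness requirement stated earlier: $\omega[N(\omega)+1/2]=(\omega/2)\coth(\omega/2T)$ tends to the constant $T$ for $\omega\ll T$, so with $T\sim 1/r$ the spectrum is indeed almost flat over the low-energy range. A short numerical verification (the frequency grid is arbitrary):

```python
import numpy as np

T = 1.0                              # bath temperature, T ~ 1/r
w = np.linspace(0.01, 10.0, 500)     # frequencies in units of T
N = 1.0 / (np.exp(w / T) - 1.0)      # Bose-Einstein occupation number
g = w * (N + 0.5)

# identity: omega [N(omega) + 1/2] = (omega/2) coth(omega / 2T)
assert np.allclose(g, 0.5 * w / np.tanh(w / (2 * T)))
# low-frequency (omega << T) limit: g -> T, an almost flat spectrum
assert np.isclose(g[0], T, rtol=1e-2)
# high-frequency limit: g -> omega/2, the zero-point contribution
assert np.isclose(g[-1], w[-1] / 2, rtol=1e-3)
```

The low-frequency plateau $g(\omega)\approx T\sim r^{-1}$ is precisely the behavior required of $g(\omega)$ in the effective description of spacetime foam, while the $\omega/2$ tail encodes the purely quantum zero-point part of the bath.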
With this procedure, we see that spacetime foam can be
represented by a quantum bath, determined by $g(\omega)$, that
interacts with the low-energy fields by means of the couplings
$\chi^{\underline{i}}(\omega)$: both systems produce the same
low-energy effects.
This model that we have proposed is particularly suited to the
study of low-energy effects produced by simply connected
topology fluctuations such as closed loops of virtual black
holes \cite{ha96}. Virtual black holes will not obey classical
equations of motion but will appear as quantum fluctuations of
spacetime and thus will become part of the spacetime foam as we
have discussed. Particles could fall into these black holes and
be re-emitted. The scattering amplitudes of these processes
\cite{ha96,hr97} could be interpreted as being produced by
nonlocal effective interactions that would take place inside the
fluctuations and the influence functional obtained above could
then be interpreted as providing the evolution of the low-energy
density matrix in the presence of a bath of ubiquitous quantum
topological fluctuations of the virtual-black-hole type.
\subsection{Wormholes and coherence}
Euclidean solutions of the wormhole type have been obtained for
a variety of matter contents (see, e.g.,
\cite{giddings88a,halliwell89,keay89}). Quantum solutions to the
Wheeler-DeWitt equation that represent wormholes can be found in
Refs. \cite{ha88,hp90,lyons89,dowker90,dowker91,barcelo96}.
These solutions allowed the calculation of the effective
interactions that they introduce in low-energy physics
\cite{ha88,lyons89,dowker90,dowker91,barcelo98a,barcelo98b,lukas95}.
Wormholes do not seem to induce loss of coherence despite the
fact that they render spacetime multiply connected
\cite{co88,giddings88b}. The reason why they seem to preserve
coherence is that, in the dilute gas approximation, they join
spacetime regions that may be far apart from each other and,
therefore, both wormhole mouths must be delocalized, i.e., the
multiply-connectedness requires energy and momentum conservation
in both spacetime regions separately. In this way, wormholes can
be described as bilocal interactions whose coefficients
$\upsilon^{ij}$ do not depend on spacetime positions.
Diffeomorphism invariance on each spacetime region also requires
the spacetime independence of $\upsilon^{ij}$. This can also be
seen by analyzing these wormholes from the point of view of the
universal covering manifold, which is, by definition, simply
connected. Here, each wormhole is represented by two boundaries
located at infinity and suitably identified. This identification
is equivalent to introducing coefficients $\upsilon^{ij}$ that
relate the bases of the Hilbert space of wormholes in both
regions of the universal covering manifold. Since
$\upsilon^{ij}$ are just the coefficients in a change of basis,
they will be constant. As a direct consequence, the correlation
time for the fields $\alpha^i$ is infinite. This means that the
fields $\alpha^i$ cannot be interpreted as noise sources that
are Gaussian distributed at each spacetime point independently.
Rather, they are infinitely coherent thus giving rise to
superselection sectors. The Gaussian distribution to which they
are subject is therefore global, spacetime independent. The only
effect of wormholes is thus introducing local interactions with
unknown but spacetime independent coupling constants
\cite{co88,giddings88b,hawking88}. The spacetime independence
implies that, once an experiment to determine one such constant
is performed, the constant will retain the obtained value forever, in
sharp contrast with those induced by simply-connected
topological fluctuations such as virtual black holes. In this
way and because of the infinite-range correlations induced by
wormholes, which forbid the existence of asymptotic regions
necessary to analyze scattering processes, the loss of coherence
produced by these fluctuations should actually be ascribed to
the lack of knowledge of the initial state or, in other words,
to the impossibility of preparing arbitrarily pure quantum
states \cite{co88}.
One could also expect some effects originating in the quantum
nature of wormholes. However, the coefficients $\upsilon^{ij}$ are spacetime
independent. This means that $c^{ij}$ are constant and,
consequently, $\tilde c^{\underline{i}\underline{j}}(\omega)\sim
\delta(\omega)$. As we have argued, $\tilde
c^{\underline{i}\underline{j}}(\omega)$ and $\tilde
f^{\underline{i}\underline{j}}(\omega)$ are related by a nearly
flat function so that $\tilde
f^{\underline{i}\underline{j}}(\omega)\sim\delta(\omega)$ as
well. This in turn implies that $f^{ij}$ is also constant and
$\dot f^{ij}=0$, therefore concluding that $\upsilon^{ij}$ is
real. We have already argued that in the case of real
$\upsilon^{ij}$, no quantum effects will show up.
Wormhole spacetimes do not lead, strictly speaking, to loss of
quantum coherence although global hyperbolicity does not hold.
On the other hand, the difficulties in quantum gravity with
unitary propagation mainly come from the quantum field theory
axiom of asymptotic completeness \cite{ha82,alvarez83}, which is
closely related to global hyperbolicity. Indeed, in order to
guarantee asymptotic completeness, it is necessary that the
expectation value of the fields at any spacetime position be
determined by their values at a Cauchy surface at infinity.
Topologically nontrivial spacetimes however are not globally
hyperbolic in general and therefore do not admit a foliation in
Cauchy surfaces. Let us have a closer look at this issue.
Gravitational entropy, which is closely related to the loss of
quantum coherence, has its origin in the existence of
two-dimensional spheres in Euclidean space that cannot be
homotopically contracted to a point, i.e., with nonvanishing
second Betti number. These two-dimensional surfaces become fixed
points of the timelike Killing vector, so that global
hyperbolicity is lost. A well-known example (for other more
sophisticated examples see Ref. \cite{hawking98}) is a
Schwarzschild black hole whose Euclidean sector is described by
the metric
\begin{equation}
ds^2=f(r)dt^2+f(r)^{-1} dr^2+r^2 d\Omega_2^2\,,
\end{equation}
with $f(r)=1-2\ell_*^2m/r$, $m$ being the black hole mass. In
order to make this solution regular, we consider the region
$r\geq 2\ell_*^2m$ and set $t$ to be periodic with period
$\beta=8\pi \ell_*^2m$. The surface defined by $r=2\ell_*^2m$ is
a fixed point of the Killing vector $\partial_t$. Thus, we have
a spacetime with the topology of $\Re^2\times S^2$, so that
$B_2=1$. As we will see below, it is the existence of this
surface that accounts for the entropy of this spacetime. This
does not mean that it is localized in the surface itself.
Rather, it is a global quantity characteristic of the whole
spacetime manifold. The Euclidean action of this solution is
given by the sum of the contributions $I_{\rm fp}$, $I_{\infty}$
of the two surface terms at $r=2\ell_*^2m$ and $r=\infty$. In
the semiclassical approximation, the partition function is given
by $Z=e^{-I_{\rm fp}-I_{\infty}}$. Taking into account that the
entropy is $S=\ln Z-\beta E$ and that $\beta E$ is precisely the
surface term at infinity $\beta E=-I_{\infty}$, we conclude that
the entropy is given by the surface term at $r=2\ell_*^2m$,
$S=-I_{\rm fp}=4\pi\ell_*^2 m^2$, as is well-known.
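As a consistency check, this result agrees with the
Bekenstein--Hawking area law: the fixed-point surface is a
two-sphere of radius $r_{\rm h}=2\ell_*^2m$ and area
$A=4\pi r_{\rm h}^2$, so that
\begin{equation}
S=\frac{A}{4\ell_*^2}=\frac{4\pi(2\ell_*^2m)^2}{4\ell_*^2}
=4\pi\ell_*^2m^2\,.
\end{equation}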
In the wormhole case, the second Betti number is zero and the
first and third Betti numbers are equal. For a spacetime with a
single wormhole, $B_1=B_3=1$. This means that there exists one
circle that cannot be homotopically contracted to a point and
that there also exists one three-sphere that is not homotopic to
a point, but all two-spheres are contractible. Regular solutions
of this sort can be identified with the wormhole throat. The
only contributions to the Euclidean action in this case come
from the asymptotic regions, which is precisely the term that we
have to subtract from $\ln Z$, in the semiclassical
approximation, in order to calculate the gravitational entropy.
Thus, wormholes have vanishing entropy despite the fact that
they are not globally hyperbolic. From the point of view of
their universal covering manifold, a wormhole is represented by
two three-surfaces whose contribution to the action are equal in
absolute value but with opposite sign because of their reverse
orientations, thus leaving only the asymptotic contribution,
irrelevant as far as the entropy is concerned (for a different
approach see Ref. \cite{gonzalez91}). The striking difference
between wormholes and virtual black holes is precisely the
formation of horizons which has no counterpart in the wormhole
case. This is closely related to the issue of the infinite-range
spacetime correlations established by wormholes versus the
finite size of the regions occupied by virtual black holes or
quantum time machines, for instance.
\section{Low-energy effective evolution}
\indent
As we have already mentioned, from the influence functional
obtained in the previous section, we can obtain the master
equation satisfied by the low-energy density matrix, although
here we will follow a different procedure: We will derive the
master equation in the canonical formalism from von Neumann
equation for the joint system of the low-energy fields plus the
effective quantum bath coupled to them that accounts for the
effects of spacetime foam.
\subsection{Master equation}
It is easy to see that the function $f^{ij}(\tau)$ given in Eq.
(\ref{fij}) determines the commutation relations at different
times of the noise variables. Indeed, taking into account the
commutation relations for the annihilation and creation
operators $a$ and $a^\dag$, i.e.,
\begin{equation}
\big[a(k),a(k')\big]=0\,,
\hspace{5mm}
\big[a(k),a^\dag(k')\big]=\delta(k-k')\,,
\end{equation}
we obtain by direct calculation the relation
\begin{equation}
\big[\xi^i(t),\xi^j(t')\big]=i \frac{d}{dt} f^{ij}(t-t')\,.
\end{equation}
Similarly, the function $c^{ij}(\tau)$ of Eq. (\ref{cij})
determines the average of the anticommutator of the noise
variables,
\begin{equation}
\frac{1}{2}\big\langle\big[\xi^i(t),\xi^j(t')\big]_+
\big\rangle=c^{ij}(t-t')\,,
\end{equation}
where the average of any operator $Q$ has been defined as
$\langle Q\rangle\equiv {\rm tr}_{\rm b}(Q\rho_{\rm b})$,
provided that the bath is in a stationary, homogeneous, and
isotropic state determined by $g(\omega)$, i.e.,
\begin{equation}
\langle a(k)\rangle=0\,,
\hspace{5mm}
\langle a(k)a(k')\rangle=0\,,
\hspace{5mm}
\langle a^\dag(k)a(k')\rangle=[g(\omega)/\omega-1/2]\delta(k-k')\,.
\end{equation}
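For instance, a thermal bath at temperature $T$, for which
$\langle a^\dag(k)a(k')\rangle=(e^{\omega/T}-1)^{-1}\delta(k-k')$,
corresponds to
\begin{equation}
g(\omega)=\omega\left(\frac{1}{e^{\omega/T}-1}+\frac{1}{2}\right)
=\frac{\omega}{2}\coth\frac{\omega}{2T}\,,
\end{equation}
so that $g(0)=T$, in agreement with the identification of $g(0)$
as the bath temperature made below.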
We are now ready to write down the master equation for the
low-energy density matrix. We will describe the whole system
(low-energy field and bath) by a density matrix $\rho_{\scriptscriptstyle\rm
T}(t)$. We will assume that, initially, the low energy fields
and the bath are independent, i.e., that at the time $t=0$
\begin{equation}
\rho_{\scriptscriptstyle\rm T}(0)=\rho(0)\otimes \rho_{\rm b}\,.
\end{equation}
If the low-energy fields and the bath do not decouple at any
time, an extra renormalization term should be added to the
Hamiltonian. In the interaction picture, the density matrix has
the form
\begin{equation}
\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm I}(t)=U^\dag(t)\rho_{\scriptscriptstyle\rm T}(t)U(t)\,,
\end{equation}
with $U(t)=U_0(t)U_{\rm b}(t)$, where $U_0(t)= e^{-iH_0t}$ and
$U_{\rm b}(t)=e^{-iH_{\rm b}t}$. It obeys the equation of motion
\begin{equation}
\dot\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm I}(t)=-i \big[\xi^i(t) h_i^{\scriptscriptstyle\rm
I}(t),\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm I}(t)\big]\,.
\end{equation}
Here,
\begin{eqnarray}
\xi^i(t)\!\!\!&=&\!\!\!U^\dag(t)\xi^iU(t)=
U_{\rm b}^\dag(t)\xi^iU_{\rm b}(t)\,,
\\
h_i^{\scriptscriptstyle\rm
I}(t)\!\!\!&=&\!\!\!U^\dag(t)h_iU(t)=U_0^\dag(t)h_iU_0(t)\,.
\end{eqnarray}
Integrating this evolution equation and introducing the result
back into it, tracing over the variables of the bath, defining
$\rho^{\scriptscriptstyle\rm I}(t)\equiv{\rm tr}_{\rm b}[\rho_{\scriptscriptstyle\rm
T}^{\scriptscriptstyle\rm I}(t)]$, and noting that ${\rm tr}_{\rm
b}[\xi^i(t)h_i^{\scriptscriptstyle\rm I}(t)\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm
I}(t_0)]=0$, we obtain
\begin{equation}
\dot\rho^{\scriptscriptstyle\rm I}(t)= -\int_{t_0}^t dt' {\rm tr}_{\rm
b}\left\{\big[ \xi^i(t) h_i^{\scriptscriptstyle\rm I}(t), \big[\xi^j(t')
h_j^{\scriptscriptstyle\rm I}(t'),\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm
I}(t')\big]\big]\right\}\,.
\end{equation}
In the weak-coupling approximation, which implies that
$\xi^ih_i$ is much smaller than $H_0$ and $H_{\rm b}$ (this is
justified since it is of order $\epsilon$), we assume that the
bath density matrix does not change because of the interaction,
so that $\rho_{\scriptscriptstyle\rm T}^{\scriptscriptstyle\rm I}(t)=\rho^{\scriptscriptstyle\rm
I}(t)\otimes\rho_{\rm b}$. The error introduced by this
substitution is of order $\epsilon$ and ignoring it in the
master equation amounts to keeping terms only up to second order in
this parameter. Since $\big[\xi^i(t), h_j^{\scriptscriptstyle\rm
I}(t')\big]=0$ because $\big[\xi^i, h_j\big]=0$, the right hand
side of this equation can be written in the following way
\begin{equation}
-\int_{0}^t dt'\big\{ c^{ij}(t-t') \big[h_i^{\scriptscriptstyle\rm
I}(t),\big[h_j^{\scriptscriptstyle\rm I}(t'),\rho^{\scriptscriptstyle\rm I}(t')\big]\big]
+\frac{i}{2}f^{ij}(t-t') \big[h_i^{\scriptscriptstyle\rm
I}(t),\big[h_j^{\scriptscriptstyle\rm I}(t'),\rho^{\scriptscriptstyle\rm
I}(t')\big]_+\big]\big\}\,.
\end{equation}
The Markov approximation allows the substitution of
$\rho^{\scriptscriptstyle\rm I}(t')$ by $\rho^{\scriptscriptstyle\rm I}(t)$ in the master
equation because the integral over $t'$ will get a significant
contribution from times $t'$ that are close to $t$ due to the
factors $\dot f^{ij}(t-t')$ and $c^{ij}(t-t')$ and because, in
this interval of time, the density matrix $\rho^{\scriptscriptstyle\rm I}$
will not change significantly. Indeed, the typical evolution
time of $\rho^{\scriptscriptstyle\rm I}$ is the low-energy time scale $l$,
which will be much larger than the time scale $r$ associated
with the bath. If we perform a change of the integration
variable from $t'$ to $\tau=t-t'$, write
\begin{equation}
\rho^{\scriptscriptstyle\rm I}(t')=\rho^{\scriptscriptstyle\rm I}(t-\tau)=
\rho^{\scriptscriptstyle\rm I}(t)-\tau\dot\rho^{\scriptscriptstyle\rm I}(t) +O(\tau^2)\,,
\end{equation}
and introduce this expression in the master equation above, we
easily see that the error introduced by the Markovian
approximation is of order $\epsilon^2$, i.e., it amounts to ignoring
a term of order $\epsilon^4$. The upper integration limit $t$ in
both integrals can be substituted by $\infty$ for evolution
times $t$ much larger than the correlation time $r$, because of
the factors $\dot f^{ij}(\tau)$ and $c^{ij}(\tau)$ that vanish
for $\tau>r$.
Then, after an integration by parts of the $f$ term, and
transforming the resulting master equation back to the
Schr\"{o}dinger picture we obtain
\begin{equation}
\dot\rho= -i\big[H_0',\rho\big]-\frac{i}{2}\int_0^\infty d\tau
f^{ij}(\tau) \big[h_i,\big[\dot h_j^{\scriptscriptstyle\rm
I}(-\tau),\rho\big]_+\big] -\int_0^\infty d\tau c^{ij}(\tau)
\big[h_i,\big[h_j^{\scriptscriptstyle\rm I}(-\tau),\rho\big]\big]\,,
\end{equation}
where $ H_0'=H_0-\frac{1}{2}f^{ij}(0) h_ih_j$ is just the
original low-energy Hamiltonian plus a finite renormalization
originated in the integration by parts of the $f$ term. It can
be checked that the low-energy density matrix $\rho(t)$ obtained
by means of the influence functional $\mathcal{F}$ is indeed a
solution of this master equation.
Before discussing this equation in full detail, let us first
study the classical noise limit. With this aim, let us introduce
the parameter
\begin{equation}
\sigma=\int dk' \big[a(k),a^\dag(k')\big]\,,
\end{equation}
which is equal to 1 for quantum noise and 0 for classical noise.
Then, the $f$ term is proportional to $\sigma$ and therefore
vanishes in the classical noise limit. On the other hand, the
function $g(\omega)$ becomes $g(\sigma \omega)$ when introducing
the parameter $\sigma$. In the limit $\sigma\rightarrow 0$, it
acquires the value $g(0)$ which is a constant of order $1/r$.
Therefore, $c^{ij}(\tau)$ becomes in this limit $c_{\rm
class}^{ij}(\tau)=g(0)f^{ij}(\tau)$. Also, the renormalization
term of the low-energy Hamiltonian vanishes in this limit. In
this way, we have arrived at the same master equation that we
obtained in the previous section. This is not surprising because
the origin of the $f$ term is precisely the noncommutativity of
the noise operators, i.e., its quantum nature, while the $c_{\rm
class}$ term actually contains the information about the state
of the bath. In the case of a thermal bath, $g(0)$ is precisely
the temperature of the bath. At zeroth order in $r/l$, the
master equation for classical noise then acquires the form
\begin{equation}
\dot \rho= -i \big[ H_0,\rho\big]- \int_0^\infty d\tau c_{\rm
class}^{ij}(\tau) \big[h_{i},\big[h_{j},\rho\big]\big]\,.
\end{equation}
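The decohering action of the diffusion term can be made concrete with a toy numerical sketch (an illustration, not the gravitational bath itself: the two-level system, the choice $h=\sigma_z$ with $H_0=0$, and the constant diffusion coefficient $D$ are assumptions introduced here for this example). For $h=\sigma_z$ the double commutator gives $\dot\rho_{01}=-4D\rho_{01}$, so coherences decay exponentially while populations are untouched:

```python
# Toy model: classical-noise master equation for a two-level system,
#   drho/dt = -i[H0, rho] - D [h, [h, rho]],
# with H0 = 0 and h = sigma_z. The off-diagonal element then obeys
#   db/dt = -4 D b, so the coherence decays as exp(-4 D t).
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def evolve(rho, D, t, steps=20000):
    """Integrate drho/dt = -D [sz, [sz, rho]] with simple Euler steps."""
    dt = t / steps
    for _ in range(steps):
        comm = sz @ rho - rho @ sz          # [sz, rho]
        rho = rho - dt * D * (sz @ comm - comm @ sz)
    return rho

rho0 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]], dtype=complex)  # |+><+|
D, t = 0.1, 2.0
rho_t = evolve(rho0, D, t)
coherence = abs(rho_t[0, 1])           # numerically integrated decay
predicted = 0.5 * np.exp(-4 * D * t)   # analytic exp(-4 D t) law
```

The numerically integrated coherence reproduces the analytic $e^{-4Dt}$ decay, the hallmark of the diffusion term, while the trace (the populations) is preserved.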
\subsection{Low-energy effects}
Let us now analyze the general master equation, valid up to
second order in $\epsilon$, that takes into account the quantum
nature of the gravitational fluctuations. These contributions
will be fairly small in the low-energy regime, but may provide
interesting information about the higher-energy regimes in which
$l$ may be of the order of a few Planck lengths and for which
the weak-coupling approximation is still valid. In order to see
these contributions explicitly, let us further elaborate the
master equation. In terms of the operator $L_0$ defined as
$L_0\cdot A=\big[H_0,A\big]$ acting on any low-energy operator
$A$, the time dependent interaction $h^{\scriptscriptstyle\rm I}_j(-\tau)$ can
be written as
\begin{equation}
h^{\scriptscriptstyle\rm I}_j(-\tau)=e^{-iL_0\tau}h_j\,.
\end{equation}
The interaction $h_j$ can be expanded in eigenoperators
$h_{j\Omega}^{\pm}$ of the operator $L_0$, i.e.,
\begin{equation}
h_j=\int d\mu_\Omega\left(h_{j\Omega}^++h_{j\Omega}^-\right)\,,
\end{equation}
with $L_0\cdot h_{j\Omega}^{\pm}=\pm \Omega h_{j\Omega}^{\pm}$ and
$d\mu_\Omega$ being an appropriate spectral measure, which is
naturally cut off around the low-energy scale $l^{-1}$. This
expansion always exists provided that the eigenstates of $H_0$
form a complete set. Then, $h^{\scriptscriptstyle\rm I}_j(-\tau)$ can be
written as
\begin{equation}
h^{\scriptscriptstyle\rm I}_j(-\tau)=\int d\mu_\Omega (e^{-i\Omega \tau }
h^+_{j\Omega }+e^{i\Omega \tau } h^-_{j\Omega })\,.
\end{equation}
It is also convenient to define the new interaction operators
for each low-energy frequency $\Omega$
\begin{equation}
h^1_{j\Omega}=h^+_{j\Omega }-h^-_{j\Omega }\,,
\hspace{5mm}
h^2_{j\Omega}=h^+_{j\Omega }+h^-_{j\Omega }\,.
\end{equation}
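As a simple illustration (not part of the original derivation),
consider a two-level low-energy system with
$H_0=\frac{1}{2}\Omega_0\sigma_z$ and a single interaction
operator $h=\sigma_x$. Since
$L_0\cdot\sigma_\pm=\big[H_0,\sigma_\pm\big]=\pm\Omega_0\sigma_\pm$,
the eigenoperators are $h^\pm_{\Omega_0}=\sigma_\pm$ and the
spectral measure reduces to the single frequency $\Omega_0$, so
that
\begin{equation}
h^1_{\Omega_0}=\sigma_+-\sigma_-\,,
\hspace{5mm}
h^2_{\Omega_0}=\sigma_++\sigma_-=\sigma_x\,.
\end{equation}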
The quantum noise effects are reflected in the master equation
through the term proportional to $f^{ij}(\tau)$ and the term
proportional to $c^{ij}(\tau)$, both of them integrated over
$\tau\in (0,\infty)$. Because of these incomplete integrals,
each term provides two different kinds of contributions whose
origin can be traced back to the well-known formula
\begin{equation}
\int_0^\infty d\tau e^{i\omega\tau} =\pi\delta(\omega)+{\cal
P}(i/\omega)\,,
\end{equation}
where ${\cal P}$ is the Cauchy principal part \cite{rs72}.
The master equation can then be written in the following form
\begin{equation}
\dot \rho =-(iL_0'+L_{\rm diss}+L_{\rm diff}+
iL_{\rm s-l})\cdot\rho\,,
\end{equation}
where the meanings of the different terms are explained in what
follows.
The first term $-iL_0'\cdot\rho$, with
$L_0'\cdot\rho=\big[H_0',\rho\big]$, is responsible for the
renormalized low-energy Hamiltonian evolution. The
renormalization term is of order $\varepsilon^2$ as compared
with the low-energy Hamiltonian $H_0$, where $\varepsilon^2=
\epsilon^2\sum_{{\underline{i}}{\underline{j}}}
(\ell_*/l)^{\eta_{\underline{i}}+\eta_{\underline{j}}}$ and,
remember, $\eta_{\underline{i}}= 2n_{\underline{i}}
(1+s_{\underline{i}})-2$ is a parameter specific to each kind of
interaction term $h_i$.
The dissipation term
\begin{equation}
L_{\rm diss}\cdot\rho=\frac{\pi}{4}\int d\mu_\Omega \Omega
G^{ij}(\Omega) \big[ h_i,\big[h^1_{j\Omega},\rho\big]_+\big]
\end{equation}
is necessary for the preservation in time of the low-energy
commutators in the presence of quantum noise. As we have seen,
it is proportional to the commutator between the noise creation
and annihilation operators and, therefore, vanishes in the
classical noise limit. Its size is of order
$\varepsilon^2r/l^2$.
The diffusion process is governed by
\begin{equation}
L_{\rm diff}\cdot\rho=\frac{\pi}{2}\int d\mu_\Omega g(\Omega)
G^{ij}(\Omega)
\big[h_i,\big[h^2_{j\Omega},\rho\big]\big]\,,
\end{equation}
which is of order $\varepsilon^2/l$.
The next term provides an energy shift which can be interpreted
as a mixture of a gravitational ac Stark effect and a Lamb shift
by comparison with its quantum optics analog \cite{ga91,ca93}.
Its expression is
\begin{equation}
L_{\rm s-l} =-\int d\mu_\Omega {\cal P}
\int_{0}^{\infty } d\omega\frac{\Omega }{\omega^2-
\Omega^2 } G^{ij}(\omega)
\left\{g(\omega) \big[h_i,\big[h^1_{j\Omega},\rho\big]\big]
+\frac{\Omega}{2}
\big[h_i,\big[h^2_{j\Omega},\rho\big]_+\big]\right\}\,.
\end{equation}
The second term is of order $\varepsilon^2r^2/l^3$, which is
fairly small. However, the first term will provide a significant
contribution of order $\varepsilon^2r/l^2[\ln(l/r)+1]$. This
logarithmic dependence on the relative scale is indeed
characteristic of the Lamb shift \cite{ga91,ca93,it85}. As we
have argued, the function $g(\omega)$ must be fairly flat in the
whole range of frequencies up to the cutoff $1/r$ and be of
order $1/r$ in order to reproduce the appropriate correlations
$c^{ij}(\tau)$. A thermal bath, for instance, produces a
function $g(\omega)$ with the desired characteristics, at least
at the level of approximation that we are considering. In this
specific case, it can be seen that the logarithmic contribution
to the energy shift is not present and it would only appear in
the zero temperature limit. However, since we are modeling
spacetime foam with this thermal bath, the effective temperature
is $1/r$, which is close to Planck scale and certainly far from
zero. From the practical point of view, this logarithmic
contribution is at most an order of magnitude larger than the
standard one, so its presence or absence does not significantly
affect the results. Almost any other state of the
bath with a more or less uniform frequency distribution will
contain such logarithmic contribution.
As a summary, the $f$ term provides a dissipation part,
necessary for the preservation of commutators, and a fairly
small contribution to what can be interpreted as a gravitational
Lamb shift. On the other hand, the $c$ term gives rise to a
diffusion term and a shift in the oscillation frequencies of the
low-energy fields that can be interpreted as a mixture of a
gravitational Stark effect and a Lamb shift. The sizes of these
effects, compared with the bare evolution, are the following:
the diffusion term is of order $\varepsilon^2$ (see, however,
Refs. \cite{ellis97,ellis97b}); the damping term is smaller by a
factor $r/l$, and the combined effect of the Stark and Lamb
shifts is of order $(r/l)[\ln(l/r)+1]$ as compared with the
diffusion term. Note that the quantum effects induced by
spacetime foam become relevant as the low-energy length scale
$l$ decreases, as we see from the fact that these effects depend
on the ratio $r/l$, while, in this situation, the diffusion
process becomes faster, except for the mass of scalars, which
always decoheres in a time scale which is close to the
low-energy evolution time.
\subsection{Observational and experimental prospects}
These quantum gravitational effects are just energy shifts and
decoherence effects similar to those appearing in other areas of
physics, where fairly well established experimental procedures
and results exist, and which can indeed be applied here,
provided that sufficiently high accuracy can be achieved.
Neutral kaon beams have been proposed as experimental systems
for measuring loss of coherence owing to quantum gravitational
fluctuations \cite{el84,huet95,huet96,bf98}. In these systems,
the main experimental consequence of the diffusion term
(together with the dissipative one necessary for reaching a
stationary regime) is violation of CPT \cite{ha82,pa82} because
of the nonlocal origin of the effective interactions (see also
Refs. \cite{fivel97,ahluwalia99}). The estimates for this
violation are very close to the values accessible by current
experiments with neutral kaons and will be within the range of
near-future experiments. Macroscopic neutron interferometry
\cite{el84,ze84} provides another kind of experimental systems
in which the effects of the diffusion term may have measurable
consequences since they may cause the disappearance of the
interference fringes \cite{el84,ze84}.
As for the gravitational Lamb and Stark effects, they are energy
shifts that depend on the frequency, so that different
low-energy modes will undergo different shifts. This translates
into a modification of the dispersion relations, which makes the
velocity of propagation frequency-dependent, as if low-energy
fields propagated in a ``medium''. Therefore, upon arrival at
the detector, low-energy modes will experience different time
delays (depending on their frequency) as compared to what could
be expected in the absence of quantum gravitational
fluctuations. These time delays in the detected signals will be
very small in general. However, it might still be possible to
measure them if we make the low-energy particles travel large
(cosmological) distances. In fact, $\gamma$-ray bursts provide
such a situation as has been recently pointed out \cite{am98}
(see also Refs. \cite{am98b,schaefer98,biller98}), thus opening
a new doorway to possible observations of these quantum
gravitational effects. These authors assume that the dispersion
relation for photons has a linear dependence on $r/l$ because of
quantum gravitational fluctuations, i.e., that the speed of
light is of the form $v\sim 1+\zeta r/l$, with $\zeta$ being an
unknown parameter of order 1 (see also Ref. \cite{gambini98}).
In this situation, photons that travel a distance $L$ will show
a frequency-dependent time delay $\Delta t\sim \zeta L r/l$.
Using data from a $\gamma$-ray flare associated with the active
galaxy Markarian 421 \cite{biller98,gaidos96} which give
$l^{-1}\sim 1$TeV, $L\sim 1.1\times 10^{16}$ light-seconds, and
a variability time scale $\delta t$ less than 280 seconds, the
upper bound $\zeta r/\ell_*<250$ can be obtained. If $\zeta$
is indeed of order 1, this inequality implies an upper limit on
the scale $r$ of the gravitational fluctuations of a few hundred
Planck lengths. One would then expect that the presence of the
gravitational Lamb and Stark shifts predicted above could be
observationally tested. However, in spacetime foam the role of
the parameter $\zeta$ is played by $\varepsilon^2$ and this
quantity is much smaller than 1, since it contains two factors
which are smaller than 1 for different reasons. The first one is
$e^{-S(r)}(r/\ell_*)^2$. In the semiclassical approximation to
nonperturbative quantum gravity, this exponential can be
interpreted as the density of topological fluctuations of size
$r$, which decreases with $r$ fairly fast. The second factor is,
for the electromagnetic field, of the form $(\ell_*/l)^4$; it
comes from the spin dependence of the effective interactions and
is closely related to the existence of a length scale in quantum
gravity. Then, $\varepsilon^2$ in this case may be so small that
it renders any bound on the size of quantum spacetime foam
effects on the electromagnetic field nonrestrictive.
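The quoted bound can be reproduced at the order-of-magnitude level with a short numerical estimate (the values of $\hbar$ and the Planck time below are standard constants inserted here for illustration; the observational data are those cited above, and the result should only be trusted to within factors of order unity):

```python
# Order-of-magnitude check of the Markarian 421 bound zeta * r/l_* < O(10^2):
# from Delta_t ~ zeta * L * r / l one gets
#   zeta * r/l_* < delta_t * l / (L * t_Planck).
hbar_GeV_s = 6.582e-25       # hbar in GeV*s
t_planck = 5.391e-44         # Planck time in s
l = hbar_GeV_s / 1000.0      # low-energy time scale of 1 TeV photons, in s
L = 1.1e16                   # distance to Markarian 421 in light-seconds
delta_t = 280.0              # variability time scale in s
bound = delta_t * l / (L * t_planck)
# bound comes out at a few hundred, consistent with the quoted ~250
```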
\section{Real clocks}
\indent
In previous sections, we have analyzed the evolution of
low-energy fields in the bath of quantum gravitational
fluctuations that constitute spacetime foam. Here we will
briefly discuss the evolution of physical systems when measured
by real clocks, which are generally subject to errors and
fluctuations, in contrast with ideal clocks which, although they
would accurately measure the time parameter that appears in the
Schr\"{o}dinger equation, do not exist in nature (see, e.g., Refs.
\cite{wigner57,salecker58,peres80,page83,hartle88,unruh89b}).
The evolution according to real clocks bears a close resemblance
to that of low-energy fields propagating in spacetime foam, although
there also exist important differences which will be discussed
at the end of this section.
Quantum real clocks inevitably introduce uncertainties in the
equations of motion, as has been widely discussed in the
literature from various points of view (see, e.g., Refs.
\cite{wigner57,salecker58,peres80,page83,hartle88,unruh89b}).
Actually, real clocks are not only subject to quantum
fluctuations. They are also subject to classical imperfections,
small errors, that can only be dealt with statistically. For
instance, an unavoidable classical source of stochasticity is
temperature, which will introduce thermal fluctuations in the
behavior of real clocks. Thus, the existence of ideal clocks is
also forbidden by the third law of thermodynamics. Even at
zero temperature, the quantum vacuum fluctuations of quantum
field theory make propagating physical systems (real clocks
among them) undergo a cold diffusion, which again calls for
a stochastic description of their evolution \cite{gour98}.
Let us study, within the context of the standard quantum theory,
the evolution of an arbitrary system according to a real clock
\cite{egusquiza98}.
\subsection{Good real clocks}
A real clock will be a system with a degree of freedom $t$ that
closely follows the ideal time parameter $t_{\rm i}$, i.e.,
$t_{\rm i}=t+\Delta(t)$, where $\Delta(t)$ is the error at the
real-clock time $t$. Given any real clock, its characteristics
will be encoded in the probability functional distribution for
the continuous stochastic processes $\Delta(t)$
\cite{kampen81,gardiner85} of clock errors, ${\cal
P}[\Delta(t)]$, which must satisfy appropriate conditions, so
that it can be regarded as a good clock.
A first property is that Galilean causality should be preserved,
i.e., that causally related events should always be properly
ordered in clock time as well, which implies that $t_{\rm i}
(t')>t_{\rm i}(t)$ for every $t'>t$. In terms of the derivative
$\alpha(t)=d\Delta(t)/dt$ of the stochastic process $\Delta(t)$,
we can state this condition as requiring that, for any
realization of the stochastic sequence, $\alpha(t)>-1$.
A second condition that we would require good clocks to fulfill
is that the expectation value of relative errors, determined by
the stochastic process $\alpha(t)$, be zero, i.e.,
$\langle\alpha(t)\rangle=0$ for all $t$. Furthermore, a good
clock should always behave in the same way (in a statistical
sense). We can say that the clock behaves consistently in time
as a good one if those relative errors $\alpha(t)$ are
statistically stationary, i.e., the probability functional
distribution ${\cal P}[\alpha(t)]$ for the process of relative
errors $\alpha(t)$ (which can be obtained from ${\cal P}[\Delta(t)]$,
and vice versa) must not be affected by global shifts $t\to
t+t_0$ of the readout of the clock. Note that the stochastic
process $\Delta(t)$ need not be stationary, despite the
stationarity of the process $\alpha(t)$.
The one-point probability distribution function for the
variables $\alpha(t)$ should be highly concentrated around the
zero mean, if the clock is to behave nicely, i.e.,
\begin{equation}
\langle\alpha(t)\alpha(t-\tau)\rangle\equiv c(\tau)\leq c(0)\ll
1 \,,
\end{equation}
where $c(\tau)=c(-\tau)$.
The correlation time $\vartheta$ for the stochastic process
$\alpha(t)$ is given by
\begin{equation}
\vartheta=\int_{0}^{\infty}d\tau\, c(\tau)/c(0)\,.
\end{equation}
For convenience, let us introduce a new parameter $\kappa$ with
dimensions of time, defined as $\kappa^2=c(0)\vartheta^2$ and
for which the good-clock conditions imply $\kappa\ll\vartheta$.
As we shall see, $\vartheta$ cannot be arbitrarily large, and,
therefore, the ideal clock limit is given by $\kappa\to0$.
In addition to these properties, a good clock must have enough
precision in order to measure the evolution of the specific
system, which imposes further restrictions on the clock. On the
one hand, the characteristic evolution time $l$ of the system
must be much larger than the correlation time $\vartheta$ of the
clock. On the other hand, the leading term in the asymptotic
expansion of the variance $\langle\Delta(t)^2\rangle$ for large
$t$ is of the form $\kappa^2 t/\vartheta$ which means that,
after a certain period of time, the absolute errors can be too
large. The maximum admissible standard deviation in $\Delta(t)$
must be at most of the same order as $l$. Then the period of
applicability of the clock to the system under study, i.e., the
period of clock time during which the errors of the clock are
smaller than the characteristic evolution time of the system is
approximately equal to $l^2\vartheta/\kappa^2$. For a good
clock, $\kappa\ll\vartheta\ll l$, as we have seen, so that the
period of applicability is much larger than the characteristic
evolution time $l$.
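The random-walk growth of the variance $\langle\Delta(t)^2\rangle\sim\kappa^2 t/\vartheta$ can be checked with a toy simulation (an illustrative assumption introduced here: relative errors are modeled as independent Gaussian variables over successive blocks of duration $\vartheta$, the crudest realization of a process with correlation time $\vartheta$):

```python
# Toy clock: relative errors alpha are constant over blocks of length theta
# (the correlation time) and independent between blocks, with variance c0.
# The accumulated error Delta(t) then performs a random walk with
#   <Delta(t)^2> = c0 * theta * t = kappa^2 * t / theta,
# where kappa^2 = c0 * theta^2.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                        # correlation time of the clock
c0 = 1.0e-4                        # c(0) << 1 for a good clock
kappa = np.sqrt(c0) * theta
n_blocks, n_runs = 1000, 5000
alpha = rng.normal(0.0, np.sqrt(c0), size=(n_runs, n_blocks))
delta = theta * alpha.sum(axis=1)  # Delta(t) after t = n_blocks * theta
t = n_blocks * theta
var_predicted = kappa**2 * t / theta
var_simulated = delta.var()
```

The simulated variance of the accumulated error matches the predicted linear growth, which is what limits the period of applicability of the clock to roughly $l^2\vartheta/\kappa^2$.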
\subsection{Evolution laws}
We shall now obtain the evolution
equation for the density matrix of an arbitrary quantum system
in terms of the clock time $t$. Let $H$ be the time-independent
Hamiltonian of the system and $S$ its action in the absence of
errors.
For any given realization of the stochastic process $\alpha(t)$
that characterizes a good clock, we can write the density matrix
at the time $t$, $\rho_\alpha(t)$, in terms of the initial
density matrix $\rho(0)$ as
\begin{equation}
\rho_\alpha(t)=\$_\alpha(t)\cdot \rho(0)\,,
\end{equation}
where the density matrix propagator $\$_\alpha(t)$ has the
form
\begin{equation}
\$_\alpha(t)=\int {\cal D}q {\cal D}q'
e^{iS_\alpha[q;t]-iS_\alpha[q';t]}\,.
\end{equation}
Here, $S_\alpha[q;t]=S[q;t]-\int_0^t ds\alpha(s) H[q(s)]$ is the
action of the system for the given realization of the stochastic
process $\alpha(t)$.
The average of the density matrix $\rho_\alpha(t)$ can be
regarded as the density matrix of the system $\rho(t)$ at the
clock time $t$:
\begin{equation}
\rho(t)=\int {\cal D}\alpha {\cal P}[\alpha] \$_\alpha(t)
\cdot \rho(0)\,.
\end{equation}
In the good-clock approximation, only the two-point correlation
function $c(\tau)$ is relevant, so that we can write the
probability functional as a Gaussian distribution. The
integration over $\alpha(t)$ is then easily performed to obtain
the influence action ${\cal W}$
\begin{equation}
{\cal W}[q,q';t]=-\frac{1}{2}\int_0^t ds\int_0^s ds'
\{H[q(s)]-H[q'(s)]\}c(s-s')\{H[q(s')]-H[q'(s')]\}\,.
\end{equation}
We see that there is no dissipative term, as is to be expected
from the fact that the noise source is classical
\cite{fv63,fh65}. Moreover, as the interaction term is
proportional to $H$, there is no response of the system to the
outside noise, which means that the associated impedance is
infinite \cite{callen51,ga91,mandel95}.
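The Gaussian integration that produces this bilocal influence action can be checked in a discretized toy model: for Gaussian noise $\alpha$ with covariance $c(s-s')$, the average of the phase factor equals the exponential of a quadratic form in $H[q(s)]-H[q'(s)]$. A Monte Carlo sketch, with an assumed exponential correlation function and an arbitrary profile $h(s)$ standing in for the Hamiltonian difference:

```python
import numpy as np

# Discrete check of the Gaussian identity behind the influence action:
#   < exp(-i dt sum_j alpha_j h_j) > = exp(-(dt^2/2) sum_jk h_j c_jk h_k),
# whose continuum limit is the bilocal W[q,q';t] quoted above.
# Grid, correlation function, and h are illustrative choices.
rng = np.random.default_rng(0)
n, dt, vartheta, kappa = 8, 0.1, 0.5, 0.3
t = dt * np.arange(n)
C = (kappa**2 / vartheta) * np.exp(-np.abs(t[:, None] - t[None, :]) / vartheta)
h = np.sin(t)  # stands in for H[q(s)] - H[q'(s)]

samples = rng.multivariate_normal(np.zeros(n), C, size=200_000)
lhs = np.mean(np.exp(-1j * (samples @ h) * dt))   # Monte Carlo average
rhs = np.exp(-0.5 * (h @ C @ h) * dt**2)          # quadratic (bilocal) form
assert abs(lhs - rhs) < 1e-2
```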
Therefore, we see that the effect of using good real clocks for
studying the evolution of a quantum system is the appearance of
an effective interaction term in the action integral which is
bilocal in time. This can be understood as the first term in a
multilocal expansion, which corresponds to the weak-field
expansion of the probability functional around the Gaussian
term. This nonlocality in time admits a simple interpretation:
correlations between relative errors at different instants of
clock time can be understood as correlations between clock-time
flows at those clock instants. The clock-time flow of the system
is governed by the Hamiltonian and, therefore, the correlation
of relative errors induces an effective interaction term,
generically multilocal, that relates the Hamiltonians at
different clock instants.
From the form of the influence action, it is not difficult to
see that, in the Markov approximation and provided that the
system evolves for a time smaller than the period of
applicability of the clock, the density matrix $\rho(t)$
satisfies the master equation
\begin{equation}
\dot\rho(t)=-i\big[H,\rho(t)\big]-(\kappa^2/\vartheta)
\big[H,\big[H,\rho(t)\big]\big]\,,
\end{equation}
where the overdot denotes derivative with respect to the clock
time $t$. Notice that, in the ideal clock limit, $\kappa\to0$,
the unitary von Neumann equation is recovered. We should also
point out that irreversibility appears because the errors of the
clock cannot be eliminated once we have started using it.
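A minimal illustration of this master equation, for a hypothetical two-level system with $H=\mathrm{diag}(0,\Delta E)$ and diffusion constant $D=\kappa^2/\vartheta$ (values arbitrary): the populations are exactly conserved, while the coherence rotates and decays at the rate $D\,\Delta E^2$:

```python
import numpy as np

# Two-level sketch (not from the text): evolve
#   drho/dt = -i[H,rho] - D [H,[H,rho]]
# by explicit Euler steps; analytically
#   rho_01(t) = rho_01(0) e^{i dE t} e^{-D dE^2 t}.
dE, D, dt, steps = 1.0, 0.05, 1e-4, 20_000
H = np.diag([0.0, dE]).astype(complex)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # pure state |+><+|

def drho(rho):
    comm = H @ rho - rho @ H
    return -1j * comm - D * (H @ comm - comm @ H)

for _ in range(steps):
    rho = rho + dt * drho(rho)

t = dt * steps
expected = 0.5 * np.exp(1j * dE * t) * np.exp(-D * dE**2 * t)
assert abs(rho[0, 1] - expected) < 1e-3   # coherence decays as predicted
assert abs(rho[0, 0] - 0.5) < 1e-9        # populations (pointer basis) fixed
```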
From a different point of view, the clock can be effectively
modeled by a thermal bath, with temperature $T_{\rm b}$ to be
determined, coupled to the system. Let $H+H_{\rm int}+H_{\rm b}$
be the total Hamiltonian, where $H$ is the free Hamiltonian of
the system and $H_{\rm b}$ is the Hamiltonian of a bath that
will be represented by a collection of harmonic oscillators
\cite{ga91,mandel95}. The interaction Hamiltonian will be of the
form $H_{\rm int}=\xi H$, where the noise operator $\xi$ is
given by
\begin{equation}
\xi(t)=\frac{i}{\sqrt{2\pi}}\int_0^\infty d\omega
\chi(\omega)
[ a^{\dag}(\omega) e^{i\omega t}-a(\omega) e^{-i\omega t}]\,.
\end{equation}
In this expression, $a$ and $a^{\dag}$ are, respectively, the
annihilation and creation operators associated with the bath,
and $\chi(\omega)$ is a real function, to be determined, that
represents the coupling between the system and the bath for each
frequency $\omega$.
Identifying, in the classical noise limit, the classical
correlation function of the bath with $c(\tau)$, we find that the
suitable coupling between the system and the bath is given by the
spectral density of fluctuations of the clock:
\begin{equation}
T_{\rm b}\chi(\omega)^2=\int_{0}^\infty d\tau
c(\tau)\cos(\omega\tau)\,.
\end{equation}
With this choice, the master equation for evolution according to
real clocks is identical to the master equation for the system
obtained by tracing over the effective bath.
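As an assumed concrete example (the text leaves $c(\tau)$ generic), take an exponential correlation $c(\tau)=(\kappa^2/\vartheta)e^{-\tau/\vartheta}$; the defining integral then evaluates in closed form to $T_{\rm b}\chi(\omega)^2=\kappa^2/(1+\omega^2\vartheta^2)$, a Lorentzian cut off at the inverse correlation time. A numerical sketch of this check:

```python
import numpy as np

# Assumed exponential clock correlation (illustrative, not from the text):
#   c(tau) = (kappa^2/vartheta) exp(-tau/vartheta)
# Numerically verify  int_0^inf c(tau) cos(omega tau) dtau
#                   = kappa^2 / (1 + omega^2 vartheta^2).
kappa, vartheta = 0.2, 0.5

def spectral_density(omega, n=200_001, tmax=50.0):
    tau = np.linspace(0.0, tmax, n)
    f = (kappa**2 / vartheta) * np.exp(-tau / vartheta) * np.cos(omega * tau)
    # trapezoidal rule (kept explicit for portability across NumPy versions)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau)))

for omega in (0.0, 1.0, 3.0):
    closed_form = kappa**2 / (1.0 + (omega * vartheta) ** 2)
    assert abs(spectral_density(omega) - closed_form) < 1e-6
```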
\subsection{Loss of coherence}
The master equation contains a diffusion term and will therefore
lead to loss of coherence. However, this loss depends on the
initial state. In other words, there exists a pointer basis
\cite{zurek81,zurek82,zurek94}, so that any density matrix which
is diagonal in this specific basis will not be affected by the
diffusion term, while any other will approach a diagonal density
matrix. The stochastic perturbation $\alpha(t)H$ is obviously
diagonal in the basis of eigenstates of the Hamiltonian, which
is therefore the pointer basis: the interaction term cannot
induce any transition between different energy levels. The
smallest energy difference provides the inverse of the
characteristic time for the evolution of the system $l$ and,
therefore, the decay constant is $\kappa^2/\vartheta l^2$, equal
to the inverse of the period of applicability of the clock. By
the end of this period, the density matrix will have been
reduced to the diagonal terms and a much diminished remnant of
those off-diagonal terms with slow evolution. In any case, the
von Neumann entropy grows if the density matrix is not initially
diagonal in the energy basis.
The decoherence effect due to the errors of real clocks does not
turn up only in the quantum context. Consider for instance a
classical particle with a definite energy moving under a
time-independent Hamiltonian $H$. Because of the errors of the
clock, we cannot be certain about the location of the particle
in its trajectory on phase space at our clock time $t$.
Therefore we have an increasing spread in the coordinate and
conjugate momentum over the trajectory. For a generic system,
this effect is codified in the classical master equation
\begin{equation}
\dot\varrho=\big\{H,\varrho\big\}+
(\kappa^2/\vartheta)\big\{H,\big\{H,\varrho\big\}\big\}\,,
\end{equation}
where $\varrho(t)$ is the probability distribution on phase
space in clock time. Finally, it should be observed that the
mechanism of decoherence is neither tracing over degrees of
freedom, nor coarse graining, nor dephasing
\cite{giulini96,cooper97}. Even though there is no integration
over time introduced here by fiat, as happens in dephasing in
quantum mechanics, the spread in time due to the errors of the
clock has a similar effect, and produces decoherence.
\subsection{Real clocks and spacetime foam}
As we have seen, there exist strong similarities between the
evolution in spacetime foam and that in quantum mechanics with
real clocks. In both cases, the fluctuations are described
statistically and induce loss of coherence. However, there are
some major differences. In the case of real clocks, the
diffusion term contains only the Hamiltonian of the system
while, in the spacetime foam analysis, a plethora of
interactions appeared. Closely related to this, fluctuations of
the real clock affect in very similar ways to both classical and
quantum evolution; this is not the case in spacetime foam. The
origin of these differences is the nature of the fluctuations
that we are considering and, more specifically, the existence or
not of horizons. Indeed, when studying real clocks, we have
ensured that they satisfied Galilean causality, i.e., that the
real-time parameter always grows as compared with the ideal
time, so that no closed timelike curves are allowed in Galilean
spacetime, whichever clock we are using. This requirement is in
sharp contrast with the situation that we find in spacetime
foam, where we have to consider topological fluctuations that
contain horizons (virtual black holes, time machines, etc.).
Scattering processes in a spacetime with horizons are
necessarily of quantum nature. A classical scattering process in
the presence of these horizons would inevitably lead to loss of
probability because of the particles that would fall inside the
horizons and would never come out to the asymptotic region.
In other words, the underlying dynamics is completely different
in both cases. Spacetime foam provides a non-Hamiltonian
dynamics since the underlying manifold is not globally
hyperbolic. On the other hand, in the case of quantum mechanics
according to clocks subject to small errors, the underlying
evolution is purely Hamiltonian, although the effective one is
an average over all possible Hamiltonian evolutions and becomes
nonunitary.
\section{Conclusions}
\indent
Quantum fluctuations of the gravitational field may well give
rise to the existence of a minimum length in the Planck scale
\cite{ga95}. This can be seen, for instance, by making use of
the fact that measurements and vacuum fluctuations of the
gravitational field are extended both in space and time and can
therefore be treated with the techniques employed for continuous
measurements, in particular the action uncertainty principle
\cite{me92}. The existence of this resolution limit spoils the
metric structure of spacetime at the Planck scales and opens a
doorway to nontrivial topologies \cite{mt73}, which will not
only contribute to the path integral formulation but will also
dominate the Planck scale physics thus endowing spacetime with a
foamlike structure \cite{wh57} with very complicated topology.
Indeed, at the Planck scale, both the partition function and the
density of topologies seem to receive the dominant contribution
from topological configurations with very high Betti numbers
\cite{ha78,1ca97}.
Spacetime foam may leave its imprint in the low-energy physics.
For instance, it can play the role of a universal regulator for
both the ultraviolet \cite{crane86} and infrared \cite{magnon88}
divergences of quantum field theory. It has also been proposed
as the key ingredient in mechanisms for the vanishing of the
cosmological constant \cite{1ca97,coleman88b}. Furthermore, it
seems to induce loss of coherence \cite{ha82} in the low-energy
quantum fields that propagate on it as well as mode-dependent
energy shifts \cite{1ga98}. In order to study some of these
effects in more detail, we have built an effective theory in
which spacetime foam has been substituted by a fixed classical
background plus nonlocal interactions between the low-energy
fields confined to bounded spacetime regions of nearly Planck
size \cite{1ga98}. In the weak-coupling approximation, these
nonlocal interactions become bilocal. The low-energy evolution
is nonunitary because of the absence of a nonvanishing timelike
Hamiltonian vector field. The nonunitarity of the bilocal
interaction can be encoded in a quantum noise source locally
coupled to the low-energy fields. From the form of the influence
functional that accounts for the interaction with spacetime
foam, we have derived a master equation for the evolution of the
low-energy fields which contains a diffusion term, a damping
term, and energy shifts that can be interpreted as gravitational
Lamb and Stark effects. We have also discussed the size of these
effects as well as the possibility of observing them in the near
future.
We have seen that the evolution of quantum systems according to
good real clocks \cite{egusquiza98} is quite similar to that in
spacetime foam. Indeed, we have argued that good classical
clocks, which are naturally subject to fluctuations, can be
described in statistical terms and we have obtained the master
equation that governs the evolution of quantum systems according
to these clocks. This master equation is diffusive and produces
loss of coherence. Moreover, real clocks can be described in
terms of effective interactions that are nonlocal in time.
Alternatively, they can be modeled by an effective thermal bath
coupled to the system. In view of this analysis, we have seen
that, although there exist strong similarities between
propagation in spacetime foam and according to real clocks,
there are also important differences that come from the fact
that the underlying evolution laws for spacetime foam are
nonunitary because of the presence of horizons while, in the
case of real clocks, the underlying evolution is unitary and the
loss of coherence is due to an average over such Hamiltonian
evolutions.
\section*{Acknowledgments}
\indent
I am grateful to C. Barcel\'{o} and P.F. Gonz\'{a}lez-D\'{\i}az for helpful
discussions and for reading the manuscript. I was supported by
funds provided by DGICYT and MEC (Spain) under Projects
PB93--0139, PB94--0107, and PB97--1218.
\section{Introduction}
Two broad questions often occur in problems related to condensed matter systems: What sort of gapless phases can arise from finite density or charge density states? What kind of physics can emerge at a quantum critical point? In the absence of sharp quasiparticles or weakly coupled effective degrees of freedom in the infrared (IR), answering these questions can be a hard task and conventional field theory methods might fail. In the last decade a new powerful tool to tackle strongly coupled field theories has been developed: AdS/CFT (alias gauge/gravity correspondence alias holography) \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj} (see also the review \cite{Aharony:1999ti}). It is natural to wonder what such a method can say about the aforementioned problems. Other reviews, besides the present one, that explore the issue at length are \cite{Hartnoll:2009sz, Herzog:2009xv, McGreevy:2009xe, Horowitz:2010gk, Hartnoll:2011fn}. Here we will focus on the low-temperature limit of simple holographic models, on some models of non-Fermi liquids and on a relation to impurity models.
AdS/CFT is a correspondence between a conformal (non-gravitational) quantum field theory in $d$ dimen\-sions---that we will call the ``boundary theory''---and a quantum theory of gravity on a $d+1$-dimensional background which is asymptotically AdS$_{d+1}$---called the ``bulk theory'' (we refer the reader to the literature \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj, Aharony:1999ti}). The fact that the correspondence, strictly speaking, only applies to conformal boundary theories is not a harmful limitation because on the one hand in condensed matter problems we are often interested in the IR physics around a (quantum) critical point regardless of its UV completion, and on the other hand we will consider systems with finite charge density and possibly non-vanishing order parameters where the UV conformality of the boundary theory is anyway broken (while a new scaling symmetry might emerge in the IR).
To each operator of the boundary theory corresponds a field in the bulk. For instance the stress tensor $T_{\mu\nu}$ corresponds to the graviton $g_{\mu\nu}$, a conserved current $J_\mu$ corresponds to a gauge field $A_\mu$, and a scalar order parameter $\Phi$ corresponds to a scalar field $\varphi$. The statement of the correspondence is that the boundary theory partition function $Z[J]$, as a function of sources $J$ coupled to the operators $\mathcal{O}$ through the action $S_\mathcal{O} = \int d^dx\, J\mathcal{O}$, equals the bulk partition function with the boundary condition that fields asymptote to the sources. For instance, for a scalar operator $\Phi$ of dimension $\Delta$ the boundary condition around $r \to 0$ is:
\begin{equation}} \newcommand{\ee}{\end{equation}
\varphi \,\sim\, J \, r^{d-\Delta} + \langle \Phi \rangle\, r^\Delta \;,
\ee
where $r$ is a ``radial'' coordinate that goes to zero at the boundary and is positive in the interior of AdS$_{d+1}$. The vacuum expectation value (VEV) $\langle \Phi \rangle$ is not part of the boundary conditions, but rather is read off from the gravity solution. The partition function of the bulk quantum theory of gravity can be very difficult to define. However in full-fledged examples the boundary theory is usually a non-Abelian gauge theory with $N$ colors, and in a large $N$ and large 't Hooft coupling limit the gravity theory becomes classical. In such a limit the correspondence states that
\begin{equation}} \newcommand{\ee}{\end{equation}
Z_\text{boundary}[J] = e^{iS_\text{bulk}[\varphi_0]} \Big|_{\varphi_0 \,\sim\, J\, r^{d-\Delta} + \langle \Phi \rangle\, r^\Delta}
\ee
where $\varphi_0$ is a classical solution of the bulk equations of motion (EOMs) subject to the boundary conditions.
The correspondence can be extended to boundary theories at finite temperature $T$: in this case the gravity solution contains a black hole (BH) whose horizon has temperature $T$. Extra regularity boundary conditions have to be imposed at the horizon, depending on the signature and the type of correlators one is interested in. On the other hand we are interested in finite (charge) density states. The easiest way to obtain a finite charge density state is to start with a theory with a symmetry, say a $U(1)$ symmetry for simplicity, and introduce a chemical potential $\mu$, that is, a source term $\mu J^0$ in the Lagrangian. This is achieved by imposing the corresponding boundary condition on the gauge field $A_0$.
\subsection{Emergent gauge fields}
\label{sec: emergent gauge}
In most condensed matter problems the only gauge field present is the photon, a $U(1)$ gauge field. On the contrary, AdS/CFT becomes most computable when the boundary theory is in the large $N$ limit. Indeed AdS/CFT is exploited by considering a CFT with a \emph{global} U(1) symmetry---the photon is thus a spectator that might be weakly gauged at the end of the day---and a large $N$ strongly coupled \emph{gauge} symmetry which keeps the system at strong coupling. Such gauge fields, in relation to the original condensed matter problem, have to be thought of as emergent IR degrees of freedom, not visible outside the fixed point. As part of our motivations, we would like to give one concrete example of an emergent gauge field.
The following example has been studied in \cite{Senthil:2004a, Senthil:2004b} and we will follow \cite{Sachdev:2007}. Consider a square lattice spin-half model, with Hamiltonian:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Hamiltonian spin}
H = J \sum_{\langle ij \rangle} \vec S_i \cdot \vec S_j - Q \sum_{\langle ijkl \rangle} \Big( \vec S_i \cdot \vec S_j - \frac14 \Big) \Big( \vec S_k \cdot \vec S_l - \frac14 \Big) \;,
\ee
where $J,Q>0$ are coupling constants and $\vec S_i$ are three-dimensional spins. The first summation is over nearest neighbor sites, whilst the second is over squared plaquettes. The system has a $\mathbb{Z}_4$ symmetry that rotates the square lattice, and a $spin(3)$ symmetry that rotates the spins.
To begin with, consider the limit $Q/J \to 0$: the system asymptotes to the isotropic Heisenberg antiferromagnet, whose ground state has N\'eel order
\begin{equation}} \newcommand{\ee}{\end{equation}
\langle \vec S_i \rangle = (-1)^i \vec \Phi \;,
\ee
where we mean that signs are alternating, and breaks the spin rotation symmetry. The low energy excitations are spin density waves of $\vec\Phi$ (spin-triplets), described by a mean-field IR effective Lagrangian. With a change of variables we can represent the vector $\Phi^a$ as a bi-spinor
\begin{equation}} \newcommand{\ee}{\end{equation}
\Phi^a = z^*_\alpha \sigma^a_{\alpha\beta} z_\beta \;,
\ee
where $z_\alpha$ is a complex spinor and $\sigma^a_{\alpha\beta}$ are gamma-matrices. The new degrees of freedom are redundant, though, as a phase rotation of $z_\alpha$ does not affect $\Phi^a$. We should then gauge such phase away, with the introduction of a $U(1)$ gauge field $A_\mu$. The effective IR Lagrangian is:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Lag emergent gauge}
\mathcal{L}_{z,A} = - \big| (\partial - iA) z \big|^2 - s |z|^2 - u |z|^4 - \frac1{2e^2} F^2 \;,
\ee
with $u>0$ and $s$ below a critical value $s_c$. Therefore $z_\alpha$ condenses, the gauge field is Higgsed and it might be integrated out: at this stage the introduction of $A_\mu$ looks like an unnecessary formal manipulation.
On the contrary, consider the limit $Q/J \to \infty$. The second term in \eqref{Hamiltonian spin} favors the arrangement of the spins in neighboring spin-singlet pairs---the ground state has valence-bond-solid (VBS) order which breaks the $\mathbb{Z}_4$ lattice rotation symmetry (while preserving spin rotation symmetry). The order parameter is the operator:
\begin{equation}} \newcommand{\ee}{\end{equation}
\Psi = (-1)^{j_x} S_j \cdot S_{j+ \hat x} + i(-1)^{j_y} S_j \cdot S_{j+ \hat y}
\ee
where $j_{x,y}$ are the lattice coordinates. The low energy excitations come from breaking a spin-singlet into two free spins $z_\alpha$, and are again described by the Lagrangian \eqref{Lag emergent gauge} with $s$ above a critical value $s_c$, if we identify the $U(1)$ topological symmetry (shift $\zeta \to \zeta + \delta$ of the scalar $\zeta$ dual to $A_\mu$) with the lattice rotation symmetry, and include a term $\mathcal{L}_\zeta \sim \cos \frac{8\pi \zeta}{e^2}$ in the Lagrangian so to explicitly break such topological $U(1)$ to $\mathbb{Z}_4$. Such term makes the dual photon $\zeta$ massive. Moreover $\Psi$ can be identified with the monopole operator.
Now \cite{Senthil:2004a, Senthil:2004b} observe that if we tune $s$ to the critical value $s_c$, the extra term $\mathcal{L}_\zeta$ becomes irrelevant and both $z,A_\mu$ stay massless at the critical point. They provide the effective description of \eqref{Hamiltonian spin} at the critical value of $Q/J$, and $A_\mu$ is an emergent gauge field (not present on either side of the transition).
\section{Finite density states and AdS$_2$}
\label{sec: AdS2}
In the following we will focus on $2+1$-dimensional boundary theories, corresponding to a gravity theory in asymptotically AdS$_4$. The minimal setup to describe finite charge density states includes the graviton $g_{\mu\nu}$ and a $U(1)$ gauge field $A_\mu$. We will consider the simple Lagrangian:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Lagrangian basic setup}
\mathcal{L} = \frac1{2\kappa^2} \Big( R + \frac6{L^2} \Big) - \frac1{4e^2} F_{\mu\nu} F^{\mu\nu} + \dots
\ee
where dots stand for other fields that will play a r\^ole later on. Imposing the boundary conditions for chemical potential $\mu$ and temperature $T$, the solution is the Reissner-Nordstr\"om-AdS black hole:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{RNAdS BH}
ds^2 = \frac{L^2}{r^2} \Big( -f(r) dt^2 + \frac{dr^2}{f(r)} + dx^2 + dy^2 \Big) \;,\qquad A = \mu \Big( 1 - \frac r{r_+} \Big) \, dt
\ee
where $r$ is a ``radial'' coordinate that vanishes at the boundary and is positive in the interior, $f(r)$ is a blackening function and $r_+$ is the position of a horizon:
\begin{equation}} \newcommand{\ee}{\end{equation}
f(r) = 1 - \Big( 1 + \frac{r_+^2 \mu^2}{2\gamma^2} \Big) \Big( \frac r{r_+} \Big)^3 + \frac{r_+^2 \mu^2}{2\gamma^2} \Big( \frac r{r_+} \Big)^4 \;,\qquad T = \frac1{4\pi r_+} \Big( 3 - \frac{r_+^2 \mu^2}{2\gamma^2} \Big) \;.
\ee
We introduced the parameter $\gamma \equiv eL/\kappa$.
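The quoted blackening function and temperature can be cross-checked numerically: $f(r_+)=0$, and the standard surface-gravity formula $T=-f'(r_+)/4\pi$ (appropriate to these coordinates, in which $f$ decreases through zero at the horizon) reproduces the expression above. The parameter values below are arbitrary illustrative choices:

```python
import numpy as np

# Consistency check of the RN-AdS blackening function and temperature.
mu, gamma, rp = 1.3, 0.9, 0.7
a = rp**2 * mu**2 / (2 * gamma**2)       # dimensionless charge parameter

def f(r):
    return 1 - (1 + a) * (r / rp) ** 3 + a * (r / rp) ** 4

eps = 1e-6
fprime = (f(rp + eps) - f(rp - eps)) / (2 * eps)   # numerical f'(r_+)
T_from_fprime = -fprime / (4 * np.pi)
T_formula = (3 - a) / (4 * np.pi * rp)   # quoted Hawking temperature

assert abs(f(rp)) < 1e-12                # horizon: f(r_+) = 0
assert abs(T_from_fprime - T_formula) < 1e-6
```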
Taking the zero-temperature and near-horizon limit, the geometry asymptotes to $AdS_2 \times \mathbb{R}^2$:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{AdS2 vacuum}
ds^2 \,\to\, \frac{L^2}6 \Big( \frac{-dt^2 + dr^2}{r^2} \Big) + dx^2 + dy^2 \;,\qquad A = \frac{\gamma}{\sqrt6}\, \frac{dt}r \;,
\ee
in terms of new coordinates $r,t,x,y$.
Surprisingly, in the IR the state has an emergent ``local'' scaling symmetry: $r\to \lambda r$, $t \to \lambda t$, $x,y \to x,y$. We can think of it as the $z \to \infty$ limit of a Lifshitz scaling (sec. \ref{sec: Lifshitz}). As we will see later in section \ref{sec: impurity}, a possible lattice realization is through an impurity model. Such state has some peculiar properties. First, the entropy density (computed from the black hole horizon area $A$ as $s = 2\pi A/\kappa^2 vol$) has a non-vanishing zero-temperature limit:
\begin{equation}} \newcommand{\ee}{\end{equation}
s(T \to 0) = \frac{\pi \mu^2}{3e^2} \;.
\ee
Second, the density of states is IR divergent \cite{Jensen:2011su}: $\rho(E) \sim e^s \delta(E) + E^{-1}$. One thus expects some instability to kick in somewhere in the IR. We will later consider two (non-exhaustive) possibilities: Bose-Einstein condensation and population of a Fermi sea.
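The zero-temperature entropy density can be verified directly: extremality ($T=0$) fixes $r_+^2\mu^2/2\gamma^2=3$, i.e. $r_+=\sqrt6\,\gamma/\mu$, and the horizon area per unit boundary area read off from the metric is $L^2/r_+^2$, so that $s=2\pi L^2/\kappa^2 r_+^2$ reduces to $\pi\mu^2/3e^2$. A numerical sketch (parameter values arbitrary):

```python
import math

# Check s(T -> 0) = pi mu^2 / (3 e^2) from s = 2 pi A / (kappa^2 vol),
# with horizon area density L^2 / r_+^2 and extremal r_+ = sqrt(6) gamma / mu.
mu, e, L, kappa = 1.1, 0.8, 1.0, 0.6
gamma = e * L / kappa
rp = math.sqrt(6) * gamma / mu               # extremal horizon radius
s = 2 * math.pi * (L / rp) ** 2 / kappa**2   # entropy density
assert abs(s - math.pi * mu**2 / (3 * e**2)) < 1e-9
```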
\subsection{Probe fermions and fermionic spectral functions}
\label{sec: probe fermions}
In order to gather more information about the state \eqref{AdS2 vacuum}, one can compute 2-point functions of probe fermionic operators \cite{Lee:2008xf, Liu:2009dm, Cubrovic:2009ye, Faulkner:2009wj}. A bulk probe Dirac fermion $\psi$ of charge $e$ and mass $m$ corresponds to a fermionic operator $\mathcal{O}_\psi$ of dimension%
\footnote{In the range $|m|L < \frac12$ an alternative quantization $\Delta = \frac32 - |m|L$ is possible.}
$\Delta = \frac32 + |m|L$. To compute 2-point functions only the quadratic action is needed, and we will consider the Dirac Lagrangian:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Dirac Lagrangian}
\mathcal{L}_\psi = i\bar\psi \Gamma^\mu \Big( \partial_\mu + \frac14 \omega_\mu^{ab} \Gamma_{ab} - iA_\mu \Big) \psi - im \bar\psi \psi \;.
\ee
For parameters that satisfy $(mL)^2 \leq \gamma^2$, there is Schwinger pair production in the bulk \cite{Pioline:2005pf}, and one should expect a finite density of fermions hovering outside the charged horizon.
It is particularly interesting to compute the single-particle retarded Green's and spectral functions:
\begin{equation}} \newcommand{\ee}{\end{equation}
G_R(t,\vec x) \equiv i \Theta(t) \langle \{ \mathcal{O}_{\psi}(t,\vec x), \mathcal{O}_\psi^\dag(0) \} \rangle \;,\qquad A(\omega, \vec k) \equiv \frac1\pi \,\mathrm{Im}\, G_R(\omega, \vec k) \;,
\ee
where $\Theta(t)$ is the Heaviside step function. The spectral function $A$ describes the density of states, and a surface of sharp peaks represents the dispersion relation $\omega(\vec k)$ of quasinormal modes. One can make a connection with photo-emission (ARPES) experiments on different materials, where such density of states is measured \cite{Damascelli:2003}.
It turns out \cite{Faulkner:2009wj, Faulkner:2010zz} that in the IR CFT the operator $\mathcal{O}_\psi(\vec k)$ at momentum $\vec k$ has dimension $\delta_k$:
\begin{equation}} \newcommand{\ee}{\end{equation}
\delta_k = \frac12 + \nu_k \;,\qquad \nu_k \equiv \frac1{\sqrt6} \, \sqrt{ m^2L^2 + \frac{3k^2}{\mu^2} - \gamma^2}
\ee
and its Green's function (in the IR CFT) is $\varsigma_k(\omega) = c(k) \, \omega^{2\nu_k}$, where $c(k)$ is some analytic function of $k$.
For $(mL)^2 < \frac23 \gamma^2$ the Dirac equation has static normalizable solutions \cite{Liu:2009dm} (see their figure 2), which signal a Fermi surface at momentum $k_F$. While the precise value of $k_F$ depends on the details of the UV theory, the physics around it does not.
The small-frequency expansion of the Green's function around the Fermi surface fits the following asymptotic expression:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{holographic Green's function}
G_R(\omega, k) \simeq \frac{h_1}{k - k_F - \frac1{v_F} \omega - \Sigma(\omega, k)} \;,\qquad \Sigma(\omega,k) = h_2\, \varsigma_{k_F}(\omega) \;,
\ee
where $h_1, h_2, v_F$ are constants while $\Sigma(\omega,k)$ at leading order does not depend on $k$ (only on $k_F$).
Let us compare such behavior with that in Landau Fermi liquid (LFL) theory:
\begin{equation}} \newcommand{\ee}{\end{equation}
G_R^\text{(LFL)}(\omega, k) = \frac{Z}{\omega - v_F (k-k_F) + i \Gamma} + \dots \;,\qquad \Gamma \sim \omega^2 \;.
\ee
Following the pole in the lower complex $\omega$-plane as a function of $k$, we deduce the dispersion relation $\omega_c(k)$ that we split into real and imaginary part: $\omega_c(k) = \omega_*(k) - i \Gamma(k)$. The behavior of the holographic liquid \eqref{holographic Green's function} depends on the value of the parameter $\nu_{k_F}$ \cite{Faulkner:2009wj, Faulkner:2010zz}. For $\nu_{k_F} > \frac12$ we have a Fermi liquid, characterized by sharp quasiparticles:
\begin{equation}} \newcommand{\ee}{\end{equation}
\omega_*(k) = v_F(k-k_F) + \dots \;,\qquad \frac{\Gamma(k)}{\omega_*(k)} \propto (k-k_F)^{2\nu_{k_F} - 1} \to 0 \;,\qquad Z = h_1 v_F \;.
\ee
We observe, respectively, linear dispersion with Fermi velocity $v_F$; decay rate that goes to zero (stable quasiparticles); non-vanishing spectral weight.
The case $\nu_{k_F}=1$, studied in \cite{Cubrovic:2009ye}, is very similar to a Landau Fermi liquid, although corrections of the form%
\footnote{Logarithmic corrections appears for any $\nu_{k_F} \in \mathbb{N}/2$ \cite{Faulkner:2009wj}.}
$\omega^2 \log \omega$ are present in $\Gamma$, so that it is still not a conventional LFL.
For $\nu_{k_F}<\frac12$ we have a non-Fermi liquid, with no sharp quasiparticles:
\begin{equation}} \newcommand{\ee}{\end{equation}
\omega_*(k) \sim (k-k_F)^{1/2\nu_{k_F}} \;,\qquad \frac{\Gamma(k)}{\omega_*} \to \text{const} \;,\qquad Z \propto (k-k_F)^\frac{1-2\nu_{k_F}}{\nu_{k_F}} \to 0 \;.
\ee
In this case we have non-analytic dispersion relation, with imaginary part always comparable with its real part (quasiparticles never stable); vanishing spectral weight.
The boundary case $\nu_{k_F} = \frac12$ is particularly interesting as it provides a realization of the ``marginal Fermi liquid'' (MFL) phenomenological model introduced in \cite{Varma:1989zz}. The small-frequency behavior is
\begin{equation}} \newcommand{\ee}{\end{equation}
G_R^\text{(MFL)} \simeq \frac{h_1}{(k-k_F) + c_R \omega \log \omega + c_1 \omega} \;,\qquad Z \sim \frac1{|\log \omega_*|} \to 0 \;,
\ee
where $c_1$ is complex while $c_R$ is real. The single-particle scattering rate is suppressed with respect to the real part, but only logarithmically; the quasiparticle residue vanishes, but only logarithmically.
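The classification by $\nu_{k_F}$ discussed above can be condensed into a small illustrative routine; parameter values are made up, and the borderline marginal case is tested at the exact value $\nu_{k_F}=\frac12$:

```python
import math

# nu_k = sqrt(m^2 L^2 + 3 k^2/mu^2 - gamma^2) / sqrt(6) controls the liquid
# type at the Fermi surface (illustrative classifier, parameters made up).
def nu(mL, k_over_mu, gamma):
    arg = mL**2 + 3 * k_over_mu**2 - gamma**2
    assert arg >= 0, "complex nu_k: outside the regime considered here"
    return math.sqrt(arg / 6.0)

def liquid_type(nu_kF):
    if nu_kF > 0.5:
        return "Fermi liquid"            # stable quasiparticles
    if nu_kF == 0.5:
        return "marginal Fermi liquid"   # log-suppressed residue and width
    return "non-Fermi liquid"            # no sharp quasiparticles

assert liquid_type(nu(0.0, 1.0, 1.0)) == "Fermi liquid"      # nu ~ 0.577
assert liquid_type(nu(0.0, 0.6, 1.0)) == "non-Fermi liquid"  # nu ~ 0.115
```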
At finite temperature $T \ll \mu$, the Green's function pole never reaches the real axis and the Fermi surface gets smeared. One finds the two asymptotic behaviors \cite{Faulkner:2010zz}:
\begin{equation}} \newcommand{\ee}{\end{equation}
\omega \ll T: \quad \Sigma(\omega,k) \propto T^{2\nu_k} \;,\qquad\qquad \omega \gg T:\quad \Sigma(\tau,k) \sim \left| \frac{\pi T}{\sin(\pi T \tau)} \right|^{2\delta_k} \;.
\ee
\ee
The leading contribution of the Fermi surface to the conductivity, which can be computed with Kubo's formula $\sigma(\omega) = \frac1{i\omega} \langle J_x(\omega) J_x(-\omega) \rangle_\text{retarded}$, is evaluated \cite{Faulkner:2010zz} by a one-loop diagram in the bulk where the fermions run in the loop connecting two insertions of $A_\mu$. The resulting DC conductivity, for $\nu_k \leq \frac12$, is:
\begin{equation}} \newcommand{\ee}{\end{equation}
\sigma(\omega \to 0) \sim T^{-2\nu_k} \;.
\ee
In particular, when the parameter $\nu_{k_F}$ is set to $\nu_{k_F} = \frac12$ one gets a contribution to the resistivity linear in temperature. This is precisely the behavior observed in the strange metal phase of cuprates (see \textit{e.g.} \cite{Damascelli:2003}), whose theoretical origin has not been fully understood yet.
\subsection{Semi-holographic (non)-Fermi liquids}
\label{sec: semiholographic}
The IR Green's and spectral functions discussed in the previous section can be reproduced by a simple effective or semi-holographic model \cite{Faulkner:2010tq}. Consider a Fermi liquid $\Psi$ coupled to the fermionic fluctuations $\chi$ of a critical system with large dynamical exponent $z$---for simplicity we will consider a local critical system:
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{L} = i \big[\overline{\Psi} (\omega - v_F k_\perp)\Psi + g\overline\Psi \chi + g^*\bar\chi \Psi + \bar\chi \varsigma^{-1} \chi \big] \;,
\ee
where $k_\perp = k - k_F$ while $\varsigma$ is the local critical system Green's function: $\varsigma = \langle \bar\chi \chi \rangle = c(k) \omega^{2\nu_k}$. By resumming the series of interactions \cite{Faulkner:2010tq} one obtains the effective Green's function:
\begin{equation}} \newcommand{\ee}{\end{equation}
\langle \overline\Psi \Psi \rangle = \frac1{\omega - v_F k_\perp - |g|^2\varsigma} \;,
\ee
which has the same form as the ones discussed before.
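The resummation is simply a geometric (Dyson) series, $G_0\sum_n(|g|^2\varsigma\, G_0)^n=G_0/(1-|g|^2\varsigma\, G_0)$, with $G_0=(\omega-v_F k_\perp)^{-1}$. A numerical sketch at an illustrative complex frequency (chosen so the series converges):

```python
# Verify that summing the geometric series of chi-insertions reproduces
# 1 / (omega - v_F k_perp - |g|^2 sigma).  All values are illustrative.
omega, vF, kperp, g2 = 0.3 + 0.4j, 1.0, 0.2, 0.05
nu_k, c_k = 0.3, 1.0
sigma = c_k * omega ** (2 * nu_k)   # IR Green's function sigma = c(k) w^{2 nu_k}

G0 = 1.0 / (omega - vF * kperp)
series = sum(G0 * (g2 * sigma * G0) ** n for n in range(200))
closed = 1.0 / (omega - vF * kperp - g2 * sigma)
assert abs(series - closed) < 1e-10
```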
\section{Instabilities: superconductors and electron stars}
We want to consider now what effects other fields, represented by dots in the Lagrangian \eqref{Lagrangian basic setup} and generically present in the bulk, can have. The fields in \eqref{Lagrangian basic setup} are the ones responsible for the backgrounds we discussed, while the extra fields will be coupled to them and might become unstable. The two main mechanisms we will discuss are Bose-Einstein condensation (BEC) in the case of bosons, and population of the Fermi sea in the case of fermions.
\subsection{Holographic superconductors}
\label{sec: superconductors}
A first type of instability might arise when an order parameter $\mathcal{O}_\phi$---charged under the $U(1)$ symmetry with chemical potential---is present. Consider a charged scalar field $\phi$ \cite{Gubser:2008px, Hartnoll:2008vx} with the minimally coupled Lagrangian:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Lagrangian superconductor}
\mathcal{L} = \frac1{2\kappa^2} \Big( R + \frac6{L^2} \Big) - \frac1{4e^2} F_{\mu\nu} F^{\mu\nu} - \big| \nabla \phi - i A \phi \big|^2 - m^2 |\phi|^2 - V(|\phi|)
\ee
where $V$ is some potential. In the presence of a bulk electric flux (which follows from the boundary chemical potential), one may expect BEC, resulting in a background:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{superconductor ansatz}
\frac{ds^2}{L^2} = - f(r) dt^2 + g(r) dr^2 + \frac{dx^2 + dy^2}{r^2} \;,\qquad A = \gamma \, h(r) dt \;,\qquad \phi = \phi(r)
\ee
where $f,g,h,\phi$ are radial functions. Whether the condensation takes place depends on which state has lower free energy. Indeed, upon numerical evaluation, it turns out \cite{Hartnoll:2008vx, Hartnoll:2008kx, Gubser:2008pf, Denef:2009tp} that at $T=0$, $\phi$ condenses whenever its effective mass in the IR AdS$_2$ region falls below the AdS$_2$ BF bound \cite{Breitenlohner:1982bm}:
\begin{equation}} \newcommand{\ee}{\end{equation}
\frac16 \big( m^2L^2 - \gamma^2 \big) \leq - \frac14 \;.
\ee
Indeed this is the bound on parameters to have Schwinger pair production in the bulk \cite{Pioline:2005pf}. The condensation takes place up to some critical temperature $T_c$ of order $\mu$. The macroscopically occupied ground state spontaneously breaks the $U(1)$ symmetry, and realizes a ``holographic'' superfluid (if the symmetry is weakly gauged, then it is a superconductor).
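The condensation criterion above is a simple inequality in the parameters; as a sanity check (with illustrative inputs):

```python
# The IR AdS2 instability criterion quoted above:
#   (m^2 L^2 - gamma^2)/6 <= -1/4.
# Sample inputs below are illustrative only.

def condenses(m2L2, gamma2):
    """True if the charged scalar violates the IR AdS2 BF bound at T = 0."""
    return (m2L2 - gamma2) / 6.0 <= -0.25

charged_enough = condenses(1.0, 4.0)   # (1 - 4)/6 = -0.5 -> True
too_weak = condenses(1.0, 1.0)         # (1 - 1)/6 =  0   -> False
```

A sufficiently large charge destabilizes the scalar even when $m^2 L^2$ is positive, as the first example shows.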
Many properties of holographic superfluid states have been studied (we refer the reader to the literature, see also \cite{Horowitz:2008bn, Gubser:2008wz}). One key feature is that the condensate undergoes a mean-field second-order phase transition at the critical temperature:
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{O}_\phi/T_c \sim (T_c - T)^{1/2} \;.
\ee
The optical conductivity $\sigma(\omega)$ shows in its real part a gap $\Delta_\omega$. An interesting fact is that the ratio $\Delta_\omega/T_c$ is in these systems around 8 \cite{Hartnoll:2008vx}, at least in the limit of large charge $e$. This number compares well with what is measured in some cuprate high-$T_c$ superconductors, while in BCS theory it takes a value around $3.5$.
\subsection{Fermions in superconductors}
\label{sec: fermions superconductors}
Once a superfluid/superconducting state has been established, one can proceed as in section \ref{sec: probe fermions} to study its properties by analyzing two-point functions of fermionic operators $\mathcal{O}_\Psi$ \cite{Faulkner:2009am, Gubser:2009dt}, corresponding to Dirac fermions $\Psi$ in the bulk. In the bulk Lagrangian only terms up to quadratic order in the fermions are relevant to such a computation; however, it is essential to take into account couplings to the background. In \cite{Faulkner:2009am} the following Lagrangian has been considered:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Majorana-like Lagrangian}
\mathcal{L}_\Psi = i \overline\Psi ( \Gamma^\mu D_\mu - m ) \Psi + \eta_5^* \phi^* \overline{\Psi^c} \Gamma_5 \Psi + \text{h.c.} \;,
\ee
which requires the fermions to have half the charge of $\phi$, and where $\Psi^c$ is the charge conjugate to $\Psi$.
The Majorana-like coupling is crucial because it leads to a gapping of the Fermi surface. It couples positive- to negative-frequency modes, as in a BCS s-wave superconductor. When computing the Green's function $G_R(\omega, \vec k)$, one should look for solutions to the Dirac equation from \eqref{Majorana-like Lagrangian} of the form:
\begin{equation}} \newcommand{\ee}{\end{equation}
\Psi = e^{-i\omega t + i \vec k \cdot \vec x} \Psi^{(\omega, \vec k)}(r) + e^{i\omega t - i \vec k \cdot \vec x} \Psi^{(-\omega, - \vec k)}(r)
\ee
(the details on how to compute $G_R$ can be found in \cite{Iqbal:2009fd}).
Without the Majorana-like coupling ($\eta_5 = 0$), solutions to the Dirac equation that satisfy the boundary conditions determine quasinormal modes of $\Psi$ whose dispersion relation would be $ \omega = \omega_*(k)$---and its intersection with the plane $\omega = 0$ would determine the Fermi surface at $k_F$. The dispersion relation for the quasinormal modes of $\Psi^c$ would be $\omega = - \omega_*(-k)$. Once $\eta_5$ is turned on, the quasinormal modes of $\Psi$ and $\Psi^c$ are coupled: they cross at $\omega = 0$ and eigenvalue repulsion determines a gapping of the would-be Fermi surface \cite{Faulkner:2009am}.
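The gapping mechanism can be mimicked by a two-level toy model---a sketch with illustrative numbers, not the actual bulk Dirac problem: the linear branches $\pm v_F(k-k_F)$ are mixed by an off-diagonal coupling $\Delta$, standing in for $\eta_5$ times the condensate.

```python
import math

# Two-level toy model of eigenvalue repulsion (illustrative, not the bulk
# Dirac computation): branches +/- v_F*k_perp mixed by an off-diagonal Delta.

v_F, Delta = 1.0, 0.2

def branches(k_perp):
    # eigenvalues of [[v_F*k_perp, Delta], [Delta, -v_F*k_perp]]
    e = math.hypot(v_F * k_perp, Delta)
    return -e, +e

lo, hi = branches(0.0)
gap = hi - lo    # avoided crossing: gap = 2*|Delta| at the would-be Fermi surface
```

Far from $k_F$ the mixing is negligible and the branches approach the unperturbed dispersions $\pm v_F k_\perp$; at the crossing the would-be Fermi surface is gapped by $2|\Delta|$.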
\subsection{p-wave superconductors}
p-wave superfluids/superconductors are characterized by a spin-1 order parameter. Since the order parameter $\vec W$ is complex, different patterns of symmetry breaking are possible: in a state with $p$ order, spatial rotations are broken, while in a $p+ip$ state time reversal is broken and spatial rotations are preserved (up to charge rotations).
Examples are Sr$_2$RuO$_4$ (whose ground state has $p+ip$ order) and $^3$He (which is $p+ip$ at ambient pressure, and $p$ at high pressure). A holographic model \cite{Gubser:2008wv, Ammon:2009xh} can be obtained from a CFT with non-Abelian symmetry, say $SU(2)$, explicitly broken by a $U(1)$ chemical potential. The bulk Lagrangian is
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{L}_p = \frac1{2\kappa^2} \Big( R + \frac6{L^2} \Big) - \frac1{4g^2} F_{\mu\nu}^a F^{\mu\nu}_a \;.
\ee
A chemical potential along $\tau^3$ in color space breaks the $SU(2)$ symmetry to $U(1)$ and generates a charged massive W-boson, which becomes the possibly condensing spin-1 order parameter. One can consider a $p$ and a $p+ip$ ansatz respectively:
\begin{equation}} \newcommand{\ee}{\end{equation}
A_{(p)} = \Phi(r) \tau^3 dt + w(r) \tau^1 dx \;,\qquad A_{(p+ip)} = \Phi(r) \tau^3 dt + w(r) (\tau^1 dx + i\tau^2 dy) \;.
\ee
At large $g$, $p+ip$ is unstable to decay to $p$ \cite{Gubser:2008wv}, but it is not known what happens at small $g$.
\subsection{d-wave superconductors}
d-wave superfluids/superconductors have a spin-2 order parameter. Examples are the high-temperature cuprate superconductors. They present a particularly rich phenomenology and phase diagram (fig. \ref{fig: cuprate phase diagram}), also related to the nature of the order parameter. For a review see \cite{Damascelli:2003}.
For instance, in the superconductive phase the Fermi surface is gapped in an anisotropic way:%
\footnote{Cuprates have a layered molecular structure, so that the physics is mainly $2+1$-dimensional.}
the dependence of the gap $\Delta_\omega$ on the direction in momentum space is $\Delta_\omega \propto |\cos 2\theta|$ (where $\theta$ is an angle). Along the would-be Fermi surface there are four ``Dirac nodes'' where the gap vanishes, and the dispersion relation of quasinormal modes takes the shape of anisotropic Dirac cones. Above the superconductive dome in the underdoped region there is a ``pseudo-gap phase'' where the nodes open up into Fermi arcs whose (angular) length is linear in temperature. One could wonder whether a holographic model would possess such properties.
\begin{figure}
\begin{minipage}{0.37\textwidth}
\includegraphics[width=\textwidth]{CupratePhaseDiagram2.jpg}
\caption{Schematic phase diagram of the cuprates showing temperature versus hole doping. Below the curve $T^*$ a pseudo-gap with Fermi arcs opens in the quasiparticle spectrum. Image taken from \cite{Varma:2010}.
\label{fig: cuprate phase diagram}}
\end{minipage}
\hfil
\begin{minipage}{0.60\textwidth}
\includegraphics[width=\textwidth]{DensitySpectralFunction.jpg}
\caption{Density plot of the fermion spectral function evaluated at $\omega = 0$ for temperatures $T = 0.49T_c$ (left) and $T=0.59 T_c$ (right). Red and blue correspond to large and small values of the spectral function. Image taken from \cite{Benini:2010qc}.
\label{fig: spectral function}}
\end{minipage}
\end{figure}
To reproduce a d-wave order parameter, a holographic model must contain a massive charged spin-2 field in the bulk \cite{Benini:2010pr}. Writing down a consistent and causal action for such a field is notoriously hard \cite{Velo:1972, Buchbinder:2000fy}. One possibility would be to take a model that admits a background with compact directions, and perform a Kaluza-Klein (KK) reduction: from the graviton one obtains a KK tower of massive charged spin-2 fields. All these fields will presumably condense at roughly the same temperature, thus solving numerically for the background might be challenging. Another possibility \cite{Benini:2010pr} is to work in a limit of large condensate charge $q$ (at fixed $\mu q$), in which the spin-1 and spin-2 fields do not backreact on the metric: then it is possible to write down a consistent Fierz-Pauli-like action. The background is the AdS-Schwarzschild black hole, while the matter action is:
\begin{multline}
\mathcal{L}_d = - |D_\rho \varphi_{\mu\nu}|^2 + 2 |\varphi_\mu|^2 + |D_\mu \varphi|^2 - (\varphi^{*\mu} D_\mu \varphi + \text{c.c.}) - m^2 \big( |\varphi_{\mu\nu}|^2 - |\varphi|^2 \big) \\
+ 2 R_{\mu\nu\rho\lambda} \varphi^{*\mu\rho} \varphi^{\nu\lambda} - \frac14 R |\varphi|^2 - i q F_{\mu\nu} \varphi^{*\mu\lambda} \varphi^\nu_\lambda - \frac14 F_{\mu\nu} F^{\mu\nu} \;,
\end{multline}
where $\varphi_{\mu\nu}$ is a symmetric tensor, $\varphi_\mu \equiv D^\nu \varphi_{\nu\mu}$, $\varphi \equiv \varphi^\mu_\mu$ and $D_\mu = \nabla_\mu - iqA_\mu$. The action is ghost-free and leads to a hyperbolic system of EOMs, that is to a well-posed Cauchy problem. On the other hand it leads to superluminality, which could be cured by higher order terms: such corrections are small in a limit of large $mL$.
Both a $d$ and a $d+id$ ansatz are possible. Let us consider a $d$ ansatz: $\varphi_{xx} = - \varphi_{yy} \equiv \varphi_\Delta(r)$, $A = \Phi(r) dt$. It turns out that, after a suitable rescaling, the equations for $\varphi_\Delta, \Phi$ are identical to the s-wave case: in particular there is condensation below a critical temperature $T_c$.
As in sections \ref{sec: probe fermions} and \ref{sec: fermions superconductors}, the superfluid/superconducting state is conveniently probed by 2-point functions of fermionic operators. In \cite{Benini:2010qc} the following Lagrangian, quadratic in the fermions, has been considered:
\begin{equation}} \newcommand{\ee}{\end{equation}
\label{Lagrangian fermions in d}
\mathcal{L}_\Psi = i \overline\Psi (\Gamma^\mu D_\mu - m) \Psi + \eta_5^* \varphi_{\mu\nu}^* \overline{\Psi^c} \Gamma^\mu D^\nu \Psi + \text{h.c.} \;.
\ee
If we restrict ourselves to couplings of dimension smaller than six, to a condensate with twice the charge of the fermions (natural if the order parameter is a sort of Cooper pair) and to the $d$ ansatz, the Lagrangian \eqref{Lagrangian fermions in d} is essentially uniquely fixed.%
\footnote{Two other possible terms are $|\varphi_{\mu\nu}|^2 \overline \Psi(c_1 + c_2 \Gamma_5)\Psi$ which corrects the fermion mass, and the dipole term $\overline \Psi \Gamma^{\mu\nu} F_{\mu\nu} \Psi$ which does not depend on the condensate.}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{EDCs.jpg}
\caption{EDCs at the would-be Fermi momentum $k_F(\theta)$ for several angles $\theta$ in momentum space and temperatures. Angles run from $0$ to $\frac\pi4$ with homogeneous spacing. Left (from \cite{Kanigel:2007}): underdoped Bi$_2$Sr$_2$CaCuO$_8$; red curves show a gap, green curves show no gap. Right (from \cite{Benini:2010qc}): the holographic model.
\label{fig: EDCs}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Gap.jpg}
\caption{Gapping $\Delta_\omega(\theta)$ of the Fermi surface as function of the angle $\theta$ in momentum space, at different temperatures. Left (from \cite{Kanigel:2007}): underdoped Bi$_2$Sr$_2$CaCuO$_8$. Right (from \cite{Benini:2010qc}): the holographic model. In both a node is visible at low temperatures, while it opens up into an arc at higher temperatures.
\label{fig: gap}}
\end{figure}
Let us give some details on the resulting traced spectral function $A(\omega, \vec k) \equiv \frac1\pi \Tr \,\mathrm{Im}\, G_R(\omega, \vec k)$, which can be computed numerically \cite{Benini:2010qc}. In figure \ref{fig: EDCs} we plot the energy distribution curves (EDCs), that is the spectral function as a function of $\omega$ at the would-be Fermi momentum $k_F(\theta)$, for different values of the angle $\theta$ in momentum space and different temperatures, and compare them with an experimental sample of underdoped Bi$_2$Sr$_2$CaCuO$_8$ \cite{Kanigel:2007}. From the EDCs we can extract the gap $\Delta_\omega(\theta)$ at different temperatures, plotted in fig. \ref{fig: gap} and compared with the sample. The plot makes evident the d-wave behavior $\Delta_\omega(\theta) \propto |\cos(2\theta)|$ with four nodes at low temperatures, and the development of four Fermi arcs at higher temperatures. In fig. \ref{fig: spectral function} a density plot of the spectral function in momentum space at $\omega=0$ also shows how Dirac nodes (left) open up into arcs (right). Despite these encouraging similarities, in the holographic model the arc length does not show a linear behavior with temperature.
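The nodal structure described in this section is captured by the angular form of the gap; a minimal sketch, with $\Delta_0$ an arbitrary scale:

```python
import math

# d-wave angular dependence of the gap, Delta(theta) = Delta0*|cos(2*theta)|,
# with four nodes on the would-be Fermi surface. Delta0 is an arbitrary scale.

Delta0 = 1.0

def gap(theta):
    return Delta0 * abs(math.cos(2.0 * theta))

nodes = [math.pi / 4 + n * math.pi / 2 for n in range(4)]   # gap vanishes here
```

The gap is maximal along the antinodal directions $\theta = 0, \pi/2$ and vanishes at the four nodal angles $\theta = \pi/4 + n\pi/2$.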
\subsection{Electron stars}
\label{sec: electron stars}
Let us now come back to the simple bulk system of a graviton, a $U(1)$ gauge field and a Dirac fermion, as in \eqref{Lagrangian basic setup} and \eqref{Dirac Lagrangian}:
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{L} = \frac1{2\kappa^2} \Big( R + \frac6{L^2} \Big) - \frac1{4e^2} F_{\mu\nu} F^{\mu\nu} - i \overline{\psi} (\Gamma^\mu D_\mu - m) \psi \;.
\ee
This time, instead of treating the fermions as pure probes, one can notice that for $(mL)^2 \leq \gamma^2$ there is Schwinger pair production in the bulk \cite{Pioline:2005pf} which leads to the population of the Fermi sea \cite{Arsiwalla:2010bt, Hartnoll:2009ns, Hartnoll:2010gu, Hartnoll:2011dm}: the $U(1)$ is not broken (because the Pauli exclusion principle prevents a macroscopic occupation of the ground state), rather a bulk Fermi surface appears. This represents a fermionic form of instability.
In the limit $mL, \gamma \gg 1$ one can use a WKB (or Thomas-Fermi-Oppenheimer-Volkov) approximation in which the Dirac eigenstates become very localized and the fermions can be treated as an ideal fluid \cite{Hartnoll:2009ns, Hartnoll:2010gu}:
\begin{equation}} \newcommand{\ee}{\end{equation}
\mathcal{L} = \frac1{2\kappa^2} \Big( R + \frac6{L^2} \Big) - \frac1{4e^2} F_{\mu\nu} F^{\mu\nu} + (\mu_\text{loc} \sigma - \rho) \;,
\ee
where $\mu_\text{loc}$ is the local bulk chemical potential, $\rho$ the energy density and $\sigma$ the charge density. Assuming that $T=0$ in the bulk, the equation of state is fixed by the following equations:
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
\mu_\text{loc} &= \frac{A_t}{\sqrt{g_{tt}}} \;,\qquad& p &= \mu_\text{loc} \sigma - \rho \;,\qquad \\
\rho &= \int_m^{\mu_\text{loc}} E \, g(E)\, dE \;,\qquad& \sigma &= \int_m^{\mu_\text{loc}} g(E)\, dE \;,\qquad& g(E) &= \frac E{\pi^2} \sqrt{E^2 - m^2} \;,
\eea
where $p$ is pressure and $g(E)$ is the density of states.
The whole system can be solved numerically.
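As an illustration, the zero-temperature integrals above can be evaluated numerically; the inputs $m=1$, $\mu_\text{loc}=2$ are illustrative, and the closed form $\sigma = (\mu_\text{loc}^2 - m^2)^{3/2}/3\pi^2$ (which follows from the stated $g(E)$) is used as a check:

```python
import math

# Numerical sketch of the T = 0 fluid quantities above:
#   g(E) = (E/pi^2) sqrt(E^2 - m^2),  sigma = int g dE,  rho = int E g dE,
#   p = mu_loc*sigma - rho.  The inputs m = 1, mu_loc = 2 are illustrative.

def fluid(mu_loc, m, n=100000):
    h = (mu_loc - m) / n
    sigma = rho = 0.0
    for i in range(n):                            # midpoint rule
        E = m + (i + 0.5) * h
        g = E * math.sqrt(E * E - m * m) / math.pi ** 2
        sigma += g * h
        rho += E * g * h
    return sigma, rho, mu_loc * sigma - rho       # charge, energy, pressure

sigma, rho, p = fluid(2.0, 1.0)
sigma_exact = (2.0 ** 2 - 1.0) ** 1.5 / (3.0 * math.pi ** 2)  # closed form
```

Since $E \leq \mu_\text{loc}$ on the integration range, $\rho < \mu_\text{loc}\sigma$ and the pressure is positive, as required of the fluid.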
\subsection{IR geometries and Lifshitz scaling}
\label{sec: Lifshitz}
In the presence of the bosonic (sec. \ref{sec: superconductors}) or fermionic (sec. \ref{sec: electron stars}) instabilities, the extra fields deform the core of the geometry. It turns out that in the IR a (non-relativistic) Lifshitz scaling symmetry
\begin{equation}} \newcommand{\ee}{\end{equation}
r \to \lambda r \;,\qquad t \to \lambda^z t \;,\qquad (x,y) \to \lambda (x,y)
\ee
might emerge. Here $z$ is called ``dynamical exponent'': $z=1$ corresponds to a relativistic scaling, while $z=\infty$ is called ``local criticality''.
Indeed if we are only interested in the asymptotic IR geometry, for instance in the bosonic case \eqref{superconductor ansatz}, we can analytically look for solutions of the form:
\begin{equation}} \newcommand{\ee}{\end{equation}
\frac{ds^2}{L^2} = - \frac{dt^2}{r^{2z}} + g_\infty \frac{dr^2}{r^2} + \frac{dx^2 + dy^2}{r^2} \;,\qquad A = \gamma\, h_\infty \frac{dt}{r^z} \;,\qquad \phi = \phi_\infty \;,
\ee
where $g_\infty, h_\infty, \phi_\infty$ are constants. The existence of such solutions depends on the choice of mass $m^2$ and potential $V$ in \eqref{Lagrangian superconductor}. In the fermionic case the background is the same, with $p = p_\infty$ and $\rho = \rho_\infty$: for $e^2\gamma^2 \to \infty$ one finds $z \to 1$ and the geometry is AdS$_4$, while for $e^2\gamma^2 \to 0$ one finds $z \to \infty$ and the geometry is $AdS_2 \times \mathbb{R}^2$.
Let us remark on some properties of the Lifshitz background. First, it does not have a horizon: the electric flux---that was emanating from the black hole horizon in \eqref{RNAdS BH} before taking into account the extra unstable fields---now emanates from the boson or the fermion. Second, Lifshitz scaling implies that the temperature dependence of the entropy is $S \propto T^{2/z}$. Indeed at $T=0$ the entropy vanishes, while for $z \to \infty$ one finds a finite entropy ground state. Third, the Lifshitz geometry is not geodesically complete, and the question remains whether there is a production of excited string states in the far IR, leading to a further instability.
\section{Local criticality and the impurity problem}
\label{sec: impurity}
In section \ref{sec: AdS2} we saw that a simple holographic model can realize, as the $AdS_2 \times \mathbb{R}^2$ background, a local quantum critical ground state, which is particularly interesting because of its relation to the strange metal phase. In fact it is also possible to realize such state with an impurity problem \cite{Sachdev:2010um, Sachdev:2010uj}.
For instance, following \cite{Sachdev:2010uj} consider a single spin impurity at $\vec x = 0$ coupled to the CFT$_3$ at the N\'eel/VBS antiferromagnetic transition discussed in sec. \ref{sec: emergent gauge}. Let us couple them through the following partition function:
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
Z &= \int \mathcal{D} z^\alpha(x,\tau) \, \mathcal{D} A_\mu(x,\tau) \, \mathcal{D}\chi(\tau)\, \exp \Big\{ - \int d\tau\, \mathcal{L}_\text{imp} - \int d^2x\, d\tau\, \mathcal{L}_{z,A} \Big\} \\
\mathcal{L}_\text{imp} &= i \chi^\dag \Big( \parfrac{}{\tau} - iA_\tau(0,\tau) \Big) \chi \;,
\eea
where $\mathcal{L}_{z,A}$ is the effective Lagrangian \eqref{Lag emergent gauge} written in terms of the slave fermion $z_\alpha$, and the impurity $\hat S^a$ has been written in terms of a slave fermion $\chi$ as well: $\hat S^a = \frac12 \chi^*_\alpha \sigma^a_{\alpha\beta} \chi_\beta$.
The impurity correlators can be computed at large $N$ for $N$-dimensional spins: at low temperatures they decay as a power law in time:
\begin{equation}} \newcommand{\ee}{\end{equation}
\langle \hat S^a(\tau) \hat S^b(0) \rangle \,\sim\, \delta^{ab} \Big| \frac{\pi T}{\sin(\pi T \tau)} \Big|^\gamma \,\xrightarrow{T \to 0}\, \delta^{ab} |\tau|^{-\gamma} \;.
\ee
Moreover the ground state has finite zero-temperature entropy. These are precisely the properties of the AdS$_2$ local quantum critical system, and after all local criticality is the expected scaling of an impurity.
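The finite-temperature form and its zero-temperature limit can be checked directly; the values of $\gamma$ and $\tau$ below are illustrative:

```python
import math

# The conformal impurity correlator |pi*T / sin(pi*T*tau)|^gamma and its
# T -> 0 limit |tau|^(-gamma). gamma = 0.5 and tau = 2.0 are illustrative.

def corr(tau, T, gamma=0.5):
    return abs(math.pi * T / math.sin(math.pi * T * tau)) ** gamma

tau = 2.0
limit = abs(tau) ** (-0.5)        # the T -> 0 power law |tau|^(-gamma)
approx = corr(tau, T=1e-6)        # numerically indistinguishable from the limit
```

At finite temperature the correlator is slightly enhanced over the $T=0$ power law, since $\sin(\pi T\tau) < \pi T\tau$ on the fundamental domain.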
Similar properties can be obtained in four dimensions by coupling a spin impurity to 4d $\mathcal{N}=4$ SYM \cite{Kachru:2009xf, Mueck:2010ja}:
\begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}
S &= \int d\tau\, \mathcal{L}_\text{imp} + \int d^3x\, d\tau\, \mathcal{L}_\text{SYM} \;, \\
\mathcal{L}_\text{imp} &= \chi^{\dag a} \Big( \delta^b_a \parfrac{}{\tau} + i A_\tau(0,\tau)^b_a + i v^I \phi_I(0,\tau)^b_a \Big) \chi_b \;.
\eea
Indeed such a system is very similar to the semi-holographic Fermi liquid of sec.~\ref{sec: semiholographic}.
\begin{acknowledgement}
I would like to thank the organizers of the ``XVII European Workshop on String Theory 2011'' (5-9 September, Padova, Italy) for the invitation to give this talk, and Rakibur Rahman and especially Chris Herzog and Amos Yarom for a very enjoyable collaboration and many interesting discussions and clarifications. My work is supported in part by the DOE grant DE-FG02-92ER40697.
\end{acknowledgement}
\bibliographystyle{fdp}
\section{Introduction}
Since an international electron-positron ($e^-e^+$) linear collider (ILC)
can measure particle masses very accurately,
there is a growing consensus that the next high energy machine to be built
after the Large Hadron Collider (LHC) should be an ILC.
Such a machine is technically feasible and
the initial consensus is for the TESLA design~\cite{tesla}.
The siting is still under discussion.
There has been in the past a huge amount of analysis on methods of
detecting SUSY at an ILC.
However, the minimal supergravity (mSUGRA) model~\cite{sugra1,sugra2,nilles}
has several special aspects that make its
predictions clearer and more directly accessible to experimental
study. Hence it is worthwhile to examine this particular model.
The existing experiments have already begun to restrict the
SUSY parameter space significantly. Most significant of these are the
amount of cold dark matter (CDM), the Higgs mass bound,
the $b\rightarrow s \gamma$ branching ratio,
and (possibly) the muon $a_\mu$ anomaly.
The allowed parameter space, at present, has three distinct regions~\cite{dark}:
(i)~the stau neutralino ($\stau$-$\schionezero$) co-annihilation region
where
$\schionezero$ is the lightest SUSY particle (LSP),
(ii)~the $\schionezero$ having a
larger Higgsino component (focus point) and
(iii)~the scalar Higgs ($A^0$, $H^0$) annihilation funnel
(2$M_{\schionezero}\simeq M_{A^0,H^0}$).
These three regions have been selected out by the CDM constraint.
(There still exists a bulk region
where none of the above properties holds, but this region is now very small due to the existence
of other experimental bounds.)
The distinction between the above regions cannot be observed in
the dark matter experiments where only the mass of the lightest SUSY particle would be
obtained. However these regions can be observed at the ILC or the LHC where the particles will be
produced directly and their masses will be measured.
The three dark matter allowed regions need very precise measurements at the
colliders to confirm which is correct. Since the
ILC is suitable for making precision measurements,
the cosmologically allowed parameter space is under a great
deal of scrutiny.
In this paper we choose to work with the $\stau$-$\schionezero$ co-annihilation region.
We note that many SUGRA models
possess a co-annihilation region and if the $a_{\mu}$ anomaly persists,
it is the only allowed region for mSUGRA.
Coannihilation is characterized by a mass difference ($\Delta M$)
between $\stau$ and $\schionezero$ of about 5-15 GeV.
This narrow mass difference allows the $\stau$'s to co-annihilate in the early
universe along with the $\schionezero$'s in order to produce the current amount of
dark matter density of the universe.
The co-annihilation region has a large extension for $m_{1/2}$, up to 1-1.5 TeV,
and can be explored at the LHC.
The main difficulty, however, in probing this region is the small $\Delta M$ value.
This $\Delta M$ needs to be measured very accurately in order to claim that
the co-annihilation explains
the dark matter content of the universe.
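The sensitivity to $\Delta M$ can be made plausible with a back-of-the-envelope estimate (not a relic density computation): at the freeze-out temperature $T_f \sim M_{\schionezero}/25$ the stau abundance relative to the neutralino carries a Boltzmann factor $e^{-\Delta M/T_f}$. The neutralino mass below is an illustrative value:

```python
import math

# Back-of-the-envelope sketch (not a relic density calculation): the relative
# stau abundance at freeze-out carries a Boltzmann factor exp(-Delta M / T_f),
# with T_f ~ M_chi/25.  M_chi = 200 GeV is an illustrative value.

M_chi = 200.0              # GeV, illustrative
T_f = M_chi / 25.0         # ~ 8 GeV

def stau_weight(delta_M):
    return math.exp(-delta_M / T_f)

w_small = stau_weight(10.0)   # Delta M ~ 10 GeV: O(30%), co-annihilation active
w_large = stau_weight(50.0)   # Delta M ~ 50 GeV: strongly suppressed
```

This is why co-annihilation only operates for $\Delta M$ of order 10 GeV or less: for much larger splittings the stau population has already decayed away by freeze-out.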
However, the small $\Delta M$ value generates
signals with very low energy tau ($\tau$) leptons and thus makes
it difficult to discover this region at any collider due
to the large size of the standard model (SM) and SUSY background (BG) events.
It is this question for the ILC that we address in this paper.
At an ILC, a major source of the SM backgrounds
is the large two-photon events.
The previous studies~\cite{exp1,exp2} use
counting experiments to achieve their results.
The discovery significance is calculated using
$N_{\rm signal}/\sqrt{ N_{\rm BG} }$ in Ref.~\cite{exp1},
while in Ref.~\cite{exp2}, the $\stauone$ mass is measured
using the threshold method where they either scan over various
center-of-mass (CM) energies or
assume the mass of the LSP is known from the $\sele$ and the $\smu$ decays
(to set the beam energy) in order to achieve
the maximum sensitivity for a given $\stauone$ mass.
However, as shown in Sec.~3,
we study the scenarios where the $\sele$ and $\smu$ masses
are too heavy to be produced at a 500 GeV machine.
We investigate the accuracy of measuring $\Delta M$
by analyzing the shapes of
invariant mass distributions
of two $\tau$ jets and unbalanced event transverse momentum
(\mpt).
In our present work, we use
a fixed collider energy ($\sqrt s$ = 500 GeV) for the mass measurement.
We first discuss the available mSUGRA parameter space in Sec.~2,
followed by an analysis of
the signals and cross sections in Sec.~3.
Monte Carlo (MC) studies on the event selection cuts
to probe the SUSY events and the SM background are reported in Sec.~4
and the precision in the mass
measurements in Sec.~5.
We conclude in Sec.~6.
\section{mSUGRA Parameter Space}
The mSUGRA model depends on only four parameters and one sign.
These are $m_0$ (the universal soft breaking mass at the
GUT scale $M_G$); $m_{1/2}$ (the universal gaugino soft breaking mass at $M_G$);
$A_0$ (the universal cubic soft breaking mass at $M_G$);
$\tan\beta = \langle H_2 \rangle / \langle H_1 \rangle$
at the electroweak scale (where $H_2$ gives rise to $u$ quark masses and $H_1$
to $d$ quark and lepton masses); and the sign of $\mu$, the Higgs mixing
parameter in the superpotential ($W_{\mu} = \mu H_1 H_2$). Note that the lightest
neutralino $\schionezero$ and the gluino $\gluino$ are approximately related to
$m_{1/2}$ by
$M_{\schionezero} \cong 0.4\ m_{1/2}$ and $M_{\gluino} \cong 2.8\ m_{1/2}$.
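These approximate relations translate bounds on $m_{1/2}$ directly into sparticle mass bounds; a trivial sketch:

```python
# The approximate relations quoted above, M_chi ~ 0.4*m_half and
# M_gluino ~ 2.8*m_half (all masses in GeV).

def neutralino_mass(m_half):
    return 0.4 * m_half

def gluino_mass(m_half):
    return 2.8 * m_half

# e.g. a lower bound m_half >~ 300 GeV gives
# M_chi >~ 120 GeV and M_gluino >~ 840 GeV.
```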
The
model parameters are already significantly constrained by different experimental
results. Most important
for limiting the parameter space are:\\
\begin{itemize}
\item The light Higgs mass bound of $M_{h^0} > 114$ GeV from LEP \cite{higgs1}. Since
theoretical calculations of $M_{h^0}$ still have a 2-3 GeV error, we will
conservatively assume this to mean that $(M_{h^0})^{\rm theory} > 111$ GeV.
\item The $b\rightarrow s \gamma$ branching ratio~\cite{bsgamma}.
We assume here a relatively
broad range (since there are theoretical errors in extracting the branching
ratio from the data):
\begin{equation} 1.8\times10^{-4} < {\cal B}(B \rightarrow X_s \gamma) <
4.5\times10^{-4}
\label{bs}
\end{equation}
\item In mSUGRA the $\schionezero$ is the candidate for CDM.
Previous bounds from balloon flights (Boomerang, Maxima,
Dasi, etc.) gave a relic density bound for CDM of $0.07 < \Omega_{\rm CDM} h^2 < 0.21 $
(where $\Omega_{\rm CDM}$ is the density of dark matter relative to the critical
density to close the universe, and $h = H$/100 km/sec Mpc where $H$ is the Hubble
constant). However, the new data from WMAP \cite{sp} greatly tightens this (by a
factor of four) and the 2$\sigma$ bound is now:
\begin{equation}
0.095 < \Omega_{\rm CDM} h^2 <0.129
\label{om}
\end{equation}
\item The bound on the lightest chargino mass
of $M_{\schionepm} >$ 104 GeV from LEP \cite{aleph}.
\item The muon magnetic moment anomaly, $\delta a_\mu$,
using both $\mu^+$ and $\mu^-$ data~\cite{BNL}.
Using the \epem\ data to calculate the SM leading order hadronic contribution, one gets a 2.7$\sigma$
deviation of the SM from the experimental result \cite{dav,hag}.
(The \epem\ data appears to be more reliable than the $\tau$
decay data and Conserved Vector Current (CVC) analysis with CVC breaking~\cite{mar}.)
Assuming the future data confirms the $a_{\mu}$ anomaly, the combined effects
of $g_\mu -2$ and $M_{\schionepm} >$ 104 GeV then only
allow $\mu >0$ and leave only the $\stau$-$\schionezero$ co-annihilation
domain of the relic density.
\end{itemize}
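Taken together, the bounds above amount to a simple window test on any candidate parameter point; the following sketch uses exactly the numbers quoted in the list:

```python
# Window test built from the bounds listed above: theory Higgs mass > 111 GeV,
# the b -> s gamma branching-ratio window, the WMAP 2-sigma relic density
# window, and the LEP chargino bound. Inputs are a point's predicted observables.

def passes_constraints(m_h_theory, br_bsg, omega_h2, m_chargino):
    return (m_h_theory > 111.0
            and 1.8e-4 < br_bsg < 4.5e-4
            and 0.095 < omega_h2 < 0.129
            and m_chargino > 104.0)

ok = passes_constraints(115.0, 3.0e-4, 0.11, 260.0)    # inside every window
bad = passes_constraints(115.0, 3.0e-4, 0.20, 260.0)   # relic density too high
```

The sample points are illustrative; in practice each observable is itself the output of the RGE and relic density computations described below.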
One can now qualitatively state the constraints on the parameter space
produced by the above experimental bounds:
(a)~The relic density constraint produces a narrow rising band of allowed
parameter space in the $m_0$-$m_{1/2}$ plane;
(b)~In this band, the $M_{h^0}$ and $b\rightarrow s \gamma$ constraints produce a lower
bound on $m_{1/2}$ for all $\tan\beta$ of
$m_{1/2} \gtrsim 300\ {\rm GeV}$,
which implies $M_{\schionezero}> 120$ GeV and $M_{\schionepm} >$ 250 GeV.
In the following, we will analyze the case of $\mu > 0$.
In order to carry out the calculations it is necessary to include a
number of corrections to obtain results of sufficient accuracy, and
we list some of these here:
(i)~two loop gauge and one loop Yukawa renormalization group equations (RGEs)
are used from $M_G$ to
the electroweak scale, and QCD RGE below the electroweak scale for the light quarks;
(ii)~two loop
and pole mass corrections are included in the calculation of $M_{h^0}$;
(iii)~one loop corrections to $M_b$ and $M_\tau$ are included~\cite{rattazi};
(iv)~large $\tan\beta$ SUSY corrections to $b\rightarrow s \gamma$ are included
\cite{degrassi};
(v)~all $\stauone$-$\schionezero$ co-annihilation channels are included in the
relic density calculation \cite{bdutta}.
We do not include Yukawa unification or proton decay constraints as these
depend sensitively on post GUT physics, about which little is known.
Figure~\ref{WMAP_allowed_region} illustrates the constraints on the mSUGRA parameter
space for $\tan\beta$ = 10, 40 and 50 with $A_0$ = 0.
The narrow blue band is the region now allowed by WMAP (see Eq.~\ref{om}).
The dotted
pink lines are for different Higgs masses, and the light blue region would
be excluded if $\delta a_\mu > 11\times10^{-10}$. The three short solid lines
indicate the $\schionezero$-$p$ cross section values.
In the case of $\tan\beta=40$ they represent (from left) 0.03 $\times 10^{-6}$ pb,
0.002 $\times 10^{-6}$ pb, 0.001 $\times
10^{-6}$ pb and in the case of $\tan\beta=50$ they represent
(from left) 0.05 $\times 10^{-6}$ pb, 0.004 $\times 10^{-6}$ pb,
0.002 $\times 10^{-6}$ pb. In the case of $\tan\beta=10$ they represent
(from left) 5 $\times 10^{-9}$ pb and 1 $\times
10^{-9}$ pb.
It is important to note that the
narrowness of the allowed dark matter band is not a fine tuning. The lower
limit of the band comes from the rapid annihilation of neutralinos in the
early universe due to co-annihilation effects as the light $\stauone$ mass,
$M_{\stauone}$, approaches the neutralino mass as one lowers $m_0$.
Thus the lower
edge of the band corresponds to the lower
bound of Eq.~\ref{om}, and the band is
cut off sharply due to the Boltzmann exponential behavior.
The upper limit
of the band, corresponding to the upper bound of Eq.~\ref{om},
arises due to
insufficient annihilation as $m_0$ is raised. As the WMAP data becomes more
accurate, the allowed band will narrow even more. (Note that the
slope and position of the band change, however, as $A_0$ is changed.)
Thus the
astronomical determination of the amount of dark matter effectively
determines one combination of the four parameters of mSUGRA.
Since the $\stau$-$\schionezero$
co-annihilation region seems to be experimentally most favored (including the $g-2$
effect), we probe this region.
Let us now study the available sparticles when we
try to probe this co-annihilation band in a linear collider.
\begin{figure}[htb]\vspace{0cm}
\hspace*{0.0cm}
\centerline{
\epsfxsize=8.6cm\epsfysize=9cm
\epsfbox{tanbeta10lcwmb1n.eps}}
\vspace*{0.5cm}
\centerline{\epsfxsize=8.6cm\epsfysize=9cm
\epsfbox{tanbeta40lcwmb1n.eps}
\hspace*{0.2cm}
\epsfxsize=8.6cm\epsfysize=9cm
\epsfbox{tanbeta50lcwmb1n.eps}
\vspace*{0.0cm}}
\caption[fig:fig2]{
Allowed region in the $m_0$-$m_{1/2}$ plane from the relic density
constraint for $\tan\beta$ = 10, 40, and 50
with $A_0 = 0$ and $\mu >0$.
The narrow blue band is the region allowed by the WMAP data.
The dotted pink vertical lines are different Higgs masses, and the
current LEP bound produces a lower bound on $m_{1/2}$ for low $\tan\beta$.
The brick red region depicts the
$b\rightarrow s \gamma$ constraint for
$\tan\beta=40$ and 50.
For $\tan\beta=10$, the pink region depicts the Higgs mass
region $M_{h^0} \leq$ 114 GeV. The light blue region
is excluded if $\delta a_{\mu} > 11 \times 10^{-10}$.
(Other lines are discussed in text.)}
\label{WMAP_allowed_region}
\end{figure}
\section{Production and Signals of SUSY Particles at an ILC}
Figure~\ref{WMAP_allowed_region} shows
the 0.1 fb production cross section contours
for $\seleRp\seleRm$ (black dashed),
$\stauonep\stauonem$ (blue solid), $\schionezero\schitwozero$ (blue
dashed-dot) and chargino pair (vertical black dot)
production.
We see that for large $\tan\beta$ the chargino pair production
is barely observable and the selectron pair
production is unobservable.
The stau pair has the largest reach in $m_{1/2}$ and the neutralino pair
has the largest reach in $m_0$.
We therefore focus on the stau pair and the neutralino pair production
cross sections.
The kinematical reaches of the $\stauonep\stauonem$ and
$\schionezero\schitwozero$ production cross sections
for both $\sqrt s=$ 500 and 800 GeV are shown in the figures.
We see that the 800 GeV ILC will have a much bigger reach.
We will, however, use the 500 GeV collider to study the signal
since it seems to be the
initial CM energy for the ILC.
The possible signals for $\stauonep\stauonem$ and $\schionezero\schitwozero$ in
mSUGRA are the following:
\begin{eqnarray}
\epem & \rightarrow & \stauonep \stauonem \rightarrow
( \tau^+ \schionezero) + ( \tau^- \schionezero)\\
\epem & \rightarrow & \schionezero\ \schitwozero
\rightarrow \schionezero + (\tau \stauone) \rightarrow
\schionezero + (\tau^+ \tau^- \schionezero)
\end{eqnarray}
We look at the hadronic final state of taus (\tauh's)
in order to have larger event rates.
The final signal thus
has two $\tauh$'s plus \mpt.
The analysis now is quite complicated
since the $\tau$'s have low energy due to a small $\Delta M$ value.
We need to develop appropriate event selection cuts to extract
the signal from the SM background, which is dominated
by the two-photon ($\gamma^* \gamma^*\, e^+ e^-$) process.
In general, the co-annihilation region occurs for $\Delta M\,\sim$
5-15 GeV.
We choose three points for $m_{1/2}$ = 360 GeV, $m_0$ = 205, 210 and 220 GeV,
with $A_0$ = 0 and $\tan\beta=40$ and develop our event selection cuts.
The masses of SUSY particles in these three representative scenarios
are given in Table \ref{table:Msusy}.
The values of $\Delta M$ for these three points are 5, 10 and 19 GeV.
The first selection we use is the electron beam polarization.
Since the cross sections of both the signal and the background processes are affected by it,
we choose appropriate polarization to increase the significance of the signal.
The production cross sections for $\sqrt s$ = 500 GeV for different polarizations
are given in Table~\ref{table:Xsec}.
The right-handed (RH) polarization $\Pol(e^-) = -0.9$ enhances
the $\stauonep \stauonem$ signal,
and the left-handed (LH) polarization,
$\Pol(e^-) = +0.9$ enhances the $\schionezero \schitwozero$ signal.
The SM background, listed in Table~\ref{table:Xsec}, consists of $\bar\nu \nu \tau^+ \tau^-$
states arising from $WW$, $ZZ$ and $Z\nu\nu$
production and this background becomes smaller for
a right-handed electron beam.
In addition to this, we also have two photon processes which will
be described later:
$e^+ e^- \rightarrow \gamma^* \gamma^* + e^+ e^-\rightarrow \tau^+ \tau^-$
(or $q \bar{q}$) + $e^+ e^-$
where the final state $e^+ e^-$ pair are at a small angle to the beam pipe and
the $\mbox{$q\overline{q}$}$ jets fake a $\tau^+ \tau^-$ pair. This background does not change with beam
polarization and needs to be suppressed by appropriate cuts.
\begin{table}[ht]
\caption{Masses (in GeV) of SUSY particles in three representative scenarios of
$\Delta M \equiv M_{\stauone} - M_{\schionezero}$
for $m_{1/2}$ = 360 GeV, $\tan\beta = 40$, $\mu > 0$, and $A_0 = 0$.
These points satisfy all the existing experimental bounds on mSUGRA.
The numbers were obtained
using {\tt ISAJET} \protect\cite{isajet}.}
\label{table:Msusy}
\begin{center}
\begin{tabular}{l c c c c }
\hline \hline MC Point & $M_{\schitwozero}$ & $M_{\stauone}$ &
$M_{\schionezero}$ &
$\Delta M$ \\
($m_0$ in GeV) & & & & \\
\hline
1 (205) & 274.2 & 147.2 & 142.5 & 4.76 \\
2 (210) & 274.2 & 152.0 & 142.5 & 9.53 \\
3 (220) & 274.3 & 161.6 & 142.6 & 19.0 \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
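As a quick consistency check, the quoted mass differences can be recomputed directly from the masses in Table~\ref{table:Msusy}. A minimal sketch (the small mismatch for Point 1 is simply the 0.1 GeV rounding of the table entries):

```python
# Consistency check on Table 1: Delta M = M_stau1 - M_chi1^0 for the three
# representative mSUGRA points (all masses in GeV, read from the table).
points = {
    1: {"M_stau1": 147.2, "M_chi10": 142.5, "dM_quoted": 4.76},
    2: {"M_stau1": 152.0, "M_chi10": 142.5, "dM_quoted": 9.53},
    3: {"M_stau1": 161.6, "M_chi10": 142.6, "dM_quoted": 19.0},
}

for n, p in points.items():
    dM = p["M_stau1"] - p["M_chi10"]
    # The table masses are rounded to 0.1 GeV, so the recomputed Delta M
    # agrees with the quoted value only to that precision.
    print(f"Point {n}: Delta M = {dM:.1f} GeV (quoted {p['dM_quoted']} GeV)")
```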
\begin{table}[ht]
\caption{Cross section times branching ratio (in fb),
$\sigma \times \Br(\tau \rightarrow \tauh)^{2}$, for
SUSY and SM 4-fermions (4f) production in two cases of
polarizations, $\Pol(e^-) = -0.9$ (RH) and +0.9 (LH).
The SUSY cross sections were obtained using {\tt ISAJET}~\protect\cite{isajet},
and {\tt WPHACT}~\cite{wphact} was used for the cross sections of the $\bar\nu \nu \tau^+ \tau^-$ processes.}
\label{table:Xsec}
\begin{center}
\begin{tabular}{l c | c c }
\hline \hline
& $\Pol(e^-)$ & $-$0.9 (RH)& 0.9 (LH)\\\hline
SM 4f & & 7.84 & 89.8 \\
\hline
SUSY point 1 & $\schionezero\schitwozero$ & 0.41 & 6.09 \\
& $\stauonep\stauonem$ & 28.3 & 13.2 \\
\hline
SUSY point 2 & $\schionezero\schitwozero$ & 0.40 & 6.00 \\
& $\stauonep\stauonem$ & 26.6 & 12.4 \\
\hline
SUSY point 3 & $\schionezero\schitwozero$ & 0.38 & 5.68 \\
& $\stauonep\stauonem$ & 23.0 & 10.6 \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
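The cross sections in Table~\ref{table:Xsec} translate into raw event counts via $N = \sigma \times \mathcal{L}$. A minimal sketch for the 500~\mbox{${\rm fb}^{-1}$}\ assumed later in the analysis; comparing with the accepted counts of Table~\ref{table:Nevent_500invfb} then gives a rough acceptance (the acceptance estimate is illustrative, not a result quoted in the text):

```python
# Raw expected events N = sigma * L, using the sigma x BR(tau -> tau_h)^2
# values (in fb) from Table 2 for stau pair production with RH polarization.
L_int = 500.0  # fb^-1, integrated luminosity assumed in the analysis

sigma_stau_RH = {1: 28.3, 2: 26.6, 3: 23.0}  # points 1-3
for pt, sigma in sigma_stau_RH.items():
    print(f"Point {pt}: N = {sigma * L_int:.0f} tau_h tau_h events before cuts")

# For Point 1, Table 4 reports 122 accepted events (mpt > 5 GeV, 1-deg VFD),
# so the overall selection acceptance is of order one percent.
acc_pt1 = 122 / (28.3 * L_int)
print(f"Point 1 acceptance ~ {100 * acc_pt1:.2f}%")
```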
The event selection cuts with the LH polarization will be
optimized to enhance the $\schionezero\schitwozero$ signal
and the RH cuts to optimize the
$\stauonep\stauonem$ signal.
The generation of MC samples and the analysis for the signal and the background
was done using the following programs:
(1)~{\tt ISAJET}~\cite{isajet} to generate SUSY events;
(2)~{\tt WPHACT}~\cite{wphact} for SM backgrounds;
(3)~{\tt TAUOLA}~\cite{tauola} for tau decay;
(4)~Events were simulated and analysed with a LC detector simulation~\cite{LCD}.
\subsection{Event Selection}
\begin{table}[t]
\caption{Kinematic cuts
for the LH ($\Pol = 0.9$) and the RH ($\Pol = -0.9$) cases}
\label{table:EventSelectionCuts}
\begin{center}
\begin{tabular}{ l | c c c }
\hline \hline
Cut Variable(s) & LH ($\Pol(e^-)$ = 0.9) & ~~ & RH ($\Pol(e^-)$ = $-$0.9) \\
\hline \hline
$N_{jet}$($E_{jet}>3$ GeV) & \multicolumn{3}{c}{2} \\
$\tau_h$ ID & \multicolumn{3}{c}{1, 3 tracks; $M_{tracks} <$ 1.8 GeV} \\
\hline
Jet acceptance & $-q_{jet} \cos\theta_{jet} < 0.7$ & ~~ & $|\cos\theta_{jet} | < 0.65$\\
& $-0.8 < \cos\theta(j_2,p_{vis}) < 0.7$ & ~~ & $| \cos\theta(j_2,p_{vis}) | < 0.6$\\\hline
Missing \pt & \multicolumn{3}{c}{$> 5$ GeV}\\ \hline
Acoplanarity & \multicolumn{3}{c}{$> 40\mbox{$^{\circ}$}$} \\
\hline
Veto on EM clusters & \multicolumn{3}{c}{No EM cluster in $5.8\mbox{$^{\circ}$} < \theta < 28\mbox{$^{\circ}$}$ with $E > 2$ GeV} \\
or electrons & \multicolumn{3}{c}{No electrons within $\theta > 28\mbox{$^{\circ}$}$ with $\pt > 1.5$ GeV} \\
\hline
Very forward calorimeter (1\mbox{$^{\circ}$} (2\mbox{$^{\circ}$}) - 5.8\mbox{$^{\circ}$}) &
\multicolumn{3}{c}{No EM cluster with $E>100$ GeV} \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In order to reduce the backgrounds we require
a set of event selection cuts and
these cuts are given in Table~\ref{table:EventSelectionCuts}.
In this table: $j_2$ stands for second leading $\tau$ jet,
$p_{vis}$ gives the sum of visible momenta and $\theta(j_2,p_{vis})$ is the angle
between them. $\theta_{jet}$ is the angle between a $\tau$ jet and the beam direction.
The jets are reconstructed using the JADE algorithm with $Y_{{\rm cut}} \geq 0.0025$~\cite{jade}
and selected with $E_{jet} >$ 3 GeV.
Such a value of the $Y_{{\rm cut}}$ parameter helps to select narrow $\tau$-like jets.
The jet acceptance cut is required to reduce the SM background events such as $WW$ and $ZZ$ production.
The acoplanarity is defined as ${\cal A} = 180\mbox{$^{\circ}$} - \Delta\phi(j_1,j_2)$,
where $\Delta\phi(j_1,j_2)$ is the azimuthal angle
between two $\tau$-jets.
The cut on acoplanarity is very powerful in rejecting two photon SM backgrounds which
have a huge cross section.
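For concreteness, the acoplanarity of a jet pair can be computed from the azimuthal angles as follows. This is a sketch with illustrative angles (not event data); back-to-back configurations, typical of two-photon events, give ${\cal A} = 0$ and fail the $40\mbox{$^{\circ}$}$ cut:

```python
import math

def acoplanarity_deg(phi1, phi2):
    """A = 180 deg - delta_phi(j1, j2), with delta_phi folded into [0, 180]."""
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return 180.0 - math.degrees(dphi)

# Back-to-back jets have A = 0 and are rejected by the A > 40 deg cut;
# jets recoiling against missing momentum can have large A and survive.
print(acoplanarity_deg(0.0, math.pi))        # 0.0
print(acoplanarity_deg(0.0, 0.5 * math.pi))  # 90.0
```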
In order to have MC samples of manageable size for two photon SM processes
we apply a cut ${\cal A}_{MC} > 30\mbox{$^{\circ}$}$ already at the generator level.
(In addition for these samples we apply the generator level cut on $\pt^{\tau MC} > 4$ GeV
and require the $\tau$ to be separated from the beam line by more than 35\mbox{$^{\circ}$}).
We also require no EM clusters (a)~in $5.8\mbox{$^{\circ}$} < \theta < 28\mbox{$^{\circ}$}$ where the ILC detector
has no tracking system and
(b)~at angles below $5.8\mbox{$^{\circ}$}$, with two options for a very forward calorimeter
(VFD).
In our calculation, beamstrahlung and bremsstrahlung are
included in the two-photon annihilation process.
The two photon background in our analysis is similar to that discussed in Refs.~\cite{exp1,exp2}.
The number of accepted events for each class of final states for the cases \mpt\ $>$ 5, 10, and 20
GeV are summarized in Table~\ref{table:Nevent_500invfb}.
\begin{itemize}
\item The RH polarization strongly suppresses the SM background events
($WW$ etc.) and
the neutralino events ($\schionezero\schitwozero$ ). We also need
a 1\mbox{$^{\circ}$}\ VFD and \mpt\ $>$ 5 GeV to get a clean
signal for the $\stauonep\stauonem$ events.
With no VFD there would be approximately 4,400
SM $\gamma\gamma$ background events swamping the SUSY signal.
\item The LH polarization allows for
the detection of the $\schionezero\schitwozero$ signal
with \mpt\ $>$ 20 GeV without a VFD
(as the $\gamma\gamma$ background falls to zero then),
or \mpt\ $>$ 10 GeV with a 2\mbox{$^{\circ}$}\ VFD.
However, both a 1\mbox{$^{\circ}$}\ VFD and \mpt\ $>$ 5 GeV are necessary
to detect the $\stauonep\stauonem$ events and to measure
$\Delta M$ in the LH case.
In the case of no VFD there would be $\sim$9,300
SM $\gamma\gamma$ background events with \mpt\ $>$ 5 GeV.
Note that the event selection criteria in
the LH polarization case are different from the RH case.
\end{itemize}
Thus we find that the VFD is
essential to detect SUSY in this region of parameter space.
A lower \mpt\ increases the number of
events and the significance.
A 5 GeV \mpt\ cut has been found to be feasible at a 500 GeV ILC.
It should be noted that the
1\mbox{$^{\circ}$}\ VFD is feasible for the ILC
since the TESLA design (which has been accepted for
the ILC technology) allows a VFD coverage
down to 3.2 mrad (or 0.18$^\circ$)~\cite{tesla}.
We also note that
our study is based on head-on collisions of electron and positron.
However, it has been shown that the VFD is still able to
reduce the two-photon background events
even in the case of a beam crossing~\cite{exp2}.
The $\stauonep\stauonem$ cross section has the largest reach
along the co-annihilation band and one would use this channel
to measure the mass difference.
This channel needs RH polarization for enhancement.
In Figure~\ref{fig5}, we plot the number of events accepted
in our selection criteria with 500 \mbox{${\rm fb}^{-1}$}\
of luminosity as a
function of $\Delta M$ for $m_0 =$ 203-220 GeV
($m_{1/2}$ = 360 GeV) in the RH polarization case.
We see that we have more than 100 events, which
will be adequate for the measurement of $\Delta M$
as discussed in Section \ref{sec:MassMeasurement},
for $\Delta M > 4.5$ GeV.
Figure~\ref{fig5B} is a plot of the acceptance
as a function of $\Delta M$ for $m_0 =$ 203-220 GeV
with a $1\mbox{$^{\circ}$}$ VFD in the case of RH polarization.
The acceptance drops rapidly as $\Delta M$ goes below 5 GeV.
The event acceptance also depends on $m_{1/2}$
as shown in Figure~\ref{figaccpt}.
This dependence arises because the $\tau$'s are less energetic
and their angular distributions change as the stau becomes heavier.
We calculate the significance ($\sigma$) as
$N_{{\rm signal}}/ \sqrt{ N_{{\rm BG}} }$,
where the
$\schionezero\schitwozero$ events are also treated as backgrounds, for a window of
$\mbox{$M_{{\rm eff}}$}\
\equiv M(j_1,j_2,\menergy)$ (invariant mass of two $\tau$-jets
and missing energy).
For $\Delta M$ = 4.76 and 19 GeV,
the allowed ranges for \mbox{$M_{{\rm eff}}$}\ are 0-54.5 GeV and 0-183.5 GeV, respectively.
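As an illustration of this counting, the significance for Point 2 can be assembled from the accepted-event counts of Table~\ref{table:Nevent_500invfb} (RH polarization, \mpt\ $>$ 5 GeV, 1\mbox{$^{\circ}$}\ VFD). This sketch ignores the \mbox{$M_{{\rm eff}}$}\ window applied in the text, so it is only indicative:

```python
import math

# Accepted events for 500 fb^-1, RH polarization, mpt > 5 GeV, 1-deg VFD
# (Point 2, read from Table 4). Neutralino events are counted as background.
N_stau = 786   # stau1+ stau1- signal
N_neut = 26    # chi10 chi20, treated as background
N_SM4f = 129   # SM 4-fermion background
N_gg   = 4     # two-photon background with a 1-deg VFD

N_bg = N_neut + N_SM4f + N_gg
significance = N_stau / math.sqrt(N_bg)
print(f"S/sqrt(B) = {significance:.1f} sigma")  # well above the 5 sigma threshold
```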
The 5$\sigma$ reach for the $\stauone$ mass is found to be
$\leq$ 215 GeV ($m_{1/2}\leq$ 520 GeV)
for $\Delta M$ = 4.76 GeV with a 1\mbox{$^{\circ}$}\ VFD and \mpt\ $>$ 5 GeV.
For $\Delta M$ = 19 GeV,
the $5\sigma$ reach of the $\stau$ mass at a 500 GeV ILC
is $\leq$ 237 GeV ($m_{1/2}\leq$ 537 GeV).
It should be noted that our event selection cuts
are optimized for a 500 GeV machine.
In the case of an 800 GeV ILC, the cuts need to be re-optimized
based on the new SUSY backgrounds
and machine design limitations (e.g. the lower bound on \mpt\ needs to be raised).
\begin{figure}[htb]\vspace{-4cm}
\centerline{
\epsfxsize=12.6cm\epsfysize=16.0cm
\epsfbox{LCRH09tk.eps}}\vspace{-2cm}
\caption[fig:fig5]{Number of $\tau_h\tau_h\schionezero\schionezero$ events
from $\stauonep\stauonem$ (solid circles)
and $\schionezero\schitwozero$ (solid triangles) production
as a function of $\Delta M$ (for $m_0$ = 203-220 GeV at $m_{1/2}$ = 360 GeV)
in the RH polarization case.
We assume 500 \mbox{${\rm fb}^{-1}$}\ of luminosity.}
\label{fig5}
\end{figure}
\begin{figure}[htb]
\centerline{
\epsfxsize=7.6cm\epsfysize=9.0cm
\epsfbox{LCtwotau_AvsDeltaM.eps}}
\caption[fig:fig5b]{Total event acceptance for
$\stauonep\stauonem \relax\ifmmode{\rightarrow}\else{$\rightarrow$}\fi \tauh\tauh\schionezero\schionezero$
as a function of $\Delta M$ for $m_0$ = 203-220 GeV
($m_{1/2}$ = 360 GeV) in the RH polarization case. }
\label{fig5B}
\end{figure}
\begin{figure}[htb]
\centerline{
\epsfxsize=10cm\epsfysize=10.0cm
\epsfbox{m12_vs_acc_for_dm5.1.eps}}
\caption[fig:fig5x]{Total event acceptance for
$\stauonep\stauonem \relax\ifmmode{\rightarrow}\else{$\rightarrow$}\fi \tauh\tauh\schionezero\schionezero$
as a function of $m_{1/2}$ for $\Delta M$ = 5.1 GeV in the RH polarization case. }
\label{figaccpt}
\end{figure}
\begin{table}[t]
\caption{Number of $\tau_h\tau_h$ plus \mpt\ events for luminosity of 500 fb$^{-1}$ for
points 1, 2 and 3 corresponding to $\Delta M$ = 4.76, 9.5, and 19.0 GeV,
respectively.
All numbers except for two-photon backgrounds are common
for different options of VFD.
\label{table:Nevent_500invfb}}
\begin{center}
\begin{tabular}{l l | r r r c |c r r r }
\hline
\hline
& & \multicolumn{4}{c|}{${\cal P}(e^-)=0.9$ (LH)} &
\multicolumn{4}{c}{${\cal P}(e^-)=-0.9$ (RH)} \\
Process & & $\mpt^{min}$ = 5 & 10 & 20 & ~ & ~ & 5 & 10 & 20 \\
\hline
$\schitwozero \schionezero$
& Pt.1 & 374 & 342 & 260 & ~ & ~ & 15 & 14 & 11 \\
& Pt.2 & 624 & 572 & 425 & ~ & ~ & 26 & 24 & 18 \\
& Pt.3 & 743 & 697 & 529 & ~ & ~ & 29 & 28 & 22 \\
\hline
$\staup \staum$
& Pt.1 & 73 & 2 & 0 & ~ & ~ & 122 & 2 & 0 \\
& Pt.2 & 524 & 267 & 11 & ~ & ~ & 786 & 437 & 22 \\
& Pt.3 & 946 & 781 & 335 & ~ & ~ & 1283 & 1076 & 468 \\
\hline
SM 4f & & 1745 & 1626 & 1240 & ~ & ~ & 129 & 123 & 100 \\
SM $\gamma\gamma$ & 2-5.8$^\circ$ VFD & 535 & 7 & 0 & ~ & ~& 249 & 4 & 0\\
& 1-5.8$^\circ$ VFD & 10 & 0 & 0 & ~ & ~ & 4 & 0 & 0\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{Measurement of Stau Neutralino Mass Difference}
\label{sec:MassMeasurement}
The measurement of a small $\Delta M$ value is crucial since it would be key evidence
of the existence of
the $\stau$-$\schionezero$ co-annihilation.
We propose the variable \mbox{$M_{{\rm eff}}$}\
as a key discriminator of the signal events
from its background events.
We first generate the high statistics MC samples for the SM and various SUSY events
(by changing the $m_0$ value) and prepare the templates of the \mbox{$M_{{\rm eff}}$}\ distributions
for the SM, $\schionezero\schitwozero$, and $\stauonep\stauonem$ events.
Figure~\ref{fig8} (without the data points for 500\ \mbox{${\rm fb}^{-1}$}) shows examples of such templates
for two $m_0$ values for
a 2\mbox{$^{\circ}$}\ VFD in the RH polarization case.
The SM cross section is fitted by a blue line, the stau pair by
a green line and the neutralino pair by a red line.
The stau pair production
peak separates from the SM background as $\Delta M$ increases.
This is because for
smaller $\Delta M$,
the two $\tau$ signal appears like the $\tau$'s
from the two photon background and consequently
this
region requires a VFD coverage down to 1\mbox{$^{\circ}$}.
From Figure~\ref{fig8}
we also find
that the stau pair production cross section can be measured to an accuracy of
$\pm$ 4\% for Point 2.
Since the data of 500\ \mbox{${\rm fb}^{-1}$}\ of luminosity will be generated in the initial run for a few
years, we then generate the MC samples equivalent to 500\ \mbox{${\rm fb}^{-1}$}\ of luminosity
for particular $\Delta M$ values and
fit them with the template functions generated for high statistics sample.
The black lines in Figure~\ref{fig8} show the fitting of the 500 \mbox{${\rm fb}^{-1}$}\ MC samples for point 2 with
the templates of two different $m_0$ values of 210 and 211 GeV.
(Other parameters are kept at the same values as before.) We then compare the $\chi^2$ for
these fits.
We find that the $\chi^2$ for these fits is minimum for the $m_0$ = 210 GeV case.
We use the range of $m_0$ = 203-220 GeV and try to fit the 500 \mbox{${\rm fb}^{-1}$}\
MC sample for point 2 and determine the $\chi^2$ for all these
different points.
We plot the $\chi^2$ of these fits in Figure~\ref{fig9}
and find that the 1$\sigma$ interval in $\chi^2$ corresponds to $\Delta M = 9.5\pm 1$ GeV.
The true value of $\Delta M$ for the point 2 is 9.53 GeV.
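The extraction of $\Delta M$ and its 1$\sigma$ uncertainty from the template fits follows the standard $\Delta\chi^2 = 1$ prescription. A toy sketch; the parabolic $\chi^2$ curve below is a stand-in for the actual template-fit values, chosen only to reproduce the quoted $9.5 \pm 1$ GeV:

```python
import numpy as np

# Toy chi^2 curve vs Delta M, mimicking the template-fit scan for Point 2.
# The quadratic shape and the 1 GeV width are assumptions for illustration.
dM = np.linspace(5.0, 15.0, 201)
chi2 = ((dM - 9.5) / 1.0) ** 2

chi2_min = chi2.min()
best = dM[chi2.argmin()]
# 1-sigma interval: points with chi^2 - chi^2_min <= 1
# (a tiny tolerance absorbs floating-point rounding at the boundary)
band = dM[chi2 - chi2_min <= 1.0 + 1e-9]
print(f"Delta M = {best:.2f} +{band.max() - best:.2f} -{best - band.min():.2f} GeV")
# ~ 9.5 +1.0 -1.0 GeV
```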
We repeat the same study for different $\stauone$ masses
i.e. for different $\Delta M$.
For lower $\Delta M$($\sim$ 5 GeV),
we need to use a VFD of $1^\circ$.
The accuracy of mass determination for
two different VFDs is summarized
in Table~\ref{table:dM_accuracy},
showing that
the uncertainties are at a level of 10\%, except
for $\Delta M \sim$ 5 GeV,
where the uncertainty is 20\% and we have $\sim$100 $\stauonep\stauonem$ events.
Figure~\ref{fig10} illustrates the $\stauone$ mass reach
as a function of luminosity for a 5$\sigma$
discovery
with at least 100 events
for $\Delta M\sim$ 5 GeV,
where
the $\Delta M$ would be determined to 20\% or better.
We find that $\stauone$ masses of 164 GeV and 205 GeV
can be reached at the $5\sigma$ level, with a 20\% (or better) uncertainty
in the $\Delta M$ measurement,
with 500 and 2500 \mbox{${\rm fb}^{-1}$}, respectively,
at a 500 GeV ILC.
\begin{table}[t]
\caption{Accuracy of the determination of $\Delta M$ for different VFDs.}
\label{table:dM_accuracy}
\begin{center}
\begin{tabular}{c c| c| c c }
\hline
\hline
& & $N_{\stauonep\stauonem}$ &
\multicolumn{2}{c}{$\Delta M$(``500 \mbox{${\rm fb}^{-1}$}'' experiment)} \\ \cline{4-5}
$m_0$ & $\Delta M$ & (500 \mbox{${\rm fb}^{-1}$}) & 2$^\circ$ VFD & 1$^\circ$ VFD \\
\hline
205 & 4.76 & 122 & Not Determined & 4.7$^{+1.0}_{-1.0}$\\
210 & 9.53 & 787 & 9.5$^{+1.1}_{-1.0}$ & 9.5$^{+1.0}_{-1.0}$\\
213 & 12.4 & 1027 & 12.5$^{+1.4}_{-1.4}$ & 12.5$^{+1.1}_{-1.4}$\\
215 & 14.3 & 1138 & 14.5$^{+1.1}_{-1.4}$ & 14.5$^{+1.1}_{-1.4}$\\
\hline
\hline
\end{tabular}
\end{center}\end{table}
\section{Conclusion}
We have probed the mSUGRA and SM signals
in the $\stau$-$\schionezero$ co-annihilation region at a
500 GeV ILC with 500 \mbox{${\rm fb}^{-1}$} of luminosity.
In this region,
the mass difference $\Delta M$ between the $\stauone$ and the $\schionezero$
would typically be 5-15 GeV for a large range of $m_{1/2}$.
This small mass difference
would produce very low energy taus in the final state.
The dominant SM background would be the two-photon process.
With RH $e^-$ beams
our study has focused on the
$\staup\staum$ production because
it allowed us to reach large $m_{1/2}$ values
in the allowed parameter space.
We proposed the invariant mass of the two tau jets
and the missing energy, $M(j_1,j_2,\menergy)$, as the variable
to determine the mass difference and
found the accuracy would be at a level of 10\%
using a 1\mbox{$^{\circ}$}\ (or 2\mbox{$^{\circ}$}) VFD
except for $\Delta M$ = 4.76 GeV.
For $\Delta M \simeq$ 5 GeV,
a 1\mbox{$^{\circ}$}\ VFD would be crucial
to suppress the two-photon background
and the accuracy there would be
about 20\% with approximately 100 signal events.
We also calculated
the discovery significance of this region
and determined the $5\sigma$ reach in $m_{1/2}$ for
500 \mbox{${\rm fb}^{-1}$} of luminosity.
\begin{figure}\vspace{-2cm}
\centerline{\epsfxsize=10cm\epsfysize=10.0cm
\epsfbox{fit_ex_MEff1_EX210_HI210.eps}}\vspace*{-0cm}
\centerline{\epsfxsize=10cm\epsfysize=10.0cm
\epsfbox{fit_ex_MEff1_EX210_HI211.eps}}
\caption{Example of fitting a MC sample containing
SM and SUSY ($m_0$ = 210 GeV, $m_{1/2}$ = 360 GeV) events equivalent to
500\ \mbox{${\rm fb}^{-1}$}\
to two \mbox{$M_{{\rm eff}}$}\ templates
for SM+SUSY ($m_0$ = 210 or 211 GeV, $m_{1/2}$ = 360 GeV).
A 2\mbox{$^{\circ}$}\ VFD is assumed.
The value of $\chi^{2}$/n.d.f. is minimum when the events from the same
SUSY parameter are in the 500 \mbox{${\rm fb}^{-1}$}\ sample.
A $\gamma\gamma$ contribution (a narrow distribution around 20 GeV)
is apparent.
The fitting is similar for 1\mbox{$^{\circ}$}\ VFD,
except the $\gamma\gamma$ contribution is substantially reduced.}
\label{fig8}
\end{figure}
\begin{figure}\vspace{-2cm}
\centerline{ \DESepsf(diff_noGG_chi2m_P-09_210.eps width 10 cm) }
\caption {\label{fig9} $\chi^2$-$\chi^2_{\rm min}$
of fitting the 500 \mbox{${\rm fb}^{-1}$}\ sample
for SUSY Point 2 ($m_0$ = 210 GeV, $m_{1/2}$ = 360 GeV)
with the high statistics templates
is plotted as a function of $\Delta M$.}
\end{figure}
\begin{figure}\vspace*{1cm}
\centerline{\epsfxsize=10cm\epsfysize=10.0cm
\epsfbox{lumstau1.eps}}
\caption{The $\stauone$ mass reach with $\Delta M= 5$ GeV as a function of luminosity
for a 5$\sigma$ discovery with at least 100 events.}
\label{fig10}
\end{figure}
\section{Acknowledgments}
This work is supported in part by a NSF Grant
PHY-0101015, in part by NSERC
of Canada and in part by a DOE Grant DE-FG02-95ER40917.
\newpage
\section{\bf Introduction}
Poisson cohomology, a way of parametrizing the deformations of a Poisson structure, is an important invariant in Poisson geometry. However, the computation of Poisson cohomology is quite difficult in general and few explicit results are known (\cite{Dufour}, p.43). One class of manifolds where the Poisson cohomology is known is the case of b-symplectic manifolds. A \emph{b-symplectic manifold}, defined by Victor Guillemin, Eva Miranda, and Ana Rita Pires \cite{Guillemin01}, is a $2n$-dimensional manifold $M$ equipped with a Poisson bi-vector $\pi$ that is non-degenerate except on a hypersurface $Z$ where there exist coordinates such that locally $Z=\left\{x_1=0\right\}$ and
$$\displaystyle{\pi=x_1\dfrac{\partial}{\partial x_1}\wedge \dfrac{\partial}{ \partial y_1}+\sum_{i=2}^n \dfrac{\partial}{\partial x_i}\wedge \dfrac{\partial}{\partial y_i}.}$$
Recently, much work has been done studying the various facets of $b$-symplectic structures (a few select examples include \cite{Frejlich, Guillemin02, Marcut02}). In particular, Ioan M\u{a}rcut and Boris Osorno Torres (\cite{Marcut01}, Prop.1) and Guillemin, Miranda, and Pires (\cite{Guillemin01}, Thm. 30) showed that the Poisson cohomology $H^p_\pi(M)$ is isomorphic to the de Rham cohomology of a specific Lie algebroid, the $b$-tangent bundle, and hence, $$H^p_\pi(M)\simeq H^p(M)\oplus H^{p-1}(Z).$$
In \cite{Lanius}, we introduced an approach to computing Poisson cohomology using the de Rham cohomology of a Lie algebroid, called the rigged algebroid. We will employ this method to study a class of log symplectic manifolds, a generalization of the b-symplectic case as formulated by Marco Gualtieri, Songhao Li, Alvaro Pelayo, and Tudor Ratiu \cite{Gualtieri02}.
We will consider smooth manifolds $M$ together with a finite set $D$ of smooth hypersurfaces $Z_i\subset M$ that intersect transversely. In other words, $D$ is a smooth normal crossing divisor on $M$. The $b$-tangent bundle over $(M,D)$, called the $\log$ tangent bundle ${}^{log}TM$ in \cite{Gualtieri02}, is the vector bundle whose smooth sections are the vector fields tangent to $D$, that is $$\displaystyle{\left\{u\in\mathcal{C}^\infty(M;TM):u|_{Z}\in\mathcal{C}^\infty(Z,TZ)\text{ for all }Z\in D \right\}}.$$ This vector bundle is a Lie algebroid with anchor map the inclusion into $TM$ and with bracket induced by the standard Lie bracket on $TM$.
Let $\tau$ denote a nonempty subset of $\left\{1,\dots,|D|\right\}$. The authors of \cite{Gualtieri02} point out that the Lie algebroid cohomology of the $b$-tangent bundle over $(M,D)$ is
\begin{equation}\label{eq:logcohom}{}^{b}H^p(M)\simeq H^p(M)\oplus\bigoplus_{|\tau|\leq p}H^{p-|\tau|}\big(\bigcap_{t\in \tau}Z_t\big).\end{equation} In the case when $D$ is a single hypersurface $Z\subset M$, we recover the de Rham cohomology of the $b$-tangent bundle originally computed by Rafe Mazzeo and Richard Melrose (\cite{MelroseGreenBook}, Prop. 2.49).
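As a sanity check on (\ref{eq:logcohom}), one can count dimensions for the 4-torus of Example \ref{motivatingexample} below, where each $Z_i$ is two disjoint copies of $\mathbb{T}^3$ and $Z_1\cap Z_2$ is four copies of $\mathbb{T}^2$. This sketch assumes only that Betti numbers of tori are binomial coefficients and add over disjoint components:

```python
from math import comb

def b_torus(n, p, copies=1):
    """p-th Betti number of a disjoint union of `copies` n-tori."""
    return copies * comb(n, p)

# dim ^bH^2(M) = dim H^2(M) + dim H^1(Z1) + dim H^1(Z2) + dim H^0(Z1 /\ Z2)
# for M = T^4 with Z_i the zero set of sin(a_i):
dim_bH2 = (b_torus(4, 2)                  # H^2(T^4) -> 6
           + 2 * b_torus(3, 1, copies=2)  # H^1(Z1) + H^1(Z2) -> 6 + 6
           + b_torus(2, 0, copies=4))     # H^0(Z1 /\ Z2) -> 4
print(dim_bH2)  # 22
```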
\begin{definition*}\cite{Gualtieri02} A \emph{log symplectic structure} on $(M,D)$ is a closed non-degenerate 2-form $\omega$ in the de Rham complex of the $b$-tangent bundle. In other words, $$\omega\in{}^{b}\Omega^2(M)=\mathcal{C}^\infty\big(M;\wedge^2\big({}^{b}T^*M\big)\big)$$ satisfying $$d\omega=0\text{ and }\omega^n\neq 0.$$ The form $\omega$ induces a map $\omega^\flat$ between the $b$-tangent and $b$-cotangent bundles.\begin{center}$\xygraph{!{<0cm,0cm>;<1cm, 0cm>:<0cm,1cm>::}
!{(1.25,0)}*+{{}^{b}TM}="b"
!{(4.9,0)}*+{{{}^{b}T^*M}}="c"
!{(1.75,.1)}*+{{}}="d"
!{(4.25,.1)}*+{{}}="e"
!{(1.75,-.1)}*+{{}}="f"
!{(4.25,-.1)}*+{{}}="g"
"d":"e"^{\omega^{\flat}}
"g":"f"^{\pi^{\sharp}=(\omega^{\flat})^{-1}}}$\end{center} The inverse map is induced by a bi-vector $\pi\in\mathcal{C}^\infty(M;\wedge^2({}^{b}TM))$. This bi-vector is called a \emph{log Poisson structure} on $(M,D)$.
\end{definition*}
Log symplectic manifolds are a broad generalization of $b$-symplectic manifolds as developed in \cite{Guillemin01}. We restrict our attention to a class of structures that satisfy some nice geometric features of $b$-symplectic structures that are lost in the whole of the $\log$ symplectic category. In particular, Victor Guillemin, Eva Miranda, and Ana Rita Pires \cite{Guillemin01} showed that every $b$-symplectic form induces a cosymplectic structure $$(\theta,\eta)\in\Omega^1(Z)\times\Omega^2(Z)$$ on its singular hypersurface $Z$. That is, there exists a pair of closed forms $\theta, \eta$ on $Z$ such that $$\theta\wedge\eta^{n-1}\neq 0$$ where the dimension of $Z$ is $2n-1$. We will consider certain $\log$ symplectic structures that induce a generalization of cosymplectic structures on the intersection of any subset of hypersurfaces in $D$. In particular we would like the induced cosymplectic structures on each $Z\in D$ to intersect `nicely'.
\begin{Example}\label{motivatingexample} As a motivating example, let us examine some cases on the 4-torus $\mathbb{T}^4$ identified as $\mathbb{T}^2\times\mathbb{T}^2$ with angular coordinates $(a_1,a_2)$ and $(b_1,b_2)$ respectively. Let $D=\left\{Z_1,Z_2\right\}$ where $Z_1$ is the zero set of $\sin(a_1)$ and $Z_2$ is the zero set of $\sin(a_2)$. The $b$-tangent bundle is generated by the vector fields $$\sin(a_1)\dfrac{\partial}{\partial a_1},\hspace{.25ex}\sin(a_2)\dfrac{\partial}{\partial a_2},\hspace{.25ex}\dfrac{\partial}{\partial b_1},\hspace{.25ex}\dfrac{\partial}{\partial b_2}.$$
We will look at three symplectic forms on this bundle. As an example of the behavior we desire, consider the $\log$ symplectic forms $$\omega_I=\dfrac{da_1}{\sin(a_1)}\wedge\dfrac{da_2}{\sin(a_2)}+db_1\wedge db_2$$ and $$\omega_{II}=\dfrac{da_1}{\sin(a_1)}\wedge db_1-\dfrac{da_2}{\sin(a_2)}\wedge db_2.$$ The corresponding bi-vectors are respectively given by $$\pi_I=\sin(a_2)\sin(a_1)\dfrac{\partial}{\partial a_2}\wedge\dfrac{\partial}{\partial a_1}+\dfrac{\partial}{\partial b_2}\wedge \dfrac{\partial }{\partial b_1}$$
and$$\pi_{II}=\sin(a_1) \dfrac{\partial }{\partial b_1}\wedge\dfrac{\partial}{\partial a_1}-\sin(a_2)\dfrac{\partial}{\partial b_2}\wedge\dfrac{\partial}{\partial a_2}.$$ \vspace{1ex}
\noindent \begin{minipage}{0.6\linewidth}
\hspace{3ex}In the symplectic foliation induced by $\pi_I$, the submanifold $\left\{a_1=0,a_2=0\right\}$ is a symplectic leaf with symplectic form $db_1\wedge db_2$. On the other hand, $\pi_{II}$ induces a foliation on $\left\{a_1=0,a_2=0\right\}$ of points. To see this, note that the leaves associated to $\pi_{II}$ on $\left\{a_1=0,a_2=0\right\}$ are given by the distribution $\ker db_1\cap \ker db_2=\left\{0\right\}$.
\hspace{3ex} Thus we say $\omega_{I}$ induces the symplectic form $db_1\wedge db_2$ on $\left\{a_1=0,a_2=0\right\}$ and that $\omega_{II}$ induces the closed 1-forms $db_1$ and $db_2$ satisfying $db_1\wedge db_2\neq 0.$ We call the pair $(db_1,db_2)$ a $2$-cosymplectic structure (see Definition \ref{kco}). \end{minipage}
\hspace{-8ex}\begin{minipage}{0.4\linewidth}
\hspace{11ex}{\bf $\begin{array}{c} \text{ symplectic foliation}\\ \text{ on }\left\{a_1=0,a_2=0\right\}\end{array}$}
\begin{multicols}{2}
\begin{flushright}$~$\vspace{2ex}
$\pi_I$ \vspace{9ex}
$\pi_{II}$ \end{flushright}\columnbreak
\includegraphics[scale=.3]{PartitionablePic05.eps} \end{multicols} \end{minipage}\vspace{1ex}
We are interested in studying $\log$ symplectic forms that induce structures like these at the intersection of elements in $D$. Not all log symplectic forms on the 4-torus will induce a symplectic or $2$-cosymplectic structure. For instance, consider the log symplectic form $$\omega_{III}=\cos(b_1)\omega_I+\sin(b_1)\omega_{II}.$$ The inverse is given by $$\pi_{III}=\cos(b_1)\pi_I+\sin(b_1)\pi_{II}.$$
\noindent \begin{minipage}{0.6\linewidth}
\hspace{3ex} In the case of $\pi_{III}$, the induced foliation on $\left\{a_1=0,a_2=0\right\}$ is not regular. There are two open leaves $\left\{a_1=0,a_2=0\right\}\setminus\left\{b_1=\pi/2,3\pi/2\right\}.$ At $\left\{b_1=\pi/2,3\pi/2\right\}$, the foliation is given by the distribution $\ker db_1\cap \ker db_2$ and its leaves are points. Because $\cos(b_1)db_1\wedge db_2$ vanishes at $b_1=\pi/2$ and $3\pi/2$, the form $\omega_{III}$ does not induce a global symplectic structure on $\left\{a_1=0,a_2=0\right\}$.
\end{minipage}
\hspace{-7ex}\begin{minipage}{0.4\linewidth}
\hspace{10ex}{\bf $\begin{array}{c} \text{ symplectic foliation}\\ \text{ on }\left\{a_1=0,a_2=0\right\}\end{array}$}
\begin{multicols}{2}
\begin{flushright}$~$\vspace{2ex}
$\pi_{III}$\end{flushright}\columnbreak
\includegraphics[scale=.3]{PartitionablePic06.eps} \end{multicols} \end{minipage}\vspace{1ex}
Because $\sin(b_1)$ vanishes at $b_1=0$ and $\pi$, the pair $(\sin(b_1)db_1,\sin(b_1)db_2)$ is not a 2-cosymplectic structure on $\left\{a_1=0,a_2=0\right\}$. Since the foliation on $\omega_{III}$ does not arise from a global structure, such as symplectic or $2$-cosymplectic, on $\left\{a_1=0,a_2=0\right\}$, we will exclude forms such as this from our discussion. \end{Example}
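Away from $D$, the relation $\pi^\sharp = (\omega^\flat)^{-1}$ can be checked pointwise in coordinates. A numerical sketch for $\omega_I$ and $\pi_I$ of Example \ref{motivatingexample}; the sign convention $\Pi\,\Omega = \mathrm{Id}$ for the coefficient matrices is an assumption of this check:

```python
import numpy as np

# Coefficient matrices of omega_I and pi_I in the coordinate frame
# (a1, a2, b1, b2), evaluated at a sample point away from D.
a1, a2 = 0.7, 1.1
s1, s2 = np.sin(a1), np.sin(a2)

Omega = np.array([[0.0, 1/(s1*s2), 0.0, 0.0],
                  [-1/(s1*s2), 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [0.0, 0.0, -1.0, 0.0]])
Pi = np.array([[0.0, -s1*s2, 0.0, 0.0],
               [s1*s2, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, -1.0],
               [0.0, 0.0, 1.0, 0.0]])

# pi_I inverts omega_I: Pi @ Omega is the identity matrix.
print(np.allclose(Pi @ Omega, np.eye(4)))  # True
```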
\begin{definition}\label{kco} A $k$-cosymplectic structure\footnote{The term k-cosymplectic has also been used in classical field theories, with a different meaning, see e.g. \cite{Cappelletti} for details.} on a $k+2\ell$ dimensional manifold $M$ is a family $(\alpha_i,\beta)$ of $k$ closed one forms $\alpha_i\in\Omega^1(M)$ and a closed two form $\beta\in\Omega^2(M)$ such that
\centerline{$\left(\wedge_{i}\alpha_i\right)\wedge\beta^{\ell}\neq 0.$}\end{definition}
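For $\ell = 0$ the condition reduces to pointwise linear independence of the $\alpha_i$, i.e.\ a nonvanishing determinant of their coefficient matrix. A sketch checking the pair $(db_1, db_2)$ of Example \ref{motivatingexample} against the failing pair $(\sin(b_1)db_1, \sin(b_1)db_2)$ at $b_1 = 0$:

```python
import numpy as np

def top_wedge_coeff(alphas):
    """Coefficient of alpha_1 ^ ... ^ alpha_k at a point: the determinant
    of the matrix whose rows are the one-forms' coefficient vectors."""
    return np.linalg.det(np.array(alphas))

# On {a1=0, a2=0} ~ T^2 with coordinates (b1, b2):
# (db1, db2) is 2-cosymplectic -- the wedge never vanishes.
print(top_wedge_coeff([[1.0, 0.0], [0.0, 1.0]]))  # 1.0

# (sin(b1) db1, sin(b1) db2) fails at b1 = 0: the wedge vanishes there.
s = np.sin(0.0)
print(top_wedge_coeff([[s, 0.0], [0.0, s]]))  # 0.0
```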
By restricting our attention to log forms satisfying certain cohomological restrictions, we guarantee the existence of such a structure at the intersection of any subset of $D$.
To any $\log$ symplectic form $\omega$ we can associate a \emph{decomposition} of its cohomology class $[\omega]\in {}^{b}H^2(M)$ in terms of the isomorphism (\ref{eq:logcohom}):
$$(a,\underbrace{b_1,\dots,b_k}_{b_i\in H^{1}(Z_i)}, \underbrace{c_{1,2},\dots,c_{k-1,k}}_{c_{i,j}\in H^{0}(Z_i\cap Z_j)})\in H^2(M)\oplus\bigoplus_{i} H^{1}(Z_i)\oplus\bigoplus_{i\neq j} H^{0}(Z_i\cap Z_j).$$
\begin{definition} Let $\omega$ be a $\log$ symplectic form on a manifold $(M,D)$. The structure $\omega$ is \emph{partitionable} if its $b$-de Rham cohomology class decomposition $$(a,b_1,\dots,b_k,c_{1,2},\dots,c_{k-1,k})$$ satisfies the following conditions.
\begin{enumerate}
\item If $b_{s}\neq 0$, then $c_{i,s}=0$ for all $i$. \vspace{1ex}
\item Consider the inclusions \begin{center}$\xygraph{!{<0cm,0cm>;<1cm, 0cm>:<0cm,1cm>::}
!{(1,1.5)}*+{Z_s\cap Z_t}="b"
!{(0,0)}*+{{Z_s}}="c"
!{(2,0)}*+{{Z_t}}="d"
"b":"c"^{i_s}
"b":"d"_{i_t}}$.\end{center} If $c_{s,t}\neq 0$, then $i^*_sb_{s}=i^*_tb_{t}=0$, and $c_{s,j}=c_{i,t}=0$ for all $j\neq t$, $i\neq s$.
\end{enumerate}\end{definition}
\begin{remark}\label{partitionremark} Every partitionable $\log$ symplectic form $\omega$ determines a partition $\Lambda_D$ of the set $D$. By the definition of partitionable, if $b_j=0$ for $Z_j\in D$, then there must exist exactly one element $Z_{j^\prime}\in D$ such that $c_{j,j^\prime}\neq 0$. The tuple $Z_j,Z_{j^\prime}$ forms a pair which we can label $Z_{x_i},Z_{y_i}$. We call this a pair of type $x$ and type $y$. If $b_j\neq 0$ for $Z_j\in D$, then we will label $Z_j$ in the form $Z_{z_i}$. We call these hypersurfaces of type $z$.
Thus the set $D$ admits a relabeling $$Z_{x_1},Z_{y_1},\dots,Z_{x_k},Z_{y_k},\hspace{3ex}Z_{z_1},\dots,Z_{z_\ell}.$$ Up to switching the labels $x_i$ and $y_i$, and permuting the set $\left\{1,\dots,k\right\}$ and $\left\{1,\dots,\ell\right\}$, this partition is unique.
As we saw from $\omega_I$ in Example \ref{motivatingexample}, the intersection $Z_{x_i}\cap Z_{y_i}$ is a leaf of the induced symplectic foliation. The log symplectic form $\omega_{II}$ showed that $Z_{z_i}\cap Z_{z_j}$, when non-empty, has a codimension 2 foliation. Intuitively, intersections of pairs $Z_{x_i},Z_{y_i}$ will be symplectic leaves. The symplectic leaves drop in dimension on intersections of hypersurfaces of type $Z_{z_j}$. \end{remark}
This partition $\Lambda_D$ of $D$ is vital to our computation and statement of the Poisson cohomology. As in the $b$-symplectic case, the Poisson cohomology of partitionable $\log$ symplectic structures will involve the de Rham cohomology of the $b$-tangent bundle; however, the two will not always be isomorphic.
\begin{theorem}\label{theorem01} Let $(M,\pi)$ be a partitionable $\log$ symplectic structure for $$D=\left\{Z_{x_1},Z_{y_1},\dots,Z_{x_k},Z_{y_k},Z_{z_1},\dots,Z_{z_\ell}\right\}.$$ Let $\mathscr{M}$ denote all collections of sets $I,J,K,L$ satisfying $$I,J,K\subseteq\left\{1,\dots,k\right\}, L\subseteq\left\{1,\dots,\ell\right\}$$ $$\text{ such that } I\neq\emptyset\text{ and } I\cap J=I\cap K =\emptyset.$$
Set $$m=2|I|+|J|+|K|+|L|$$ and let $v_i$ denote $Z_{x_i}\cap Z_{y_i}$.
Then the Poisson cohomology $H_{\pi}^p(M)$ of $(M,\pi)$ is $${}^{b}H^p(M)\oplus\bigoplus_{\mathscr{M}}H^{p-m}(\bigcap\underbrace{Z_{x_i}\cap Z_{y_i}\cap Z_{x_j}\cap Z_{y_k}\cap Z_{z_\ell}}_{i\in I,~j\in J,~ k\in K,~\ell\in L};\bigotimes\underbrace{|N_{v_i}^*Z_{x_i}|^{-1}\otimes|N_{v_i}^*Z_{y_i}|^{-1}}_{i\in I})$$ for $m\leq p$. \end{theorem}
\begin{remark} The factor of ${}^{b}H^p(M)$ appearing in the cohomology is encoding purely topological information about the $b$-tangent bundle over the manifold $M$ and the set $D$. The remaining factors appearing in the cohomology are encoding specific information about the bi-vector $\pi$. \end{remark}
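To illustrate the indexing set $\mathscr{M}$ in the smallest case, take $k=\ell=1$. The conditions $I\neq\emptyset$ and $I\cap J=I\cap K=\emptyset$ force $I=\left\{1\right\}$ and $J=K=\emptyset$, leaving only the choices $L=\emptyset$ (with $m=2$) and $L=\left\{1\right\}$ (with $m=3$). When the relevant intersections are nonempty, the cohomology is therefore $${}^{b}H^p(M)\oplus H^{p-2}(Z_{x_1}\cap Z_{y_1};\mathcal{L})\oplus H^{p-3}(Z_{x_1}\cap Z_{y_1}\cap Z_{z_1};\mathcal{L}),$$ where $\mathcal{L}=|N_{v_1}^*Z_{x_1}|^{-1}\otimes|N_{v_1}^*Z_{y_1}|^{-1}$.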
The proof of Theorem \ref{theorem01} appears in Section 3. In Section 2 we discuss aspects of the geometry of partitionable $\log$ symplectic structures and provide examples.
\vspace{3ex}
{\bf Acknowledgements:} I greatly benefited from attending the 2016 Poisson geometry meeting Gone Fishing held at the University of Colorado at Boulder. Travel support was provided by NSF Grant DMS 1543812 for the Gone Fishing 2016 Conference. I am particularly grateful to Ioan M\u{a}rcut for inspiring this project with his suggestion that I apply my approach for computing Poisson cohomology to other settings. I am grateful to Pierre Albin for carefully reading several versions of this paper. Travel support was provided by Pierre Albin's Simon's Foundation grant \# 317883.
\section{\bf Partitionable log symplectic structures}
In this section we will discuss various features of partitionable $\log $ Poisson structures.
\subsection{Local normal forms and $k$-cosymplectic structures}
\begin{definition} A map $\phi:(M_1, D_1)\to(M_2, D_2)$ is a \emph{$b$-map} if $$\phi^*:{}^{b}\Omega^1(M_2)|_{M_2\setminus D_2}\to{}^{b}\Omega^1(M_1)|_{M_1\setminus D_1}$$ extends to a map $$\phi^*:{}^{b}\Omega^1(M_2)\to{}^{b}\Omega^1(M_1).$$ Given two log symplectic forms $\omega_1$, $\omega_2$ on $(M,D)$, a log-\emph{symplectomorphism} is a $b$-map $\phi:M\to M$ such that $\phi^*\omega_2=\omega_1$.
\end{definition}
As noted in Remark \ref{partitionremark}, given a partitionable $\log$ symplectic form $\omega$ on a manifold $(M,D)$, the cohomological decomposition of $[\omega]$ gives us a partition $\Lambda_D$ of the set $D$ as $Z_{x_1},Z_{y_1},\dots,Z_{x_k},Z_{y_k}, Z_{z_1},\dots,Z_{z_\ell}.$
Given a subset $S$ of a divisor $D$, we call $$X_S=\bigcap_{Z\in S} Z$$ a \emph{maximal intersection} if $X_S$ is non-empty and if $Z \cap X_S=\emptyset$ for all $Z\in D\setminus S$.
We can assign a subpartition $\Lambda_S$ to the subset $S$ according to the decomposition $(a,b_1,\dots,b_k,c_{1,2},\dots,c_{k-1,k})$ of $[\omega]\in {}^{b}H^2(M)|_{X_S}$. In particular, hypersurfaces of type $z$ in $\Lambda_D$ will remain type $z$ in the subpartition $\Lambda_S$. However, if there is a hypersurface $Z_{x_i}$ of type $x$ in $S$ and its type $y$ counterpart $Z_{y_i}$ is not in $S$, then $Z_{x_i}$ will have a type $z$ designation in the subpartition $\Lambda_S$.
We will show that there are hypersurface defining functions, i.e. $x\in\mathcal{C}^\infty(M)$ such that $Z_x=\left\{x=0\right\}$ and $dx(p)\neq 0$ for all $p\in Z_x$, so we can express $\omega$ near $X_S$ as
\begin{equation}\label{eq:normal}\omega_0= \sum_{i=1}^k\dfrac{dx_i}{x_i}\wedge\dfrac{dy_i}{y_i}+\sum_{j=1}^{m}\dfrac{dz_j}{z_j}\wedge\alpha_j+\delta,\end{equation} where $\alpha_j\in\Omega^1(X_S)$ is a closed form representing $b_{z_j}$, and $\delta\in\Omega^2(X_S)$ is a closed form representing $a$.
\begin{proposition}\label{prop01} Let $\omega$ be a partitionable $\log$ symplectic form on a manifold $(M,D)$. Let $S$ be any subset of $D$ such that $\displaystyle{X_S=\bigcap_{Z\in S} Z}$ is a \emph{maximal intersection} and let $\Lambda_S$ be a subpartition of $S$. If $(a,b_1,\dots,b_k,c_{1,2},\dots,c_{k-1,k})$ is a decomposition of $[\omega]\in {}^{b}H^2(M)|_{X_S}$, then there exist \begin{itemize}\item hypersurface defining functions $x_i$, $y_i$, $z_i$ partitioned according to $\Lambda_S$, \item a tubular neighborhood $U\supset X_S$, and \item $\alpha_j\in\Omega^1(X_S)$ a closed form representing $b_{z_j}$ and $\delta\in\Omega^2(X_S)$ a closed form representing $a$, \end{itemize}
such that on $U$ there is a $\log$-symplectomorphism pulling $\omega$ back to (\ref{eq:normal}).
\end{proposition}
\begin{proof}
Let $\omega$ be a partitionable log symplectic form on $(M,D)$. Given a maximal intersection $X_S$ given by $S\subseteq D$, let $(a,b_1,\dots,b_k,c_{1,2},\dots,c_{k-1,k})$ be a decomposition of $[\omega]\in {}^{b}H^2(M)|_{X_S}$. Let $x_i$, $y_i$, $z_i$ be hypersurface defining functions partitioned according to $\Lambda_S$.
By the isomorphism from equation (\ref{eq:logcohom}), in these coordinates $\omega|_{X_S}=$ $$\sum_{i,j}\left(\dfrac{dx_i}{x_i}\wedge\left(a_{ij}\dfrac{dx_j}{x_j}+b_{ij}\dfrac{dy_j}{y_j}+c_{ij}\dfrac{dz_j}{z_j}\right)+\dfrac{dy_i}{y_i}\wedge\left(d_{ij}\dfrac{dy_j}{y_j}+e_{ij}\dfrac{dz_j}{z_j}\right)\right)$$ $$+\sum_{i,j}f_{ij}\dfrac{dz_i}{z_i}\wedge\dfrac{dz_j}{z_j}+\sum_k\left(\dfrac{dx_k}{x_k}\wedge A_k+\dfrac{dy_k}{y_k}\wedge B_k+\dfrac{dz_k}{z_k}\wedge C_k\right)+\delta.$$
From $d\omega=0$, it follows that $a_{ij},b_{ij},c_{ij},d_{ij},e_{ij},f_{ij}$ are all closed $0$-forms and thus are real numbers. By the definition of partitionable, the only non-zero numbers among these are $b_{ii}$. Further, by the definition of partitionable, the one-forms $A_k$ and $B_k$ satisfy $\displaystyle{A_k|_{Z_{x_k}\cap Z_{y_k}}=B_k|_{Z_{x_k}\cap Z_{y_k}}=0}$. Thus $A_k|_{X_S}=B_k|_{X_S}=0$.
Thus under an appropriate relabeling and change of $X_S$ defining functions $$\omega|_{X_S}= \sum_{i=1}^k\dfrac{dx_i}{x_i}\wedge\dfrac{dy_i}{y_i}+\sum_{j=1}^{m}\dfrac{dz_j}{z_j}\wedge\alpha_j+\delta.$$
Now we will proceed by the standard relative Moser argument (See \cite{Cannas} Sec. 7.3 for the smooth setting, and \cite{Scott} Thm 6.4 for the $b$-symplectic version). Pick a tubular neighborhood $U_0$ of $X_S$. Define $\omega_0$ to be $\omega|_{X_S}$ pulled back to $U_0$. Then $$\omega-\omega_0=\sum_{j=1}^m\dfrac{dz_j}{z_j}\wedge(\alpha_j-\widetilde{\alpha}_j)+\delta-\tilde{\delta}.$$
Since the form $\omega-\omega_0$ is closed on $U_0$, and $(\alpha_j-\widetilde{\alpha}_j)|_{X_S}=0$ and $(\delta-\tilde{\delta})|_{X_S}=0$, by the relative Poincar\'{e} Lemma, there exist primitives $\mu_j$ of $\alpha_j-\widetilde{\alpha}_j$ and a primitive $\sigma$ of $\delta-\tilde{\delta}$ such that $\mu_j|_{X_S}=\sigma|_{X_S}=0$. Define $$\mu=\sum_{j=1}^m\dfrac{dz_j}{z_j}\mu_j+\sigma.$$
Then $\omega-\omega_0=d\mu$. Let $\omega_t=(1-t)\omega_0+t\omega$. Then $\dfrac{d\omega_t}{d t}=\omega-\omega_0=d\mu$. Because $\mu$ is a log one form, the vector field defined by $i_{v_t}\omega_t=-\mu$ is a log vector field and its flow fixes the divisor $D$. Further, $v_t=0$ on $X_S$. Thus we can integrate $v_t$ to an isotopy that is the identity on $X_{S}$ and fixes $D$. This isotopy is the desired $\log$-symplectomorphism. \end{proof}
This proposition gives us $k$-cosymplectic structures on every intersection of subsets of $D$:
\begin{proposition}\label{inducedstructure}
Let $\omega$ be a partitionable $\log$ symplectic form on a manifold $(M,D)$. For any set $S\subseteq D$ such that $\displaystyle{\bigcap_{Z\in S}Z}$ is nonempty, let $\displaystyle{X= \bigcap_{Z\in S}Z}$ away from higher order intersections. The form $\omega$ induces an $\ell$-cosymplectic structure on $X$, where $\ell$ is the number of hypersurfaces of type $z$ in $\Lambda_S$.
\end{proposition}
\begin{proof} For any $S\subseteq D$ such that $\displaystyle{\bigcap_{Z\in S}Z}$ is nonempty, let $\displaystyle{X= \bigcap_{Z\in S}Z}$ away from higher order intersections. By equation (\ref{eq:normal}), $$\omega|_X= \sum_{i=1}^k \dfrac{dr_i}{r_i}\wedge\dfrac{ds_i}{s_i}+\sum_{j=1}^{\ell}\dfrac{dt_j}{t_j}\wedge\theta_j+\beta$$ for closed $\theta_j\in\Omega^1(X)$ and closed $\beta\in\Omega^2(X)$.
By the non-degeneracy of $\omega$, $$0\neq \wedge^n\omega|_X=\big(\bigwedge_i\dfrac{dr_i}{r_i}\wedge\dfrac{ds_i}{s_i}\big)\wedge\big(\bigwedge_j \dfrac{dt_j}{t_j}\wedge\theta_j\big)\wedge\beta^{n-k-\ell}.$$ Thus $$\big(\bigwedge_j \theta_j\big)\wedge\beta^{n-k-\ell}\neq 0$$ and $(\theta_j,\beta)$ is an $\ell$-cosymplectic structure on $X$.
\end{proof}
By the standard symplectic linear algebra argument (for instance see \cite{Cannas} Sec. 1.1) we can express $\omega$ at a point $p\in D$ as $$\omega_p=\sum_{i=1}^k \dfrac{dx_i}{x_i}\wedge\dfrac{dy_i}{y_i}+\sum_{j=1}^m\dfrac{dz_j}{z_j}\wedge ds_j+\sum_{k=1}^n dp_k\wedge dq_k.$$ By the proof of Proposition \ref{prop01}, a relative Moser argument gives an analogue of Darboux's theorem for partitionable $\log$ symplectic structures.
\begin{corollary}
Let $\omega$ be a partitionable $\log$ symplectic form on a manifold $(M,D)$. Let $p\in D$ and let $S$ be the subset of $D$ of hypersurfaces containing $p$. Let $\Lambda_S$ be a subpartition of $S$. Then there exist local coordinates centered at $p$ with local hypersurface defining functions $x_i$, $y_i$, $z_i$ partitioned according to $\Lambda_S$ such that $$\omega=\sum_{i=1}^k\dfrac{dx_i}{x_i}\wedge\dfrac{dy_i}{y_i}+\sum_{j=1}^m\dfrac{dz_j}{z_j}\wedge ds_j+\sum_{k=1}^ndp_k\wedge dq_k $$ and the Poisson bi-vector associated to $\omega$ has the form $$\pi=\sum_{i =1}^k x_iy_i\dfrac{\partial}{\partial y_i}\wedge\dfrac{\partial}{\partial x_i}+\sum_{j=1}^m z_j \dfrac{\partial}{\partial s_j}\wedge\dfrac{\partial}{\partial z_j}+\sum_{k=1}^n \dfrac{\partial}{\partial q_k}\wedge\dfrac{\partial}{\partial p_k}. $$ \end{corollary}
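With the sign conventions $\pi^\sharp(\alpha)=i_{\alpha}\pi$ and $\omega^\flat(v)=i_v\omega$, one can check directly on these coordinate expressions that $\pi$ inverts $\omega$: for instance $$\pi^\sharp(dx_i)=-x_iy_i\dfrac{\partial}{\partial y_i}\hspace{2ex}\text{ and }\hspace{2ex}\omega^\flat\left(-x_iy_i\dfrac{\partial}{\partial y_i}\right)=-x_iy_i\left(-\dfrac{dx_i}{x_iy_i}\right)=dx_i,$$ and similarly for the remaining coordinate one forms.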
\begin{remark}\label{kremark}
In general, for a Lie algebroid $\mathcal{A}$, the notion of an $\mathcal{A}$-symplectic structure reduces the existence of `Darboux' coordinates for a variety of structures to standard symplectic linear algebra and a relative Moser argument.
Indeed we will next show that every $k$-cosymplectic manifold $(W,\alpha_j,\beta)$ can sit inside a larger manifold $(M,D)$ such that a log symplectic form on $M$ induces $(\alpha_j,\beta)$ on $W$.
Thus, given a $k$-cosymplectic manifold $(W,\alpha_j,\beta)$, there exist local coordinates
$$(s_1,\dots,s_k,p_1,q_1,\dots,p_n,q_n)$$ such that $$\alpha_j=ds_j\text{ and }\beta=\sum dp_k\wedge dq_k.$$
\end{remark}
\subsection{Examples from products}
In Example 18 of \cite{Guillemin01}, Guillemin, Miranda, and Pires explained how, given any compact $b$-symplectic surface $(M_b,\pi_b)$ and any compact symplectic surface $(M_s,\pi_s)$, the product $(M_b\times M_s,\pi_b + \pi_s)$ is a $b$-Poisson manifold. Similarly, the product of any partitionable log symplectic surfaces will produce a partitionable log symplectic manifold.
For instance, the torus $\mathbb{T}^2\simeq\mathbb{S}^1_\theta\times\mathbb{S}^1_\rho$ is a log symplectic surface with partitionable form $$\omega=\dfrac{d\theta}{\sin(\theta)}\wedge\dfrac{d\rho}{\sin(\rho)}.$$
In general, we have a product structure for partitionable log symplectic manifolds: Given two partitionable log symplectic manifolds $(M_1,D_1,\omega_1)$ and $(M_2,D_2,\omega_2)$, one can show that the product $$(M_1\times M_2,(D_1\times M_2)\cup (D_2\times M_1),\omega_1+\omega_2)$$ is also a partitionable log symplectic manifold. Further, this product respects the existing partitions of $D_1$ and $D_2$: For instance, if $Z\in D_1$ was type $x$, then $Z\times M_2$ is type $x$ in $(D_1\times M_2)\cup (D_2\times M_1)$.
We can also explicitly construct partitionable log symplectic manifolds from a given $k$-cosymplectic structure by taking a product with a torus.
\begin{Example}
Let $(M,\beta)$ be any symplectic manifold. Then $\mathbb{T}^k\times\mathbb{T}^k\times M$ with $\theta_1,\dots,\theta_k,\rho_1,\dots,\rho_k$ angular coordinates on $\mathbb{T}^k\times\mathbb{T}^k$ carries a partitionable log symplectic structure given by the form $$\omega = \sum_{i=1}^k\dfrac{d\theta_i}{\sin(\theta_i)}\wedge \dfrac{d\rho_i}{\sin(\rho_i)}+\beta.$$
Let $(M,\alpha_j,\beta)$ be any $k$-cosymplectic manifold. Then $\mathbb{T}^k\times M$ with $\theta_1,\dots,\theta_k$ the angular coordinates on $\mathbb{T}^k$ carries a partitionable log symplectic structure given by the form $$\omega = \sum_{i=1}^k\dfrac{d\theta_i}{\sin(\theta_i)}\wedge \alpha_i+\beta.$$
\end{Example}
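To see why the second form is nondegenerate as a $\log$ form, note that with $\eta_i=\dfrac{d\theta_i}{\sin(\theta_i)}\wedge \alpha_i$ we have $\eta_i\wedge\eta_i=0$, so if $\dim M=k+2n$, then the only term of top degree surviving in the expansion of $\omega^{k+n}$ is $$\omega^{k+n}=\dfrac{(k+n)!}{n!}\,\eta_1\wedge\dots\wedge\eta_k\wedge\beta^{n}=\pm\dfrac{(k+n)!}{n!}\left(\bigwedge_i\dfrac{d\theta_i}{\sin(\theta_i)}\right)\wedge\left(\bigwedge_i\alpha_i\right)\wedge\beta^{n},$$ which is nonvanishing precisely because $(\alpha_i,\beta)$ is a $k$-cosymplectic structure.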
\subsection{Symplectic foliations} Next we will show that partitionable $\log$ symplectic structures induce regular foliations on the intersections of hypersurfaces in $D$.
By Propositions \ref{prop01} and \ref{inducedstructure} there are exactly two types of behavior for a partitionable $\log$ symplectic form at the nonempty intersection of two hypersurfaces $Z_1,Z_2$ in the divisor $D$, away from $D\setminus \left\{Z_1,Z_2\right\}$.
\begin{multicols}{2}
{\bf Type 1:} In the first instance, we have a form of the type $$\omega=\dfrac{dx}{x}\wedge\dfrac{dy}{y}+\delta$$ where $X=\left\{x=0\right\}$ and $Y=\left\{y=0\right\}$. Then $X\cap Y$ is a symplectic leaf in the foliation induced by $\omega$. This leaf extends the foliation on $X$ away from $X\cap Y$ and extends the foliation on $Y$ away from $X\cap Y$.
\columnbreak \begin{center}
{\bf Type 1 Intersection}
\vspace{.5ex}
\includegraphics[scale=.8]{PartitionablePic01.eps} \end{center}
\end{multicols}
\begin{multicols}{2}
{\bf Type 2:} In the second instance, we have a form of the type $$\omega=\dfrac{dz}{z}\wedge\alpha +\dfrac{dv}{v}\wedge\beta+\delta$$ where $Z=\left\{z=0\right\}$ and $V=\left\{v=0\right\}$. Then $\omega$ induces a codimension 2 symplectic foliation on $Z\cap V$.
\columnbreak \begin{center}
{\bf Type 2 Intersection}
\vspace{.5ex}
\includegraphics[scale=.8]{PartitionablePic02.eps} \end{center}
\end{multicols}
For general intersections, let $I,J,K\subseteq\left\{1,\dots,k\right\}$ and $ L\subseteq\left\{1,\dots,m\right\}$ such that $I\cap J=I\cap K =\emptyset.$ On $$W=\bigcap_{i\in I}\left(Z_{x_i}\cap Z_{y_i}\right)\bigcap_{j\in J}Z_{x_j}\bigcap_{k\in K}Z_{y_k}\bigcap_{\ell\in L}Z_{z_\ell}$$ away from $W\cap (D\setminus \left\{Z_{x_i}, Z_{y_i},Z_{x_j},Z_{y_k},Z_{z_\ell}\right\})$, $\omega$ induces a regular codimension $|J|+|K|+|L|$ symplectic foliation. Further, the foliation is given by the $k$-cosymplectic structure provided in Proposition \ref{inducedstructure}.
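In the local Darboux coordinates above this foliation is transparent: at a point of $Z_{z_1}\cap Z_{z_2}$ away from the rest of $D$, for example, the leaves are the level sets of $(s_1,s_2)$ inside $\left\{z_1=z_2=0\right\}$ and the leafwise symplectic form is $\sum_k dp_k\wedge dq_k$, recovering the codimension $2$ foliation of a Type 2 intersection.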
\section{\bf Proof of Theorem \ref{theorem01}}
Recall, given a Poisson manifold $(M,\pi)$, the Poisson cohomology $H^*_\pi(M)$ is defined as the cohomology groups of the Lichnerowicz complex (see for instance \cite{Dufour}, p. 39). The $k$-th element in the sequence is made up of smooth $k$-multivector fields on $M$, $\mathcal{V}^k(M):=\mathcal{C}^{\infty}(M;\wedge^k TM)$. $$\dots\to\mathcal{V}^{k-1}(M)\xrightarrow{d_{\pi}}\mathcal{V}^{k}(M)\xrightarrow{d_{\pi}}\mathcal{V}^{k+1}(M)\to\dots$$ The differential $$d_{\pi}: \mathcal{V}^k(M)\to\mathcal{V}^{k+1}(M)$$ is defined as $$d_{\pi}=[\pi,\cdot],$$ where $[\cdot,\cdot]$ is the Schouten bracket extending the standard Lie bracket on vector fields $\mathcal{V}^1(M)$.
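In low degrees this complex recovers familiar objects: up to sign conventions, $d_{\pi}f=[\pi,f]$ is the Hamiltonian vector field $X_f$ of a function $f\in\mathcal{V}^0(M)$, and $d_{\pi}X=[\pi,X]$ is $\pm\mathcal{L}_X\pi$ for a vector field $X\in\mathcal{V}^1(M)$. Thus $H^0_{\pi}(M)$ is the space of Casimir functions of $\pi$ and $H^1_{\pi}(M)$ is the space of Poisson vector fields modulo Hamiltonian vector fields.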
Before delving into the details of the proof of Theorem \ref{theorem01}, we will sketch the computation of Poisson cohomology for a partitionable log symplectic form on $\mathbb{T}^4$ to motivate the constructions involved in the proof.
\subsubsection{{\bf Motivating Computation.}} Consider $\mathbb{T}^4$ identified as $\mathbb{T}^2\times\mathbb{T}^2$ with angular coordinates $(x,y)$ and $(z,t)$ respectively. Let $D$ be $$Z_x=\left\{\sin(x)=0\right\}, Z_y=\left\{\sin(y)=0\right\},\text{ and }Z_z=\left\{\sin(z)=0\right\}.$$ We equip this manifold with the log symplectic form $$\omega=\dfrac{dx}{\sin(x)}\wedge\dfrac{dy}{\sin(y)}+\dfrac{dz}{\sin(z)}\wedge dt.$$ This symplectic form is the inverse to the Poisson bi-vector $$\pi=\sin(x)\sin(y)\dfrac{\partial}{\partial y}\wedge\dfrac{\partial}{\partial x}+\sin(z)\dfrac{\partial}{\partial t}\wedge\dfrac{\partial}{\partial z}.$$
The image $\omega^\flat(TM)$ is spanned by $$\dfrac{dx}{\sin(x)\sin(y)},~\dfrac{dy}{\sin(x)\sin(y)},~\dfrac{dz}{\sin(z)},~ \dfrac{dt}{\sin(z)}.$$
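This spanning set can be read off by contracting $\omega$ with the coordinate vector fields, using $\omega^\flat(v)=i_v\omega$: $$\omega^\flat\left(\dfrac{\partial}{\partial x}\right)=\dfrac{dy}{\sin(x)\sin(y)},\hspace{2ex}\omega^\flat\left(\dfrac{\partial}{\partial y}\right)=-\dfrac{dx}{\sin(x)\sin(y)},$$ $$\omega^\flat\left(\dfrac{\partial}{\partial z}\right)=\dfrac{dt}{\sin(z)},\hspace{2ex}\omega^\flat\left(\dfrac{\partial}{\partial t}\right)=-\dfrac{dz}{\sin(z)}.$$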
We can realize this image as the dual of the Lie algebroid, called $\mathcal{R}$, spanned by $$\sin(x)\sin(y)\dfrac{\partial}{\partial x},~ \sin(x)\sin(y)\dfrac{\partial}{\partial y}, ~\sin(z)\dfrac{\partial}{\partial z},~ \sin(z)\dfrac{\partial}{\partial t}.$$ This is a Lie algebroid with anchor map inclusion into $TM$ and with Lie bracket induced by the standard Lie bracket.
The Lichnerowicz complex of $\pi$ is isomorphic to the Lie algebroid de Rham complex of $\mathcal{R}$. We compute the de Rham cohomology of $\mathcal{R}$ in stages, by first computing the de Rham cohomology of a sub Lie algebroid $\mathcal{A}$.
Let $\mathcal{A}$ denote the Lie algebroid spanned by $$\dfrac{\partial}{\partial x},~ \dfrac{\partial}{\partial y},~ \sin(z)\dfrac{\partial}{\partial z},~ \sin(z)\dfrac{\partial}{\partial t}.$$ This vector bundle is a Lie algebroid with anchor map the inclusion into $TM$ and with Lie bracket induced by smoothly extending the standard Lie bracket from $M\setminus Z_z$ to $Z_z$.
Because $\mathcal{A}\subseteq \mathcal{R}$ is a sub Lie algebroid, there is an inclusion of Lie algebroid de Rham complexes $${}^{\mathcal{A}}\Omega^p(\mathbb{T}^4)\to{}^{\mathcal{R}}\Omega^p(\mathbb{T}^4).$$
We will first compute ${}^{\mathcal{A}}H^p(\mathbb{T}^4)$. A degree-p $\mathcal{A}$ form has an expression $$\mu=\dfrac{dz}{\sin(z)}\wedge\dfrac{dt}{\sin(z)}\wedge \cos(z)A+\dfrac{dz}{\sin(z)}\wedge B+\dfrac{dt}{\sin(z)}\wedge C+D$$ where $A\in\Omega^{p-2}(Z_z)$, $B\in\Omega^{p-1}(Z_z)$, $C\in\Omega^{p-1}(Z_z)$, and $ D\in\Omega^p(\mathbb{T}^4)$. Then $d\mu =$ $$\dfrac{dz}{\sin(z)}\wedge\dfrac{dt}{\sin(z)}\wedge \cos(z)dA-\dfrac{dz}{\sin(z)}\wedge dB-\cos(z)\dfrac{dz}{\sin^2(z)}\wedge dt \wedge C-\dfrac{dt}{\sin(z)}\wedge dC+dD.$$ Note that $\cos(z)=\pm 1$ when $\sin(z)=0$. In particular, $\cos(z)=1$ when $z=0$ and $\cos(z)=-1$ when $z=\pi$. Thus $$\ker d=\left\{ C=dA,\hspace{2ex} dB=0, \hspace{2ex} dD = 0\right\}.$$
If $\mu$ is closed, then there is a degree-$(p-1)$ $\mathcal{A}$ form $$\widetilde{\mu}= \dfrac{dz}{\sin(z)}\wedge b+\dfrac{dt}{\sin(z)}\wedge A+\delta$$ such that \begin{equation}\label{eq:01}\mu+d\widetilde{\mu}=\dfrac{dz}{\sin(z)}\wedge (B-db)+(D+d\delta).\end{equation}
Thus $[\mu+d\widetilde{\mu}]\in {}^{\mathcal{A}}H^p(\mathbb{T}^4)$ has a representative as in equation (\ref{eq:01}). This computation also shows that if $\mu=\dfrac{dz}{\sin(z)}\wedge B+D\in d({}^{\mathcal{A}}\Omega^{p-1}(\mathbb{T}^4))$, then $\mu=d\nu$ for some $\nu\in{}^{\mathcal{A}}\Omega^{p-1}(\mathbb{T}^4)$ where $$\nu=\dfrac{dz}{\sin(z)}\wedge\dfrac{dt}{\sin(z)}\wedge\cos(z)\alpha +\dfrac{dz}{\sin(z)}\wedge \beta+\dfrac{dt}{\sin(z)}\wedge\gamma +\delta$$ and $B=d\beta$, $D=d\delta$. Thus $B$ and $D$ are exact. Further, if two forms $\nu_1, \nu_2$ are representatives of the same cohomology class in ${}^{\mathcal{A}}H^p(\mathbb{T}^4)$, this shows that the coefficients of the expression $\nu_1-\nu_2$ must be exact.
Thus we have shown that $${}^{\mathcal{A}}H^p(\mathbb{T}^4)=\frac{\left\{{B\in\Omega^{p-1}(Z_z):dB=0}\right\}}{\left\{{B:B=db,b\in \Omega^{p-2}(Z_z)}\right\}}\bigoplus \frac{\left\{{D\in\Omega^{p}(\mathbb{T}^4):dD=0}\right\}}{\left\{{D:D=d\delta,\delta\in \Omega^{p-1}(\mathbb{T}^4)}\right\}}$$
and $${}^\mathcal{A}H^p(\mathbb{T}^4)\simeq H^p(\mathbb{T}^4)\oplus H^{p-1}(Z_z).$$
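As a dimension check: here $Z_z=\left\{z=0\right\}\cup\left\{z=\pi\right\}$ is a disjoint union of two copies of $\mathbb{T}^3$, so $$\dim {}^\mathcal{A}H^p(\mathbb{T}^4)=\binom{4}{p}+2\binom{3}{p-1},$$ giving dimensions $1,\ 6,\ 12,\ 10,\ 3$ in degrees $p=0,\dots,4$.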
\vspace{2ex}
Now we can compute ${}^{\mathcal{R}}H^p(\mathbb{T}^4)$ using ${}^{\mathcal{A}}H^p(\mathbb{T}^4)$. A degree-p $\mathcal{R}$ form $\mu$ has an expression $$\dfrac{dx\wedge dy }{\sin^2(x)\sin^2(y)}\wedge (A_{00}+A_{10}\cos(y)\sin(x)+A_{01}\cos(x)\sin(y)+A_{20}\cos(y)\sin^2(x))$$ $$+\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (A_{02}\cos(x)\sin^2(y)+A_{11}\sin(x)\sin(y))$$ $$+\dfrac{dx}{\sin(x)\sin(y)}\wedge (B_0+B_1\sin(x))+\dfrac{dy}{\sin(x)\sin(y)}\wedge (C_0+C_1\sin(y))$$ $$+\dfrac{dx}{\sin(x)}\wedge D+\dfrac{dy}{\sin(y)}\wedge E + F$$ where $A_{i,j}\in{}^{\mathcal{A}}\Omega^{p-2}(Z_x\cap Z_y)$, $C_i,D\in{}^{\mathcal{A}}\Omega^{p-1}(Z_x)$, $B_i,E\in{}^{\mathcal{A}}\Omega^{p-1}(Z_y)$, and $ F\in{}^{\mathcal{A}}\Omega^p(\mathbb{T}^4)$. Further, $B_0$ is independent of $x$ and $C_0$ is independent of $y$.
Then $$d\mu =\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (dA_{00}+dA_{10}\cos(y)\sin(x)+dA_{01}\cos(x)\sin(y))$$
$$+\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (dA_{20}\cos(y)\sin^2(x)+ dA_{02}\cos(x)\sin^2(y)+dA_{11}\sin(x)\sin(y))$$ $$+\cos(y)\dfrac{dx}{\sin(x)}\wedge\dfrac{dy}{\sin^2(y)}\wedge (B_0+B_1\sin(x)) -\dfrac{dx}{\sin(x)\sin(y)}\wedge (dB_0+dB_1\sin(x))$$ $$-\cos(x)\dfrac{dx}{\sin^2(x)}\wedge\dfrac{dy}{\sin(y)}\wedge (C_0+C_1\sin(y)) -\dfrac{dy}{\sin(x)\sin(y)}\wedge (dC_0+dC_1\sin(y))$$ $$-\dfrac{dx}{\sin(x)}\wedge dD-\dfrac{dy}{\sin(y)}\wedge dE + dF.$$ Note that $\cos(x)=\pm 1$ when $\sin(x)=0$ and $\cos(y)=\pm 1$ when $\sin(y)=0$. Thus $\ker d$ is determined by the relations
$$dA_{00}=0, \hspace{2ex} B_0=-dA_{10}, \hspace{2ex} B_1=-dA_{20}, \hspace{2ex}
C_0=dA_{01},$$
$$C_1=dA_{02}, \hspace{2ex} dA_{11}=0, \hspace{2ex} dD = 0, \hspace{2ex} dE=0, \hspace{2ex} dF=0$$ even though $B_1$ could depend on $x$ and $C_1$ could depend on $y$ above.
\noindent If $d\mu=0$, then there is a degree-$(p-1)$ $\mathcal{R}$ form $\widetilde{\mu}$ of the form $$\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (a_{00}+a_{11}\sin(x)\sin(y))+\dfrac{dx}{\sin(x)\sin(y)}\wedge (A_{10}+A_{20}\sin(x))$$ $$+\dfrac{dy}{\sin(x)\sin(y)}\wedge (A_{01}+A_{02}\sin(x))+\dfrac{dx}{\sin(x)}\wedge \delta+\dfrac{dy}{\sin(y)}\wedge e + f$$ such that $[\mu-d\widetilde{\mu}]\in {}^{\mathcal{R}}H^p(\mathbb{T}^4)$ has a representative $$\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge ((A_{00}-da_{00})+(A_{11}-da_{11})\sin(x)\sin(y))$$ $$+\dfrac{dx}{\sin(x)}\wedge (D+d\delta)+\dfrac{dy}{\sin(y)}\wedge (E+de) + (F-df).$$
This computation also shows that if $\mu=$ $$\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (A_{00}+A_{11}\sin(x)\sin(y))+\dfrac{dx}{\sin(x)}\wedge D+\dfrac{dy}{\sin(y)}\wedge E + F$$ is in $ d({}^{\mathcal{R}}\Omega^{p-1}(M)),$ then $\mu=d\nu$ for some $\nu\in{}^{\mathcal{R}}\Omega^{p-1}(M)$ where $\nu=$ $$\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (a_{00}+a_{10}\cos(y)\sin(x)+a_{01}\cos(x)\sin(y)+a_{20}\cos(y)\sin^2(x))$$
$$+\dfrac{dx\wedge dy}{\sin^2(x)\sin^2(y)}\wedge (a_{02}\cos(x)\sin^2(y)+a_{11}\sin(x)\sin(y))$$ $$+\dfrac{dx}{\sin(x)\sin(y)}\wedge (b_0+b_1\sin(x))+\dfrac{dy}{\sin(x)\sin(y)}\wedge (c_0+c_1\sin(y))$$ $$+\dfrac{dx}{\sin(x)}\wedge \delta+\dfrac{dy}{\sin(y)}\wedge e + f$$ and $A_{00}=da_{00}$, $A_{11}=da_{11}$, $D=d\delta$, $E=de$, and $F=df$. Thus if two forms $\nu_1, \nu_2$ are representatives of the same cohomology class in ${}^{\mathcal{R}}H^p(\mathbb{T}^4)$, then the coefficients of the expression $\nu_1-\nu_2$ must be exact. Thus, we have shown ${}^\mathcal{R}H^p(\mathbb{T}^4)$ is $$\frac{\left\{{A_{00}\in{}^{\mathcal{A}}\Omega^{p-2}(Z_x\cap Z_y):dA_{00}=0}\right\}}{\left\{{A_{00}=da_{00},a_{00}\in {}^{\mathcal{A}}\Omega^{p-3}(Z_x\cap Z_y) }\right\}}\bigoplus\frac{\left\{{A_{11}\in{}^{\mathcal{A}}\Omega^{p-2}(Z_x\cap Z_y):dA_{11}=0}\right\}}{\left\{{A_{11}=da_{11}, a_{11}\in {}^{\mathcal{A}}\Omega^{p-3}(Z_x\cap Z_y) }\right\}}$$ $$\bigoplus\frac{\left\{{D\in{}^{\mathcal{A}}\Omega^{p-1}(Z_x):dD=0}\right\}}{\left\{{D=d\delta,\delta\in {}^{\mathcal{A}}\Omega^{p-2}(Z_x)}\right\}}\bigoplus \frac{\left\{{E\in{}^{\mathcal{A}}\Omega^{p-1}(Z_y):dE=0}\right\}}{\left\{{E=de,e\in {}^{\mathcal{A}}\Omega^{p-2}(Z_y)}\right\}}\bigoplus $$ $$\frac{\left\{{F\in{}^{\mathcal{A}}\Omega^{p}(\mathbb{T}^4):dF=0}\right\}}{\left\{{F=df,f\in {}^{\mathcal{A}}\Omega^{p-1}(\mathbb{T}^4)}\right\}}.$$
Thus in fixed coordinates $x,y,z,t$, ${}^{\mathcal{R}}H^p(\mathbb{T}^4)$ is computable as
\centerline{$ {}^{\mathcal{A}}H^p(\mathbb{T}^4)\oplus {}^{\mathcal{A}}H^{p-1}(Z_y)\oplus {}^{\mathcal{A}}H^{p-1}(Z_x)\oplus {}^{\mathcal{A}}H^{p-2}(Z_x\cap Z_y)\oplus {}^{\mathcal{A}}H^{p-2}(Z_x\cap Z_y)\simeq$} $$\underbrace{H^p(\mathbb{T}^4)\oplus H^{p-1}(Z_z)}_{{}^{\mathcal{A}}H^p(\mathbb{T}^4)}\oplus \underbrace{H^{p-1}(Z_y)\oplus H^{p-2}(Z_y\cap Z_z)}_{{}^{\mathcal{A}}H^{p-1}(Z_y)} \oplus \underbrace{H^{p-1}(Z_x) \oplus H^{p-2}(Z_x\cap Z_z)}_{{}^{\mathcal{A}}H^{p-1}(Z_x)}$$
$$\oplus \underbrace{H^{p-2}(Z_x\cap Z_y) \oplus H^{p-3}(Z_x\cap Z_y\cap Z_z)}_{{}^{\mathcal{A}}H^{p-2}(Z_x\cap Z_y)}\oplus \underbrace{H^{p-2}(Z_x\cap Z_y)\oplus H^{p-3}(Z_x\cap Z_y\cap Z_z)}_{{}^{\mathcal{A}}H^{p-2}(Z_x\cap Z_y)}.$$
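As a dimension check: each hypersurface here is a disjoint union of two copies of $\mathbb{T}^3$, each pairwise intersection is four copies of $\mathbb{T}^2$, and $Z_x\cap Z_y\cap Z_z$ is eight circles. Collecting the summands above gives $$\dim {}^{\mathcal{R}}H^p(\mathbb{T}^4)=\binom{4}{p}+6\binom{3}{p-1}+16\binom{2}{p-2}+16\binom{1}{p-3},$$ that is, dimensions $1,\ 10,\ 40,\ 70,\ 39$ in degrees $p=0,\dots,4$.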
\vspace{2ex}
\noindent We will now discuss the details necessary to complete the computation for general partitionable log symplectic structures.
\subsection{Good tubular neighborhoods}
Consider a manifold $M$ with a set $D$ of smooth transversely intersecting hypersurfaces. For each point $p\in D$, there is a chart $(U, f)$ of $M$ centered at $p$ such that $f(U\cap D)$ is the intersection of $f(U)$ with a union of coordinate hyperplanes in $\mathbb{R}^n$. \begin{center} \includegraphics[scale=.4]{PartitionablePic03.eps}\includegraphics[scale=.7]{PartitionablePic04.eps}\end{center}
A good tubular neighborhood $\tau=Z\times(-\varepsilon,\varepsilon)$ of $Z\in D$ is a neighborhood that extends charts of the type $U$ above at the intersection $Z\cap (D\setminus Z)$. In our computations below we will always use good tubular neighborhoods. For the existence of such neighborhoods, see for instance section 5 of \cite{Albin}.
\subsection{Constructing the rigged algebroid}
To compute the Poisson cohomology of a partitionable $\log$ symplectic manifold we will use rigged algebroids; see \cite{Lanius} for more details.
\begin{definition}
Given a partitionable $\log$ symplectic manifold $(M,D,\omega)$, the dual rigged bundle ${}^{log}\mathcal{R}^*$ is the extension to $M$ of the image $\omega^\flat(TM)$ away from $D$. The log rigged forms are $${}^{log \mathcal{R}}\Omega^p(M)=\mathcal{C}^\infty(M;\wedge^p({}^{log}\mathcal{R}^*)).$$ \end{definition}
The rigged Lie algebroid is isomorphic to the Poisson Lie algebroid $T^*M$ with anchor map $\pi^\sharp=(\omega^\flat)^{-1}$. Because the $\log$ rigged anchor map $\rho$ is given by inclusion into $TM$, this new setting translates the computation of Poisson cohomology into the familiar language of Lie algebroid de Rham cohomology.
\begin{center}$\xygraph{!{<0cm,0cm>;<1cm, 0cm>:<0cm,1cm>::}
!{(.25,1)}*+{T^*M}="b"
!{(1.75,1)}*+{{{}^{log}\mathcal{R}}}="c"
!{(1,1)}*+{{\simeq}}="f"
!{(1,0)}*+{TM}="d"
"b":"d"_{\pi^{\sharp}}
"c":"d"^{\rho}}$\end{center}
Next, we will identify the Lie algebroid ${}^{log}\mathcal{R}$. For the purposes of computing Poisson cohomology, it will be useful to construct ${}^{log}\mathcal{R}$ through a sequence of rescalings.
\subsubsection{{\bf The Lie algebroid $\mathcal{A}_i$.}} Consider an expression of $\omega$ as in equation (\ref{eq:normal}):
$$\omega= \sum_{i=1}^k\dfrac{dx_i}{x_i}\wedge\dfrac{dy_i}{y_i}+\sum_{j=1}^{m}\dfrac{dz_j}{z_j}\wedge\alpha_j+\delta.$$ We will begin by rescaling $TM$ at $Z_{z_1}$ by the vector bundle $\ker\alpha_1\to Z_{z_1}$. In order to employ Theorem 2.2 from \cite{Lanius}, we must verify that $$[\ker\alpha_1,\ker\alpha_1]\subseteq \ker\alpha_1.$$ Let $X,Y\in\ker\alpha_1$. Consider \begin{equation}\label{eq:kercomp} d\alpha_1(X,Y)=X\alpha_1(Y)-Y\alpha_1(X)-\alpha_1([X,Y]).\end{equation} Because $d\alpha_1=0$ and $X,Y\in\ker\alpha_1$, this reduces to $\alpha_1([X,Y])=0$. Thus $[X,Y]\in\ker\alpha_1$.
Thus by Theorem 2.2 in \cite{Lanius}, there is a Lie algebroid $\mathcal{A}_1$ whose space of sections is $$\left\{u\in\mathcal{C}^\infty(M;TM):u|_{Z_{z_1}}\in\mathcal{C}^\infty(Z_{z_1};\ker\alpha_{1}) \right\}.$$
Note that the cotangent bundle $T^*M$ includes into $\mathcal{A}^*_1$. Thus the one form $\alpha_2\in\Omega^1(M)|_{Z_{z_2}}$ can be lifted to a one form $\widetilde{\alpha}_2=i(\alpha_2)\in{}^{\mathcal{A}_1}\Omega^1(M)|_{Z_{z_2}}$.
\begin{center}$\xymatrix{
\mathcal{A}^*_1|_{Z_{z_2}} & \\
T^*M|_{Z_{z_2}} \ar[u]^{i} & M \ar@{.>}[lu]_{\widetilde{\alpha}_2} \ar[l]_{\alpha_2}}$\end{center}
Note that we can always lift forms in this way, however the lifted form may vanish at $Z_{z_1}\cap Z_{z_2}$ while the original does not. In order to employ Theorem 2.2 from \cite{Lanius}, we must verify that $\ker\widetilde{\alpha}_2$ is a subbundle of $\mathcal{A}_1|_{Z_{z_2}}$.
Because $\alpha_2$ is a closed one-form in a $k$-cosymplectic structure at $Z_{z_2}$, $\ker\alpha_2$ is a subbundle of $TM|_{Z_{z_2}}$. Away from $Z_{z_1}\cap Z_{z_2}$, the inclusion $i$ gives us an isomorphism $\mathcal{A}^*_1|_{Z_{z_2}}\simeq T^*M|_{Z_{z_2}}$ and it is clear that $\ker\widetilde{\alpha}_2$ is a subbundle of $\mathcal{A}_1|_{Z_{z_2}}$.
Let $p\in Z_{z_1}\cap Z_{z_2}$. Then, as described in Remark \ref{kremark}, there exist local coordinates $z_1,s_1,z_2,s_2,p_1,\dots,p_n$ of $M$ at $Z_{z_1}\cap Z_{z_2}$ such that $\alpha_1=ds_1$ and $\alpha_2=ds_2$.
Note that $$T^*_pM\text{ is spanned by }dz_1,ds_1,dz_2,ds_2,dp_1,\dots, dp_n$$
and $$\mathcal{A}_1^*|_p\text{ is spanned by }\dfrac{dz_1}{z_1},\dfrac{ds_1}{z_1},dz_2,ds_2,dp_1,\dots, dp_n.$$
Note that the support of $\alpha_2$ is the image of the support of $\widetilde{\alpha}_2$ under the anchor map of $\mathcal{A}_1$. Thus $\operatorname{rank}(\ker\alpha_2)=\operatorname{rank}(\ker\widetilde{\alpha}_2)$ and $\ker\widetilde{\alpha}_2$ is a subbundle of $\mathcal{A}_1|_{Z_{z_2}}$.
We will form $\mathcal{A}_2$ by rescaling $\mathcal{A}_1$ by $\ker\widetilde{\alpha}_2$ at $Z_2$. By the computation analogous to equation (\ref{eq:kercomp}) with respect to $\widetilde{\alpha}_2$, there exists a Lie algebroid $\mathcal{A}_2$ whose space of sections is $$\left\{u\in\mathcal{C}^\infty(M;\mathcal{A}_1):u|_{Z_{z_2}}\in\mathcal{C}^\infty(Z_{z_2};\ker\widetilde{\alpha}_2) \right\}.$$
To form $\mathcal{A}_j$ we will rescale $\mathcal{A}_{j-1}$ at $Z_{z_{j}}$. As above, we lift the one form $\alpha_j\in\Omega^1(Z_{z_j})$ to the one form $\widetilde{\alpha}_j=i(\alpha_j)\in{}^{\mathcal{A}_{j-1}}\Omega^1(M)|_{Z_{z_j}}$. Analogous to the argument above, one can verify in local coordinates that $\ker\widetilde{\alpha}_j$ is a subbundle of $\mathcal{A}_{j-1}|_{Z_{z_j}}$. By computation (\ref{eq:kercomp}), there exists a Lie algebroid $\mathcal{A}_j$ whose space of sections is $$\left\{u\in\mathcal{C}^\infty(M;\mathcal{A}_{j-1}):u|_{Z_{z_j}}\in\mathcal{C}^\infty(Z_{z_j};\ker\widetilde{\alpha}_{j}) \right\}.$$
\subsubsection{{\bf The Lie algebroid $\mathcal{B}_i$.}}
Next, we rescale $\mathcal{A}_m$ at $Z_{x_i}$ and $Z_{y_i}$.
As previously described, we can lift the one form $dx_1 \in\Omega^1(M)|_{Z_{x_1}}$ to the one form $\widetilde{dx}_1=i(dx_1)\in{}^{\mathcal{A}_{m}}\Omega^1(M)|_{Z_{x_1}}$ and we can lift $dy_1\in\Omega^1(M)|_{Z_{y_1}}$ to the one form $\widetilde{dy}_1=i(dy_1)\in{}^{\mathcal{A}_{m}}\Omega^1(M)|_{Z_{y_1}}$. The $\mathcal{A}_m$-one form $\widetilde{dx}_1$ is non-zero at $Z_{x_1}$ and the $\mathcal{A}_m$-one form $\widetilde{dy}_1$ is non-zero at $Z_{y_1}$. Further, by (\ref{eq:kercomp}) above, $[\ker \widetilde{dx}_1,\ker \widetilde{dx}_1]\subseteq \ker \widetilde{dx}_1$ and $[\ker \widetilde{dy}_1,\ker \widetilde{dy}_1]\subseteq \ker \widetilde{dy}_1$. Additionally, since we are working in a good tubular neighborhood, $$[\partial_{x_1},\partial_{y_1}]=0.$$ Thus $$[\ker \widetilde{dx}_1\cap\ker \widetilde{dy}_1,\ker \widetilde{dx}_1\cap\ker \widetilde{dy}_1]\subseteq \ker \widetilde{dx}_1\cap\ker \widetilde{dy}_1$$ and by Theorem 2.2 in \cite{Lanius}, there exists a Lie algebroid $\mathcal{B}_1$ whose space of sections is $$\left\{u\in\mathcal{C}^\infty(M;\mathcal{A}_m)\bigg|\begin{array}{c} u|_{Z_{x_1}}\in\mathcal{C}^\infty({Z_{x_1}}, \ker\widetilde{dx}_1)\\ u|_{Z_{y_1}}\in\mathcal{C}^\infty({Z_{y_1}},\ker\widetilde{dy}_1)\end{array} \right\}.$$
We iteratively form $\mathcal{B}_j$ by rescaling $\mathcal{B}_{j-1}$. First, we lift the one form $dx_j \in\Omega^1(M)|_{Z_{x_j}}$ to the one form $\widetilde{dx}_j=i(dx_j)\in{}^{\mathcal{B}_{j-1}}\Omega^1(M)|_{Z_{x_j}}$ and we lift $dy_j\in\Omega^1(M)|_{Z_{y_j}}$ to the one form $\widetilde{dy}_j=i(dy_j)\in{}^{\mathcal{B}_{j-1}}\Omega^1(M)|_{Z_{y_j}}$. Similar to above, the algebroid $\mathcal{B}_j$ is the vector bundle whose space of sections is $$\left\{u\in\mathcal{C}^\infty(M;\mathcal{B}_{j-1})\bigg|\begin{array}{c} u|_{Z_{x_j}}\in\mathcal{C}^\infty({Z_{x_j}}, \ker\widetilde{dx}_j)\\ u|_{Z_{y_j}}\in\mathcal{C}^\infty({Z_{y_j}},\ker\widetilde{dy}_j)\end{array} \right\}.$$
By using our local expression for a partitionable log symplectic form $\omega$, one can check that $\mathcal{B}_k={}^{log}\mathcal{R}$ as a vector bundle and, by the continuity of the standard Lie bracket off of $D$, these are in fact isomorphic as Lie algebroids. If $\displaystyle{W=\bigcap_{Z\in D}Z}$ is non-empty, then for all $p\in W$ there exist local coordinates
\centerline{$(x_1,y_1,\dots,x_k,y_k,\hspace{.5ex}z_1,v_1,\dots,z_\ell,v_\ell,\hspace{.5ex}p_1,q_1\dots,p_m,q_m)$} \noindent such that the sections of ${}^{\log}\mathcal{R}$ are smooth linear combinations of
$$x_1y_1\dfrac{\partial}{\partial x_1},x_1y_1\dfrac{\partial}{\partial y_1},\dots,x_ky_k\dfrac{\partial}{\partial x_k},x_ky_k\dfrac{\partial}{\partial y_k},$$
$$z_1\dfrac{\partial}{\partial z_1},z_1\dfrac{\partial}{ \partial v_1},\dots,z_\ell\dfrac{\partial}{\partial z_\ell},z_\ell\dfrac{\partial}{ \partial v_\ell},\text{ and }\dfrac{\partial}{\partial p_1},\dfrac{\partial}{\partial q_1},\dots,\dfrac{\partial}{\partial p_m},\dfrac{\partial}{\partial q_m}.$$
We can locally identify ${}^{log}\mathcal{R}^*$ as the span of $$\dfrac{dx_i}{x_iy_i},\dfrac{dy_i}{x_iy_i}, \dfrac{dz_j}{z_j},\dfrac{\alpha_j}{z_j},dp_n,dq_n.$$
The \emph{$\log$ rigged de-Rham forms} are $${}^{\log\mathcal{R}}\Omega^p(M)= \mathcal{C}^\infty(M; \wedge^p ({}^{\log} \mathcal{R}^*)),$$ smooth sections of the $p$-th exterior power of ${}^{\log}\mathcal{R}^*$. This complex has exterior derivative $d$ given by extending the standard smooth differential on $M\setminus D$ to $M$.
\subsection{Computing the de Rham cohomology of ${}^{\log} \mathcal{R}$ }
\begin{lemma}The Poisson cohomology of a partitionable $\log$ Poisson manifold $(M,D,\pi)$ is isomorphic to the de Rham cohomology $^{log\mathcal{R}}H^*(M)$ of the $\log$ rigged algebroid ${}^{log}\mathcal{R}$.
\end{lemma}
The details of the proof of this lemma can be found in Section 5 of \cite{Lanius}. Note that we have inclusions of de Rham complexes: $${}^{\mathcal{A}_1}\Omega^*(M)\to \dots\to{}^{\mathcal{A}_m}\Omega^*(M)\to{}^{\mathcal{B}_1}\Omega^*(M)\to\dots\to{}^{\mathcal{B}_k}\Omega^*(M)={}^{log\mathcal{R}}\Omega^*(M).$$
\begin{lemma}\label{Alemma} Let $\widetilde{D}$ be the subset of $D$ consisting of hypersurfaces labeled $Z_{z_i}$. The Lie algebroid cohomology of $\mathcal{A}_m$ is isomorphic to the de Rham cohomology of the log $\widetilde{D}$ tangent bundle. That is, $${}^{\mathcal{A}_m}H^p(M)\simeq H^p(M)\bigoplus_{\tau\in\mathscr{T}}H^{p-|\tau|}\big(\bigcap_{j\in \tau}Z_{z_j}\big)$$ where $\mathscr{T}$ denotes all of the nonempty subsets of $\left\{1,\dots,|\widetilde{D}|\right\}$.
\end{lemma}
\begin{proof}
The bundle map $i: \mathcal{A}_{i}\to \mathcal{A}_{i-1}$ constructed in Theorem 2.2 of \cite{Lanius} is an inclusion of Lie algebroids and hence fits into a short exact sequence of complexes \begin{center} $0\to {}^{\mathcal{A}_{i-1}}\Omega^p(M)\to{}^{\mathcal{A}_i}\Omega^p(M)\to\mathscr{C}^p\to 0$ \end{center} where $$\mathscr{C}^p={}^{\mathcal{A}_{i}}\Omega^p(M)/{}^{\mathcal{A}_{i-1}}\Omega^p(M).$$ The differential $^{\mathscr{C}}d$ on $\mathscr{C}^*$ is induced by the differential $^{\mathcal{A}_i}d$ on ${}^{\mathcal{A}_i}\Omega^{p}(M)$: if $P$ is the projection ${}^{\mathcal{A}_{i}}\Omega^p(M)\to{}^{\mathcal{A}_{i}}\Omega^p(M)/{}^{\mathcal{A}_{i-1}}\Omega^p(M),$ then $^{\mathscr{C}}d(\eta)=P({}^{\mathcal{A}_i}d(\theta))$ where $\theta\in{}^{\mathcal{A}_i}\Omega^{p}(M)$ is any form such that $P(\theta)=\eta$. Hence $({}^\mathscr{C}d)^2=0$ and $(\mathscr{C}^*,{}^\mathscr{C}d)$ is in fact a complex.
Given a good tubular neighborhood $\tau=Z_{z_i}\times(-\varepsilon,\varepsilon)_{z_i}$ of $M$ near $Z_{z_i}$, note that $z_i$ defines a trivialization $t_{z_i}:N^*Z_{z_i}\to\mathbb{R}$ of $N^*Z_{z_i}$.
We can write a degree-$p$ $\mathcal{A}_i$ form $\mu\in {}^{\mathcal{A}_i}\Omega^p(M)$ as $$\mu = \theta+ \dfrac{dz_i\wedge\alpha_i}{z_i^2}\wedge A + \dfrac{dz_i}{z_i}\wedge B +\dfrac{\alpha_i}{z_i}\wedge C$$ where $\theta\in {}^{\mathcal{A}_{i-1}}\Omega^p(M)$ and $A,B,C\in {}^{\mathcal{A}_{i-1}}\Omega^*(Z_{z_i})\simeq {}^{\mathcal{A}_{i-1}}\Omega^*(Z_{z_i};|N^*Z_{z_i}|^{-1})\text{ by }t_{{z_i}^*}.$
We write $\mathscr{R}(\mu)=\theta$ and $\mathscr{S}(\mu)=\mu-\mathscr{R}(\mu)$ for `regular' and `singular' parts. It is easy to see that $\mathscr{R}(d\mu)=d(\mathscr{R}(\mu))$ and $\mathscr{S}(d\mu)=d(\mathscr{S}(\mu))$. Thus the trivialization $\tau$ induces a splitting ${}^{\mathcal{A}_i}\Omega^*(M)={}^{\mathcal{A}_{i-1}}\Omega^*(M)\oplus\mathscr{C}^*$ as complexes. As a consequence ${}^{\mathcal{A}_i}H^p(M)={}^{\mathcal{A}_{i-1}}H^p(M)\oplus H^p(\mathscr{C}^{*})$ and we are left to compute the cohomology of the quotient complex.
After identifying $\mathscr{C}^p=\left\{\mu\in{}^{\mathcal{A}_i}\Omega^p(M):\theta=0\right\}$, the differential is given by
$$d\mu= \dfrac{dz_i\wedge\alpha_i}{z_i^2}\wedge (dA-C) - \dfrac{dz_i}{z_i}\wedge dB -\dfrac{\alpha_i}{z_i}\wedge dC.$$
Thus $\ker(d:\mathscr{C}^{p}\to\mathscr{C}^{p+1})=\left\{C=dA,\hspace{2ex} dB=0\right\}$. If $d\mu=0$, then there is $$\tilde{\mu}=-\dfrac{\alpha_i}{z_i}\wedge A-\dfrac{dz_i}{z_i}\wedge b\in\mathscr{C}^{p-1}$$ such that $d\left(-\dfrac{\alpha_i}{z_i}\wedge A-\dfrac{dz_i}{z_i}\wedge b\right)=\dfrac{dz_i\wedge\alpha_i}{z_i^2}\wedge A +\dfrac{\alpha_i}{z_i}\wedge dA+\dfrac{dz_i}{z_i}\wedge db$. Then $[\mu-d\tilde{\mu}]\in H^p(\mathscr{C})$ has representative $$\dfrac{dz_i}{z_i}\wedge(B-db).$$
This computation also shows that if $\mu=\dfrac{dz_i}{z_i}\wedge B\in d(\mathscr{C}^{p-1})$, then $\mu=d\nu$ for some $\nu\in\mathscr{C}^{p-1}$ where $$\nu=\dfrac{dz_i}{z_i}\wedge\dfrac{\alpha_i}{z_i}\wedge a +\dfrac{dz_i}{z_i}\wedge b +\dfrac{\alpha_i}{z_i}\wedge c $$ and $B=d b$. Thus $B$ is exact. Further, if two forms $\nu_1, \nu_2$ are representatives of the same cohomology class in $H^p(\mathscr{C})$, this shows that the coefficients of the expression $\nu_1-\nu_2$ must be exact.
Let us consider the effect of changing the $Z_{z_i}$-defining function. By the change of $Z_{z_i}$ defining function computation found in Example 2.11 of \cite{Lanius}, the cohomology class $[B-db]$ is unambiguous despite a representative being expressed
using a particular $Z_{z_i}$ defining function.
Thus $\displaystyle{H^p(\mathscr{C})= \frac{\left\{{B\in{}^{\mathcal{A}_{i-1}}\Omega^{p-1}(Z_{z_i}):dB=0}\right\}}{\left\{{B=db,\ b\in {}^{\mathcal{A}_{i-1}}\Omega^{p-2}(Z_{z_i})}\right\}}}$
and $${}^{\mathcal{A}_i}H^p(M)\simeq{}^{\mathcal{A}_{i-1}}H^p(M)\oplus {}^{\mathcal{A}_{i-1}}H^{p-1}(Z_{z_i}).$$
Since ${}^{\mathcal{A}_1}H^p(M)\simeq H^p(M)\oplus H^{p-1}(Z_{z_1})$, we have that $${}^{\mathcal{A}_m}H^p(M)\simeq H^p(M)\bigoplus_{\tau\in\mathscr{T}}H^{p-|\tau|}\big(\bigcap_{j\in \tau}Z_{z_j}\big)$$ where $\mathscr{T}$ denotes all of the nonempty subsets of $\left\{1,\dots,|\widetilde{D}|\right\}$.
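For instance, in the case $m=2$ the recursion unwinds (spelling out the general formula as a check) to $${}^{\mathcal{A}_2}H^p(M)\simeq H^p(M)\oplus H^{p-1}(Z_{z_1})\oplus H^{p-1}(Z_{z_2})\oplus H^{p-2}(Z_{z_1}\cap Z_{z_2}),$$ the last three summands corresponding to the subsets $\tau=\{1\},\{2\},\{1,2\}$ in $\mathscr{T}$.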
\end{proof}
\begin{lemma}The Lie algebroid cohomology of $\mathcal{B}_k$ is isomorphic to $${}^{b}H^p(M)\oplus\bigoplus_{\mathscr{M}}H^{p-m}(\bigcap\underbrace{Z_{x_i}\cap Z_{y_i}\cap Z_{x_j}\cap Z_{y_k}\cap Z_{z_\ell}}_{i\in I,~j\in J,~ k\in K,~\ell\in L};\otimes\underbrace{|N_{v_i}^*Z_{x_i}|^{-1}\otimes|N_{v_i}^*Z_{y_i}|^{-1}}_{i\in I})$$ where $\mathscr{M}$ denotes all collections of sets $I,J,K,L$ satisfying $$I,J,K\subseteq\left\{1,\dots,k\right\}, L\subseteq\left\{1,\dots,n\right\}\text{ such that } I\neq\emptyset,\text{ and } I\cap J=I\cap K =\emptyset$$ with $m:=2|I|+|J|+|L|+|K|$ and $v_i=Z_{x_i}\cap Z_{y_i}$. \end{lemma}
\begin{proof}
We set $\mathcal{B}_0=\mathcal{A}_m$. The bundle map $i: \mathcal{B}_{i}\to \mathcal{B}_{i-1}$ constructed in Theorem 2.2 of \cite{Lanius} is an inclusion of Lie algebroids and hence fits into a short exact sequence of complexes
$$0\to{}^{\mathcal{B}_{i-1}}\Omega^p(M)\to{}^{\mathcal{B}_i}\Omega^p(M)\to\mathscr{C}^p\to 0$$
where $$\mathscr{C}^p={}^{\mathcal{B}_{i}}\Omega^p(M)/{}^{\mathcal{B}_{i-1}}\Omega^p(M).$$ The differential $^{\mathscr{C}}d$ on $\mathscr{C}^*$ is induced by the differential $^{\mathcal{B}_i}d$ on ${}^{\mathcal{B}_i}\Omega^{p}(M)$: if $P$ is the projection ${}^{\mathcal{B}_{i}}\Omega^p(M)\to{}^{\mathcal{B}_{i}}\Omega^p(M)/{}^{\mathcal{B}_{i-1}}\Omega^p(M),$ then $^{\mathscr{C}}d(\eta)=P({}^{\mathcal{B}_i}d(\theta))$ where $\theta\in{}^{\mathcal{B}_i}\Omega^{p}(M)$ is any form such that $P(\theta)=\eta$. Hence $({}^\mathscr{C}d)^2=0$ and $(\mathscr{C}^*,{}^\mathscr{C}d)$ is in fact a complex.
Given good tubular neighborhoods $\tau_x=Z_{x_i}\times(-\varepsilon,\varepsilon)_{x_i}$ of $M$ near $Z_{x_i}$ and $\tau_y=Z_{y_i}\times(-\varepsilon,\varepsilon)_{y_i}$ of $M$ near $Z_{y_i}$, note that $x_i$ defines a trivialization $t_{x_i}:N^*Z_{x_i}\to\mathbb{R}$ of $N^*Z_{x_i}$ and $y_i$ defines a trivialization $t_{y_i}:N^*Z_{y_i}\to\mathbb{R}$ of $N^*Z_{y_i}$.
We can write a degree-$p$ $\mathcal{B}_i$ form $\mu\in {}^{\mathcal{B}_i}\Omega^p(M)$ as $$\mu =\theta+\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge (A_{00}+A_{10}x_i+A_{01}y_i+A_{20}x_i^2+A_{02}y_i^2+A_{11}x_iy_i)$$ $$+\dfrac{dx_i}{x_iy_i}\wedge (B_0+B_1x_i)+\dfrac{dy_i}{x_iy_i}\wedge (C_0+C_1y_i)+\dfrac{dx_i}{x_i}\wedge D+\dfrac{dy_i}{y_i}\wedge E$$ where $\theta\in {}^{\mathcal{B}_{i-1}}\Omega^p(M)$, $$A_{k,l}\in {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{x_i}\cap Z_{y_i})\simeq {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{x_i}\cap Z_{y_i};|N_{v_i}^*Z_{x_i}|^{-1}\otimes |N_{v_i}^*Z_{y_i}|^{-1} )$$ by $t_{{x_i}^*}$, $t_{{y_i}^*}$ where $v_i=Z_{x_i}\cap Z_{y_i}$, $$C_i,D\in {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{x_i})\simeq {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{x_i};|N^*Z_{x_i}|^{-1})\text{ by }t_{{y_i}^*}, \text{ and }$$ $$B_i,E\in {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{y_i})\simeq {}^{\mathcal{B}_{i-1}}\Omega^*(Z_{y_i};|N^*Z_{y_i}|^{-1})\text{ by }t_{{x_i}^*}.$$ Further, $B_0$ is independent of $x_i$ and $C_0$ is independent of $y_i$.
We write $\mathscr{R}(\mu)=\theta$ and $\mathscr{S}(\mu)=\mu-\mathscr{R}(\mu)$ for `regular' and `singular' parts. It is easy to see that $\mathscr{R}(d\mu)=d(\mathscr{R}(\mu))$ and $\mathscr{S}(d\mu)=d(\mathscr{S}(\mu))$. Thus the trivializations $\tau_{x_i},\tau_{y_i}$ induce a splitting ${}^{\mathcal{B}_i}\Omega^*(M)={}^{\mathcal{B}_{i-1}}\Omega^*(M)\oplus\mathscr{C}^*$ as complexes. As a consequence ${}^{\mathcal{B}_i}H^p(M)={}^{\mathcal{B}_{i-1}}H^p(M)\oplus H^p(\mathscr{C}^{*})$ and we are left to compute the cohomology of the quotient complex.
After identifying $\mathscr{C}^p=\left\{\mu\in{}^{\mathcal{B}_i}\Omega^p(M):\theta=0\right\}$, the differential is given by $d\mu =$ $$\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge (dA_{00}+(dA_{10}+B_0)x_i+(dA_{01}-C_0)y_i+(dA_{20}+B_1)x_i^2+(dA_{02}-C_1)y_i^2)$$
$$+\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge dA_{11}x_iy_i-\dfrac{dx_i}{x_iy_i}\wedge dB -\dfrac{dy_i}{x_iy_i}\wedge dC-\dfrac{dx_i}{x_i}\wedge dD-\dfrac{dy_i}{y_i}\wedge dE.$$ Thus $\ker(d:\mathscr{C}^p\to\mathscr{C}^{p+1})$ can be identified with the relations $$dA_{00}=0,\hspace{2ex} B_0=-dA_{10}, \hspace{2ex} B_1=-dA_{20}, \hspace{2ex} C_0=dA_{01},$$
$$C_1=dA_{02}, \hspace{2ex} dA_{11}=0, \hspace{2ex} dD=0, \hspace{2ex} dE=0$$ even though $B_1$ could depend on $x_i$ and $C_1$ could depend on $y_i$ above.
If $d\mu=0$, then there is an element $\tilde{\mu}$ in $\mathscr{C}^{p-1}$, $$\tilde{\mu}=-\dfrac{dx_i}{x_iy_i}\wedge (A_{10}+A_{20}x_i)+\dfrac{dy_i}{x_iy_i}\wedge (A_{01}+A_{02}y_i)+\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}(a_{00}+a_{11}x_iy_i)$$ $$+\dfrac{dx_i}{x_i}\wedge\delta+\dfrac{dy_i}{y_i}\wedge e$$ such that $\mu-d\tilde{\mu}$ equals $$\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge \bigg((A_{00}-da_{00})+(A_{11}-da_{11})x_iy_i\bigg)+\dfrac{dx_i}{x_i}\wedge(D+d\delta)+\dfrac{dy_i}{y_i}\wedge(E+de).$$
This computation also shows that if $$\mu=\dfrac{dx_i\wedge dy_i}{x^2_iy^2_i}\wedge (A_{00}+A_{11}x_iy_i)+\dfrac{dx_i}{x_i}\wedge D+\dfrac{dy_i}{y_i}\wedge E$$ is in $ d(\mathscr{C}^{p-1}),$ then $\mu=d\nu$ for some $\nu\in\mathscr{C}^{p-1}$ where $$\nu=\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge (a_{00}+a_{10}x_i+a_{01}y_i+a_{20}x_i^2+a_{02}y_i^2+a_{11}x_iy_i)$$
$$ +\dfrac{dx_i}{x_iy_i}\wedge (b_0+b_1x_i)+\dfrac{dy_i}{x_iy_i}\wedge (c_0+c_1y_i)+\dfrac{dx_i}{x_i}\wedge \delta+\dfrac{dy_i}{y_i}\wedge e$$ and $A_{00}=da_{00}$, $A_{11}=da_{11}$, $D=d\delta$, and $E=de$. Thus if two forms $\nu_1, \nu_2$ are representatives of the same cohomology class in $H^p(\mathscr{C})$, then the coefficients of the expression $\nu_1-\nu_2$ must be exact.
Note that $\dfrac{dx_i\wedge dy_i}{x_iy_i}\wedge (A_{11}-da_{11})+\dfrac{dx_i}{x_i}\wedge(D+d\delta)+\dfrac{dy_i}{y_i}\wedge(E+de)$ is a $\log \widetilde{D}_i$ form for $\widetilde{D}_i=\left\{Z_{z_1},\dots,Z_{z_\ell},Z_{x_1},Z_{y_1},\dots,Z_{x_j},Z_{y_j}\right\}$. Further, ${}^{\mathcal{B}_i}\Omega^p(M)$ splits as ${}^{\log \widetilde{D}_i}\Omega^p(M)\oplus\mathscr{D}^p$ for the appropriate quotient complex $\mathscr{D}^p$. Thus we know that the representatives $ (A_{11}-da_{11}), (D+d\delta)$, and $(E+de)$ are invariant under change of $Z_{x_i}$ and $Z_{y_i}$ defining function.
Thus we are left to compute what happens to $$\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge (A_{00}-da_{00})$$ under change of $Z_{x_i}$ and $Z_{y_i}$ defining function. By this computation, which occurs at the conclusion of the proof of Theorem 2.15 in \cite{Lanius}, $$\dfrac{dx_i\wedge dy_i}{x_i^2y_i^2}\wedge (A_{00}-da_{00})$$ can be identified as an element of $${}^{\mathcal{B}_{i-1}} H^{p-2}(Z_{x_i}\cap Z_{y_i})\simeq {}^{\mathcal{B}_{i-1}} H^{p-2}(Z_{x_i}\cap Z_{y_i};|N_{v_i}^*Z_{x_i}|^{-1}\otimes |N_{v_i}^*Z_{y_i}|^{-1})$$ trivialized by $t_{{x_i}^*}$ and $t_{{y_i}^*}$, and where $v_i=Z_{x_i}\cap Z_{y_i}$. Thus $${}^{\mathcal{B}_{i}}H^p(M)\simeq{}^{\mathcal{B}_{i-1}} H^p(M)\oplus{}^{\mathcal{B}_{i-1}} H^{p-1}(Z_{x_i})\oplus{}^{\mathcal{B}_{i-1}} H^{p-1}(Z_{y_i})\oplus{}^{\mathcal{B}_{i-1}} H^{p-2}(Z_{x_i}\cap Z_{y_i})$$ $$\oplus {}^{\mathcal{B}_{i-1}} H^{p-2}(Z_{x_i}\cap Z_{y_i};|N^*Z_{x_i}|^{-1}\otimes |N^*Z_{y_i}|^{-1}).$$
Since $\mathcal{A}_m=\mathcal{B}_0$, by using Lemma \ref{Alemma} we can conclude that the cohomology ${}^{\mathcal{B}_k}H^p(M)$ is
$${}^{b}H^p(M)\oplus\bigoplus_{\mathscr{M}}H^{p-m}(\bigcap\underbrace{Z_{x_i}\cap Z_{y_i}\cap Z_{x_j}\cap Z_{y_k}\cap Z_{z_\ell}}_{i\in I,~j\in J,~ k\in K,~\ell\in L};\otimes\underbrace{|N_{v_i}^*Z_{x_i}|^{-1}\otimes|N_{v_i}^*Z_{y_i}|^{-1}}_{i\in I})$$ where $\mathscr{M}$ denotes all collections of sets $I,J,K,L$ satisfying $$I,J,K\subseteq\left\{1,\dots,k\right\}, L\subseteq\left\{1,\dots,n\right\}\text{ such that } I\neq\emptyset,\text{ and } I\cap J=I\cap K =\emptyset$$ with $m:=2|I|+|J|+|L|+|K|$ and $v_i$ denotes $Z_{x_i}\cap Z_{y_i}$. \end{proof}
Since $\mathcal{B}_k={}^{\log}\mathcal{R}$, we have reached the conclusion of Theorem \ref{theorem01}.
\section{Introduction}
Nowadays, consumers demand more functionality, lower power consumption and compact systems. Memory is an important part of many electronic gadgets. The major concern for memories is soft errors, which are caused by radiation \cite{Dixit2011}, \cite{Ibe2010}. These soft errors corrupt the stored digital data, and multiple bit upsets (MBUs) may occur. Hence, to make these systems more reliable, errors must be detected and corrected. Several error detecting and correcting codes are already available. Many adjacent error correcting codes \cite{Hamming1950}, \cite{Hsiao1970}, \cite{Dutta2007} and CA-based error detecting and correcting codes \cite{Bhaumik2010}, \cite{Samanta2015}, \cite{Samanta2018} have been introduced to detect and correct adjacent errors in communication and storage systems. Alternatively, Bose-Chaudhuri-Hocquenghem (BCH) codes \cite{Zhang2018} and Reed-Solomon (RS) codes \cite{Rev2015}, \cite{Samanta2017} can protect against MBUs. \\
Cha and Yoon proposed a technique to design ECC processing circuits for SEC-DED codes in memories; the area complexity of these circuits has been minimized in \cite{Cha2012}. Adalid et al. presented a SEC-DED code for short data words which can detect double-bit errors and correct single-bit errors \cite{Adali2016}, \cite{Adalid2016}. \\
Alabady et al. proposed a coding technique to detect and correct single and multiple bit errors in \cite{Alabady20182}, \cite{Alabady20183}, \cite{Alabady20181}. The algorithms, flowcharts, error patterns and their syndrome values are presented in \cite{Alabady20182}, \cite{Alabady20183}, \cite{Alabady20181}. However, the Alabady et al. codes fail to satisfy the single error correction and double error detection functionality in some cases. Besides this limitation, there are some mistakes in the flowchart, tables and figures, which are rectified in \cite{Tripathi2018}. Ming et al. proposed a SEC-DED-DAEC code to mitigate noise sources in memories \cite{Ming2011}. These existing codes require more area, power and delay.\\
To mitigate these problems, this paper aims to develop new channel coding techniques. This work proposes a modified SEC-DED-DAEC code for memories. In addition, this paper identifies the mistakes in the proposed $H$-matrix construction procedures, in the formation of equation 6 and in one table of ref. \cite{Tripathi2019}. These typos do not affect the main contributions and results of the paper in \cite{Tripathi2019}; they are corrected here. The main contributions are as follows:\\
i) a new method to construct the parity check matrices $(H)$ for SEC-DED-DAEC codes is proposed; ii) SEC-DED-DAEC codes with different message lengths are designed and implemented on an ASIC platform; and iii) the proposed codes are faster and more power efficient than existing designs. The rest of this paper is organized as follows. Section II provides the design of the proposed SEC-DED-DAEC codes. Section III presents the estimation of logic gates for the different designs. Section IV contains synthesis results and Section V presents the conclusion.
\section{Design of Proposed SEC-DED-DAEC Codes}
The proposed $(n, k)$ error correction code is a linear block code with a parity check matrix $(H)$ which consists of $(n-k)$ rows and $n$ columns. There are some mistakes in the construction procedure of the proposed (14, 8) $H$-matrix in ref. \cite{Tripathi2019}. In this section, the corrected construction procedure of the proposed $H$-matrices for both SEC-DED and SEC-DED-DAEC codes with different message lengths is described.
\subsection{$H$-matrix construction procedures}
The procedure to generate the (14, 8) proposed $H$-matrix for both SEC-DED and SEC-DED-DAEC codes is as follows:\\
\textbf{Step 1}: The $H$-matrix consists of $(n-k)$ rows and $n$ columns, with $k$ data columns and $(n-k)$ parity columns having the identity property.\\
\textbf{Step 2}: The last data column $(d_8)$ is selected to have weight 3 such that the modulo-2 sum of $d_8$ and the parity column $(p_1)$ generates `1' in positions 1, 3, 4 and 6.\\
\textbf{Step 3}: Data column $(d_7)$ is selected to have weight 3 such that the modulo-2 sum of $d_7$ and data column $(d_8)$ generates `1' in positions 1, 2, 4 and 6.\\
\textbf{Step 4}: The process is continued down to the first data column $(d_1)$ using the following $Q$-matrix, with the target of reducing delay and power consumption without violating the $H$-matrix construction rules.\\
\begin{figure}[]
\centering
\[
Q=
\left[ {\begin{array}{cccccccc}
1 & 1 & 1 & 3 & 1 & 2 & 1 & 1\\
2 & 3 & 4 & 4 & 2 & 3 & 2 & 3\\
4 & 5 & 5 & 5 & 3 & 4 & 4 & 4\\
5 & 6 & 6 & 6 & 4 & 6 & 6 & 6\\
\end{array} } \right]
\]
\caption{$Q$-matrix of (14, 8) proposed SEC-DED and SEC-DED-DAEC codes}
\label{fig1}
\end{figure}
The $H$-matrix of the (14, 8) SEC-DED and SEC-DED-DAEC codes is obtained by employing the proposed $H$-matrix construction methodology and is shown in Fig. \ref{fig2}. This $H$-matrix consists of 8 data columns and 6 parity columns.
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{cccccccccccccc}
1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of (14, 8) proposed SEC-DED and SEC-DED-DAEC codes}
\label{fig2}
\end{figure}
Similarly, the other $H$-matrices are constructed by employing the proposed construction procedure.
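As a sanity check (ours, not part of the original design flow), the following Python sketch verifies the standard SEC-DED-DAEC conditions on the (14, 8) $H$-matrix of Fig. \ref{fig2}: all columns are distinct and of odd weight, and all adjacent-column sums are distinct and disjoint from the single-error syndromes. It also confirms that each column of the $Q$-matrix of Fig. \ref{fig1} lists the `1' positions of the corresponding adjacent-column sum used in Steps 2--4 ($d_8$ being paired with $p_1$).

```python
# Check the SEC-DED-DAEC conditions on the proposed (14, 8) H-matrix:
# distinct odd-weight columns (single errors), and distinct even-weight
# adjacent-pair sums that never collide with a single-error syndrome.
H = [
    [1,0,1,0,0,1,1,0, 1,0,0,0,0,0],
    [0,1,1,1,1,0,1,0, 0,1,0,0,0,0],
    [1,1,0,0,1,0,1,1, 0,0,1,0,0,0],
    [1,0,0,1,0,1,0,1, 0,0,0,1,0,0],
    [0,1,0,1,0,0,0,0, 0,0,0,0,1,0],
    [0,0,1,0,1,1,0,1, 0,0,0,0,0,1],
]
cols = [tuple(row[j] for row in H) for j in range(14)]
assert len(set(cols)) == 14                 # unique single-error syndromes
assert all(sum(c) % 2 == 1 for c in cols)   # odd-weight columns (DED)

adj = [tuple(a ^ b for a, b in zip(cols[j], cols[j + 1])) for j in range(13)]
assert len(set(adj)) == 13                  # unique adjacent-pair syndromes
assert not set(adj) & set(cols)             # disjoint from single syndromes

# Each Q-matrix column lists the 1-positions of the adjacent-column sum
# for d_1..d_8 (d_8 is paired with p_1), as used in Steps 2-4.
Q = [[1,2,4,5], [1,3,5,6], [1,4,5,6], [3,4,5,6],
     [1,2,3,4], [2,3,4,6], [1,2,4,6], [1,3,4,6]]
for i, ones in enumerate(Q):
    assert [r + 1 for r, bit in enumerate(adj[i]) if bit] == ones
```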
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{cccccccc}
1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of proposed (8, 3) SEC-DED and SEC-DED-DAEC codes}
\label{fig3}
\end{figure}
The $H$-matrix of the (8, 3) SEC-DED-DAEC code is provided in Fig. \ref{fig3}. This matrix contains data columns $d_1$ to $d_3$ and parity columns $c_1$ to $c_5$.
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{ccccccccc}
0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of proposed (9, 4) SEC-DED and SEC-DED-DAEC codes}
\label{fig4}
\end{figure}
The $H$-matrix of the (9, 4) SEC-DED-DAEC code is provided in Fig. \ref{fig4}. This matrix consists of 4 data columns and 5 parity columns.
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{ccccccccccc}
0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of proposed (11, 5) SEC-DED and SEC-DED-DAEC codes}
\label{fig5}
\end{figure}
The $H$-matrix of (11, 5) SEC-DED-DAEC code is provided in Fig. \ref{fig5} which consists of 5 data columns and 6 parity columns.
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{ccccccccccccc}
0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of proposed (13, 7) SEC-DED and SEC-DED-DAEC codes}
\label{fig6}
\end{figure}
The $H$-matrix of (13, 7) SEC-DED-DAEC code is provided in Fig. \ref{fig6} where $d_1$-$d_7$ are data columns and $c_1$-$c_6$ are parity columns.
\begin{figure}[]
\centering
\[
H=
\left[ {\begin{array}{cccccccccccccccccccccccc}
1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} } \right]
\]
\caption{$H$-matrix of proposed (24, 16) SEC-DED and SEC-DED-DAEC codes}
\label{fig7}
\end{figure}
The $H$-matrix of the (24, 16) SEC-DED-DAEC code is provided in Fig. \ref{fig7}. It has 16 data columns and 8 parity columns.
\subsection{Encoding and Decoding Techniques}
In this section, the encoding and decoding processes of the proposed $(8, 3)$ SEC-DED and SEC-DED-DAEC code are described.\\
\textbf{{2.2.1 Encoding Process}}\\
In the encoding process, the parity bits are combined with the data bits to form the codeword. The equations to generate the check-bits of the proposed $(8, 3)$ SEC-DED and SEC-DED-DAEC code are as follows. \\
\begin{eqnarray}
\label{equ2}
c_1 & = & d_1 \oplus d_2 \nonumber\\
c_2 & = & d_2 \nonumber\\
c_3 & = & d_2 \oplus d_3 \nonumber\\
c_4 & = & d_1 \oplus d_3\nonumber\\
c_5 & = & d_1 \oplus d_3
\end{eqnarray}
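For illustration (a software sketch, not the hardware encoder itself; the function name is ours), the check-bit equations can be written directly:

```python
# Software sketch of the (8, 3) encoder: the five check bits are the
# XOR (modulo-2) combinations of the data bits given in the equations.
def encode_8_3(d):
    d1, d2, d3 = d
    c1 = d1 ^ d2
    c2 = d2
    c3 = d2 ^ d3
    c4 = d1 ^ d3
    c5 = d1 ^ d3
    # Codeword layout: data bits first, then check bits.
    return [d1, d2, d3, c1, c2, c3, c4, c5]

assert encode_8_3([1, 0, 1]) == [1, 0, 1, 1, 0, 1, 0, 0]
```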
\textbf{{2.2.2 Decoding Process}}\\
The decoding technique has two parts: a) syndrome computation and b) error correction logic. In the first part, error detection is done by calculating the syndrome value. If the syndrome value is zero $(SY = 0)$, the received codeword contains no error; otherwise there are some bit-errors. There are some typos in the error correction logic subsection of ref. \cite{Tripathi2019}. They are rectified here and the modified error correction logic is described in the following.\\
Errors are corrected by the error correction block. For a single error in one of the data bits, the syndrome corresponds to one of the data columns. In case of double adjacent errors in the $n^{th}$ and $(n+1)^{th}$ bits, the corresponding syndrome is the modulo-2 sum of the $n^{th}$ and $(n+1)^{th}$ columns of the $H$-matrix. Finally, the error pattern block compares the double adjacent error syndromes and the single error syndrome using 2-input OR (OR2) gates to confirm the occurrence of an error in the $n^{th}$ bit. If an error occurs in the $n^{th}$ bit, then the output of the OR2 gate is 1 and error correction is done by 2-input XOR (XOR2) logic, which takes the $n^{th}$ bit and the output of the OR2 gate as inputs to produce the corrected version of the data stored in the $n^{th}$ position of the codeword.
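The decoding procedure above can be sketched in software as a syndrome look-up (an illustrative behavioural model of the hardware, with names of our choosing): single-error syndromes are the columns of the $(8, 3)$ $H$-matrix of Fig. \ref{fig3}, and double-adjacent-error syndromes are sums of neighbouring columns.

```python
# Behavioural model of the (8, 3) SEC-DED-DAEC decoder: compute the
# syndrome, look it up against single-error and adjacent-double-error
# patterns, and flip the flagged bit positions.
H = [
    [1,1,0,1,0,0,0,0],
    [0,1,0,0,1,0,0,0],
    [0,1,1,0,0,1,0,0],
    [1,0,1,0,0,0,1,0],
    [1,0,1,0,0,0,0,1],
]
cols = [tuple(row[j] for row in H) for j in range(8)]
table = {c: (j,) for j, c in enumerate(cols)}                  # single errors
table.update({tuple(a ^ b for a, b in zip(cols[j], cols[j + 1])): (j, j + 1)
              for j in range(7)})                              # adjacent doubles
assert len(table) == 15    # 8 single + 7 double syndromes, all distinct

def decode_8_3(word):
    syn = tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)
    if syn == (0,) * 5:
        return word                        # SY = 0: no error detected
    for j in table[syn]:
        word[j] ^= 1                       # flip the flagged bit(s)
    return word

cw = [1, 0, 1, 1, 0, 1, 0, 0]              # a valid codeword
err = cw[:]; err[2] ^= 1; err[3] ^= 1      # inject a double adjacent error
assert decode_8_3(err) == cw
```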
\subsection{Calculation of parity-bits}
The main aim of the proposed codes is to minimize the number of 1's in each row and column of the $H$-matrix. The improvement in delay is achieved by minimizing the number of ones in the rows of the matrix. The equation (\ref{equ6}) of ref. \cite{Tripathi2019} has been corrected in this section. For weight $w$=3, the minimum number of parity bits $(n-k)$ is calculated by taking the approximate value from equation (\ref{equ6}).
\begin{equation}
\label{equ6}
(n-k)\geq(\sqrt{1+2.5k}+1.90)
\end{equation}
Equation (\ref{equ6}) is applicable to SEC-DED-DAEC codes, but with a limitation: it is suitable only up to 8-bit SEC-DED-DAEC codes. The number of parity bits required for a specific number of data bits is presented in Table \ref{tabmin}.
\begin{table}[]
\caption{Parity bits required}
\label{tabmin}
\centering
\resizebox{!}{0.1\textheight}{%
\begin{tabular}{|c|c|c|}
\hline
Codec & Data bit ($k$) & \begin{tabular}[c]{@{}c@{}} Number of \\ parity bit ($P$)\end{tabular} \\ \hline
(8, 3) & 3 & 5 \\ \hline
(9, 4) & 4 & 5 \\ \hline
(11, 5) & 5 & 6 \\ \hline
(12, 6) & 6 & 6 \\ \hline
(13, 7) & 7 & 6 \\ \hline
(14, 8) & 8 & 6 \\ \hline
\end{tabular}
}
\end{table}
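Under our reading of ``approximate value'' as nearest-integer rounding (an assumption, since the rounding rule is not stated explicitly), equation (\ref{equ6}) reproduces Table \ref{tabmin} exactly:

```python
# Reproduce the parity-bit table: round sqrt(1 + 2.5k) + 1.90 to the
# nearest integer to get the parity-bit count for k data bits (k <= 8).
import math

def parity_bits(k):
    return round(math.sqrt(1 + 2.5 * k) + 1.90)

assert [parity_bits(k) for k in (3, 4, 5, 6, 7, 8)] == [5, 5, 6, 6, 6, 6]
```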
\section{Logic Gate Estimation for Complexity Analysis}
This section presents the logic gate estimation for complexity analysis, which consists of area complexity and critical path delay.
\subsection{Area complexity}
The area complexity in terms of logic gates of the proposed and existing SEC-DED and SEC-DED-DAEC codes is presented in Table \ref{tabarea}. The proposed codes require fewer logic gates than the other existing codes. The area complexity comparison of existing and proposed codes is also presented in terms of 2-input NAND (NAND2) gate equivalents.
\begin{table}[]
\caption{Area complexity comparison of proposed codes and existing codes}
\label{tabarea}
\centering
\begin{tabular}{|c|l|c|c|c|c|c|}
\hline
Codec & \multicolumn{1}{c|}{Schemes} & XOR2 & AND2 & OR2 & NOT & Equivalent NAND2 \\ \hline
\multirow{7}{*}{\begin{tabular}[c]{@{}c@{}}Existing \\ SEC-DED\end{tabular}} & Alabady (9, 4) \cite{Alabady20182}, \cite{Alabady20183} & 31 & 16 & - & 4 & 160 \\ \cline{2-7}
& Alabady (9, 4) \cite{Alabady20181} & 15 & 16 & - & 12 & 104 \\ \cline{2-7}
& Adalid (8, 4) \cite{Adali2016}, \cite{Adalid2016} & 27 & 17 & 3 & 5 & 156 \\ \cline{2-7}
& Hsiao (13, 8) \cite{Hsiao1970} & 51 & 32 & - & 16 & 284 \\ \cline{2-7}
& Hamming (13, 8) \cite{Hamming1950} & 59 & 32 & - & 14 & 314 \\ \cline{2-7}
& Cha, Yoon (13, 8) \cite{Cha2012} & 58 & 32 & - & 13 & 309 \\ \cline{2-7}
& Adalid (16, 8) \cite{Adali2016}, \cite{Adalid2016} & 55 & 25 & 7 & 1 & 292 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed \\ SEC-DED\end{tabular}} & Proposed (8, 3) & 16 & 6 & - & - & 76 \\ \cline{2-7}
& Proposed (9, 4) & 23 & 8 & - & - & 108 \\ \cline{2-7}
& Proposed (11, 5) & 29 & 10 & - & - & 136 \\ \cline{2-7}
& Proposed (13, 7) & 43 & 14 & - & - & 200 \\ \cline{2-7}
& Proposed (14, 8) & 50 & 16 & - & - & 232 \\ \cline{2-7}
& Proposed (24, 16) & 120 & 32 & - & - & 544 \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Existing\\ SEC-DED-DAEC\end{tabular}} & Ming (22, 16) \cite{Ming2011} & 112 & 235 & 31 & 120 & 1131 \\ \cline{2-7}
& Dutta (22, 16) \cite{Dutta2007} & 106 & 235 & 31 & 126 & 1113 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed \\ SEC-DED-\\ DAEC\end{tabular}} & Proposed (8, 3) & 20 & 24 & 5 & - & 143 \\ \cline{2-7}
& Proposed (9, 4) & 27 & 33 & 7 & - & 195 \\ \cline{2-7}
& Proposed (11, 5) & 34 & 42 & 9 & - & 247 \\ \cline{2-7}
& Proposed (13, 7) & 48 & 60 & 13 & - & 351 \\ \cline{2-7}
& Proposed (14, 8) & 55 & 69 & 15 & - & 403 \\ \cline{2-7}
& Proposed (24, 16) & 127 & 141 & 31 & - & 883 \\ \hline
\end{tabular}
\end{table}
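The ``Equivalent NAND2'' column is consistent with the standard NAND-gate realizations of the primitive gates (XOR2 = 4, AND2 = 2, OR2 = 3 and NOT = 1 NAND2 gates each); this weighting is our inference from the table entries, not stated explicitly in the text. A short check:

```python
# NAND2-equivalent area from primitive gate counts, using the standard
# NAND realizations: XOR2 = 4, AND2 = 2, OR2 = 3, NOT = 1 NAND2 gates.
WEIGHTS = {"XOR2": 4, "AND2": 2, "OR2": 3, "NOT": 1}

def nand2_equiv(xor2, and2, or2=0, inv=0):
    return (xor2 * WEIGHTS["XOR2"] + and2 * WEIGHTS["AND2"]
            + or2 * WEIGHTS["OR2"] + inv * WEIGHTS["NOT"])

# Spot checks against the area table:
assert nand2_equiv(16, 6) == 76           # proposed (8, 3) SEC-DED
assert nand2_equiv(20, 24, 5) == 143      # proposed (8, 3) SEC-DED-DAEC
assert nand2_equiv(27, 17, 3, 5) == 156   # Adalid (8, 4)
```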
\subsection{Critical path delay}
The critical path delays of the proposed and existing SEC-DED and SEC-DED-DAEC codes are provided in Table \ref{tabcric}. It can be observed that the performance of the proposed codes is better than that of the related existing SEC-DED and SEC-DED-DAEC codes.
\begin{table}[]
\caption{Critical path delay comparison of proposed codes and existing codes}
\label{tabcric}
\centering
\begin{tabular}{|c|l|c|c|c|c|c|}
\hline
Codec & \multicolumn{1}{c|}{Schemes} & XOR2 & AND2 & OR2 & NOT & \begin{tabular}[c]{@{}c@{}}Equivalent \\ NAND2\end{tabular} \\ \hline
\multirow{7}{*}{\begin{tabular}[c]{@{}c@{}}Existing \\ SEC-DED\end{tabular}} & Alabady (9, 4) \cite{Alabady20182}, \cite{Alabady20183} & 8 & 4 & - & 1 & 41 \\ \cline{2-7}
& Alabady (9, 4) \cite{Alabady20181} & 4 & 4 & - & 2 & 26 \\ \cline{2-7}
& Adalid (8, 4) \cite{Adali2016}, \cite{Adalid2016} & 9 & 4 & - & 2 & 46 \\ \cline{2-7}
& Hsiao (13, 8) \cite{Hsiao1970} & 10 & 4 & - & 1 & 49 \\ \cline{2-7}
& Hamming (13, 8) \cite{Hamming1950} & 20 & 4 & - & 1 & 89 \\ \cline{2-7}
& Cha, Yoon (13, 8) \cite{Cha2012} & 18 & 4 & - & 1 & 81 \\ \cline{2-7}
& Adalid (16, 8) \cite{Adali2016}, \cite{Adalid2016} & 10 & 3 & 7 & 1 & 68 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed \\ SEC-DED\end{tabular}} & Proposed (8, 3) & 4 & 2 & - & - & 20 \\ \cline{2-7}
& Proposed (9, 4) & 6 & 2 & - & - & 28 \\ \cline{2-7}
& Proposed (11, 5) & 6 & 2 & - & - & 28 \\ \cline{2-7}
& Proposed (13, 7) & 10 & 2 & - & - & 44 \\ \cline{2-7}
& Proposed (14, 8) & 10 & 2 & - & - & 44 \\ \cline{2-7}
& Proposed (24, 16) & 16 & 2 & - & - & 68 \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Existing SEC-DED-\\ DAEC\end{tabular}} & Ming (22, 16) \cite{Ming2011} & 22 & 5 & 2 & 1 & 105 \\ \cline{2-7}
& Dutta (22, 16) \cite{Dutta2007} & 18 & 5 & 2 & 1 & 89 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed SEC-DED-\\ DAEC\end{tabular}} & Proposed (8, 3) & 8 & 3 & 2 & - & 44 \\ \cline{2-7}
& Proposed (9, 4) & 10 & 3 & 2 & - & 52 \\ \cline{2-7}
& Proposed (11, 5) & 11 & 3 & 2 & - & 56 \\ \cline{2-7}
& Proposed (13, 7) & 15 & 3 & 2 & - & 72 \\ \cline{2-7}
& Proposed (14, 8) & 15 & 3 & 2 & - & 72 \\ \cline{2-7}
& Proposed (24, 16) & 23 & 3 & 2 & - & 104 \\ \hline
\end{tabular}
\end{table}
The critical path delays of the proposed codes are compared with those of the Alabady et al. \cite{Alabady20181}, Adalid et al. \cite{Adali2016}, Hsiao \cite{Hsiao1970}, Hamming \cite{Hamming1950}, Cha-Yoon \cite{Cha2012} and Ming et al. \cite{Ming2011} codes.
\section{Synthesis results}
The proposed SEC-DED and SEC-DED-DAEC codes have been implemented in the Verilog hardware description language (HDL). All codes have been simulated and synthesized on an ASIC platform using the Cadence Genus synthesis solution (TSMC18) tool. The ASIC-based synthesis results in terms of area, power, delay, power delay product (PDP), power area product (PAP) and cost (the product of area, power and delay) of the proposed and existing SEC-DED and SEC-DED-DAEC codes are presented in Table \ref{tabasic}. \\
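The PDP, PAP and cost columns in Table \ref{tabasic} are derived directly from the area, power and delay columns; a minimal sketch, assuming scale factors of $10^{-3}$, $10^{-6}$ and $10^{-9}$ respectively, which we inferred from the reported magnitudes (the units are not stated explicitly):

```python
def derived_metrics(area, power, delay):
    """PDP, PAP, and cost from area, power, and delay.
    Scale factors are assumptions inferred from the table's magnitudes."""
    pdp = power * delay / 1e3          # power-delay product
    pap = area * power / 1e6           # power-area product
    cost = area * power * delay / 1e9  # product of area, power, and delay
    return pdp, pap, cost

# Alabady (9, 4) row: area 475.66, power 38.84, delay 282.4.
pdp, pap, cost = derived_metrics(475.66, 38.84, 282.4)
print(round(pdp, 2), round(pap, 2), round(cost, 3))  # -> 10.97 0.02 0.005
```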
\begin{table}[]
\caption{ASIC synthesis results of proposed codes and existing codes}
\label{tabasic}
\centering
\begin{tabular}{|c|l|c|c|c|c|c|c|}
\hline
Codec & \multicolumn{1}{c|}{Schemes} & Area & Power & Delay & PDP & PAP & Cost \\ \hline
\multirow{7}{*}{\begin{tabular}[c]{@{}c@{}}Existing\\ SEC-DED\end{tabular}} & Alabady (9, 4) \cite{Alabady20182}, \cite{Alabady20183} & 592.11 & 59.36 & 365.3 & 21.69 & 0.04 & 0.013 \\ \cline{2-8}
& Alabady (9, 4) \cite{Alabady20181} & 475.66 & 38.84 & 282.4 & 10.97 & 0.02 & 0.005 \\ \cline{2-8}
& Adalid (8, 4) \cite{Adali2016}, \cite{Adalid2016} & 708.54 & 70.67 & 365.3 & 25.82 & 0.05 & 0.018 \\ \cline{2-8}
& Hsiao (13, 8) \cite{Hsiao1970}& 1293.95 & 155.94 & 302.9 & 47.24 & 0.20 & 0.061 \\ \cline{2-8}
& Hamming (13, 8) \cite{Hamming1950}& 1204.17 & 145.41 & 328.9 & 47.83 & 0.18 & 0.058 \\ \cline{2-8}
& Cha, Yoon (13, 8) \cite{Cha2012} & 1200.84 & 159.58 & 365.3 & 55.55 & 0.19 & 0.070 \\ \cline{2-8}
& Adalid (16, 8) \cite{Adali2016}, \cite{Adalid2016} & 1380.45 & 180.83 & 423.5 & 76.58 & 0.25 & 0.106 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed \\ SEC-DED\end{tabular}} & Proposed (8, 3) & 382.55 & 34.16 & 362.4 & 12.37 & 0.01 & 0.005 \\ \cline{2-8}
& Proposed (9, 4) & 542.22 & 56.92 & 365.3 & 20.80 & 0.03 & 0.011 \\ \cline{2-8}
& Proposed (11, 5) & 708.52 & 69.73 & 341.9 & 23.84 & 0.05 & 0.017 \\ \cline{2-8}
& Proposed (13, 7) & 997.93 & 110.38 & 337.5 & 37.25 & 0.11 & 0.037 \\ \cline{2-8}
& Proposed (14, 8) & 1150.95 & 138.02 & 292.7 & 40.40 & 0.16 & 0.046 \\ \cline{2-8}
& Proposed (24, 16) & 2318.51 & 344.57 & 308.4 & 106.27 & 0.80 & 0.246 \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Existing \\ SEC-DED-DAEC\end{tabular}} & Ming (22, 16) \cite{Ming2011} & 2877.34 & 486.99 & 467.4 & 227.62 & 1.40 & 0.655 \\ \cline{2-8}
& Dutta (22, 16) \cite{Dutta2007} & 2920.58 & 461.16 & 429.3 & 197.98 & 1.35 & 0.578 \\ \hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Proposed \\ SEC-DED\\ -DAEC\end{tabular}} & Proposed (8, 3) & 555.52 & 61.18 & 273.6 & 16.74 & 0.03 & 0.009 \\ \cline{2-8}
& Proposed (9, 4) & 751.77 & 87.33 & 289.7 & 25.30 & 0.07 & 0.019 \\ \cline{2-8}
& Proposed (11, 5) & 908.12 & 115.75 & 502.8 & 58.20 & 0.11 & 0.053 \\ \cline{2-8}
& Proposed (13, 7) & 1343.87 & 177.40 & 482.7 & 85.63 & 0.24 & 0.115 \\ \cline{2-8}
& Proposed (14, 8) & 1543.44 & 238.85 & 433.8 & 103.61 & 0.37 & 0.160 \\ \cline{2-8}
& Proposed (24, 16) & 2826.59 & 619.47 & 434.4 & 269.10 & 1.75 & 0.761 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, SEC-DED and SEC-DED-DAEC codes have been proposed for different message lengths. These SEC-DED and SEC-DED-DAEC codes have been designed and implemented on an ASIC platform. The performance of our design has been analyzed in terms of area, power, delay, PDP, PAP and cost. Logic gate estimates for the proposed and existing SEC-DED and SEC-DED-DAEC codes are provided. Our proposed design is faster and more power-efficient than other related designs.
\section*{Acknowledgment}
The authors are thankful for the research grant received from RUSA 2.0 of Jadavpur University. The authors would also like to thank SMDP-C2SD, Jadavpur University, for the Cadence simulation software.
\section{Introduction}
Language technologies have become important tools for content moderation on several platforms. Offensive content is either flagged or deleted completely. Some of these language technologies are trained on data that are labeled by annotators who provide ground-truth labels for the data samples. However, one of the main challenges with such datasets is the consistency and reliability of labels provided by annotators \cite{martin2021ground}. For example, text that is labeled as offensive in one context by an annotator, may be perceived differently by another. This may adversely impact the accuracy and fairness of models built using such datasets \cite{geva2019we}. Consistent with extant literature, we reason that the difficulty with annotator reliability and consistency is largely due to the fact that language is contextual, and its interpretation subjective \cite{wiebe2005annotating}. Thus, without context and clear guidelines, annotation tasks can create a lot of noise in benchmark datasets.
To navigate the challenges with existing datasets, several studies have suggested alternative approaches to annotation tasks and model development. For example, Mathew \textit{et al.} \cite{mathew2020hatexplain} posit that training models by highlighting the portion of a particular text that people use to distinguish offensive text from normal text can improve model performance. Also, Sap \textit{et al.} \cite{sap2019risk} show that priming annotators before annotation tasks can reduce their insensitivity to different dialects and the occurrence of bias in ground-truth labels. Similarly, Sap \textit{et al.} \cite{sap2019social} show how nudging annotators to provide additional information such as context inference, biased implications, and targets, among others, can help to improve the quality of crowdsourced datasets. However, Ball-Burack \textit{et al.} \cite{ball2021differential} find that solutions developed to tackle issues in one dataset may not necessarily be effective in resolving issues with out-of-sample datasets. In certain instances, annotator information may be required to improve model performance, highlighting the problem that labels may not be independent of annotators \cite{geva2019we}. These insights highlight the need for a deeper understanding of the issues with crowdsourced toxic text datasets \cite{geiger2021garbage}. Therefore, in this present study, we contribute to discussions on the difficulty of annotating toxic text datasets and highlight some recommendations to help improve annotator consistency and reliability.
To help achieve the goal of this study, we adopt the design science research (DSR) \cite{hevner2004design} framework as a guide. The framework provides guidelines on developing innovative solutions to existing problems, especially where people and technology are concerned \cite{peffers2012design}, \cite{gregor2013positioning}. Using this framework as a guiding principle for problem identification and solution development, we identify challenges in addition to those already highlighted in the extant literature. In examining three toxic text datasets that approach ground-truth labeling differently, we propose a multi-label approach to annotation. We use this multi-label approach to re-annotate samples from the three datasets, and find that 1) given different contexts, text samples can have different labels, and 2) multiple labels for toxic text datasets can increase agreement with external ML annotators; however, 3) this may not guarantee an improvement in inter-annotator agreement. We further discuss the implication of these results.
The remainder of the study is organized as follows. In the next section, we briefly discuss some related studies. After that, we provide information on the selected datasets and methods. Next, we present the results from our analysis and discuss the implications for theory and practice. We conclude the study by highlighting the limitations of the study and opportunities for future research.
\section{Related Work}
\subsection{Ground-Truth in Toxic Text Classification}
For language and image classification tasks, ML models are built on datasets labeled by annotators. Through strategies such as majority voting, final labels are created and largely accepted as the ground-truth \cite{davani2021dealing}. For example, for language datasets, if annotators A and B label a text sample as "offensive", and annotator C labels it as "normal", the final label will be "offensive" based on majority voting. In some cases, annotators are compensated for their efforts \cite{snow2008cheap}. Therefore, it is important that the labels provided are accurate, and free of bias \cite{bender2018data}. However, this is not always the case as several challenges have been highlighted in existing studies. These challenges are not limited to natural language. For instance, for image classification, Northcutt et al. \cite{northcutt2021pervasive} find significant errors in the labels for ImageNet, a popular image classification dataset. In one example mentioned in the study, a kayak was labeled as a dolphin. In this example, one can objectively say that there is an error with the kayak's label. For toxic text classification, the problem is more complex. It is not as straightforward to claim that a text sample has been given the wrong label. One of the reasons for this is the highly contextual nature of language. For instance, a phrase or sentence that is perceived as offensive in one context may be labeled as normal in another context. On the contrary, we can argue for example that a kayak will remain a kayak under all contexts. In addition to context, the background and values of the annotator play a significant role in the final label. As such, bound by the same context, a text sample can be labeled differently by different annotators. Hence, in some instances, ML models built on toxic text datasets may require annotator information to boost prediction accuracy \cite{geva2019we}.
These challenges may adversely impact model performance metrics and point to the need to review the process of labeling data, especially for natural language datasets.
To a large extent, the challenges identified in labeling toxic text datasets fall under the broad categories of inter-annotator agreement and annotator consistency. Inter-annotator agreement refers to the extent to which annotators provide the same label for a particular text sample \cite{artstein2017inter}. On the other hand, annotator consistency refers to the extent to which a single annotator provides the same label for similar text samples \cite{ishita2020using}. Therefore, achieving high agreement and consistency rates in a dataset may have positive implications for model performance. In existing studies, metrics such as Krippendorff's alpha \cite{hayes2007answering} and Cohen's kappa \cite{cohen1960coefficient} have been used as indicators of annotated data quality and reliability. In addition to these challenges, at the training and prediction phases of toxic text classification, metrics such as accuracy and fairness are important in reviewing ML model performance. Especially for toxic text classification, in addition to high prediction accuracy, researchers are concerned with ensuring that language constructs that are associated with minority groups are not falsely classified \cite{ball2021differential}. Largely, strategies to resolve some of the issues are implemented at the data processing, modeling and post-modeling stages in the ML pipeline. We are yet to fully understand how strategies can be implemented at the data collection stage to address some of the identified challenges. Thus, to contribute to efforts in this regard, we adopt a design science research approach \cite{hevner2004design} to examine selected toxic text datasets, discuss some identified challenges, and share insights from our proposed approach.
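For concreteness, Cohen's kappa for two annotators can be computed directly from observed and chance agreement; a minimal pure-Python sketch (binary labels, no weighting, and no guard for the degenerate case $p_e = 1$):

```python
def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e the agreement expected by chance
    from each annotator's marginal label frequencies."""
    n = len(ann_a)
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    labels = set(ann_a) | set(ann_b)
    p_e = sum((ann_a.count(l) / n) * (ann_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

a = ["off", "off", "norm", "norm", "off", "norm"]
b = ["off", "norm", "norm", "norm", "off", "off"]
print(round(cohens_kappa(a, b), 2))  # -> 0.33
```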
\subsection{Design Science Research (DSR)}
The DSR framework provides guidelines on creating innovative solutions to address problems, especially where people and technology are concerned \cite{hevner2004design}. The stages outlined by the proponents of the DSR framework include 1) problem identification, 2) objectives of a solution, 3) design and development, 4) demonstration, 5) evaluation, and 6) communication \cite{peffers2012design}. The problem identification stage is mainly concerned with assessing the issues related to the problem at hand. The objectives phase is meant to provide directions for addressing the challenges. The design and development, demonstration, and evaluation stages focus on executing the proposed solutions and evaluating the extent to which the objectives are achieved. Beyond all this, it is important to effectively communicate the entire project to interested parties.
In addition to the DSR stages, some studies establish that design research should lead to two outcomes namely, a process, and/or an artifact \cite{gregor2013positioning}. In this study, we introduce a process for annotating toxic-text datasets as a strategy to address key issues that affect dataset quality. An effective design science solution requires a good understanding of the existing problem \cite{storey2017using}. One of the ways to achieve this is by making use of case studies \cite{aken2004management}. Using these as guiding principles, we examine three toxic text datasets and propose solutions to address some of the challenges. The overarching objective, therefore, is to enumerate the issues identified in the datasets, provide insights on their impact, and assess the extent to which the solutions we propose can address some of the challenges identified in these case studies.
\section{Data and Methods}
\subsection{Data}
The datasets selected for the study are the HateXplain \cite{mathew2020hatexplain}, Social Bias Inference Corpus (SBIC) \cite{sap2019social}, and the Jigsaw\footnote{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge} datasets. Our selection of these three datasets is founded on the basis that they address a similar problem (toxic text), yet they are diverse in how the annotations were collected. For instance, HateXplain has exactly three annotators for all text samples while the SBIC and Jigsaw datasets do not have a fixed number of annotators for samples. The SBIC dataset has additional background information on annotators (i.e., data statements \cite{bender2018data}). Further, while the HateXplain and SBIC sets use majority voting to determine the final label, the Jigsaw data uses a continuous final label (i.e., toxicity) that represents the proportion of annotators who labeled a particular sample as toxic. For example, if 4 out of 5 annotators label text sample \textit{A} as offensive, Jigsaw's final label will be 0.8 toxicity, while the HateXplain and SBIC datasets will have \textit{offensive} or \textit{hatespeech} as the final label. Table 1 provides a summary of the number of annotators, text samples, and the distribution of final labels. In the last column of Table 1, we include our definition of offensive text for each dataset for the purpose of this study. Appendix A shows text samples from the three datasets.
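The two final-label conventions described above can be sketched as follows (a minimal illustration; the label strings are ours, not the datasets' exact field values):

```python
from collections import Counter

def majority_label(labels):
    """HateXplain/SBIC-style final label: simple majority vote.
    Ties are possible with an even number of votes; this sketch
    returns whichever tied label Counter lists first."""
    return Counter(labels).most_common(1)[0][0]

def toxicity(labels):
    """Jigsaw-style final label: fraction of offensive votes."""
    return sum(label == "offensive" for label in labels) / len(labels)

votes = ["offensive"] * 4 + ["normal"]  # 4 of 5 annotators: offensive
print(majority_label(votes))  # -> offensive
print(toxicity(votes))        # -> 0.8
```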
\begin{table}[h]
\caption{Dataset Summary}
\label{data-summary}
\centering
\begin{tabular}{p{1.5cm}p{2cm}p{2cm}p{2cm}p{1.5cm}p{3cm}}
\toprule
Dataset & Unique \newline Text Samples & Offensive \newline Text & Normal \newline Text & Total \newline Annotators & Offensive \newline Text Definition \\
\midrule
HateXplain & 20,148 & 12,334 (61\%) & 7,814 (39\%) & 253 & Both \textit{hatespeech} and \textit{offensive} text \\
SBIC & 45,318 & 25,073 (55\%) & 19,401 (45\%) & 307 & Samples given the labels 1 and 0.5 \\
Jigsaw & 1,804,874 & 120,084 (7\%) & 1,684,790 (93\%) & 8,899 & Samples with toxicity 0.5 and above\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Sampling}
To help with identifying issues, we re-annotated one hundred (100) randomly selected samples from each of the datasets. In Table 2 below, we show the distribution of the selected samples for each dataset. The samples were found to match the label distribution of offensive versus normal text samples in the original datasets. With the exception of the SBIC dataset, each of the samples was annotated by all five (5) authors of this study.
\begin{table}[h]
\caption{Sampled Data}
\label{samples-summary}
\centering
\begin{tabular}{p{1.5cm}p{2cm}p{2cm}p{2cm}p{1.5cm}p{3cm}}
\toprule
Dataset & Unique \newline Text Samples & Offensive \newline Text & Normal \newline Text & Total \newline Annotators & Offensive \newline Text Definition \\
\midrule
HateXplain & 100 & 61 & 39 & 5 & Both \textit{hatespeech} and \textit{offensive} text \\
SBIC & 100 & 60 & 40 & 3 & Samples given the labels 1 and 0.5 \\
Jigsaw & 100 & 9 & 91 & 5 & Samples with toxicity 0.5 and above\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Team Annotation Task Design}
Earlier, we established that clear instructions are necessary to serve as a guide to annotators on how to handle annotation tasks. Therefore, similar to Guest et al. \cite{guest2021expert}, we develop new guidelines for annotators for this task. In addition to the guidelines, we reason that it is important to allow annotators to skip text samples that are difficult to categorize due to the occurrence of contextless samples. Therefore, providing a third label such as \textit{undecided} can help researchers identify problematic or difficult samples in the data. Appendix B contains the guidelines provided for the annotation task.
To address the issues of context, consistency and agreement, we propose three context-based label columns for re-annotating the three datasets. Our justification for this proposal is founded on our examination of existing datasets which supported the assertion that language is highly contextual. Hence, providing multiple contexts for labels may help to improve consistency and agreement between annotators. The label columns for the team annotation task are: \textit{strict label}, \textit{relaxed label}, and \textit{inferred group label}. For the \textit{strict label}, we asked annotators to consider the task as a bag-of-words approach where the appearance of certain words in a text makes the entire text either offensive or normal. For the \textit{relaxed label}, we asked annotators to consider any instance where an offensive text sample, under the strict label could be considered as normal. For the \textit{inferred group label} we asked annotators to consider whether an offensive text sample can be considered normal if it were uttered by a member of the identity group in the text.
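One way to store the three context-based labels is as a single record per annotator per sample; a hypothetical sketch (field and label names are illustrative, not any dataset's actual schema):

```python
from dataclasses import dataclass

# Allowed labels in this sketch; "undecided" supports skipping
# samples that are too difficult to categorize.
LABELS = {"offensive", "normal", "undecided"}

@dataclass
class Annotation:
    text: str
    strict: str          # bag-of-words reading of the text
    relaxed: str         # some context could make the text normal
    inferred_group: str  # as if uttered by a member of the targeted group

    def __post_init__(self):
        # Reject anything outside the fixed label set.
        for value in (self.strict, self.relaxed, self.inferred_group):
            if value not in LABELS:
                raise ValueError(f"unknown label: {value!r}")

record = Annotation("example text", "offensive", "normal", "normal")
print(record.relaxed)  # -> normal
```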
In addition, we conducted a separate exercise to assess the extent to which the proposed multi-label columns can make annotators consistent. Therefore, we divided the team into two. Three team members labeled the same samples twice, at least one week apart using the multiple labeling scheme, while two of the team members used a single column for labeling the datasets. For this task, we used only the samples from the SBIC dataset.
Due to the limited number of annotators and samples, we present our results as propositions that can be tested further and validated statistically. We understand that additional studies with a larger sample pool can help make stronger and more generalizable claims.
\subsection{External Annotators}
Since we re-annotated a limited number of samples from the three datasets (i.e., one hundred samples from each), we used ML-based tools as external annotators to validate the labels provided, since we did not have enough samples to split the data into training and test sets for an ML model. We therefore used Detoxify \cite{Detoxify} and Perspective API\footnote{https://www.perspectiveapi.com/} as external annotators. In doing this, we were able to compare the final labels from the re-annotation task with the original labels provided in the datasets. The toxic text classification frameworks for Detoxify were built on two different datasets, leading to the development of two different packages: Detoxify Original (Dtx(Og)), built on Wikipedia comments, and Detoxify Unbiased (Dtx(Unb)), built using the Jigsaw data. The Perspective API (PsAI) framework was built in a collaborative research effort between Jigsaw and Google.
In addition, Perspective API has four distinct categories for offensive text, namely toxic, obscene, insulting and threatening. For the purpose of this study, we deem any sample that scored 0.5 and above as offensive in these four categories. Detoxify also has scores for different categories, namely toxicity, severe toxicity, obscene, threat, insult, identity attack. Again, for the purpose of this study, we focus on the scores for toxicity and label any sample that scores 0.5 and above as offensive. These two popular toxic classification frameworks provide a good foundation for evaluating the final labels. In the sections that follow, we present our findings.
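The 0.5 decision rule applied to both external annotators can be sketched as below; the score dictionaries are mock values in the general shape such tools return, not actual API output:

```python
def is_offensive(scores, categories, threshold=0.5):
    """Flag a sample as offensive if any listed category score
    reaches the threshold; missing categories count as 0.0."""
    return any(scores.get(cat, 0.0) >= threshold for cat in categories)

# Perspective-style categories used in this study (mock scores).
psai_categories = ["toxic", "obscene", "insulting", "threatening"]
sample = {"toxic": 0.12, "obscene": 0.03, "insulting": 0.61, "threatening": 0.02}
print(is_offensive(sample, psai_categories))           # -> True

# Detoxify-style: only the toxicity score is used here (mock score).
print(is_offensive({"toxicity": 0.31}, ["toxicity"]))  # -> False
```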
\section{Findings}
\subsection{Problem Identification}
\begin{table}[htbp]
\caption{Some Identified Challenges in the HateXplain, SBIC, and Jigsaw Datasets}
\label{challenges}
\centering
\begin{tabular}{p{7cm}p{7cm}}
\toprule
Problem Identified & Implications \\
\midrule
\textbf{Annotator Influence:} \newline
In HateXplain, one annotator contributed to the final label for 5,730 samples, about 28\% of all final labels and roughly 3,000 samples (18 percentage points) more than the second-ranked annotator in the HateXplain set.
\newline
\newline
In SBIC, a majority (i.e., 80\%) of the annotators identified as white. We examined whether people in the minority groups were more likely to oppose the final label and found, for example, that 17.4\% of annotators who identified as black opposed the final label at least once, while this figure was 12\% for the white annotators. In Appendix C, we show the opposition rate by race.
&
\medskip
Since annotator behavior can be reflected in prediction outcomes \cite{geva2019we}, \cite{bender2018data}, this is problematic because the annotator highly influences the outcome (final label) of many samples. This can lead to one annotator's behavior being further amplified by a ML model.
\newline
\newline
Low diversity in annotators selected for a labeling task can result in the silencing of minority voices. Since bias is one of the key concerns for recent advances in toxic text classification, it is important to understand how low diversity can impact predictions.
\\
\textbf{Annotation (Im)Balance:} \newline
In Jigsaw and SBIC, we found instances where a large pool of annotators contributed to the final label for some text samples while others received labels from relatively fewer annotators. For example, the minimum number of annotators per text in the Jigsaw data is 3, while the maximum is 4,936. As such, similar samples may have different toxicity rates due largely to very different numbers of annotations. In Appendix D, we show the distribution of sample annotations for the SBIC and Jigsaw datasets.
&
\medskip
This is a problem, especially for the Jigsaw dataset, because the final label is the proportion of annotators who label a sample as offensive. Consider text sample \textit{B} labeled by 3 annotators and \textit{C} labeled by 1,000 annotators. For sample \textit{B}, if even one annotator labels the text as offensive, toxicity will equal 0.33. However, for \textit{C}, if 100 annotators label the sample as offensive, toxicity will equal 0.10, suggesting that sample \textit{B} is more toxic than \textit{C}, which might not be the case.
\\
\textbf{Inconsistent Labels:} \newline
We found instances where annotators provided different labels for similar text. For example, in HateXplain, \textit{has stupid rich h*e} was labeled as \textit{normal} while \textit{...b**** a** back to the east} was labeled \textit{offensive} by the same annotator. Appendix E shows another instance of this in the SBIC dataset.
&
\medskip
Inconsistency has been one of the major concerns for building good models using toxic text datasets. It creates noise in the data which ultimately leads to poor model performance \cite{gordon2021disagreement}, \cite{ishita2020using}.
\\
\textbf{Contextless Samples:} \newline
Some text samples were difficult to place in specific contexts. This made it difficult to know which labels to assign to them. Nevertheless, annotators provided labels for these samples. For example, the sample \textit{"why Arabs lose wars"} from HateXplain is difficult to place in a specific context, making it difficult to categorize. In Appendix F, we show more examples of contextless samples.
&
\medskip
Contextless samples also have the tendency to lead to noisy labels because of the high level of uncertainty. This can also lead to large disagreement rates between annotators.
\\
\bottomrule
\end{tabular}
\end{table}
Table 3 above provides a summary of the problems identified in the datasets. The identified issues are grouped into four main categories, namely Annotator Influence, Annotation (Im)Balance, Inconsistent Labels, and Contextless Samples.
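The annotation (im)balance problem in Table 3 can be made concrete: with a proportion-based final label, the size of the annotator pool sets both the score and its smallest possible nonzero value. A minimal sketch using the sample B and C figures from the table:

```python
# Sample B: 1 of 3 annotators votes offensive.
# Sample C: 100 of 1000 annotators vote offensive.
tox_b = 1 / 3
tox_c = 100 / 1000
print(round(tox_b, 2), round(tox_c, 2))  # -> 0.33 0.1

# B scores higher even though C drew proportionally fewer offensive
# votes from a far larger pool; B's smallest nonzero score is 1/3,
# while C's is 1/1000, so the two scales are not directly comparable.
assert tox_b > tox_c
```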
\subsection{Distribution of Labels}
Using the proposed multi-label approach from the guidelines in Appendix B, we re-annotated one hundred samples each from all three datasets. The samples were annotated separately by the five authors and the final labels were determined by majority vote. Table 4 shows the distribution of the final labels from the original datasets (Og.) and the three new columns: Strict (Str.), Relaxed (Rel.), and Inferred Group (Ig.). The distributions show that given different contexts (i.e., Str., Rel., and Ig.), the final labels can change for a particular text sample. We found that for each of the proposed three columns, the final labels did not align perfectly with the original benchmarks, emphasizing our point about contextual labeling. We also found that, if given the option, annotators may be undecided on some text samples.
\begin{table}[htbp]
\caption{Label Distribution from Team Annotation Task}
\label{label-distribution-team}
\centering
\begin{tabular}{l|cccc|cccc|cccc}
\toprule
& \multicolumn{4}{c|}{HateXplain} & \multicolumn{4}{c|}{Jigsaw} & \multicolumn{4}{c}{SBIC} \\
Label & Og. & Str. & Rel. & Ig. & Og. & Str. & Rel. & Ig. & Og. & Str. & Rel. & Ig. \\
\midrule
Normal & 61 & 68 & 61 & 59 & 91 & 96 & 93 & 93 & 40 & 59 & 46 & 51 \\
Offensive & 39 & 32 & 39 & 40 & 9 & 4 & 6 & 6 & 60 & 30 & 51 & 46 \\
Undecided & - & - & - & 1 & - & - & 1 & 1 & - & 1 & 3 & 3 \\
\bottomrule
\end{tabular}
\small
\\
\bigskip
Og.: original dataset label, Str.: strict label, Rel.: relaxed label, Ig.: inferred-group label
\end{table}
\subsection{Label Agreement}
In the previous section, we showed that different contexts can lead to different labels for text samples. In Table 5 below, we show the extent to which the original labels and the new context-based labels agreed with one another. We find that, to a large extent, our relaxed and inferred group labels have a relatively higher agreement with each other. However, our strict label shows a much higher rate of disagreement with the original and relaxed labels for all datasets. Out of the three datasets, the Jigsaw data shows a much higher agreement rate with our new labels. It is worth noting that the Jigsaw data contains only a small percentage of offensive text compared to the other two datasets. We can therefore infer that offensive text is perhaps more difficult to label.
\begin{table}[htbp]
\caption{Label Agreement from Team Annotation Task}
\label{agreement-rate}
\centering
\begin{tabular}{lrrr}
\toprule
Column & HateXplain & Jigsaw & SBIC \\
\midrule
Strict-Relaxed & 0.87 & 0.95 & 0.59\\
Strict-InGroup & 0.85 & 0.95 & 0.61 \\
Strict-Original & 0.77 & 0.91 & 0.54\\
Relaxed-InGroup & 0.97 & 1.00 & 0.96\\
Relaxed-Original & 0.80 & 0.92 & 0.86\\
InGroup-Original & 0.78 & 0.92 & 0.84\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Label Validation}
As mentioned in the methods section, to complement the label agreement results, we used two ML tools (external annotators) to assess the original and new final labels, namely Perspective API\footnote{https://www.perspectiveapi.com/} and Detoxify \cite{Detoxify}. Our aim in doing this is to provide a proxy for label quality, given the limited number of samples we re-annotated. The results in Table 6 show that the new multi-labels have relatively higher agreement rates with Perspective API and Detoxify for all three datasets compared to the original labels. For the Jigsaw dataset, it was not surprising to see that the original label had a higher agreement with Detoxify (Unbiased) and Perspective API, because both frameworks were developed using the Jigsaw dataset. These results suggest, to some extent, that providing multiple labels for different contexts can improve model performance, including out-of-sample predictions.
\begin{table}[htbp]
\caption{Label Agreement with External Annotators}
\label{label-agg}
\centering
\begin{tabular}{l|cccc|cccc|cccc}
\toprule
& \multicolumn{4}{c|}{HateXplain} & \multicolumn{4}{c|}{Jigsaw} & \multicolumn{4}{c}{SBIC} \\
Label & Og. & Dtx & Dtx(Unb). & PsAI.& Og. & Dtx & Dtx(Unb). & PsAI & Og. & Dtx & Dtx(Unb). & PsAI \\
\midrule
Strict & 0.77 & 0.74 & 0.75 & 0.75 & 0.91 & 0.97 & 0.94 & 0.91 & 0.54 & 0.83 & 0.83 & 0.75\\
Relaxed & 0.80 & 0.65 & 0.66 & 0.68 & 0.92 & 0.96 & 0.93 & 0.92 & 0.86 & 0.58 & 0.56 & 0.58\\
InGroup & 0.78 & 0.63 & 0.66 & 0.66 & 0.92 & 0.96 & 0.93 & 0.92 & 0.84 & 0.58 & 0.56 & 0.58\\
Original & - & 0.61 & 0.60 & 0.62 & - & 0.92 & 0.97 & 0.96 & - & 0.56 & 0.54 & 0.60\\
\bottomrule
\end{tabular}
\small
\\
\bigskip
Og.: Original dataset label, Dtx.: Detoxify original, Dtx(Unb).: Detoxify unbiased, PsAI.: Perspectve API
\end{table}
\subsection{Inter-Annotator Agreement}
According to extant literature, inter-annotator agreement is an important component of natural language datasets. For the purpose of this study, we define inter-annotator agreement as the extent to which text samples received a unanimous vote (whether offensive or normal) from annotators. In using a multi-label approach, we expected a high agreement rate between annotators, because we believed the context-based columns would provide more clarity to annotators. However, this was not the case. In Table 7 below, we show that in some cases the new multi-label columns performed relatively poorly with regard to overall inter-annotator agreement. Perhaps this supports our argument that toxic text samples are difficult to label.
\begin{table}[htbp]
\caption{Rate of Agreement between Annotators}
\label{label-interannotator-agreement}
\centering
\begin{tabular}{l|cccc|cccc|cccc}
\toprule
& \multicolumn{4}{c|}{HateXplain} & \multicolumn{4}{c|}{Jigsaw} & \multicolumn{4}{c}{SBIC} \\
Label & Og. & Str. & Rel. & Ig. & Og. & Str. & Rel. & Ig. & Og. & Str. & Rel. & Ig. \\
\midrule
Normal & 0.74 & 0.46 & 0.55 & 0.60 & 0.77 & 0.84 & 0.68 & 0.69 & 0.70 & 0.78 & 0.78 & 0.76 \\
Offensive & 0.42 & 0.70 & 0.61 & 0.60 & 0.11 & 0 & 0 & 0 & 0.78 & 0.57 & 0.55 & 0.61 \\
Undecided & - & - & - & 0 & - & - & 0 & 0 & - & 1.00 & 0 & 0 \\
Overall & 0.62 & 0.61 & 0.58 & 0.58 & 0.71 & 0.82 & 0.64 & 0.65 & 0.75 & 0.64 & 0.64 & 0.67 \\
\bottomrule
\end{tabular}
\small
\\
\bigskip
Og.: original dataset label, Str.: strict label, Rel.: relaxed label, Ig.: inferred-group label
\end{table}
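The unanimous-agreement measure used in Table 7 can be computed directly; a small sketch with hypothetical votes:

```python
def unanimous_rate(annotations):
    """Fraction of samples on which all annotators gave the same label.
    `annotations` holds one list of labels per sample."""
    unanimous = sum(1 for labels in annotations if len(set(labels)) == 1)
    return unanimous / len(annotations)

votes = [
    ['offensive'] * 5,               # unanimous
    ['normal'] * 4 + ['offensive'],  # split
    ['normal'] * 5,                  # unanimous
]
print(unanimous_rate(votes))  # ~0.67
```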
\subsection{Annotator Consistency (Intra-Annotator Agreement)}
In addition to inter-annotator agreement, another important component of ground-truth labels is annotator consistency. As outlined earlier, consistency refers to the extent to which similar samples are given similar labels by a single annotator. As highlighted in the methods section, we used only samples from the SBIC dataset, dividing the team into two groups of three and two members. The results from the consistency task showed that the team members who used the single column had an average of 91\% consistency, while the team members who used the multi-label columns had averages of 86\%, 89\%, and 89\% for the strict, relaxed, and inferred-group labels respectively. Again, while we are careful not to draw any statistical conclusions from this, we found that for the multi-label team, the highest levels of consistency were 99\%, 97\%, and 97\% for the strict, relaxed, and inferred-group labels respectively, whereas the highest level of consistency for the single-column team was 92\%.
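The consistency measure amounts to comparing a single annotator's labels across two passes over the same samples; a sketch (the label strings are placeholders):

```python
def annotator_consistency(first_pass, second_pass):
    """Intra-annotator consistency: fraction of repeated samples given the
    same label by one annotator on both passes."""
    if len(first_pass) != len(second_pass):
        raise ValueError("passes must cover the same samples")
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

first  = ['offensive', 'normal', 'offensive', 'normal']
second = ['offensive', 'normal', 'normal',    'normal']
print(annotator_consistency(first, second))  # 0.75
```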
In the section that follows, we discuss the implications of our results for both theory and practice.
\section{Discussion}
This study shares insights on why labeling toxic text datasets is a difficult task. By examining three existing datasets, we highlighted challenges that reinforce and complement
existing issues pointed out in extant literature. The goal was not to show how poor the existing datasets are, but rather contribute to discussions on how to address challenges for existing and future annotation tasks. The results from the problem identification and re-annotation tasks point to five main insights.
First, the fact that language is highly contextual signifies that a multi-label approach may be more beneficial for ML models and classification tasks. Our re-annotation task showed that the same text sample can have different labels given different contexts. To provide support for this first insight, the results in Table 6 showed a relatively higher agreement rate between the multi-label approach and external annotators. This suggests that using multiple labels can lead to positive outcomes such as less noisy datasets and improved metrics for out-of-sample predictions. To a large extent, ML models built using toxic text datasets are trained with a single column as the dependent (target) variable. Since our approach creates three columns, developers can first consider the context of a specific task, and then use the most suitable column in a multi-label scenario. Thus, a multi-label approach can afford developers the flexibility to switch between contexts. One interesting possibility for taking this a step further is to use \emph{multi-label classification} \cite{tsoumakas2007multi-label}, which builds a single ML model to predict all labels simultaneously and can exploit relationships between the labels.
In the absence of multiple labels, we propose that ML models can be trained to first understand the context of text samples before making predictions.
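One way to operationalize this flexibility is a simple context-to-column mapping at training time. A sketch with hypothetical context names and column keys (not part of any released dataset schema):

```python
# hypothetical mapping from deployment context to the label column to train on
LABEL_FOR_CONTEXT = {
    'content_moderation': 'strict',
    'casual_forum':       'relaxed',
    'community_specific': 'ingroup',
}

def target_labels(dataset, context):
    """Select the label column best suited to a deployment context.
    `dataset` is a list of dicts with 'text' plus one key per label column."""
    column = LABEL_FOR_CONTEXT[context]
    return [row[column] for row in dataset]

rows = [
    {'text': 'sample a', 'strict': 1, 'relaxed': 1, 'ingroup': 1},
    {'text': 'sample b', 'strict': 1, 'relaxed': 0, 'ingroup': 0},
]
print(target_labels(rows, 'casual_forum'))  # [1, 0]
```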
Second, one must reckon with the fact that disagreement in a general sense increases as the number of annotators increases. Taking the rate of unanimous agreement as a simple measure, for one hundred samples from the HateXplain dataset, the agreement rate with two annotators was 81\% but gradually decreased to 56\% when three additional annotators were added. Looking at the inter-annotator agreement rate separately for offensive and normal text samples, our expectation that offensive text would be more difficult to label seems to hold for the original labels in the datasets. For the team annotation exercise, while this was true for both the SBIC and Jigsaw datasets, the HateXplain labels showed a different pattern where the team had a higher agreement rate for offensive text. Understanding the relationship between text categories and agreement rates can help to tailor solutions to identified challenges. Hence, we believe this issue is worth exploring further to improve our understanding of the phenomenon.
Third, while the results for the inter-annotator agreement task were not as expected across all datasets, it is worth emphasizing that this may not signal poor dataset labels. Rather, contrary to some existing studies, we believe that annotator disagreement could be an indication of annotator diversity, which should be a desirable attribute. Annotator diversity ensures that members of a minority group are not adversely impacted by prediction tasks, as has been discovered by some studies. For instance, Ball-Burack et al.~\cite{ball2021differential} found that text samples high in language from the African-American English dialect are more likely to be labeled as offensive. Our discussions at the problem identification stage of this study also showed that annotators who belonged to minority groups were more likely to oppose the final label compared to annotators in the majority group. Hence, in agreement with Davani et al. \cite{davani2021dealing}, we believe it is important to understand why annotators disagree rather than trying to achieve high agreement rates, which may lead to biased outcomes.
Fourth, while the average consistency scores from our preliminary re-annotation experiment did not, of course, provide conclusive results, we are intrigued by the observation that high levels of consistency are possible with the multi-label approach.
We believe that intra-annotator consistency is a significant benchmark for dataset quality, especially for toxic text datasets. As such, there should be additional efforts both to answer our particular question about single-label versus multi-label as well as to support research in this area more generally.
Fifth, for the SBIC dataset, we found that while some offensive text samples were easy to identify, a significant number were more difficult to label, mainly because the difficult ones did not contain words that were toxic. To a large extent, toxicity was implied rather than as glaring as in the other samples. Take, for example, the text samples in Table 8 below, which are extracts from the SBIC dataset labeled as offensive both by the team (under the strict label) and in the original labels:
\begin{table}[htbp]
\caption{Implicit Offensiveness}
\label{imp-offensive}
\centering
\begin{tabular}{p{10cm}}
\toprule
Sample Text from SBIC \\
\midrule
\textbf{Sample A} \\
\textit{"i really HATE a ""come over"" ass n***** like wtf every time we communicate dont tell me to come over tf take me out or something f***"}\\
\\
\textbf{Sample B} \\
\textit{"what 's the difference between an <number> yr old girl and a washing machine? when you dump a load in the washer , it will not follow you around for two months"}\\
\\
\bottomrule
\end{tabular}
\end{table}
While text sample A is easy to classify as offensive focusing on keywords as directed in our guidelines, text sample B requires a deeper understanding of what is being implied. This could also have adverse implications for ML models when there are no keywords or phrases to focus on. In addition to the difficulty of identifying the toxicity, the label also heavily depends on annotator characteristics such as demographics, values, emotion, humour, among others.
Difficult samples such as these accounted for inconsistent labels for the most part.
\section{Limitations and Future Work}
While this study provides useful insights for toxic text datasets, there are some notable limitations. One of the main limitations of this study is the fact that although we reviewed three toxic text datasets with a large number of samples, we only annotated one hundred (100) samples from each of them. While a hundred samples may not be enough to provide an exhaustive list of insights, we believe that we have revealed important insights that complement the findings in the existing literature.
Furthermore, the study focuses only on three datasets as case studies. Three out of the many toxic text datasets may not be representative enough. Since there are several toxic text datasets with unique attributes, our findings may not be applicable to all of them. However, our propositions and findings may cut across many datasets because of the broad issues of annotator agreement and consistency.
Our preliminary experiment to understand how the multi-label and single-label columns impact consistency was limited to 5 annotators.
For future research, we hope to recruit additional annotators so that we can make adequate statistical inferences, and to address these challenges so as to provide additional insights complementing those discussed in this study. In spite of these limitations, we believe that the insights shared here have useful implications for theory and practice and can suggest new directions and hypotheses for future research studies to test.
\section{Conclusion}
In this study, we highlight the need to pay attention to the issue of context in existing and future toxic text datasets for NLP tasks. We enumerate the challenges in some selected datasets and add to conversations pertaining to addressing them. We find that while language is difficult to annotate, using multiple label columns can help to reduce some of the identified challenges. A multi-label approach may also have positive implications for reducing noise in datasets, which can in turn improve out-of-sample predictions.
We thank Adam Jermyn for useful discussions on typical stellar parameters and Liam O'Connor, Ben Hyatt, Adrian Fraser, Kyle Augustson, and Whitney Powers for their valuable feedback and discussions.
DL is supported in part by NASA HTMS grant 80NSSC20K1280.
EHA is funded as a CIERA Postdoctoral fellow and would like to thank CIERA and Northwestern University.
Computations were conducted with support from the NASA High End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center on Pleiades with allocation GID s2276.
This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
\section*{Data Availability}
All scripts used to generate data and make the figures for this paper are available at https://github.com/ekaufman5/PrendergastScripts.
\bibliographystyle{mnras}
\section{Numerical Details}\label{numdeets}
We want to determine if the Prendergast magnetic field is a good candidate for a fossil magnetic field configuration in the core of red giant stars. To do so, we examine a domain consisting of the radiative core and solve for linear perturbations about a background Prendergast magnetic field.
We time-evolve a series of spherical simulations solving the magnetohydrodynamics equations in the Boussinesq approximation linearized about a background magnetic field $\vec{B}_0$, which has an associated current $\vec{J}_0$. We nondimensionalize lengthscales using the radius of the simulation region, $R$, which is the distance from the origin to the edge of the radiative zone.
We nondimensionalize the magnetic field with the maximum of $|\vec{B}_0|$. This choice sets the nondimensional timescale to be the Alfven time, $t_A$, associated with this maximum magnetic field strength, where $t_A=\frac{R}{v_A}$, $v_A= \frac{\max|\vec{B}_0|}{\sqrt{4\pi \rho_c}}$, and $\rho_c$ is a constant reference density.
We solve the magnetohydrodynamic equations under the Coulomb gauge, given by
\begin{align}
\qquad
&\nabla \cdot \vec{u} = 0
\label{eq:incompressibility_density}
\\
\begin{split}
&\partial_t \vec{u} + \frac{1}{\rho_c}\nabla P - \vec{g}\frac{\rho'}{\rho_c} -\nu \nabla^2 \vec{u} = - \frac{1}{4 \pi \rho_c}\nabla^2 \vec{A} \times \vec{B}_0 \\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \frac{1}{4 \pi \rho_c}\vec{J}_0 \times (\nabla \times \vec{A})
\label{eq:momentum_density}
\end{split}
\\
&\partial_t \vec{A} - \nabla \Phi - \eta \nabla^2 \vec{A} = \vec{u} \times \vec{B}_0
\label{eq:mag_density}
\\
&\partial_t \rho' + \vec{u} \cdot \nabla \rho_0 - \kappa \nabla^2 \rho' = 0
\label{eq:density}
\\
&\nabla \cdot \vec{A} = 0
\label{eq:coloumb gauge}
\end{align}
where $\vec{u}$ is the velocity, $P$ is the pressure, $\nu$ is the viscosity, $\vec{A}$ is the vector potential, $\Phi$ is the electric potential, $\eta$ is the magnetic resistivity, $\vec{g} = -gr\vec{e}_r$ is the gravitational acceleration, $\rho ' $ is the density perturbation, $\kappa$ is the thermal diffusivity, and $\rho_0$ is the background density, which we take to vary as $\nabla \rho_0 = -2\rho_c r/R^2 \vec{e}_r$. In the Boussinesq approximation, the Brunt-V{\"a}is{\"a}l{\"a} frequency, $N^2$, is
\begin{equation}
N^2 = (-\partial_r \rho_0) \frac{g r}{\rho_c}
\label{eq:n2}
\end{equation}
In all cases $N^2$ is proportional to $r^2$, so we define $N_0^2$, the stable stratification strength, such that $N^2 = N_0^2 r^2$.
In order to study the case where $N_0^2 = 0$, we set $\partial_r\rho_0=0$ and $\rho'=0$.
We set our background magnetic field to the Prendergast profile outlined in \citet[][]{Prendergast1956, Loi},
\begin{equation}
\begin{split}
\vec{B}_0 &= \left(B_r, B_\theta, B_\phi\right) \\
&= \left(\frac{2}{r^2}\Psi(r) \cos \theta, -\frac{1}{r}\Psi'(r)\sin\theta,-\frac{\lambda}{r}\Psi(r)\sin\theta\right).
\label{eq:pren}
\end{split}
\end{equation}
The full field is a non-singular solution to the balance,
\begin{equation}
\lambda \vec{B}_{0} + \nabla \times \vec{B}_{0} = - \beta r \sin(\theta) \vec{e}_{\!\phi} = \beta \, ( y \vec{e}_{\!x} - x \vec{e}_{\!y}) \label{eq:B0-balance},
\end{equation}
with $\vec{B}_{0} = \vec{0}$ at the outer boundary, $r=R=1$, and $\beta$ is the field amplitude.
In terms of the radial profile,
\begin{equation}
\Psi(r) = \frac{\beta}{\lambda^{2}}
\left(r^{2} - r \frac{j_{1}(\lambda r)}{j_{1}(\lambda)}\right),
\label{eq:prendend-sin/cos}
\end{equation}
where $j_{1}(\xi) = \sin(\xi)/\xi^{2} - \cos(\xi)/\xi$ is a spherical Bessel function.
Automatically, $\Psi(r=1) = 0$.
The whole field vanishes at the outer boundary provided $\Psi'(r=1)=0$, requiring $\tan(\lambda) = 3\lambda/(3-\lambda^{2})$.
We choose the lowest-energy solution $\lambda \approx 5.76346$.
We set the overall amplitude $\beta = \lambda^{2} j_{1}(\lambda) /(2j_{1}(\lambda)-2\lambda/3) \approx 1.31765$, in accordance with our normalization.
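These values can be checked numerically. The following sketch (independent of our simulation code) solves the transcendental condition for $\lambda$, evaluates $\beta$, and verifies that $\Psi$ and $\Psi'$ vanish at $r=1$:

```python
import numpy as np
from scipy.optimize import brentq

def j1(x):
    # spherical Bessel function j_1(x) = sin(x)/x^2 - cos(x)/x
    return np.sin(x)/x**2 - np.cos(x)/x

# lowest-energy root of tan(lambda) = 3*lambda/(3 - lambda^2), near 5.76
lam = brentq(lambda x: 3*x/(3 - x**2) - np.tan(x), 5.0, 6.0)

# amplitude beta chosen so that max|B_0| = 1
beta = lam**2 * j1(lam) / (2*j1(lam) - 2*lam/3)

def Psi(r):
    return beta/lam**2 * (r**2 - r*j1(lam*r)/j1(lam))

eps = 1e-6
dPsi = (Psi(1 + eps) - Psi(1 - eps)) / (2*eps)  # centered difference for Psi'(1)
print(lam, beta, Psi(1.0), dPsi)
```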
We do not include resistivity in the leading-order magnetic balance. Using the stellar parameters from \S\ref{irlstars}, the resistive timescale is $R^2/\eta =$\num{2.95e13} years, longer than the age of the universe.
We solve equations \ref{eq:incompressibility_density}-\ref{eq:coloumb gauge} using the Dedalus pseudospectral code \citep[][]{dedalus}, in spherical coordinates. For simulations using potential boundary conditions we run on commit 8639c65 of the d3 branch. For simulations using perfectly conducting boundary conditions we run on commit 4348a2f of the d3 branch.
All variables are represented by spin-weighted spherical harmonics in the angular directions and radially weighted Jacobi polynomials in the radial direction \citep[][]{spheres_a,spheres_b}. The system we are studying is axisymmetric, so we are able to resolve the behavior in the $\phi$ direction even with a small number of points. Axisymmetry also allows the different $m$ modes to be independent of each other and thus studied separately. We utilize a second-order semi-implicit SBDF timestepping scheme \citep[][]{SBDF2}.
We use two different types of magnetic boundary conditions: potential-field (POT) and perfectly conducting (PC). Potential boundary conditions
assume that the external magnetic field is current-free, and that magnetic field is continuous at the boundary of
the simulation.
We specify these conditions by decomposing $\vec{A}$ into spherical harmonic functions $\vec{A}_\ell$ where $\ell$ is the spherical harmonic degree and represent these conditions with $\partial_r\vec{A}_\ell + (\ell+1)\vec{A}_\ell / r = 0$.
Perfectly conducting boundary conditions
assume that the electric potential is constant at the boundary of the simulation
and that there is no normal magnetic field at the boundary. We impose these conditions with $\vec{e}_{\theta} \cdot \vec{A} = \vec{e}_{\phi} \cdot \vec{A} = \Phi = 0$.
In all cases we take the boundary to be stress free and impenetrable. We impose these conditions with
$\vec{e}_r \cdot \vec{u} = \vec{e}_\theta \cdot S \cdot \vec{e}_r = \vec{e}_\phi \cdot S \cdot \vec{e}_r = 0$
where $S = \frac{1}{2} (\nabla\vec{u} + (\nabla \vec{u})^T )$ is the rank-2 stress tensor. We also take the density perturbation to be zero at $R=1$.
Instabilities can be driven by an initial noise perturbation, forced at a certain frequency, or driven by a prescribed flow profile. Here, we focus on an instability driven by a prescribed flow profile.
To seed the instability, we specify a velocity profile with a specific azimuthal wavenumber $m$. We define $\vec{u}_0=\sin^m (\theta) \sin(m\phi)\, e^{-(r-r_0)^2/\Delta r^2} \vec{e}_r$, where $r_0=0.875$ and $\Delta r=0.02$. Unfortunately, $\vec{u}_0$ violates the incompressibility constraint, so we use divergence cleaning to find an incompressible initial condition associated with $\vec{u}_0$. We solve the equation
\begin{equation}
\nabla \cdot (\vec{u}_0 + \nabla P) = 0
\end{equation}
for the scalar field $P$ with boundary condition $\partial_r P(r=R)=0$. Then $\vec{u}_i=\vec{u}_0+\nabla P$
satisfies $\nabla \cdot \vec{u}_i=0$ and the radial component of $\vec{u}_i$ goes to zero at $r=R$. We then initialize our simulation's velocity with $\vec{u}_i$.
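The same projection is easy to illustrate in a simpler geometry. The sketch below performs divergence cleaning for a doubly periodic 2D Cartesian velocity field using FFTs; it is an analogue of, not the actual, spherical solve used in our simulations:

```python
import numpy as np

def divergence_clean(ux, uy, L=2*np.pi):
    """Return the divergence-free part of a doubly periodic 2D velocity
    field by solving lap(P) = -div(u) spectrally and adding grad(P)."""
    n = ux.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # the mean of P is arbitrary; avoid dividing by zero
    ux_h, uy_h = np.fft.fft2(ux), np.fft.fft2(uy)
    div_h = 1j*kx*ux_h + 1j*ky*uy_h
    P_h = div_h / k2  # from -k^2 P_h = -div_h
    return (np.real(np.fft.ifft2(ux_h + 1j*kx*P_h)),
            np.real(np.fft.ifft2(uy_h + 1j*ky*P_h)))
```

After cleaning, the spectral divergence of the returned field vanishes to machine precision, mirroring the role of the scalar field $P$ in the spherical problem.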
We visualize the magnetic field structures and velocity flows using the Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers (VAPOR) \citep[][]{VAPOR}. We interpolate spherical data onto a Cartesian coordinate grid to create all the visualizations, which show bi-directional flow visualizations of desired fields.
To create the magnetic field visualizations, an even grid of seeds is created with 6 points in the direction of the symmetry axis and 3 points in the other directions. This distribution of seeds is chosen to limit the number of field lines so that the central magnetic field structure is clear. In actuality, the magnetic field lines are continuous throughout the simulated region and form many concentric structures.
Each seed is integrated for 500 steps to find the magnetic field line. The color of the field lines indicates the strength of the magnitude of the field, with the darkest pink indicating the strongest magnetic field.
To create the velocity flow visualizations, we follow a similar procedure, but use 5 grid points in all directions and integrate for 200 steps. The color of the flow lines indicates the strength of the flow magnitude, with darkest green indicating the highest-magnitude flow.
\begin{comment}
\begin{align}
\qquad \qquad \qquad
\nabla \cdot u &= 0
\label{eq:incompressibility}
\\
\partial_t u + \nabla P -\nu \nabla^2 u &= - \nabla^2 A \times B_0 + J_0 \times \nabla \times A
\label{eq:momentum}
\\
\partial_t A - \nabla \Phi - \eta \nabla^2 A &= u \times B_0
\label{eq:mag}
\end{align}
\end{comment}
\section{Conclusions}
Convection is thought to generate large-scale core magnetic fields, which persist in stars after they evolve off the main sequence. The remnants of these fields may take the form of the Prendergast magnetic field, a combination of poloidal and toroidal field components which are expected to stabilize each other. Previous analytic and numerical stability calculations have suggested this magnetic field is stable. Our numerical calculations show a linear instability of this magnetic field on a timescale longer than previously studied.
Section \ref{instability} presented the underlying instability. We confirmed that the instability was physical by showing a consistent growth rate across increasing spatial resolution and decreasing temporal resolution.
In section \ref{diff} we discussed the effects of magnetic resistivity on the instability. We found that the growth rate decreased as resistivity decreased, indicating that our system exhibits a resistive instability.
Section \ref{stablestrat} discussed the addition of stable stratification into the system, and found that the growth rate became constant at high strengths of stable stratification. We visualized the unstable eigenmodes and discussed the similarities in the magnetic eigenmodes despite a difference in velocity eigenmodes.
Section \ref{mandBC} discussed the effect of changing azimuthal wavenumber and boundary conditions on the instability. We found that the $m=0$ and $m=1$ modes were unstable, while higher order modes were stable. We found the growth rates were larger when potential boundary conditions were used than when perfectly conducting boundary conditions were used.
Section \ref{r0} discussed the tearing mode instability as a possible mechanism for the Prendergast field instability.
The eigenmodes of the tearing mode instability exhibit continuous magnetic field perturbations normal to the boundary layer and guide field, and a discontinuous normal derivative of the normal field across the boundary layer.
We found that the eigenmodes of our $m=1$ simulations exhibit this behavior, while the eigenmodes of our $m=0$ simulations can not exhibit this behavior due to symmetry constraints.
Our $m=1$ simulations with different boundary conditions have similar boundary layer widths at $r=0$, indicating that the boundary layer at $r=0$ is not responsible for the difference in growth rates seen in simulations with different boundary conditions.
While the tearing mode instability may contribute to the instability in the Prendergast field for $m=1$, it is not a necessary component for instability of the Prendergast magnetic field.
In section \ref{irlstars} we discussed implications of this instability for red giant stars. We calculated the e-folding timescale of the instability for stars and found that it is much shorter than the stellar evolution timescale, so the Prendergast magnetic field is a poor model for a stable magnetic field in stellar interiors.
This calls into question the validity of past results which used the Prendergast field as a stable magnetic field.
However, since the instability timescale is long compared to the Alfven timescale, it is likely that results obtained on a short timescale will see no effect of this instability.
Future work should examine the nonlinear saturation of the instability. It is possible that in a nonlinear system this instability will saturate at low amplitude and thus have little effect on the system as a whole. Additionally, future work should aim to find a stable magnetic field that can be used to study magnetic waves in linear systems. Similar tests to the above work could be used to verify the stability of such a field.
\section{Introduction}
Stars have magnetic fields which influence dynamical processes relevant to stellar evolution such as stellar winds \citep[][]{stellarwinds}, chemical mixing \citep[][]{chmicakmixing}, heat transport \citep[][]{heattransport}, convection \citep[][]{convection}, and stellar pulsations \citep[][]{astroseismology}.
Surface magnetic fields have been observed for stars in various stages of evolution \citep[][]{obs_bfield}.
The presence of surface magnetic fields could be explained by two different hypotheses: the dynamo hypothesis and the fossil hypothesis. The dynamo hypothesis proposes that magnetic fields are generated within stellar convection zones by convective fluid motions \citep[][]{dynamofields1}.
The dynamo hypothesis explains surface observations in main-sequence solar type stars \citep[][]{solardynamo}, which have convective envelopes.
However, the dynamo hypothesis does not explain observations of surface magnetism in massive main-sequence stars \citep[][]{massive_no_dynamo}.
These stars have convective cores, and there is no established mechanism to transport the dynamo field through the radiative envelope to the surface.
The fossil hypothesis proposes that stellar magnetic fields are remnants from earlier stages of stellar evolution \citep[][]{fossilfield1}. This hypothesis explains magnetism in massive stars \citep[][]{massive_no_dynamo}, although it requires a stable field configuration which could survive over the lifetime of the star.
The fossil hypothesis can also explain core magnetic fields in post-main-sequence stars such as red giant branch (RGB) stars. Asteroseismic observations indicate strong core magnetic fields in RGB stars \citep[][]{astroseismology, stellocantiello}. These stars have a stably stratified, radiative core which was convective when the star was on the main sequence.
This convective history provides a dynamo mechanism which could have generated magnetic field on the main sequence.
In order for these fields to be observable on the RGB, the field configuration must be stable.
This makes RGB stars a good model for investigating stable magnetic field configurations.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{B0_vis.png}
\caption{Magnetic field line visualization of the Prendergast magnetic field as given by equations \ref{eq:pren}-\ref{eq:prendend-sin/cos}. The color of the field lines indicates the strength of the magnitude of the field, with darker color indicating a stronger magnitude. The field consists of a combination of twisting toroidal and poloidal components with its axis of symmetry aligned with the z-axis. To aid in visualization we select 2 helical structures, but the field is continuous and has many nesting structures.}
\label{fig:B0}
\end{figure}
While purely toroidal \citep[][]{toroidal_stablility} and purely poloidal \citep[][]{poloidalstability} fields are unstable, a combination of the two may be stable. \citet{Prendergast1956} developed an equilibrium state with an axisymmetric mix of poloidal and toroidal magnetic field that we will call a ``Prendergast field" configuration. Figure \ref{fig:B0} shows a 3D visualization of the Prendergast field. The field consists of a combination of twisting toroidal and poloidal components and is axisymmetric about the z-axis. The field strength decreases with radius.
Previous works have investigated the stability of the Prendergast field.
\citet[][]{Prendergast2} showed via the variational principle that the $m=0$ mode is unstable once the magnetic energy exceeds 0.4 times the gravitational binding energy of the star. He notes that this is likely an overestimate of the allowed magnetic energy for a star. This theoretical criterion is sufficient for instability but not necessary, so the result gives no indication of the stability of the model below this limit.
Higher order modes were not studied because of the difficulty of the calculations.
\citet[][]{cowling1960} used a similar approach, but focused on displacements near the outer boundary in a neutrally stratified star. He found the instability condition was met for all strengths of magnetic field, and concluded that the Prendergast field is always unstable in neutrally stratified stars.
More recently, \citet[][]{Duez2010} followed the nonlinear evolution of the Prendergast field over 10 Alfven times. They did not see any evidence of instability in this time frame.
\citet[][]{Braithwaite+nordlund2005} found for a stably stratified star that a random initial magnetic field configuration tends to decay into a field with mixed toroidal and poloidal components. They were able to follow the evolution for a few hundred Alfven times and found the magnetic energy to be roughly constant after the initial decay. Assuming this mixed toroidal and poloidal field was a Prendergast field, the authors then concluded that the Prendergast field is dynamically stable.
In contrast, \citet[][]{mitchell2014} looked at random initial magnetic field configurations for neutrally stratified stars, and followed the evolution for a hundred Alfven times. They found that the neutrally stratified star never reached a stable equilibrium state in this time frame.
This is evidence for an instability of the Prendergast field, consistent with the analytic work of \citet[][]{cowling1960}.
The Prendergast magnetic field is widely used as a stable field configuration.
Prendergast fields have been used to investigate the effects of a core magnetic field on stellar oscillations in red giant stars \citep[][]{loi2017}, the effects of strong magnetic fields on the propagation of gravity waves in stellar interiors \citep[][]{loi2018}, the effects of magnetic fields on the frequency of gravity modes in rotating stars \citep[][]{prat2019}, and the effects of core magnetic field on the observable mixed-mode frequencies of stars \citep[][]{bugnet2021}.
However, if the Prendergast field is unstable, then it may not be a good model for stellar magnetic fields.
It is unclear if these previous results would still hold for a different magnetic field configuration.
Here we present a linear stability analysis of the Prendergast magnetic field configuration. We find that the Prendergast field exhibits linear instability, with kinetic energy growing exponentially regardless of the value of the diffusivity, the strength of stable stratification, or the boundary conditions.
In Section \ref{numdeets}, we describe the setup of our simulations. Section \ref{instability} presents the underlying instability.
Section \ref{diff} discusses its dependence on the diffusivity, section \ref{stablestrat} discusses the addition of stable stratification into the system, and section \ref{mandBC} discusses the effect of changing azimuthal wavenumber and boundary conditions. Section \ref{r0} discusses possible mechanisms for the instability. In section \ref{irlstars} we discuss implications of this instability for red giant stars.
\section{Results}
\subsection{Instability of the Prendergast Field}\label{instability}
We time-evolve a series of three dimensional, spherical simulations of a radiative region with a background Prendergast magnetic field. Our fiducial case has a diffusivity of $\nu =$\num{1.3e-5}, no stable stratification ($N_0^2=0$), flow azimuthal wavenumber $m=1$, and potential boundary conditions. We run the simulation for 1600 Alfv\'en times and calculate the total kinetic energy over time for the fiducial case. The total kinetic energy is given by $KE = 0.5 \left\langle u^2\right\rangle$, with angle brackets denoting a volume average.
We plot kinetic energy as a function of time in figure \ref{fig:ke} with the simulation data shown in purple. We fit the exponential growth using a least squares linear fit given by $\ln(KE)=\gamma t$ where $\gamma$ is the growth rate.
The best fit is shown in orange.
After the first 115 time units, once the instability has grown larger than the initial decaying state, kinetic energy grows steadily.
We want to confirm that this instability is physical and not numerical. Our fiducial case has a growth rate of $\gamma=0.0431$, and we expect the growth rate to be independent of resolution and timestep size if the instability is physical. We first increase the spatial resolution to $N_r=N_{\theta}=255$, $N_{\phi}=4$ and find a growth rate of $\gamma=0.0433$, showing that the instability persists at higher resolution. We then decrease the timestep size to $10^{-4}$ and find a growth rate of $\gamma=0.0433$, showing that the instability persists at smaller timestep. Because the instability is robust even as we increase spatial resolution and decrease the timestep size (see table \ref{tab:sim_params_res}), we are confident that this instability is physical.
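The fitting procedure can be sketched as follows. The time series below is a synthetic stand-in for the simulation's kinetic-energy data: the transient decay rate, amplitudes, and fitting window are illustrative choices, not values from the simulation.

```python
import numpy as np

# Synthetic stand-in for the kinetic-energy time series: a decaying
# transient plus exponential growth at the fiducial rate 0.0431.
# All amplitudes and the decay rate are illustrative choices.
gamma_true = 0.0431
t = np.linspace(0, 1600, 3200)
ke = 1e-8 * np.exp(-0.05 * t) + 1e-12 * np.exp(gamma_true * t)

# Least-squares linear fit of ln(KE) over the window where the
# unstable mode dominates the initial decaying state.
window = t > 300
gamma_fit, intercept = np.polyfit(t[window], np.log(ke[window]), 1)
print(f"fitted growth rate: {gamma_fit:.4f}")
```

The fitted slope recovers the injected growth rate because the decaying transient is negligible inside the chosen window.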
We visualize the magnetic field of the unstable eigenmode in the leftmost column of figure \ref{fig:eigs}.
We see a configuration very similar in shape to our background Prendergast field, with a combination of twisting toroidal and poloidal components, but the axis of symmetry aligns with the x-axis rather than the z-axis. While the background field is $m=0$ and thus must be symmetric about the z-axis, the perturbation has the same azimuthal wavenumber as the initial velocity flow (here $m=1$) and so cannot be symmetric about the z-axis; this causes the flip in orientation of the unstable eigenmode.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ke.pdf}
\caption{Kinetic energy as a function of time for the fiducial simulation. Simulation data is shown in purple and the exponential fit $KE = e^{0.0431t}$ is shown in orange.}
\label{fig:ke}
\end{figure}
\begin{table}
\centering
\caption{Simulation parameters. Here $\eta$ is the magnetic resistivity, and $\eta = \nu = \kappa$. $N_r$ and $N_\phi$ are the $r$ and $\phi$ resolutions, respectively. In all cases $N_r = N_\theta$. $\Delta t$ is the timestep size. $m$ is the azimuthal wavenumber. $N_0^2$ is the stable stratification strength. $\gamma$ is the growth rate of the instability. All simulations in this table utilize POT boundary conditions.}
\label{tab:sim_params_res}
\begin{tabular}{ccccccc}
\hline
Name & $\eta$ & $(N_r, N_\phi)$ & $\Delta t$ & $m$ & $N_0^2$ & $\gamma$\\
\hline
Fid &\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0 & 0.0431\\
R255 &\num{1.3e-5} & (255,4) & 0.0005 & 1 & 0 & 0.0433\\
dt &\num{1.3e-5} & (127,4) & 0.0001 & 1 & 0 & 0.0433\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figures/eigs.png}
\caption{Columns (from left to right): Magnetic field eigenfunctions (pink) for the $N_0^2=0$ case. Velocity flow eigenfunctions (green) for the $N_0^2=0$ case. Magnetic field eigenfunctions (pink) for the $N_0^2=1.69$ case. Velocity flow eigenfunctions (green) for the $N_0^2=1.69$ case.}
\label{fig:eigs}
\end{figure*}
\subsection{Variation with Diffusivity}\label{diff}
Our fiducial simulation is run at a resistivity much higher than we would expect in a physical star, as it is infeasible to resolve a simulation with a realistic resistivity. However, we want to determine if this instability could still be present in physical systems by determining the effect of changing resistivity on the growth rate of the system. We ran four simulations with $\eta = \nu$ ranging from \num{1.3e-3} to \num{1.3e-6}. In table \ref{tab:sim_params_diff} we show the growth rates for each simulation and note that the growth slows as resistivity decreases, with growth rates ranging from 0.212 at the highest resistivity to 0.0209 at the lowest. The growth rate follows a power law in resistivity (see figure \ref{fig:nugamma}), decreasing as resistivity approaches zero.
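As a concrete sketch, fitting the four tabulated growth rates with a single power law $\gamma = a\,\eta^b$ in log-log space gives an index of roughly $1/3$. The code below uses only the values from table \ref{tab:sim_params_diff}; it is not the fit shown in the figure, which may differ in detail.

```python
import numpy as np

# Growth rates versus resistivity for the POT, m=1, N_0^2=0 runs
# (values copied from the simulation parameter table).
eta = np.array([1.3e-3, 1.3e-4, 1.3e-5, 1.3e-6])
gamma = np.array([0.212, 0.0943, 0.0431, 0.0209])

# Least-squares power-law fit gamma = a * eta**b in log-log space.
b, log_a = np.polyfit(np.log(eta), np.log(gamma), 1)
a = np.exp(log_a)
print(f"gamma ~ {a:.2f} * eta^{b:.2f}")
```

The fitted index $b \approx 0.34$ is notably shallower than the classic tearing-mode scaling $\gamma \propto \eta^{3/5}$ discussed later.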
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{N21gammanu.pdf}
\caption{ Growth rate $\gamma$ as a function of resistivity $\eta$ for potential boundary conditions with $N_0^2=1.69$ and $m=1$ ($N^2$), potential boundary conditions with $N_0^2=0$ and $m=0$ at low (lm0) and high (hm0) resistivity, potential boundary conditions with $N_0^2=0$ and $m=1$ (POT), and perfectly conducting boundary conditions with $N_0^2=0$ and $m=1$ (PC). Each group of simulations is fit with a power law plotted as solid lines and given by the equation closest to each line. }
\label{fig:nugamma}
\end{figure}
Although resistivity often diffusively stabilizes perturbations, it can cause instability in certain systems. This is because the addition of resistive terms means plasma is no longer constrained to flow along the magnetic field lines. Resistive instabilities, such as the tearing mode instability \citep[][]{tearing}, draw on magnetic energy generated by plasma currents as the magnetic field tries to relax into a lower energy state. We discuss the possibility of a tearing mode like instability of the Prendergast magnetic field in section \ref{r0}.
\begin{table}
\centering
\caption{See table \ref{tab:sim_params_res} for column descriptions. All simulations in this table use POT boundary conditions.}
\label{tab:sim_params_diff}
\begin{tabular}{ccccccc}
\hline
Name & $\eta$ & $(N_r, N_\phi)$ & $\Delta t$ & $m$ & $N_0^2$ & $\gamma$\\
\hline
D3 & \num{1.3e-3} & (127,4) & 0.0005 & 1 & 0 & 0.212\\
D4 & \num{1.3e-4} & (127,4) & 0.0005 & 1 & 0 & 0.0943\\
Fid &\num{1.3e-5}& (127,4) & 0.0005 & 1 & 0 & 0.0431\\
D6 & \num{1.3e-6}& (511,4) & 0.0001 & 1 & 0 & 0.0209\\
\hline
\end{tabular}
\end{table}
\subsection{Introduction of Stable Stratification}\label{stablestrat}
Up until now our simulations have not included stable stratification, but red giant stars are stably stratified in the core region where we expect magnetic fields to live.
Stable stratification damps out radial motions, and is thus expected to have a stabilizing effect.
\begin{table}
\centering
\caption{See table \ref{tab:sim_params_res} for column descriptions. All simulations in this table use POT boundary conditions.}
\label{tab:sim_params_n2}
\begin{tabular}{ccccccc}
\hline
Name & $\eta$ & $(N_r, N_\phi)$ & $\Delta t$ & $m$ & $N_0^2$ & $\gamma$\\
\hline
Fid &\num{1.3e-5}& (127,4) & 0.0005 & 1 & 0 & 0.0432\\
N0.00169& \num{1.3e-5}& (127,4) & 0.0005 & 1 & 0.00169 & 0.0420\\
N0.0169&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.0169 & 0.0342\\
N0.0338&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.0338 & 0.0277\\
N0.0845&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.0845& 0.0166\\
N0.169&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.169 & 0.0165\\
N0.338&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.338 & 0.0164\\
N0.676&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 0.676 & 0.0164\\
N1.014&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 1.014 & 0.0164\\
N1.69&\num{1.3e-5}& (127,4) & 0.0005 & 1 & 1.69 & 0.0163\\
N16.9&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 16.9 & 0.0163\\
N169&\num{1.3e-5} & (127,4) & 0.0005 & 1 & 169 & 0.0163\\
\hline
\end{tabular}
\end{table}
In order to determine the effect of stable stratification strength on the growth rate of the kinetic energy, we change the simulation parameters such that $N_0^2$ varies from $0$ to $169$ (see table \ref{tab:sim_params_n2}). $N_0^2$ is a measure of the strength of stable stratification. The growth rates range from 0.0432 for $N_0^2=0$ to 0.0163 for $N_0^2=169$.
As described in section 3.1, we confirm that the instability is not altered by changing the resolution and timestep of the simulation. We note that the magnetic field of the unstable mode has roughly the same shape and orientation as seen in the case without stable stratification (see Figure \ref{fig:eigs}) indicating that the addition of stable stratification has no effect on the overall shape of the magnetic field of the unstable mode.
We plot the growth rate normalized to the unstratified case ($N_0^2=0$) as a function of $N_0^2$ in figure \ref{fig:n2_gamma}.
We find that for low $N_0^2$, shown in orange, $\gamma$ changes as $N_0^2$ changes. For high $N_0^2$, shown in purple, $\gamma$ is roughly constant as $N_0^2$ changes. Utilizing a typical value of $N=10^3 \mu$Hz from \citet[][]{astroseismology} and the stellar parameters given in section \ref{irlstars}, we find that a typical value of $N_0^2$ for stars would be \num{3e11} in our simulations. While this is much higher than our simulated values, the constant growth rate at high $N_0^2$ indicates that the behavior at stellar values will be similar to the behavior at our highest simulated $N_0^2$.
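The quoted stellar value of $N_0^2$ can be reproduced with a short back-of-the-envelope calculation, assuming time is non-dimensionalized by the Alfv\'en crossing time $R/v_A$ (with $R$ and $v_A$ as in section \ref{irlstars}) and taking $N = 10^3\,\mu$Hz as a cyclic frequency; both assumptions are ours, made for illustration.

```python
import math

# Stellar parameters quoted in the text (cgs units).
R = 0.66 * 6.957e10     # radius of the radiative region [cm]
B = 1e5                 # magnetic field strength [G]
rho = 1e5               # density [g/cm^3]
N = 1e3 * 1e-6          # Brunt-Vaisala frequency, 10^3 microHz [Hz]

v_A = B / math.sqrt(4 * math.pi * rho)   # Alfven velocity [cm/s]
t_A = R / v_A                            # Alfven crossing time [s]

# Dimensionless stratification strength, N_0^2 = (N * t_A)^2.
N0_sq = (N * t_A) ** 2
print(f"N_0^2 ~ {N0_sq:.1e}")
```

Under these assumptions the result lands near the \num{3e11} cited in the text.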
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/gammanorm.pdf}
\caption{Growth rates in simulations with stable stratification normalized by the growth rate in the unstratified ($N_0^2 = 0$) case are plotted as a function of stable stratification strength, $N_0^2$. Orange dots indicate simulations at low $N_0^2$ where growth rate changes significantly as $N_0^2$ increases. Purple dots indicate simulations at high $N_0^2$ where growth rate is roughly constant as $N_0^2$ increases. The dashed orange line indicates unity.}
\label{fig:n2_gamma}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/n2comb.pdf}
\caption{The top panel shows $\left \langle u_r^2 \right \rangle / \left \langle u^2\right \rangle$ as a function of $N_0^2$. The horizontal line shows the fraction of kinetic energy contained in the radial component for $N_0^2=0$. This fraction decreases over many orders of magnitude as $N_0^2$ increases.
The bottom panel shows $\left \langle B_r^2\right \rangle/ \left \langle B^2\right \rangle$ as a function of $N_0^2$. The horizontal line shows the fraction of magnetic energy contained in the radial component for $N_0^2=0$. This fraction stays fairly constant as $N_0^2$ increases.}
\label{fig:N2_ur_br}
\end{figure}
Although we originally assumed that there would be some cutoff point where the stable stratification overpowered the magnetic instability, our results seem to indicate that the presence of a Prendergast magnetic field will cause an instability in a stably stratified star, regardless of the strength of the stratification relative to the strength of the magnetic field. There appear to be two separate regions of instability: the low $N_0^2$ region over which $\gamma$ changes with $N_0^2$ and the high $N_0^2$ region over which $\gamma$ is roughly constant. Figure \ref{fig:eigs} shows the magnetic field lines (pink) and velocity flow lines (green) for our low $N_0^2$ region (left) and high $N_0^2$ region (right). We find that the magnetic field lines are almost identical in the low and high $N_0^2$ regimes while the velocity flow lines are visibly different. The low $N_0^2$ regime has a radially dominated velocity flow, with fast helical flows through the central region. The high $N_0^2$ regime has almost no radial flows. This is seen in the eigenmodes which have no flow through the center of the star, only concentric circular flows.
We quantify the difference in radial velocities in the top panel of figure \ref{fig:N2_ur_br} which shows the ratio of radial kinetic energy to total kinetic energy as a function of $N_0^2$. We see that this ratio decreases by 6 orders of magnitude as we go from low $N_0^2$ to high $N_0^2$.
In contrast, the bottom panel of figure \ref{fig:N2_ur_br} shows the ratio of radial magnetic energy to total magnetic energy as a function of $N_0^2$. We see that this ratio changes by less than an order of magnitude over the same range of $N_0^2$.
The ratio $\langle B_r^2\rangle/\langle B^2 \rangle$ and the growth rate $\gamma$ have similar variation with $N_0^2$ (figure \ref{fig:n2_gamma}). This leads to a question: how is the radial magnetic field maintained in simulations with very small radial velocities?
To answer this question we want to find an equation for the radial magnetic energy. We start with the radial component of equation \ref{eq:mag_density} and then multiply by the radial component of the magnetic field. We then take the volume average and find
\begin{equation}
\partial_t \left\langle B_r^2\right\rangle - \eta \nabla^2 \left\langle B_r^2\right\rangle = \left\langle \vec{e}_r \cdot \nabla \times \left( \vec{u} \times \vec{B}_0 \right) B_r\right\rangle
\label{eq:magenergy}
\end{equation}
which, assuming axisymmetry, becomes
\begin{equation}
\partial_t \left\langle B_r^2\right\rangle - \eta \nabla^2 \left\langle B_r^2\right\rangle = \left\langle r^{-1} \left( u_r B_{0,\theta} - u_\theta B_{0,r} \right) \partial_\theta B_r \right\rangle
\end{equation}
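For reference, this follows from the standard spherical-coordinates expression for the radial component of a curl; under axisymmetry ($\partial_\phi = 0$), and using $(\vec{u}\times\vec{B}_0)_\phi = u_r B_{0,\theta} - u_\theta B_{0,r}$, the intermediate step reads (our sketch, not reproduced from an appendix):

```latex
\vec{e}_r \cdot \nabla \times \left( \vec{u} \times \vec{B}_0 \right)
  = \frac{1}{r\sin\theta}\,
    \partial_\theta\!\left[ \sin\theta
    \left( u_r B_{0,\theta} - u_\theta B_{0,r} \right) \right]
```

Integrating by parts in $\theta$ inside the volume average (the boundary terms vanish at $\theta = 0, \pi$) then transfers the $\theta$-derivative onto $B_r$.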
We see that the radial component of the magnetic energy depends on both $u_r$ and $u_\theta$. For high $N_0^2$, $u_r \rightarrow 0$, but the $u_\theta$ term does not approach zero as $N_0^2$ changes. This means that almost all of the radial magnetic energy in our high $N_0^2$ cases must come from the $\theta$ component of the velocity.
To quantify this, we take the right hand side of equation \ref{eq:magenergy} normalized with respect to magnetic energy. We find that this quantity changes from 0.008 at $N_0^2\sim 0$ to 0.002 at large $N_0^2$. This indicates that the radial magnetic energy is held roughly constant as we increase $N_0^2$, despite the difference in velocity flow lines with changing $N_0^2$.
Since the growth rate, radial magnetic energy, and right hand side of equation \ref{eq:magenergy} don't change significantly over a large range of $N_0^2$, we predict that this instability will persist across all strengths of stable stratification. While we are not able to simulate out to the high $N_0^2$ seen in stars, we predict that the behavior we see at the largest simulated value will be similar to what we see in stars, because all of these values become roughly constant across a large range of $N_0^2$.
We know from \citet[][]{Prendergast2} that the Prendergast field will be unstable if the magnetic energy exceeds 0.4 times the gravitational binding energy of the star, although Prendergast notes that this is a physically irrelevant limit. In the Boussinesq approximation the gravitational binding energy is infinite and so the magnetic energy is effectively zero times the gravitational binding energy.
That we see an instability even with magnetic energy at zero times the gravitational binding energy indicates that Prendergast's upper limit overestimates the stability of the model, and that the Prendergast field will be unstable at all strengths of magnetic energy for both uniform and stably stratified stars.
\subsection{Effect of Azimuthal Wavenumber and Boundary Conditions}\label{mandBC}
In our simulations, we are able to select for a particular azimuthal wavenumber $m$ of the initial velocity perturbations. However, in a star there are perturbations which take on a variety of $m$ values. Here we investigate the effect of the azimuthal wavenumber $m$ on the growth rate of the kinetic energy. We select for different $m$ by initializing with $\vec{u}_0 = \sin^m (\theta) \sin(m\phi) e^{-(r-r_0)^2/\Delta r^2} \vec{e}_r$ and applying divergence cleaning to find an incompressible initial velocity. For these simulations, every 10,000 timesteps we zero the perturbations at all values of $m$ except for the one being studied. We do this because even growth of an unstable mode seeded at machine precision can be enough to overtake the kinetic energy of the system during the full time-integration of our simulation. We find that the $m=0$ and $m=1$ modes are unstable, with positive growth rates, while for all $m>1$, $\gamma$ is negative, i.e. the kinetic energy decreases over time (see table \ref{tab:sim_params_m}).
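The periodic zeroing of unwanted azimuthal modes can be sketched as a Fourier filter along $\phi$. This is an illustrative stand-in assuming a uniformly gridded $\phi$ axis; the implementation in the actual simulation code may differ.

```python
import numpy as np

def keep_azimuthal_mode(u, m):
    """Zero every azimuthal Fourier mode of u except wavenumber m.

    u is a real array whose last axis is a uniformly gridded phi
    direction. This is an illustrative filter, not the solver's
    actual spectral machinery.
    """
    coeffs = np.fft.rfft(u, axis=-1)
    keep = np.zeros(coeffs.shape[-1], dtype=bool)
    if m < coeffs.shape[-1]:
        keep[m] = True
    coeffs[..., ~keep] = 0.0
    return np.fft.irfft(coeffs, n=u.shape[-1], axis=-1)

# Example: a field with m=1 and m=2 content retains only its m=1 part.
phi = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(phi) + 0.5 * np.sin(2 * phi)
u_m1 = keep_azimuthal_mode(u, 1)
```

Applying the filter every so many timesteps prevents modes seeded at machine precision from overtaking the mode under study.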
\begin{table}
\centering
\caption{See table \ref{tab:sim_params_res} for column descriptions. All simulations in this table use POT boundary conditions.}
\label{tab:sim_params_m}
\begin{tabular}{ccccccc}
\hline
Name & $\eta$ & $(N_r, N_\phi)$ & $\Delta t$ & $m$ & $N_0^2$ & $\gamma$\\
\hline
m0 &\num{1.3e-5}& (127,4) & 0.0005 & 0 & 0 & 0.0421\\
Fid & \num{1.3e-5}& (127,4) & 0.0005 & 1 & 0 & 0.0412\\
m2 &\num{1.3e-5} & (127,8) & 0.0005 & 2 & 0 & -0.000650\\
m3 &\num{1.3e-5} & (127,16) & 0.0005 & 3 & 0 & -0.00915\\
\hline
\end{tabular}
\end{table}
We visualize the unstable $m=0$ mode in Figure \ref{fig:m0_vis} using VAPOR. We see that the magnitude of the magnetic field is again increasing towards the center, but the unstable mode is split across the x-y plane, showing two distinct but identical halves.
In figure \ref{fig:nugamma}, we plot the growth rate as a function of resistivity for the $m=0$ mode and find that the $m=0$ mode follows a broken power law. At low resistivity, the slopes are the same for the $m=1$ and $m=0$ modes, whereas the $m=1$ mode has a steeper slope than the $m=0$ mode at high resistivity. We discuss possible explanations for this difference in slope in section \ref{r0}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{m0_vis.png}
\caption{The unstable mode for azimuthal wavenumber $m=0$. Darker color indicates a stronger magnetic field magnitude.}
\label{fig:m0_vis}
\end{figure}
Next we determine the effects of our boundary condition choice on the growth rate of the instability.
We find that the instability is present for both potential and perfectly conducting boundary conditions, although the growth rate is greater for potential than perfectly conducting across all diffusivities (see figure \ref{fig:nugamma}). The power law index differs for the different boundary conditions.
It is known that boundary condition choice can have a significant effect on the growth rate of other magnetic instabilities, like magnetic dynamos \citep[][]{varela, Thelen}. \citet[][]{fontana} found that perfectly conducting boundary conditions had a larger growth rate than potential boundary conditions for magnetic dynamos in direct opposition to our findings here.
However, a major difference between those previous studies and our own is that the Prendergast magnetic field is unstable to a resistive instability.
\begin{table}
\centering
\caption{See table \ref{tab:sim_params_res} for column descriptions. All simulations in this table use PC boundary conditions.}
\label{tab:sim_params_PCBC}
\begin{tabular}{ccccccc}
\hline
Name & $\eta$ & $(N_r, N_\phi)$ & $\Delta t$ & $m$ & $N_0^2$ & $\gamma$\\
\hline
PC-D3 &\num{1.3e-3}& (128,4) & 0.0005 & 1 & 0 & 0.143\\
PC-D4 & \num{1.3e-4}& (128,4) & 0.0005 & 1 & 0 & 0.0617\\
PC-D5 &\num{1.3e-5} & (128,4) & 0.0005 & 1 & 0 & 0.0218\\
PC-D6 &\num{1.3e-6} & (512,4) & 0.0005 & 1 & 0 & 0.00801\\
\hline
\end{tabular}
\end{table}
\subsection{Interpretation of the Instability}\label{r0}
Here we discuss the tearing mode instability as a potential mechanism for the instability seen in the Prendergast magnetic field.
The tearing mode instability is a resistive instability that occurs at boundary layers, places where the magnetic field is zero \citep[][]{tearing}.
Tearing mode instabilities can also occur when a guide field is present at boundary layers \citep[][]{drake1, drake2}. A guide field is a nonzero magnetic field normal to the boundary layer.
The Prendergast magnetic field is zero at $r=0$ in all components except the z component.
While boundary layers are typically two dimensional, the Prendergast field has a zero dimensional ``boundary layer'' with a guide field at $r=0$, since the magnetic field goes to zero in all but one component.
The eigenmodes of the tearing mode instability have two distinct features. The magnetic field perturbations normal to the boundary layer are continuous across the boundary layer, while the normal derivative of the normal field is discontinuous across the boundary layer in the limit that $\eta\rightarrow 0$.
If the instability of the Prendergast magnetic field also has sharp gradients in the normal magnetic field, that would suggest it is a tearing mode-like instability.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/diffbl.pdf}
\caption{Top Panel: $B_x$ as a function of $x$ along the $x$-axis for $m=1$ simulations with varying resistivities, where D3 corresponds to a resistivity of $\num{1.3e-3}$ and so on. In all cases the simulations have potential boundary conditions. The vertical dashed lines denote the region of interest for the bottom panel. Bottom Panel: As above, but zoomed into the boundary layer around $r=0$.}
\label{fig:diffbl}
\end{figure}
We see in figure \ref{fig:eigs} that there is a line of magnetic field going straight through $r=0$ for the $m=1$ mode.
While there is no unique direction normal to a point, this magnetic field line aligns with the $x$-axis; in this case, the $x$ direction is similar to the normal direction for a two dimensional current sheet.
We check for continuity of the magnetic field by plotting $B_x$ of the eigenmode along the $x$ axis (see the top panel of figure \ref{fig:diffbl}). The magnetic field is symmetric about the $x$-axis and continuous.
In the limit of small resistivity, the gradient of the magnetic perturbation should become discontinuous for a tearing-mode-like instability.
In the bottom panel of figure \ref{fig:diffbl}, we plot the magnetic perturbations near $r=0$, in the region denoted by the vertical lines.
At the boundary layer, $B_x$ transitions from negative to positive curvature as the resistivity decreases. With the exception of $\eta=\num{1.3e-5}$ (where the curvature is close to zero) the boundary layer becomes increasingly sharp as resistivity decreases.
This suggests that the derivative of the magnetic perturbation would become discontinuous in the limit of $\eta\rightarrow0$.
This trend toward a less smooth function suggests that we are seeing a tearing mode like instability of the Prendergast magnetic field. The growth rate of a tearing mode instability scales as $\gamma \propto \eta^{3/5}$, which is a steeper power law than we see in our simulations (see figure \ref{fig:nugamma}), indicating that while the tearing mode like behavior at $r=0$ contributes to the instability, it may not be the only factor.
We note that the growth rates differ when the boundary conditions are changed. We find the boundary layer widths to be similar for the different boundary conditions.
While the concavity changes near $\eta \approx \num{1.3e-5}$ for potential boundary conditions,
it is always negative for perfectly conducting boundary conditions. As the boundary layer widths are similar, it appears that the boundary layer behavior does not account for the difference in growth rates between the two boundary conditions.
In contrast to our simulations with $m=1$, we see in figure \ref{fig:m0_vis} that simulations with $m=0$ have different behavior at $r=0$. While we still have a guide field in the z-direction, the magnetic field is exactly zero in all components at $r=0$.
For a tearing mode instability to occur, the perturbation field must have a nonzero component normal to the guide field. The $m=0$ mode requires axisymmetry, meaning that a non-zero component of magnetic field at $r=0$ can only lie along the axis of symmetry, in this case the z-axis. The requirements of axisymmetry are incompatible with the requirements for the tearing mode instability, indicating that $m=0$ simulations do not have tearing mode like behavior at the origin.
Figure \ref{fig:nugamma} shows that the $m=0$ and $m=1$ modes have different behavior as a function of resistivity. At high resistivity, the $m=1$ modes show a steeper power law than the $m=0$ modes while at low resistivity the power laws have the same slope. This suggests the tearing mode like behavior affects the growth rate at high resistivity but not at low resistivity.
While we see instability in both the $m=0$ and $m=1$ cases, the tearing mode like behavior is only present in the $m=1$ simulations. This indicates that a tearing mode like mechanism can contribute to the instability of the Prendergast magnetic field, but is not a requirement for instability.
\subsection{Extrapolating to astrophysical parameters}\label{irlstars}
Next we determine if this instability is relevant to physical stars. Realistic stellar diffusivities are too small to directly simulate. However, by fitting a trend between growth rate and diffusivity, we can extrapolate this trend down to the diffusivities seen in physical systems and determine the relevance of the instability.
Figure \ref{fig:nugamma} shows the instability growth rates decrease as power-laws in the diffusivity, the slope of which changes as boundary conditions or Brunt-V{\"a}is{\"a}l{\"a} frequency change. We look at two different stable stratification strengths, noting that growth rate is roughly constant at high $N_0^2$ and thus the power-law scaling should also be constant over high $N_0^2$. The PC and POT lines are at $N_0^2=0$ and the $N^2$ line is at $N_0^2=1.69$. Power laws are given in terms of our non-dimensional variables, but in order to extrapolate to physical stars we dimensionalize the variables.
In each case we fit the data to the dimensionalized power law
\begin{equation}
\gamma = \alpha \left(\frac{\eta}{v_AR}\right)^{\beta} \frac{v_A}{R}
\end{equation}
where $R$ is the radius of the radiative region of the star, and $v_A$ is the Alfv\'en velocity. Then the e-folding timescale $\tau$ of the instability is
\begin{equation}
\tau = \frac{1}{\gamma} = \frac{1}{\alpha} \left(\frac{\eta}{v_AR}\right)^{-\beta} \frac{R}{v_A}
\end{equation}
Putting this in terms of the Lundquist number, $S=\frac{Rv_A}{\eta}$, we find that
\begin{equation}
\tau = \frac{1}{\alpha} S^{\beta} \frac{R}{v_A}
\end{equation}
From here, we can calculate the timescale of instability for a typical radiative zone of an RGB star, given a radius of $\sim$0.66 $R_\odot$, mass of 1.6 $M_\odot$, magnetic field of $10^5 \, {\rm G}$ \citep[][]{astroseismology}, density of $10^5 \, {\rm g}/{\rm cm}^3$ and $\eta$ of 1 ${\rm cm}^2/{\rm s}$ \citep[][]{atlas}.
Using $v_A = \frac{B}{\sqrt{4\pi\rho}}$, we find $v_A$ of 90 ${\rm cm}/ {\rm s}$.
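The numbers above can be checked directly. In the sketch below, the power-law coefficients $\alpha$ and $\beta$ are illustrative placeholders, not the fitted values from figure \ref{fig:nugamma}, so the printed timescale is only indicative.

```python
import math

# Stellar parameters quoted in the text (cgs units).
R = 0.66 * 6.957e10    # radius of the radiative region [cm]
B = 1e5                # magnetic field [G]
rho = 1e5              # density [g/cm^3]
eta = 1.0              # magnetic resistivity [cm^2/s]

v_A = B / math.sqrt(4 * math.pi * rho)   # Alfven velocity [cm/s]
S = R * v_A / eta                        # Lundquist number

# e-folding time tau = (1/alpha) * S**beta * R / v_A, converted to
# years. alpha and beta below are placeholders, not the fits.
alpha, beta = 1.0, 0.33
tau_yr = (1.0 / alpha) * S**beta * (R / v_A) / 3.156e7
print(f"v_A = {v_A:.0f} cm/s, S = {S:.1e}, tau ~ {tau_yr:.1e} yr")
```

With the quoted parameters, $v_A \approx 90$ cm/s and the Lundquist number is of order $10^{12}$.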
We find different growth timescales for each of the fits reported in Fig.~\ref{fig:nugamma}; the longest timescale, corresponding to simulations with stable stratification, is $\tau=$\num{1.2e6} years.
We find the same growth timescale in all simulations with strong stable stratification.
As the cores of RGB stars are strongly stably stratified, we expect this to be the most physically relevant timescale.
All simulations with no stable stratification have shorter timescales.
The lifetime of an RGB star is on the order of a few million years, so we expect the instability will grow to be relevant over the lifetime of the star. Because the Prendergast field is unstable over a fraction of the stellar evolutionary timescale, it is not a good model to use for studies of stable magnetic fields in stars.
\section{Introduction}
Diagrams are a common feature of many everyday media from newspapers to school textbooks, and not surprisingly, different forms of \emph{diagrammatic representation} have been studied from various perspectives. To name just a few examples, recent work has examined patterns in diagram design \cite{hullmanbach2018} and their interpretation in context \cite{tverskyetal2016}, and developed frameworks for classifying diagrams \cite{engelhardtrichards2018} and proposed guidelines for their design \cite{cheng2016}. There is also a long-standing interest in processing and generating diagrams computationally \cite{andrerist1995,batemanetal2001,batemanhenschel2007}, which is now resurfacing as advances emerging from deep learning for computer vision and natural language processing are brought to bear on diagrammatic representations \cite{sachanetal2018,choietal2018,haehnetal2019}.
From the perspective of computational processing, diagrammatic representations present a formidable challenge, as they involve tasks from both computer vision and natural language processing. On the one hand, diagrams have a \emph{spatial organisation} -- layout -- which needs to be segmented to identify meaningful units and their position. Making sense of how diagrams exploit the 2D layout space falls arguably within the domain of computer vision. On the other hand, diagrams also have a \emph{discourse structure}, which uses the layout space to set up discourse relations between instances of natural language, various types of images, arrows and lines, thus forming a unified discourse organisation. The need to parse this discourse structure shifts the focus towards the field of natural language processing.
Understanding and making inferences about the structure of diagrams and other forms of multimodal discourse may be broadly conceptualised as \emph{multimodal discourse parsing}. Recent examples of work in this area include \newcite{alikhanietal2019} and \newcite{ottoetal2019}, who model discourse relations between natural language and photographic images, drawing on linguistic theories of coherence and text--image relations, respectively. In most cases, however, predicting a single discourse relation covers only a part of the discourse structure. \newcite{sachanetal2019} note that there is a need for comprehensive theories and models of multimodal communication, as they can be used to rethink tasks that have been previously considered only from the perspective of natural language processing.
Unlike many other areas, the study of diagrammatic representations is particularly well-resourced, as several multimodal resources have been published recently to support research on computational processing of diagrams \cite{kembhavietal2016,choietal2018,hiippalaetal2019-ai2d}. This study compares two such resources, AI2D \cite{kembhavietal2016} and AI2D-RST \cite{hiippalaetal2019-ai2d}, which both feature the same diagrams, as the latter is an extension of the former. Whereas AI2D features crowd-sourced, non-expert annotations, AI2D-RST provides multiple layers of expert annotations, which are informed by state-of-the-art approaches to multimodal communication \cite{batemanetal2017} and annotation \cite{bateman2008,hiippala2015a}.
This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer.
Both AI2D and AI2D-RST represent the multimodal structure of diagrams using graphs. This enables learning their representations using graph neural networks, which are gaining currency as a graph is a natural choice for representing many types of data \cite{wuetal2019}. This article reports on two experiments that evaluate the capability of AI2D and AI2D-RST to represent the \emph{multimodal structure} of diagrams using graphs, focusing particularly on spatial layout, the hierarchical organisation of diagram elements and their connections expressed using arrows and lines.
\section{Data}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{datasets.pdf}
\caption{The relationship between crowd-sourced annotations in AI2D and AI2D-RST. AI2D-RST provides alternative, expert-annotated stand-off descriptions for a subset of 1000 diagrams from the original AI2D dataset. The grouping layer in AI2D provides a foundation for further annotation layers by allowing references to groups of nodes.}
\label{fig:datasets}
\end{center}
\end{figure*}
This section introduces the two multimodal resources compared in this study and discusses related work, beginning with the crowd-sourced annotations in AI2D and continuing with the alternative expert annotations in AI2D-RST, which are built on top of the crowd-sourced descriptions and cover a 1000-diagram subset of the original data. Figure \ref{fig:datasets} provides an overview of the two datasets, explains how they relate to each other and summarises the experiments reported in Section \ref{sec:experiments}.
\subsection{Crowd-sourced Annotations from AI2D}
The Allen Institute for Artificial Intelligence Diagrams dataset (AI2D) contains 4903 English-language diagrams, which represent topics in primary school natural sciences, such as food webs, human physiology and life cycles, amounting to a total of 17 classes \cite{kembhavietal2016}. The dataset was originally developed to support research on diagram understanding and visual question answering \cite{kimetal2018}, but has also been used to study the contextual interpretation of diagrammatic elements, such as arrows and lines \cite{alikhanistone2018}.
The AI2D annotation schema models four types of diagram elements: text, graphics, arrows and arrowheads, whereas the semantic relations that hold between these elements are described using ten relations from a framework for analysing diagrammatic representations in \newcite{engelhardt2002}. Each diagram is represented using a Diagram Parse Graph (DPG), whose nodes stand for diagram elements while the edges between the nodes carry information about their semantic relations. The annotation for AI2D, which includes layout segmentations for the diagram images, DPGs and a multiple choice question-answer set, was created by crowd-sourced non-expert annotators on Amazon Mechanical Turk\footnote{https://www.mturk.com} \cite[243]{kembhavietal2016}.
I have previously argued that describing different types of multimodal structures in diagrammatic representations requires different types of graphs \cite{hiippalaorekhova2018}. To exemplify, many forms of multimodal discourse are assumed to possess a hierarchical structure, whose representation requires a tree graph. Diagrams, however, use arrows and lines to draw connections between elements that are not necessarily part of the same subtree, and for this reason representing connectivity requires a cyclic graph. AI2D DPGs, in turn, conflate the description of semantic relations and connections expressed using diagrammatic elements. Whether computational modelling of diagrammatic structures, or more generally, multimodal discourse parsing, benefits from pulling apart different types of multimodal structure remains an open question, which I pursued by developing an alternative annotation schema for AI2D, named AI2D-RST, which is introduced below.
\subsection{Expert Annotations from AI2D-RST}
AI2D-RST covers a subset of 1000 diagrams from AI2D, which have been annotated by trained experts using a new multi-layer annotation schema for describing the diagrams in AI2D \cite{hiippalaetal2019-ai2d}. The annotation schema, which draws on state-of-the-art theories of multimodal communication \cite{batemanetal2017}, adopts a stand-off approach to describing the diagrams. Hence the three annotation layers in AI2D-RST are represented using three different graphs, which use the same identifiers for nodes across all three graphs to allow combining the descriptions in different graphs. AI2D-RST contains three graphs:
\begin{enumerate}
\item \textbf{Grouping}: A tree graph that groups together diagram elements that are likely to be visually perceived as belonging together, based loosely on Gestalt principles of visual perception \cite{ware2012}. These groups are organised into a hierarchy, which represents the organisation of content in the 2D layout space \cite{bateman2008,hiippala2015a}.
\item \textbf{Connectivity}: A cyclic graph representing connections between diagram elements or their groups, which are signalled using arrows or lines \cite{tverskyetal2000}.
\item \textbf{Discourse structure}: A tree graph representing the discourse structure of the diagram using Rhetorical Structure Theory \cite{mannthompson1988,taboadamann2006a}: hence the name AI2D-RST.
\end{enumerate}
The grouping graph, which is initially populated by diagram elements from the AI2D layout segmentation, provides a foundation for describing connectivity and discourse structure by adding nodes to the grouping graph that stand for groups of diagram elements, as shown in the upper part of Figure \ref{fig:datasets}. In addition, the grouping graph includes annotations for 11 different diagram types identified in the data (e.g. cycles, cross-sections and networks), which may be used as target labels during training, as explained in Section \ref{sec:graph_classification}. The coarse and fine-grained diagram types identified in the data are shown in Figure \ref{fig:macro-groups}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{macro-groups_counts.pdf}
\caption{Fine-grained classes, their number and frequencies in AI2D-RST ($N = 1134$). Note that the number of classes exceeds the number of diagrams in AI2D-RST, as some diagrams feature multiple diagram types. The arrows indicate choices: if the diagram designer chooses depiction, a further choice must be made between pictorial--diagrammatic and 2D/3D representations. The dashed lines indicate coarse groups of diagram types.}
\label{fig:macro-groups}
\end{center}
\end{figure}
\newcite{hiippalaetal2019-ai2d} show that the proposed annotation schema can be reliably applied to the data by measuring inter-annotator agreement between five annotators on random samples from the AI2D-RST corpus using Fleiss' $\kappa$ \cite{fleiss1971}. The results show high agreement on grouping ($N = 256, \kappa = 0.84$), diagram types ($N = 119, \kappa = 0.78$), connectivity ($N = 239, \kappa = 0.88$) and discourse structure ($N = 227, \kappa = 0.73$). It should be noted, however, that these measures may be affected by implicit knowledge that tends to develop among expert annotators who work towards the same task \cite{riezler2014}.
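The agreement measure used above can be illustrated with a minimal, generic implementation of Fleiss' $\kappa$ for a matrix of per-item category counts; this is a sketch for exposition, not the code used to produce the figures reported here:

```python
import numpy as np

def fleiss_kappa(M):
    """Fleiss' kappa for an (items x categories) matrix of rating counts,
    assuming the same number of raters for every item."""
    M = np.asarray(M, dtype=float)
    n = M.sum(axis=1)[0]                               # raters per item
    N = M.shape[0]                                     # number of items
    P_i = ((M ** 2).sum(axis=1) - n) / (n * (n - 1))   # per-item agreement
    P_bar = P_i.mean()                                 # observed agreement
    p_j = M.sum(axis=0) / (N * n)                      # category proportions
    P_e = (p_j ** 2).sum()                             # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement the statistic equals 1; values below 1 reflect disagreement relative to chance.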
\section{Graph-based Representations}
Both AI2D and AI2D-RST use graphs to represent the multimodal structure of diagrams. This section explicates how the graphs and their node and edge types differ across the two multimodal resources.
\subsection{Nodes}
\label{sec:nodes}
\subsubsection{Node Types}
\label{sec:node_types}
AI2D and AI2D-RST share most node types that represent different diagram elements, namely text, graphics, arrows and the image constant, which is a node that stands for the entire diagram. In AI2D, generic diagram elements such as titles describing the entire diagram are typically connected to the image constant. In AI2D-RST, the image constant acts as the root node of the tree in the grouping graph. In addition to text, graphics, arrows and the image constant, AI2D-RST features two additional node types for groups and discourse relations, whereas AI2D includes an additional node for arrowheads. To summarise, AI2D contains five distinct node types, whereas AI2D-RST has six. Note, however, that only the grouping and connectivity graphs are used in this study, which limits the number of node types to five for AI2D-RST.
\subsubsection{Node Features}
\label{sec:node_features}
The same features are used for both AI2D and AI2D-RST for nodes with layout information, namely text, graphics, arrows and arrowheads (in AI2D only). The position, size and shape of each diagram element are described using the following features: (1) the centre point of the bounding box or polygon, divided by the height and width of the diagram image, (2) area, or the number of pixels within the polygon, divided by the total number of pixels in the image, and (3) the solidity of the polygon, or the polygon area divided by the area of its convex hull. This yields a 4-dimensional feature vector describing the position and size of each diagram element in the layout. Each dimension is set to zero for grouping nodes in AI2D-RST and image constant nodes in AI2D and AI2D-RST.
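The feature extraction described above can be sketched as follows; the helper names are hypothetical and the polygon is assumed to be given as a list of $(x, y)$ points in image coordinates:

```python
def polygon_area(pts):
    # Shoelace formula for the area of a simple polygon
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(pts):
    # Andrew's monotone chain algorithm
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def node_features(polygon, img_w, img_h):
    """4-D feature vector: normalised bounding-box centre (x, y),
    normalised area, and solidity (area / convex hull area)."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    cx = (min(xs) + max(xs)) / 2 / img_w
    cy = (min(ys) + max(ys)) / 2 / img_h
    area = polygon_area(polygon) / (img_w * img_h)
    solidity = polygon_area(polygon) / polygon_area(convex_hull(polygon))
    return [cx, cy, area, solidity]
```

For a convex polygon the hull coincides with the polygon itself, so solidity is 1; concave shapes, such as arrows, yield values below 1.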
\subsubsection{Discourse Relations}
AI2D-RST models discourse relations using nodes, each of which carries a 25-dimensional, one-hot encoded feature vector representing the type of discourse relation, drawn from Rhetorical Structure Theory \cite{mannthompson1988}. In AI2D, the discourse relations derived from \newcite{engelhardt2002} are represented using a 10-dimensional one-hot encoded vector, which is associated with edges connecting diagram elements participating in the relation. Because the two resources draw on different theories and represent discourse relations differently, I use the grouping and connectivity graph for AI2D-RST representations and ignore the edge features in AI2D, as these descriptions attempt to describe roughly the same multimodal structures. A comparison of discourse relations is left for a follow-up study focusing on representing the discourse structure of diagrams.
\subsection{Edges}
Whereas AI2D encodes information about semantic relations using edges, in AI2D-RST the information carried by edges depends on the graph in question. The edges of the grouping graph do not have features, whereas the edges of the connectivity graph have a 3-dimensional, one-hot encoded vector that represents the type of connection. The edges of the discourse structure graph have a 2-dimensional, one-hot encoded feature vector to represent nuclearity, that is, whether the nodes that participate in a discourse relation act as nuclei or satellites.
For the experiments reported in Section \ref{sec:experiments}, self-loops are added to each node in the graph. A self-loop is an edge that originates in and terminates at the same node. Self-loops essentially add the graph's identity matrix to the adjacency matrix, which allows the graph neural networks to account for a node's own features during message passing, that is, when sending and receiving features from adjacent nodes.
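In adjacency-matrix terms, the self-loop step amounts to adding the identity matrix, as the following toy example illustrates:

```python
import numpy as np

# Toy 3-node path graph; adding the identity matrix gives every node a
# self-loop, so its own features are included during message passing.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
A_hat = A + np.eye(3, dtype=int)
```

After this step every diagonal entry of the adjacency matrix is 1 while the original edges are preserved.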
\section{Experiments}
\label{sec:experiments}
This section presents two experiments that compare AI2D and AI2D-RST annotations in classifying diagrams and their parts using various graph neural networks.
\subsection{Graph Neural Networks}
I evaluated the following graph neural network architectures for both graph and node classification tasks:
\begin{itemize}
\item Graph Convolutional Network (GCN) \cite{kipfwelling2017}
\item Simplifying Graph Convolution (SGC) \cite{wuetal2019-sgc}, averaging incoming node features from up to 2 hops away
\item Graph Attention Network (GAT) \cite{velicovicetal2018} with 2 heads
\item GraphSAGE (SAGE) \cite{hamiltonetal2017} with LSTM aggregation
\end{itemize}
I implemented all graph neural networks using Deep Graph Library 0.4 \cite{wangetal2019} on the PyTorch 1.3 backend \cite{paszkeetal2017}. For GCN, GAT and SAGE, each network consists of two of the aforementioned layers with a Rectified Linear Unit (ReLU) activation, followed by a dense layer and a final softmax function for predicting class membership probabilities. For SGC, the network consists of a single SGC layer without an activation function. The implementations for each network are available in the repository associated with this article.
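As a rough, framework-free sketch of what a single graph convolution layer computes (not the DGL implementation used in the experiments), the GCN update with symmetric normalisation can be written as follows:

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One GCN layer: ReLU(D^{-1/2} A_hat D^{-1/2} H W),
    where A_hat is the adjacency matrix with self-loops,
    H the node feature matrix and W the layer weights."""
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking two such layers, as done for GCN, GAT and SAGE above, lets each node aggregate information from nodes up to two hops away.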
\subsection{Hyperparameters and Training}
I used the Tree of Parzen Estimators (TPE) algorithm \cite{bergstraetal2011} to tune model hyperparameters separately for each dataset, architecture and task using the implementation in the Tune \cite{liawetal2018} and \emph{hyperopt} \cite{bergstraetal2013} libraries. For each dataset, architecture and task, I evaluated a total of 100 hyperparameter combinations for a maximum of 100 epochs, using 850 diagrams for training and 150 for validation. The objective metric to be maximised was macro F1 score. Tables \ref{tab:hyperparams_gc} and \ref{tab:hyperparams_nc} give the hyperparameters and spaces searched for node and graph classification. Following \newcite{shcuretal2018}, I shuffled the training and validation splits for each run to prevent overfitting and used the same training procedure throughout. I used the Adam optimiser \cite{kingmaba2015} for both hyperparameter search and training.
To address the issue of class imbalance present in both tasks, class weights were calculated by dividing the total number of samples by the product of the number of unique classes and the number of samples for each class, as implemented in \emph{scikit-learn} \cite{pedregosaetal2011}. These weights were passed to the loss function during hyperparameter search and training.
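The weighting scheme described above, scikit-learn's `balanced' heuristic, reduces to a one-line formula; a generic sketch:

```python
from collections import Counter

def balanced_class_weights(labels):
    """weight(c) = n_samples / (n_classes * n_samples_in_class_c)"""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```

Minority classes thus receive proportionally larger weights in the loss function.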
\begin{table}[h]
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|}
\hline
Hyperparameter & Range \\
\hline
Learning rate & 0.01--0.0001 \\
\hline
Batch size & 4--32 \\
\hline
Hidden layer size & 5--30 \\
\hline
L$_2$ penalty & 0.001--0.00001 \\
\hline
\end{tabularx}
\caption{Hyperparameter ranges for graph classification} \label{tab:hyperparams_gc}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|}
\hline
Hyperparameter & Range \\
\hline
Learning rate & 0.01--0.0001 \\
\hline
Batch size & 2--16 \\
\hline
Hidden layer size & 5--30 \\
\hline
L$_2$ penalty & 0.001--0.00001 \\
\hline
\end{tabularx}
\caption{Hyperparameter ranges for node classification} \label{tab:hyperparams_nc}
\end{center}
\end{table}
\begin{table*}[t]
\begin{tabular}{| l | r | r | r | r | r | r | r | r | r | r | r | r |}
\hline
Model & \multicolumn{3}{c|}{GAT} & \multicolumn{3}{c|}{GCN} & \multicolumn{3}{c|}{SAGE} & \multicolumn{3}{c|}{SGC} \\
\hline
Graphs & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C \\
\hline
Accuracy & 0.77 & 0.79 & 0.75 & *0.81 & 0.79 & 0.78 & *\textbf{0.93} & 0.85 & 0.88 & 0.32 & *0.43 & 0.33 \\
Macro F1 & 0.78 & 0.77 & 0.73 & *0.81 & 0.78 & 0.75 & *\textbf{0.93} & 0.85 & 0.86 & 0.29 & *0.45 & 0.28 \\
Weighted F1 & 0.77 & 0.79 & 0.76 & *0.81 & 0.79 & 0.78 & *\textbf{0.92} & 0.85 & 0.88 & 0.3 & *0.42 & 0.3 \\
\hline
\end{tabular}
\caption{Mean accuracy, macro F1 and weighted F1 scores for node classification. The results are averaged over 20 runs. The following abbreviations indicate the graph used: `AI2D' for the original crowd-sourced graphs from AI2D, `G' for the grouping graph and `G+C' for the combination of grouping and connectivity graph from AI2D-RST. An asterisk indicates that the difference between AI2D and the best AI2D-RST graph is statistically significant at $p < 0.05$ when comparing the results for the given metric over 20 runs using Mann--Whitney $U$ test. The best result for each metric is marked using bold.} \label{tab:nc_results}
\end{table*}
After hyperparameter optimisation, I trained each model with the best hyperparameter combination for 20 runs, using 850 diagrams for training, 75 for validation and 75 for testing, shuffling the splits for each run while monitoring performance on the evaluation set and stopping training early if the macro F1 score failed to improve over 15 epochs for graph classification or over 25 epochs for node classification. I then evaluated the model on the testing set and recorded the result.
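The early-stopping criterion used above can be sketched as a simple patience check on the validation history; the helper name is hypothetical:

```python
def should_stop(scores, patience):
    """Stop training if the best validation score is more
    than `patience` epochs old."""
    best_epoch = max(range(len(scores)), key=scores.__getitem__)
    return len(scores) - 1 - best_epoch >= patience
```

With a patience of 15 (graph classification) or 25 (node classification) epochs, training halts once the macro F1 score stops improving.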
\subsection{Tasks}
\subsubsection{Node Classification}
\label{sec:node_classification}
\begin{table}
\begin{tabular}{| l | r | r | r | r | r | r |}
\hline
Graphs & \multicolumn{3}{c|}{AI2D} & \multicolumn{3}{c|}{AI2D-RST (G)} \\
\hline
Model & D & RF & SVM & D & RF & SVM \\
\hline
Accuracy & 0.26 & 0.76 & 0.54 & 0.25 & 0.69 & 0.62 \\
Mac. F1 & 0.20 & 0.79 & 0.60 & 0.20 & 0.56 & 0.48 \\
Wt. F1 & 0.26 & 0.76 & 0.56 & 0.25 & 0.66 & 0.58 \\
\hline
\end{tabular}
\caption{Baseline accuracy, macro F1 and weighted F1 scores for node classification from dummy (D), random forest (RF; 100 estimators) and support vector machine (SVM; $C=1.0$) classifiers with balanced class weights. The results are averaged over 20 runs. All models were implemented using \emph{scikit-learn} 0.21.3. Each node is represented by a 4-dimensional vector.}
\label{tab:nc_baseline}
\end{table}
The purpose of the \emph{node classification} task is to evaluate how well algorithms learn to classify the parts of a diagram using the graph-based representations in AI2D and AI2D-RST and node features representing the position, size and shape of the element, as described in Section \ref{sec:node_features}. Identifying the correct node type is a key step when populating a graph with candidate nodes from object detectors, particularly if the nodes will be processed further, for instance, to extract semantic representations from CNN features or word embeddings. Furthermore, the node representations learned during this task can be used as node features for graph classification, as will be shown in Section \ref{sec:graph_classification}.
Table \ref{tab:nc_baseline} presents a baseline for node classification from a dummy classifier, together with results for random forest and support vector machine classifiers trained on 850 and tested on 150 diagrams. Both AI2D and AI2D-RST include five node types, of which four are the same: the difference is that whereas AI2D includes arrowheads, AI2D-RST includes nodes for groups of diagram elements, as outlined in Section \ref{sec:nodes}. The results seem to reflect the fact that image constants and grouping nodes have their features set to zero, and RF and SVM cannot leverage features incoming from their neighbouring nodes to learn node representations. This is likely to affect the result for AI2D-RST, which includes 7300 grouping nodes that are used to create a hierarchy of diagram elements.
Table \ref{tab:nc_results} shows the results for node classification using various graph neural network architectures. Because the results are not entirely comparable due to different node types present in the two resources, it is more reasonable to compare architectures. SAGE, GCN and GAT clearly outperform SGC in classifying nodes from both resources, as does the random forest classifier. AI2D nodes are classified with particularly high accuracy, which may result from having to learn representations for only one node type, that is, the image constant ($N = 1000$). AI2D-RST, in turn, must learn representations from scratch for both image constants ($N = 1000$) and grouping nodes ($N = 7300$).
Because SAGE learns useful node representations for both resources, as reflected in high performance for all metrics, I chose this architecture for extracting node features for graph classification.
\subsubsection{Graph Classification}
\label{sec:graph_classification}
This task compares the performance of graph-based representations in AI2D and AI2D-RST for \emph{classifying entire diagrams}. Here the aim is to evaluate to what extent graph neural networks can learn about the generic structure of primary school science diagrams from the graph-based representations in AI2D and AI2D-RST. Correctly identifying what the diagram attempts to communicate and \emph{how} carries implications for tasks such as visual question answering, as the type of a diagram constrains the interpretation of key diagrammatic elements, such as the meaning of lines and arrows \cite{tverskyetal2016,alikhanistone2018}.
\begin{table*}
\begin{tabular}{| l | r | r | r | r | r | r | r | r | r | r | r | r |}
\hline
Classes & \multicolumn{12}{c|}{Original classes from AI2D ($N = 17$)} \\
\hline
Model & \multicolumn{3}{c|}{GAT} & \multicolumn{3}{c|}{GCN} & \multicolumn{3}{c|}{SAGE} & \multicolumn{3}{c|}{SGC} \\
\hline
Graphs & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C \\
\hline
Accuracy & 0.53 & $^+$0.54 & 0.49 & 0.52 & $^+$0.55 & 0.5 & *\textbf{0.58} & 0.5 & 0.52 & 0.55 & 0.54 & 0.53 \\
Macro F1 & 0.26 & 0.25 & 0.26 & 0.23 & $^+$0.27 & 0.23 & *\textbf{0.32} & 0.25 & 0.26 & 0.23 & $^+$0.22 & 0.19 \\
Weighted F1 & 0.53 & 0.56 & 0.52 & 0.5 & 0.53 & 0.51 & *\textbf{0.6} & 0.54 & 0.54 & 0.5 & $^+$0.48 & 0.44 \\
\hline
Classes & \multicolumn{12}{c|}{Coarse classes from AI2D-RST ($N = 5$)} \\
\hline
Model & \multicolumn{3}{c|}{GAT} & \multicolumn{3}{c|}{GCN} & \multicolumn{3}{c|}{SAGE} & \multicolumn{3}{c|}{SGC} \\
\hline
Graphs & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C \\
\hline
Accuracy & 0.59 & $^+$0.61 & 0.58 & 0.6 & *\textbf{0.63} & 0.61 & 0.6 & 0.58 & 0.6 & *0.56 & 0.5 & 0.51 \\
Macro F1 & 0.46 & *$^+$\textbf{0.51} & 0.47 & 0.46 & *0.5 & 0.47 & 0.47 & 0.48 & *0.49 & 0.41 & 0.39 & 0.4 \\
Weighted F1 & 0.56 & *$^+$0.6 & 0.55 & 0.58 & \textbf{0.6} & 0.58 & 0.57 & 0.56 & 0.58 & 0.5 & 0.47 & 0.48 \\
\hline
Classes & \multicolumn{12}{c|}{Fine-grained classes from AI2D-RST ($N = 12$)} \\
\hline
Model & \multicolumn{3}{c|}{GAT} & \multicolumn{3}{c|}{GCN} & \multicolumn{3}{c|}{SAGE} & \multicolumn{3}{c|}{SGC} \\
\hline
Graphs & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C & AI2D & G & G+C \\
\hline
Accuracy & 0.39 & 0.36 & 0.37 & 0.4 & 0.39 & 0.42 & 0.42 & 0.41 & \textbf{0.43} & 0.39 & 0.31 & $^+$0.36 \\
Macro F1 & 0.27 & 0.25 & 0.25 & 0.24 & 0.26 & \textbf{*0.29} & 0.28 & 0.28 & 0.29 & 0.22 & 0.19 & $^+$0.21 \\
Weighted F1 & 0.36 & 0.34 & 0.34 & 0.35 & 0.38 & *0.39 & 0.39 & 0.38 & \textbf{0.4} & 0.31 & 0.27 & $^+$0.31 \\
\hline
\end{tabular}
\caption{Mean accuracy, macro F1 and weighted F1 scores for graph classification. The results are averaged over 20 runs. The following abbreviations indicate the graph used: `AI2D' for the original crowd-sourced graphs from AI2D, `G' for the grouping graph and `G+C' for the combination of grouping and connectivity graph from AI2D-RST. * indicates that the difference between AI2D and the best AI2D-RST graph is statistically significant at $p < 0.05$ when comparing the results over 20 runs for the given metric using Mann--Whitney $U$ test. $^+$ indicates the same for AI2D-RST grouping graph and the combination of grouping and connectivity graphs. The best result for each metric across all models and graphs is marked in bold.} \label{tab:gc_results}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{4210.png}
\caption{Diagram \#4120 in AI2D combines two diagram types: a \emph{cross-section} with a \emph{cycle} (cf. Figure \ref{fig:macro-groups})}
\label{fig:mixed}
\end{center}
\end{figure}
To enable a fair comparison, the target classes are derived from both AI2D and AI2D-RST. Whereas AI2D includes 17 classes that represent the \emph{semantic content} of diagrams, as exemplified by categories such as `parts of the Earth', `volcano', and `food chains and webs', AI2D-RST classifies diagrams into abstract \emph{diagram types}, such as cycles, networks, cross-sections and cut-outs. More specifically, AI2D-RST provides classes for diagram types at two levels of granularity, fine-grained (12 classes) and coarse (5 classes), which are derived from the proposed schema for diagram types in AI2D-RST \cite{hiippalaetal2019-ai2d}.
The 11 fine-grained classes in AI2D-RST shown in Figure \ref{fig:macro-groups} are complemented by an additional class (`mixed'), which includes diagrams that combine multiple diagram types, whose inclusion avoids performing multi-label classification (see the example in Figure \ref{fig:mixed}). The coarse classes, which are derived by grouping fine-grained classes for tables, tabular and spatial organisations, networks and cycles, diagrammatic and pictorial representations, and so on, are also complemented by a `mixed' class.
For this task, the node features consist of the representations learned during node classification in Section \ref{sec:node_classification}. These representations are extracted by feeding the features representing node position, size and shape to the graph neural network, which in both cases uses the GraphSAGE architecture \cite{hamiltonetal2017}, and recording the output of the final softmax activation. Compared to a one-hot encoding, representing node identity using a probability distribution from a softmax activation reduces the sparsity of the feature vector. This yields a 5-dimensional feature vector for each node.
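The feature-extraction step described above amounts to taking softmax outputs as dense node features; a minimal NumPy sketch (not the GraphSAGE model itself), which also shows the node-feature averaging used for the baseline classifiers in Table \ref{tab:gc_baseline}:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Node logits (n_nodes x 5 classes) -> probability features; a single
# graph-level vector is then obtained by averaging over all nodes.
logits = np.array([[2.0, 0.1, 0.1, 0.1, 0.1],
                   [0.1, 2.0, 0.1, 0.1, 0.1]])
node_feats = softmax(logits)
graph_feat = node_feats.mean(axis=0)
```

Each row of \texttt{node\_feats} sums to 1, so the resulting features are dense rather than sparse one-hot vectors.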
Table \ref{tab:gc_baseline} provides a baseline for graph classification from a dummy classifier, as well as results for random forest (RF) and support vector machine (SVM) classifiers trained on 850 and tested on 150 diagrams. The macro F1 scores show that the RF classifier with 100 decision trees offers competitive performance for all target classes and both AI2D and AI2D-RST, in some cases outperforming graph neural networks. It should be noted, however, that the RF classifier is trained with node features learned using GraphSAGE.
\begin{table}[h]
\begin{tabular}{| l | r | r | r | r | r | r |}
\hline
Classes & \multicolumn{6}{c|}{AI2D ($N = 17$)} \\
\hline
Dataset & \multicolumn{3}{c|}{AI2D} & \multicolumn{3}{c|}{AI2D-RST (G)} \\
\hline
Model & D & RF & SVM & D & RF & SVM \\
\hline
Acc. & 0.15 & 0.59 & 0.23 & 0.15 & 0.58 & 0.30 \\
Mac. F1 & 0.06 & 0.29 & 0.13 & 0.06 & 0.24 & 0.10 \\
Wt. F1 & 0.15 & 0.55 & 0.19 & 0.15 & 0.54 & 0.28 \\
\hline
Classes & \multicolumn{6}{c|}{AI2D-RST (coarse) ($N = 5$)} \\
\hline
Dataset & \multicolumn{3}{c|}{AI2D} & \multicolumn{3}{c|}{AI2D-RST (G)} \\
\hline
Model & D & RF & SVM & D & RF & SVM \\
\hline
Acc. & 0.24 & 0.61 & 0.53 & 0.25 & 0.61 & 0.43 \\
Mac. F1 & 0.19 & 0.47 & 0.38 & 0.20 & 0.47 & 0.34 \\
Wt. F1 & 0.24 & 0.58 & 0.46 & 0.24 & 0.59 & 0.40 \\
\hline
Classes & \multicolumn{6}{c|}{AI2D-RST (fine-grained) ($N=12$)} \\
\hline
Dataset & \multicolumn{3}{c|}{AI2D} & \multicolumn{3}{c|}{AI2D-RST (G)} \\
\hline
Model & D & RF & SVM & D & RF & SVM \\
\hline
Acc. & 0.15 & 0.45 & 0.26 & 0.13 & 0.44 & 0.26 \\
Mac. F1 & 0.09 & 0.28 & 0.14 & 0.08 & 0.25 & 0.13 \\
Wt. F1 & 0.15 & 0.42 & 0.19 & 0.14 & 0.41 & 0.20 \\
\hline
\end{tabular}
\caption{Baseline accuracy, macro F1 and weighted F1 scores for graph classification using dummy (D), random forest (RF; 100 estimators) and support vector machine (SVM; $C=1.0$) classifiers with balanced class weights. The results are averaged over 20 runs. All models were implemented using \emph{scikit-learn} 0.21.3. Each diagram is represented by a 5-dimensional vector acquired by averaging the features for all nodes in the graph.}
\label{tab:gc_baseline}
\end{table}
The results for graph classification using graph neural networks presented in Table \ref{tab:gc_results} show certain differences between AI2D and AI2D-RST. When classifying diagrams into the original semantic categories defined in AI2D ($N = 17$), the AI2D graphs significantly outperform AI2D-RST when using the GraphSAGE architecture. For all other graph neural networks, the differences between AI2D and AI2D-RST are not statistically significant. This is not surprising, as the AI2D graphs were tailored for the original classes. Nevertheless, the AI2D-RST graphs seem to capture generic properties that help to classify diagrams into semantic categories nearly as accurately as the AI2D graphs designed specifically for this purpose, even though no semantic features apart from the layout structure are provided to the classifier.
The situation is reversed for the coarse ($N = 5$) and fine-grained ($N = 12$) classes from AI2D-RST, in which the AI2D-RST graphs generally outperform AI2D, except for coarse classification using SGC. This classification task obviously benefits AI2D-RST, whose classification schema was originally designed for abstract diagram types. This may also suggest that the AI2D graphs do not capture regularities that would support learning to generalise about diagram types. The situation is somewhat different for fine-grained classification, in which the differences in performance are relatively small.
Generally, most architectures do not benefit from combining the grouping and connectivity graphs in AI2D-RST. This is an interesting finding, as many diagram types differ in terms of their connectivity structures (e.g. cycles and networks) \cite{hiippalaetal2019-ai2d}. The edges introduced from the connectivity graph naturally increase the flow of information in the graph, but this does not seem to help learn distinctive features between diagram types. On the other hand, it should be noted that the edges are not typed, that is, the model cannot distinguish between edges from the grouping and connectivity graphs.
Overall, the macro F1 scores for both AI2D and AI2D-RST, which assign equal weight to all classes regardless of the number of samples, underline the challenge of training classifiers using limited data with imbalanced classes. The lack of visual features may also affect overall classification performance: certain fine-grained classes, which are also prominent in the data, such as 2D cross-sections and 3D cut-outs, may have similar graph-based representations. Extracting visual features from diagram images may help to discern between diagrams whose graphs bear close resemblance to one another, but this would require advanced object detectors for non-photographic images.
\section{Discussion}
The results for AI2D-RST show that the grouping graph, which represents visual perceptual groups of diagram elements and their hierarchical organisation, provides a robust foundation for describing the spatial organisation of diagrammatic representations. This kind of generic schema can be expanded \emph{beyond} diagrams to other modes of expression that make use of the spatial extent, such as entire page layouts. A description of how the layout space is used can be incorporated into any effort to model discourse relations that may hold between the groups or their parts.
The promising results for AI2D-RST suggest that domain experts in multimodal communication should be involved in planning crowd-sourced annotation tasks right from the beginning. Segmentation, in particular, warrants attention as this phase defines the units of analysis: cut-outs and cross-sections, for instance, use labels and lines to pick out sub-regions of graphical objects, whereas in illustrations the labels often refer to entire objects. Such distinctions should preferably be identified at the very beginning so that they can be incorporated fully into the annotation schema.
Tasks related to grouping and connectivity annotation could be crowd-sourced relatively easily, whereas annotating diagram types and discourse relations may require multi-step procedures and assistance in the form of prompts, as \newcite{yungetal2019} have recently shown for RST. Involving both expert and crowd-sourced annotators could also alleviate problems related to circularity by forcing domain experts to frame the tasks in terms understandable to crowd-sourced workers \cite{riezler2014}.
In light of the results for graph classification, one should note that node features are averaged before classification \emph{regardless of their connections in the graph}. Whereas the expert-annotated grouping graph in AI2D-RST has been pruned of isolated nodes, which ensures that features are propagated to neighbouring nodes, the crowd-sourced AI2D graphs contain both isolated nodes and subgraphs. To what extent these disconnections affect the performance for AI2D warrants a separate study. Additionally, more advanced techniques than mere averaging, such as pooling, should be explored in future work.
Finally, there are many aspects of diagrammatic representation that were not explored in this study. To begin with, a comparison of representations for discourse structures using the question-answering set accompanying AI2D would be particularly interesting, especially if both AI2D and AI2D-RST graphs were enriched with features from state-of-the-art semantic representations for natural language and graphic elements.
\section{Conclusion}
In this article, I compared graph-based representations of diagrams representing primary school science topics from two datasets that contain the same diagrams, which have been annotated by either crowd-sourced workers or trained experts. The comparison involved two tasks, graph and node classification, using four different architectures for graph neural networks, which were compared to baselines from dummy, random forest and support vector machine classifiers.
The results showed that graph neural networks can learn to accurately identify diagram elements from their size, shape and position in layout. These node representations could then be used as features for graph classification. Identifying diagrams, either in terms of what they represent (semantic content) or how (abstract diagram type), proved more challenging using the graph-based representations. Improving accuracy may require additional features that capture visual properties of the diagrams, as these distinctions cannot be captured by graph-based representations and features focusing on layout.
Overall, the results nevertheless suggest that simple layout features can provide a foundation for representing diagrammatic structures, which use the layout space to organise the content and set up discourse relations between different elements. To what extent these layout features can support the prediction of actual discourse relations should be explored in future research.
\section{Bibliographical References}
\label{main:ref}
\bibliographystyle{lrec}
\section{Introduction}
\label{sec:intro}
The tight-binding model on a honeycomb lattice with broken time-reversal
symmetry proposed by Haldane \cite{haldane1988model} is an interesting
example of a Chern band insulator \cite{kane13,rpp-rachel18}.
At half-filling, it can exhibit a quantized Hall conductance in the
absence of an external magnetic field.
This so-called anomalous quantum Hall effect \cite{qi11}
is indeed related to the fact that the electronic band
structure of Haldane's model is topologically nontrivial, i.e., the
Chern numbers of its bands are nonzero \cite{kane13,rpp-rachel18}.
Interestingly, the model was experimentally implemented in a system
with ultracold fermions in an optical honeycomb lattice \cite{jotzu14}
(see also the reviews \cite{zoller16,rmp-cooper19}).
A lot of effort has also been devoted to the study of the interplay
between the topological properties of electronic band structures and
the electron-electron interaction \cite{rpp-rachel18,assaad13}.
A particular correlated Chern insulator that has been receiving some
attention in recent years is a natural extension of Haldane's
model \cite{haldane1988model}, the so-called Haldane-Hubbard model
\cite{he2011chiral,he2011topological,maciejko2013topological,
hickey15,hickey2016haldane,zheng15,vanhala16,wu2016quantum,arun16,troyer16}.
Here the electronic spin is explicitly taken into account,
time-reversal symmetry is broken, and correlation effects are
described via an on-site Hubbard repulsion term
[see Eq.~\eqref{eqHH0} below].
At half-filling, the phase diagram of the Haldane-Hubbard model has
been determined via different mean-field approaches
\cite{he2011chiral, he2011topological,hickey15,zheng15, arun16}
and numerical methods
\cite{hickey2016haldane, vanhala16, wu2016quantum, troyer16}.
It was shown that the model supports a Chern insulator phase for weak
interactions and a trivial Néel magnetically ordered phase for strong
ones. Moreover, there is evidence for a first-order transition
between these two phases \cite{troyer16},
but also for the presence of a distinct phase in the
intermediate-coupling region \cite{he2011chiral, he2011topological,wu2016quantum}.
In contrast to the time-reversal symmetric Kane-Mele Hubbard model
\cite{rpp-rachel18},
the breaking of time-reversal symmetry in the Haldane-Hubbard model
yields the so-called fermion sign problem,
limiting the use of quantum Monte Carlo simulations
\cite{wu2016quantum,troyer16}.
Away from half-filling, the study of correlation effects in Chern
band insulators has also considered the possibility of realizing
fractional quantum Hall phases in lattice models.
Indeed, the interest in fractional Chern insulators
\cite{sondhi13,liu13,neupert15} was motivated by the
studies \cite{neupert2011fractional,sun2011nearly,tang2011high},
which showed that a series of tight-binding models with only
short-range hoppings can display nearly-flat and
topologically nontrivial electronic bands once the model parameters
are properly chosen.
Due to the similarity between flat bands with nonzero Chern numbers
and Landau levels realized in a two-dimensional electron gas, it was
proposed that these lattice models could display a fractional quantum Hall
effect for partially filled bands
if the electron-electron interaction is taken into account
\cite{neupert2011fractional,sun2011nearly,tang2011high}.
Indeed, numerical evidence for the stability of fractional Chern insulator
phases was later reported \cite{sheng11,regnault11}.
Correlation effects in a spinful topological Hubbard model on a
square lattice with nearly-flat noninteracting bands but at a
commensurate filling were also discussed
\cite{neupert2012topological, doretto2015flat,su2019ferromagnetism}.
Here the noninteracting limit of the topological Hubbard model is
given by the $\pi$-flux model, whose parameters can be adjusted such
that the band structure is given by two lower and two higher
(doubly degenerate) nearly-flat bands separated by an energy gap
\cite{neupert2011fractional}.
At $1/4$-filling (half-filling of the lower band), it was shown
\cite{neupert2012topological, doretto2015flat,su2019ferromagnetism}
that such topological Hubbard model can realize a flat-band
ferromagnetic phase \cite{flat-fm}.
In particular, one of us calculated the spin-wave excitation spectrum
of the flat-band ferromagnetic phase within a bosonization formalism
\cite{doretto2015flat}. For the corresponding correlated Chern
insulator, it was shown that the spin-wave spectrum has one gapped
excitation branch and one gapless one, with the Goldstone mode at the
center of the first Brillouin zone. These analytical results
qualitatively agree with the numerical ones determined via exact
diagonalization \cite{su2019ferromagnetism}.
\begin{figure}[t]
\centerline{ \includegraphics[width=6.0cm]{figLattice.pdf} }
\caption{(a) Schematic representation of the Haldane-Hubbard model
\eqref{eqHH0} on the honeycomb lattice, indicating the
nearest-neighbor $t_1$ and next-nearest-neighbor
$t_2 e^{\pm i \phi}$ hoppings and the on-site Hubbard
repulsion energy $U_a$.
Blue and red circles indicate the sites of the (triangular)
sublattices $A$ and $B$, respectively.
$\mbox{\boldmath $\delta $}_i$ and $\mbox{\boldmath $\tau $}_i$ are the nearest-neighbor \eqref{deltavectors}
and next-nearest-neighbor \eqref{tauvectors} vectors, respectively.
(b) The first Brillouin zone and its high-symmetry points:
$\mathbf{K} = ( 4\pi/3\sqrt{3}, 0 )$,
$\mathbf{K'} = ( 2\pi/3\sqrt{3}, 2\pi/3 )$,
$\mathbf{M}_1 = ( \pi/\sqrt{3}, \pi/3)$, and
$\mathbf{M}_2 = ( 0, 2\pi/3 )$.
The nearest-neighbor distance of the honeycomb lattice is set to $a = 1$.
}
\label{figLattice}
\end{figure}
In the present paper, we study the flat-band ferromagnetic phase of a
correlated Chern insulator on a honeycomb lattice described by the
Haldane-Hubbard model. We consider configurations close to the
nearly-flat band limit of the lower (noninteracting) band that was previously
determined for the (spinless) Haldane model \cite{neupert2011fractional}.
We describe the flat-band ferromagnetic phase of the Haldane-Hubbard
model within the bosonization formalism for flat-band correlated Chern
insulators proposed in Ref.~\cite{doretto2015flat}.
Such a bosonization scheme is based on the method proposed to study
the quantum Hall ferromagnetic phase of a two-dimensional electron
gas at filling factor $\nu=1$ \cite{doretto2005lowest}. It was later
employed to describe the quantum Hall ferromagnetic phases realized
in graphene at filling factors $\nu = 0$ and $\nu = \pm 1$
\cite{doretto2007quantum}.
We show that the bosonization scheme allows us to map the
Haldane-Hubbard model in the nearly-flat band limit of its lower band
to an effective interacting boson model.
Our main finding is the spin-wave spectrum of the flat-band
ferromagnetic phase, which corresponds to the dispersion relation of
the bosons determined from the effective boson model within a
harmonic approximation. We find that the spin-wave excitation
spectrum has one gapped and one gapless excitation branch, with a
Goldstone mode at the center of the first Brillouin zone and Dirac
points at the $K$ and $K'$ points [see Fig.~\ref{figEspectro1}(a), below].
Introducing an energy offset in the on-site Hubbard repulsion energies
associated with the (triangular) sublattices $A$ and $B$, one finds
that an energy gap opens in the spin-wave excitation spectrum at
the $K$ and $K'$ points.
The effects on the spin-wave spectrum due to the presence of a
staggered on-site energy term related to the sublattices $A$ and $B$
are also discussed.
Our paper is organized as follows.
In Sec.~\ref{sec:TBmodel}, we introduce the Haldane-Hubbard model on a
honeycomb lattice and discuss in detail the band structure of the
noninteracting term close to the nearly flat-band limit determined in
Ref.~\cite{neupert2011fractional}.
In Sec.~\ref{sec:boso}, we briefly review the bosonization scheme for
flat-band Chern insulators \cite{doretto2015flat}.
Sec.~\ref{sec:flatferromagnetism} is devoted to the description of the
flat-band ferromagnetic phase of the Haldane-Hubbard model:
we quote the expression of the effective interacting boson model
derived within the bosonization scheme and determine
the spin-wave excitation spectra in the nearly flat-band limit and
slightly away from this limit;
the effects of an energy offset in the on-site Hubbard repulsion term
and of a staggered on-site energy term are also discussed.
In Sec.~\ref{sec:summary},
we discuss our results and provide a brief summary of our main
findings.
Details of the bosonization formalism are presented in Appendices
\ref{ap:functions} and \ref{ap:BosoDetails} while additional results
derived within the bosonization scheme for the topological Hubbard
model on a square lattice previously studied in Ref.~\cite{doretto2015flat}
are reported in Appendix~\ref{ap:square}.
\section{Haldane-Hubbard model}
\label{sec:TBmodel}
Let us consider $N_e$ spin-$1/2$ electrons on a honeycomb lattice
described by the Haldane-Hubbard model, whose Hamiltonian is
given by
\cite{he2011chiral,he2011topological,maciejko2013topological,
hickey15,hickey2016haldane,zheng15,vanhala16,wu2016quantum,arun16,troyer16}
\begin{equation}
H = H_0 + H_U,
\label{eqHH0}
\end{equation}
where
\begin{align}
H_0 &= t_1 \sum_{i \in A, \delta, \sigma} \left( c_{i A \sigma}^{\dagger} c_{i + \delta B \sigma}
+ {\rm H.c.} \right)
\nonumber \\
&+ t_2 \sum_{i \in A, \tau, \sigma}
\left( e^{-i\phi} c_{i A \sigma}^{\dagger} c_{i + \tau A \sigma} + {\rm H.c.} \right)
\nonumber \\
&+ t_2 \sum_{i \in B, \tau, \sigma}
\left( e^{+i\phi} c_{i B \sigma}^{\dagger} c_{i + \tau B \sigma} + {\rm H.c.} \right)
\label{eqHH2}
\end{align}
and
\begin{equation}
H_U = \sum_i \sum_{a = A,B} U_a \hat{\rho}_{i a \uparrow} \hat{\rho}_{i a \downarrow}.
\label{eqHH}
\end{equation}
Here the operator $c_{i a \sigma}^{\dagger}$ ($c_{i a \sigma}$) creates (destroys)
an electron with spin $\sigma = \uparrow, \downarrow$ on site $i$
of the (triangular) sublattice $a=A$, $B$ of the honeycomb lattice.
$t_1 \ge 0$ and $t_2 e^{\pm i \phi}$ with $t_2 \ge 0$ are, respectively, the
nearest-neighbor and next-nearest-neighbor hoppings.
One notices that the electron acquires a $+\phi$ ($-\phi$) phase as it moves
in the same (opposite) direction of the arrows within the same
sublattice [see dashed lines in Fig.~\ref{figLattice}(a)].
Indeed, the complex next-nearest-neighbor hopping $t_2 e^{\pm i \phi}$
results in a fictitious flux pattern with zero net flux per unit cell
[see, e.g., Fig.~1(a) of Ref.~\cite{neupert2011fractional} for details].
The index $\delta$ corresponds to the nearest-neighbor vectors
[Fig.~\ref{figLattice}(a)]
\begin{align}
\mbox{\boldmath $\delta $}_1 &= -a\hat{y},
\nonumber \\
\mbox{\boldmath $\delta $}_2 &= \frac{a}{2}\left( \sqrt{3}\hat{x} + \hat{y} \right), \quad\quad
\mbox{\boldmath $\delta $}_3 = -\frac{a}{2}\left( \sqrt{3}\hat{x} - \hat{y} \right),
\label{deltavectors}
\end{align}
while $\tau$ indicates the next-nearest-neighbor vectors
\begin{align}
\mbox{\boldmath $\tau $}_1 &= \mbox{\boldmath $\delta $}_2 - \mbox{\boldmath $\delta $}_3 = a\sqrt{3}\hat{x},
\nonumber \\
\mbox{\boldmath $\tau $}_2 &= \mbox{\boldmath $\delta $}_3 - \mbox{\boldmath $\delta $}_1 = -\frac{a}{2}\left( \sqrt{3}\hat{x} - 3\hat{y} \right),
\nonumber \\
\mbox{\boldmath $\tau $}_3 &= \mbox{\boldmath $\delta $}_1 - \mbox{\boldmath $\delta $}_2 = -\frac{a}{2}\left( \sqrt{3}\hat{x} + 3\hat{y} \right).
\label{tauvectors}
\end{align}
In the following, we set the nearest-neighbor distance to unity, i.e.,
$a = 1$.
Finally, the $H_U$ term [Eq.~\eqref{eqHH}] is the on-site Hubbard repulsion term,
which represents the energy cost of doubly occupying site $i$,
and
\begin{equation}
\hat{\rho}_{i a\sigma} = c_{i a \sigma}^{\dagger} c_{i a \sigma}
\label{dens-op}
\end{equation}
is the density operator associated with electrons with spin $\sigma$
at site $i$ of sublattice $a$. We consider that
the on-site repulsion energy $U_a > 0$ can depend on the sublattice $a$.
\subsection{Tight-binding term with nearly-flat topological bands}
\label{sec:FreeModel}
In this section, we discuss in detail the noninteracting term $H_0$
[Eq.~\eqref{eqHH2}] of the Hamiltonian \eqref{eqHH0} and show that it
can display an almost flat (lower) electronic band that is topologically
nontrivial.
\begin{figure*}[t]
\centerline{
\includegraphics[width=5.4cm]{spectrumFree.pdf}
\hskip0.7cm
\includegraphics[width=5.4cm]{spectrumFreephi.pdf}
\hskip0.7cm
\includegraphics[width=5.4cm]{spectrumFreephi2.pdf}
}
\caption{ Band structure \eqref{eq:omega} of the noninteracting hopping term
\eqref{eqHH2} (in units of the nearest-neighbor hopping amplitude $t_1$)
along paths in the first Brillouin zone [Fig.~\ref{figLattice}(b)] for
different values of the next-nearest-neighbor hopping amplitude $t_2$
and phase $\phi$:
(a) $t_2 = 0.3155\, t_1$, $\phi=0$ (green) and
$t_2 = 0.3155\, t_1$, $\phi=0.656$ (magenta);
(b) $\phi = 0.4$ (blue),
$\phi = 0.5$ (green), and
$\phi = 0.656$ (magenta),
with $t_2$ given by the relation $\cos(\phi) = t_1/ (4 t_2)$; and
(c) $\phi = 0.656$ (magenta),
$\phi = 0.75$ (green), and
$\phi = 0.85$ (blue),
with $t_2$ given by the relation $\cos(\phi) = t_1/ (4 t_2)$.
}
\label{figEspectro}
\end{figure*}
The first step to diagonalize the free-electron Hamiltonian \eqref{eqHH2}
is to perform a Fourier transform,
\begin{equation}
c_{i a \sigma}^{\dagger} = \frac{1}{\sqrt{N_a}} \sum_{{\bf k} \in {\rm BZ}}
e^{-i \mathbf{k} \cdot \mathbf{R}_i} c_{ \mathbf{k} a \sigma}^{\dagger} ,
\label{eq:Fourier}
\end{equation}
where the momentum sum runs over the first Brillouin zone (BZ)
[Fig.~\ref{figLattice}(b)] associated with the underlying triangular
Bravais lattice and $N_a = N$ is the number of sites of the sublattice $a$.
It is then easy to show that the noninteracting Hamiltonian \eqref{eqHH2}
can be written in a matrix form,
\begin{equation}
H_0 = \sum_{\mathbf{k}} \Psi_{\mathbf{k} }^{\dagger} H_{\bf k} \Psi_{\mathbf{k} },
\label{eqH0k}
\end{equation}
where the $4 \times 4$ $H_{\bf k}$ matrix reads
\begin{equation}
H_{\bf k} = \left(\begin{array}{cc}
h^{\uparrow}_{\mathbf{k}} & 0 \\
0 & h^{\downarrow}_{\mathbf{k}}
\end{array} \right)
\end{equation}
and the four-component spinor $\Psi_{\mathbf{k}} $ is given by
\begin{equation}
\Psi_{\mathbf{k}} = \left(
c_{ \mathbf{k} A \uparrow} \;\;
c_{ \mathbf{k} B \uparrow} \;\;
c_{ \mathbf{k} A \downarrow} \;\;
c_{ \mathbf{k} B \downarrow} \right)^T.
\end{equation}
The $2 \times 2$ matrices $h^\sigma_{\bf k}$ associated with each spin
sector are such that $h^\uparrow_{\bf k} = h^\downarrow_{\bf k} = h_{\bf k}$, with the $h_{\bf k}$
matrix given by
\begin{align}
h_{\bf k} &= \left( \begin{array}{cc}
2 t_2\sum_{\tau} \cos(\mathbf{k} \cdot \mbox{\boldmath $\tau $} + \phi)
& t_1 \sum_{\delta} e^{i \mathbf{k} \cdot \mbox{\boldmath $\delta $} } \\
t_1 \sum_{\delta} e^{-i \mathbf{k} \cdot \mbox{\boldmath $\delta $} }
& 2 t_2 \sum_{\tau} \cos(\mathbf{k} \cdot \mbox{\boldmath $\tau $} - \phi )
\end{array} \right).
\nonumber
\end{align}
It is possible to write the $h_{\bf k}$ matrix in terms of the identity
matrix $\tau_0$ and the vector $\hat{\tau} = (\tau_1, \tau_2, \tau_3)$, whose
components are Pauli matrices,
\begin{equation}
h_{\bf k} = B_{0, \mathbf{k} } \tau_0 + \mathbf{B}_{\mathbf{k} } \cdot \hat{\tau} ,
\label{eqPauli}
\end{equation}
where the $B_{0, \mathbf{k} }$ function and the components of the vector
$\mathbf{B}_{\mathbf{k} }=( B_{1,\mathbf{k} }, B_{2, \mathbf{k} }, B_{3, \mathbf{k} })$
are given by
\begin{align}
B_{0,\mathbf{k} } & = 2 t_2 \cos(\phi )\sum_{\mathbf{\tau}} \cos(\mathbf{k} \cdot \mbox{\boldmath $\tau $} ) ,
\nonumber \\
B_{1,\mathbf{k} } &= t_1 \sum_{\mathbf{\delta}} \cos(\mathbf{k} \cdot \mbox{\boldmath $\delta $}) ,
\nonumber \\
B_{2,\mathbf{k} } &= t_1 \sum_{\mathbf{\delta}} \sin(\mathbf{k} \cdot \mbox{\boldmath $\delta $} ) ,
\nonumber \\
B_{3,\mathbf{k} } &= -2 t_2 \sin(\phi ) \sum_{\mathbf{\tau}} \sin(\mathbf{k} \cdot \mbox{\boldmath $\tau $} ),
\label{eqBs}
\end{align}
with the indices $\delta$ and $\tau$ corresponding to the
nearest-neighbor \eqref{deltavectors} and next-nearest-neighbor
\eqref{tauvectors} vectors, respectively.
The fact that the matrices $h^\sigma_{\bf k}$ for the two spin sectors
coincide, $h^\uparrow_{\bf k} = h^\downarrow_{\bf k} = h_{\bf k}$, indicates that the
noninteracting model \eqref{eqHH2} breaks time-reversal symmetry
(see, e.g., Appendix A of Ref.~\cite{doretto2015flat} for details).
The Hamiltonian \eqref{eqH0k} can be diagonalized via the canonical
transformation
\begin{align}
d_{ \mathbf{k} \sigma} = u_{\mathbf{k}} c_{ \mathbf{k} A \sigma} + v_{\mathbf{k}} c_{ \mathbf{k} B \sigma} ,
\nonumber \\
c_{ \mathbf{k} \sigma} = v_{\mathbf{k}}^{*} c_{ \mathbf{k} A \sigma} - u_{\mathbf{k}}^{*} c_{ \mathbf{k} B \sigma},
\label{eq:BogoTransf}
\end{align}
where the coefficients $u_{\bf k}$ and $v_{\bf k}$ are given by
\begin{align}
|u_{\mathbf{k}}|^2 &= \frac{1}{2} \left(1+\hat{B}_{3, \mathbf{k}} \right),
\quad
|v_{\mathbf{k}}|^2 = \frac{1}{2} \left(1-\hat{B}_{3, \mathbf{k}} \right),
\nonumber \\
u_{\mathbf{k}} v_{\mathbf{k}}^{*} &= \frac{1}{2} \left( \hat{B}_{1, \mathbf{k}} + i \hat{B}_{2, \mathbf{k}} \right),
\label{eq:Bogocoef}
\end{align}
with $\hat{B}_{i,\mathbf{k}}$ denoting the $i$th component of the normalized
vector $\hat{\mathbf{B}}_{\bf k} = \mathbf{B}_{\bf k} /|\mathbf{B}_{\bf k}|$.
After the diagonalization, the Hamiltonian \eqref{eqH0k} then reads
\begin{align}
H_0 = \sum_{\mathbf{k} \sigma }
\omega_{\mathbf{k}}^c c_{\mathbf{k} \sigma}^{\dagger} c_{\mathbf{k} \sigma}
+ \omega_{\mathbf{k}}^d d_{\mathbf{k} \sigma}^{\dagger} d_{\mathbf{k} \sigma},
\label{eq:Hfree}
\end{align}
with the dispersions of the lower band $c$ ($-$ sign)
and the upper one $d$ ($+$ sign) given by
\begin{align}
\omega^{d/c}_{\mathbf{k}} = B_0 \pm \sqrt{B_{1, \mathbf{k}}^2 + B_{2, \mathbf{k}}^2 + B_{3, \mathbf{k}}^2 } .
\label{eq:omega}
\end{align}
Notice that both the $c$ and $d$ free-electronic bands are doubly degenerate
with respect to the spin degree of freedom.
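As a quick numerical consistency check (not part of the derivation), the dispersions \eqref{eq:omega} can be evaluated directly from the functions \eqref{eqBs}. The following illustrative Python sketch, in units of $t_1$, verifies that the two bands touch at the $K$ point for $\phi=0$ and are gapped for $\phi=0.656$:

```python
import numpy as np

# Nearest- and next-nearest-neighbor vectors defined above, with a = 1.
delta = np.array([[0.0, -1.0],
                  [np.sqrt(3)/2, 0.5],
                  [-np.sqrt(3)/2, 0.5]])
tau = np.array([[np.sqrt(3), 0.0],
                [-np.sqrt(3)/2, 1.5],
                [-np.sqrt(3)/2, -1.5]])

def bands(k, t1=1.0, t2=0.3155, phi=0.656):
    """Dispersions (omega_c, omega_d) of the lower and upper bands."""
    B0 = 2*t2*np.cos(phi)*np.sum(np.cos(tau @ k))
    B1 = t1*np.sum(np.cos(delta @ k))
    B2 = t1*np.sum(np.sin(delta @ k))
    B3 = -2*t2*np.sin(phi)*np.sum(np.sin(tau @ k))
    r = np.sqrt(B1**2 + B2**2 + B3**2)
    return B0 - r, B0 + r

K = np.array([4*np.pi/(3*np.sqrt(3)), 0.0])
wc0, wd0 = bands(K, phi=0.0)   # B1 = B2 = B3 = 0 at K: the bands touch
wc, wd = bands(K)              # finite phi: a gap opens at K
print(wd0 - wc0, wd - wc)
```

At $K$ with finite $\phi$ one finds the gap $2|B_{3,K}| = 6\sqrt{3}\,t_2\sin\phi$, since $B_{1,K} = B_{2,K} = 0$.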
Figure \ref{figEspectro}(a) shows the electronic bands \eqref{eq:omega}
along paths in the first Brillouin zone [Fig.~\ref{figLattice}(b)] for two different
parameter sets.
For $t_2 = 0.3155\, t_1$ and $\phi=0$, the spectrum is gapless due to
the presence of Dirac points at the $K$ and $K'$ points, i.e., the
upper and lower bands touch at these points and the bands disperse
linearly with momentum around them.
A finite phase $\phi$ breaks time-reversal symmetry and opens a
gap $\Delta$ between the lower and upper bands at the $K$ and $K'$
points, as exemplified for the
parameter choice $t_2 = 0.3155\, t_1$ and $\phi=0.656$.
Moreover, a finite phase $\phi$ renders the free-electronic bands
topologically nontrivial, since the corresponding Chern numbers
\cite{kane13, neupert2011fractional}
\begin{equation}
C_{\sigma}^{c/d} =\pm \frac{1}{4 \pi} \int_{BZ} d^2k
\hat{\mathbf{B}}_k \cdot (\partial_{k_x} \hat{\mathbf{B}}_k \times \partial_{k_y} \hat{\mathbf{B}}_k )
\label{eqCn}
\end{equation}
are finite. One finds that $C^c_\sigma = +1$ and $C^d_\sigma = -1$
respectively for the lower and the upper bands, regardless of the spin.
As mentioned in the Introduction,
such nonzero Chern numbers combined with broken time-reversal
symmetry indicate that the gapped phase of the noninteracting model
\eqref{eqHH2} at half-filling is indeed a Chern band insulator
\cite{kane13,rpp-rachel18}.
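The Chern numbers \eqref{eqCn} can also be verified numerically. The sketch below uses the standard lattice (link-variable) method of Fukui, Hatsugai, and Suzuki rather than a direct discretization of Eq.~\eqref{eqCn}; the Bloch Hamiltonian is written in a periodic gauge (off-diagonal phases referred to Bravais vectors), which leaves the Chern number unchanged, and the overall sign of the result depends on orientation conventions:

```python
import numpy as np

tau = np.array([[np.sqrt(3), 0.0],
                [-np.sqrt(3)/2, 1.5],
                [-np.sqrt(3)/2, -1.5]])
# Reciprocal-lattice vectors of the underlying triangular Bravais lattice.
b1 = 2*np.pi*np.array([1/np.sqrt(3), 1/3])
b2 = 2*np.pi*np.array([0.0, 2/3])

def hk(k, t1=1.0, t2=0.3155, phi=0.656):
    """Periodic-gauge Bloch Hamiltonian; the B0 identity part is dropped,
    as it does not affect the eigenvectors."""
    f = t1*(1 + np.exp(-1j*(k @ tau[2])) + np.exp(1j*(k @ tau[1])))
    B3 = -2*t2*np.sin(phi)*np.sum(np.sin(tau @ k))
    return np.array([[B3, f], [np.conj(f), -B3]])

def chern_lower(N=24):
    """Chern number of the lower band from link variables on an N x N grid."""
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            _, vecs = np.linalg.eigh(hk((i/N)*b1 + (j/N)*b2))
            u[i, j] = vecs[:, 0]        # eigh sorts ascending: lower band c
    flux = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            plaq = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(plaq)
    return flux/(2*np.pi)

C = chern_lower()
print(round(C))   # |C| = 1 in the topological phase
```

Even for modest grids the link method returns an exactly quantized integer, since the total lattice flux through the Brillouin-zone torus is a multiple of $2\pi$.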
The phase diagram $t_2/t_1$ vs $\phi$ for the noninteracting model
\eqref{eqHH2} at half-filling can be found, e.g., in Ref.~\cite{hickey15}:
in addition to a (gapped) Chern band insulator phase with quantized
Hall conductivity $\sigma_{xy} = \pm e^2/h$ per spin, the model
also displays a Chern metal phase with nonquantized $\sigma_{xy}$.
For the parameter choice (nearly flat-band limit)
\begin{equation}
t_2 = 0.3155 \, t_1 \quad\quad {\rm and}
\quad\quad \phi=0.656,
\label{optimal-par}
\end{equation}
one also sees that the lower band $c$ is almost flat. Indeed, such a choice obeys
the relation $\cos(\phi) = t_1/ (4 t_2) = 3\sqrt{3/43}$
\cite{neupert2011fractional} which yields a large flatness ratio
for the lower band $f_c = \Delta/W_c = 7$, where
$\Delta = {\rm min}(\omega^d_{\bf k}) - {\rm max}(\omega^c_{\bf k})$
is the energy gap and
$W_c = {\rm max}(\omega^c_{\bf k}) - {\rm min}(\omega^c_{\bf k})$
is the width of the lower band $c$.
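The flatness ratio can be checked by scanning the dispersions \eqref{eq:omega} over a discretized Brillouin zone. An illustrative sketch (the grid size $N$ is an arbitrary numerical choice):

```python
import numpy as np

delta = np.array([[0.0, -1.0], [np.sqrt(3)/2, 0.5], [-np.sqrt(3)/2, 0.5]])
tau = np.array([[np.sqrt(3), 0.0], [-np.sqrt(3)/2, 1.5], [-np.sqrt(3)/2, -1.5]])
b1 = 2*np.pi*np.array([1/np.sqrt(3), 1/3])
b2 = 2*np.pi*np.array([0.0, 2/3])

def flatness(t2, phi, t1=1.0, N=100):
    """Flatness ratio f_c = Delta/W_c of the lower band on an N x N grid."""
    wc, wd = [], []
    for i in range(N):
        for j in range(N):
            k = (i/N)*b1 + (j/N)*b2
            B0 = 2*t2*np.cos(phi)*np.sum(np.cos(tau @ k))
            B1 = t1*np.sum(np.cos(delta @ k))
            B2 = t1*np.sum(np.sin(delta @ k))
            B3 = -2*t2*np.sin(phi)*np.sum(np.sin(tau @ k))
            r = np.sqrt(B1**2 + B2**2 + B3**2)
            wc.append(B0 - r)
            wd.append(B0 + r)
    return (min(wd) - max(wc)) / (max(wc) - min(wc))

f = flatness(0.3155, 0.656)             # nearly flat-band limit: f close to 7
f_away = flatness(1/(4*np.cos(0.5)), 0.5)  # moving away from the optimum
print(f, f_away)
```

One finds that moving away from the optimal parameter choice indeed lowers the flatness ratio.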
It is easy to see that the flatness ratio decreases as one moves away
from the optimal parameter choice \eqref{optimal-par}.
For instance, in Fig.~\ref{figEspectro}(b), we plot the band structure
\eqref{eq:omega} for $\phi = 0.4$, $0.5$, and $0.656$ with $t_2$ given by
$\cos(\phi) = t_1/ (4 t_2)$. One notices that, as the phase $\phi$
decreases, the flatness ratio $f_c$ also decreases, i.e., the energy
gap at the $K$ and $K'$ points decreases while the band width $W_c$ of
the lower band $c$ increases.
The flatness ratio $f_c$ also decreases for $\phi > 0.656$, see
Fig.~\ref{figEspectro}(c).
In the following, we consider configurations close to the
nearly flat-band limit \eqref{optimal-par} of the lower band $c$.
It is worth mentioning that previous studies
\cite{he2011chiral,he2011topological,zheng15,arun16,troyer16,vanhala16}
of the Haldane-Hubbard model \eqref{eqHH0} focused on configurations
with $\phi=\pi/2$, which yields a particle-hole symmetric
noninteracting band structure.
\begin{figure}[b]
\centerline{
\includegraphics[width=6.5cm]{spectrumFreeM.pdf}
}
\caption{Band structure \eqref{eq:omega} of the noninteracting hopping term
\eqref{eqHH2} with the additional staggered on-site energy term \eqref{eq:Hmass}
(in units of the nearest-neighbor hopping amplitude $t_1$)
along paths in the first Brillouin zone for the next-nearest-neighbor
hopping amplitude $t_2 = 0.3155\, t_1$, phase $\phi=0.656$,
and staggered on-site energy
$M = 0$ (magenta),
$0.1$ (green),
$0.2$ (blue), and
$0.3\, t_1$ (orange).
}
\label{figEspectro2}
\end{figure}
\subsection{Staggered on-site energy term}
\label{sec:smass}
An additional interesting term, which is also present in Haldane's original model
\cite{haldane1988model}, is a staggered on-site energy term that
breaks inversion symmetry:
\begin{equation}
H_M = M \sum_{i \sigma} \left( c_{i A \sigma}^{\dagger} c_{i A \sigma}
- c_{i B \sigma}^{\dagger} c_{i B \sigma} \right).
\label{eq:Hmass}
\end{equation}
Adding the staggered on-site energy term \eqref{eq:Hmass} to the tight-binding model
\eqref{eqHH2}, one easily finds that the new Hamiltonian $H_0 + H_M$
also assumes the form \eqref{eqH0k} with the $B_{0,{\bf k}}$ and
the $B_{i,{\bf k}}$ ($i=1,2,3$) functions given by Eq.~\eqref{eqBs} apart from the
replacement
\begin{equation}
B_{3, \mathbf{k}} \rightarrow B_{3, \mathbf{k}} + M.
\label{B3M}
\end{equation}
In Fig.~\ref{figEspectro2}, we plot the band structure
\eqref{eq:omega} for the parameters \eqref{optimal-par} and
$M = 0$, $0.1$, $0.2$, and $0.3\, t_1$.
We notice that, for a finite on-site energy $M > 0$ ($M < 0$), the
energy gap is located at the $K'$ ($K$) point.
Moreover, as the parameter $M$ increases,
the energy gap decreases,
the difference ($\omega^c_{K'} - \omega^c_K$) increases, and
the flatness ratio of the lower band $c$ decreases.
Indeed, increasing the parameter $M$ can induce a gap closure
that destroys the topological phase;
see Fig.~2 of Ref.~\cite{haldane1988model}.
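Since $B_{1,\mathbf{k}}$ and $B_{2,\mathbf{k}}$ vanish at the $K$ and $K'$ points, the direct gap there reduces to $2|B_{3,\mathbf{k}} + M|$ after the replacement \eqref{B3M}, and the valley asymmetry induced by $M$ can be checked directly; an illustrative sketch:

```python
import numpy as np

t1, t2, phi = 1.0, 0.3155, 0.656
tau = np.array([[np.sqrt(3), 0.0], [-np.sqrt(3)/2, 1.5], [-np.sqrt(3)/2, -1.5]])
K = np.array([4*np.pi/(3*np.sqrt(3)), 0.0])
Kp = np.array([2*np.pi/(3*np.sqrt(3)), 2*np.pi/3])

def direct_gap(k, M):
    """Direct gap at a Dirac point: B1 = B2 = 0 there, so the gap
    is 2|B3 + M| after the replacement B3 -> B3 + M."""
    B3 = -2*t2*np.sin(phi)*np.sum(np.sin(tau @ k))
    return 2*abs(B3 + M)

# For M > 0 the gap at K' shrinks while the gap at K grows.
for M in (0.0, 0.1, 0.2, 0.3):
    print(M, direct_gap(K, M), direct_gap(Kp, M))
```

Here $B_{3,K} = -B_{3,K'} = 3\sqrt{3}\, t_2 \sin\phi > 0$, so a positive $M$ lowers the gap at $K'$ only, in agreement with Fig.~\ref{figEspectro2}.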
In Sec.~\ref{sec:dispersiviness} below, we consider a finite staggered
on-site energy $M$ as a source of departure of the lower band $c$ from the
nearly flat-band limit \eqref{optimal-par}.
\subsection{Hubbard term in momentum space}
\label{sec:hubbard}
The expression of the on-site Hubbard interaction \eqref{eqHH} easily
follows from the Fourier transform of the electron density operator
\eqref{dens-op}, which is given by
\begin{equation}
\hat{\rho}_{i a \sigma} = \frac{1}{N} \sum_{{\bf q} \in {\rm BZ}}
e^{i \mathbf{q} \cdot \mathbf{R}_i} \hat{\rho}_{a \sigma}({\bf q}).
\label{eq:Fourier-rho}
\end{equation}
Substituting Eq.~\eqref{eq:Fourier-rho} into the Hamiltonian
\eqref{eqHH}, one finds
\begin{equation}
H_U = \frac{1}{N}\sum_{a = A,B} \sum_{\bf q} U_a
\hat{\rho}_{a \uparrow}(-{\bf q}) \hat{\rho}_{a \downarrow}({\bf q}).
\label{hu-k}
\end{equation}
It is also useful to determine the expansion of the density operator
$\hat{\rho}_{a \sigma}({\bf q})$ in terms of the fermion operators $c_{{\bf k}\,a\,\sigma}$.
With the aid of Eqs.~\eqref{dens-op}, \eqref{eq:Fourier}, and \eqref{eq:Fourier-rho},
one shows that
\begin{equation}
\hat{\rho}_{a \sigma}({\bf q}) = \sum_{\bf p} c^\dagger_{{\bf p}-{\bf q}\, a\,\sigma}c_{{\bf p}\, a\,\sigma}.
\label{density-op-2}
\end{equation}
The canonical transformation \eqref{eq:BogoTransf} allows us to
express \eqref{density-op-2} in terms of the fermion operators
$c_{{\bf k}\,\sigma}$ and $d_{{\bf k}\,\sigma}$. In particular, the density
operator \eqref{density-op-2} {\sl projected} into the lower
noninteracting band $c$ reads \cite{doretto2015flat}
\begin{equation}
\bar{\rho}_{a\, \sigma}({\bf q}) = \sum_{\bf p}
G_a({\bf p},{\bf q})c^\dagger_{{\bf p}-{\bf q}\,\sigma}c_{{\bf p}\,\sigma},
\label{proj-dens-op}
\end{equation}
with the $ G_a({\bf p},{\bf q})$ function given by Eq.~\eqref{eq:Ga}.
Once the expression of the projected density operator
\eqref{proj-dens-op} is known,
we can determine the projection $\bar{H}_U$ of the on-site Hubbard
term \eqref{hu-k} into the noninteracting lower bands $c$. Indeed,
$\bar{H}_U$ assumes the form \eqref{hu-k} with the replacement
$\hat{\rho}_{a \sigma}({\bf q}) \rightarrow \bar{\rho}_{a \sigma}({\bf q})$, i.e.,
\begin{equation}
\bar{H}_U = \frac{1}{N}\sum_{a = A,B} \sum_{\bf q} U_a
\bar{\rho}_{a \uparrow}(-{\bf q}) \bar{\rho}_{a \downarrow}({\bf q}).
\label{hu-k-bar}
\end{equation}
\section{Bosonization formalism for flat-band Chern insulators}
\label{sec:boso}
In this section, we briefly summarize the bosonization scheme for a Chern
insulator introduced in Ref.~\cite{doretto2015flat} for the
description of the flat-band ferromagnetic phase of a correlated Chern
insulator on a square lattice.
\begin{figure}[t]
\centerline{
\includegraphics[width=6.0cm]{figExcitation.pdf}
}
\caption{Schematic representation of
(a) the ground state \eqref{eq:FM} of the noninteracting term \eqref{eqHH2}
in the nearly-flat limit \eqref{optimal-par} of the lower band $c$
at $1/4$-filling and
(b) the particle-hole pair excitation above the ground state \eqref{eq:FM}.
Although the free bands $c$ and $d$ are doubly degenerate with respect
to the spin degree of freedom, an offset between the
$\sigma = \uparrow$ and $\downarrow$ bands is introduced for clarity.}
\label{figExcitation}
\end{figure}
Let us consider a spinful Chern insulator on a bipartite lattice whose
Hamiltonian assumes the form \eqref{eqH0k}. We choose the model
parameters such that (at least) the lower band $c$ is (nearly)
flat and consider that the number of electrons
$N_e = N_A = N_B = N$, where $N_A$ and $N_B$ are, respectively, the
number of sites of the sublattices $A$ and $B$. Such a choice
corresponds to a $1/4$-filling, i.e., the lower (nearly-flat) band $c$
is half-filled.
In particular, let us assume that the lower band $c \, \uparrow$
is completely occupied, as illustrated in Fig.~\ref{figExcitation}(a).
In this case, the ground state of the noninteracting system
(the {\sl reference} state) is completely spin polarized and it can be
written as a product of single particle states,
\begin{equation}
|{\rm FM} \rangle = \prod_{\mathbf{k} \in BZ} c_{\mathbf{k} \uparrow}^{\dagger} |0 \rangle .
\label{eq:FM}
\end{equation}
Since the lower flat bands $c$ are separated from the upper
bands $d$ by an energy gap, the lowest-energy neutral excitations
above the ground state \eqref{eq:FM} are given by particle-hole pairs
within the lower bands $c$, i.e., they are spin-flips that can be
written as [see Fig.~\ref{figExcitation}(b)]
\begin{equation}
| \Psi_{\mathbf{k}} \rangle \propto S_{\bf k}^{-} | {\rm FM} \rangle.
\label{eq:Sexcitation}
\end{equation}
It is possible to show that such particle-hole pair excitations can be
treated approximately as bosons. Indeed, one can define the
boson operators
\begin{align}
b_{\alpha,\mathbf{q}} &= \frac{\bar{S}_{-\mathbf{q},\alpha}^{+}}{F_{\alpha\alpha,\mathbf{q}}}
= \frac{1}{F_{\alpha\alpha,\mathbf{q}}} \sum_{\mathbf{p}} g_{\alpha} (\mathbf{p}, -\mathbf{q})
c_{\mathbf{p+q}\uparrow}^{\dagger} c_{\mathbf{p}\downarrow},
\nonumber \\
b_{\alpha,\mathbf{q}}^{\dagger} &= \frac{\bar{S}_{\mathbf{q},\alpha}^{-}}{F_{\alpha\alpha,\mathbf{q}}}
= \frac{1}{F_{\alpha\alpha,\mathbf{q}}} \sum_{\mathbf{p}} g_{\alpha} (\mathbf{p}, \mathbf{q})
c_{\mathbf{p-q}\downarrow}^{\dagger} c_{\mathbf{p}\uparrow},
\label{eq:bosons}
\end{align}
with $\alpha = 0,1$, that satisfy the canonical commutation relations
\begin{align}
[b_{\alpha,\mathbf{k}} , b_{\beta, \mathbf{q}}^{\dagger} ] &= \delta_{\alpha, \beta} \; \delta_{\mathbf{k}, \mathbf{q}},
\nonumber \\
[b_{\alpha,\mathbf{k}} , b_{\beta,\mathbf{q}} ] &= [b_{\alpha,\mathbf{k}}^{\dagger} , b_{\beta,\mathbf{q}}^{\dagger} ] = 0,
\label{eq:BComutations}
\end{align}
and whose vacuum state is given by the (reference) spin-polarized
state \eqref{eq:FM}, i.e.,
\begin{equation}
b_{\alpha,{\bf q}} | {\rm FM} \rangle = 0.
\label{vacuum}
\end{equation}
Here the operators $\bar{S}_{{\bf q},\alpha}^\pm$ are linear combinations
of {\sl projected} spin operators associated with sublattices $A$ and $B$,
\begin{equation}
\bar{S}_{{\bf q},\alpha}^\pm = \bar{S}_{{\bf q},A}^\pm + (-1)^\alpha \bar{S}_{{\bf q},B}^\pm,
\label{eq:SprojAlpha}
\end{equation}
with $\alpha = 0,1$ and $\bar{S}^\pm_{{\bf q},a} = \bar{S}^x_{{\bf q},a} \pm i \bar{S}^y_{{\bf q},a}$.
The operator $\bar{S}^\lambda_{{\bf q},a}$, with $\lambda = x,y,z$, is the
$\lambda$-component of the spin operator $S^\lambda_{{\bf q},a}$ {\sl projected}
into the lower bands $c$ with $S^\lambda_{{\bf q},a}$ being the Fourier
transform of the spin operator $S^\lambda_{i,a}$ at site $i$ of the
sublattice $a$.
Indeed, the projected operator $\bar{S}^\lambda_{{\bf q},a}$ is determined
from $S^\lambda_{i,a}$ following the same procedure outlined in
Sec.~\ref{sec:hubbard} for the density operator \eqref{proj-dens-op}.
Finally, the $F_{\alpha\beta,\mathbf{q}}$ function is given by
\begin{equation}
F_{\alpha \beta,\mathbf{q}}^2 = \sum_{\bf p} \: g_{\alpha}(\mathbf{p}, \mathbf{q})
g^*_{\beta}(\mathbf{p},\mathbf{q}),
\label{eq:F2}
\end{equation}
with the $g_{\alpha}(\mathbf{p}, \mathbf{q})$ function being related
to the coefficients \eqref{eq:Bogocoef} of the canonical
transformation \eqref{eq:BogoTransf},
\begin{equation}
g_\alpha (\mathbf{p}, \mathbf{q}) = v_{\mathbf{p}-\mathbf{q}}^* v_{\mathbf{p}}
+ (-1)^{\alpha} u_{\mathbf{p}-\mathbf{q}}^* u_{\mathbf{p}}.
\label{eq:ga}
\end{equation}
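A simple limit illustrates these definitions (a sketch; we only use the
normalization $|u_{\bf p}|^2 + |v_{\bf p}|^2 = 1$ of the canonical
transformation \eqref{eq:BogoTransf}): for ${\bf q} = 0$,
\begin{equation}
g_0({\bf p},0) = |v_{\bf p}|^2 + |u_{\bf p}|^2 = 1
\quad \Rightarrow \quad
F^2_{00,0} = \sum_{\bf p} 1 = N,
\end{equation}
so that $b^\dagger_{0,0} = N^{-1/2} \sum_{\bf p} c^\dagger_{{\bf p}\downarrow} c_{{\bf p}\uparrow}$
is, apart from the normalization, the uniform (${\bf q}=0$) projected spin
lowering operator, i.e., the mode that appears below as the gapless Goldstone
excitation at the $\Gamma$ point.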
Any operator expanded in terms of the fermion operators $c^\dagger_{{\bf k}\sigma}$
and $c_{{\bf k}\sigma}$ can, in principle, be rewritten in terms of the bosons \eqref{eq:bosons}.
In particular, the density operator \eqref{proj-dens-op}
projected into the lower bands $c$ assumes the form
\begin{align}
\bar{\rho}_{a \sigma}(\mathbf{k}) &= \frac{1}{2}N\delta_{\sigma, \uparrow}\delta_{\mathbf{k}, 0}
+ \sum_{\alpha,\beta}\sum_{\mathbf{q}} \: \mathcal{G}_{\alpha \beta a \sigma}(\mathbf{k}, \mathbf{q})
b_{\beta,\mathbf{k}+\mathbf{q} }^{\dagger} b_{\alpha, \mathbf{q}},
\label{eq:rhoBoson}
\end{align}
where the $\mathcal{G}_{\alpha \beta a \sigma}({\bf k},{\bf q})$ function is
given by Eq.~\eqref{Gcal}.
Importantly, both $F_{\alpha \beta,\mathbf{q}}^2$ and $\mathcal{G}_{\alpha \beta a \sigma}({\bf k},{\bf q})$
functions can be explicitly written in terms of the coefficients \eqref{eqBs},
see Eqs.~\eqref{eq:ApF201} and \eqref{Gcal2}, respectively.
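As a simple consistency check of the normalization (a sketch; it only uses the
fact that, in the reference state \eqref{eq:FM}, the spin-$\uparrow$ band $c$
is completely filled while the spin-$\downarrow$ one is empty), one may verify
that
\begin{align}
\langle {\rm FM} | b_{\alpha,{\bf q}} \, b^\dagger_{\alpha,{\bf q}} | {\rm FM} \rangle
&= \frac{1}{F^2_{\alpha\alpha,{\bf q}}} \sum_{{\bf p},{\bf p}'}
g^*_\alpha({\bf p}',{\bf q}) \, g_\alpha({\bf p},{\bf q}) \,
\langle {\rm FM} | c^\dagger_{{\bf p}'\uparrow} c_{{\bf p}'-{\bf q}\downarrow}
c^\dagger_{{\bf p}-{\bf q}\downarrow} c_{{\bf p}\uparrow} | {\rm FM} \rangle
\nonumber \\
&= \frac{1}{F^2_{\alpha\alpha,{\bf q}}} \sum_{\bf p} | g_\alpha({\bf p},{\bf q}) |^2 = 1,
\end{align}
in agreement with Eqs.~\eqref{eq:BComutations} and \eqref{vacuum}.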
\section{Flat-band ferromagnetism in the Haldane-Hubbard model}
\label{sec:flatferromagnetism}
In this section, we study the flat-band ferromagnet phase of the
Haldane-Hubbard model \eqref{eqHH0} within the bosonization formalism
\cite{doretto2015flat} for flat-band Chern insulators.
In particular, we concentrate on the determination of the dispersion
relation of the elementary (neutral) particle-hole pair excitations,
i.e., we calculate the spectrum of the spin-wave
excitations above the (flat-band) ferromagnetic ground state \eqref{eq:FM}.
\begin{figure*}[t]
\centerline{
\includegraphics[width=6.0cm]{spectrumOmegaBar.pdf}
\hskip1.7cm
\includegraphics[width=6.0cm]{spectrumEab.pdf}
}
\caption{The real (solid magenta line) and imaginary (dashed green line) parts of
(a) the coefficient $\bar{\omega}^{01}_{\bf q}$ [Eq.~\eqref{eq:omegaBar}] and
(b) the coefficient $\epsilon^{01}_{\bf q}$ [Eq.~\eqref{eq:Epsilon}]
along paths in the first Brillouin zone [Fig.~\ref{figLattice}(b)]
for the Haldane-Hubbard model \eqref{eqHH0} in the nearly-flat band
limit \eqref{optimal-par} of the lower noninteracting band $c$.}
\label{fig:omegabar}
\end{figure*}
\subsection{Effective interacting boson model}
\label{sec:ChernInsu}
Let us consider the Haldane-Hubbard model \eqref{eqHH0}
on a honeycomb lattice with the noninteracting lower bands $c$ in the
nearly-flat band limit \eqref{optimal-par}
and at $1/4$-filling of its corresponding noninteracting limit, i.e.,
we assume that the number of electrons
$N_e = N_A = N_B = N$, with $N_A$ and $N_B$ being the
number of sites of the (triangular) sublattices $A$ and $B$, respectively.
In this case, the bosonization scheme \cite{doretto2015flat} allows us
to map the Hamiltonian \eqref{eqHH0} into an effective interacting
boson model.
In order to derive such an effective boson model,
the first step is to project the Hamiltonian \eqref{eqHH0} into the
lower noninteracting bands $c$ (see Eq.~(28) from
Ref.~\cite{doretto2015flat} for details):
\begin{align}
H \rightarrow \bar{H} &= \bar{H}_0 + \bar{H}_U.
\label{H-projected}
\end{align}
Here the projected noninteracting term $\bar{H}_0$ follows from
Eq.~\eqref{eq:Hfree},
\begin{align}
\bar{H}_0 = \sum_{\mathbf{k} \sigma } \omega^c_{ \mathbf{k}}
c_{\mathbf{k} \sigma}^{\dagger} c_{\mathbf{k} \sigma},
\label{eq:omegaProje}
\end{align}
while the projected on-site Hubbard term $\bar{H}_U$ is given by
Eq.~\eqref{hu-k-bar}.
The expression of the noninteracting (kinetic) term $\bar{H}_0$ in terms of the
bosons \eqref{eq:bosons} is given by (see Appendix B
from Ref.~\cite{doretto2015flat} for details)
\begin{equation}
\bar{H}_{0,B} = E_0 + \sum_{\alpha \beta} \sum_{\mathbf{q} \in BZ} \bar{\omega}^{\alpha \beta}_{\mathbf{q} }
b_{\beta, \mathbf{q}}^{\dagger} b_{\alpha, \mathbf{q}},
\label{eq:H0B1}
\end{equation}
where $E_0 = \sum_{\bf k} \omega^c_{\bf k}$ is a constant
associated with the action of $\bar{H}_0$ on the reference state \eqref{eq:FM} and
\begin{align}
\bar{\omega}^{\alpha \beta}_{\mathbf{q}} &= \frac{1}{F_{\alpha\alpha, \mathbf{q}} F_{\beta\beta, \mathbf{q}}} \sum_{\mathbf{p}}
\left( \omega^c_{\mathbf{p-q}} - \omega^c_{\mathbf{p}} \right)
g_{\alpha} (\mathbf{p}, \mathbf{q}) g_{\beta}^{*} (\mathbf{p}, \mathbf{q}),
\label{eq:omegaBar}
\end{align}
with the $F_{\alpha\beta, \mathbf{q}}$ and the $g_{\alpha} (\mathbf{p}, \mathbf{q})$
functions given by Eqs.~\eqref{eq:F2} and \eqref{eq:ga}, respectively.
The bosonic representation of the projected on-site Hubbard term
$\bar{H}_U$ follows from Eqs.~\eqref{hu-k-bar} and \eqref{eq:rhoBoson}:
After normal ordering the expression resulting from the substitution
of Eq.~\eqref{eq:rhoBoson} into \eqref{hu-k-bar}, one shows that
\cite{doretto2015flat}
\begin{align}
\bar{H}_{U,B} &= \bar{H}_{U,B}^{(2)}+ \bar{H}_{U,B}^{(4)},
\end{align}
where the quadratic and quartic boson terms read
\begin{align}
\bar{H}_{U,B}^{(2)} &= \sum_{\alpha \beta} \sum_{\mathbf{q} } \epsilon^{\alpha \beta}_{\mathbf{q} }
b_{\beta, \mathbf{q}}^{\dagger} b_{\alpha, \mathbf{q}},
\label{H42} \\
\bar{H}_{U,B}^{(4)} &= \frac{1}{N} \sum_{\mathbf{k} , \mathbf{q}, \mathbf{p}} \sum_{\alpha \beta \alpha' \beta'}
V^{\alpha \beta \alpha' \beta'}_{\mathbf{k}, \mathbf{q}, \mathbf{p} }
b_{\beta', \mathbf{p+k}}^{\dagger} b_{\beta, \mathbf{q-k}}^{\dagger} b_{\alpha, \mathbf{q}} b_{\alpha', \mathbf{p}},
\label{H44}
\end{align}
with the coefficient
\begin{align}
\epsilon^{\alpha \beta}_{\mathbf{q} } &= \frac{1}{2} \sum_{a}
U_a\mathcal{G}_{\alpha \beta a \downarrow} (0, \mathbf{q})
\nonumber \\
&+ \frac{1}{N} \sum_{a,\alpha', \mathbf{k}}
U_a\mathcal{G}_{\alpha' \beta a \uparrow}(-\mathbf{k}, \mathbf{k+q})
\mathcal{G}_{\alpha \alpha' a \downarrow}(\mathbf{k}, \mathbf{q})
\label{eq:Epsilon}
\end{align}
and the boson-boson interaction given by
\begin{align}
V^{\alpha \beta \alpha' \beta'}_{\mathbf{k}, \mathbf{q}, \mathbf{p} } &= \frac{1}{N} \sum_a
U_a\mathcal{G}_{\alpha \beta a \uparrow}(-\mathbf{k}, \mathbf{q})
\mathcal{G}_{\alpha' \beta' a \downarrow} (\mathbf{k}, \mathbf{p}).
\label{eq:Vkq}
\end{align}
The effective {\sl interacting} boson model that describes
the flat-band ferromagnetic phase of the Haldane-Hubbard model \eqref{eqHH0}
then assumes the form
\begin{align}
\bar{H}_B = \bar{H}_{0,B} + \bar{H}^{(2)}_{U,B} + \bar{H}^{(4)}_{U,B}.
\label{heffective}
\end{align}
\subsection{Spin-wave spectrum in the nearly flat-band limit}
\label{sec:spin-wave}
In this section, we consider the effective boson model \eqref{heffective}
in the lowest-order (harmonic) approximation, which consists of
keeping only terms up to quadratic order in the boson
operators \eqref{eq:bosons} of the Hamiltonian \eqref{heffective}, i.e.,
we consider
\begin{align}
\bar{H}_B &\approx \bar{H}_{0,B} + \bar{H}^{(2)}_{U,B}.
\label{h-harm}
\end{align}
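Before doing so, we note a simple property of the neglected quartic
term (it follows directly from the normal-ordered form \eqref{H44}):
since $\bar{H}^{(4)}_{U,B}$ contains two boson annihilation operators, it
annihilates any one-boson state,
\begin{equation}
\bar{H}^{(4)}_{U,B} \, b^\dagger_{\alpha,{\bf q}} | {\rm FM} \rangle = 0,
\end{equation}
and therefore, within the bosonic description, the harmonic Hamiltonian
\eqref{h-harm} already accounts for the full action of $\bar{H}_B$ in the
single spin-flip sector.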
In principle, the Hamiltonian \eqref{h-harm} can be diagonalized via a canonical
transformation yielding the spectrum of elementary excitations (spin-waves)
in terms of $\bar{\omega}^{\alpha \beta}_{\bf q}$ [Eq.~\eqref{eq:omegaBar}]
and $\epsilon^{\alpha \beta}_{\mathbf{q} }$ [Eq.~\eqref{eq:Epsilon}].
However, before proceeding, we would like to discuss both contributions in
detail.
The coefficient $\bar{\omega}^{\alpha \beta}_{\bf q}$ [Eq.~\eqref{eq:omegaBar}]
represents the (kinetic) contribution to the energy of the elementary
excitations explicitly related to the dispersion of the
noninteracting (lower) bands $c$. One can see that,
if the free band $c$ is completely flat ($ \omega^c_{\bf q} =$ constant),
this coefficient vanishes while, in the nearly flat-band limit, it can be finite.
For the noninteracting term \eqref{eqHH2} on the honeycomb lattice
in the nearly flat-band limit \eqref{optimal-par},
we find that $\bar{\omega}^{\alpha \alpha}_{\bf q} = 0$ while
$\bar{\omega}^{01}_{\bf q} = \bar{\omega}^{10}_{\bf q}$ are finite but rather
small in units of the nearest-neighbor hopping energy $t_1$
[see Fig.~\ref{fig:omegabar}(a)].
Such a result is distinct from the square lattice $\pi$-flux model,
where symmetry considerations yield $\bar{\omega}^{\alpha \beta}_{\bf q} = 0$
\cite{doretto2015flat}.
We believe that the finite values of the coefficients
$\bar{\omega}^{01}_{\bf q}$ and $\bar{\omega}^{10}_{\bf q}$
for the Haldane model
might be related not only to the symmetries of the
noninteracting Hamiltonian \eqref{eqHH2}, but also to the fact that
the condition
\begin{equation}
F_{\alpha \beta,{\bf q}} = \delta_{\alpha,\beta}F_{\alpha \alpha,{\bf q}}
\label{conditionF}
\end{equation}
is not fulfilled for the Haldane model, an important feature distinct
from the square lattice $\pi$-flux model.
We refer the reader to Appendix \ref{ap:BosoDetails} for a detailed
discussion about the implications of the condition \eqref{conditionF} for the
approximations involved in the bosonization scheme.
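The role played by the condition \eqref{conditionF} can be illustrated with a
simple overlap (a sketch based on the definitions \eqref{eq:bosons} and
\eqref{eq:F2}): for the two boson species to be independent, the one-boson
states $b^\dagger_{0,{\bf q}} |{\rm FM}\rangle$ and
$b^\dagger_{1,{\bf q}} |{\rm FM}\rangle$ should be orthogonal, and one finds
\begin{equation}
\langle {\rm FM} | b_{0,{\bf q}} \, b^\dagger_{1,{\bf q}} | {\rm FM} \rangle
= \frac{1}{F_{00,{\bf q}} F_{11,{\bf q}}} \sum_{\bf p} g_1({\bf p},{\bf q}) \, g^*_0({\bf p},{\bf q})
= \frac{F^2_{10,{\bf q}}}{F_{00,{\bf q}} F_{11,{\bf q}}},
\end{equation}
which vanishes for all momenta only when the condition \eqref{conditionF} holds.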
Due to the smallness of $\bar{\omega}^{01}_{\bf q}$ and $\bar{\omega}^{10}_{\bf q}$,
in the following, we assume that $\bar{\omega}^{\alpha \beta}_{\bf q} \approx 0$,
i.e., we neglect the (explicit) kinetic contribution
\eqref{eq:omegaBar} to the energy of the elementary excitations.
Concerning the coefficients \eqref{eq:Epsilon},
which are related to the on-site Hubbard term \eqref{eqHH}, we find that
$\epsilon^{\alpha \alpha}_{\bf q} $ are real quantities while
$\epsilon^{01}_{\bf q} = -\epsilon^{10}_{\bf q} = \epsilon^{10}_{-{\bf q}} = -\epsilon^{01}_{-{\bf q}}$
are complex ones, implying that the Hamiltonian \eqref{H42} is
non-Hermitian. Such a feature is also in contrast with the square
lattice $\pi$-flux model \cite{doretto2015flat} for which
$\epsilon^{01}_{\bf q} = \epsilon^{10}_{\bf q} = 0$ (see also Ref.~\cite{note01}).
In particular, for the nearly flat-band limit \eqref{optimal-par}, one
finds that $\epsilon^{01}_{\bf q}$ is quite pronounced around the $M_1$ and $M_2$
points and it is also finite close to the $K$ and $K'$ points of the first
Brillouin zone [see Fig.~\ref{fig:omegabar}(b)].
Again, we believe that the non-Hermiticity of the Hamiltonian $\bar{H}^{(2)}_{U,B}$
might be an artefact of the bosonization scheme related to the fact that the
condition \eqref{conditionF} is not fulfilled for the Haldane model
(see Appendix \ref{ap:BosoDetails} for details).
Since such an issue is not completely understood at the moment, in the
following, we determine the spin-wave spectrum both in the presence
and in the absence of the off-diagonal terms
$(\alpha,\beta) = (0,1)$ and $(1,0)$ of the Hamiltonian \eqref{H42}.
The Hamiltonian \eqref{h-harm} with $\bar{\omega}^{\alpha \beta}_{\bf q} = 0$
can be diagonalized via a canonical transformation similar to
Eq.~\eqref{eq:BogoTransf},
\begin{align}
b_{0, {\bf q} } = u^\dagger_{\bf q} a_{+, {\bf q}} + v_{\bf q} a_{-, {\bf q}},
\nonumber \\
b_{1, {\bf q} } = v^\dagger_{\bf q} a_{+, {\bf q}} - u_{\bf q} a_{-, {\bf q}},
\label{eq:BogoTransf2}
\end{align}
where the coefficients $u_{\bf q}$ and $v_{\bf q}$ are now given by
\begin{align}
|u_{\bf q}|^2, |v_{\bf q}|^2 &= \frac{1}{2} \pm
\frac{1}{4\epsilon_{\bf q}}\left( \epsilon^{00}_{\bf q} - \epsilon^{11}_{\bf q} \right),
\nonumber \\
u_{\bf q} v_{\bf q}^{*} &= \frac{\epsilon^{01}_{\bf q}}{4\epsilon_{\bf q}},
\quad
v_{\bf q} u_{\bf q}^{*} = \frac{\epsilon^{10}_{\bf q}}{4\epsilon_{\bf q}},
\label{eq:Bogocoef2}
\end{align}
with
\begin{equation}
\epsilon_{\bf q} = \frac{1}{2}\sqrt{ \left( \epsilon^{00}_{\bf q} - \epsilon^{11}_{\bf q} \right)^2
+ 4 \epsilon^{01}_{\bf q} \epsilon^{10}_{\bf q}}.
\label{aux-epsilon}
\end{equation}
It is then easy to show that the Hamiltonian \eqref{h-harm} assumes
the form
\begin{equation}
\bar{H}_B = E_0 + \sum_{\mu = \pm } \sum_{{\bf q} \in BZ }
\Omega_{\mu, {\bf q}} a_{\mu, \mathbf{q}}^{\dagger} a_{\mu, \mathbf{q}},
\label{eq:HFinalFlat}
\end{equation}
where the constant $E_0 = \sum_{\bf k} \omega^c_{\bf k} = (-1.69\,t_1)N$
for the nearly flat-band limit \eqref{optimal-par} and
the dispersion relation of the bosons $a_\pm$ reads
\begin{equation}
\Omega_{\pm,{\bf q}} = \frac{1}{2}\left( \epsilon^{00}_{\bf q} + \epsilon^{11}_{\bf q} \right)
\pm \epsilon_{\bf q},
\label{omega-b}
\end{equation}
with $\epsilon_{\bf q}$ given by Eq.~\eqref{aux-epsilon} (see also Ref.~\cite{note02}).
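For completeness, we note that Eqs.~\eqref{aux-epsilon} and \eqref{omega-b}
are nothing but the two roots of the characteristic polynomial of the
$2\times 2$ matrix of quadratic coefficients in Eq.~\eqref{H42} (a one-line
check):
\begin{equation}
\det \begin{pmatrix} \epsilon^{00}_{\bf q} - \Omega & \epsilon^{01}_{\bf q} \\
\epsilon^{10}_{\bf q} & \epsilon^{11}_{\bf q} - \Omega \end{pmatrix} = 0
\;\; \Rightarrow \;\;
\Omega^2 - \left( \epsilon^{00}_{\bf q} + \epsilon^{11}_{\bf q} \right) \Omega
+ \epsilon^{00}_{\bf q} \epsilon^{11}_{\bf q}
- \epsilon^{01}_{\bf q} \epsilon^{10}_{\bf q} = 0.
\end{equation}
Since $\epsilon^{01}_{\bf q} = -\epsilon^{10}_{\bf q}$, this matrix is
non-Hermitian and its eigenvalues are not guaranteed to be real, in line with
the discussion above.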
Assuming that $\epsilon^{01}_{\bf q} = \epsilon^{10}_{\bf q} = 0$, the
dispersion relation \eqref{omega-b} reduces to
\begin{equation}
\Omega_{-,{\bf q}} = \epsilon^{00}_{\bf q}
\quad \quad {\rm and} \quad\quad
\Omega_{+,{\bf q}} = \epsilon^{11}_{\bf q},
\label{omega-b2}
\end{equation}
since $\epsilon^{00}_{\bf q} < \epsilon^{11}_{\bf q}$.
\begin{figure}[t]
\centerline{
\includegraphics[width=7.0cm]{spectrumMag.pdf}
}
\caption{The elementary excitation (spin-wave) energies of the
effective boson model \eqref{heffective} in the harmonic approximation for
the nearly-flat band limit \eqref{optimal-par}:
dispersion relations \eqref{omega-b} (real part, solid green line)
and \eqref{omega-b2} (solid magenta line) along paths in the first
Brillouin zone [Fig.~\ref{figLattice}(b)].
The dashed blue line indicates the imaginary part of $\Omega_{+,{\bf q}}$
(equal to $-{\rm Im}\,\Omega_{-,{\bf q}}$)
[see Eq.~\eqref{omega-b}], which is multiplied by a factor of 50 for clarity.
On-site repulsion energies:
(a) $U_A = U_B = U$;
(b) $U_B = 0.8\, U_A = 0.8\, U$; and
(c) $U_B = 0.6\, U_A = 0.6\, U$.}
\label{figEspectro1}
\end{figure}
One notices that the ground state of the Hamiltonian \eqref{eq:HFinalFlat}
is the vacuum (reference) state for both bosons $b_{0,1}$ and $a_\pm$, which
corresponds to the spin-polarized ferromagnet state $|{\rm FM}\rangle$
[see Eqs.~\eqref{eq:FM} and \eqref{vacuum}].
Such a result is a first indication of the stability of a flat-band ferromagnetic
phase for the Haldane-Hubbard model \eqref{eqHH0}.
The dispersion relations \eqref{omega-b} and \eqref{omega-b2} of the bosons $a_\pm$,
which indeed correspond to the spin-wave spectrum above the flat-band
ferromagnetic ground state \eqref{eq:FM}, are shown in
Fig.~\ref{figEspectro1}(a) for the nearly flat-band limit \eqref{optimal-par}
and $U_A = U_B = U$.
Due to the absence of the kinetic coefficients \eqref{eq:omegaBar} associated with the
dispersion of the noninteracting bands $c$, one sees that the energy
scale of the spin-wave spectrum is determined by the on-site repulsion
energy $U$.
Both spin-wave spectra \eqref{omega-b} and \eqref{omega-b2} have two branches:
the acoustic (lower) branch $\Omega_{-,{\bf q}}$ is gapless, with a Goldstone mode
at the Brillouin zone center ($\Gamma$ point) and the
characteristic quadratic dispersion of ferromagnetic spin-waves near
the $\Gamma$ point;
the optical (upper) one $\Omega_{+,{\bf q}}$ is gapped, with the lowest energy
excitation at the $K$ and $K'$ points.
The presence of the Goldstone mode indicates the stability of the
flat-band ferromagnetic phase.
Interestingly, for the dispersion relation \eqref{omega-b2}, one finds
a quite small energy gap at the $K$ and $K'$ points
($\Delta^{(K)} = \Omega_{+,K} - \Omega_{-,K} = 2.01 \times 10^{-3}\, U$)
while the excitation spectrum \eqref{omega-b} displays Dirac points at
the $K$ and $K'$ points.
Indeed, the presence of the Dirac points is related to the fact that
$\epsilon^{01}_{\bf q}$ and $\epsilon^{10}_{\bf q}$ are finite at the $K$
and $K'$ points, see Fig.~\ref{fig:omegabar}(b).
Moreover, the fact that $\epsilon^{01}_{\bf q} = -\epsilon^{10}_{\bf q}$
yields a very small decay rate (the imaginary part of
$\Omega_{\pm,{\bf q}}$) for the spin-wave excitations
\eqref{omega-b} at the border of the first Brillouin zone
[see the dashed line in Fig.~\ref{figEspectro1}(a) and
note the multiplicative factor 50].
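The origin of this decay rate can be made explicit (a short check based on
Eqs.~\eqref{aux-epsilon} and \eqref{omega-b}): the property
$\epsilon^{01}_{\bf q} = -\epsilon^{10}_{\bf q}$ implies
\begin{equation}
\epsilon_{\bf q} = \frac{1}{2}\sqrt{ \left( \epsilon^{00}_{\bf q} - \epsilon^{11}_{\bf q} \right)^2
- 4 \left( \epsilon^{01}_{\bf q} \right)^2 },
\end{equation}
so that $\epsilon_{\bf q}$, and hence $\Omega_{\pm,{\bf q}}$, acquires an
imaginary part wherever the (complex) term $4 (\epsilon^{01}_{\bf q})^2$ is
not negligible. Since $\epsilon^{00}_{\bf q}$ and $\epsilon^{11}_{\bf q}$ are
real, ${\rm Im}\,\Omega_{+,{\bf q}} = {\rm Im}\,\epsilon_{\bf q}
= -{\rm Im}\,\Omega_{-,{\bf q}}$, in agreement with the dashed lines in
Fig.~\ref{figEspectro1}.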
In addition to a configuration with homogeneous on-site Hubbard energy
$U_A = U_B = U$, we also consider the Haldane-Hubbard model with a
sublattice dependent on-site Hubbard energy.
The spin-wave spectra \eqref{omega-b} and \eqref{omega-b2} for the nearly
flat-band limit \eqref{optimal-par} and with
$U_B = 0.8\, U_A = 0.8\, U$ and $U_B = 0.6\, U_A = 0.6\, U$ are shown
in Figs.~\ref{figEspectro1}(b) and (c), respectively.
One notices that both spin-wave spectra \eqref{omega-b} and
\eqref{omega-b2} have a Goldstone mode at the $\Gamma$ point,
the energies of the excitations decrease as the difference
$\Delta U = U_A - U_B$ increases, and
the difference between the energies at the $K$ and $K'$ points
(e.g., $\Omega_{-,K} - \Omega_{-,K'}$) also increases with $\Delta U$.
For $U_B > U_A$, we find similar features, but the energy at the $K$
point is lower than the one at the $K'$ point.
Importantly, the dispersion relation \eqref{omega-b2} has a small
gap at the $K$ and $K'$ points, similar to the homogeneous case $U_A = U_B$:
$\Delta^{(K)} = 1.81 \times 10^{-3}\, U$ ($\Delta U = 0.2\,U$)
and $1.61 \times 10^{-3}\, U$ ($\Delta U = 0.4\,U$).
On the other hand, for the dispersion relation \eqref{omega-b}, a
finite energy gap opens at the $K$ and $K'$ points in contrast with the
homogeneous case $\Delta U = 0$. One finds that
$\Delta^{(K)} = 3.18 \times 10^{-2}\, U$ ($\Delta U = 0.2\,U$)
and $6.40 \times 10^{-2}\, U$ ($\Delta U = 0.4\,U$).
Such a finite energy gap might be related to the fact that a Hubbard
term with $U_A \not= U_B$ breaks inversion symmetry.
Similar to the homogeneous configuration, the spin-wave excitations
\eqref{omega-b} at the first Brillouin zone border have a
finite decay rate.
Gu and collaborators \cite{gu2019itinerant} performed exact
diagonalization calculations and determined the spin-wave spectrum
for the Haldane-Hubbard model \eqref{eqHH0} in the nearly flat-band
limit \eqref{optimal-par} neglecting the dispersion of the
noninteracting electronic bands, which corresponds to the
approximation $\bar{\omega}^{\alpha \beta}_{\bf q} = 0$ considered above.
For homogeneous on-site Hubbard energies $U_A = U_B$, it was
found that the spin-wave spectrum has Dirac points at the $K$ and $K'$
points (see Fig.~2(a$_1$) from Ref.~\cite{gu2019itinerant})
while, for a finite $\Delta U$, the energies of the excitations
decrease with $\Delta U$ and energy gaps open at the
$K$ and $K'$ points
(see Figs.~2(b$_1$) and 2(c$_1$) from Ref.~\cite{gu2019itinerant}).
Remarkably, the spin-wave spectrum \eqref{omega-b} determined within
the bosonization scheme qualitatively agrees with the numerical one,
apart from the fact that the numerical results do not indicate a
finite decay rate.
One should mention that the presence of Dirac points at the $K$ and $K'$
points is not only a feature of the spin-wave spectrum of the
flat-band ferromagnetic phase of the Haldane-Hubbard model.
Indeed, recent exact diagonalization calculations \cite{gu21} for a
topological Hubbard model on a kagome lattice also indicate such a
feature in the excitation spectrum of the corresponding flat-band
ferromagnetic phase when the dispersion of the (lower) noninteracting
electronic band is neglected.
As mentioned above, although the non-Hermiticity of the Hamiltonian
\eqref{H42} (and consequently finite decay rates) might be an artefact
of the bosonization scheme, the off-diagonal terms
$\epsilon^{01}_{\bf q}$ and $\epsilon^{10}_{\bf q}$ of the quadratic
Hamiltonian \eqref{H42} should be considered in order to properly
describe the spin-wave spectrum at the border of the first Brillouin
zone. Therefore, in the following, we determine the spin-wave spectrum
away from the nearly flat-band limit \eqref{optimal-par} with the aid
of Eq.~\eqref{omega-b}.
\subsection{Spin-wave spectrum away from the nearly flat-band limit}
\label{sec:dispersiviness}
Although the main focus of our discussion is the description of the
flat-band ferromagnetic phase of the Haldane-Hubbard model in the
nearly-flat band limit \eqref{optimal-par}, we also consider
configurations such that the noninteracting band $c$
has a smaller flatness ratio $f_c < 7$.
In particular, we consider the effects on the spin-wave spectrum
\eqref{omega-b} of an increase of the band
width $W_c$ (a decrease of the flatness ratio $f_c$) of the noninteracting
band $c$ due to
(i) the decrease/increase of the phase $\phi$ [see
Figs.~\ref{fig:Omega-phi}(a) and (b)] and
(ii) the presence of a staggered on-site energy term \eqref{eq:Hmass} in the total
Hamiltonian (see Fig.~\ref{fig:Omega-Mass}).
These perturbations furnish some clues about the stability of the
flat-band ferromagnetic phase.
\begin{figure}[t]
\centerline{\includegraphics[width=6.5cm]{spectrumMagphi.pdf}}
\vskip0.5cm
\centerline{\includegraphics[width=6.5cm]{spectrumMagphi2.pdf}}
\caption{The real part of the dispersion relation \eqref{omega-b}
(solid line) along paths in the first
Brillouin zone [Fig.~\ref{figLattice}(b)] for on-site repulsion energy
$U_A = U_B = U$ and
$t_2$ given by the relation $\cos(\phi) = t_1/ (4 t_2)$.
(a) $\phi = 0.4$ (blue),
$\phi = 0.5$ (green),
$\phi = 0.656$ (magenta) and
(b) $\phi = 0.656$ (magenta),
$\phi = 0.75$ (green),
$\phi = 0.85$ (blue).
The corresponding dashed line indicates the imaginary part of $\Omega_{+,{\bf q}}$
(equal to $-{\rm Im}\,\Omega_{-,{\bf q}}$)
[see Eq.~\eqref{omega-b}], which is multiplied by a factor of 50 for clarity.}
\label{fig:Omega-phi}
\end{figure}
\begin{figure*}[t]
\centerline{
\includegraphics[width=6.5cm]{spectrumMassB3nw.pdf}
\hskip1.0cm
\includegraphics[width=6.5cm]{spectrumMassB3e00w00.pdf}
}
\caption{(a) The real part of the dispersion relation \eqref{omega-b}
(solid line) along paths in the first Brillouin zone [Fig.~\ref{figLattice}(b)]
for the optimal parameters \eqref{optimal-par},
on-site Hubbard energy $U_A = U_B = U = t_1$,
and staggered on-site energy
$M = 0.1\, t_1$ (magenta) and
$0.2\, t_1$ (green).
(b) Similar results for the spin-wave spectrum \eqref{omega-b},
but with the replacement \eqref{omega-b3}.
The corresponding dashed line indicates the imaginary part of $\Omega_{+,{\bf q}}$
(equal to $-{\rm Im}\,\Omega_{-,{\bf q}}$)
[see Eq.~\eqref{omega-b}], which is multiplied by a factor of 50 for clarity.
}
\label{fig:Omega-Mass}
\end{figure*}
In Fig.~\ref{fig:Omega-phi}(a), we show the spin-wave spectrum
\eqref{omega-b} for $\phi = 0.4$, $0.5$, and $0.656$, the hopping
amplitude $t_2$ given by
the relation $\cos(\phi) = t_1/ (4 t_2)$, and the on-site repulsion energy
$U_A = U_B = U$.
One sees that the spin-wave spectrum
(in units of the on-site repulsion energy $U$) for $\phi = 0.4$ and $0.5$ is
quite similar to the one derived for the nearly-flat band limit \eqref{optimal-par}.
As the flux parameter $\phi$ decreases, the excitation energies near the
border of the Brillouin zone [the $K$-$M_1$-$K'$ line] decrease,
while the energies of the optical branch in the vicinity of the $\Gamma$ point
increase.
The fact that the spin-wave spectrum displays a Goldstone mode at the
$\Gamma$ point, regardless of the value of the phase $\phi$, indicates the
stability of the flat-band ferromagnetic phase with respect to the simultaneous
variations of the phase $\phi$ and the next-nearest-neighbor hopping
amplitude $t_2$.
Finite decay rates are still found at the border of the first
Brillouin zone.
The flat-band ferromagnetic phase seems also to be stable for $\phi > 0.656$,
see Fig.~\ref{fig:Omega-phi}(b). Here, however,
as the flux parameter $\phi$ increases, the excitation energies near the
border of the Brillouin zone increase
and the energies of the upper branch in the vicinity of the $\Gamma$ point
decrease.
The effect on the spin-wave spectrum of a finite staggered on-site
energy $M$ [Eq.~\eqref{eq:Hmass}] is quite distinct.
In Fig.~\ref{fig:Omega-Mass}(a), we plot the spin-wave spectrum
\eqref{omega-b} for the optimal parameters \eqref{optimal-par},
$M = 0.1$ and $0.2\, t_1$, and the on-site Hubbard energy
$U_A = U_B = U = t_1$.
Comparing with the homogeneous on-site energy $M = 0$ configuration
[Fig.~\ref{figEspectro1}(a)], one sees that the whole spin-wave spectrum
shifts downward in energy as $M$ increases and energy gaps open at the
$K$ and $K'$ points. The latter is indeed related to the fact that the
staggered on-site energy term \eqref{eq:Hmass} breaks inversion
symmetry.
Most importantly, the energies of the acoustic branch are negative in
the vicinity of the $\Gamma$ point, indicating an instability of the
flat-band ferromagnetic phase for finite values of the staggered
on-site energy $M$.
Such features are also found for the square lattice $\pi$-flux model
\cite{doretto2015flat}, see Fig.~\ref{fig:MassSquare} in
Appendix \ref{ap:square}.
A finite staggered on-site energy $M$ also modifies the (kinetic) coefficients
\eqref{eq:omegaBar} directly related to the dispersion of the noninteracting
band $c$. In particular, we find that $\bar{\omega}^{\alpha\alpha}_{\bf q}$
no longer vanishes for a finite $M$ [see Fig.~\ref{fig:omegabar}(a) for $M = 0$].
Such an effect can be easily included in the spin-wave spectrum \eqref{omega-b}
with the replacement
\begin{equation}
\epsilon^{\alpha \alpha}_{\bf q} \rightarrow \bar{\omega}^{\alpha\alpha}_{\bf q} + \epsilon^{\alpha \alpha}_{\bf q}.
\label{omega-b3}
\end{equation}
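Explicitly, with the replacement \eqref{omega-b3} the spin-wave
energies \eqref{omega-b} read
\begin{equation}
\Omega_{\pm,{\bf q}} = \frac{1}{2} \left( \bar{\omega}^{00}_{\bf q} + \epsilon^{00}_{\bf q}
+ \bar{\omega}^{11}_{\bf q} + \epsilon^{11}_{\bf q} \right) \pm \epsilon_{\bf q},
\end{equation}
with $\epsilon_{\bf q}$ given by Eq.~\eqref{aux-epsilon} evaluated with the
shifted diagonal coefficients, the off-diagonal terms $\epsilon^{01}_{\bf q}$
and $\epsilon^{10}_{\bf q}$ being left unchanged.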
Figure~\ref{fig:Omega-Mass}(b) shows the spin-wave spectrum \eqref{omega-b}
with the replacement \eqref{omega-b3}
(in units of the nearest-neighbor hopping amplitude $t_1$)
for the optimal parameters \eqref{optimal-par},
$M = 0.1$ and $0.2\, t_1$, and the on-site Hubbard energy
$U_A = U_B = U = t_1$.
One notices that $\bar{\omega}^{\alpha\alpha}_{\bf q}$ does not modify the
excitation energies in the vicinity of the $\Gamma$ point, but only
changes the excitation energies near the border $K$-$M_1$-$K'$ of the
first Brillouin zone. Such an effect resembles the one found when
distinct on-site repulsion energies $U_A \not= U_B$ are considered, see
Figs.~\ref{figEspectro1}(b) and (c).
\section{Summary and discussion}
\label{sec:summary}
The effective boson model \eqref{heffective}
is not only restricted to the Haldane-Hubbard model \eqref{eqHH0} on a
honeycomb lattice, but, in principle, it can also be employed to study
the flat-band ferromagnetic phase of a correlated Chern insulator described by a
topological Hubbard model on a bipartite lattice whose noninteracting
(kinetic) term breaks time-reversal symmetry and assumes the form \eqref{eqH0k}:
Notice that a tight-binding model of the form \eqref{eqH0k} is
completely defined by the $B_{0,{\bf k}}$ and $B_{i,{\bf k}}$ ($i=1,2,3$)
functions \eqref{eqBs};
the $F_{\alpha\beta,{\bf q}}$ function, which is important in the
definition of the boson operators \eqref{eq:bosons}, can be written in
terms of the normalized $\hat{B}_{i,{\bf k}} = B_{i,{\bf k}}/|{\bf B}_{i,{\bf k}}|$
functions, see Eq.~\eqref{eq:ApF201};
finally, the coefficients \eqref{eq:omegaBar} and \eqref{eq:Epsilon}
and the boson-boson interaction \eqref{eq:Vkq}, which completely
characterise the effective boson model \eqref{heffective}, can also be
expressed in terms of the normalized $\hat{B}_{i,{\bf k}}$ functions, see
Appendix~\ref{ap:functions}.
An important requirement for the application of the bosonization
scheme \cite{doretto2015flat} to a topological Hubbard model is that
the condition \eqref{conditionF} is fulfilled by the noninteracting
term of the model. Once the validity of such a condition is verified,
the two sets of independent boson operators \eqref{eq:bosons} can be
defined and the bosonic representation of an operator written in terms
of the original fermions, such as the projected density operator
\eqref{proj-dens-op}, is well defined.
This is indeed the case for the square lattice topological Hubbard
model whose noninteracting limit is given by the $\pi$-flux model
\cite{doretto2015flat}.
As mentioned above, the spin-wave spectrum \cite{su2019ferromagnetism}
determined via exact diagonalization calculations for the flat-band
ferromagnetic phase of the square lattice $\pi$-flux model in the
(completely) flat-band limit qualitatively agrees with the harmonic one
calculated within the bosonization formalism.
Additional results for the square lattice $\pi$-flux model derived
within the bosonization scheme are presented in Appendix~\ref{ap:square}.
For the Haldane-Hubbard model on a honeycomb lattice in the
nearly-flat band limit \eqref{optimal-par} of the noninteracting
(lower) band $c$ (the second application of the bosonization
formalism), the condition \eqref{conditionF} is not fulfilled for all
momenta ${\bf q}$ [see Fig.~\ref{fig:F2}(b)],
which implies that additional considerations are needed in order
to apply the bosonization scheme.
As discussed in Appendix~\ref{ap:BosoDetails}, in order to preserve the
form of the original effective boson model \eqref{heffective},
one should assume that the two sets of boson operators $b_{0,1}$
defined by Eq.~\eqref{eq:bosons} are independent and that the bosonic
expression \eqref{eq:rhoBoson} for the projected electron density
operator $\bar{\rho}_{a\sigma}({\bf k})$ holds for the Haldane model.
Moreover, although the non-Hermiticity of the quadratic Hamiltonian
\eqref{H42} might be related to the fact that the condition
\eqref{conditionF} is not completely valid for the Haldane model, one
should keep the off-diagonal terms $\epsilon^{01}_{\bf q}$ and $\epsilon^{10}_{\bf q}$
[see Eq.~\eqref{eq:Epsilon}] in order to properly describe the
spin-wave excitations at the border of the first Brillouin zone
(see Fig.~\ref{figEspectro1}) as indicated by the comparison between
the dispersion relations \eqref{omega-b} and \eqref{omega-b2} and the
numerical results \cite{gu2019itinerant}.
Importantly, it is not clear at the moment whether the finite decay
rates found for high-energy spin-wave excitations are an artefact of
the bosonization scheme.
Even considering these additional approximations, the
qualitative agreement between the real part of the dispersion
relation \eqref{omega-b} and the numerical spin-wave spectrum
\cite{gu2019itinerant} indicates that the effective boson model
\eqref{heffective} provides an appropriate description of the
flat-band ferromagnetic phase of the Haldane-Hubbard model.
In addition to the nearly-flat band limit \eqref{optimal-par}, we also
study the flat-band ferromagnetic phase of the Haldane-Hubbard model
when the noninteracting lower band $c$ gets more dispersive.
While the ferromagnetic phase seems to be less sensitive to the
increase of the band width $W_c$ of the noninteracting band $c$
due to variations of the phase $\phi$ and the next-nearest neighbor
amplitude $t_2$ (Fig.~\ref{fig:Omega-phi}),
an instability of the ferromagnetic ground state is
found when a staggered on-site energy \eqref{eq:Hmass} is included
[Fig.~\ref{fig:Omega-Mass}(a) and (b)].
Interestingly, for the latter, one notices that the $F^2_{\alpha\alpha,{\bf q}}$
function and ${\rm Im}\, F^2_{01,{\bf q}} = {\rm Im}\,F^2_{10,{\bf q}}$ are
not affected by a finite staggered on-site energy $M$, while
${\rm Re}\, F^2_{01,{\bf q}} = {\rm Re}\,F^2_{10,{\bf q}}$
acquire a constant value proportional to the staggered on-site energy
$M$ [see Figs.~\ref{fig:F2}(b) and (c)]: indeed, it is easy to
see that the replacement \eqref{B3M} modifies the second term of the
integrand \eqref{eq:ApF201}, yielding an additional term proportional
to the parameter $M$.
Notice that the instability of a flat-band ferromagnetic phase due to
a finite $M$ is also related to a stronger violation of the condition
\eqref{conditionF}.
At the moment, it is not clear whether such an instability is an
artefact of the bosonization scheme related to some difficulties in
including kinetic effects, see the discussion below.
Interestingly, this kind of instability is also found for the square
lattice $\pi$-flux model, see Appendix~\ref{ap:square} for details.
The stability of a flat-band ferromagnetic phase was studied by
Kusakabe and Aoki via exact diagonalization calculations
performed for the two-dimensional Mielke model \cite{kusakabe1994ferromagnetic}
and Mielke and Tasaki models \cite{kusakabe1994B}.
A parameter $\gamma$ was introduced in the original models, such that
$\gamma = 0$ corresponds to completely flat (lower) noninteracting
bands (flat-band limit). It was found that, for $\gamma = 0$, a
ferromagnetic phase is stable regardless of the value of the on-site
repulsion energy $U$. For finite values of the parameter $\gamma$
(system away from the flat-band limit), a ferromagnetic ground state
is stable only if $U \ge U_c(\gamma)$ (see Figs.~1 and 2 from
Ref.~\cite{kusakabe1994B}).
Such a scenario agrees with more recent numerical results
for the square lattice $\pi$-flux model in the nearly-flat band limit
\cite{su2019ferromagnetism}, which indicates that a ferromagnetic
phase sets in only if $U \ge U_c(t_2)$, with $t_2$ being the
next-nearest neighbor hopping amplitude.
M\"uller {\sl et al.} studied one- and two-dimensional
Hubbard models with nearly-flat bands that are not in the class of
Mielke and Tasaki flat-band models, since they do not obey some
connectivity conditions \cite{muller16}.
They found that small and moderate noninteracting band dispersion may
stabilize a ferromagnetic phase for $U \ge U_c$, i.e., the
ferromagnetic phase is driven by the kinetic energy.
In particular, for a two-dimensional bilayer model,
$U_c(\delta_l)$ is a nonmonotonic function of the parameter $\delta_l$ that
controls the width of the band (see Fig.~7 from \cite{muller16}),
i.e., the ferromagnetic phase sets in only for a finite band dispersion.
For rigorous results about the stability of a ferromagnetic phase on
Hubbard models with nearly-flat bands, we refer the reader to the
review by Tasaki \cite{tasaki1996stability}.
The fact that a ferromagnetic phase is stable in Hubbard models with
nearly-flat (noninteracting) bands only for $U \ge U_c$ is related to
the competition between the kinetic energy (dispersion of the
noninteracting bands) and the (short-range) Coulomb interaction $U$
\cite{tasaki1996stability}.
The bosonization formalism \cite{doretto2015flat}
partially takes into account such a competition:
although the explicit contribution \eqref{eq:omegaBar} of the
dispersion of the noninteracting bands $c$ is not included in the
effective boson model \eqref{heffective}, such kinetic effects are
partially considered by the bosonization scheme, since the
coefficients \eqref{eq:Epsilon} and the boson-boson interaction
\eqref{eq:Vkq} depend on the $\hat{B}_{i,{\bf q}}$ functions
\eqref{eqBs} that completely determine the free-band structure
\eqref{eq:omega}.
At the moment, it is not clear how to properly include in the effective
boson model \eqref{heffective} the main effects related to the
noninteracting band dispersion.
Due to this limitation, we expect that the results derived within
the bosonization scheme for flat-band Chern insulators become more
accurate as the (lower) noninteracting bands become less dispersive.
One should recall that the bosonization scheme \cite{doretto2015flat}
is based on the formalism \cite{doretto2005lowest} that was proposed
to describe the quantum Hall ferromagnet realized in a two-dimensional
electron gas at filling factor $\nu=1$: here, the noninteracting
bands correspond to (completely flat) Landau levels.
Concerning the topological properties of the spin-wave excitations,
one would expect that the nontrivial topological properties of the
noninteracting electronic bands of the Haldane-Hubbard model
may yield a flat-band ferromagnetic phase with topologically
non-trivial spin-wave excitation bands.
Indeed, topological magnons in Heisenberg ferromagnets
\cite{zhang13,malki2019topological,owerre2016first,owerre2016topological,chen18,mook20}
and, in particular, magnets on a honeycomb lattice
\cite{owerre2016first,owerre2016topological,chen18,mook20}
have been studied. An important ingredient for such topological magnon
insulators is the Dzyaloshinskii-Moriya interaction that may open
energy gaps in the magnon spectrum and yields magnon bands with
nonzero Chern numbers.
For the Haldane-Hubbard model in the nearly flat-band limit
\eqref{optimal-par}, it was shown that the spin-wave excitation bands
have nonzero Chern numbers only when the dispersion of the
noninteracting electronic bands is explicitly taken into account
(see Figs.~2(a$_1$), (a$_2$) and (d) from \cite{gu2019itinerant}).
We calculate the Chern numbers of the spin-wave bands for
configurations of the Haldane-Hubbard model whose spin-wave spectrum
displays an energy gap at the $K$ and $K'$ points: we expand the
Hamiltonian \eqref{h-harm} in terms of Pauli matrices as done in Eq.~\eqref{eqPauli},
determine the corresponding $B_{i,{\bf q}}$ coefficients assuming that
$\epsilon^{01}_{\bf q} = (\epsilon^{10}_{\bf q})^*$, and calculate the Chern
numbers using Eq.~\eqref{eqCn}. In agreement with the exact
diagonalization results \cite{gu2019itinerant}, we find that the Chern
numbers of the spin-wave bands vanish.
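As a concrete illustration of this kind of calculation, the lattice Chern number of a generic two-band Bloch Hamiltonian $H({\bf k})={\bf d}({\bf k})\cdot\boldsymbol{\sigma}$ can be evaluated with the Fukui-Hatsugai-Suzuki link-variable method. The Python sketch below is our own illustration (the function name, grid size, and test model are our choices, not the paper's code), not the specific routine used to obtain the results quoted above:

```python
import numpy as np

def chern_number(d_vec, n_grid=30):
    """Lattice Chern number of the lower band of H(k) = d(k) . sigma,
    computed with the Fukui-Hatsugai-Suzuki link-variable method."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    # lower-band eigenvectors on a periodic k-grid
    u = np.empty((n_grid, n_grid, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            dx, dy, dz = d_vec(kx, ky)
            _, v = np.linalg.eigh(dx * sx + dy * sy + dz * sz)
            u[i, j] = v[:, 0]                       # lower band
    # sum the Berry flux through every plaquette of the k-grid
    flux = 0.0
    for i in range(n_grid):
        for j in range(n_grid):
            ip, jp = (i + 1) % n_grid, (j + 1) % n_grid
            w = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(w)
    return flux / (2.0 * np.pi)
```

For the standard two-band test model ${\bf d}=(\sin k_x,\sin k_y,m+\cos k_x+\cos k_y)$, this routine returns $|C|=1$ in the topological regime $0<|m|<2$ and $C=0$ otherwise, which serves as a sanity check before applying it to a spin-wave Hamiltonian expanded in Pauli matrices.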
In summary, in this paper we studied the flat-band ferromagnetic
phase of a correlated Chern insulator on a honeycomb lattice described
by the Haldane-Hubbard model.
We considered the system at $1/4$-filling of the noninteracting bands
and in the nearly-flat band limit of the noninteracting lower bands.
We determined the spin-wave excitation spectrum within a bosonization
scheme for flat-band correlated Chern insulators and found that it has a
Goldstone mode at the first Brillouin zone center and Dirac
points at the $K$ and $K'$ points.
We also studied how the spin-wave excitation spectrum changes
as an offset in the on-site Hubbard energies associated with the
sublattices $A$ and $B$ is introduced
and as the width of the lower noninteracting bands increases due to
variations of the kinetic term parameters and
the presence of a staggered on-site energy term. In particular, we
found that the flat-band ferromagnetic phase might be unstable when a
finite staggered on-site energy term is included in the kinetic term
of the Haldane-Hubbard model.
The bosonization scheme for flat-band correlated Chern insulators
provides an effective interacting boson model for the description of
the flat-band ferromagnetic phase of a topological Hubbard model.
In the near future, we intend to study the effects of the boson-boson
interaction not only in the Haldane-Hubbard model, but also in the
square lattice $\pi$-flux model previously studied in
Ref.~\cite{doretto2015flat}. Motivated by the similarities with the
quantum Hall ferromagnetic phase realized in a two-dimensional
electron gas at filling factor $\nu=1$ \cite{doretto2005lowest}, we
would like to check whether the boson-boson interaction may yield
two-boson bound states.
It would be interesting to see whether the bosonization formalism
\cite{doretto2015flat}, eventually combined with the approximations
discussed in this paper, can also be employed to study twisted bilayer
graphene near a magic angle
\cite{cao2018unconventional, lu19, sharpe19, morell10, xu18, wu18, seo19,chen20}.
Here the resulting moir\'e pattern induces an effective
superlattice and a set of flat-minibands in the moir\'e Brillouin zone.
In addition to a superconducting phase \cite{cao2018unconventional, lu19},
evidence for a ferromagnetic phase at $3/4$-filling of the conduction
miniband is also found \cite{sharpe19}.
In principle, a possible flat-band ferromagnetic phase of the
effective lattice model introduced in Ref.~\cite{seo19} for twisted
bilayer graphene could be studied within the bosonization scheme.
Finally, we should mention that a study similar to the one reported in
this paper, for a correlated flat-band Z$_2$ topological insulator on a
honeycomb lattice, described by a topological Hubbard model similar to
Eq.~\eqref{eqHH0} but preserving time-reversal symmetry, is currently
in progress and will be published elsewhere.
For $\phi = \pi/2$, such a model corresponds to the
Kane-Mele-Hubbard model, see Ref.~\cite{rpp-rachel18} for details.
\acknowledgments
L.S.G.L. kindly acknowledges the financial support of the Brazilian
``Ministry of Science, Technology and Innovation'' and the ``National
Council for Scientific and Technological Development -- CNPq''.
\section{Introduction}
Propagation of optical waves in waveguide arrays has become an important and
effective means to investigate various optical phenomena which have analogues
in many fields of physics \cite{Longhi}. Special attention has been paid to
nonlinear surface waves and nonlinear guided waves in planar layered
structures \cite{M1,M2,M3,M4,M5,M6,M7,M8,M9,M10,M11,M12} and nonlinear
couplers \cite{Marcuse,Yangcc,Yangcc1,Chen,Yasumoto}, generation and
properties of solitons in nonlinear waveguide arrays
\cite{Nat,Lederer,PO,YS,Ablowitz,YS4} and other nonlinear periodic systems,
such as optically induced lattices \cite{MS,Neshev,Yaroslov,Zhangyp}. The
investigation of light beam propagation in waveguide arrays attracts
increasing attention due to their potential applications in all-optical signal
processing in fiber optic networks and devices, the passive mode-locking by
using waveguide array \cite{R2}, and the beam steering \cite{PRE53,ZhangOC,Xu}.
The behavior of light beam propagation in a coupler composed by two-channel
nonlinear waveguide gained particular attention because it can exhibit some of
universal properties in nonlinear periodic systems and nonlinear waveguide
arrays \cite{Yaroslov1}. It is shown that the coupler can support the
symmetry-preserving solutions which have the linear counterparts and the
symmetry-breaking solutions without any linear counterparts
\cite{Kevrekidis,Birnbaum,Jia}, in which the spontaneous symmetry-breaking has
been experimentally demonstrated in optically induced lattices with a local
double-well potential \cite{Kevrekidis}.
In this paper, we consider light beam propagation in an asymmetric
double-channel waveguide with Kerr-type nonlinear response, and derive various
analytical stationary solutions in detail. It is found that the asymmetric
double-channel waveguide can break the symmetric form of the
symmetry-preserving modes otherwise in the symmetric double-channel waveguide,
and such a coupler supports the symmetry-breaking modes. We also investigate
how the type of nonlinear response affects the existence and properties of
nonlinear optical modes in the asymmetric double-channel waveguide. The
dispersion relation shows that the degenerate modes exist in the system with
self-focusing nonlinear response, while for the coupler with self-defocusing
response the degenerate modes do not exist. In addition, based on these
optical modes supported in asymmetric double-channel waveguide we demonstrate
the control and manipulation of optical modes in different nonlinear media by
tuning the waveguide parameters.
The paper is organized as follows. In Section II, the model equation
describing beam propagation in a double-channel waveguide is derived. In
Section III, various analytical forms of optical modes are presented both in
self-focusing and self-defocusing media. Meanwhile, the dispersion relations
between the total energy and the propagation constant are discussed. In
Section IV, we study the nonlinear manipulation of optical modes in
double-channel waveguide. The conclusion is summarized in Section V.
\section{Model equation and reductions}
We consider a planar graded-index waveguide with refractive index
\begin{equation}
n(z,x)=F(x)+n_{2}I(z,x). \label{ref_ind}
\end{equation}
Here the first term on the right hand side presents a two-channel waveguide
with the different refractive index, namely, $F(x)=$ $n_{11}$ as
$-L_{0}/2-D_{0}<x<-L_{0}/2$, and $F(x)=n_{12}$ as $L_{0}/2<x<L_{0}/2+D_{0}$,
otherwise, $F(x)=$ $n_{0}$ ($<n_{11},n_{12}$), where $D_{0}$ and $L_{0}$
represent the width of waveguide and the separation between waveguides,
respectively; while $n_{0}$, $n_{11}$ and $n_{12}$ denote the refractive index
of cladding and waveguide, respectively. The second term denotes Kerr-type
nonlinearity, $I(z,x)$ is the optical intensity, and positive (negative) value
of nonlinear coefficient $n_{2}$ indicates self-focusing (self-defocusing)
medium. Under slowly varying envelope approximation, the nonlinear wave
equation governing beam propagation in such a waveguide with the refractive
index given by Eq. (\ref{ref_ind}) can be written as
\begin{equation}
i\frac{\partial\psi}{\partial z}+\frac{1}{2k_{0}}\frac{\partial^{2}\psi
}{\partial x^{2}}+\frac{k_{0}\left[ F(x)-n_{0}\right] }{n_{0}}\psi
+\frac{k_{0}n_{2}}{n_{0}}\left\vert \psi\right\vert ^{2}\psi=0, \label{eq1}
\end{equation}
where $\psi(z,x)$ is the envelope function, and $k_{0}=2\pi n_{0}/\lambda$ is
the wave number, with $\lambda$ the wavelength of the optical source generating
the beam. By introducing the normalized transformations $\psi(z,x)=(k_{0}
\left\vert n_{2}\right\vert L_{D}/n_{0})^{-1/2}\varphi(\zeta,\xi)$,
$\xi=x/w_{0}$ and $\zeta=z/L_{D}$ with $L_{D}=2k_{0}w_{0}^{2}$, which
represents the diffraction length, we get the dimensionless form of Eq.
(\ref{eq1}) as follows
\begin{equation}
i\frac{\partial\varphi}{\partial\zeta}+\frac{\partial^{2}\varphi}{\partial
\xi^{2}}+V(\xi)\varphi+\eta\left\vert \varphi\right\vert ^{2}\varphi=0,
\label{eq2}
\end{equation}
where $\eta=n_{2}/\left\vert n_{2}\right\vert =\pm1$ corresponds to
self-focusing ($+$) and self-defocusing ($-$) nonlinearity of the waveguides,
respectively, and $V(\xi)=2k_{0}^{2}w_{0}^{2}\left[ F(w_{0}\xi)-n_{0}\right]
/n_{0}$ is of the form
\begin{equation}
V(\xi)=\left\{
\begin{array}
[c]{cc}
V_{1}, & -L/2-D<\xi<-L/2,\\
V_{2}, & L/2<\xi<L/2+D,\\
0, & \text{otherwise,}
\end{array}
\right. \label{V}
\end{equation}
which describes the dimensionless two-channel waveguide structure with
different refractive index, where $V_{1}=2k_{0}^{2}w_{0}^{2}(n_{11}
-n_{0})/n_{0}$ and $V_{2}=2k_{0}^{2}w_{0}^{2}(n_{12}-n_{0})/n_{0}$ are
the modulation depths of the refractive index of the left and right
waveguides, and $L=L_{0}/w_{0}$ and $D=D_{0}/w_{0}$ are the scaled
separation and width of the waveguides, respectively. Here, we use the typical waveguide
parameters $D=3.5$, $L=5$, $V_{2}=2.525$, and vary $V_{1}$. Figure 1 shows the
profile of the two-channel waveguide structure given by Eq. (\ref{V}). It
should be pointed out that such structure can be realized experimentally
\cite{FL}. It is shown that Eq. (\ref{eq2}) conserves the total energy
flow $P(\zeta)=\int_{-\infty}^{+\infty}\left\vert \varphi(\zeta,\xi
)\right\vert ^{2}d\xi=P_{0}$, where $P_{0}$ is the dimensionless initial total energy.
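For later reference, the piecewise potential \eqref{V} is straightforward to encode numerically. The short Python helper below is our own sketch (the function name and default values are ours, chosen to match the parameters $D=3.5$, $L=5$, $V_{2}=2.525$ quoted above):

```python
import numpy as np

def potential(xi, V1=2.5, V2=2.525, L=5.0, D=3.5):
    """Piecewise double-channel potential V(xi) of Eq. (V):
    V1 in the left channel, V2 in the right channel, 0 elsewhere."""
    xi = np.asarray(xi, dtype=float)
    v = np.zeros_like(xi)
    v[(-L / 2 - D < xi) & (xi < -L / 2)] = V1   # left channel
    v[(L / 2 < xi) & (xi < L / 2 + D)] = V2     # right channel
    return v
```

Evaluating this helper on a grid reproduces the profile sketched in Fig. 1.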
\begin{figure}[ptb]
\centering\vspace{-0.0cm} \includegraphics[width=6.0cm]{figure1.eps}
\vspace{-0.3cm}\caption{The profile of a two-channel waveguide with different
refractive index.}
\end{figure}
Assuming that the stationary solution of Eq. (\ref{eq2}) is of the form
$\varphi(\zeta,\xi)=u(\xi)\exp(i\beta\zeta)$, where $u(\xi)$ is a real
function, and $\beta$ is the propagation constant, and substituting it into
Eq. (\ref{eq2}), we find that the function $u(\xi)$ obeys the following
nonlinear equation
\begin{equation}
\frac{d^{2}u}{d\xi^{2}}+\left[ V(\xi)-\beta\right] u+\eta u^{3}=0,
\label{sta_equ}
\end{equation}
where $\eta=\pm1$ corresponds to self-focusing ($+$) and self-defocusing ($-$)
nonlinearity of the waveguides, respectively. It should be pointed out that
Eq. (\ref{sta_equ}) not only can describe the optical modes in the
double-channel waveguide structure with the different refractive index, but
also can describe one-dimensional Bose-Einstein condensate trapped in a finite
asymmetry double square well potential $-V(\xi)$. In particular, when
$V_{1}=V_{2}$, the corresponding optical modes in the symmetric double-channel
waveguide structure have been studied, and the results have shown that the
coupler not only supports symmetry-preserving modes but also symmetry-breaking
modes \cite{Jia}.
\section{Optical modes}
In this section, we will present the analytical solutions of Eq.
(\ref{sta_equ}) with the potential (\ref{V}) for $\eta=\pm1$. Generally, the
solutions of Eq. (\ref{sta_equ}) can be constructed in terms of the Jacobi
elliptic functions depending on the values of the variable $\xi$, and have the
same propagation constants in different regions. Within the double-channel
waveguides of $-L/2-D<\xi<-L/2$ and $L/2<\xi<L/2+D$, the solution of Eq.
(\ref{sta_equ}) is the oscillatory function, so it can be selected in the form
\cite{WDLi}
\begin{equation}
u_{1}(\xi;A,K,\delta)=A\operatorname*{sn}\left( K\xi+\delta,-\frac{\eta
A^{2}}{2K^{2}}\right) , \label{sn}
\end{equation}
with $\beta=V_{1}-K^{2}+\eta A^{2}/2$ in the region of $-L/2-D<\xi<-L/2$ and
$\beta=V_{2}-K^{2}+\eta A^{2}/2$ in the region of $L/2<\xi<L/2+D$. In the
region of $\left\vert \xi\right\vert <L/2$, the solution of Eq.
(\ref{sta_equ}) has two different Jacobi elliptic functions for the
symmetric and the
antisymmetric case, respectively. For the symmetric case, the solution is
\cite{WDLi}
\begin{equation}
u_{2}(\xi;B,Q,\sigma)=B\operatorname*{nc}\left( Q\xi+\sigma,1+\frac{\eta
B^{2}}{2Q^{2}}\right) , \label{nc}
\end{equation}
with $\beta=Q^{2}+\eta B^{2}$; and for the antisymmetric case, the solution is
\cite{WDLi}
\begin{equation}
u_{2}(\xi;B,Q,\sigma)=B\operatorname*{sc}\left( Q\xi+\sigma,1+\frac{\eta
B^{2}}{2Q^{2}}\right) , \label{sc}
\end{equation}
with $\beta=Q^{2}-\eta B^{2}/2$. It should be noted that these two
solutions are exact solutions of Eq. (\ref{sta_equ}) with no node and
one node, respectively, within the region of $\left\vert \xi\right\vert
<L/2$. In other regions, the solution
of Eq. (\ref{sta_equ}) is required to tend to zero as $\xi\rightarrow\pm
\infty$, so it is taken as \cite{Jia}
\begin{equation}
u_{3}(\xi;b)=\frac{1}{be^{-\sqrt{\beta}\xi}+ce^{\sqrt{\beta}\xi}},
\label{sech}
\end{equation}
with $\beta>0$. Substituting (\ref{sech}) into Eq. (\ref{sta_equ}), one
obtains $c=\eta/(8\beta b)$.
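The relation $c=\eta/(8\beta b)$ is easy to verify numerically: substituting \eqref{sech} into $u''-\beta u+\eta u^{3}=0$, i.e. Eq. (\ref{sta_equ}) with $V=0$, and evaluating the residual by finite differences should give zero up to discretization accuracy. The Python check below is ours, with arbitrarily chosen values of $\eta$, $\beta$, and $b$:

```python
import numpy as np

# tail solution u3 = 1/(b e^{-sqrt(beta) xi} + c e^{sqrt(beta) xi}),
# with the constant fixed by c = eta/(8 beta b)
eta, beta, b = 1.0, 1.0, 1.0
c = eta / (8.0 * beta * b)

xi = np.linspace(2.0, 8.0, 3001)     # region to the right of the channels
u = 1.0 / (b * np.exp(-np.sqrt(beta) * xi) + c * np.exp(np.sqrt(beta) * xi))

# residual of u'' - beta u + eta u^3 = 0 by central differences
h = xi[1] - xi[0]
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
max_residual = np.max(np.abs(upp - beta * u[1:-1] + eta * u[1:-1]**3))
```

The residual vanishes to the accuracy of the second-order finite-difference stencil, confirming the stated value of $c$.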
Note here that although the modulus in the usual Jacobi elliptic function is
restricted from 0 to 1, this problem can be solved by the modular
transformation such that the modulus in the Jacobi elliptic functions given by
Eqs. (\ref{sn}-\ref{sc}) can take any positive or negative values in our
investigations, as shown in Refs. \cite{Byrd,WDLi}, so those solutions do not
depend on nonlinearity sign.
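For modulus values that happen to lie in $[0,1]$ — e.g. the self-defocusing case $\eta=-1$ with $A^{2}/(2K^{2})<1$ — the channel solution \eqref{sn} can be checked directly with SciPy, whose `ellipj` routine uses the parameter convention $m=k^{2}$ and requires $0\le m\le1$. The numerical check below is ours (the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import ellipj

# channel solution u1 = A sn(K xi + delta, m) with m = -eta A^2/(2 K^2)
# and beta = V - K^2 + eta A^2/2; here eta = -1, so m = 0.5 lies in [0, 1]
eta, A, K, V = -1.0, 1.0, 1.0, 2.525
m = -eta * A**2 / (2.0 * K**2)
beta = V - K**2 + eta * A**2 / 2.0

xi = np.linspace(-1.0, 1.0, 2001)
sn, _, _, _ = ellipj(K * xi, m)      # delta = 0 for simplicity
u = A * sn

# residual of u'' + (V - beta) u + eta u^3 = 0 by central differences
h = xi[1] - xi[0]
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
max_residual = np.max(np.abs(upp + (V - beta) * u[1:-1] + eta * u[1:-1]**3))
```

Moduli outside $[0,1]$ would first have to be brought into this range by the modular transformations mentioned above before `ellipj` can be applied.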
In the following, we show the analytical global solutions of Eq.
(\ref{sta_equ}). With the help of Eqs. (\ref{sn}), (\ref{nc}) [or (\ref{sc})],
and (\ref{sech}), the solutions of Eq. (\ref{sta_equ}) can be written as
\begin{equation}
u(\xi)=\left\{
\begin{array}
[c]{ll}
u_{3}(\xi;b_{1}), & \text{ \ }\xi<-L/2-D,\\
u_{1}(\xi;A_{1},K_{1},\delta_{1}), & \text{ \ }-L/2-D<\xi<-L/2,\\
u_{2}(\xi;B,Q,\sigma), & \text{ \ }\left\vert \xi\right\vert <L/2,\\
u_{1}(\xi;A_{2},K_{2},\delta_{2}), & \text{ \ }L/2<\xi<L/2+D,\\
u_{3}(\xi;b_{2}), & \text{ \ }\xi>L/2+D.
\end{array}
\right. \label{solution}
\end{equation}
The continuity conditions for $u$ and $\partial u/\partial\xi$ at the
boundaries of $\xi=\pm L/2$ and $\xi=\pm(L/2+D)$ require
\begin{align}
u_{3}\left( -L/2-D;b_{1}\right) & =u_{1}\left( -L/2-D;A_{1},K_{1}
,\delta_{1}\right) ,\nonumber\\
\frac{du_{3}}{d\xi}\left( -L/2-D;b_{1}\right) & =\frac{du_{1}}{d\xi
}\left( -L/2-D;A_{1},K_{1},\delta_{1}\right) ,\nonumber\\
u_{1}\left( -L/2;A_{1},K_{1},\delta_{1}\right) & =u_{2}(-L/2;B,Q,\sigma
),\nonumber\\
\frac{du_{1}}{d\xi}\left( -L/2;A_{1},K_{1},\delta_{1}\right) &
=\frac{du_{2}}{d\xi}(-L/2;B,Q,\sigma),\nonumber\\
u_{2}(L/2;B,Q,\sigma) & =u_{1}(L/2;A_{2},K_{2},\delta_{2}),\nonumber\\
\frac{du_{2}}{d\xi}(L/2;B,Q,\sigma) & =\frac{du_{1}}{d\xi}(L/2;A_{2}
,K_{2},\delta_{2}),\nonumber\\
u_{1}(L/2+D;A_{2},K_{2},\delta_{2}) & =u_{3}(L/2+D;b_{2}),\nonumber\\
\frac{du_{1}}{d\xi}(L/2+D;A_{2},K_{2},\delta_{2}) & =\frac{du_{3}}{d\xi
}(L/2+D;b_{2}), \label{con}
\end{align}
with $\beta=V_{1}-K_{1}^{2}+\eta A_{1}^{2}/2=V_{2}-K_{2}^{2}+\eta A_{2}^{2}
/2$, and $\beta=Q^{2}+\eta B^{2}$ for the symmetric case given by Eq.
(\ref{nc}) or $\beta=Q^{2}-\eta B^{2}/2$ for the antisymmetric case given by
Eq. (\ref{sc}). In Eq. (\ref{solution}), there are eleven parameters $A_{1}$,
$K_{1}$, $\delta_{1}$, $A_{2}$, $K_{2}$, $\delta_{2}$, $B$, $Q$, $\sigma$,
$b_{1}$, and $b_{2}$, which can be calculated by solving numerically Eqs.
(\ref{con}) with the conditions that the propagation constants in different
regions should be the same. Once those parameters are determined numerically, we
can obtain the exact optical modes for asymmetric double-channel nonlinear waveguides.
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure2.eps}
\vspace{-0.5cm}\caption{(Color Online) Various different optical modes
existing in self-defocusing medium ($\eta=-1$), where the dash-dotted red line
is the optical mode for $V_{1}=2.500$, the solid green line is the optical
mode for $V_{1}=2.525$, and the dashed blue line is the optical mode for
$V_{1}=2.550$. Here $\beta=2.00$ in (a) and (b), $\beta=0.85$ in (c) and (d).}
\end{figure}
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure3.eps}
\vspace{-0.5cm}\caption{(Color Online) Various different optical modes
existing in self-focusing medium ($\eta=1$), where the dash-dotted red line is
the optical mode for $V_{1}=2.500$, the solid green line is the optical mode
for $V_{1}=2.525$, and the dashed blue line is the optical mode for
$V_{1}=2.550$. Here $\beta=2.30$ in (a) and (b), $\beta=1.10$ in (c) and (d).}
\end{figure}
Fig. 2 and Fig. 3 show several different optical modes in a nonlinear
asymmetric double-channel waveguide in the self-defocusing medium and the
self-focusing medium, respectively. These optical modes are induced from the
symmetry-preserving optical modes in the symmetric double-channel waveguide,
where for comparison, we also plotted the corresponding symmetry-preserving
optical modes in the symmetric double-channel waveguides in the same figure.
From Figs. 2 and 3, we find that the symmetry of the modes is broken due to
asymmetry of the two-channel waveguide, and the amplitude of the modes in the
lower refractive index waveguide is smaller than that in the higher refractive
index waveguide for the self-defocusing medium, while for the self-focusing
medium, the amplitude of the modes in the lower refractive index waveguide is
larger than that in the higher refractive index waveguide.
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure4.eps}
\vspace{-0.5cm}\caption{(Color Online) Several optical modes with different
$\beta$ in a nonlinear asymmetrical double-channel waveguide for the
self-defocusing medium ($\eta=-1$). Here the parameters are $V_{1}=2.500$ and
$V_{2}=2.525$.}
\end{figure}
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure5.eps}
\vspace{-0.5cm}\caption{(Color Online) Several optical modes with different
$\beta$ in a nonlinear asymmetrical double-channel waveguide for the
self-focusing medium ($\eta=1$). Here the parameters are the same as in Fig.
4.}
\end{figure}
We also demonstrate the profiles of optical modes with dependence on the
propagation constant $\beta$. Fig. 4 and Fig. 5 present several corresponding
modes shown in Figs. 2 and 3 for different propagation constant $\beta$. From
them, one can see that, for self-defocusing media, the profile of the
nonlinear modes shrinks and the corresponding amplitude decreases with
an increase of the propagation constant $\beta$, while for the
self-focusing case the opposite holds: the profile of the nonlinear
modes becomes more pronounced and the corresponding amplitude grows.
This feature can be depicted by
the dispersion relations between the total energy $P_{0}$ and the propagation
constant $\beta$. As shown in Fig. 8 and Fig. 9, one can see that the total
energy decreases with the increase of the propagation constant for
self-defocusing media (see Fig. 8), whereas it is an increasing function of
the propagation constant for the self-focusing case (see Fig. 9).
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure6.eps}
\vspace{-0.5cm}\caption{(Color Online) Several symmetry-breaking optical
modes, where the dash-dotted red lines are optical modes for $V_{1}=2.500$,
the solid green lines are optical modes for $V_{1}=2.525$, and the dashed blue
lines are optical modes for $V_{1}=2.550$. Here $\eta=-1$ in (a) and (c),
$\eta=1$ in (b) and (d) and $\beta=1.78$ in (a), $\beta=2.39$ in (b),
$\beta=0.66$ in (c) and $\beta=1.15$ in (d).}
\end{figure}
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=9cm]{figure7.eps}
\vspace{-0.5cm}\caption{(Color Online) Several symmetry-breaking optical modes
with different $\beta$ in a nonlinear asymmetrical double-channel waveguide.
Here the parameters are the same as in Fig. 6.}
\end{figure}
As discussed in Ref. \cite{Jia}, besides the symmetry-preserving optical
modes, a double-channel waveguide also supports the symmetry-breaking optical
modes, and the corresponding optical modes in a nonlinear asymmetric
double-channel waveguide are presented in Fig. 6, in which we also plotted the
corresponding symmetry-breaking optical modes in a symmetric double-channel
waveguide in the same figure for comparison. One can find that the optical
modes in a nonlinear asymmetric double-channel waveguide are almost the same
as the modes in a symmetric one for the given $\beta$.
Similarly, the corresponding modes shown in Fig. 6 for different propagation
constant $\beta$ are presented in Fig. 7. It is shown that the profile of
nonlinear modes shrinks with the increase of the propagation constant
$\beta$ for the self-defocusing case, while for the self-focusing case the
profile of nonlinear modes becomes more pronounced. This feature is depicted
by the dispersion relations between the total energy $P_{0}$ and the
propagation constant $\beta$. It should be pointed out that the modes shown in
Fig. 6 only exist in a small region, as shown in Fig. 8 and Fig. 9.
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=11cm]{figure8.eps}
\vspace{-0.5cm}\caption{(Color Online) The dependence of the total energy
$P_{0}$ on the propagation constant $\beta$ for the modes existing in
self-defocusing medium ($\eta=-1$). Here the parameters are $V_{1}=2.500$ and
$V_{2}=2.525$. The labeled shaded areas are enlarged in the corresponding
panels (a) and (b). The labels 2a, 2b, $\cdots$ denote the corresponding
modes shown in Figs. 2a, 2b, $\cdots$, respectively.}
\end{figure}
\begin{figure}[ptb]
\centering\vspace{-0cm} \includegraphics[width=11cm]{figure9.eps}
\vspace{-0.5cm}\caption{(Color Online) The dependence of the total energy
$P_{0}$ on the propagation constant $\beta$ for the modes existing in
self-focusing medium ($\eta=1$). Here the parameters are $V_{1}=2.500$ and
$V_{2}=2.525$. The labeled shaded areas are enlarged in the corresponding
panels (a)-(d). The labels 3a, 3b, $\cdots$ denote the corresponding
modes shown in Figs. 3a, 3b, $\cdots$, respectively.}
\end{figure}
From the dispersion relations shown in Figs. 8 and 9, one can see that for the
self-defocusing medium there is no intersection between dispersion curves (see
Fig. 8 and the enlarged Figs. 8a and 8b), which indicates that there is no
degenerate modes existing, and the total energy of the mode shown in Fig. 2a
is the highest for a given propagation constant $\beta$. While for the
self-focusing medium the dispersion curves can intersect (see the enlarged
Figs. 9c and 9d), which implies that there exist two different modes with the
same total energy at the intersection point, namely, the degeneracy occurs at
the intersection point. Note that for our choices of the parameters, the
intersection points of the dispersion curves for the modes shown in Figs. 3a
and 3b, and Figs. 3c and 3d are about $2.5485$ and $1.3595$, respectively, and
the corresponding degenerate modes are shown by the green solid curves in Fig.
5. Here, to make the intersections easier to distinguish, we rotate the
dispersion curve of the modes shown in Fig. 3a (Fig. 3c) by an angle of
$\pi/6$ anticlockwise about the intersection point, and rotate the
dispersion curve of the mode shown in Fig. 3b (Fig. 3d) by the same
angle clockwise, as shown in Figs. 9c and 9d.
\begin{figure}[ptb]
\includegraphics[width=9.5cm]{figure10.eps} \vspace{0.0cm}\caption{(Color
Online) The evolution of optical modes shown in Fig. 2 into the self-focusing
Kerr medium without channels, where $\eta^{\prime}=10$ in (a) and (b), and
$\eta^{\prime}=20$ in (c) and (d). Here the parameters are $V_{1}=V_{2}=2.525$
and $\zeta_{0}=10$. The labels (a), (b), (c), and (d) mean the corresponding
modes shown in Fig. 2a, 2b, 2c, and 2d, respectively.}
\end{figure}
\begin{figure}[ptb]
\includegraphics[width=9.5cm]{figure11.eps} \vspace{0.0cm}\caption{(Color
Online) The evolution of optical modes shown in Fig. 3 into the self-focusing
Kerr medium without channels, where $\eta^{\prime}=5$ in (a) and (b), and
$\eta^{\prime}=10$ in (c) and (d). Here the parameters are $V_{1}=V_{2}=2.525$
and $\zeta_{0}=10$. The labels (a), (b), (c), and (d) mean the corresponding
modes shown in Fig. 3a, 3b, 3c, and 3d, respectively.}
\end{figure}
\section{The nonlinear manipulation of optical modes in double-channel
waveguide}
In this section, we will demonstrate the control and manipulation of optical
modes in reconfigurable nonlinear media. Thus our interest is to investigate
the evolution dynamics of optical beams existing in double-channel waveguide
propagating into a uniform nonlinear medium. In this case, the governing
equation can be generally written as
\begin{equation}
i\frac{\partial\psi}{\partial z}+\frac{1}{2k_{0}}\frac{\partial^{2}\psi
}{\partial x^{2}}+\frac{k_{0}\Delta n(z,x)}{n_{0}}\psi=0, \label{Gen_eq}
\end{equation}
where the refractive index change $\Delta n(z,x)$ is a function of $z$ and
$x$, and $\Delta n(z,x)=n(z,x)-n_{0}$. Here we assume that when $0\leq z\leq
Z_{0}$, $n(z,x)$ is in the form of Eq. (\ref{ref_ind}), $\Delta
n(z,x)=F(x)+n_{2}I(z,x)-n_{0}$; while for $z>Z_{0}$, $\Delta n(z,x)=n_{2}
^{\prime}I(z,x)-n_{0}$. Here, $n_{2}$ and $n_{2}^{\prime}$ are the Kerr
nonlinear coefficients of different media in the region of $0\leq z\leq Z_{0}$
and $z>Z_{0}$, respectively. Thus, when $0\leq z\leq Z_{0}$, namely in the
region of $0\leq\zeta\leq\zeta_{0}$, Eq. (\ref{Gen_eq}) can be normalized to
Eq. (\ref{eq2}), where $\zeta=z/L_{D}$ and $\zeta_{0}=Z_{0}/L_{D}$ with
$L_{D}=2k_{0}w_{0}^{2}$. At the same time, in the region of $z>Z_{0}$, namely
$\zeta>\zeta_{0}$, Eq. (\ref{Gen_eq}) is reduced to the dimensionless form as
follows
\begin{equation}
i\frac{\partial\varphi}{\partial\zeta}+\frac{\partial^{2}\varphi}{\partial
\xi^{2}}+\eta^{\prime}\left\vert \varphi\right\vert ^{2}\varphi=0,
\label{eq22}
\end{equation}
where $\eta^{\prime}=n_{2}^{\prime}/\left\vert n_{2}\right\vert $. Note that
Eq. (\ref{eq22}) is different from Eq. (\ref{eq2}), in which Eq. (\ref{eq2})
includes a potential function $V(\xi)$ given by Eq. (\ref{V}), while Eq.
(\ref{eq22}) does not include and can describe the dynamics of beams in Kerr
media without any refractive index modulation.
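Evolution equations of this type are routinely integrated with a split-step Fourier scheme, which alternates the exact linear (diffraction) step in Fourier space with the pointwise nonlinear and potential phase step. The Python sketch below is a generic second-order (Strang) splitting; it is our own illustration of the method, not the authors' code, and the function name and defaults are ours:

```python
import numpy as np

def split_step(phi0, xi, dz, n_steps, eta=1.0, V=None):
    """Strang split-step integrator for
    i phi_z + phi_xixi + V(xi) phi + eta |phi|^2 phi = 0
    on a uniform, periodic grid xi."""
    n = xi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=xi[1] - xi[0])
    lin = np.exp(-1j * k**2 * dz)          # exact linear step in Fourier space
    v = np.zeros(n) if V is None else V
    phi = phi0.astype(complex)
    for _ in range(n_steps):
        phi = phi * np.exp(0.5j * dz * (v + eta * np.abs(phi)**2))  # half nonlinear step
        phi = np.fft.ifft(lin * np.fft.fft(phi))                    # full linear step
        phi = phi * np.exp(0.5j * dz * (v + eta * np.abs(phi)**2))  # half nonlinear step
    return phi
```

Since each substep is either a pure phase multiplication or a unitary Fourier multiplier, the scheme conserves the discrete counterpart of the total energy $P_{0}$ up to round-off, which makes it a convenient consistency check on beam-propagation simulations of this kind.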
In the following analysis, optical beams of different modes existing in
double-channel waveguide are injected into the uniform nonlinear medium after
propagating diffraction length of $\zeta_{0}$ in double-channel waveguide.
First we consider the situation that optical beams are injected from
symmetrical double-channel waveguide. For the optical modes shown in Fig. 2,
which exist in self-defocusing medium ($\eta=-1$), the numerical simulations
show that when $\eta^{\prime}<0$, these optical modes are diffracted quickly
after entering into a uniform Kerr medium. However, when $\eta^{\prime}>0$ and
is large enough, the evolution of optical beams exhibits different scenarios
in the Kerr medium without any channels, as shown in Fig. 10. Similarly, for
the optical modes shown in Fig. 3, which exist in self-focusing media
($\eta=1$) with the double-channel waveguide, our numerical
simulations show that the evolution of the optical modes exhibits
properties quite similar to those in Fig. 10, as shown in Fig. 11.
From Figs. 10 and 11, one can see that when the optical modes existing both in
self-defocusing and in self-focusing media with a double-channel waveguide are
injected into a self-defocusing medium without any channels, the beams are
diffracted quickly. However, when the optical modes are injected
into a self-focusing medium without any channels and the corresponding
nonlinear coefficient $\eta^{\prime}$ is large enough, the beams can be
manipulated effectively. In this situation, when the optical modes shown in
Fig. 2a and Fig. 3a are injected into the self-focusing medium without any
channels, the beams attract and repel each other periodically, as
shown in Figs. 10a and 11a. In contrast, when the modes shown in Figs. 2b-2d and
Figs. 3b-3d are injected into the self-focusing medium without any channels, the
beams repel each other, as shown in Figs. 10b-10d and Figs. 11b-11d. Note that
the escape angles are the same for the two beams in Fig. 2b (Fig. 3b) due to the
symmetry of the double-channel waveguide.
\begin{figure}[ptb]
\includegraphics[width=9.5cm]{figure12.eps} \vspace{-0.5cm}\caption{(Color
Online) The evolution of the optical modes shown in Fig. 6 in the self-focusing
Kerr medium without channels, where $\eta^{\prime}=10$. Here the parameters
are $V_{1}=V_{2}=2.525$ and $\zeta_{0}=10$. The labels (a), (b), (c), and (d)
refer to the corresponding modes shown in Figs. 6a, 6b, 6c, and 6d, respectively.}
\end{figure}
\begin{figure}[t]
\includegraphics[width=9cm]{figure13.eps} \vspace{-0.5cm}\caption{(Color
Online) Energy sharing (row 1) and escape angles (row 2) of optical beams as a
function of parameter $V_{1}$. In all cases, the solid-blue and the dashed-red
curves correspond to the left and right beams, respectively. Here the
parameters are $V_{2}=2.525$, $\eta^{\prime}=10$ in (a) and (c), corresponding
to the mode shown in Fig. 2b (namely Fig. 10b), and $\eta^{\prime}=5$ in (b)
and (d), corresponding to the mode shown in Fig. 3b (namely Fig. 11b).}
\end{figure}
The evolution of optical modes shown in Fig. 6 is presented in Fig. 12, in
which Figs. 12a and 12c (Figs. 12b and 12d) demonstrate the evolution of
optical modes in the self-focusing medium without any channels with initial
input beams injected from self-defocusing (self-focusing) medium with
double-channel waveguide. As shown in Figs. 12a and 12b, one can see that
optical beams with a single hump can be compressed effectively, while, as shown
in Figs. 12c and 12d, the optical modes with two peaks are separated during the
evolution due to the repulsive interaction force resulting from the phase
difference between the two peaks.
It should be pointed out that these results only take into account the optical
modes existing in the symmetrical double-channel waveguide. One may then
naturally ask how an asymmetrical double-channel waveguide influences the
evolution of the optical modes. To address this question, we launch
optical beams from an asymmetrical double-channel waveguide into the
self-focusing medium and observe their evolution while tuning the depth of the left
channel of the waveguide for a fixed depth of the right channel; namely, the
value of $V_{1}$ varies from $2.500$ to $2.550$ for $V_{2}=2.525$. As an
example, we demonstrate the evolution dynamics for the modes shown in Fig. 2b
and Fig. 3b. In Fig. 13, we present the dependence on $V_{1}$ of the energy
sharing, i.e., the ratio of the energy carried by each component of the mode to
the total energy, and of the escape angle, i.e., the angle between each peak of
the mode and the propagation direction $\zeta$. As shown in Fig. 13,
one can see that the energy carried by each beam is different due to the
asymmetry of the double-channel waveguide [shown in Figs. 13a and 13b]. Also,
one can see clearly that the escape angles of the two beams take different
values with the change of the value $V_{1}$, which means that the beams can be
controlled by tuning the depth of the left channel of the waveguide.
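The escape angle used above can be extracted from a simulated field history by following the intensity centroid of a beam. The short Python sketch below is an illustrative estimator only (the function name and the centroid-slope definition are our own choices, not taken from the text); applied to each peak separately, it yields the kind of quantities plotted in the second row of Fig. 13:

```python
import numpy as np

def escape_angle(xi, fields, zetas):
    """Angle between a beam and the propagation direction, from the slope of its intensity centroid."""
    centroids = [np.sum(xi * np.abs(f)**2) / np.sum(np.abs(f)**2) for f in fields]
    slope = np.polyfit(zetas, centroids, 1)[0]   # d<xi>/d(zeta)
    return np.arctan(slope)

# Synthetic check: a Gaussian beam drifting with slope 0.1 in the (xi, zeta) plane.
xi = np.linspace(-20.0, 20.0, 400)
zetas = np.linspace(0.0, 5.0, 21)
fields = [np.exp(-(xi - (5.0 + 0.1 * z))**2) for z in zetas]
angle = escape_angle(xi, fields, zetas)
```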
\section{Conclusion}
We have studied light-beam propagation in an asymmetric double-channel
waveguide in the form of a nonlinear coupler. A family of analytical solutions
with symmetric and antisymmetric forms has been obtained for both
self-focusing and self-defocusing nonlinear media, and the dispersion
relations between the total energy and the propagation constant have been
discussed in detail. Our results reveal that the system with a self-focusing
nonlinear response supports degenerate modes, while for a self-defocusing
medium the degenerate modes do not exist. In addition, we explored new ways to
steer optical modes propagating from a double-channel waveguide into a uniform
self-focusing medium. The compression of beams with a single hump and the
splitting of beams with two humps have been demonstrated by tuning the depth of
the channels of the waveguide. These properties may be applied in practical
optical devices, and may be useful for optical processing, optical switching, or optical routing.
\section{\textbf{ACKNOWLEDGEMENT}}
The authors acknowledge fruitful discussions with Professor Yuri Kivshar. This
research is supported by the National Natural Science Foundation of China
grant 61078079, the Shanxi Scholarship Council of China grant 2011-010, and
the Australian Research Council.
\bigskip
\section{Introduction}
The quark-gluon plasma produced in heavy-ion collisions at RHIC is very opaque to energetic partons as shown by the observed
strong jet quenching \cite{Adler:2003qi,Adler:2002tq} due to multiple scattering and induced parton energy loss \cite{Wang:1991xy}.
The energy and momentum lost by a propagating parton will be transferred to the medium via
radiated gluons and recoiled partons and lead to collective medium excitation in the form of Mach
cones \cite{CasalderreySolana:2004qm,Stoecker:2004qu}. Such collective excitation by a propagating jet
is expected to be responsible for the observed azimuthal dihadron correlation \cite{Adams:2005ph,Adler:2005ee}, which has
a double peak on the away side of a triggered high-$p_{T}$ hadron. However, hadron spectra from the
freeze-out of the Mach cone in both hydrodynamics with realistic energy-momentum deposition by
jets \cite{CasalderreySolana:2004qm} and string calculations in the
hydrodynamic regime \cite{Gubser:2009sn} fail to reproduce the observed conic azimuthal correlations.
In this talk, we will report a recent study \cite{Li:2010ts} of medium excitation by a propagating jet shower
using both a linear Boltzmann transport and AMPT model \cite{Zhang:1999bd}. We will illustrate
that while a Mach-cone-like excitation by a propagating jet in a uniform medium cannot give rise to
a conic distribution of the final partons, deflection of the jet shower and the
Mach-cone-like excitation in an expanding medium will result in a double-peaked azimuthal distribution.
Recent studies also found that hydrodynamic expansion of hot spots or local fluctuation in the initial
parton density under the influence of radial flow \cite{Takahashi:2009na} and the triangular flow
of dense matter with fluctuating initial geometry \cite{Alver:2010gr} all contribute to a double-peaked
back-to-back dihadron correlation. With these different mechanisms
contributing to the dihadron correlation, it is important to explore ways to separate different contributions
and study the characteristics of the dihadron correlation from each of them. We will also report
a recent study \cite{Ma:2010dv} on dihadron correlation
as a result of harmonic flow or high order azimuthal anisotropy of hadron spectra,
expanding hot spots, jets and jet-induced medium excitation. The dihadron correlation after subtraction of contributions
from harmonic flow should come from medium modified jets, jet-induced medium excitation and expanding hot spots under
strong radial flow in high-energy heavy-ion collisions. By successively
randomizing the azimuthal angle of transverse momenta and transverse coordinates of initial jet shower
partons, we can isolate the effects of medium modified dijets, jet-induced medium excitation and
expanding hot spots. Because of the azimuthally uniform emission of direct photons, the $\gamma$-hadron correlation should be free
of contributions from harmonic flow and hot spots and is therefore caused only by jet-induced medium
excitation. We therefore propose to use a comparative study of $\gamma$-hadron and dihadron azimuthal correlations
to disentangle contributions from expanding hot spots and shed light on the dynamics of jet-induced Mach-cone-like
excitation in high-energy heavy-ion collisions.
One can study jet shower propagation and medium excitation through a linearized Monte
Carlo simulation of the Boltzmann transport equation,
\begin{eqnarray}
p _1 \cdot \partial f_1 (p_1 ) &=& - \int {dp_2 } {dp_3 } {dp_4 } (f_1 f_2 - f_3 f_4 )
\left| {M_{12 \to 34} } \right|^2 \nonumber \\
&\times&
(2\pi )^4 \delta ^4 (p_1 + p_2 - p_3 - p_4 ),
\label{eq:boltz}
\end{eqnarray}
including only elastic $1+2\rightarrow 3+4$ processes as given by the matrix elements $M_{12 \to 34}$,
where $ dp_i \equiv d^3 p_i/[2E_i (2\pi )^3]$, $ f_{i} =1/(e^{p_{i}\cdot u/T} \pm 1)$ $(i=2,4)$ are thermal parton
phase-space densities in a medium with local temperature $T$ and flow velocity $u=(1,\vec v)/\sqrt{1-v^{2}}$,
$f_{i}=(2\pi)^{3}\delta^{3}(\vec p-\vec p_{i})\delta^{3}(\vec x-\vec x_{i}-\vec v_{i}t)$ $(i=1,3)$ are the jet shower parton
phase-space densities before and after scattering, and we neglect the quantum statistics in the final state
of the scattering. We will consider quark propagation in a thermal medium and assume small
angle approximation of the elastic scattering amplitude $\left| {M_{12 \to 34} } \right|^{2} = C g^{4}(s^2 + u^2
)/(t + \mu^{2 })^2 $ with a screened gluon propagator, where $s$, $t$ and $u$ are Mandelstam variables,
$C$=1 (9/4) is the color factor for quark-gluon (gluon-gluon) scattering and $\mu$ is the screening mass
which we consider here as a constant cut-off of small angle scattering. The corresponding elastic cross section
is $d\sigma/dt=\left| {M_{12 \to 34} } \right|^{2}/16\pi s^{2}$.
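For a constant cutoff $\mu$, the small-angle pole of this cross section can be sampled in closed form, which is convenient in a Monte Carlo implementation of Eq. (\ref{eq:boltz}). The Python sketch below is illustrative only: it keeps the $1/(\left\vert t\right\vert +\mu^{2})^{2}$ behavior, drops the slowly varying numerator, and uses placeholder values for $\mu^{2}$ and the kinematic limit $t_{\max}$:

```python
import numpy as np

def sample_abs_t(mu2, t_max, rng, size):
    """Draw |t| from d(sigma)/d|t| proportional to 1/(|t| + mu^2)^2 on [0, t_max] via the inverse CDF."""
    r = rng.random(size)
    inv = 1.0 / mu2 - r * (1.0 / mu2 - 1.0 / (t_max + mu2))
    return 1.0 / inv - mu2

rng = np.random.default_rng(0)
t_abs = sample_abs_t(mu2=1.0, t_max=10.0, rng=rng, size=200_000)  # units are arbitrary here
```

In a full simulation, $t_{\max}$ would be fixed by the kinematics of each $2\to 2$ scattering.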
Our numerical simulations show \cite{Li:2010ts} that recoil thermal partons from jet-medium interaction in a uniform medium
do cause Mach-cone-like excitation. The low-energy recoil partons around the neck of the Mach-cone-like
excitation have a double-hump feature in their azimuthal distribution. However, low-energy partons from the
body of the Mach-cone-like excitation become dominant at later times, and the final azimuthal distribution has
only a broad single peak along the direction of the propagating jet due to diffusion of the wake front.
\begin{figure}[!ht]
\centerline{\includegraphics[width=6.5cm]{cone.pdf}\includegraphics[width=6.5cm]{xydis.pdf}}
\caption{(Color online) (left upper) Contour plot in the transverse ($x$-$y$) and beam ($x$-$z$) plane of energy density
excited by a quark jet shower with $E_{T}=20$ GeV and initial position $(x,y,z)=(-4,0,0)$ fm, traveling toward the center of
the expanding medium. The azimuthal distribution of medium and jet shower partons when the jet shower
travels against (left lower left) and perpendicular (left lower right) to the transverse flow.
(right) Contour plot of initial parton density (in arbitrary unit) $dN/dxdy$ in transverse
plane in a typical AMPT central $Au+Au$ event ($b=0$) at $\sqrt{s}=200$ GeV, with ellipticity $\epsilon_{2}=0.02$
and triangularity $\epsilon_{3}=0.02$ of the transverse parton distribution.}
\label{fig1}
\end{figure}
In an expanding medium as described by a (3+1)D ideal hydrodynamical calculation \cite{Hirano:2005xf},
both the shape of the medium excitation and the azimuthal distribution of partons from the jet shower
and jet-induced medium excitation are distorted by the transverse flow and the non-uniformity of the dense medium
as shown in the left panel of Fig.~\ref{fig1}. For a tangentially propagating jet shower, low-$p_{T}$ partons from the jet shower and
the Mach-cone-like excitation are clearly deflected by both the density gradient and the radial flow, giving rise to
the azimuthal distributions that peak at an angle away from the initial jet direction. For jet showers
that travel against the radial flow, the same deflection splits the azimuthal
distribution of low $p_{T}$ partons to become a double-peaked one.
Multiple scatterings in heavy-ion collisions lead to fluctuations in the local parton number density, or hot spots,
from both soft and hard interactions. Shown in the right panel of Fig.~\ref{fig1} is a contour plot of the initial parton
density distribution in transverse plane $dN/dxdy$ of a typical AMPT event for central ($b=0$) $Au+Au$
collisions at $\sqrt{s}=200$ GeV. The irregular distribution of hot spots will lead to harmonic flow due to
the collective expansion. The contributions from these harmonic flows to dihadron correlations can be calculated as
\begin{equation}
f(\Delta \phi ) = B\left(1 + \sum\limits_{n = 1}^\infty {2\langle v_n^{\rm trig} v_n^{\rm asso}\rangle \cos n\Delta \phi} \right), \label{eq:BG}
\end{equation}
where $B$ is a normalization factor determined by the ZYAM (zero yield at minimum) scheme of
background subtraction, and $v_n^{\rm trig}$ and $v_n^{\rm asso}$
are the harmonic flow coefficients of the trigger and associated hadrons, respectively.
For the study of jet-induced medium
excitation, it is important to isolate and subtract contributions from harmonic flows, especially the triangular flow,
since it contributes the most to the double-peak structure of back-to-back dihadron correlation.
Shown in the left panel of Fig.~\ref{fig-dih1}
are dihadron correlations before (dot-dashed) and after (solid) the removal of
contributions from harmonic flows for $p_{T}^{\rm trig}>2.5$ GeV/$c$
and $1 < p_{T}^{\rm asso} < 2$ GeV/$c$. Also shown in the figure are contributions from each harmonic flow $n=2$-6 (dashed).
These contributions are significant for up to $n=5$ harmonics.
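The background of Eq. (\ref{eq:BG}) and its ZYAM normalization are straightforward to implement. The following Python sketch uses made-up flow coefficients and toy Gaussian jet peaks, purely to illustrate the subtraction procedure:

```python
import numpy as np

def flow_background(dphi, v_trig, v_asso, signal):
    """Harmonic-flow shape 1 + sum_n 2 v_n^trig v_n^asso cos(n*dphi), normalized by ZYAM."""
    shape = 1.0 + sum(2.0 * vt * va * np.cos(n * dphi)
                      for n, (vt, va) in enumerate(zip(v_trig, v_asso), start=1))
    B = np.min(signal / shape)  # ZYAM: the subtracted yield vanishes at its minimum
    return B * shape

# Toy correlation: flow background plus near-side and away-side jet peaks.
dphi = np.linspace(-0.5 * np.pi, 1.5 * np.pi, 200)
v_t, v_a = [0.0, 0.10, 0.05], [0.0, 0.08, 0.04]  # illustrative v_1..v_3 values
flow = sum(2.0 * vt * va * np.cos(n * dphi)
           for n, (vt, va) in enumerate(zip(v_t, v_a), start=1))
signal = (1.0 + flow + 0.3 * np.exp(-dphi**2 / 0.2)
          + 0.15 * np.exp(-(dphi - np.pi)**2 / 0.5))
subtracted = signal - flow_background(dphi, v_t, v_a, signal)
```

By construction, the subtracted correlation is nonnegative and vanishes at its minimum, leaving only the peak structures.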
\begin{figure}
\centerline{\includegraphics[width=7.5cm]{dihadron1.pdf}\includegraphics[width=7.5cm]{dihadron2.pdf}}
\caption{(Color online) (left) AMPT results on dihadron azimuthal correlation before (dot-dashed) and after (solid)
subtraction of contribution from harmonic flow $v_{n} (n=2-6)$ for $p_{T}^{\rm trig}>2.5$ GeV/$c$
and $1 < p_{T}^{\rm asso} < 2$ GeV/$c$. (right) Dihadron correlations after subtraction of harmonic flow
with different values of geometric triangularity $\epsilon_{3}$ for $p_{T}^{\rm trig}>2.5$ GeV/$c$
and $1 < p_{T}^{\rm asso} < 2$ GeV/$c$ .}
\label{fig-dih1}
\end{figure}
As seen in the left panel of Fig.~\ref{fig-dih1}, dihadron correlation after subtraction of contributions from harmonic flows still has
a double-peak feature on the away-side of the trigger due to jet-induced medium excitation and hadrons
from expanding hot spots. The structure should
be intrinsic to the jet-induced medium excitation and hot spots themselves and insensitive to fluctuations of the initial geometry of the dense matter at a fixed
impact parameter. As shown in the right panel of Fig.~\ref{fig-dih1}, the dihadron correlations after subtraction of
contributions from harmonic flows become independent of the initial geometric triangularity $\epsilon_{3}$.
To study the structure of dihadron azimuthal correlation from jets and hot spots separately, we successively
switch off each mechanism in AMPT model calculations. By randomizing the azimuthal angle of each jet shower
parton in the initial condition from HIJING simulations, we effectively switch off the initial
back-to-back correlation of dijets. The dihadron correlation (dashed) after subtraction of harmonic flows, denoted as ``hot spots'' in the
left panel of Fig.~\ref{fig-dih3} still exhibits a double-peak on the away-side that comes only from hot spots.
It has roughly the same opening angle $\Delta\phi\sim 1$ (rad) as in the full simulation (solid). However, the magnitude
of the double-peaked away-side correlation is reduced by about 40\%, which can be attributed to
dihadrons from medium-modified dijets and jet-induced medium excitation.
When we turn off jet production in the HIJING initial condition, soft partons from string materialization
can still form ``soft hot spots'' that lead to a back-to-back dihadron correlation (dot-dashed) with a weak
double-peak. Jet shower partons increase the parton density in ``hot spots'' and lead
to a stronger double-peak dihadron correlation than that of ``soft hot spots''.
Without jets in AMPT, one can further randomize the polar angle of transverse coordinates of
soft partons and therefore eliminate the ``soft hot spots''. The dihadron correlation from such smoothed initial
condition becomes almost flat (dotted).
\begin{figure}
\centerline{\includegraphics[width=7.5cm]{dihadron3.pdf}\includegraphics[width=7.95cm]{gamma.pdf}}
\caption{(Color online) (left) Dihadron correlation (with harmonic flow subtracted) from AMPT with different
initial conditions for $p_{T}^{\rm trig}>2.5$ GeV/$c$ and $1 < p_{T}^{\rm asso} < 2$ GeV/$c$.
See text for details on the different initial conditions. (right) Dihadron correlation (solid)
compared with $\gamma$-hadron correlation (dashed) from AMPT for $p_{T}^{\rm trig}(h)>4$ GeV/$c$,
$p_{T}^{\rm trig}(\gamma)\ge 15$ GeV/$c$ and $1 < p_{T}^{\rm asso} < 2$ GeV/$c$ .}
\label{fig-dih3}
\end{figure}
Since high-$p_{T}$ $\gamma$'s do not interact with the dense medium, their emission should be uniform
in the azimuthal angle and uncorrelated with the harmonic flow and collective flow of the hot spots. Therefore,
the $\gamma$-hadron correlation should only come from $\gamma$-triggered jets, and its shape should
directly reflect the medium modification of the jets and the jet-induced medium excitation.
Shown in the right panel of Fig.~\ref{fig-dih3} are dihadron correlations
(solid) after subtraction of harmonic flow as compared with $\gamma$-hadron
correlation. The two correlations are comparable in magnitude, but the dihadron correlation has a more pronounced
double peak, which can be attributed to the addition of dihadrons from hot spots and to the
geometric bias toward surface and tangential emission that enhances the deflection
of jet showers and jet-induced medium excitation \cite{Li:2010ts} by the radial flow.
Such a difference is important to measure in experiments, since it will provide critical
information on jet-induced medium excitation and the evolution of hot
spots in high-energy heavy-ion collisions.
This work is supported
by the NSFC of China under Projects Nos. 10610285, 10635020, 10705044, 10825523, 10975059,
11035009, the Knowledge Innovation Project of Chinese Academy of Sciences under Grant No. KJCX2-EW-N01
and by the U.S. DOE under Contract No. DE-AC02-05CH11231 and within the framework of the JET Collaboration.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:introduction}
A time-honored possibility to study the low-energy regime of the theory of strong interactions,
Quan\-tum Chromo\-dy\-na\-mics (QCD), is given by Effective Field Theories (EFTs).
In this approach, one investigates QCD-inspired effective models that describe the interactions
of the relevant low-energy degrees of freedom, namely, those of hadrons. In order to capture the
low-energy dynamics of the strong interaction properly, the construction of these models is
mainly based on the internal and spacetime symmetries of QCD and their possible breaking.
In the context of low-energy models for QCD, the most central symmetry is given by the
chiral $SU(N_{f})_{L}\times SU(N_{f})_{R}$ symmetry accidentally arising from the quark
sector of the QCD Lagrangian. Here, $N_{f}$ refers to the number of dynamical quark flavors,
which will be set to two, $N_{f}=2$, throughout the rest of this work.
The importance of chiral symmetry is twofold: On the one hand, the hadronic
currents arise as multiplets with respect to the chiral group. Therefore, group theory allows
for a systematic construction of chirally invariant Lagrangians. On the other hand, the explicit
and spontaneous breaking of chiral symmetry is of immediate physical relevance.
In particular, one observes so-called pseudo
Nambu-Goldstone bosons (pNGBs) in the associated particle spectrum. Since these particles are
very light, they dominate the low-energy dynamics of the theory and, consequently, are
of crucial importance for a proper low-energy description. In the case of two-flavor QCD,
these pNGBs are associated with the pion fields.
In the framework of QCD, the most important example of an EFT is given by Chiral
Perturbation Theory (ChPT) \cite{Gasser:1983yg, Gasser:1984gg}. Conceptually, this approach
corresponds to a simultaneous expansion of the QCD generating functional in powers of
pion momenta and quark masses. The associated effective Lagrangian contains an infinite tower
of pion self-interactions coupled by so-called low-energy constants (LECs).
These coupling constants encode essential information about the low-energy
regime of QCD. At lowest order in the chiral expansion, the ChPT Lagrangian is equivalent to the
Nonlinear Sigma Model (NLSM) \cite{GellMann:1960np}.
Apart from ChPT, it is also possible to construct effective low-energy models
from a linear realization of chiral symmetry. The resulting family of models is usually referred
to as Linear Sigma Models (LSMs). In contrast to the nonlinear models, the pNGB fields and their
chiral partners are here treated on the same footing. The simplest and most
prominent example of such a model is given by the $O(4)$ LSM, which describes the interaction of the
$\sigma$ meson and the three pions.
In a recent work \cite{Eser:2018jqo}, we studied the low-energy
limit of the $O(4)$ LSM coupled to quarks, the so-called Quark-Meson Model (QMM), within
the Functional Renormalization Group (FRG) approach. Besides
the Yukawa coupling of the scalar and pseudoscalar mesons to the quark fields,
the $O(4)$ LSM has been extended by complete sets of derivative couplings of
order $\mathcal{O}(\partial^{2})$ and $\mathcal{O}(\partial^{4})$
for the pion fields. This corresponds to an approximation well beyond
the usual local potential approximation (LPA) and LPA' truncations, where these
derivative couplings are solely generated from meson and
quark fluctuations.
After its calculation from the FRG flow, the effective action of the QMM has
been reduced to an effective pion action by integrating out the $\sigma$ field,
similarly to Refs.\ \cite{Jungnickel:1997yu, Divotgey:2016pst}. In this action, the
higher-derivative terms are parametrized by the low-energy couplings of the $O(4)$ QMM.
In the present work, we improve and extend
the previous analysis of these low-energy couplings in several crucial ways:
\begin{itemize}
\item [(i)] We transform the effective action of the QMM into a nonlinearly
realized pion action by explicitly restricting the dynamics of the system onto the vacuum manifold
$SO(4)/SO(3)$. This manifold is associated with the spontaneous breakdown of the $O(4)$ symmetry.
After choosing a specific set of coordinates on $SO(4)/SO(3)$, we will deduce relations between the
higher-derivative couplings in
the linearly realized QMM and the nonlinear model featuring only pion dynamics. This nonlinear model is
then referred to as the low-energy limit of the QMM within this work.
\item [(ii)] In order to determine the range of validity of the low-energy effective theory, we
investigate the relative importance of mesonic and fermionic loop contributions to the renormalization
group (RG)-scale dependence of the low-energy couplings in the nonlinear model.
\item [(iii)] We introduce the higher-derivative couplings in the linear model
in a completely $O(4)$-invariant
manner, i.e., taking also momentum dependences of the interaction of the pion fields with the
$\sigma$ meson into account, and also include a scale dependence of the Yukawa interaction.
\end{itemize}
The paper is organized as follows: In Secs.\ \ref{sec:models} and \ref{sec:methods}, we introduce the basic
models, methods, and concepts that are used in this work. To this end,
Secs.\ \ref{sec:LSM} and \ref{sec:NLSM} briefly review the $O(4)$ LSM, the NLSM, as
well as the nonlinear formalism. Afterwards, Sec.\ \ref{sec:FRG} focuses on the
higher-derivative interactions in terms of the FRG truncation. In Sec.\ \ref{sec:effpion},
we transform the linearly realized effective action of the QMM into its nonlinear counterpart.
Finally, Sec.\ \ref{sec:results} presents the numerical results of the FRG flow within the
linear QMM, cf.\ Sec.\ \ref{sec:higher_order}, and the computed low-energy couplings of the
associated nonlinear model, cf.\ Sec.\ \ref{sec:LECs}. The conclusions of this work as well
as an outlook on further investigations are given in Sec.\ \ref{sec:summary}.
\section{Models}
\label{sec:models}
In this section, we briefly summarize the most important features of the linear QMM
and the nonlinear model with pionic degrees of freedom.
\subsection{The linear Quark-Meson Model}
\label{sec:LSM}
As already mentioned in the introduction, the simplest model based on a linear
realization of chiral symmetry is given by the $O(4)$ LSM. The basic object of this
model is the four-dimensional Euclidean field-space vector
\begin{equation}
\varphi = \begin{pmatrix} \vec{\pi} \\ \sigma \end{pmatrix}. \label{eq:phi}
\end{equation}
This vector constitutes the fundamental representation of $O(4)$ and, hence, transforms as
\begin{equation}
\varphi \xrightarrow{O(4)} O\varphi, \quad O \in O(4).
\label{eq:phitrafo}
\end{equation}
The Lagrangian of the $O(4)$ LSM is then constructed as
\begin{equation}
\mathcal{L}_{\text{LSM}} = \frac{1}{2}\left(\partial_{\mu}\varphi\right)\cdot\partial^{\mu}\varphi
- \frac{m_{0}^{2}}{2}\varphi\cdot\varphi - \frac{\lambda}{4}\left(\varphi\cdot\varphi\right)^{2}.
\label{eq:lagrangianLSM}
\end{equation}
The QMM is obtained from the above Lagrangian by including quarks in a chirally invariant way,
\begin{IEEEeqnarray}{rCl}
\mathcal{L}_{\text{QMM}} & = & \frac{1}{2}\left(\partial_{\mu}\varphi\right)\cdot\partial^{\mu}\varphi
- \frac{m_{0}^{2}}{2}\varphi\cdot\varphi - \frac{\lambda}{4}\left(\varphi\cdot\varphi\right)^{2} +
h_{\text{ESB}}\sigma \nonumber\\
& & +\, \bar{\psi}\left(i\gamma^{\mu}\partial_{\mu} - y\Phi_{5}\right)\psi, \label{eq:lagrangianQMM}
\end{IEEEeqnarray}
with
\begin{equation}
\Phi_{5} = \sigma t_{0} + i\gamma_{5}\vec{\pi}\cdot\vec{t},
\end{equation}
where $t_{0} = \mathbbmss{1}_{2}/2$ and $\vec{t} = \vec{\tau}/2$. Here, $\vec{\tau}$
denotes the usual vector of the Pauli matrices. The normalization of the generators
is chosen such that $\mathrm{tr}\left(t_{a}t_{b}\right) = \delta_{ab}/2$, $a,b = 0,\ldots,3$.
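As a quick cross-check of this convention, the relation $\mathrm{tr}\left(t_{a}t_{b}\right) = \delta_{ab}/2$ can be verified numerically; a minimal Python sketch (included only for the reader's convenience):

```python
import numpy as np

# Generators t_0 = 1/2 and t_i = tau_i / 2 built from the Pauli matrices.
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
t = [np.eye(2, dtype=complex) / 2.0] + [m / 2.0 for m in tau]
gram = np.array([[np.trace(a @ b).real for b in t] for a in t])  # should equal delta_ab / 2
```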
The additional term $\sim \!h_{\text{ESB}}\sigma$ in the above Lagrangian describes the explicit
breaking of chiral symmetry (ESB) by tilting the mesonic potential into the direction
of the $\sigma$ field.
In addition to the ESB, also the spontaneous breaking of chiral symmetry
has to be modelled. The latter is signaled by a nonvanishing order parameter identified
with the vacuum expectation value $\sigma_{0}$ of the $\sigma$ field. This order
parameter is typically introduced by shifting the $\sigma$ field according to
\begin{equation}
\sigma \rightarrow \sigma_{0} + \sigma.
\end{equation}
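For later reference, expanding the potential of Eq.\ (\ref{eq:lagrangianQMM}) around the shifted vacuum yields the tree-level masses (a standard result, quoted here as a consistency check),
\begin{equation}
m_{\pi}^{2} = m_{0}^{2} + \lambda\sigma_{0}^{2}, \qquad
m_{\sigma}^{2} = m_{0}^{2} + 3\lambda\sigma_{0}^{2}, \qquad
m_{q} = \frac{y\sigma_{0}}{2},
\end{equation}
such that in the chiral limit, $h_{\text{ESB}} = 0$ and $\sigma_{0}^{2} = -m_{0}^{2}/\lambda$, the pions become massless Goldstone bosons, while $m_{\sigma}^{2} = 2\lambda\sigma_{0}^{2}$.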
\subsection{The nonlinear model}
\label{sec:NLSM}
In contrast to the LSM, described by Eq.\ (\ref{eq:lagrangianLSM}), the field space
of the associated nonlinear model is not given by the four-dimensional Euclidean space,
but by a three-dimensional submanifold \cite{Meetz:1969as}. This field space arises as a consequence
of the pattern of spontaneous symmetry breaking. It is defined by the degenerate vacua and is
usually denoted as the vacuum manifold.
Since the coordinates of this space are in one-to-one correspondence with the pNGBs, the basic
degrees of freedom of the NLSM are given by the three pion fields.
For the following discussion, we only consider the $SO(4)$ subgroup of $O(4)$, such that the
vacuum manifold of the LSM is given by the space of (left) cosets $SO(4)/SO(3)$, cf.\ Refs.\
\cite{Weinberg:1968de, Coleman:1969sm, Callan:1969sn}. Using a representative of this coset
space, in the following denoted as $\Sigma\left(\zeta\right)$, it is possible to construct
the so-called Maurer-Cartan form $\alpha_{\mu}(\zeta)$ as
\begin{equation}
\alpha_{\mu}\left(\zeta\right) = \Sigma^{-1}\left(\zeta\right)\partial_{\mu}\Sigma
\left(\zeta\right).
\label{eq:MCform1}
\end{equation}
It should be emphasized that the coordinates of the coset space, $\zeta^{\alpha}$,
$\alpha = 1,2,3$, are usually not exactly identical to the pion fields, but directly related
to them. The Maurer-Cartan form is defined in the Lie algebra $\mathfrak{so}(4)$ and can
therefore be expanded as
\begin{equation}
\alpha_{\mu}\left(\zeta\right) = ie^{a}_{\mu}\left(\zeta\right)x_{a} + i\omega^{i}_{\mu}
\left(\zeta\right)s_{i},
\label{eq:MCform2}
\end{equation}
with coefficients
\begin{equation}
e^{a}_{\mu}\left(\zeta\right) = e^{\;\;a}_{\alpha}\left(\zeta\right)\partial_{\mu}\zeta^{\alpha},
\qquad \omega^{i}_{\mu}\left(\zeta\right) = \omega^{\;\;i}_{\alpha}\left(\zeta\right)\partial_{\mu}
\zeta^{\alpha},
\label{eq:MCform3}
\end{equation}
where $x_{a}$, $a = 1, 2, 3$, denotes the coset generators and $s_{i}$, $i = 1, 2, 3$,
those of the unbroken $SO(3)$ subgroup. The coefficients $e^{\;\;a}_{\alpha}(\zeta)$
define a frame on $SO(4)/SO(3)$ and the related metric reads
\begin{equation}
g_{\alpha\beta}\left(\zeta\right) = \delta_{ab}\, e^{\;\;a}_{\alpha}\left(\zeta\right)
e^{\;\;b}_{\beta}\left(\zeta\right),
\label{eq:metric}
\end{equation}
where $\alpha, \beta = 1,2,3$ represent curved coset indices.
The simplest Lagrangian that can be constructed from the above objects is given by
\begin{equation}
\mathcal{L}_{\text{NLSM}} = \frac{1}{2}F_{ab}\, e^{a}_{\mu}\left(\zeta\right)e^{b,\mu}
\left(\zeta\right),
\label{eq:NLSMlag}
\end{equation}
where the real-valued matrix $F$ is needed for dimensional reasons. This Lagrangian
is usually called the $SO(4)$ NLSM and contains the pion self-interaction
terms to arbitrary order in the fields with at most two spacetime derivatives.
In Appendix \ref{sec:trafoprops}, we review the general transformation properties of the coset representative
and the Maurer-Cartan form. In addition, we present the transformation behavior of the
nonlinear pion fields for an explicit choice of coordinates.
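As a simple illustration of the nonlinear formalism, consider the coordinates obtained by solving the constraint $\varphi\cdot\varphi = \sigma_{0}^{2}$ for the $\sigma$ field, $\sigma = \sqrt{\sigma_{0}^{2} - \vec{\pi}\cdot\vec{\pi}}$ (one possible choice of coordinates, not necessarily the one employed in the Appendix). Inserting this into the kinetic term of Eq.\ (\ref{eq:lagrangianLSM}) gives
\begin{equation}
\frac{1}{2}\left(\partial_{\mu}\varphi\right)\cdot\partial^{\mu}\varphi
= \frac{1}{2}\left(\delta_{\alpha\beta} + \frac{\pi_{\alpha}\pi_{\beta}}{\sigma_{0}^{2} - \vec{\pi}\cdot\vec{\pi}}\right)
\partial_{\mu}\pi^{\alpha}\,\partial^{\mu}\pi^{\beta},
\end{equation}
i.e., a Lagrangian of the form (\ref{eq:NLSMlag}) with metric $g_{\alpha\beta} = \delta_{\alpha\beta} + \pi_{\alpha}\pi_{\beta}/(\sigma_{0}^{2} - \vec{\pi}\cdot\vec{\pi})$, which makes the geometric origin of the pion self-interactions explicit.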
\section{Methods}
\label{sec:methods}
In this section, we discuss the calculation of the linearly realized effective action of the QMM
and its relation to the nonlinear model featuring only pionic degrees of freedom.
A necessary prerequisite for the determination of the low-energy couplings of
the nonlinear model from the effective action in terms of linearly realized pion fields
is the integration of all nonpionic QCD fluctuations and, in particular, of the quark loops.
Moreover, such a determination of the low-energy couplings is only meaningful at scales
where fluctuations of nonpionic fields have already decoupled from the dynamics.
Before going on, we want to point out that the quark fluctuations of the QMM simulate full QCD dynamics.
In contrast to the purely mesonic linear model, including these quark fluctuations entails the qualitatively
correct decoupling of the mesonic degrees of freedom above the scale of chiral symmetry breaking.
Quantitatively, pure quark fluctuations are not capable of fully capturing QCD dynamics and, e.g.,
this decoupling happens too slowly \cite{Braun:2014ata,
Mitter:2014wpa, Cyrol:2017ewj}. An investigation of the effect of full QCD dynamics on the determination
of the low-energy couplings is deferred to future work.
\subsection{Effective action from the\\Functional Renormalization Group}
\label{sec:FRG}
The FRG is a nonperturbative continuum method that formulates the integration of quantum
fluctuations in terms of the RG-scale ($k$) dependence of the effective average
action $\Gamma_{k}$, which smoothly interpolates between the renormalized
classical action $S$ at the ultraviolet (UV) cutoff scale $\Lambda$, $\Gamma_{k \rightarrow \Lambda}
= S$, and the quantum effective action $\Gamma$ in the infrared (IR), $\Gamma_{k \rightarrow 0}
= \Gamma$. The action $\Gamma$ is the generating functional for all one-particle
irreducible vertex functions.
The scale evolution of $\Gamma_{k}$ from the UV to the IR is described by the
Wetterich equation \cite{Wetterich:1992yh},
\begin{IEEEeqnarray}{rCl}
\partial_{k}\Gamma_{k} & = & \frac{1}{2}\tr\left[
\partial_{k} R_{k}\left(
\Gamma^{(2)}_{k} + R_{k}\right)^{-1}\right] \nonumber\\
& = & \frac{1}{2} \! \!
\vcenter{\hbox{
\begin{pspicture}(1.5,2.0)
\psarc[linewidth=0.02](0.75,1.0){0.6}{113}{67}
\pscircle[linewidth=0.03](0.75,1.6){0.25}
\psline[linewidth=0.03](0.75,1.6)(0.92,1.77)
\psline[linewidth=0.03](0.75,1.6)(0.58,1.77)
\psline[linewidth=0.03](0.75,1.6)(0.58,1.43)
\psline[linewidth=0.03](0.75,1.6)(0.92,1.43)
\end{pspicture}
}} \! \! , \label{eq:Wetterich}
\end{IEEEeqnarray}
where the second line introduces the common diagrammatical notation
\begin{equation}
\left(\Gamma^{(2)}_{k} + R_{k}\right)^{-1} =
\! \! \! \vcenter{\hbox{
\begin{pspicture}[showgrid=false](1.5,0.5)
\psline[linewidth=0.02](0.1,0.25)(1.4,0.25)
\end{pspicture}
}} \! \! ,\quad
\partial_{k}R_{k} = \! \! \!
\vcenter{\hbox{
\begin{pspicture}[showgrid=false](1.5,0.8)
\psline[linewidth=0.02](0.1,0.4)(0.5,0.4)
\psline[linewidth=0.02](1.0,0.4)(1.4,0.4)
\pscircle[linewidth=0.03](0.75,0.4){0.25}
\psline[linewidth=0.03](0.75,0.4)(0.92,0.57)
\psline[linewidth=0.03](0.75,0.4)(0.58,0.57)
\psline[linewidth=0.03](0.75,0.4)(0.58,0.23)
\psline[linewidth=0.03](0.75,0.4)(0.92,0.23)
\end{pspicture}
}} \! \! . \label{eq:rep}
\end{equation}
The above flow equation contains the regulator function $R_{k}$, which gives
an additional mass contribution for low-energy modes. This means that it
effectively acts as an IR cutoff separating those soft modes from the integration
process. By successively lowering the scale $k$, the effective average action
$\Gamma_{k}$ includes increasingly more fluctuations and, in the limit $k \rightarrow 0$,
all quantum fluctuations are integrated out.
In order to compute $\Gamma_{k}$, one has to truncate the system of
vertex functions, since the right-hand side of Eq.\ (\ref{eq:Wetterich}) involves
the two-point function $\Gamma_{k}^{(2)}$, which, itself, couples through the flow
to higher $n$-point functions. A typical truncation scheme for these vertex functions
is given by a derivative expansion, which we will focus on in the following;
cf.\ also the introductory Refs.\ \cite{Bonini:1992vh, Ellwanger:1993mw, Morris:1993qb,
Bagnuls:2000ae, Berges:2000ew, Pawlowski:2005xe, Blaizot:2006rj, Gies:2006wv, Schaefer:2006sr,
Kopietz:2010zz, vonSmekal:2012vx}.
Along the lines of our previous study \cite{Eser:2018jqo}, we choose the following
(Euclidean) truncation for the linearly realized $O(4)$ QMM based on Eq.\ (\ref{eq:lagrangianQMM})
in Sec.\ \ref{sec:LSM} and introduce the $k$-dependent higher-derivative couplings $C_{2,k}$
and $Z_{2,k}$, as well as $C_{i,k}$, $i = 3,\ldots , 8$:
\begin{IEEEeqnarray}{rCl}
\Gamma_{k} & = & \int_{x}\bigg\lbrace
\frac{Z_{k}}{2}
\left(\partial_{\mu}\varphi\right) \cdot
\partial_{\mu}\varphi + U_{k} - h_{\mathrm{ESB}}\sigma
\nonumber\\
& & \qquad +\, C_{2,k}\left(\varphi \cdot
\partial_{\mu}\varphi\right)^{2}
+ Z_{2,k}\, \varphi^{2}\left(\partial_{\mu}\varphi\right)
\cdot \partial_{\mu}\varphi
\nonumber\\
& & \qquad -\, C_{3,k}
\left[\left(\partial_{\mu}\varphi\right)
\cdot \partial_{\mu}\varphi\right]^{2}
- C_{4,k} \left[\left(\partial_{\mu}\varphi\right)
\cdot \partial_{\nu}\varphi\right]^{2} \nonumber\\
& & \qquad -\, C_{5,k} \,
\varphi \cdot \left(\partial_{\mu}\partial_{\mu}\varphi\right)
\left(\partial_{\nu}\varphi\right) \cdot \partial_{\nu}\varphi
\nonumber\\
& & \qquad -\, C_{6,k} \,
\varphi^{2} \left(\partial_{\mu}\partial_{\nu}\varphi\right)
\cdot \partial_{\mu}\partial_{\nu}\varphi
\nonumber\\
& & \qquad -\, C_{7,k}
\left(\varphi \cdot \partial_{\mu}\partial_{\mu}\varphi\right)^{2}
- C_{8,k} \,
\varphi^{2}\left(\partial_{\mu}\partial_{\mu}\varphi\right)^{2}
\nonumber\\
& & \qquad +\, \bar{\psi}\left(Z_{k}^{\psi}\gamma_{\mu}
\partial_{\mu}+
y_{k} \Phi_{5}\right)
\psi\bigg\rbrace .\label{eq:truncation}
\end{IEEEeqnarray}
This truncation constitutes a derivative expansion up to order
$\mathcal{O}(\partial^{4})$. It is well beyond the LPA and its minimal extension
known as the LPA'. The former would only consider a scale-dependent effective
potential $U_{k}$, which is a function of the $O(4)$ invariant $\rho$,
\begin{equation}
\rho \equiv \varphi \cdot \varphi = \vec{\pi}^{2} + \sigma^{2},
\end{equation}
while the latter would also take into account the scaling of
the field variables by means of the flow of the wave-function renormalization
factors for bosons and fermions, $Z_{k}$ and $Z_{k}^{\psi}$. In both approximations,
the higher-derivative couplings would be absent, i.e., they are set to zero.
In the light of Ref.\ \cite{Eser:2018jqo}, the momentum-independent coupling
$C_{1,k}(\varphi \cdot \varphi)^{2}$ is omitted in the above equation.
It is to be identified with the quartic interaction term of the effective potential.
The parameter $h_{\mathrm{ESB}} \neq 0$ explicitly breaks the $O(4)$ symmetry,
as mentioned in the last section, and remains $k$ independent. For the scale-dependent
factors $Z_{k}$, $Z_{k}^{\psi}$, and the Yukawa interaction $y_{k}$ we suppress
a general field dependence. The couplings $\lbrace C_{2,k}, Z_{2,k}\rbrace$ as well as
$\lbrace C_{i,k} \rbrace$, $i = 3,\ldots , 8$, beyond the LPA' form a complete set of terms
of order $\mathcal{O}(\varphi^{4}, \partial^{2})$ and $\mathcal{O}(\varphi^{4},
\partial^{4})$, respectively. As an extension of Ref.\ \cite{Eser:2018jqo}
[cf.\ Eq.\ (20) therein], the low-energy couplings in Eq.\ (\ref{eq:truncation})
now include momentum-de\-pen\-dent $\sigma\pi$ and $\sigma$ self-interactions.
On the level of the two-point functions $\Gamma_{k}^{(2)}$, we define different
effective wave-function renormalization factors for the $\sigma$
and $\pi$ fields,
\begin{IEEEeqnarray}{rCl}
Z_{k}^{\sigma} & = & Z_{k} + 2 \sigma^{2} \left(Z_{2,k} + C_{2,k}\right) \nonumber\\
& & - 2 \sigma^{2} p^{2} \left(C_{6,k} + C_{7,k} + C_{8,k}\right), \label{eq:Zsigma}\\
Z_{k}^{\pi} & = & Z_{k} + 2 \sigma^{2} Z_{2,k} - 2 \sigma^{2} p^{2}
\left(C_{6,k} + C_{8,k}\right), \label{eq:Zpi}
\end{IEEEeqnarray}
where $p$ is the external momentum from the functional derivatives with respect
to the fields. The corrections to $Z_{k}$ in these definitions
obviously arise from the presence of the higher-derivative couplings.
It should be noted that the distinction between $Z_{k}^{\sigma}$ and $Z_{k}^{\pi}$ is not
in contradiction to the $O(4)$ symmetry of the model. As soon as the $\sigma$ field acquires
a nonvanishing expectation value, $\sigma \neq 0$, the wave-function renormalizations will
naturally split.
For later reference, we define the following renormalized quantities based on
the wave-function renormalization factors $Z_{k}^{\pi}$ and $Z_{k}^{\psi}$:
\begin{IEEEeqnarray}{rCl}
\tilde{\sigma} & = & \sqrt{Z_{k}^{\pi}} \sigma, \quad
\tilde{\vec{\pi}} = \sqrt{Z_{k}^{\pi}} \vec{\pi}, \nonumber\\
\tilde{\psi} & = & \sqrt{Z_{k}^{\psi}} \psi, \quad
\tilde{\bar{\psi}} = \sqrt{Z_{k}^{\psi}} \bar{\psi}, \nonumber\\
\tilde{h}_{\mathrm{ESB}} & = & \frac{h_{\mathrm{ESB}}}{\sqrt{Z_{k}^{\pi}}}, \quad
\tilde{y}_{k} = \frac{y_{k}}{Z_{k}^{\psi} \sqrt{Z_{k}^{\pi}}}, \nonumber\\
\tilde{C}_{i,k} & = & \frac{C_{i,k}}{\left(Z_{k}^{\pi}\right)^{2}}, \quad
i = 1, \ldots , 8, \quad
\tilde{Z}_{2,k} = \frac{Z_{2,k}}{\left(Z_{k}^{\pi}\right)^{2}} ,\label{eq:rquantities}
\end{IEEEeqnarray}
where we will, for simplicity, evaluate the above wave-function renormalization factors
at vanishing external momentum, $p=0$; cf.\ the discussion in Appendix \ref{sec:floweqns}.
By choosing $Z_{k}^{\pi}$ for both bosonic fields in the definitions (\ref{eq:rquantities})
one directly obtains the correct renormalization factors in the nonlinear model,
as presented in the next section.
The flow equations for all scale-dependent quantities in truncation (\ref{eq:truncation})
are obtained by projecting the functional derivatives of Eq.\ (\ref{eq:Wetterich}) (the flows
of vertex functions) onto the respective coupling. The resulting expressions for these equations
and further technical aspects, such as the regulator functions, are shown in Appendix \ref{sec:floweqns}.
Finally, the integration of the coupled system of differential equations then allows for a computation
of the quantities in the linearly realized effective average action (\ref{eq:truncation}) in the IR limit,
$\Gamma_{k \rightarrow 0} = \Gamma$; see Appendix \ref{sec:solv_floweqns} for details.
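In practice, this integration amounts to solving a coupled system of ordinary differential equations in the RG time $t = \ln(k/\Lambda)$ from $t=0$ down to the IR. As a minimal illustration of the numerical procedure (a toy one-loop running of a single quartic coupling, not the actual flow equations of Appendix \ref{sec:floweqns}; the UV initial value is an arbitrary assumption), one can integrate with a fixed-step RK4 scheme and compare against the known analytic solution:

```python
import math

def beta(lam):
    # Toy one-loop beta function of a single quartic coupling
    # (illustration only, NOT the QMM flow equations):
    # d(lambda)/dt = 3 lambda^2 / (16 pi^2), with t = ln(k/Lambda).
    return 3.0 * lam**2 / (16.0 * math.pi**2)

def integrate_flow(lam_uv, k_uv, k_ir, n_steps=20000):
    """Integrate from t = 0 (k = k_uv) down to t = ln(k_ir/k_uv) with RK4."""
    t_ir = math.log(k_ir / k_uv)
    h = t_ir / n_steps          # negative step: flowing toward the IR
    lam = lam_uv
    for _ in range(n_steps):
        k1 = beta(lam)
        k2 = beta(lam + 0.5 * h * k1)
        k3 = beta(lam + 0.5 * h * k2)
        k4 = beta(lam + h * k3)
        lam += h * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0
    return lam

lam_uv = 10.0                                 # assumed UV initial condition
lam_ir = integrate_flow(lam_uv, 500.0, 1.0)   # 500 MeV -> 1 MeV, as in the text

# Analytic solution: lambda(k) = lambda_UV / (1 + 3 lambda_UV ln(Lambda/k)/(16 pi^2)).
lam_exact = lam_uv / (1.0 + 3.0 * lam_uv * math.log(500.0) / (16.0 * math.pi**2))
print(lam_ir, lam_exact)
```

For the truncation above, the right-hand side is replaced by the coupled flows of $U_{k}$, $Z_{k}$, $Z_{k}^{\psi}$, $y_{k}$, and the higher-derivative couplings, but the numerical structure of the scale integration is the same.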
\subsection{Effective pion action}
\label{sec:effpion}
A physically meaningful transition from the effective action to the nonlinear model as effective
low-energy theory requires that all fluctuations, except for those of the pions, have
decoupled from the dynamics at the energy-momentum scale where this transition is to be realized.
The low-energy limit of the effective action (\ref{eq:truncation}), expressed in terms of the
renormalized quantities (\ref{eq:rquantities}), is then constructed by integrating out all (already
decoupled) fields,
\begin{equation}
\Gamma_{k}\left[\tilde{\sigma},\tilde{\vec{\pi}},\tilde{\bar{\psi}},\tilde{\psi}\right] \quad
\longrightarrow \quad \Gamma_{k}\left[\tilde{\Pi}\right].
\end{equation}
The vector $\tilde{\Pi}=(\tilde{\Pi}^{1},\tilde{\Pi}^{2},\tilde{\Pi}^{3})$ represents the
renormalized nonlinear pNGB fields, which will be defined below, and the symbol $\Gamma_k$ is kept
for the resulting action.
The quark fields are immediately dropped from the effective action, similarly
to the investigations in Refs.\ \cite{Divotgey:2016pst, Eser:2018jqo}, since this integration
process is restricted to (at most) tree-level diagrams. As a consequence, $\Gamma_{k}$ reduces to the
effective action of the usual LSM, modified by the higher-derivative couplings.
As already mentioned in Sec.\ \ref{sec:NLSM}, the $SO(4)$ NLSM is defined on the
coset space $SO(4)/SO(3)$, which is diffeomorphic to the three-sphere $S^{3}$.
The explicit diffeomorphism is given by
\begin{equation}
\tilde{\varphi} \equiv \sqrt{Z_{k}^{\pi}}\varphi = \Sigma\bigl(\tilde{\zeta}\bigr)\tilde{\phi},
\label{eq:iso}
\end{equation}
with $\tilde{\phi} = (\vec{0}, \tilde{\theta})$, where $\tilde{\theta}
= \sqrt{\tilde{\varphi}^{2}}$ defines the radius of the three-sphere. For the
time being, we keep the field $\tilde{\theta}$ and allow fluctuations in the radial
direction.
The coordinates $\tilde{\zeta}^{a}$ parametrizing the coset representative
$\Sigma(\tilde{\zeta})$ are chosen as stereographic projections,
\begin{equation}
\tilde{\zeta}^{a} = \frac{\tilde{\pi}^{a}}{\tilde{\theta} + \tilde{\sigma}},
\quad a=1,2,3,
\label{eq:coords}
\end{equation}
where $\tilde{\pi}^{a}$ and $\tilde{\sigma}$ are the renormalized Euclidean
coordinates of the linear QMM, cf.\ Sec.\ \ref{sec:FRG}. An explicit
parametrization of the coset representative then reads
\begin{equation}
\Sigma\bigl(\tilde{\zeta}\bigr) = \begin{pmatrix} \delta^{a}_{\;\; b} -
\frac{2\tilde{\zeta}^{a}
\tilde{\zeta}_{b}}{1 + \tilde{\zeta}^{2}} && \frac{2\tilde{\zeta}^{a}}{1 + \tilde{\zeta}
^{2}} \\[0.2cm]
-\frac{2\tilde{\zeta}_{b}}{1 + \tilde{\zeta}^{2}} && \frac{1 - \tilde{\zeta}^{2}}{1 +
\tilde{\zeta}^{2}}
\end{pmatrix}. \label{eq:cosetrep}
\end{equation}
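Acting with this representative on $\tilde{\phi} = (\vec{0}, \tilde{\theta})$ gives the inverse of the stereographic map, $\tilde{\pi}^{a} = 2\tilde{\theta}\tilde{\zeta}^{a}/(1+\tilde{\zeta}^{2})$ and $\tilde{\sigma} = \tilde{\theta}(1-\tilde{\zeta}^{2})/(1+\tilde{\zeta}^{2})$. This round trip can be checked exactly with rational arithmetic; the sample point on the sphere of radius $\tilde{\theta} = 5$ below is an arbitrary choice:

```python
from fractions import Fraction as F

# Arbitrary point on the three-sphere of radius theta = 5 (Pythagorean quadruple).
pi = [F(1), F(2), F(2)]
sigma = F(4)
theta = F(5)
assert sum(p*p for p in pi) + sigma**2 == theta**2

# Stereographic coordinates, Eq. (coords): zeta^a = pi^a / (theta + sigma).
zeta = [p / (theta + sigma) for p in pi]
z2 = sum(z*z for z in zeta)

# Inverse map (fourth column of Sigma acting on (0, theta)):
# sigma = theta (1 - zeta^2)/(1 + zeta^2),  pi^a = 2 theta zeta^a/(1 + zeta^2).
sigma_back = theta * (1 - z2) / (1 + z2)
pi_back = [2 * theta * z / (1 + z2) for z in zeta]
print(sigma_back, pi_back)
```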
The coefficients of the Maurer-Cartan form proportional to the broken generators as well
as the metric in Eq.\ (\ref{eq:metric}) are thus evaluated as
\begin{equation}
e^{a}_{\mu}\bigl(\tilde{\zeta}\bigr) \equiv e^{\;\;a}_{\alpha}\bigl(\tilde{\zeta}\bigr)
\partial_{\mu}\tilde{\zeta}^{\alpha} =
\frac{2\delta^{\;\;a}_{\alpha}}{1 + \tilde{\zeta}^{2}}\partial_{\mu}\tilde{\zeta}^{\alpha},
\label{eq:mcframe}
\end{equation}
and
\begin{equation}
g_{\alpha\beta} \equiv g_{\alpha\beta}\bigl(\tilde{\zeta}\bigr) = \frac{4\delta_{\alpha\beta}}{\bigl(1 +
\tilde{\zeta}^{2}\bigr)^{2}},
\end{equation}
respectively.
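Both the orthogonality of the representative, $\Sigma^{T}\Sigma = 1$, and the metric above (as the pullback of the flat metric through $\tilde{\varphi} = \Sigma\tilde{\phi}$ at unit radius) can be verified numerically. The following sketch checks both at a randomly chosen point, using a central-difference Jacobian and float tolerances:

```python
import random

def sigma_matrix(z):
    """Coset representative of Eq. (cosetrep) as a 4x4 nested list."""
    s = sum(x*x for x in z)
    m = [[(1.0 if a == b else 0.0) - 2.0*z[a]*z[b]/(1.0+s) for b in range(3)]
         + [2.0*z[a]/(1.0+s)] for a in range(3)]
    m.append([-2.0*z[b]/(1.0+s) for b in range(3)] + [(1.0-s)/(1.0+s)])
    return m

def phi(z, theta=1.0):
    """phi = Sigma(zeta) (0, 0, 0, theta): a point on the three-sphere."""
    return [row[3]*theta for row in sigma_matrix(z)]

random.seed(1)
z = [random.uniform(-1.0, 1.0) for _ in range(3)]
S = sigma_matrix(z)

# Orthogonality: (S^T S)_{ij} should equal delta_{ij}.
sts = [[sum(S[k][i]*S[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

# Pullback metric g_{ab} = sum_i (d phi^i/d zeta^a)(d phi^i/d zeta^b),
# Jacobian computed via central finite differences.
h = 1e-6
J = []
for a in range(3):
    zp = list(z); zp[a] += h
    zm = list(z); zm[a] -= h
    J.append([(pp - pm)/(2.0*h) for pp, pm in zip(phi(zp), phi(zm))])
g = [[sum(J[a][i]*J[b][i] for i in range(4)) for b in range(3)] for a in range(3)]
s = sum(x*x for x in z)
g_expected = 4.0/(1.0+s)**2    # diagonal entries of 4 delta_{ab}/(1 + zeta^2)^2
print(g_expected)
```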
The transition of the LSM to its associated NLSM is now realized by inserting
Eq.\ (\ref{eq:iso}) into Eq.\ (\ref{eq:truncation}) and identifying the $\tilde{\theta}$
field with the pion decay constant $f_{\pi}$ [PCAC relation; cf.\ Ref.\ \cite{Eser:2018jqo}],
\begin{equation}
\tilde{\theta} = f_{\pi}.
\end{equation}
This step fixes the radial fluctuations $\tilde{\theta}$ to a constant radius
and eliminates the $\tilde{\theta}$ field from the effective action. The resulting nonlinear
model can then be written as
\begin{IEEEeqnarray}{rCl}
\Gamma_{k} & = & \int_{x} \bigg\lbrace \frac{f^{2}_{\pi}}{2}g_{\alpha\beta}\bigl(\nabla_{\mu}
\tilde{\zeta}^{\alpha}\bigr)\nabla_{\mu}\tilde{\zeta}^{\beta} \nonumber\\
& & \quad\quad -\, \bigl(\tilde{C}_{6,k}+\tilde{C}_{8,k}\bigr) \, f_{\pi}^{4} \,
g_{\alpha\beta}\bigl(\nabla_{\mu}\nabla_{\mu}
\tilde{\zeta}^{\alpha}\bigr)
\bigl(\nabla_{\nu}\nabla_{\nu}\tilde{\zeta}^{\beta}\bigr) \nonumber\\
& & \quad\quad -\, \bigl(\tilde{C}_{3,k} - \tilde{C}_{5,k} + \tilde{C}_{6,k} + \tilde{C}_{7,k}
+ \tilde{C}_{8,k}\bigr)\, f^{4}_{\pi} \nonumber\\
& & \qquad\qquad \times \, g_{\alpha\beta}g_{\gamma\delta}\bigl(\nabla_{\mu}
\tilde{\zeta}^{\alpha}\bigr)\bigl(\nabla_{\mu}\tilde{\zeta}^{\beta}\bigr)\bigl(\nabla_{\nu}
\tilde{\zeta}^{\gamma}\bigr)\nabla_{\nu}\tilde{\zeta}^{\delta} \nonumber\\
& & \quad\quad -\, \tilde{C}_{4,k} \, f^{4}_{\pi} \,
g_{\alpha\beta}g_{\gamma\delta}\bigl(\nabla_{\mu}
\tilde{\zeta}^{\alpha}\bigr)\bigl(\nabla_{\nu}\tilde{\zeta}^{\beta}\bigr)\bigl(\nabla_{\mu}
\tilde{\zeta}^{\gamma}\bigr)\nabla_{\nu}\tilde{\zeta}^{\delta} \nonumber\\
& & \quad\quad -\, \tilde{h}_{\text{ESB}}\, f_{\pi}\frac{1 - \tilde{\zeta}^{2}}{1 + \tilde{\zeta}^{2}}
\bigg\rbrace, \label{eq:integration}
\end{IEEEeqnarray}
with $\nabla_{\mu}\tilde{\zeta}^{\alpha} \equiv \partial_{\mu}\tilde{\zeta}^{\alpha}$.
The action of the covariant derivative $\nabla_{\mu}$ on a vector $\partial_{\nu}\tilde{\zeta}^{\alpha}$ is defined as
\begin{equation}
\nabla_{\mu}\nabla_{\nu}\tilde{\zeta}^{\alpha} \equiv
\nabla_{\mu}\partial_{\nu}\tilde{\zeta}^{\alpha} =
\partial_{\mu}\partial_{\nu}\tilde{\zeta}^{\alpha} +
\Gamma^{\alpha}_{\;\;\beta\gamma}\bigl(\partial_{\mu}\tilde{\zeta}^{\beta}\bigr)\partial_{\nu}
\tilde{\zeta}^{\gamma}.
\end{equation}
In stereographic coordinates, the above Christoffel symbols $\Gamma^{\alpha}_{\;\;\beta\gamma}$ read
\begin{equation}
\Gamma^{\alpha}_{\;\;\beta\gamma} = \frac{2}{1 + \tilde{\zeta}^{2}}\left(
-\delta^{\alpha}_{\;\;\gamma}\tilde{\zeta}_{\beta} - \delta^{\alpha}_{\;\;\beta}\tilde{\zeta}_{\gamma}
+ \delta_{\beta\gamma}\tilde{\zeta}^{\alpha}\right).
\end{equation}
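These Christoffel symbols can be cross-checked against the defining expression $\Gamma^{\alpha}_{\;\;\beta\gamma} = \frac{1}{2}g^{\alpha\delta}\left(\partial_{\beta}g_{\delta\gamma} + \partial_{\gamma}g_{\delta\beta} - \partial_{\delta}g_{\beta\gamma}\right)$ for the conformally flat metric given above. A numerical sketch comparing a finite-difference evaluation with the closed form at a random point (float tolerances):

```python
import random

def metric(z):
    """g_{ab} = 4 delta_{ab} / (1 + zeta^2)^2 in stereographic coordinates."""
    s = sum(x*x for x in z)
    c = 4.0/(1.0+s)**2
    return [[c if a == b else 0.0 for b in range(3)] for a in range(3)]

def dmetric(z, d, h=1e-6):
    """Central-difference derivative d g_{ab} / d zeta^d."""
    zp = list(z); zp[d] += h
    zm = list(z); zm[d] -= h
    gp, gm = metric(zp), metric(zm)
    return [[(gp[a][b]-gm[a][b])/(2.0*h) for b in range(3)] for a in range(3)]

def christoffel(z):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    s = sum(x*x for x in z)
    ginv_diag = (1.0+s)**2/4.0          # inverse of the conformally flat metric
    dg = [dmetric(z, d) for d in range(3)]
    return [[[0.5*ginv_diag*(dg[b][a][c] + dg[c][a][b] - dg[a][b][c])
              for c in range(3)] for b in range(3)] for a in range(3)]

def christoffel_closed(z):
    """Closed form quoted in the text."""
    s = sum(x*x for x in z)
    pre = 2.0/(1.0+s)
    return [[[pre*(-(1.0 if a == c else 0.0)*z[b]
                   - (1.0 if a == b else 0.0)*z[c]
                   + (1.0 if b == c else 0.0)*z[a])
              for c in range(3)] for b in range(3)] for a in range(3)]

random.seed(2)
z = [random.uniform(-1.0, 1.0) for _ in range(3)]
G_num, G_cl = christoffel(z), christoffel_closed(z)
```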
The effective action (\ref{eq:integration}) resembles Eq.\ (\ref{eq:NLSMlag})
with $F_{ab} = f_{\pi}^{2}\delta_{ab}$ for these specific coordinates.
Furthermore, it features corrections from the ESB term and the higher-derivative
couplings. Its general form is consistent with the studies on nonlinear
sigma models in Refs.\ \cite{Percacci:2009fh, Flore:2012ma}.
In order to obtain a canonically normalized kinetic term for the pNGB fields, we introduce a
field redefinition according to
\begin{equation}
\tilde{\zeta}^{a} \rightarrow \frac{\tilde{\Pi}^{a}}{2f_{\pi}}, \quad a = 1, 2, 3.
\end{equation}
Expanding all quantities up to fourth order in the new field variables $\tilde{\Pi}^{a}$,
the above effective action becomes
\begin{IEEEeqnarray}{rCl}
\Gamma_{k} & = & \int_{x} \biggl\lbrace \frac{1}{2}
\left(\partial_{\mu}\tilde{\Pi}_{a}\right)\partial_{\mu}\tilde{\Pi}^{a} + \frac{1}{2}
\tilde{\mathcal{M}}^{2}_{\Pi,k}\, \tilde{\Pi}_{a}\tilde{\Pi}^{a} \nonumber\\
& & \quad\quad -\, \tilde{\mathcal{C}}_{1,k}\left(\tilde{\Pi}_{a}\tilde{\Pi}^{a}
\right)^{2} + \tilde{\mathcal{Z}}_{2,k}\, \tilde{\Pi}_{a}\tilde{\Pi}^{a}
\left(\partial_{\mu}\tilde{\Pi}_{b}\right)\partial_{\mu}\tilde{\Pi}^{b} \nonumber\\
& & \quad\quad -\, \tilde{\mathcal{C}}_{3,k}\left[\left(\partial_{\mu}\tilde{\Pi}_{a}\right)
\partial_{\mu}\tilde{\Pi}^{a}\right]^{2} \nonumber\\
& & \quad\quad -\, \tilde{\mathcal{C}}_{4,k}\left[\left(\partial_{\mu}\tilde{\Pi}_{a}
\right)\partial_{\nu}\tilde{\Pi}^{a}\right]^{2} \nonumber\\
& & \quad\quad -\, \tilde{\mathcal{C}}_{5,k}\, \tilde{\Pi}_{a}\left(\partial_{\mu}\partial_{\mu}
\tilde{\Pi}^{a}\right)\left(\partial_{\nu}\tilde{\Pi}_{b}\right)\partial_{\nu}
\tilde{\Pi}^{b} \nonumber\\
& & \quad\quad -\, \tilde{\mathcal{C}}_{6,k}\, \tilde{\Pi}_{a}\tilde{\Pi}^{a}
\left(\partial_{\mu}\partial_{\nu}\tilde{\Pi}_{b}\right)\partial_{\mu}\partial_{\nu}\tilde{\Pi}^{b}
\nonumber \\
& & \quad\quad -\, \tilde{\mathcal{C}}_{8,k}\, \tilde{\Pi}_{a}\tilde{\Pi}^{a}
\left(\partial_{\mu}\partial_{\mu}\tilde{\Pi}_{b}\right)\partial_{\nu}\partial_{\nu}\tilde{\Pi}^{b}
\biggr\rbrace ,
\label{eq:finalaction}
\end{IEEEeqnarray}
where we defined the squared mass of the $\tilde{\Pi}^{a}$ fields,
\begin{equation}
\tilde{\mathcal{M}}^{2}_{\Pi,k} = \frac{\tilde{h}_{\text{ESB}}}{f_{\pi}},
\end{equation}
as well as the low-energy couplings
\begin{IEEEeqnarray}{rCl}
\tilde{\mathcal{C}}_{1,k} & = & \frac{\tilde{\mathcal{M}}^{2}_{\Pi,k}}{8f^{2}_{\pi}}, \nonumber\\
\tilde{\mathcal{Z}}_{2,k} & = &-\frac{1}{4f^{2}_{\pi}}, \nonumber\\
\tilde{\mathcal{C}}_{3,k} & = & \tilde{C}_{3,k}-\tilde{C}_{5,k}+\tilde{C}_{7,k}
+ 2\bigl(\tilde{C}_{6,k}+\tilde{C}_{8,k}\bigr), \nonumber \\
\tilde{\mathcal{C}}_{4,k} & = & \tilde{C}_{4,k}, \nonumber \\
\tilde{\mathcal{C}}_{5,k} & = & 2\bigl(\tilde{C}_{6,k}+\tilde{C}_{8,k}\bigr), \nonumber \\
\tilde{\mathcal{C}}_{6,k} & = & -\, \tilde{C}_{6,k}-\tilde{C}_{8,k}, \nonumber \\
\tilde{\mathcal{C}}_{8,k} & = & \frac{1}{2}\bigl(\tilde{C}_{6,k} + \tilde{C}_{8,k}\bigr).
\label{eq:LECrelations}
\end{IEEEeqnarray}
It should be underlined that Eq.\ (\ref{eq:finalaction}) is calculated from
Eq.\ (\ref{eq:integration}) by integrating several terms by parts in order to
reobtain the term structures of the derivative expansion (\ref{eq:truncation}).
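The $\mathcal{O}(\tilde{\Pi}^{2})$ and $\mathcal{O}(\tilde{\Pi}^{4})$ coefficients quoted above follow from elementary geometric series: with $x = \tilde{\Pi}^{2}/(4f_{\pi}^{2})$, the ESB term becomes $-\tilde{h}_{\mathrm{ESB}}f_{\pi}(1-x)/(1+x)$ and the kinetic prefactor becomes $\frac{1}{2}(1+x)^{-2}$. The expansion can be reproduced with exact rational arithmetic; the numerical values of $\tilde{h}_{\mathrm{ESB}}$ and $f_{\pi}$ below are placeholders:

```python
from fractions import Fraction as F

def inv_one_plus_x(n):
    # Truncated series of 1/(1+x) = sum_m (-1)^m x^m.
    return [F((-1)**m) for m in range(n)]

def mul(a, b, n):
    # Truncated Cauchy product of two power series.
    return [sum(a[i]*b[m-i] for i in range(m+1)) for m in range(n)]

n = 3
inv = inv_one_plus_x(n)
esb = mul([F(1), F(-1), F(0)], inv, n)   # (1 - x)/(1 + x) = 1 - 2x + 2x^2 - ...
kin = mul(inv, inv, n)                   # 1/(1 + x)^2    = 1 - 2x + 3x^2 - ...

h, f = F(7, 10), F(13, 100)   # placeholder values for h_ESB and f_pi

# ESB term -h f (1-x)/(1+x) with x = Pi^2/(4 f^2): the coefficient of Pi^2
# must be M^2/2 with M^2 = h/f, and the coefficient of (Pi^2)^2 must be -C1.
mass2 = -h*f * esb[1] / (4*f**2) * 2     # 2 * (coefficient of Pi^2)
c1 = h*f * esb[2] / (4*f**2)**2          # -(coefficient of (Pi^2)^2)
# Kinetic term (1/2)(1+x)^{-2} (dPi)^2: the coefficient of Pi^2 (dPi)^2 is Z2.
z2 = F(1, 2) * kin[1] / (4*f**2)
print(mass2, c1, z2)
```

This reproduces $\tilde{\mathcal{M}}^{2}_{\Pi,k} = \tilde{h}_{\mathrm{ESB}}/f_{\pi}$, $\tilde{\mathcal{C}}_{1,k} = \tilde{\mathcal{M}}^{2}_{\Pi,k}/(8f_{\pi}^{2})$, and $\tilde{\mathcal{Z}}_{2,k} = -1/(4f_{\pi}^{2})$ exactly.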
\begin{figure*}[t!]
\centering
\includegraphics{fig1.eps}
\caption{Higher-derivative couplings of the linear QMM. (a)
Scale evolution of the renormalized couplings $\tilde{C}_{i,k}$, $i = 3, \ldots , 8$.
(b) Scale evolution of the renormalized couplings $\tilde{C}_{2,k}$ and $\tilde{Z}_{2,k}$;
$k_{\mathrm{IR}} = 1\ \mathrm{MeV}$.}
\label{fig:c}
\end{figure*}
Equation (\ref{eq:finalaction}) is one of the central results of this paper. It is the nonlinear
counterpart of the linear QMM and, as repeatedly pointed out, has to be understood as its
low-energy limit. Moreover, we obtained relations between the low-energy couplings in the linear
and the nonlinear model, cf.\ Eq.\ (\ref{eq:LECrelations}).
It is remarkable that the geometrical constraint of fixing the $\tilde{\theta}$ field
to the constant radius of the three-sphere restricts the number of possible couplings of
order $\mathcal{O}(\partial^{2})$ as well as $\mathcal{O}(\partial^{4})$. In the chosen set of
coordinates, only the analogue of $\tilde{Z}_{2,k}$, the coupling $\tilde{\mathcal{Z}}_{2,k}$ out of the terms
of order $\mathcal{O}(\partial^{2})$, ``survives'' in the nonlinear framework, while the interaction
$\sim \tilde{C}_{2,k}$ vanishes. The same holds true for the couplings $\tilde{C}_{3,k}$, $\tilde{C}_{4,k}$,
$\tilde{C}_{5,k}$, $\tilde{C}_{6,k}$, and $\tilde{C}_{8,k}$ in the case of the terms of order
$\mathcal{O}(\partial^{4})$. Besides, a linear dependence between the couplings
$\tilde{\mathcal{C}}_{5,k}$, $\tilde{\mathcal{C}}_{6,k}$, and
$\tilde{\mathcal{C}}_{8,k}$ obviously arises in the nonlinear model after fixing
the $\tilde{\theta}$ field; see once more Eq.\ (\ref{eq:LECrelations}).
We observe that the mo\-men\-tum-in\-de\-pen\-dent quar\-tic coupling
$\tilde{C}_{1,k}$ and the effective potential $U_{k}$, in general, do
not enter the low-energy limit. Also, the couplings $\tilde{C}_{2,k}$ and $\tilde{Z}_{2,k}$ are
irrelevant for the results in the nonlinear effective pion action.
They only indirectly influence the result through the integration process of the system
of flow equations. Furthermore, the couplings $\tilde{\mathcal{C}}_{1,k}$ and $\tilde{\mathcal{Z}}_{2,k}$ are
identical to the analytical results for the respective terms in the ChPT Lagrangian formulated
in stereographic coordinates, as can be easily deduced from Ref.\ \cite{Divotgey:2016pst}. In fact,
within the geometrical constraints on the vacuum manifold, the coupling $\tilde{\mathcal{Z}}_{2,k}$
is only a function of the pion decay constant $f_{\pi}$.
\section{Numerical Results and Discussion}
\label{sec:results}
We now present the numerical results for the higher-derivative interactions
as obtained from the linear QMM, renormalized at a hadronic cutoff scale of
$\Lambda = 500\ \mathrm{MeV}$, as well as the derived low-energy couplings of the
nonlinear model. All additional constituents of truncation (\ref{eq:truncation})
and details on their calculation within the FRG framework are described in
Appendix \ref{sec:solv_floweqns}.
\subsection{Linear model: Higher-derivative couplings}
\label{sec:higher_order}
The results for the higher-derivative pion couplings of the linear QMM are shown in
Fig.\ \ref{fig:c}. The subfigures \ref{fig:c}(a) and \ref{fig:c}(b) explicitly
show how these couplings, initialized at zero in the UV, become nonzero as soon
as the RG scale $k$ decreases. Their final numerical values in the IR limit, $k_{\mathrm{IR}}
= 1\ \mathrm{MeV}$, are collected in the column ``Linear model'' of Table \ref{tab:LECs}.
\begin{table}[b!]
\caption{\label{tab:LECs}Low-energy (derivative) couplings.\footnote{Values
are given at $k = 1\ \mathrm{MeV}$.}}
\begin{ruledtabular}
\begin{tabular}{lclc}
\multicolumn{2}{c}{\textbf{Linear model}} & \multicolumn{2}{c}{\textbf{Nonlinear model}}\\[0.1cm]
\colrule\\[-0.4cm]
$\tilde{C}_{2}\ [1/f_{\pi}^{2}]\times 10$ & $-0.88$ & \multicolumn{2}{c}{$\ldots$} \\
$\tilde{Z}_{2}\ [1/f_{\pi}^{2}]\times 10$ & $-2.30$ &
$\tilde{\mathcal{Z}}_{2}\ [1/f_{\pi}^{2}]\times 10$ & $-2.50$ \\
$\tilde{C}_{3}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $2.88$ &
$\tilde{\mathcal{C}}_{3}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $-4.20$ \\
$\tilde{C}_{4}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $1.27$ &
$\tilde{\mathcal{C}}_{4}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $1.27$ \\
$\tilde{C}_{5}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $4.69$ &
$\tilde{\mathcal{C}}_{5}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $-2.41$ \\
$\tilde{C}_{6}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $-2.35$ &
$\tilde{\mathcal{C}}_{6}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $1.21$ \\
$\tilde{C}_{7}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $0.02$ & \multicolumn{2}{c}{$\ldots$} \\
$\tilde{C}_{8}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $1.14$ &
$\tilde{\mathcal{C}}_{8}\ [1/f_{\pi}^{4}]\times 10^{2}$ & $-0.60$
\end{tabular}
\end{ruledtabular}
\end{table}
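As a consistency check, the tabulated IR values in the column ``Nonlinear model'' can be reproduced, up to the two-decimal rounding of the table entries, by inserting the linear-model values into the relations (\ref{eq:LECrelations}):

```python
# IR values of the linear-model couplings from the table (column "Linear model"),
# in units of 10^{-2}/f_pi^4.
C3, C4, C5, C6, C7, C8 = 2.88, 1.27, 4.69, -2.35, 0.02, 1.14

# Relations between the linear and nonlinear couplings, Eq. (LECrelations).
calC3 = C3 - C5 + C7 + 2.0*(C6 + C8)
calC4 = C4
calC5 = 2.0*(C6 + C8)
calC6 = -(C6 + C8)
calC8 = 0.5*(C6 + C8)
print(calC3, calC4, calC5, calC6, calC8)
```

The small residual differences with respect to the tabulated values (e.g., $-4.21$ vs.\ $-4.20$ for $\tilde{\mathcal{C}}_{3}$) reflect the two-decimal rounding of the inputs.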
Since the $\mathcal{O}(\partial^{2})$ couplings of the nonlinear model do not depend
on the corresponding couplings of the linear QMM, we discuss the $\mathcal{O}(\partial^{4})$
couplings first. Figure \ref{fig:c}(a) reveals that the main contribution to the couplings
$\tilde{C}_{i,k}$, $i = 3,\ldots, 8$, comes from fluctuations with energy-momentum scales of
$k \simeq 50 - 200\ \mathrm{MeV}$, which is significantly below the scale of spontaneous chiral symmetry
breaking, $k_{\text{SSB}}\simeq 300\ \mathrm{MeV}$. We thus conclude that these couplings are
well captured by the low-energy dynamics of the QMM. Also, given their slow running
above the scale of chiral symmetry breaking, the initial value of zero seems to be a very
reasonable approximation for these couplings.
Although of less relevance for the low-energy couplings of the nonlinear model, it is still
interesting to investigate also the $\mathcal{O}(\partial^{2})$ couplings of the linear QMM.
As already observed in the preceding investigation \cite{Eser:2018jqo}, these couplings experience
a rapid initial change at RG scales close to the UV cutoff. This rapid initial change of the couplings
of order $\mathcal{O}(\partial^{2})$ is an indication that a hadronic cutoff scale of around $500\ \mathrm{MeV}$
is actually too low for a precise determination of these couplings.
However, the fixed-point-like scale evolution of $\tilde{C}_{2,k}$ and $\tilde{Z}_{2,k}$ at intermediate RG scales
suggests that their IR values do not drastically depend on the choice of $\Lambda$.
In future investigations, the dynamical-hadronization approach
\cite{Gies:2001nw, Gies:2002hq, Pawlowski:2005xe, Floerchinger:2009uf} will allow for a smooth transition
between the fundamental interactions and the bosonic operators presented in this study and, in turn,
for a computation of these couplings directly from quark and gluon fluctuations at QCD scales,
cf.\ also Refs.\ \cite{Braun:2014ata, Mitter:2014wpa, Cyrol:2017ewj}.
\subsection{Nonlinear model: Low-energy couplings}
\label{sec:LECs}
As already argued, the computation of the low-energy limit of the QMM requires the
integration of all nonpionic fields. Nevertheless, we present the nontrivial $\mathcal{O}(\partial^{4})$
low-energy couplings on all RG scales.
Figure \ref{fig:cnonlinear} shows the results of
applying (\ref{eq:LECrelations}) at every RG scale. Additionally, we have decomposed
the contributions to these couplings into quark and meson loops, respectively.
Despite the fact that the main contribution to the linear $\mathcal{O}(\partial^{4})$
couplings, cf.\ Fig.\ \ref{fig:c}(a), stems from fluctuations below the scale
of chiral symmetry breaking, these couplings are almost exclusively determined by the quark
fluctuations. The mesonic fluctuations and, in particular, the pionic fluctuations contribute
noticeably to the total couplings only at energies below $200\ \mathrm{MeV}$. However,
also in this energy regime, the main contribution is due to quark fluctuations. This is
in accordance with functional QCD calculations
\cite{Braun:2014ata, Mitter:2014wpa, Cyrol:2017ewj} of other
low-energy couplings in the linear realization.
\begin{figure*}[t!]
\centering
\includegraphics{fig2.eps}
\caption{Low-energy couplings of the nonlinear model. (a) and (b)
Scale evolution of the renormalized couplings
$\tilde{\mathcal{C}}_{3,k}$ and $\tilde{\mathcal{C}}_{4,k}$, respectively.
(c) Scale evolution of $\tilde{C}_{6,k} + \tilde{C}_{8,k}$ representing
the renormalized couplings $\tilde{\mathcal{C}}_{5,k}$,
$\tilde{\mathcal{C}}_{6,k}$, and $\tilde{\mathcal{C}}_{8,k}$.
The evolution of the couplings (solid lines) is decomposed into
fermionic (dashed lines) and bosonic contributions (dash-dotted lines);
$k_{\mathrm{IR}} = 1\ \mathrm{MeV}$.}
\label{fig:cnonlinear}
\end{figure*}
The most astonishing observation in Figs.\ \ref{fig:cnonlinear}(a), \ref{fig:cnonlinear}(b),
and \ref{fig:cnonlinear}(c) is that the loop contributions from the quark degrees of
freedom are only fully integrated out at scales below $50 - 100\ \mathrm{MeV}$. This
is almost certainly also the case in full QCD, since functional QCD calculations
\cite{Braun:2014ata, Mitter:2014wpa, Cyrol:2017ewj} exhibit an even stronger dominance of
qualitatively similar quark fluctuations. Therefore, we conclude that the low-energy couplings
can only be determined and defined from QCD below these scales, at least in the
renormalization scheme used here. Furthermore, this also leads to a natural cutoff scale for theories that
are exclusively based on pion fluctuations.
Lastly, a comparison with the corresponding values from ChPT would require
compatible renormalization schemes and comparable renormalization scales.
It is also clear that the computed low-energy couplings presented in this work
do not yet include the effect of resonances, especially, the light scalar
and vector channels.
The numerical IR values of the higher-derivative couplings in the nonlinearly
realized effective pion action are listed in the last column (``Nonlinear model'')
of Table \ref{tab:LECs}. The couplings that vanish in the nonlinear picture are
denoted by three dots. The pion mass amounts to $\tilde{\mathcal{M}}_{\Pi,k_{\mathrm{IR}}} =
138.5\ \mathrm{MeV}$. The momentum-in\-de\-pen\-dent coupling $\tilde{\mathcal{C}}_{1,k_{\mathrm{IR}}}$
has a value of $0.27$.
\section{Summary and Outlook}
\label{sec:summary}
In this work, we studied the low-energy limit of the $O(4)$ QMM within the FRG approach
by transforming the corresponding effective action into an effective pion theory.
This corresponds to a transition from the QMM based on a linear realization
of the $O(4)$ symmetry to a model where the pions enter according to a nonlinear
realization.
Our approach yields precise statements about the energy-momentum scale below
which the dynamics is dominated by pionic degrees of freedom. We find that
for physical pion masses:
\begin{itemize}
\item [(i)] The pion loop contributions to the low-energy couplings
are strongly suppressed as compared to the quark loops.
\item [(ii)] In our renormalization scheme, the scale for the decoupling of the quark fluctuations
and, in turn, the range of validity of the low-energy effective theory is roughly given by $50-100$ MeV.
\end{itemize}
Due to the qualitative similarity of the QMM dynamics to the low-energy limit of QCD, these statements most
likely extrapolate to full QCD, which has to be checked explicitly in a future investigation.
Upcoming studies will also shed more light on the relation to ChPT,
where this work can be understood as an extension of Refs.\ \cite{Jungnickel:1997yu,
Divotgey:2016pst}. In particular, it would be very interesting if the computed low-energy
couplings of the QMM and related effective theories, like the extended LSM \cite{Parganlija:2012fy},
are consistent with the low-energy limit of QCD (the latter is formalized by means of the LECs in ChPT).
However, as stated above, this would require a meaningful choice of the renormalization scale
in ChPT with regard to the physical relevance of pion fluctuations. Moreover, such a
comparison implies a profound discussion of the effect of resonances on the low-energy
couplings within our approach. One also has to carefully evaluate how much this comparison
would be distorted by the fact that our analysis is formulated in terms of the effective
action, which has been generated from the flow of the linearly realized QMM.
Finally, and on a more technical level, the presented work is an extension of our previous
exploratory study \cite{Eser:2018jqo} of higher-derivative pion self-interactions within the
FRG formalism. The used truncation was improved by including higher-derivative $\sigma\pi$
as well as $\sigma$ self-interactions in a completely $O(4)$-symmetric way. Additionally,
we also took the flow of the Yukawa coupling into account.
All higher-derivative couplings were dynamically generated from the FRG flow, which was
initialized at a hadronic scale of $500\ \mathrm{MeV}$. The large corrections to the
interactions of order $\mathcal{O}(\partial^{2})$ right after the initialization suggest
a determination of these couplings from QCD scales in future investigations. Such an
analysis could be carried out using the dynamical-hadronization procedure \cite{Braun:2014ata,
Mitter:2014wpa, Cyrol:2017ewj}, which consistently describes mesonic degrees of freedom
as quark-antiquark bound states.
\begin{acknowledgments}
The authors thank M.\ Birse, J.-P.\ Blaizot, D.\ D.\ Dietrich, J.\ Goity, A.\ Koenigstein,
J.\ M.\ Pawlowski, R.\ D.\ Pisarski, J.\ Qiu, S.\ Rechenberger, F.\ Rennecke, D.\ H.\ Rischke, B.-J.\ Schaefer,
L.\ von Smekal, and C.\ Weiss for valuable discussions.
J.\ E.\ acknowledges funding by the German National Academic Foundation and HIC for FAIR.
M.\ M.\ is supported by the DFG grant MI 2240/1-1
and the U.S.\ Department of Energy under contract DE-SC0012704.
\end{acknowledgments}
\section{Introduction}\label{sec:introduction}
\subsection{Dark Patterns}
Dark patterns are user interface designs on online services that make users behave in unintended ways. Dark patterns have been called into question in recent years.
In 2010, Harry \cite{darkpattern.org} defined dark patterns as “tricks used in websites and apps that make a user do things that the user did not mean to, like buying or signing up for something.” Fig. \ref{dpex} shows an example of a dark pattern classified as Obstruction \cite{11kScale}. Obstruction makes it difficult for users to complete a specific task, such as cancellation or withdrawal. For example, in Fig. \ref{dpex}, the website only shows an “ADD TO CART” button but no cancel button. Instead, the website shows “If you would like to cancel your membership, please call ... or contact us via email at ...”, which makes it difficult for the user to cancel by limiting cancellation to telephone calls or email.
\begin{figure*}[h]
\includegraphics[width=\linewidth]{obstruction_example_2-3.png}
\caption{An Example of Dark Patterns (platinumonlineretail.com)}
\label{dpex}
\end{figure*}
Prior studies \cite{dpYoutube,11kScale,mobileApp,cookieConsent} have reported that dark patterns exist everywhere on online services, including e-commerce sites, social networking services (SNS), consent to cookies, and apps.
\subsection{Dark Patterns and Privacy}
Dark patterns also pose problems for user privacy protection, including UX designs that induce users to provide personal data or consent to cookies on online services. Discussion of the impact of dark patterns on user privacy is not limited to academic research; it has been conducted widely in various venues.
In 2018, the California Legislature passed the California Consumer Privacy Act (CCPA) \cite{ccpa} to ban dark patterns on the Internet, which became effective in 2020 and had a critical impact on privacy-related choices. In 2019, the Commission Nationale de l'Informatique et des Libertés (CNIL) in France published a report \cite{cnil} on the impact of UX design on privacy protection. The report argued that manipulative and/or misleading interfaces on online services could influence critical decisions related to user privacy. The report also raised awareness of such dark patterns and called on designers to collaborate on privacy-friendly designs. In 2020, the Organisation for Economic Co-operation and Development (OECD) discussed the privacy and purchasing behavior risks that dark patterns pose to consumers \cite{oecd}. During the meeting, the risk of dark patterns exposing personal information on online services without the consumer’s genuine consent was mentioned. In 2021, the European Data Protection Board (EDPB) discussed dark patterns in social media that can negatively impact users’ decisions regarding the handling of personal information \cite{edpb}. The main objective was to discuss protecting users from dark patterns that may harm their privacy.
As the policies of various countries show, ever-increasing dark patterns have become a social problem, and urgent action is required.
\subsection{Study Contributions}
Prior works \cite{dpYoutube,11kScale,mobileApp,cookieConsent,darkSideUx,darkpattern.org} investigated and classified dark patterns on e-commerce sites, social networking services sites, news sites, apps, and cookie consent. However, to the best of our knowledge, auto-detection of dark patterns was outside their scope. Although a few studies \cite{webbasedDp,autoCookie} tackled auto-detection, they relied on manual methods to extract features for auto-detection.
This work distributes a dataset of dark patterns consisting of positive and negative samples, which will help new research in this area to detect dark patterns on websites automatically. In addition, we provide baseline results of auto-detection with state-of-the-art machine learning methods, including BERT \cite{bert}, RoBERTa \cite{roberta}, ALBERT \cite{albert}, and XLNet \cite{xlnet}, for easy comparison with future algorithms.
\subsection{Organization of the Paper}
Related work is introduced in Section \ref{sec:relatedWork}, followed by a description of the creation of the dark pattern dataset in Section \ref{sec:ecommerceDarkPatternDataset}. Section \ref{sec:baselineMethodsForAutomaticDetection} investigates the performance of auto-detection with state-of-the-art machine learning algorithms as baselines. Finally, we conclude this paper in Section \ref{sec:conclusion}.
\section{Related Work}\label{sec:relatedWork}
This section introduces related work from three categories: dark pattern taxonomies, dark patterns at scale, and automated dark pattern detection.
\subsection{Dark Pattern Taxonomies}
The taxonomies of dark patterns have been defined in various ways, including research aiming to define taxonomies directly and research classifying dark patterns gathered from large-scale collections of web pages.
The term “dark patterns” was first defined on Harry's website \cite{darkpattern.org} in 2010. He classified dark patterns into 12 types and introduced examples of each type. In 2018, Gray et al. \cite{darkSideUx} gathered examples of dark patterns through searches with the keyword “dark patterns” and its derivatives on Google, Bing, and social networking sites, such as Facebook and Instagram. Gray et al. analyzed the contents to classify dark patterns into 5 types. Table \ref{tab:dptype:graym} lists the types of dark patterns defined by Gray et al. In 2021, Mathur et al.\cite{whatDp} conducted a comprehensive study to summarize the various definitions of dark patterns, which included 84 types of dark patterns defined by 11 studies in the fields of human-computer interaction, security and privacy, law, and psychology.
\begin{table*}[t]
\centering
\caption{Types of Dark Patterns Defined by Gray et al. \cite{darkSideUx}}
\def\arraystretch{1.5}
\begin{tabularx}{\textwidth}{l|X} \hline
Types of Dark Patterns & Description \\ \hline
Nagging & Redirection of expected functionality that persists beyond one or more interactions. \\
Obstruction & Making a process more difficult than it needs to be, with the intent of dissuading certain action(s). \\
Sneaking & Attempting to hide, disguise, or delay the divulging of information that is relevant to the user. \\
Interface Interference & Manipulation of the user interface that privileges certain actions over others. \\
Forced Action & Requiring the user to perform a certain action to access (or continue to access) certain functionality. \\ \hline
\end{tabularx}
\label{tab:dptype:graym}
\end{table*}
As described above, the taxonomies of dark patterns have been defined based on various criteria in several studies.
\subsection{Dark Patterns at Scale}
Many researchers have conducted studies on dark patterns at a large scale across online content to determine the extent to which dark patterns exist on online services.
In 2018, Mathur et al. \cite{dpYoutube} conducted a large-scale study to determine whether influencers disclose that their content is advertising when they promote specific products. The Federal Trade Commission (FTC) prohibits creators from promoting specific products without disclosing the fact to users. Mathur et al. found that approximately 10\% of the affiliate advertisement content did not disclose that it was advertising. In 2019, Mathur et al. \cite{11kScale} extensively studied dark patterns on shopping websites. They obtained 11,286 shopping sites by extracting English shopping sites from the 361,102 most-accessed websites according to Alexa Traffic Rank. As a result, they found 1,818 instances of dark patterns, covering 7 types, on 1,254 shopping sites, approximately 11.1\% of the total, and they published the dark pattern text data\footnote{https://github.com/aruneshmathur/dark-patterns}.
In 2020, Di Geronimo et al. \cite{mobileApp} studied 240 popular Android applications. Two researchers performed a set of common activities, including user registration and configuration changes, to investigate whether dark patterns existed in the applications. Di Geronimo et al. analyzed the dark patterns found and classified them into the five types of dark patterns defined by Gray et al. \cite{darkSideUx}.
In 2020, Soe et al. \cite{cookieConsent} conducted a large-scale study of dark patterns in cookie consent notifications. The work targeted the consent notices for cookies on 300 news sites written in Nordic languages and English. Two researchers asked the following questions: “Are there dark patterns in the consent notices?”; if so, “What is the type of dark pattern (of the five types defined by Gray et al.)?”; “Is it possible to refuse cookies?”; and “Where does the cookie consent notice appear?”. As a result, they identified 297 dark patterns for inducing consent to cookies.
\subsection{Automated Detection for Dark Patterns}
A few studies have addressed the automatic detection of dark patterns as follows.
In 2021, Andrea et al. \cite{webbasedDp} proposed a detection framework for dark patterns combining manual and automatic methods. They targeted the 12 dark pattern types defined by Harry \cite{darkpattern.org}. The weakness of the framework is that only simple keyword matching is adopted for the automated detection methods; for example, the automated detection is based on whether keywords such as opt-in or opt-out are included. The other features used to detect dark patterns rely on manual methods, so the framework cannot detect dark patterns fully automatically.
In 2022, Soe et al. \cite{autoCookie} conducted automatic dark pattern detection in cookie banners. They experimented using machine learning techniques on a manually collected dataset of cookie consents on 300 news websites \cite{cookieConsent}. They used manually extracted features, such as the text in banners, the location of the pop-up, the number of clicks to reject all consent, the purpose of the included cookies, and the existence of any third-party cookies. The automatic detection targeted categorizing 5 types of dark patterns and non-dark patterns by adopting 10 features, where the 5 types of dark patterns were those defined by Gray et al. \cite{darkSideUx}. The experimental results with gradient-boosted tree classifiers showed accuracies of 0.50 (obstruction) to 0.72 (nagging). The weaknesses of this research are that 1) it targets only cookie banners, 2) its features were extracted manually, and 3) the obtained accuracies were low.
\section{E-commerce Dark Pattern Dataset}\label{sec:ecommerceDarkPatternDataset}
The purpose of the proposed dataset is to enable a wide range of research on automatic dark pattern detection on e-commerce sites. Although we could have prepared the various features that Soe et al. \cite{autoCookie} adopted, we only prepare texts that can be automatically extracted from web pages, because we target automatic detection without manually extracted features. Our dataset is inspired by Mathur et al.’s work in 2019 \cite{11kScale}, which consisted of 1,818 dark pattern texts from shopping sites. We added non-dark pattern texts to Mathur et al.’s dataset to prepare a dataset with a balanced number of dark and non-dark pattern texts.
\subsection{Dark Pattern Texts in E-commerce Sites}\label{sbsec:dpTextInEcomSites}
We modified the dataset of dark patterns manually constructed by Mathur et al. \cite{11kScale}, which contains 1,818 dark pattern texts from 1,254 shopping sites, approximately 11.1\% of the 11,000 shopping sites they studied. The original dataset consists of the manually tagged features listed in Table \ref{tab:feature:mathur}. As our goal is to prepare a dark pattern text dataset, we excluded text data that were missing (i.e., null in the “Pattern String” field in Table \ref{tab:feature:mathur}) or duplicated, and tagged the remaining texts as dark patterns, i.e., positive examples. Finally, we obtained 1,178 text data.
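The exclusion step just described can be sketched in Python. The in-memory record format and field access are assumptions for illustration; only the “Pattern String” field name follows Table \ref{tab:feature:mathur}.

```python
# Sketch of the exclusion step: drop records whose "Pattern String" is
# missing (null) or a duplicate, and tag the rest as positive examples.
def extract_dark_pattern_texts(records):
    """Return unique, non-null "Pattern String" values, tagged positive."""
    seen = set()
    texts = []
    for rec in records:
        text = rec.get("Pattern String")
        if text is None:        # skip missing entries
            continue
        if text in seen:        # skip duplicate entries
            continue
        seen.add(text)
        texts.append({"text": text, "label": 1})  # 1 = dark pattern
    return texts

records = [
    {"Pattern String": "Hurry Up! Only 1 Piece Left"},
    {"Pattern String": None},                           # missing -> dropped
    {"Pattern String": "Hurry Up! Only 1 Piece Left"},  # duplicate -> dropped
    {"Pattern String": "In Stock only 3 left"},
]
positives = extract_dark_pattern_texts(records)
```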
\begin{table}[b]
\centering
\caption{Mathur et al.’s Dataset Features \cite{11kScale}}
\def\arraystretch{1.5}
\begin{tabularx}{250px}{l|X} \hline
Feature Name & Description \\ \hline
Pattern String & Text of dark pattern \\
Comment & Comments from researchers \\
Pattern Category & Category of dark patterns \\
Pattern Type & Detailed type of dark patterns (e.g., trick question, hard to cancel) \\
Where on the website? & Where on the website is the dark pattern present \\
Deceptive? & Whether the dark pattern instance is deceptive or not \\
Website Page & Page's URL that has dark patterns \\ \hline
\end{tabularx}
\label{tab:feature:mathur}
\end{table}
\subsection{Non-Dark Pattern Texts in E-commerce Sites}\label{sbsec:nonDpTextInEcomSites}
The collection of negative samples, i.e., non-dark pattern texts, is described in this section. The following three steps were conducted to create non-dark pattern texts:
\begin{enumerate}
\item Collect the web pages on e-commerce sites.
\item Extract texts from the collected web pages to segment.
\item Exclude dark patterns from the segmented texts.
\end{enumerate}
\subsubsection{Collecting web pages}
The negative samples were retrieved using headless Chrome from the same websites, i.e., the e-commerce sites containing the dark patterns prepared in Section \ref{sbsec:dpTextInEcomSites}. If a website was unreachable (e.g., “not found” or “access denied”), we ignored it. The content was gathered after executing JavaScript on each web page, because most websites use JavaScript to render their pages.
\begin{algorithm*}[t]
\newcommand{\textbf{continue;}}{\textbf{continue;}}
\newcommand{\textbf{end}}{\textbf{end}}
\caption{Segmentation Algorithm (modified from Algorithm 1 of Mathur et al. \cite{11kScale})}
\label{alg:segmentation}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\LinesNumbered
\Input{$element$: HTML Element \\ $ignoreElements$: Tag names ignored in segmentation ['script','style','noscript','br','hr'] \\
$blockElements$: Tag names('p','div',...) of all Block-level Elements \\
$inlineElements$: Tag names('button','span',...) of all Inline Elements \\
}
\Output{text list of segmented html $texts$ \\}
\textbf{function segmentElement(element):} \\
\Begin{
\If{$element$ is null}
{\KwRet {[empty list]}}
\If{All child nodes of $element$ are not included in $ignoreElements$}
{
\If{All child nodes of $element$ are TEXT\_NODE or included in $inlineElements$}
{
$text \leftarrow$ text content of $element$ \;
\If{$text$ is not null}
{
\KwRet{[$text$]}
}
}
}
$childNodes \leftarrow$ child nodes of $element$ \;
$texts \leftarrow$ [empty list] \;
\For {$i = 0$ to $|childNodes| - 1$}{
$child \leftarrow childNodes[i]$\;
$nodeType \leftarrow $ node type of $child$\;
\If{$nodeType =$ TEXT\_NODE}
{
$textContent \leftarrow$ text content of $child$ \;
\If{$textContent \ne$ null}{
Append $textContent$ to $texts$ \;
}
}
$tagName \leftarrow $ tag name of $child$ \;
\If{($tagName = $ null) OR ($tagName \in ignoreElements$)}{
\textbf{continue;}
}
\If{$tagName \in blockElements$}
{
$childTexts \leftarrow segmentElement(child)$ \;
Concatenate $texts$ with $childTexts$ \;
}
\If{$tagName \in inlineElements$}{
$textContent \leftarrow$ text content of $child$ \;
\If{$textContent \ne$ null}{
Append $textContent$ to $texts$
}
}
}
\KwRet $texts$
}
\textbf{end}
\end{algorithm*}
\subsubsection{Extracting texts}
After collecting the web pages, we adopted the Puppeteer\footnote{\url{https://github.com/puppeteer/puppeteer}} library to scrape each page, collecting its screenshot and text. Mathur et al. targeted the text inside UI components, not the whole web page, as the unit of extraction for dark patterns. Moreover, Mathur et al. \cite{11kScale} restrict the target HTML elements to 1) UI elements that occupy more than 30\% of the page and 2) specific block elements, to obtain dark pattern candidates efficiently. On the contrary, our goal is to extract non-dark pattern texts widely from the page, so we target all block elements when extracting text. Specifically, we target the block elements containing at least one TextNode as a child element but not containing block-level elements\footnote{\url{https://developer.mozilla.org/en-US/docs/Web/HTML/Block-level\_elements}}. Because the only difference from the work of Mathur et al. is the set of target elements, the segmented texts take the same form as theirs. The detailed segmentation algorithm is given as Algorithm \ref{alg:segmentation}, which is modified from Mathur et al.'s algorithm. We applied Algorithm \ref{alg:segmentation} to the body element of each web page.
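For illustration, Algorithm \ref{alg:segmentation} can be transcribed into runnable Python over a minimal DOM stand-in. The `Node` class and the abbreviated tag lists are assumptions made for this sketch; the actual implementation walks the real browser DOM via Puppeteer.

```python
# Minimal DOM stand-in; a real implementation walks browser DOM nodes.
class Node:
    def __init__(self, tag=None, text=None, children=None):
        self.tag = tag          # None marks a TEXT_NODE
        self.text = text
        self.children = children or []

IGNORE = {"script", "style", "noscript", "br", "hr"}
BLOCK = {"p", "div", "body"}          # abbreviated block-level tag list
INLINE = {"button", "span", "a"}      # abbreviated inline tag list

def text_content(node):
    """Concatenated text of a node, like DOM textContent."""
    if node.tag is None:
        return node.text
    return "".join(text_content(c) for c in node.children)

def segment_element(element):
    """Python transcription of the segmentation pseudocode."""
    if element is None:
        return []
    kids = element.children
    # Early return: element whose children are all text or inline nodes.
    if all(c.tag not in IGNORE for c in kids):
        if all(c.tag is None or c.tag in INLINE for c in kids):
            text = text_content(element)
            if text:
                return [text]
    texts = []
    for child in kids:
        if child.tag is None:             # TEXT_NODE
            if child.text:
                texts.append(child.text)
            continue
        if child.tag in IGNORE:
            continue
        if child.tag in BLOCK:
            texts += segment_element(child)   # recurse into block elements
        elif child.tag in INLINE:
            text = text_content(child)
            if text:
                texts.append(text)
    return texts

doc = Node("body", children=[
    Node("div", children=[
        Node("p", children=[
            Node(text="This is in p (Block-level) tag. "),
            Node("span", children=[
                Node(text="This is in span (Inline-level) tag. ")]),
        ]),
    ]),
    Node("script", children=[Node(text='console.log("ignored")')]),
])
segments = segment_element(doc)
```

The example reproduces the third and fourth rows of Table \ref{tab:segment:result}: the inline span is merged into its parent block's text, and the script element is ignored.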
\subsubsection{Excluding dark patterns}
The texts collected in the previous step may contain dark patterns, so we filtered out any collected text matching a dark pattern text collected by Mathur et al. Then, we manually confirmed that the remaining texts were non-dark patterns. Note that we ignored numerical values, capitalization, and punctuation in the texts when applying the filtering.
Through the above process, 14,208 non-dark pattern texts were collected. From these, 1,178 texts were extracted randomly and used as negative samples of the dataset.
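The matching that ignores numerical values, capitalization, and punctuation can be sketched as a normalization function; the exact normalization rules below are assumptions for illustration.

```python
import re

def normalize(text):
    """Canonical form for matching: drop digits and punctuation, lowercase."""
    text = re.sub(r"\d+", "", text)        # ignore numerical values
    text = re.sub(r"[^\w\s]", "", text)    # ignore punctuation
    return " ".join(text.lower().split())  # ignore case and extra spaces

def exclude_dark_patterns(candidates, dark_pattern_texts):
    """Keep only candidate texts matching no known dark pattern text."""
    dark = {normalize(t) for t in dark_pattern_texts}
    return [t for t in candidates if normalize(t) not in dark]

dark = ["9 people are viewing this."]
candidates = ["24 People are viewing this!", "International Shipping Policy"]
negatives = exclude_dark_patterns(candidates, dark)
```

Under this normalization, "24 People are viewing this!" matches the known dark pattern "9 people are viewing this." and is excluded, mirroring the comparison in Table \ref{tab:dp:cmp}.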
\subsection{Validation of the Segmentation Algorithm}
To validate our implementation of Algorithm \ref{alg:segmentation}, we segmented the same web pages included in Mathur et al.'s dataset into texts. Then, we manually checked whether the segmented texts contained the same units of dark pattern texts. Table \ref{tab:dp:cmp} compares the dark patterns collected by Mathur et al. with examples of the texts obtained by Algorithm \ref{alg:segmentation}. As shown in Table \ref{tab:dp:cmp}, we confirmed that the texts are the same as Mathur et al.'s dark pattern texts except for numerical values and punctuation.
\begin{table*}[t]
\centering
\caption{Comparison of Text Extraction Units}
\def\arraystretch{1.5}
\begin{tabularx}{\textwidth}{X|X|X} \hline
Websites URL & Dark patterns texts by Mathur et al. & Texts by our segmentation algorithm \\ \hline
\url{anuradhaartjewellery.com} & "Hurry Up! Only 1 Piece Left" & "Hurry Up! Only 1 Piece Left" \\
\url{annthegran.com} & "No thanks, I don't like free stuff" & "No thanks, I don't like free stuff" \\
\url{savethebeesproject.com} & "9 people are viewing this." & "24 People are viewing this!" \\ \hline
\end{tabularx}
\label{tab:dp:cmp}
\end{table*}
For illustrative purposes, we show several examples of the segmentation results in Table \ref{tab:segment:result}.
\begin{table*}[t]
\centering
\caption{Examples of HTML Segmentation by Algorithm \ref{alg:segmentation}}
\def\arraystretch{1.5}
\begin{tabular}{l|l} \hline
$element$(Input) & $texts$(Output) \\ \hline
\begin{lstlisting}
<body>
<p>This is in p (Block-level) tag. </p>
</body>
\end{lstlisting} & [
'This is in p (Block-level) tag. '
] \\ \hline
\begin{lstlisting}
<body>
<div>
<p>This is in p (Block-level) tag. </p>
<p>This is in p (Block-level) tag. </p>
</div>
</body>
\end{lstlisting} & [ 'This is in p (Block-level) tag. ', 'This is in p (Block-level) tag. ' ]
\\ \hline
\begin{lstlisting}
<body>
<div>
<p>This is in p (Block-level) tag.
<span>This is in span (Inline-level) tag. </span>
</p>
</div>
</body>
\end{lstlisting} & [
'This is in p (Block-level) tag. This is in span (Inline-level) tag. '
] \\ \hline
\begin{lstlisting}
<body>
<p>This is in p (Block-level) tag. </p>
<script>console.log("script will be ignored")</script>
</body>
\end{lstlisting} & [
'This is in p (Block-level) tag. '
] \\ \hline
\end{tabular}
\label{tab:segment:result}
\end{table*}
\subsection{Summary}
The final dataset consists of 1,178 positive (dark pattern) and 1,178 negative (non-dark pattern) texts, totaling 2,356 texts from e-commerce websites.
An example of the data is shown in Table \ref{tab:dp:example}. In this example, the dark pattern taxonomy of the first positive example "3,081 people have viewed this item" is Social Proof, which misleads users into purchasing by displaying information as if other users have purchased the product. It exploits the bandwagon effect \cite{11kScale}. We published the dataset on GitHub\footnote{\url{https://github.com/yamanalab/ec-darkpattern}} with the code used to collect non-dark pattern texts.
\begin{table*}[t]
\centering
\caption{Examples of Positive and Negative Texts}
\def\arraystretch{1.5}
\begin{tabularx}{\textwidth}{X|X} \hline
Positive (dark pattern) & Negative (non-dark pattern) \\ \hline
"3,081 people have viewed this item" & "Unique and personalized products to go as yourself." \\
"In Stock only 3 left" & "newsletter signup (privacy policy)." \\
"No thanks. I don't like free things..." & "International Shipping Policy" \\
"894 Claimed! Hurry, only a few left!" & "Clothes, Shoes \& Accessories" \\
"Your order is reserved for 08:48 minutes!" & "READY FOR YOUR NEXT AUTHENTIC NHL JERSEY?" \\
\hline
\end{tabularx}
\label{tab:dp:example}
\end{table*}
\section{Baseline Methods for Automatic Detection}\label{sec:baselineMethodsForAutomaticDetection}
\subsection{Selection of Baseline Methods}
In recent years, many machine learning-based text classification methods have been proposed. In the security and privacy field, fake news and phishing detection are representative text classification tasks. Various machine learning-based methods have been proposed for text-based fake news and phishing detection, including classical machine learning and deep learning-based methods.
In natural language processing, deep learning-based models have surpassed the accuracy of classical machine learning models in various text classification tasks. Among deep learning-based models, models adopting transformer-based pre-trained language models like BERT \cite{bert} have attracted the most attention and are the current state-of-the-art.
Thus, we chose the following two types of machine learning methods as the baseline methods:
\begin{itemize}
\item Classical NLP methods: logistic regression, SVM, random forest, and gradient boosting (LightGBM \cite{lgbm}).
\item Transformer-based pre-trained language models: BERT \cite{bert}, RoBERTa \cite{roberta}, ALBERT \cite{albert}, and XLNet \cite{xlnet}.
\end{itemize}
\subsection{Metrics}
Automatic detection of dark patterns is a binary classification task: predicting whether a given text is a dark pattern or a non-dark pattern. We adopted accuracy, precision, recall, F1-score, and AUC to evaluate the models over 5-fold cross-validation.
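For reference, the threshold-based metrics can be computed from a confusion matrix as in the pure-Python sketch below; AUC is omitted here because it needs ranked scores rather than hard 0/1 predictions.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from hard 0/1 predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# 1 = dark pattern, 0 = non-dark pattern (toy labels, not real results)
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```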
\subsection{Experimental Evaluations using Classical NLP Methods}
We adopted the following two-step procedure for classical NLP methods:
\begin{enumerate}[label=(\roman*)]
\item Extract features from the texts using bag-of-words (BoW) and feed them to each classifier.
\item Train each dark pattern classifier.
\end{enumerate}
We used scikit-learn\footnote{https://github.com/scikit-learn/scikit-learn} to implement the logistic regression, SVM, and random forest classifiers. For gradient boosting (LightGBM), we used lightgbm\footnote{https://github.com/microsoft/LightGBM}. We tuned hyper-parameters by using Optuna \cite{optuna}. Table \ref{tab:classicalNlp:hyper} in the Appendix shows the best hyper-parameters for the classical NLP methods we adopted.
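Step (i) can be illustrated with a minimal bag-of-words vectorizer. In the experiments we used scikit-learn's implementation; the pure-Python version below is only a sketch of the idea, with toy example texts.

```python
def build_vocab(texts):
    """Map each distinct lowercase token to a feature index."""
    vocab = {}
    for text in texts:
        for token in text.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def bow_vector(text, vocab):
    """Token-count feature vector for one text."""
    vec = [0] * len(vocab)
    for token in text.lower().split():
        if token in vocab:            # unseen tokens are dropped
            vec[vocab[token]] += 1
    return vec

train = ["hurry only 3 left", "only a few left", "free shipping policy"]
vocab = build_vocab(train)
x = bow_vector("hurry hurry only 3 left", vocab)
```

The resulting count vectors are what the logistic regression, SVM, random forest, and gradient boosting classifiers are trained on.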
The experimental evaluation results are listed in Table \ref{tab:classicalNlp:result}, showing accuracies of 0.954 to 0.962.
\begin{table}[b]
\centering
\caption{Experimental Result of Classical NLP Methods}
\def\arraystretch{1.5}
\begin{tabular}{cccccc} \hline
Model & Accuracy & AUC &F1 score & Precision & Recall \\ \hline
Logistic Regression & 0.961 & 0.989 & 0.960 & 0.981 & 0.940 \\
SVM & 0.954 & 0.987 & 0.952 & 0.986 & 0.922 \\
Random Forest & 0.958 & 0.989 & 0.957 & 0.984 & 0.932 \\
Gradient Boosting & 0.962 & 0.989 & 0.961 & 0.976 & 0.947 \\ \hline
\end{tabular}
\label{tab:classicalNlp:result}
\end{table}
\subsection{Experimental Evaluations using Transformer-based Pre-trained Language Models }
BERT is a model with multiple layers of transformer encoders. BERT acquires knowledge about languages and domains through pre-training with a masked language model (MLM) and next sentence prediction (NSP) using a large-scale text corpus.
When applying BERT to text classification, it is common to perform transfer learning and fine-tuning on a task-specific dataset based on BERT that has been pre-trained on the language corpus targeted by the task. An overview of the BERT-based model for automatic dark pattern detection used in this study is shown in Fig. \ref{bertfig}. "[CLS]" and "[SEP]" are special tokens. "[CLS]" is used for classification as sentence representation. "[SEP]" is used for separating segments. The model was trained so that the classification result is output from the classification layer that linearly transforms the "[CLS]" representation output by BERT.
\begin{figure}[t]
\includegraphics[width=\linewidth]{bert.png}
\caption{Overview of Transformer-based Pre-trained Language Models}
\label{bertfig}
\end{figure}
We used the transformers library\footnote{https://huggingface.co/transformers/} to develop and train the BERT-based models. We used the AdamW \cite{adamw} optimizer and a linear learning-rate scheduler. When training the deep learning models, we tuned hyper-parameters by grid search. Table \ref{tab:dlhyper:result} in the Appendix lists the best hyper-parameters for the transformer-based pre-trained language models we adopted.
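The classification layer in Fig. \ref{bertfig} is a linear map over the encoder's "[CLS]" representation followed by softmax. The numpy sketch below illustrates only this head; the hidden size matches BERT-base, but the random vectors and weights are illustrative stand-ins, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 768                       # hidden size of BERT-base

# Stand-in for the [CLS] representation produced by the encoder.
cls_repr = rng.standard_normal(hidden)

# Classification layer: hidden -> 2 logits (non-dark / dark pattern).
W = rng.standard_normal((2, hidden)) * 0.02
b = np.zeros(2)

def classify(cls_vec):
    logits = W @ cls_vec + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()             # [p(non-dark), p(dark)]

probs = classify(cls_repr)
```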
The experimental results of transformer-based pre-trained language models are listed in Table \ref{tab:dl:result}, which shows the best accuracy of 0.975 using $\text{RoBERTa}_{large}$.
\begin{table}[b]
\centering
\caption{Experimental Results of Transformer-based Pre-trained Language Models}
\def\arraystretch{1.5}
\begin{tabular}{cccccc} \hline
Model & Accuracy & AUC &F1 score & Precision & Recall \\ \hline
$\text{BERT}_{base}$ & 0.972 & 0.993 & 0.971 & 0.982 & 0.961 \\
$\text{BERT}_{large}$ & 0.965 & 0.993 & 0.965 & 0.973 & 0.957 \\
$\text{RoBERTa}_{base}$ & 0.966 & 0.993 & 0.966 & 0.979 & 0.954 \\
$\text{RoBERTa}_{large}$ & $\mathbf{0.975}$ & $\mathbf{0.995}$ & $\mathbf{0.975}$ & $\mathbf{0.984}$ & $\mathbf{0.967}$ \\
$\text{ALBERT}_{base}$ & 0.959 & 0.991 & 0.959 & 0.972 & 0.946 \\
$\text{ALBERT}_{large}$ & 0.965 & 0.986 & 0.965 & 0.973 & 0.957 \\
$\text{XLNet}_{base}$ & 0.966 & 0.992 & 0.966 & 0.975 & 0.958 \\
$\text{XLNet}_{large}$ & 0.942 & 0.988 & 0.940 & 0.968 & 0.914 \\ \hline
\end{tabular}
\label{tab:dl:result}
\end{table}
\section{Conclusion}\label{sec:conclusion}
This study constructed a dark pattern dataset for e-commerce sites with baseline automatic detection performance. We hope the dataset will help researchers to pursue various research on the automatic detection of dark patterns. As for the baseline performance, we experimented with automatic dark pattern detection using a set of machine learning methods, including classical NLP-based models and transformer-based pre-trained language models. The results show that the RoBERTa-large model achieved a maximum accuracy of 0.975.
Our future work will include 1) enhancing the dataset to include other websites, not just e-commerce sites, and 2) expanding the dataset to include other UX-related features besides texts, such as images and scripts because UX-related features are also helpful in detecting dark patterns.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
In the recent work~\cite{kh-vwtriang}, we have shown how, after a
separation of variables, the radial mode equations of the vector wave
equation $\square v_\mu = 0$ on the Schwarzschild black hole spacetime
may be significantly simplified by systematically decoupling them into an
upper triangular form, where the diagonal components are generalized
Regge-Wheeler operators and only a few of the off-diagonal components
are non-vanishing. The original radial mode equations constitute a
$4\times 4$ second order linear ODE, whose components are coupled in a
rather complicated way. The Regge-Wheeler operators appearing on the
diagonal of the upper triangular form are second order scalar
differential operators, with a well-studied spectral theory. This
simplification makes it possible to transfer that knowledge to the study
of the spectral theory of the original radial mode equations, which was
otherwise rather unapproachable.
What is remarkable is that the original equations, the simplified upper
triangular form, as well as the decoupling transformation are all
ordinary differential operators with rational coefficients (at least in
the standard Schwarzschild radial coordinate $r$). The existence of such
a simplification, in particular the ability to set to zero most of
the off-diagonal terms in the upper triangular form, follows from
specific identities previously discovered by trial and error
in~\cite{berndtson}, and in part also in~\cite{rosa-dolan}
(see~\cite{kh-vwtriang} for a full discussion), and is certainly not
obvious \emph{a priori}. This naturally raises the questions of how
these identities could be recovered in a systematic approach, and
whether more could be discovered to push the simplifications described
above as far as possible. These questions become particularly relevant
for trying to repeat the same simplifications for the Lichnerowicz
equation $\square p_{\mu\nu} - 2 R_{(\mu}{}^{\lambda\kappa}{}_{\nu)}
p_{\lambda\kappa} = 0$, which has a role relative to linearized Einstein
equations analogous to that of the vector wave equation relative to
Maxwell equations. The relevant identities were also discovered
in~\cite{berndtson}, but only by means of voluminous trial and error
calculations and without a clear answer to whether they could be further
improved. We will revisit this point when considering examples in
Section~\ref{sec:examples}.
The above questions were left open in~\cite{kh-vwtriang} and are
answered in this work. The main systematic tool at our disposal is the
theory of rational solutions of ordinary differential equations (ODEs)
with rational coefficients. Under appropriate mild hypotheses on the
equation, the search for such solutions can be reduced to a finite
dimensional linear algebra problem (Theorem~\ref{thm:univ-mult}). We
summarize this theory in Section~\ref{sec:rat-sols}. The theory of
rational solutions of scalar rational ODEs is fairly well developed
(cf.\ the monograph~\cite{abramov} and the references therein; more
precise references are given in the text). Our innovation is to
synthesize this approach into an economical form, based on what we call
\emph{leading} (or \emph{trailing}) \emph{multipliers}, that is directly
applicable to our examples of interest, but also more generally to ODE
systems of arbitrary size and order. In Section~\ref{sec:offdiag}, we
consider the problem of setting to zero an off-diagonal block in an
upper triangular rational ODE system by a transformation with rational
coefficients. This problem is first reduced to an operator identity
(Equation~\eqref{eq:offdiag}), which in turn can be solved by converting
it into an inhomogeneous rational ODE system
(Theorem~\ref{thm:reduce-order}). Finally, in Section~\ref{sec:rw}, we
combine the results of Sections~\ref{sec:rat-sols} and~\ref{sec:offdiag}
to show how the special identities used in~\cite{kh-vwtriang}
and~\cite{berndtson} can be recovered with minimal effort, especially
when aided by computer algebra. In particular, we can conclusively
decide when simplifications of the kind described earlier do or do not
exist, with several examples given in Section~\ref{sec:examples}.
Section~\ref{sec:discuss} concludes with a discussion of the results and
an outlook to further work.
\section{Rational solutions of ODEs with rational coefficients}
\label{sec:rat-sols}
The main objects under our study will be ordinary differential operators
and equations (ODEs) with rational coefficients. We will usually denote the
independent variable by $r$. A differential operator $e$ applied to a
function $u = u(r)$ will be denoted by $e[u]$. Both $u$ and $e[u]$ could
be vector valued. We do not put any \emph{a priori} bounds on the dimensions or
differential order of $e$. Hence, $e$ can also be seen as a matrix of
scalar differential operators. In particular, when $e$ is of order zero, $e[u]$
will correspond to multiplication of $u$ by an $r$-dependent matrix. We
will denote the composition of differential operators by $\circ$, so
that $e\circ f[u] = e[f[u]]$. We will restrict our attention to
differential operators with rational (and, in general, complex-valued)
coefficients.
In this section, we will eventually show how to find all the rational
solutions $u = u(r)$ of a rational ODE $e[u] = v$, by reducing it to a
finite dimensional linear algebra problem. Our approach starts with a
Laurent series representation $u = \sum_n u_n r^n$ and converts the equation
$e[u] = v$ into a recurrence relation on the coefficients of $u$. At
different stages, it would be useful to consider Laurent series of
different kinds. In particular, we will mostly deal with formal series
(no requirement of convergence). However, convergence will be automatic
if we know in advance that the series has only finitely many terms or
that it comes from the expansion of a rational function. Thus, we may
distinguish \emph{unbounded} Laurent series $\mathbb{C}[[r,r^{-1}]]$,
\emph{bounded (from below)} Laurent series $\mathbb{C}[r^{-1}][[r]]$, \emph{bounded from
above} Laurent series $\mathbb{C}[r][[r^{-1}]]$ and \emph{Laurent polynomials}
$\mathbb{C}[r,r^{-1}]$. Of course, we could also consider Laurent series
centered at some other $r = \rho \ne 0$, but for convenience of notation
whenever possible we will stick with $\rho=0$.
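These series rings are easy to experiment with in a computer algebra system. The following sympy sketch (the sample function $1/(r^2(1-r))$ is our own illustration, not one from the text) expands a single rational function both as a bounded (from below) series at $r=0$ and as a bounded from above series at $r=\infty$:

```python
import sympy as sp

r = sp.symbols('r')
u = 1 / (r**2 * (1 - r))  # rational, with poles at r = 0 and r = 1

# Bounded (from below) Laurent expansion at r = 0, an element of C[r^-1][[r]]:
# u = r^-2 + r^-1 + 1 + r + r^2 + ...
at_zero = sp.expand(sp.series(u, r, 0, 3).removeO())
print(at_zero)

# Bounded from above expansion at r = oo, an element of C[r][[r^-1]]:
# u = -r^-3 - r^-4 - r^-5 - ...
at_inf = sp.expand(sp.series(u, r, sp.oo, 6).removeO())
print(at_inf)
```

Since $u$ is rational, both expansions actually converge in suitable annuli, but for the algebra in this section only their formal coefficient sequences matter.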
For bounded (from below) Laurent series, it is helpful to define
leading or trailing orders and coefficients. For $u = \sum_n u_n r^n \in
\mathbb{C}[r^{-1}][[r]]$, if we can write
\begin{equation}
u = \begin{bmatrix}
u^1_{n_1} r^{n_1} (1 + O(r)) \\
u^2_{n_2} r^{n_2} (1 + O(r)) \\
\vdots
\end{bmatrix} ,
\end{equation}
with each $O(r) \in r\mathbb{C}[[r]]$, the $n_1,n_2,\ldots$ are the
\emph{leading orders} of the components $u^1,u^2,\ldots$ of $u$, with
the exception when $u^i_{n_i} = 0$, in which case we set $n_i = +\infty$.
We denote by $|\check{u}|$ the vector where each component of $u$ is
replaced by its leading order, and we refer to it as the \emph{leading
order} of $u$. When $n = \min_i n_i < \infty$, $u_n \ne 0$ and we call it
the \emph{leading coefficient} of $u$. We define the leading coefficient
of $0$ to be $0$.
Similarly, for bounded from above Laurent series $u \in \mathbb{C}[r][[r^{-1}]]$,
if we can write
\begin{equation}
u = \begin{bmatrix}
u^1_{n_1} r^{n_1} (1 + O(r^{-1})) \\
u^2_{n_2} r^{n_2} (1 + O(r^{-1})) \\
\vdots
\end{bmatrix} ,
\end{equation}
with each $O(r^{-1}) \in r^{-1} \mathbb{C}[[r^{-1}]]$, the $n_1,n_2,\ldots$ are
the \emph{trailing orders} of the components $u^1,u^2,\ldots$ of $u$,
with the exception when $u^i_{n_i} = 0$, in which case we set $n_i =
-\infty$. The \emph{trailing order} $|\hat{u}|$ of $u$ is the vector of the
trailing orders of the components of $u$. When $n = \max_i n_i > -\infty$,
$u_n \ne 0$ and we call it the \emph{trailing coefficient} of $u$. We
define the trailing coefficient of $0$ to be $0$.
Clearly the leading (trailing) coefficient of a bounded (from above)
Laurent series vanishes if and only if the whole series vanishes. For
Laurent polynomials $u \in \mathbb{C}[r,r^{-1}]$, both the leading and trailing
orders, and coefficients, are well-defined.
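For Laurent polynomials, the leading order of each component can be extracted mechanically; the following is a minimal sympy sketch (the helper function and the two-component example are our own, not from the text):

```python
import sympy as sp

r = sp.symbols('r')

def leading_order(expr):
    """Smallest r-exponent among the monomials of a Laurent polynomial,
    with the convention that the zero series has leading order +oo."""
    expr = sp.expand(expr)
    if expr == 0:
        return sp.oo
    return min(t.as_coeff_exponent(r)[1] for t in expr.as_ordered_terms())

# A two-component example u with leading-order vector |u_check| = (-2, 1):
u = [3/r**2 + 1 + r, r - 5*r**3]
print([leading_order(c) for c in u])  # [-2, 1]
```

The trailing order at $r=\infty$ is obtained the same way, with `max` in place of `min` and $-\infty$ for the zero series.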
Consider an ODE $e[u] = 0$ on bounded Laurent series $u = \sum_n u_n r^n
\in \mathbb{C}[r^{-1}][[r]]$. What we would like to do is turn $e[u] = 0$ into a
linear recurrence relation on the coefficients $u_n$ of the form $E_n
u_n = f_n(u_{n-1},u_{n-2},\ldots)$ and then uniquely solve for $u_n$ as
a function of $u_{n-1}$ and lower order coefficients, for almost all $n$
(that is, all but finitely many). Those finitely many $n$ for which the
solution for $u_n$ would not be unique would then determine the
dimension of the solution space of the ODE. This approach requires that
the coefficients $E_n$ in the recurrence relation be invertible for
almost all $n$. For scalar equations this is an almost trivial
requirement, but in matrix equations different components of $e$ may be
weighted so differently by powers of $r$ that the coefficient $E_n$
comes out as a singular matrix for infinitely many $n$. Often this
problem can be remedied by applying suitable transformations to $u$ and
to $e[u]$.
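As a minimal scalar illustration of this conversion (a toy operator of our own, not one from the text), $e[u] = r\,\partial_r u - u$ acts on a monomial $u_n r^n$ as multiplication by $E_n = n - 1$, so $n = 1$ is its only integer characteristic exponent and the recurrence $E_n u_n = 0$ is uniquely solvable for every other $n$:

```python
import sympy as sp

r, n = sp.symbols('r n')
u_n = sp.Symbol('u_n')

# Toy scalar operator e[u] = r u' - u, applied to a single monomial u_n r^n.
monomial = u_n * r**n
e_monomial = r * sp.diff(monomial, r) - monomial

# The result is E_n u_n r^n with an r-independent coefficient E_n = n - 1,
# invertible for all integers n except the characteristic root n = 1.
E_n = sp.simplify(e_monomial / monomial)
assert sp.simplify(E_n - (n - 1)) == 0
print(E_n)
```

In the matrix case the same computation produces the characteristic matrix $E_n$, once suitable multipliers $S,T$ have been applied.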
Let $S = S(r)$ and $T = T(r)$ be matrices with Laurent polynomial
components. For future convenience, we also require that the inverses
$S^{-1}$ and $T^{-1}$ also have Laurent polynomial components (or,
equivalently, the determinants of $S$ and $T$ are proportional to single
powers of $r$). We say that $S$ and $T$ are respectively the
\emph{source} and \emph{target leading multipliers} of $e$ when, after
expanding all rational coefficients as bounded Laurent series, we have
\begin{equation} \label{eq:lead-mult}
e[S(r) u_n r^n] = T(r) (E_n u_n r^n + r^n O(r)) ,
\end{equation}
with the components of $O(r)$ all in $r\mathbb{C}[[r]]$ and $E_n$ an
$r$-independent matrix that is invertible for almost all $n$. We call
$E_n$ the \emph{leading characteristic matrix} of $e$ with respect to
the given multipliers. Similarly, we say that $S$ and $T$ are
respectively the \emph{source} and \emph{target trailing multipliers} of
$e$ when, after expanding all rational coefficients as bounded from
above Laurent series, we have
\begin{equation} \label{eq:trail-mult}
e[S(r) u_n r^n] = T(r) (E_n u_n r^n + r^n O(r^{-1})) ,
\end{equation}
with the components of $O(r^{-1})$ all in $r^{-1}\mathbb{C}[[r^{-1}]]$ and $E_n$
an $r$-independent matrix that is invertible for almost all $n$. We call
$E_n$ the \emph{trailing characteristic matrix} of $e$ with respect to
the given multipliers.
Those integers $n\in \mathbb{Z}$ such that $\det E_n = 0$ (note that
$\det E_n$ is a polynomial in $n$) are called (respectively \emph{leading} or
\emph{trailing}) \emph{(integer) characteristic roots} or
\emph{exponents} of $e$ with respect to given multipliers $S,T$. We
denote the set of such leading characteristic exponents by
$\check{\sigma}(e)$ and the set of such trailing characteristic
exponents by $\hat{\sigma}(e)$, with implicit dependence on the $S,T$
multipliers, of course.
We will not dwell on when leading or trailing multipliers exist, but
will just assume that they are given for any particular problem. Often
$S$ and $T$ may be taken to be diagonal, with appropriately chosen
powers of $r$ on the diagonal. Otherwise, they could be determined by a
recursive procedure similar to that used in the analysis of regular and
irregular singularities for ODEs with meromorphic
coefficients~\cite{wasow}.
Any rational $u \in \mathbb{C}(r)$ will have a (convergent) bounded Laurent
series expansion about any point $r=\rho$. Without loss of generality,
let us take $\rho = 0$. We would like to
prove some bounds on the leading order of $u$ at $r=0$ when it solves
$e[u] = v$, with some rational $v \in \mathbb{C}(r)$. For this purpose, it is
actually more natural to allow $u$ and $v$ to be bounded Laurent series.
\begin{lem} \label{lem:leading-bounds}
Let $e[u]=0$ be an ODE with rational coefficients, leading multipliers
$S,T$, and leading characteristic matrix $E_n$. Let $u,v \in
\mathbb{C}[r^{-1}][[r]]$ with leading orders $m = \min_i |\check{u}^i|$, $n =
\min_i |\check{v}^i|$ (the values $m=\infty$ or $n=\infty$ are both
permissible). If $e[Su] = Tv$, then either (a) $m = n$ and (provided
$n<\infty$) $v_n = E_n u_n$ or (b) $m$ is a leading characteristic exponent
of $e$, $E_m u_m = 0$, and $m < n$. In other words
\begin{equation} \label{eq:leading-bounds}
\min \; \{ n\} \cup \check{\sigma}(e) \le m .
\end{equation}
\end{lem}
This result and proof are analogous to those presented in~\textsection6
of~\cite{abramov}, where only the case of scalar equations and
polynomial coefficients is treated. The monograph~\cite{abramov}
cites~\cite{abramov89a,abramov89b,abp95} as the original sources for the
basic ingredients of the approach. A generalization of the approach to
systems of arbitrary size and order can be found in~\cite{abramov14}
(which also cites slightly earlier related work). Our innovation is to
synthesize this approach into an economical method, as presented in this
section, based on the convenient notion of leading (trailing)
multipliers $S,T$ and the way they lead to the leading (trailing)
characteristic matrix $E_n$.
\begin{proof}
If $m=\infty$, this means that $u=0$. Then also $v=0$ and $n=\infty$, meaning
that (a) holds. For the rest we will assume that $m<\infty$, meaning that
$u$ has the non-vanishing leading coefficient $u_m \ne 0$. If $E_m u_m
\ne 0$, then the defining property~\eqref{eq:lead-mult} of leading
multipliers $S$ and $T$ directly implies part (a), that is $n = m$. On
the other hand, if $E_m u_m = 0$ and since by definition $u_m \ne 0$,
the leading order of $u$ must be a characteristic exponent of $e$, $m
\in \check{\sigma}(e)$. Using again~\eqref{eq:lead-mult}, we also find
$m < n$.
Part (a) implies $n \le m$, while part (b) implies
$\min\check{\sigma}(e) \le m$ and $m<n$. Since at least one of (a) or
(b) always holds, the lower bound~\eqref{eq:leading-bounds} on $m$ is
always true.
\end{proof}
All the same arguments apply to Laurent expansions about $r=\infty$, though
after making use of the transformation $r \mapsto 1/r$. For convenience,
we state the corresponding result without the need to invoke this
transformation.
\begin{lem} \label{lem:trailing-bounds}
Let $e[u]=0$ be an ODE with rational coefficients, trailing multipliers
$S,T$, and trailing characteristic matrix $E_n$. Let $u,v \in
\mathbb{C}[r][[r^{-1}]]$ with trailing orders $m = \max_i |\hat{u}^i|$, $n =
\max_i |\hat{v}^i|$ (the values $m=-\infty$ or $n=-\infty$ are both
permissible). If $e[Su] = Tv$, then either (a) $m = n$ and (provided
$n>-\infty$) $v_n = E_n u_n$ or (b) $m$ is a trailing characteristic
exponent of $e$, $E_m u_m = 0$, and $m > n$. In other words
\begin{equation} \label{eq:trailing-bounds}
m \le \max \; \{ n\} \cup \hat{\sigma}(e) .
\end{equation}
\end{lem}
Now we know how to bound the order of the pole of a rational solution
$u$ at any particular value of $r=\rho \in \mathbb{C}$. For the
following class of ODEs, we can also identify all the potential
locations of the poles of $u$.
\begin{lem} \label{lem:pole-locations}
Let $e[u] = 0$ be an ODE of differential order $p$ with rational
coefficients, for which there exists an invertible matrix $P=P(r)$ with
rational coefficients such that $P e[u] = \frac{d^p}{dr^p} u +
\tilde{e}[u]$, where $\tilde{e}$ is of differential order at most $p-1$.
For rational $u,v \in \mathbb{C}(r)$, if $e[u] = v$, then $u(r)$ is smooth
(i.e.,\ has no pole) at all but finitely many points of $\mathbb{C}$. The only
possible exceptions are $r=\rho$, with $\rho$ one of the poles of $Pv$
or of the coefficients of $\tilde{e}$.
\end{lem}
\begin{proof}
By our hypotheses, we can put the equation $e[u] = v$ into the
equivalent form
\begin{equation} \label{eq:standard-ode}
\frac{d^p}{dr^p} u + \tilde{e}[u] = Pv ,
\end{equation}
where $Pv$ is rational and $\tilde{e}[u]$ has rational coefficients.
Consider a point $r=\rho \in \mathbb{C}$ that is not a pole of $Pv$ or of
the coefficients of $\tilde{e}[u]$. There are obviously only finitely
many such excluded points. If $u$ has a pole of type $(r-\rho)^{-k}$,
then $\frac{d^p}{dr^p} u$ has a pole of type $(r-\rho)^{-k-p}$, while
$Pv$ and $\tilde{e}[u]$ will only have poles of lower order. But this
means that such a $u$ cannot be a solution of our equation. Hence, any
rational solution $u \in \mathbb{C}(r)$ can have poles only in the already
mentioned excluded set.
\end{proof}
Given a rational ODE $e[u] = 0$, when considering Laurent expansions at
$r=\rho$, let us denote the corresponding leading multipliers by
$S_\rho, T_\rho$, which are by definition rational and have poles only
at $r=\rho$ (and $r=\infty$, of course). For a rational $u \in \mathbb{C}(r)$, if
we know that its poles are restricted to a finite set of points in $\mathbb{C}$
and we have a bound on the degree of the pole at each of these points,
then we can find a rational matrix $R=R(r)$ such that $Ru$ has no poles
in $\mathbb{C}$.
\begin{thm} \label{thm:univ-mult}
Let $e[u] = v$ be an ODE with rational coefficients and rational $v \in
\mathbb{C}(r)$, satisfying the hypotheses of Lemma~\ref{lem:pole-locations}.
Suppose also that we have the leading multipliers $S_\rho,T_\rho$ of $e$
at each of the finitely many exceptional points $r=\rho\in \mathbb{C}$
identified in Lemma~\ref{lem:pole-locations}. Then, there exists a
rational matrix $R=R(r)$ such that, for any rational $u\in \mathbb{C}(r)$
satisfying $e[u] = v$, there is a Laurent polynomial $\tilde{u} \in
\mathbb{C}[r,r^{-1}]$ satisfying $u = R\tilde{u}$.
\end{thm}
We call such a matrix $R$ a \emph{universal multiplier} for the rational
inhomogeneous ODE $e[u] = v$. A universal multiplier certainly need not
be unique. The existence of universal multipliers for scalar equations
is discussed in~\cite[\textsection7]{abramov}, which
cites~\cite{abramov89b,abp95} as original references. For systems of
arbitrary size and order, the existence of universal multipliers is
discussed for instance in~\cite{abramov14} (with references to slightly
earlier work).
\begin{proof}
A rational $u \in \mathbb{C}(r)$ has only finitely many poles, and at each of
those poles it has a bounded Laurent series expansion. By invoking
Lemma~\ref{lem:pole-locations} we can constrain the poles of $u$ to a
finite set of points. Then, by invoking Lemma~\ref{lem:leading-bounds},
for each of those points, say $r=\rho$, we can find a finite lower bound
$\check{n}_\rho$ for the leading Laurent order of $S^{-1}_\rho u$ at
$r=\rho$. Recall that one of the defining properties of $S_\rho$ is that
both it and $S^{-1}_\rho$ only have poles at $r=\rho$ (and of course at
$r=\infty$). This means that $\tilde{u} = \prod_\rho S^{-1}_\rho
(r-\rho)^{-\check{n}_\rho} u$, where the product is taken over the
potential pole locations (possibly excluding $\rho=0$), is still
rational but no longer has any poles in $r\in \mathbb{C}$, with the possible
exception of $r=0$. But that can only be if $\tilde{u} \in \mathbb{C}[r,r^{-1}]$
is a Laurent polynomial. Therefore, we can take
\begin{equation} \label{eq:univ-mult}
R(r) = \prod_{\rho} S_\rho(r) (r-\rho)^{\check{n}_\rho}
\end{equation}
as the desired universal multiplier. Since any of the $\check{n}_\rho$
can be decreased without breaking this result, we have many possible
choices for $R$.
\end{proof}
\begin{cor}
Let $e$ and $v$ be as in Theorem~\ref{thm:univ-mult}, with universal
multiplier $R$. In addition, suppose that we have the leading
multipliers $S_0,T_0$ at $r=0$ and the trailing multipliers
$S_\infty,T_\infty$ at $r=\infty$ for $\tilde{e} = e \circ R$. Then the equation
$e[u] = v$ for $u$ can be reduced to a finite dimensional linear system,
and hence its solution space is finite dimensional.
\end{cor}
\begin{proof}
By invoking Theorem~\ref{thm:univ-mult}, solving $e[u] = v$ for $u\in
\mathbb{C}(r)$ is equivalent to solving $\tilde{e}[\tilde{u}] = v$ for
$\tilde{u} \in \mathbb{C}[r,r^{-1}]$, with $u = R \tilde{u}$ and
$\tilde{e}[\tilde{u}] = e[Ru]$. Invoking Lemma~\ref{lem:leading-bounds}
we can find a finite lower bound on the leading order of $S^{-1}_0
\tilde{u}$ and hence of $\tilde{u}$, which we will call $\check{n}$.
Invoking Lemma~\ref{lem:trailing-bounds} we can find a finite upper
bound on the trailing order of $S^{-1}_\infty \tilde{u}$ and hence of
$\tilde{u}$, which we will call $\hat{n}$. Therefore, we can parametrize
all solutions as Laurent polynomials
\begin{equation} \label{eq:u-ansatz}
\tilde{u} = \sum_{n=\check{n}}^{\hat{n}} u_n r^n ,
\end{equation}
which has $\hat{n} - \check{n} + 1 < \infty$ undetermined coefficients.
Plugging this parametrization into the equation $\tilde{e}[\tilde{u}] =
v$, putting both sides over a common denominator, and comparing
coefficients reduces the original problem to a finite dimensional linear
system of equations. The dimension of the solution space of this system
is of course finite, and (crudely) bounded by the number of coefficients
in~\eqref{eq:u-ansatz}.
\end{proof}
Of course, once an equation has been reduced to an explicit finite
dimensional linear system, it can be solved on a computer, even
symbolically.
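To see the whole reduction at work on a toy scalar example of our own (not one from the text): the equation $r^2 u'' - 2u = 0$ has characteristic polynomial $n(n-1) - 2 = (n-2)(n+1)$, so its integer characteristic exponents are $n = -1, 2$ and every rational solution is a Laurent polynomial supported on exponents $-1,\ldots,2$. Comparing coefficients then yields an explicit finite linear system:

```python
import sympy as sp

r = sp.symbols('r')

# Ansatz: Laurent polynomial supported on exponents -1..2, as dictated by
# the characteristic roots n = -1 and n = 2 of r^2 u'' - 2 u = 0.
n_min, n_max = -1, 2
coeffs = [sp.Symbol(f'c{k}') for k in range(n_max - n_min + 1)]
u = sum(c * r**(n_min + k) for k, c in enumerate(coeffs))

residual = sp.expand(r**2 * sp.diff(u, r, 2) - 2 * u)

# Clear the pole and compare coefficients of powers of r: a finite linear
# system on c0..c3.
system = sp.Poly(sp.expand(residual * r), r).coeffs()
sol = sp.solve(system, coeffs, dict=True)[0]
print(sol)  # c1 = c2 = 0; the solution space is spanned by r^-1 and r^2
```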
\section{Reducing triangular ODE systems with rational coefficients}
\label{sec:offdiag}
In this section, we are interested in the following question. Given an
ODE system in block upper triangular form, is it possible to find an
equivalent ODE system where the off-diagonal block has been set to zero,
hence in diagonal form? If possible, we call this a \emph{reduction to
block diagonal form} and say that the original system can be
\emph{reduced}. A refined version of the question is whether a rational
ODE system can be reduced while remaining rational.
The first thing we need to clarify is the notion of equivalence. Roughly
speaking, two ODE systems should be equivalent when there is an
isomorphism between their solution spaces. A further practical
requirement is that this isomorphism be given, in either direction, by
differential operators. After all, transformations given by differential
operators tend to be easier to write down in terms of explicit formulas,
while also allowing rather precise control over some properties of the
coefficients of the ODE systems, like rationality or upper triangular
form. Also, an equivalence should make explicit the transformation of
one ODE system into the other one, again by a differential operator.
We formalize these ideas as follows. Given two ODE systems, $e[u] = 0$
and $\bar{e}[\bar{u}] = 0$, an equivalence between them consists of
pairs of differential operators $k,g$ and $\bar{k},\bar{g}$ obeying the
operator identities, for any $u,\bar{u}$ and $v,\bar{v}$,
\begin{align}
\bar{e}[k[u]] &= g[e[u]] ,
&
\bar{k}[k[u]] &= u ,
&
\bar{g}[g[v]] &= v ,
\\
e[\bar{k}[\bar{u}]] &= \bar{g}[\bar{e}[\bar{u}]] ,
&
k[\bar{k}[\bar{u}]] &= \bar{u} ,
&
g[\bar{g}[\bar{v}]] &= \bar{v} .
\end{align}
Graphically, if we represent each differential operator by an arrow and
appropriate function spaces by $\bullet$'s, these identities mean that
the squares in the following diagram are commutative and that the
horizontal arrows compose to identity in either direction:
\begin{equation}
\begin{tikzcd}[column sep=large,row sep=large]
\bullet \ar{d}{e} \ar[shift left]{r}{k} \&
\bullet \ar[swap]{d}{\bar e} \ar[shift left]{l}{\bar k}
\\
\bullet \ar[shift left]{r}{g} \&
\bullet \ar[shift left]{l}{\bar g}
\end{tikzcd} \, .
\end{equation}
Basically, these identities imply that for a solution $u$ of $e[u] = 0$,
$\bar{u} = k[u]$ is a solution of $\bar{e}[\bar{u}] = 0$, and vice
versa, where the barred and unbarred transformation operators are
mutually inverse. Finally, when dealing with ODE systems with rational
coefficients, we require the coefficients of the operators $k,g$ and
$\bar{k},\bar{g}$ to be rational as well.
The above notion of equivalence is actually somewhat more rigid than
absolutely necessary, but it will be sufficient for our purposes. A
discussion of a somewhat looser notion of equivalence can be found
in~\cite{kh-vwtriang}, with references to deeper literature on this
topic. Below, we use this notion of equivalence to discuss reduction of
triangular ODE systems. A similar discussion can already be found
in~\cite[Sec.2.3]{kh-vwtriang}.
An ODE system of the form
\begin{equation} \label{eq:triang-ode}
\begin{bmatrix}
e_0 & \Delta \\
0 & e_1
\end{bmatrix}
\begin{bmatrix}
u_0 \\ u_1
\end{bmatrix}
=
\begin{bmatrix}
0 \\ 0
\end{bmatrix}
\end{equation}
is said to be \emph{(block) upper triangular}, or \emph{(block)
diagonal} if $\Delta = 0$. We will always assume that this system has
rational coefficients. We presume also that the equations $e_0[u_0]=0$
and $e_1[u_1]=0$ are ODE systems of unspecified dimensions and
differential orders, but such that they can be solved for the highest
derivatives, as in the hypotheses of Lemma~\ref{lem:pole-locations}.
A reduction of the upper triangular ODE system~\eqref{eq:triang-ode} is
an equivalence given by the following pair of commutative diagrams
\begin{equation} \label{eq:triang-reduce}
\begin{tikzcd}[column sep=large,row sep=large]
\bullet
\ar[swap]{d}{\begin{bmatrix} e_0 & \Delta \\ 0 & e_1 \end{bmatrix}}
\ar{r}{\begin{bmatrix} \mathrm{id} & \delta \\ 0 & \mathrm{id} \end{bmatrix}}
\&
\bullet
\ar{d}{\begin{bmatrix} e_0 & 0 \\ 0 & e_1 \end{bmatrix}}
\\
\bullet
\ar[swap]{r}{\begin{bmatrix} \mathrm{id} & \varepsilon \\ 0 & \mathrm{id} \end{bmatrix}}
\&
\bullet
\end{tikzcd}
\, , \quad
\begin{tikzcd}[column sep=large,row sep=large]
\bullet
\ar[swap]{d}{\begin{bmatrix} e_0 & 0 \\ 0 & e_1 \end{bmatrix}}
\ar{r}{\begin{bmatrix} \mathrm{id} & -\delta \\ 0 & \mathrm{id} \end{bmatrix}}
\&
\bullet
\ar{d}{\begin{bmatrix} e_0 & \Delta \\ 0 & e_1 \end{bmatrix}}
\\
\bullet
\ar[swap]{r}{\begin{bmatrix} \mathrm{id} & -\varepsilon \\ 0 & \mathrm{id} \end{bmatrix}}
\&
\bullet
\end{tikzcd} \, ,
\end{equation}
where the corresponding horizontal arrows are clearly mutual inverses.
Of course, we require the differential operators $\delta$ and $\varepsilon$ to
have rational coefficients. By direct calculation, we can check that the
above diagrams commute if and only if $\delta$ and $\varepsilon$
satisfy the operator identity
\begin{equation} \label{eq:offdiag}
e_0 \circ \delta = \Delta + \varepsilon \circ e_1 .
\end{equation}
Note that solutions of~\eqref{eq:offdiag} are certainly not unique. For
instance, for any $\delta,\varepsilon$ solution pair, $(\delta + \alpha\circ
e_1), (\varepsilon + e_0\circ \alpha)$ is another solution, with arbitrary
$\alpha$, since $e_0 \circ (\alpha \circ e_1) = (e_0\circ \alpha) \circ
e_1$. In addition, having a solution pair $\delta,\varepsilon$ for a given
$\Delta$, automatically gives us the solution pairs $(\delta - \alpha),
(\varepsilon+\beta)$ for $\Delta$ replaced with $\Delta + e_0 \circ \alpha +
\beta\circ e_1$. When $e_0[u_0] = 0$ and $e_1[u_1] = 0$ can be solved
for their highest derivatives, we can use the above freedom to reduce
equation~\eqref{eq:offdiag}, with $\delta,\varepsilon$ and $\Delta$ of
potentially high differential orders, to the same equation, but with the
differential orders of $\delta,\varepsilon$ and $\Delta$ bounded by the orders
of $e_0$ and $e_1$.
\begin{thm} \label{thm:reduce-order}
Suppose that the rational ODE systems $e_0[u_0] = 0$ and $e_1[u_1] = 0$
are of differential orders $p_0$ and $p_1$, respectively. Suppose also that
they can be solved for the highest order derivatives, that is, for
$i=0,1$ there exist rational invertible matrices $P_i$ such that $P_i
e_i[u] = \frac{d^{p_i}}{dr^{p_i}} u + \tilde{e}_i[u]$, where
$\tilde{e}_i$ is of differential order $<p_i$. (a) The
knowledge of $\delta$ and $\Delta$ in~\eqref{eq:offdiag} is sufficient
to reconstruct $\varepsilon$ uniquely. (b) For given $\Delta$ and a $\delta$ of
fixed differential order, the existence of an $\varepsilon$
satisfying~\eqref{eq:offdiag} is equivalent to a rational ODE system on
the coefficients of $\delta$. (c) If $\Delta$ is of differential order
$<p_0+p_1$, then~\eqref{eq:offdiag} has a solution if and only if it has
a solution where $\delta$ is of differential order $<p_1$ and $\varepsilon$ is
of differential order $<p_0$.
\end{thm}
\begin{proof}
We first make the standard observation that, under our hypotheses on
$e_i$ ($i=0,1$), for any differential operator $f_i[u_i]$, we can find
unique differential operators $g_i$ and $\tilde{f}_i$ such that $f_i =
g_i + \tilde{f}_i \circ e_i$, with $g_i$ of differential order $<p_i$.
This is easy to prove by noting that we cannot decrease the differential
order of $e_i$ by pre-composing it with a non-zero differential operator
and then recursively rewriting the highest order derivatives in $f_i$,
say $\frac{d^{p_i+q}}{dr^{p_i+q}} u_i$, as $-\frac{d^q}{dr^q}
\tilde{e}_i[u] + \frac{d^q}{dr^q} P_i e_i[u_i]$. Obviously, both $g_i$
and $\tilde{f}_i$ also have rational coefficients and, in fact, their
coefficients are linear rational differential operators applied to the
coefficients of $f_i$.
To prove part (a), note that the identity $e_0\circ \delta - \Delta = 0
+ \varepsilon\circ e_1$, combined with our initial observation, implies that
$\varepsilon$ is uniquely fixed once we know $\Delta$ and $\delta$.
To prove part (b), consider the decomposition $e_0\circ \delta - \Delta
= \tilde{\Delta} + \varepsilon\circ e_1$, with $\tilde{\Delta}$ of differential
order $<p_1$, which by our initial observation always exists and is
unique. Thus, $\delta,\varepsilon$ and $\Delta$ satisfy~\eqref{eq:offdiag} if
and only if the coefficients of $\tilde{\Delta}$ are all zero. But by
construction, the coefficients of $\tilde{\Delta}$ are linear rational
differential operators acting on the coefficients of $\delta$ and
$\Delta$.
To prove part (c), we first apply our initial observation to get the
decomposition $\delta = \tilde{\delta} + \varepsilon_1\circ e_1$, where the
differential order of $\tilde{\delta}$ is $<p_1$. Then $e_0 \circ
\tilde{\delta} = \Delta + \tilde{\varepsilon}\circ e_1$, with $\tilde{\varepsilon} =
\varepsilon - e_0 \circ \varepsilon_1$. The differential orders of $e_0 \circ \tilde{\delta}$ and
$\Delta$ are both $<p_0+p_1$, hence by comparison we can conclude that
the differential order of $\tilde{\varepsilon}$ is $<p_0$.
\end{proof}
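The division step $f_i = g_i + \tilde{f}_i \circ e_i$ used throughout this proof is easy to check on a small case. For the toy first-order operator $e[u] = u' - u$ (our own illustration, so $p = 1$), the operator $f[u] = u''$ decomposes with remainder $g[u] = u$ of order $0 < p$ and quotient $\tilde{f} = \partial_r + 1$:

```python
import sympy as sp

r = sp.symbols('r')
u = sp.Function('u')(r)

# e[u] = u' - u is solved for its highest derivative, with p = 1.
e_u = sp.diff(u, r) - u

# Claimed decomposition of f[u] = u'':  u'' = u + (d/dr + 1)[e[u]],
# i.e. g[u] = u (order 0) and f~ = d/dr + 1.
lhs = sp.diff(u, r, 2)
rhs = u + sp.diff(e_u, r) + e_u
assert sp.simplify(lhs - rhs) == 0
print("decomposition u'' = u + (d/dr + 1)[u' - u] verified")
```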
\section{Systems of Regge-Wheeler equations}
\label{sec:rw}
Let
\begin{equation}
f(r) = 1 - \frac{2M}{r}, \quad
f'(r) = \frac{f_1(r)}{r}, \quad
f_1(r) = \frac{2M}{r} .
\end{equation}
Define the \emph{(generalized) spin-$s$ Regge-Wheeler operator} with
\emph{mass parameter} $M$, \emph{angular momentum quantum number} $l$
and \emph{frequency} $\omega$ by
\begin{equation}
\mathcal{D}_s \phi = \partial_r f \partial_r \phi
- \frac{1}{r^2} [\mathcal{B}_l + (1-s^2)f_1] \phi + \frac{\omega^2}{f} \phi ,
\end{equation}
where $\mathcal{B}_l = l(l+1)$, with $l=0,1,2,\ldots$. We will assume that
$\omega \ne 0$ and that $s$ is a non-negative integer. Of course, for
any $s$, $\mathcal{D}_s$ has rational coefficients and satisfies the hypotheses
of Lemma~\ref{lem:pole-locations}.
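These definitions are straightforward to encode in a computer algebra system. The following sympy sketch checks the bookkeeping identity $f'(r) = f_1(r)/r$ and the fact that operators of different spin differ only through the $(1-s^2)f_1$ potential term (so that, for instance, $\mathcal{D}_1 - \mathcal{D}_0 = f_1/r^2$ as a multiplication operator):

```python
import sympy as sp

r, M, w, l = sp.symbols('r M omega l', positive=True)
f = 1 - 2*M/r
f1 = 2*M/r
B = l*(l + 1)

# f'(r) = f_1(r)/r, as stated above.
assert sp.simplify(sp.diff(f, r) - f1/r) == 0

def D(s, psi):
    """Generalized spin-s Regge-Wheeler operator applied to psi(r)."""
    return (sp.diff(f * sp.diff(psi, r), r)
            - (B + (1 - s**2)*f1)/r**2 * psi
            + w**2/f * psi)

phi = sp.Function('phi')(r)
# Different spins differ only through the (1 - s^2) f_1 term:
assert sp.simplify(D(1, phi) - D(0, phi) - f1/r**2 * phi) == 0
```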
Consider the upper triangular rational ODE system
\begin{equation} \label{eq:rw-sys}
\begin{bmatrix}
\mathcal{D}_{s_0} & \Delta \\
0 & \mathcal{D}_{s_1}
\end{bmatrix}
\begin{bmatrix}
u_0 \\ u_1
\end{bmatrix}
=
\begin{bmatrix}
0 \\ 0
\end{bmatrix} ,
\end{equation}
where we suppose that $\Delta$ is of differential order at most $1$. As
discussed in Section~\ref{sec:offdiag}, this system is reducible to
diagonal form by an equivalence with rational
coefficients if and only if the following version of
Equation~\eqref{eq:offdiag} is satisfied:
\begin{equation} \label{eq:rw-offdiag}
\mathcal{D}_{s_0} \circ \delta = \Delta + \varepsilon \circ \mathcal{D}_{s_1} .
\end{equation}
By Theorem~\ref{thm:reduce-order}, without loss of generality, we can
consider this problem restricted to the following class of
operators:
\begin{align}
\label{eq:Delta-param}
\Delta &= \frac{1}{r^2} (\Delta_1 r \partial_r + \Delta_0) , \\
\label{eq:delta-param}
\delta &= \delta_1 r \partial_r + \delta_0 , \\
\label{eq:eps-param}
\varepsilon &= \delta_1 r \partial_r + [2\partial_r(r\delta_1)-\frac{f_1}{f}\delta_1
+ \delta_0] ,
\end{align}
where $\delta_i,\Delta_i$, for $i=0,1$, are all rational functions.
Plugging this parametrization into~\eqref{eq:rw-offdiag} and comparing
coefficients, we find the equivalent ODE system
\begin{multline} \label{eq:rw-decoupling}
e \begin{bmatrix}
\delta_0 \\ \delta_1
\end{bmatrix}
:=
\begin{bmatrix}
f & 0 \\
0 & f
\end{bmatrix}
r^2 \partial_r^2 \begin{bmatrix} \delta_0 \\ \delta_1 \end{bmatrix}
+
\begin{bmatrix}
f_1 & -2\frac{\omega^2 r^2}{f} + 2[\mathcal{B}_l+f_1(1-s_1^2)] \\
2 f & 2f-f_1
\end{bmatrix}
r \partial_r \begin{bmatrix} \delta_0 \\ \delta_1 \end{bmatrix}
\\
+
\begin{bmatrix}
f_1 (s_0^2-s_1^2) & -2\frac{\omega^2 r^2(f-f_1)}{f^2} - \frac{f_1}{f} [\mathcal{B}_l+1-s_1^2] \\
0 & f_1 (s_0^2-s_1^2 + \frac{1}{f})
\end{bmatrix}
\begin{bmatrix} \delta_0 \\ \delta_1 \end{bmatrix}
=
\begin{bmatrix} \Delta_0 \\ \Delta_1 \end{bmatrix}
\end{multline}
for $\delta_0,\delta_1$, with $\Delta_0,\Delta_1$ as inhomogeneous
sources.
Next, we will apply the analysis of Section~\ref{sec:rat-sols} to check
the conditions under which the system~\eqref{eq:rw-decoupling} has
rational solutions for $\delta_0,\delta_1$, with given
$\Delta_0,\Delta_1$.
It is easy to see that the only singular points of the
equation~\eqref{eq:rw-decoupling} are $r=0,2M,\infty$ (recall that $f(2M) =
0$). In this work, we will not consider $\Delta_0, \Delta_1$ with poles
at other values of $r$, which means that these points are the only
possible locations of the poles of $\delta_0, \delta_1$.
For each of the singular points, we have the following multipliers ($S$
and $T$), characteristic matrices ($E_n$) and characteristic exponents
($\sigma(e)$).
\begin{itemize}
\item $r=0$:
$\check{\sigma}_0(e) = \{\pm s_0 \pm s_1\}$, with
$\det E_n = (2M)^2 (n+s_0+s_1) (n+s_0-s_1) (n-s_0+s_1) (n-s_0-s_1)$
and
\begin{equation}
S = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} , \quad
T = \begin{bmatrix}
r^{-1} & 0 \\
0 & r^{-1}
\end{bmatrix} , \quad
E_n = -2M \begin{bmatrix}
n^2 - 2n - s_0^2 + s_1^2 & -2n (1-s_1^2) \\
2n & n^2 + 2n - s_0^2 + s_1^2
\end{bmatrix} .
\end{equation}
\item $r=2M$:
$\check{\sigma}_{2M}(e) = \{ -1 \}$, with
$\det E_n = (n+1)^2 [(n+1)^2 + 16M^2\omega^2]$
and
\begin{equation}
S = \begin{bmatrix}
\frac{(r-2M)}{2M} & 0 \\
0 & \frac{(r-2M)^2}{4M^2}
\end{bmatrix} , \quad
T = \begin{bmatrix}
1 & 0 \\
0 & \frac{2M}{(r-2M)}
\end{bmatrix} , \quad
E_n = \begin{bmatrix}
(n+1)^2 & -8M^2\omega^2 (n+1) \\
2(n+1) & (n+1)^2
\end{bmatrix} .
\end{equation}
\item $r=\infty$:
$\hat{\sigma}_\infty(e) = \{ -1, 0 \}$, with
$\det E_n = 4\omega^2 n (n+1)$
and
\begin{equation}
S = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} , \quad
T = \begin{bmatrix}
r^2 & 0 \\
0 & 1
\end{bmatrix} , \quad
E_n = \begin{bmatrix}
0 & -2\omega^2 (n+1) \\
2n & n(n+1)
\end{bmatrix} .
\end{equation}
\end{itemize}
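The stated determinant factorizations can be verified symbolically in a few lines:

```python
import sympy as sp

n, M, w, s0, s1 = sp.symbols('n M omega s_0 s_1')

E0 = -2*M * sp.Matrix([[n**2 - 2*n - s0**2 + s1**2, -2*n*(1 - s1**2)],
                       [2*n, n**2 + 2*n - s0**2 + s1**2]])
E2M = sp.Matrix([[(n + 1)**2, -8*M**2*w**2*(n + 1)],
                 [2*(n + 1), (n + 1)**2]])
Einf = sp.Matrix([[0, -2*w**2*(n + 1)],
                  [2*n, n*(n + 1)]])

# det E_n at r = 0, r = 2M and r = oo, as listed above:
assert sp.expand(E0.det()
                 - (2*M)**2*(n + s0 + s1)*(n + s0 - s1)
                   *(n - s0 + s1)*(n - s0 - s1)) == 0
assert sp.expand(E2M.det() - (n + 1)**2*((n + 1)**2 + 16*M**2*w**2)) == 0
assert sp.expand(Einf.det() - 4*w**2*n*(n + 1)) == 0
print("all three determinant formulas check out")
```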
Suppose that $A_\rho = T^{-1}_\rho [\begin{smallmatrix} \Delta_0 \\
\Delta_1 \end{smallmatrix}]$ has leading order $\check{m} = \min_i
|\check{A}^i_0|$ at $r=0$ and trailing order $\hat{m} = \max_i |\hat{A}^i_\infty|$
at $r=\infty$. At $r=2M$, we do not need the specific order, just some
lower bound $m \le \min_i |\check{A}^i_{2M}|$ for some integer $m\le -1$. We
choose a bound of this form because then the identity
\begin{equation}
\min\{m\} \cup \check{\sigma}_{2M}(e)
= \min \{m, -1\} = m
\end{equation}
determines that the Laurent series expansion of the solution $\delta =
[\begin{smallmatrix} \delta_0 \\ \delta_1 \end{smallmatrix}]$ must
belong to
\begin{equation}
\delta
\in \begin{bmatrix}
\frac{(r-2M)}{2M} & 0 \\
0 & \frac{(r-2M)^2}{4M^2}
\end{bmatrix}
(r-2M)^{\min\{m\} \cup \check{\sigma}_{2M}(e)} \mathbb{C}[[(r-2M)]]
= \begin{bmatrix}
1 & 0 \\
0 & \frac{(r-2M)}{2M}
\end{bmatrix}
(r-2M)^{m+1} \mathbb{C}[[(r-2M)]] .
\end{equation}
Since this is the only condition to be satisfied for poles other than at
$r=0,\infty$, without loss of generality, we can take $R=f^{m+1} =
(\frac{r-2M}{r})^{m+1}$ as a convenient universal multiplier, so that,
according to Theorem~\ref{thm:univ-mult}, any rational solution
of~\eqref{eq:rw-decoupling} must satisfy
\begin{equation}
\delta \in R \, \mathbb{C}[r,r^{-1}] = f^{m+1} \mathbb{C}[r,r^{-1}] .
\end{equation}
Finally, we can parametrize any such solution with the following bounded
order Laurent polynomial:
\begin{equation} \label{eq:bounded-rat-sol}
\delta
= f^{m+1} \sum_{n=\check{n}}^{\hat{n}} d_n r^n ,
\quad \text{where} \quad
\left\{
\begin{aligned}
\hat{n} &= \max \{\hat{m}\} \cup \hat{\sigma}_\infty(e)
= \max\{\hat{m},0\} , \\
\check{n} &= \min \{\check{m}\} \cup \check{\sigma}_0(e)
= \min\{\check{m},-s_0-s_1\} .
\end{aligned}
\right.
\end{equation}
\subsection{Examples} \label{sec:examples}
We finish with a few explicit examples. Sometimes, in specific examples,
a $\delta,\varepsilon$ solution for a given $\Delta$ can be found by trial
and error. However, when unguided, such a process can be quite
laborious, and if no solution is found, one cannot
automatically conclude that a solution does not exist. Using the method
presented above, when the trial and error method becomes too laborious,
it can be automated using a computer algebra system. Moreover, our
method can also furnish a proof that in some situations no solution
exists.
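The automation mentioned above amounts to equating Laurent coefficients and solving a finite linear system. The following SymPy sketch illustrates the idea on a toy scalar ODE with a rational source; the operator \texttt{e}, the source, and the order bounds here are invented for the illustration and are \emph{not} the Regge-Wheeler system of the text.

```python
import sympy as sp

r = sp.symbols('r')

def e(u):
    # Toy rational ODE operator (illustration only):
    # e[u] = u'' + (2/r) u' - (2/r^2) u.
    return sp.diff(u, r, 2) + (2/r)*sp.diff(u, r) - (2/r**2)*u

v = 1/r**2  # rational source

# Bounded Laurent-polynomial ansatz u = sum_{n=nmin}^{nmax} c_n r^n.
nmin, nmax = -3, 3
cs = sp.symbols(f'c0:{nmax - nmin + 1}')
ansatz = sum(c * r**(nmin + k) for k, c in enumerate(cs))

# Equating all coefficients of the residual to zero turns the ODE
# into a finite-dimensional linear system for the c_n.
numer = sp.expand(sp.numer(sp.together(e(ansatz) - v)))
eqs = sp.Poly(numer, r).coeffs()
sol = sp.solve(eqs, cs, dict=True)[0]
u_rat = ansatz.subs(sol)  # rational solution (with free homogeneous part)
```

Here \texttt{sol} pins the coefficients forced by the source and leaves the homogeneous degrees of freedom free; an inconsistent linear system would instead prove that no rational solution exists within the prescribed bounds.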
Below, we find it helpful to use the notation $\check{O}(r^p)$ to denote
the leading order of a Laurent series at $r=0$ and $\hat{O}(r^q)$ to
denote the trailing order of a Laurent series at $r=\infty$.
\begin{enumerate}
\item \label{itm:f1-s01}
The equation
\begin{equation}
\mathcal{D}_0 \circ \delta = \frac{f_1}{r^2} + \varepsilon \circ \mathcal{D}_1 ,
\end{equation}
where
\begin{equation}
\begin{bmatrix}
\Delta_0 \\ \Delta_1
\end{bmatrix}
= \begin{bmatrix}
f_1 \\ 0
\end{bmatrix}
= \begin{bmatrix}
\check{O}(r^{-1}) \\ 0
\end{bmatrix}
= \begin{bmatrix}
\hat{O}(r^{-1}) \\ 0
\end{bmatrix} ,
\end{equation}
gives rise to the normalized sources
\begin{equation}
|\check{A}_0|
= \begin{bmatrix}
0 \\ \infty
\end{bmatrix} , \quad
|\check{A}_{2M}|
= \begin{bmatrix}
0 \\ \infty
\end{bmatrix} , \quad
|\hat{A}_\infty|
= \begin{bmatrix}
-3 \\ -\infty
\end{bmatrix}
.
\end{equation}
Since $A_{2M} = T_{2M}^{-1}\Delta$ has no pole at $r=2M$, we can
choose the universal multiplier to be $R = 1$. $A_0$ and $A_\infty$ give
the orders $\check{m} = 0$ and $\hat{m} = -3$ and hence the Laurent
polynomial bounds $\check{n} = \min\{0,-0-1\} = -1$ and $\hat{n} =
\max\{-3, 0\} = 0$. From this information, we can conclude that there
exists the \emph{unique solution}
\begin{equation}
\delta = -1 , \quad
\varepsilon = -1 .
\end{equation}
\item \label{itm:f1-s00}
The equation
\begin{equation}
\mathcal{D}_0 \circ \delta = \frac{f_1}{r^2} + \varepsilon \circ \mathcal{D}_0
\end{equation}
where $\Delta_0$, $\Delta_1$ and the corresponding normalized sources
are the same as in Example~\ref{itm:f1-s01}. Again, since $A_{2M} =
T_{2M}^{-1}\Delta$ has no pole at $r=2M$, we can choose the universal
multiplier to be $R = 1$. $A_0$ and $A_\infty$ give the orders $\check{m}
= 0$ and $\hat{m} = -3$ and hence the Laurent polynomial bounds
$\check{n} = \min\{0,-0-0\} = 0$ and $\hat{n} = \max\{-3,0\} = 0$.
From this information, we can conclude that there exists \emph{no
solution} for $\delta,\varepsilon$.
\item \label{itm:vw-offdiag}
The equation
\begin{equation}
\mathcal{D}_0 \circ \delta
= -\frac{f_1}{r^2} \left(\mathcal{B}_l + \frac{f_1}{2}\right)
+ \varepsilon \circ \mathcal{D}_0
\end{equation}
where
\begin{equation}
\begin{bmatrix}
\Delta_0 \\ \Delta_1
\end{bmatrix}
= \begin{bmatrix}
-f_1 \left(\mathcal{B}_l + \frac{f_1}{2}\right) \\
0
\end{bmatrix}
= \begin{bmatrix}
\check{O}(r^{-2}) \\ 0
\end{bmatrix}
= \begin{bmatrix}
\hat{O}(r^{-1}) \\ 0
\end{bmatrix}
\end{equation}
gives rise to the normalized sources
\begin{equation}
|\check{A}_0|
= \begin{bmatrix}
-1 \\ \infty
\end{bmatrix} , \quad
|\check{A}_{2M}|
= \begin{bmatrix}
0 \\ \infty
\end{bmatrix} , \quad
|\hat{A}_\infty|
= \begin{bmatrix}
-3 \\ -\infty
\end{bmatrix}
.
\end{equation}
Since $A_{2M} = T_{2M}^{-1}\Delta$ has no pole at $r=2M$, we can
choose the universal multiplier to be $R = 1$. $A_0$ and $A_\infty$ give
the orders $\check{m} = -1$ and $\hat{m} = -3$ and hence the Laurent
polynomial bounds $\check{n} = \min\{-1,-0-0\} = -1$ and $\hat{n} =
\max\{-3,0\} = 0$. From this information, we can conclude that there
exists \emph{no solution} for $\delta,\varepsilon$.
\item \label{itm:berndtson-s02}
The equation
\begin{equation}
\mathcal{D}_0 \circ \delta = \Delta + \varepsilon \circ \mathcal{D}_2 ,
\end{equation}
with
\begin{equation}
\begin{split}
\Delta =&
24 i f_1 r^2 \omega^3 - 4 i f (6 f f_1 +
6 \mathcal{B}_l f_1 +\mathcal{A}_l) r\omega \partial_r\\
&- 2 i \left( \mathcal{A}_l + 2 (\mathcal{B}_l-3) + (\mathcal{A}_l - \mathcal{B}_l) (1 + 2 \mathcal{B}_l) + 2 (\mathcal{A}_l + 6 \mathcal{B}_l) f - 9\frac{\mathcal{A}_l}{\mathcal{B}_l} f^2 -
12 f^3 \right) \omega \\
&+\frac{ f_1 f\mathcal{B}_l (- 4 f_1^2 + 8 f f_1 - 4 \mathcal{B}_l + 16 f\mathcal{B}_l +\mathcal{A}_l)}{ir\omega}\partial_r + \frac{i f_1 \mathcal{B}_l (
\mathcal{A}_l (\mathcal{B}_l - 7 f) + 12 f (1 - (2 + \mathcal{B}_l) f + f^2)) }{r^2 \omega}
\end{split}
\end{equation}
\begin{equation}
\begin{bmatrix}
\Delta_0 \\ \Delta_1
\end{bmatrix}
= \begin{bmatrix}
\check{O}(r^{-4}) \\
\check{O}(r^{-4})
\end{bmatrix}
= \begin{bmatrix}
\hat{O}(r^{3}) \\
\hat{O}(r^{2})
\end{bmatrix}
\end{equation}
gives rise to the normalized sources
\begin{equation}
|\check{A}_0| = \begin{bmatrix}
-3 \\ -3
\end{bmatrix} , \quad
|\check{A}_{2M}| = \begin{bmatrix}
0 \\ 1
\end{bmatrix} , \quad
|\hat{A}_\infty| = \begin{bmatrix}
1 \\ 2
\end{bmatrix} .
\end{equation}
Since $A_{2M} = T_{2M}^{-1}\Delta$ has no pole at $r=2M$, we can
choose the universal multiplier to be $R = 1$. $A_0$ and $A_\infty$ give
the orders $\check{m} = -3$ and $\hat{m} = 2$ and hence the Laurent
polynomial bounds $\check{n} = \min\{-3,-0-2\} = -3$ and $\hat{n} =
\max\{2,0\} = 2$. From this information, we can conclude that there
exists a unique solution
\begin{equation}
\begin{split}
\delta =&
-6 i f_1 f r^3 \omega \partial_r - i (6 f f_1 + 12 \mathcal{B}_l f_1 +\mathcal{A}_l) r^2 \omega
+\frac{i f \mathcal{B}_l (4\mathcal{A}_l - 4 - 2 f_1 + 24 f f_1 + 3 \mathcal{B}_l + f l(l-1)) r\partial_r }{2 \omega}\\
& + \frac{i (\mathcal{A}_l^2 + 2 \mathcal{A}_l (9 + 5 \mathcal{B}_l) f - 6 \mathcal{B}_l f^2 (-8f_1 -6 + 3 \mathcal{B}_l)) }{4 \omega}
,
\end{split}
\end{equation}
with $\varepsilon$ given by~\eqref{eq:eps-param}.
The above result was obtained and checked with computer algebra.
\item \label{itm:berndtson-s12}
The equation
\begin{equation}
\mathcal{D}_1 \circ \delta = \Delta + \varepsilon \circ \mathcal{D}_2 ,
\end{equation}
with
\begin{equation}
\Delta =
-24 i f_1 f r\omega \partial_r - 4 i \mathcal{A}_l \omega
+ \frac{6 f_1 f (3 f - 1) \mathcal{B}_l}{ir\omega}\partial_r
- \frac{-i f_1 \mathcal{B}_l (18 f f_1 - 6 f\mathcal{B}_l +\mathcal{A}_l)}{r^2 \omega}
\end{equation}
\begin{equation}
\begin{bmatrix}
\Delta_0 \\ \Delta_1
\end{bmatrix}
= \begin{bmatrix}
\check{O}(r^{-3}) \\
\check{O}(r^{-3})
\end{bmatrix}
= \begin{bmatrix}
\hat{O}(r^{2}) \\
\hat{O}(r^{1})
\end{bmatrix}
\end{equation}
gives rise to the normalized sources
\begin{equation}
|\check{A}_0| = \begin{bmatrix}
-2 \\ -2
\end{bmatrix} , \quad
|\check{A}_{2M}| = \begin{bmatrix}
0 \\ 1
\end{bmatrix} , \quad
|\hat{A}_\infty| = \begin{bmatrix}
0 \\ 1
\end{bmatrix} .
\end{equation}
Since $A_{2M} = T_{2M}^{-1}\Delta$ has no pole at $r=2M$, we can
choose the universal multiplier to be $R = 1$. $A_0$ and $A_\infty$ give
the orders $\check{m} = -2$ and $\hat{m} = 1$ and hence the Laurent
polynomial bounds $\check{n} = \min\{-2,-1-2\} = -3$ and $\hat{n} =
\max\{1,0\} = 1$. From this information, we can conclude that there
exists a unique solution
\begin{equation}
\delta =
-12 i f_1 r^2 \omega + \frac{2 i
f (\mathcal{B}_l - 2 f_1) (\mathcal{B}_l - 2 f + f_1) r\partial_r }{\omega} + \frac{i
(\frac{\mathcal{A}_l^2}{\mathcal{B}_l}+ 6\frac{\mathcal{A}_l}{\mathcal{B}_l}(\mathcal{B}_l + 3) f -18 (\mathcal{B}_l-4)f^2 -36f)}{3 \omega}
,
\end{equation}
with $\varepsilon$ given by~\eqref{eq:eps-param}.
The above result was obtained and checked with computer algebra.
\end{enumerate}
The solution from Example~\ref{itm:f1-s01} can be generalized to
\begin{equation}
\mathcal{D}_{s_0} \frac{1}{(s_0^2-s_1^2)}
= \frac{f_1}{r^2} + \frac{1}{(s_0^2-s_1^2)} \mathcal{D}_{s_1} ,
\end{equation}
which actually works for any complex values of $s_0,s_1$, except for
$s_0 = \pm s_1$. We obtained this parametric solution by trial and
error, while trying to understand and generalize some identities
from~\cite{berndtson}. However, simply having this formula does not tell
us whether it is the unique solution. Applying our systematic approach,
we can check uniqueness as we did in Example~\ref{itm:f1-s01} for
$s_0=0$ and $s_1=1$, but only for specific values of $s_0,s_1$ at a time.
The reason is that the lower bound $\check{n}$ on the Laurent polynomial
order of $\delta$ in~\eqref{eq:bounded-rat-sol} is influenced by $\min
\sigma_0(e) = -s_0-s_1$ (at least for non-negative integer values of the
$s_i$), which depends on the $s_i$.
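Once an operator family is fixed, the parametric identity above is easy to verify with computer algebra. The sketch below assumes, purely for illustration, a family $\mathcal{D}_s$ whose spin dependence enters only through an $s^2 f_1/r^2$ potential term, with $f = 1-2M/r$ and $f_1 = 2M/r$; the precise conventions of the operators in the text may differ.

```python
import sympy as sp

r, w, M, s0, s1, l = sp.symbols('r omega M s_0 s_1 l')
f = 1 - 2*M/r   # assumed form for this illustration
f1 = 2*M/r      # assumed form for this illustration

def D(s, u):
    # Hypothetical generalized Regge-Wheeler family: the spin s enters
    # only through an s^2 f_1/r^2 potential term (assumed form; the
    # conventions in the text may differ).
    return (f*sp.diff(f*sp.diff(u, r), r) + w**2*u
            - f*l*(l + 1)/r**2*u + s**2*f1/r**2*u)

u = sp.Function('u')(r)
lhs = D(s0, u/(s0**2 - s1**2))
rhs = f1/r**2*u + D(s1, u)/(s0**2 - s1**2)
identity = sp.simplify(lhs - rhs)  # vanishes identically for s0 != +-s1
```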
Note also that the above formula is singular for $s_0=\pm s_1$ and no
longer tells us anything about the existence of solutions in those
cases. On the other hand, our systematic approach can check that
indeed no solution exists, again on a case by case basis, as we did
for $s_0=s_1=0$ in Example~\ref{itm:f1-s00}.
In Equation~(88) of~\cite{kh-vwtriang}, we had managed to reduce the
problem to a $3\times 3$ upper triangular system with Regge-Wheeler
operators on the diagonal and a single non-vanishing rational
off-diagonal component.
Incidentally, the solution from Example~\ref{itm:f1-s01} was
instrumental in that simplification. The existence of a solution in
Example~\ref{itm:vw-offdiag} would mean that this system could be
further reduced to diagonal form by a rational transformation. This
question was left open in~\cite{kh-vwtriang}. However, the non-existence
of such solutions, as we have just confirmed in
Example~\ref{itm:vw-offdiag}, proves that no such further simplification is
possible.
In~\cite{kh-vwtriang}, we have arrived at an upper-triangular
Regge-Wheeler system as a partial decoupling of the radial mode
equations of the vector wave equation (which could be interpreted as the
\emph{harmonic} or \emph{Lorenz} gauge-fixed version of Maxwell's
equations) on the background of a Schwarzschild black hole. That work
was strongly inspired by~\cite{berndtson}, which achieved a similar
decoupling for the Lichnerowicz equation (which could be interpreted as
the \emph{harmonic} or \emph{de~Donder} gauge-fixed version of
linearized Einstein's equations) also on Schwarzschild. The methods and
results achieved in~\cite{berndtson} are unfortunately somewhat
obscure and implicit. We aim to clarify those results using the
systematic methods that we have outlined in~\cite{kh-vwtriang} and in
the present work. For instance, the existence of the solutions from
Examples~\ref{itm:berndtson-s02} and~\ref{itm:berndtson-s12} is
equivalent to Equations~(3.49--51) from~\cite{berndtson}, which were
instrumental to their main decoupling results, but apparently obtained
by trial and error, without a clear guide to how they could be
reproduced independently. Fortunately, Examples~\ref{itm:berndtson-s02}
and~\ref{itm:berndtson-s12} show that our systematic approach can
rediscover these formulas in a straightforward way using computer
algebra.
\section{Discussion} \label{sec:discuss}
The main goal of this work was to conclusively decide when it is or is
not possible to reduce an upper triangular rational ODE system
like~\eqref{eq:rw-sys} to diagonal form by a transformation
like~\eqref{eq:triang-reduce} with rational coefficients, where on the
diagonal we have generalized Regge-Wheeler operators. This question was
left open in our previous work~\cite{kh-vwtriang}. In
Section~\ref{sec:offdiag}, we showed how to reduce this question to the
existence of a rational solution to an auxiliary rational ODE system. In
Section~\ref{sec:rat-sols} we showed that, under mild hypotheses, the
existence of a rational solution of a rational ODE system can be reduced
to a finite dimensional linear algebra problem. Hence, such a question
can always be conclusively decided, at least on a case by case basis. In
Section~\ref{sec:examples}, we gave several examples illustrating our
methods. These examples reproduce, in a systematic way, some identities
previously discovered by voluminous trial and error calculations
in~\cite{berndtson}.
These identities were used in~\cite{kh-vwtriang} to significantly
simplify, after a separation of variables, the coupled radial mode
equations of the vector wave equation on Schwarzschild spacetime. Our
Example~\ref{itm:vw-offdiag} shows that this simplification cannot be
further improved. The vector wave equation plays a role relative to the
Maxwell equation that is analogous to the Lichnerowicz equation relative
to the linearized Einstein equations. In a future work, we will further
build on the results of~\cite{berndtson} to apply to the Lichnerowicz
equation the same simplifications as were applied to the vector wave
equation in~\cite{kh-vwtriang}. The methods developed in this work will
help decide how much these simplifications could be improved. Of course,
it will also be very interesting to see how much the simplifications
studied jointly in~\cite{kh-vwtriang} and the current work will
translate from the (non-rotating) Schwarzschild black hole to the
significantly more complicated case of the (rotating) Kerr black hole.
An interesting generalization of the question of the existence of
rational solutions to the rational ODE $e[u] = v$ is the
characterization of the image of $e$ when applied to arbitrary rational
arguments. An equivalent question is the characterization of the
rational cokernel of $e$. Then, even if no rational solution to $e[u] =
v$ exists, precisely identifying the equivalence class of $v$ in the
cokernel of $e$ might allow us to choose a representative from the
equivalence class of $v$ that is simplest, with respect to some
reasonable criteria. Such questions also have connections with the
theory of $\mathcal{D}$-modules with rational
coefficients~\cite[Ch.2]{vanderput-singer}, \cite[Sec.10.5]{seiler},
which is an algebraic formalism for studying linear differential
equations, especially those with polynomial or rational coefficients.
These topics may also be explored in future work.
\section*{Acknowledgments}
Research of the author was partially supported by the GA\v{C}R project
18-07776S and RVO: 67985840. The author also thanks Francesco Bussola
for help with converting Equations~(3.49--51) of~\cite{berndtson} into
the form given in Examples~\ref{itm:berndtson-s02}
and~\ref{itm:berndtson-s12}.
\bibliographystyle{utphys-alpha}
\section{Introduction} \label{sec:1}
Personalized decision-making is an emerging artificial intelligence paradigm tailored to an individual's characteristics, with wide real-world applications such as
precision medicine \citep{chakraborty2013statistical},
customized economics \citep{turvey2017optimal},
and personalized marketing \citep{cho2002personalized}. The ultimate goal is to optimize the outcome of interest by assigning the right treatment to the right subjects. The resulting best strategy is referred to as the optimal decision rule (ODR).
A large number of approaches have been developed for finding ODR, including Q-learning \citep{watkins1992q,chak2010,qian2011performance}, A-learning \citep{murphy2003optimal,robins2004optimal,shi2018high}, value search methods \citep{zhang2012robust,wang2018quantile,nie2020learning}, and decision tree-based methods \citep{nikovski2006induction,laber2015tree,zhangyc2016}.
However, all these methods are developed based on data from a single source where the primary outcome of interest can be observed for all subjects, making these works less practical in more complicated situations.
There are many applications involving multiple datasets from different sources, where the primary outcome of interest is limited in the sense that it is only observed in some data sources.
Take the treatment of sepsis as an instance. In the MIMIC-III clinical database \citep{goldberger2000physiobank,johnson2016mimic,biseda2020prediction}, thousands of patients in intensive care units (ICUs) of the Beth Israel Deaconess Medical Center between 2001 and 2012 were treated with different medical interventions such as the {vasopressor} and followed up for the mortality due to sepsis.
Besides the primary outcome of interest (i.e., the mortality due to sepsis), we can observe other post-treatment intermediate outcomes (also known as surrogates or proximal outcomes), such as the total urine output and the accumulated net of metabolism. These intermediate outcomes, as well as baseline variables and the treatment information collected in the MIMIC-III data, were also recorded in the eICU collaborative research database \citep{goldberger2000physiobank,pollard2018eicu} that contains over 200,000 admissions to ICUs across the United States between 2014 and 2015.
Yet, the outcome of interest was not reported in eICU. Hence, we view the MIMIC-III data as the primary sample and the eICU data as the auxiliary sample without the outcome of interest,
leading to a central goal in precision medicine: finding the ODR that optimizes the mortality rate of sepsis based on these datasets.
Several challenges lie in developing ODR with multiple data sources and the limited outcome. Recall the sepsis example in the MIMIC-III and eICU data. First of all, the primary outcome of interest is limited and only recorded in the MIMIC-III data. Yet, an estimation based on the MIMIC-III data solely would be less efficient, since both the MIMIC-III data and the eICU data collected {common baseline covariates, the treatment, and intermediate outcomes} from ICU patients with sepsis disease. Consequently, the eICU data can be included as the auxiliary sample to gain efficiency. In addition, how to effectively integrate multiple data sources {from heterogeneous studies} for deriving ODR can be challenging. Note that the MIMIC-III and eICU data were collected from {different locations during different periods}. These two samples show certain heterogeneity (see details in Section \ref{sec:5}) such as diverse probability distributions in baseline covariates, the treatment, and intermediate outcomes, and thus cannot be combined directly.
To overcome these difficulties, in this work, we propose \textcolor{black}{a new framework to handle heterogeneous studies and address the limited outcome simultaneously via a novel \underline{c}alibrated \underline{o}ptimal \underline{d}ecision m\underline{a}king (CODA) method.}
Motivated by the common data structures and similar data dependence in the MIMIC-III and the eICU data,
CODA naturally utilizes the common intermediate outcomes in multiple data sources through a calibration technique \citep[see e.g., ][]{chen2000unified,wu2001model,chen2002cox,cao2009improving,lumley2011connections}, to improve the treatment decision rule by borrowing information from auxiliary samples.
\subsection{Our Contribution}
Our contributions can be summarized as follows.
First, personalized decision-making based on multiple data sources is a vital problem in precision medicine. To the best of our knowledge, this is the first work on developing ODR with multiple data sources {from heterogeneous studies} for the limited outcome. Our proposed method is particularly useful for personalized decision-making with multiple {different} electronic medical records data, where missingness in outcomes is a common issue.
Second, to borrow information from auxiliary samples, we {propose the comparable intermediate outcomes (CIO) assumption}, that is, the conditional means of intermediate outcomes given baseline covariates and the treatment information are the same in the two samples. This assumption avoids the specification of the missing mechanism in auxiliary samples and is testable based on the observed data.
{Third, all the current calibration-based methods require that covariates in the primary and auxiliary samples are from the same distribution. In this work, we allow baseline covariates across different studies to have either homogeneous or heterogeneous distributions. We propose a unified framework for deriving a calibrated doubly robust estimator of the conditional mean outcome of interest (i.e., the value function) through the projection onto the difference of value estimators for common intermediate outcomes in the two samples.
When the distributions of baseline covariates differ in different samples, we construct the calibrated value estimator by rebalancing the value estimators of common intermediate outcomes in two samples based on their posterior sampling probability. To handle the large-scale datasets (such as the MIMIC-III and eICU data) and obtain interpretable decision rules, we develop an iterative policy tree search algorithm to find the decision rule that maximizes the calibrated value estimator in the primary sample.}
{Fourth, we establish the asymptotic normality of the calibrated value estimator under the estimated ODR obtained by CODA under both homogeneous and heterogeneous covariates. Different from existing calibration estimators, the proposed calibrated value estimator is a function of the decision rule and needs to be maximized over a class of decision rules to find the estimated ODR. Thus, its asymptotic properties depend on the convergence of the estimated ODR. To this end, we show that the calibrated value estimator under the estimated ODR converges to its counterpart under the true ODR at a rate of $o_p(N_E^{-1/2})$ where $N_E$ is the sample size of the primary sample.
In addition, the calibrated value estimator is shown to be more efficient than that obtained using the primary sample solely under the CIO assumption. Its variance can be consistently estimated by a simple plug-in method without accounting for the variation in estimating nuisance functions, such as propensity scores and conditional mean models of outcomes, due to the rate double robustness properties.}
\subsection{Related Works}
There are several recent works in using multiple data sources for estimating the average treatment effect (ATE) \citep{yang2019combining,athey2020combining, kallus2020role} or deriving a robust ODR to account for heterogeneity in multiple data sources \citep{shi2018maximin,mo2020learning}. However, the settings and goals of these studies are different from what we consider here.
{Specifically, in the works of \cite{yang2019combining}, \cite{athey2020combining}, and \cite{kallus2020role}, it was assumed that the two samples are from the same population and were linked together through a missing data framework, such as under the missing-at-random assumption. This allows one to either develop a calibrated estimator using the common baseline covariates in both samples \citep{yang2019combining} or} impute the missing primary outcome in the auxiliary data \citep{athey2020combining, kallus2020role}, so that a more efficient estimator can be constructed for the ATE. However,
the multiple data sources considered in our study may come from {heterogeneous studies}, as in the MIMIC-III and eICU data, and hence their missing data framework cannot be directly applied to our problem. For example, simply extending the calibration method for the ATE considered in \cite{yang2019combining} may lead to a biased result when the covariate distributions in the two samples are heterogeneous, while the adaptation of the methods of \cite{athey2020combining} and \cite{kallus2020role} requires an untestable assumption that the conditional means of the outcome of interest given the baseline covariates, treatment, and intermediate outcomes are the same across the two samples.
On the other hand, the main goal of \cite{shi2018maximin} and \cite{mo2020learning} is to develop a single ODR that can work for multiple data sources with heterogeneity in data distributions or outcome models, and their methods do not allow missingness in outcomes. In contrast, we are interested in improving the efficiency of the value estimator under ODR for the limited outcome observed in the primary sample only, by leveraging available auxiliary data sources.
\subsection{Outline of the Paper}
The rest of this paper is organized as follows. We introduce notations and assumptions in Section \ref{sec:2}. In Section \ref{sec:3}, we propose {two calibrated optimal decision-making methods, CODA-HO and CODA-HE, for homogeneous and heterogeneous covariates, respectively, and detail their implementation based on the iterative policy tree search algorithm}. All the theoretical properties are established in Section \ref{sec:thms}.
Extensive simulations are conducted to demonstrate the empirical performance of the proposed method in Section \ref{sec:4}, followed by a real application in developing ODR for treating sepsis using the MIMIC-III data as the primary sample and the eICU data as the auxiliary sample in Section \ref{sec:5}. We conclude our paper in Section \ref{sec:6}. The technical proofs are given in the appendix. The source code, implemented in the \textsf{R} language, is publicly available at \url{https://github.com/HengruiCai/CODA}.
\section{Statistical Framework} \label{sec:2}
\subsection{Notation and Formulation}
For simplicity of exposition, we consider a study with two data sources.
Suppose there is a primary sample of interest $E$. Let $X_E=[X_E^{(1)},\cdots, X_E^{(r)}]^\top$ denote $r$-dimensional individual's baseline covariates with the support $\mathbb{X}_E \subset \mathbb{R}^r$, and $A_E\in\{0,1\}$ denote the binary treatment an individual receives. After a treatment $A_E$ is assigned, we first obtain $s$-dimensional intermediate outcomes $M_E=[M_E^{(1)},\cdots, M_E^{(s)}]^\top$ with support $\mathbb{M}_E \subset \mathbb{R}^s$, and then observe the primary outcome of interest $Y_E $ with support $\mathbb{Y}_E \in \mathbb{R}$, the larger the better by convention. Denote $N_E$ as the sample size for the primary sample, which consists of $\{E_i=(X_{E,i},A_{E,i},M_ {E,i},Y_ {E,i}), i = 1, \dots , N_E\}$ independent and identically distributed (I.I.D.) across $i$.
To gain efficiency, we include an auxiliary sample ($U$) available from another source. The auxiliary sample $U$ contains {the same set of baseline covariates $X_U=[X_U^{(1)},\cdots, X_U^{(r)}]^\top$ (with the same ordering as $X_E$ when $r>1$), the treatment $A_U$, and intermediate outcomes $M_U=[M_U^{(1)},\cdots, M_U^{(s)}]^\top$ (with the same ordering as $M_E$ when $s>1$) as in the primary sample}, with support $\mathbb{X}_U \subset \mathbb{R}^r$, $\mathbb{A}_U=\{0,1\}$, and $\mathbb{M}_U \subset \mathbb{R}^s$, respectively. Yet, the outcome of interest is limited in the primary sample and is not available in the auxiliary sample. Let $N_U$ denote the sample size for the I.I.D. auxiliary sample that includes $\{U_i=(X_{U,i},A_{U,i},M_ {U,i}), i = 1, \dots , N_U\}$. Denote $t={N_E/ N_U}$ as the sample ratio between the primary sample and the auxiliary sample, with $0<t<+\infty$. {We allow the distributions of baseline covariates, treatments, and intermediate outcomes to differ between the two samples.}
In the primary sample, define the potential outcomes $Y_E^*(0)$ and $Y_E^*(1)$ as the outcome of interest that would be observed after an individual receiving treatment 0 or 1, respectively. Similarly, we define the potential outcomes $\{M_E^*(0),M_E^*(1)\}$ and $\{M_U^*(0),M_U^*(1)\}$ as intermediate outcomes that would be observed after an individual receiving treatment 0 and 1 for the primary sample and the auxiliary sample, respectively. Define the propensity score function as the conditional probability of receiving treatment 1 given baseline covariates as $x$, denoted as $\pi_E(x)={\mbox{Pr}}(A_E=1|X_E=x)$ for the primary sample and $\pi_U(x)={\mbox{Pr}}(A_U=1|X_U=x)$ for the auxiliary sample.
A decision rule is a deterministic function $d(\cdot)$ that maps covariate space $\mathbb{X}_E$ to the treatment space $\{0,1\}$. Define the potential outcome of interest under $d(\cdot)$ as
\begin{eqnarray*}
Y_E^*(d)=Y_E^*(0) \{1-d(X_E)\}+Y_E^*(1) d(X_E),
\end{eqnarray*}
which would be observed if a randomly chosen individual from the primary sample had received a treatment according to $d(\cdot)$, where we suppress the dependence of $Y_E^*(d)$ on $X_E$. The value function under $d(\cdot)$ is defined as the expectation of the potential outcome of interest over the primary sample as
\begin{equation*}
V(d)={\mathbb{E}} \{Y_E^*(d)\}={\mathbb{E}} [Y_E^*(0) \{1-d(X_E)\}+Y_E^*(1) d(X_E)].
\end{equation*}
As a result, the optimal decision rule (ODR) for the primary outcome of interest is defined as the maximizer of the value function among a class of decision rules $\Pi$, i.e.,
\begin{eqnarray*}
d^{opt} =\argmax_{d \in \Pi} V(d).
\end{eqnarray*}
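To make the value-search formulation concrete, the following minimal sketch estimates $V(d)$ by inverse propensity weighting and maximizes it over one-dimensional threshold rules. All specifics (the simulated outcome model and the known propensity score of $0.5$) are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated primary sample (hypothetical): X ~ U(-1,1), randomized
# treatment with known propensity 0.5, and an outcome whose treatment
# effect changes sign at x = 0.
n = 5000
X = rng.uniform(-1.0, 1.0, n)
A = rng.binomial(1, 0.5, n)
Y = X * (2*A - 1) + rng.normal(0.0, 0.1, n)

def value_ipw(d, X, A, Y, pi=0.5):
    """Inverse-propensity-weighted estimate of the value V(d) = E[Y*(d)]."""
    w = (A == d(X)) / np.where(A == 1, pi, 1.0 - pi)
    return float(np.mean(w * Y))

# Value search over the simple rule class d(x) = 1{x > c}.
thresholds = np.linspace(-1.0, 1.0, 41)
vals = [value_ipw(lambda x, c=c: (x > c).astype(int), X, A, Y)
        for c in thresholds]
c_opt = float(thresholds[int(np.argmax(vals))])
```

In this toy model the true ODR is $d(x)=\mathbb{I}(x>0)$ with value $1/2$, and the search recovers a threshold near zero.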
Similarly, we define the potential intermediate outcomes under $d(\cdot)$ for two samples as
\begin{eqnarray*}
\begin{split}
&M_E^*(d)=M_E^*(0) \{1-d(X_E)\}+M_E^*(1) d(X_E),\\
&M_U^*(d)=M_U^*(0) \{1-d(X_U)\}+M_U^*(1) d(X_U).\\
\end{split}
\end{eqnarray*}
Here, $M_E^*(d)$ and $M_U^*(d)$ are $s\times 1$ vectors if $s>1$.
\subsection{Assumptions on Outcomes}
To identify ODR for the primary outcome of interest from the observed data, as standard in the causal inference literature \citep{rubin1978bayesian}, we make the following assumptions:
\noindent \textbf{(A1)}. Stable Unit Treatment Value Assumption (SUTVA):
\begin{equation*}
\begin{split}
M_E&= A_E M_E^{\star}(1) + (1-A_E)M_E^{\star}(0);\quad \quad Y_E= A_E Y_E^{\star}(1) + (1-A_E)Y_E^{\star}(0); \\
M_U&= A_U M_U^{\star}(1) + (1-A_U)M_U^{\star}(0).
\end{split}
\end{equation*}
\noindent \textbf{(A2)}. Ignorability:
\begin{equation*}
\begin{split}
\{M_E^*(0),M_E^*(1), Y_E^*(0),Y_E^*(1)\}&\independent A_E\mid X_E; \quad \quad \{M_U^*(0),M_U^*(1)\}\independent A_U\mid X_U.\\
\end{split}
\end{equation*}
\noindent \textbf{(A3)}. Positivity: There exist constants $c_{E,1}$, $c_{E,2}$, $c_{U,1}$, $c_{U,2}$ such that with probability 1, $0<c_{E,1}\leq\pi_E(x)\leq c_{E,2}<1$ for all $x \in \mathbb{X}_E$, and $0<c_{U,1}\leq\pi_U(x)\leq c_{U,2}<1$ for all $x \in \mathbb{X}_U$.\\
The above assumptions (A1) to (A3) are standard in the literature on personalized decision-making \citep[see e.g., ][]{zhang2012robust,wang2018quantile,nie2020learning}, to guarantee that the value function of intermediate outcomes in two samples and the value of the outcome of interest in the primary sample are estimable from the observed data. {Here, we assume that given the same set of baseline covariates, the ignorability assumption in (A2) holds for both samples. This is reasonable in the considered MIMIC-III and eICU data application since both samples are for sepsis patients in ICU and usually a common set of risk factors were used for treatment decisions. In Section \ref{sec:6}, we include some discussions of a possible extension to the setting when different sets of covariates are needed for the ignorability to hold in the two samples.}
Next, we make an assumption on the conditional means of intermediate outcomes in two samples to connect different data sources as follows.
\noindent \textbf{(A4)}. Comparable Intermediate Outcomes (CIO) Assumption:
\begin{equation*}
{\mathbb{E}}(M_E|X_E=x, A_E=a)={\mathbb{E}}(M_U|X_U=x,A_U=a), \text{ for all } x\in{\mathbb{X}_E\cup\mathbb{X}_U} \text{ and for all } a\in\{0,1\}.
\end{equation*}
The above assumption states that the conditional means of intermediate outcomes given baseline covariates $x$ and the treatment information $a$ are the same in the two samples, {for all $x$ and $a$ in the union of the supports of the two samples}.
This assumption is the minimum requirement to combine data sources from {heterogeneous studies}.
It is worth mentioning that (A4) automatically holds when the data sources are from the same population, i.e., $\{X_E,A_E,M_E\}$ has the same probability distribution as $\{X_U,A_U,M_U\}$. Assumption (A4) is also testable based on the observed data (see more details in the real data application in Section \ref{sec:5}).
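Since (A4) constrains only conditional means, a simple diagnostic is to fit a working model for ${\mathbb{E}}(M \mid X, A)$ separately in the two samples and compare the fits. The sketch below does this on simulated data in which CIO holds while the covariate distributions differ; the linear working model and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two simulated samples (hypothetical) sharing the same conditional
# mean E[M | X, A] -- the CIO assumption -- while their covariate
# distributions differ.
def draw(n, x_loc):
    X = rng.normal(x_loc, 1.0, n)
    A = rng.binomial(1, 0.5, n)
    M = 1.0 + 0.5*X + 0.8*A + rng.normal(0.0, 0.2, n)
    return X, A, M

Xe, Ae, Me = draw(4000, 0.0)  # primary sample E
Xu, Au, Mu = draw(4000, 1.0)  # auxiliary sample U, shifted covariates

def fit(X, A, M):
    # Least-squares fit of the working model E[M | X, A] = b0 + b1 X + b2 A.
    Z = np.column_stack([np.ones_like(X), X, A])
    return np.linalg.lstsq(Z, M, rcond=None)[0]

b_e, b_u = fit(Xe, Ae, Me), fit(Xu, Au, Mu)
# Under CIO the two fitted conditional-mean models should agree up to
# sampling error; a formal test would compare b_e - b_u against its
# standard error.
```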
\newpage
\subsection{Assumption on Decision Rules}
We consider the class of decision rules $\Pi$ that satisfies the following condition.
\noindent \textbf{(A5).} Vapnik-Chervonenkis (VC) Class: $\Pi$ has a finite VC-dimension and is countable. \\
Assumption (A5) is commonly used in statistical learning and empirical process theory \citep[see e.g., ][]{kitagawa2018should,rai2018statistical}. When the baseline covariates have only a finite number of support points, any $\Pi$ satisfies (A5). When the support of baseline covariates is continuous, assumption (A5) requires that $\Pi$ is smaller than the class of all measurable sets and can be well approximated by countably many elements. The following examples give popular classes of decision rules that satisfy (A5).
\textbf{Class I: Finite-depth decision trees.} Following the definition in
\cite{athey2017efficient}, for any $L\geq 1$, a depth-$L$ decision tree $DT_L$ is specified via a splitting variable $X^{(j)} \in \{X^{(1)},\cdots,X^{(r)}\}$, a threshold $\Delta_L \in \mathbb{R}$, and two depth-$(L - 1)$ decision trees $DT_{L-1, c_1}$ and $DT_{L-1, c_2}$, such that $DT_L(x) = DT_{L-1, c_1}(x)$ if $x^{(j)} \leq \Delta_L$, and $DT_L(x) = DT_{L-1, c_2}(x)$ otherwise. The class of depth-$L$ decision trees over $\mathbb{R}^r$ has a VC dimension bounded by the order of $O\{2^L \log(r)\}$.
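The recursive construction above can be sketched in a few lines of code; the helper below is purely illustrative and is not part of the \proglang{R} implementation used later.

```python
def tree_rule(j, threshold, left, right):
    """Depth-L decision tree as in the recursive definition above:
    split on coordinate j at `threshold`; `left` and `right` are either
    fixed actions in {0, 1} (depth-0 trees) or further tree rules."""
    def d(x):
        branch = left if x[j] <= threshold else right
        return branch(x) if callable(branch) else branch
    return d

# A depth-2 tree: treat if x[0] <= 0.5; otherwise split again on x[1].
d = tree_rule(0, 0.5, 1, tree_rule(1, 0.0, 0, 1))
```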
\textbf{Class II: Parametric decision rules.}
Suppose the decision rule $d(\cdot)$ relies on a model parameter $\beta$, denoted as $d(\cdot)\equiv d(\cdot;\beta)$. We use a shorthand to write $V(d)$ as $V(\beta)$, and define
$
\beta_0 =\argmax_{\beta} V(\beta).
$
Thus, the value for the primary outcome of interest under the true ODR $d(\cdot;\beta_0)$ is defined as $V(\beta_0)$. Suppose the decision rule takes the form $d(X;\beta) \equiv \mathbb{I}\{g(X)^\top \beta > 0\}$, where $g(\cdot)$ is an unknown function and $\mathbb{I}(\cdot)$ is the indicator function. We use $\phi_X(\cdot)$ to denote a set of basis functions of baseline covariates with length $q$, which is rich enough to approximate the underlying function $g(\cdot)$. Thus, the decision rule is found within a class of $\mathbb{I}\{\phi_X(X)^\top \beta > 0\}$, denoted as the class $\Pi_2$. Here, for notational simplicity, we include 1 in $\phi_X(\cdot)$ so that the parameter $\beta \in \mathbb{R}^{q+1}$.
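A member of the class $\Pi_2$ can be sketched as follows, using the illustrative basis $\phi_X(x) = (1, x^\top)^\top$; any richer basis would work the same way.

```python
def linear_rule(beta):
    """Parametric rule d(x; beta) = I{phi(x)^T beta > 0} with the
    illustrative basis phi(x) = (1, x): beta[0] is the intercept."""
    def d(x):
        score = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
        return 1 if score > 0 else 0
    return d

# Example rule: treat exactly when x[0] > 1.
d = linear_rule([-1.0, 1.0])
```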
{In this paper, to handle the large-scale datasets (such as the motivated MIMIC-III and eICU data) and obtain interpretable decision rules, we consider a class of decision trees with finite-depth $L_n \leq \kappa \log_2(n)$ for some $\kappa < 1/2$, denoted as $\Pi_1$, and search ODR within the class $\Pi_1$.}
\section{Method}\label{sec:3}
{In this section, we formally present the CODA method.
Since the baseline covariates are observed in both primary and auxiliary samples, it is feasible to check if the distributions of baseline covariates from different samples are homogeneous or heterogeneous. In the following, we establish CODA for homogeneous baseline covariates in Section \ref{sec:CODA_homo} and for heterogeneous baseline covariates in Section \ref{sec:CODA_hete}, respectively.
We provide the variance estimation in Section \ref{sec:est_Eho}, followed by the proposed iterative policy tree search algorithm in Section \ref{sec:impl}. }
\subsection{Calibrated Optimal Decision Making for Homogeneous Baseline Covariates}\label{sec:CODA_homo}
{In this section, we focus on the case where the distributions of baseline covariates in different samples are the same, i.e., $X_E \sim X_U$. Consider} the doubly robust (DR) estimators of the value functions \citep{zhang2012robust} for the outcome of interest in the primary sample and intermediate outcomes in the two samples. Specifically, the DR estimator for the outcome of interest in the primary sample is given by
\begin{equation*}
\begin{split}
\widehat{V}_{E}(d)= {1\over N_E}\sum_{i=1}^{N_E} {\mathbb{I}\{A_{E,i}=d(X_{E,i})\} [Y_{E,i} - \widehat{\mu}_E\{X_{E,i},d(X_{E,i})\} ]\over{A_{E,i} \widehat{\pi}_E (X_{E,i})+(1-A_{E,i})\{1-\widehat{\pi}_E(X_{E,i})\}}} + \widehat{\mu}_E\{X_{E,i},d(X_{E,i})\} ,
\end{split}
\end{equation*}
where $\widehat{\pi}_E$ is the estimator of the propensity score function, and $\widehat{\mu}_E(x,a)$ is the estimated conditional mean function for ${\mathbb{E}} (Y_E|X_E=x,A_E=a)$, in the primary sample. Following arguments in \cite{zhang2012robust,luedtke2016statistical,kitagawa2018should,rai2018statistical},
we have the asymptotic normality for the value estimator as
\begin{equation}\label{lem1}
\sqrt{N_E} \Big\{\widehat{V}_{E}(d) - V(d) \Big\} \overset{D}{\longrightarrow} N\Big\{0, \sigma_Y^2(d)\Big\},
\end{equation}
where $\sigma_Y^2(d)$ is the asymptotic variance given any $d(\cdot)$.
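For intuition, a minimal sketch of the DR estimator $\widehat{V}_{E}(d)$ for a scalar outcome is given below; the fitted functions \code{pi\_hat} and \code{mu\_hat} are assumed to be supplied externally (e.g., by any parametric or machine-learning fit) and are not prescribed by the text.

```python
def dr_value(X, A, Y, d, pi_hat, mu_hat):
    """Doubly robust (AIPW) estimate of V(d) = E{Y*(d)}.

    X: list of covariates; A: list of 0/1 treatments; Y: outcomes.
    d: decision rule x -> {0, 1}; pi_hat: x -> P(A=1|x);
    mu_hat: (x, a) -> E[Y|x, a].  pi_hat and mu_hat are assumed to be
    fitted elsewhere."""
    total = 0.0
    for x, a, y in zip(X, A, Y):
        dx = d(x)
        p = pi_hat(x) if dx == 1 else 1.0 - pi_hat(x)  # prob of following d at x
        m = mu_hat(x, dx)
        total += (1.0 if a == dx else 0.0) * (y - m) / p + m
    return total / len(Y)
```

With a correctly specified outcome model, the augmentation term cancels the inverse-propensity noise exactly on a toy randomized sample.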
{Next,
we introduce the calibrated value estimator.
Based on assumption (A4) and
$X_E\sim X_U $, we can establish
the following lemma.}
{\begin{lemma}\label{lem_cio}
Under assumptions (A1) - (A4) and $X_E\sim X_U $, we have
\begin{equation*}
\begin{split}
{\mathbb{E}} \{M_E^*(d)\}&={\mathbb{E}} [M_E^*(0) \{1-d(X_E)\}+M_E^*(1) d(X_E)] \\
&= {\mathbb{E}} [M_U^*(0) \{1-d(X_U)\}+M_U^*(1) d(X_U)] = {\mathbb{E}} \{M_U^*(d)\}.
\end{split}
\end{equation*}
\end{lemma}
The detailed proof of Lemma \ref{lem_cio} is provided in the appendix. Based on Lemma \ref{lem_cio}, the value functions for the intermediate potential outcomes under $d(\cdot)$ in the two samples are the same, that is,
\begin{equation*}
W_E(d)={\mathbb{E}} \{M_E^*(d)\}={\mathbb{E}} \{M_U^*(d)\}= W_U(d)\equiv W(d),
\end{equation*}
where $W_E(d)$, $W_U(d)$, $W(d)$ are $s\times 1$ value vectors when $s>1$. This motivates us to derive the calibrated value estimator by projecting the value estimator of the outcome of interest in the primary sample on the differences of the value estimators of intermediate outcomes in the two samples.}
Following assumption (A4), we define the conditional mean of intermediate outcomes as $\theta(x,a) \equiv {\mathbb{E}}(M_E|X_E=x, A_E=a)={\mathbb{E}}(M_U|X_U=x,A_U=a)$, which is an $s\times 1$ vector. Then, we have the DR value estimators for intermediate outcomes in the two samples as
\begin{equation*}
\begin{split}
\widehat{W}_{E}(d)= {1\over N_E}\sum_{i=1}^{N_E} {\mathbb{I}\{A_{E,i}=d(X_{E,i})\}[M_{E,i} - \widehat{\theta} \{X_{E,i},d(X_{E,i})\} ]\over{A_{E,i} \widehat{\pi}_E (X_{E,i})+(1-A_{E,i})\{1-\widehat{\pi}_E(X_{E,i})\}}} + \widehat{\theta} \{X_{E,i},d(X_{E,i})\} ,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\widehat{W}_{U}(d)= {1\over N_U}\sum_{i=1}^{N_U} {\mathbb{I}\{A_{U,i}=d(X_{U,i})\} [M_{U,i} - \widehat{\theta} \{X_{U,i},d(X_{U,i})\} ]\over{A_{U,i} \widehat{\pi}_U (X_{U,i})+(1-A_{U,i})\{1-\widehat{\pi}_U(X_{U,i})\}}} + \widehat{\theta}\{X_{U,i},d(X_{U,i})\} ,
\end{split}
\end{equation*}
where $\widehat{W}_{E}(d)$ and $\widehat{W}_{U}(d)$ are $s\times 1$ vectors, $\widehat{\pi}_U $ is the estimator of the propensity score in the auxiliary sample, and $\widehat{\theta}(x,a)$ is the estimated conditional mean function for $\theta(x,a)$ based on two samples combined under assumption (A4). Similarly,
we have
\begin{equation}\label{lem2}
\sqrt{N_E} \Big\{\widehat{W}_{E}(d) - W(d) \Big\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_E(d)\Big\}, \sqrt{N_U} \Big\{\widehat{W}_{U}(d) - W(d) \Big\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_U(d)\Big\},
\end{equation}
where $\boldsymbol{0}_{s}$ is the $s$-dimensional zero vector, $\Sigma_E$ and $\Sigma_U$ are $s\times s$ matrices representing the asymptotic covariance matrices for the two samples, and $ N_s(\cdot,\cdot)$ is the $s$-dimensional multivariate normal distribution. {Note that by Lemma \ref{lem_cio}, both $\widehat{W}_{E}(d)$ and $\widehat{W}_{U}(d)$ converge to the same value function $W(d) $.}
The following lemma establishes the asymptotic distribution of the differences of the value estimators of
intermediate outcomes in the two samples, under some technical conditions (A6) and (A7) detailed in Section \ref{sec:thms}.
\textcolor{black}{ \begin{lemma}\label{lem3}
Assume conditions (A1)-(A6) and (A7. i, ii, and iii) hold. Under $X_E\sim X_U $, we
have
\begin{equation*}
\sqrt{N_E} \Big\{\widehat{W}_{E}(d) - \widehat{W}_{U}(d) \Big\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_M(d)\Big\},
\end{equation*}
where $\Sigma_M(d) = \Sigma_E(d)+ T\Sigma_U(d)$ is an $s\times s$ asymptotic covariance matrix, and $T\equiv \lim_{N_E\to +\infty} t \in(0, +\infty)$.
\end{lemma}}
The key ingredient in the proof of Lemma \ref{lem3} is the fact that the two samples ($E$ and $U$) are collected from two different independent sources. The proof of Lemma \ref{lem3} is given in the appendix. Based on Lemma \ref{lem3}, the asymptotic covariance matrix $\Sigma_M(d)$ is a weighted sum of the asymptotic covariances from the two samples, where the weight is determined by the limiting sample ratio between the two samples.
Based on the results established in \eqref{lem1} and Lemma \ref{lem3}, we have the following asymptotic joint distribution
\begin{equation*
\begin{split}
&\sqrt{N_E} \begin{bmatrix}
\widehat{V}_{E}(d) - V(d) \\
\widehat{W}_{E}(d) - \widehat{W}_{U}(d)
\end{bmatrix}
\overset{D}{\longrightarrow} N_{s+1} \Bigg\{\boldsymbol{0}_{s+1}, \begin{bmatrix}
\sigma_Y^2(d) , \boldsymbol{\rho}(d)^\top\\
\boldsymbol{\rho}(d), \Sigma_M(d)
\end{bmatrix}\Bigg\}, \text{ for all } d(\cdot),
\end{split}
\end{equation*}
where $\boldsymbol{\rho}(d)$ is the $s\times 1$ asymptotic covariance vector between the value estimator of the outcome of interest in the primary sample and the differences of the value estimators of intermediate outcomes between two samples.
It follows that the conditional distribution of $\sqrt{N_E}\{\widehat{V}_{E}(d) - V(d)\}$ given the estimated value differences of intermediate outcomes is given by
\begin{equation}\label{eqn:proj}
\begin{split}
&\sqrt{N_E}\Big[\widehat{V}_{E}(d) - V(d) - \boldsymbol{\rho}(d)^\top \Sigma_M^{-1}(d) \{\widehat{W}_{E}(d) - \widehat{W}_{U}(d) \}\Big]\Big| \sqrt{N_E}\Big\{\widehat{W}_{E}(d) - \widehat{W}_{U}(d) \Big\} \\
& \overset{D}{\longrightarrow} N\Big\{0, \sigma_Y^2(d) - \boldsymbol{\rho}(d)^\top \Sigma_M^{-1} (d) \boldsymbol{\rho}(d) \Big\} , \quad \text{ for all } d(\cdot).
\end{split}
\end{equation}
It can be observed from \eqref{eqn:proj} that by projecting the value estimator of the outcome of interest on the estimated value differences of intermediate outcomes, we can achieve a smaller asymptotic variance. This result motivates us to construct the calibrated value estimator of $V(d) $ as
\begin{equation}\label{value_CODA}
\widehat{V}(d) = \widehat{V}_{E}(d) - \widehat{\boldsymbol{\rho}}(d)^\top \widehat{\Sigma}_M^{-1}(d) \{\widehat{W}_{E}(d) - \widehat{W}_{U}(d) \},
\end{equation}
where $ \widehat{\boldsymbol{\rho}}(d) $ is the estimator for $\boldsymbol{\rho}(d)$, and $\widehat{\Sigma}_M(d)$ is the estimator for $\Sigma_M(d)$.
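In the scalar case $s = 1$, the calibration step in \eqref{value_CODA} and the resulting variance reduction in \eqref{eqn:proj} reduce to two one-line formulas, sketched below with all five inputs assumed to be estimated elsewhere.

```python
def calibrated_value(v_E, w_E, w_U, rho, Sigma_M):
    """Calibrated value estimate (s = 1): subtract the projection of the
    primary-sample value estimate on the intermediate-outcome value
    difference between the two samples."""
    return v_E - (rho / Sigma_M) * (w_E - w_U)

def calibrated_variance(sigma2_Y, rho, Sigma_M):
    """Asymptotic variance after calibration: sigma_Y^2 - rho^2 / Sigma_M."""
    return sigma2_Y - rho ** 2 / Sigma_M
```

Since $\rho^2/\Sigma_M \geq 0$, the calibrated variance is never larger than $\sigma_Y^2(d)$, matching the efficiency gain claimed in \eqref{eqn:proj}.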
Finally, the optimal decision rule under CODA for $X_E \sim X_U$, namely CODA-HO, is to maximize the calibrated value estimator within a pre-specified class of decision rules $\Pi$ as
\begin{eqnarray*}
\widehat{d} =\argmax_{d \in \Pi} \widehat{V}(d),
\end{eqnarray*}
with the corresponding estimated value function as $\widehat{V}(\widehat{d})$.
{\subsection{Calibrated Optimal Decision Making for Heterogeneous Baseline Covariates}\label{sec:CODA_hete}
We next consider a more challenging case where the distributions of baseline covariates in the primary sample and the auxiliary sample are distinct, i.e., $X_E\not \sim X_U$. The result of Lemma \ref{lem_cio} may not hold when the joint density of $X_E$ differs from the joint density of $X_U$.
As such, we need to construct a new estimator of modified value differences such that it converges to a normal distribution with a zero mean even under $X_E\not \sim X_U$.
To this end, we consider rebalancing the value estimators of common intermediate outcomes in two samples based on their posterior sampling probability.}
{Specifically, we combine two samples together and denote the joint dataset as
\begin{equation*}
\{X_i, A_i, M_i, R_i, R_iY_i\}_{i=1,\cdots, n},\text{ for } n = N_E+N_U,
\end{equation*}
where $R_i =1$ if subject $i$ is from the primary sample and $R_i =0$ if subject $i$ is from the auxiliary sample. Here, the distributions of baseline covariates, treatments, and intermediate outcomes are allowed to be different across different sub-samples, which distinguishes our work from the homogeneous setting considered in the current literature \citep[see e.g., ][]{yang2019combining,athey2020combining,kallus2020role}.}
{To address the heterogeneous baseline covariates in two samples, also known as the covariate shift problem, a feasible way is to estimate the density functions of baseline covariates in the two samples and adjust the corresponding estimator by importance weights \citep[see e.g., ][]{sugiyama2007covariate,kallus2021more}. However, these methods cannot handle a relatively large number of baseline covariates due to the estimation of density functions, and it is hard to develop a simple inference procedure for them. Instead, we use a projection approach similar to that developed in Section \ref{sec:CODA_homo} to construct a new calibrated estimator through rebalancing, which handles the heterogeneous baseline covariates and gains efficiency.} To be specific, define the joint density of $\{X_i,A_i,M_i\}$ given $R_i$ as
\begin{equation*}
\begin{split}
f(X_i=x,A_i=a,M_i=m|R_i=1) \equiv f_E(x,a,m),\\f(X_i=x,A_i=a,M_i=m|R_i=0) \equiv f_U(x,a,m),
\end{split}
\end{equation*}
where $f_E(x,a,m)$ and $f_U(x,a,m)$ are the joint density function of $\{X_E,A_E,M_E\}$ in the primary sample and the joint density function of $\{X_U,A_U,M_U\}$ in the auxiliary sample, respectively.
{By Bayes' theorem, we have the posterior sampling probability as
\begin{equation}\label{sampling_prob}
P(R_i=1|X_i=x,A_i=a,M_i=m) = {P(R_i=1) f_E(x,a,m) \over P(R_i=1) f_E(x,a,m) + P(R_i=0) f_U(x,a,m)}.
\end{equation}
Here, we have $
P(R_i=1) = \lim_{N_E\to \infty} N_E/(N_E+N_U) = \lim_{N_E\to \infty} t/(1+t) = T/(1+T).$
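The Bayes formula in \eqref{sampling_prob} can be sketched directly; in practice $r(x,a,m)$ is instead fitted as a classifier of $R$ on $(X,A,M)$, and the density values below are assumed given for illustration only.

```python
def posterior_sampling_prob(f_E, f_U, t):
    """Posterior probability that an observation belongs to the primary
    sample, via Bayes' theorem.  f_E, f_U: joint density values
    f_E(x, a, m) and f_U(x, a, m) at the observed point (assumed given);
    t = N_E / N_U, so that P(R=1) = t / (1 + t)."""
    p1 = t / (1.0 + t)
    num = p1 * f_E
    return num / (num + (1.0 - p1) * f_U)
```

When the two joint densities agree, the posterior probability reduces to the prior sampling fraction $t/(1+t)$, which is the homogeneous case of Section \ref{sec:CODA_homo}.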
Based on \eqref{sampling_prob}, we can rebalance the value estimators of common intermediate outcomes in each sample to construct a new mean-zero estimator. To this end, we estimate the posterior sampling probability $r(x,a,m) \equiv P(R_i=1|X_i=x,A_i=a,M_i=m)$. Let $\widehat{r}(x,a,m)$ denote the resulting estimator. In addition, we estimate the propensity score function $P(A_i=1|X_i)$ in the joint sample, denoted as $\widehat{\pi}(X_i)$. For each sub-sample, we have new DR estimators for intermediate outcomes that take the sampling probabilities into account as
\begin{equation*}
\begin{split}
\widehat{W}_{1}(d)= {1\over n}\sum_{i=1}^{n} {R_i\over\widehat{r}\{X_i,d(X_i),M_i\} }{\mathbb{I}\{A_{i}=d(X_{i})\} [M_{i} - \widehat{\theta} \{X_{i},d(X_{i})\} ]\over{A_{i} \widehat{\pi}(X_{i})+(1-A_{i})\{1-\widehat{\pi}(X_{i})\}}} + \widehat{\theta} \{X_{i},d(X_{i})\} ,\\
\widehat{W}_{0}(d)= {1\over n}\sum_{i=1}^{n} {(1-R_i)\over1- \widehat{r}\{X_i,d(X_i),M_i\} }{\mathbb{I}\{A_{i}=d(X_{i})\}[M_{i} - \widehat{\theta} \{X_{i},d(X_{i})\} ]\over{A_{i} \widehat{\pi}(X_{i})+(1-A_{i})\{1-\widehat{\pi}(X_{i})\}}} + \widehat{\theta} \{X_{i},d(X_{i})\} ,
\end{split}
\end{equation*}
where $\widehat{W}_{1}(d)$ and $\widehat{W}_{0}(d)$ are $s\times 1$ vectors, and $\widehat{\theta}(x,a)$ is the estimated conditional mean function for $\theta(x,a)$ as used in Section \ref{sec:CODA_homo}. It can be shown that both $\widehat{W}_{1}(d)$ and $\widehat{W}_{0}(d)$ have asymptotic normality and converge to the same mean,
as stated in the following lemma, under some technical conditions (A6) and (A7) detailed in Section \ref{sec:thms}.
\begin{lemma}\label{lem2_hete}
Assume conditions (A1)-(A6) and (A7. i, iv, and v) hold.
We have
\begin{equation*}
\begin{split}
\sqrt{n} \Big\{\widehat{W}_{1}(d) - W^*(d) \Big\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_{1}(d)\Big\}, \text{ and }
\sqrt{n} \Big\{\widehat{W}_{0}(d) - W^*(d) \Big\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_{0}(d)\Big\},
\end{split}
\end{equation*}
where $\Sigma_{1}$ and $\Sigma_{0}$ are $s\times s$ asymptotic covariance matrices for each sub-sample, and
\begin{equation*}
W^*(d) = \int E\{M|d(X),X\}\{P(R=1) f_E(X) + P(R=0) f_U(X)\} dX,
\end{equation*}
where $ f_E(X)$ is the marginal density of baseline covariates in the primary sample and $ f_U(X)$ is the marginal density of baseline covariates in the auxiliary sample.
\end{lemma}
Here, to show Lemma \ref{lem2_hete}, we only require $E(M|X,A, R=1) = E(M|X,A,R=0)$, as indicated by the CIO assumption. This is checkable since $(X,A, M)$ are observed in both samples. In contrast, current methods handling multiple datasets \citep[see e.g., ][]{kallus2020role,athey2020combining} require the missing at random assumption such that $R$ is independent of $Y$ given $(X,A,M)$. This implies $E(Y|X,A,M, R=1) = E(Y|X,A,M,R=0)$. This assumption is not testable due to the unobserved outcome in the auxiliary sample. Therefore, our method is built upon a more practical assumption.} The proof of Lemma \ref{lem2_hete} can be found in the appendix.
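A minimal sketch of the rebalanced DR estimators $\widehat{W}_{1}(d)$ and $\widehat{W}_{0}(d)$ for $s = 1$ follows; \code{pi\_hat}, \code{theta\_hat}, and \code{r\_hat} are user-supplied fitted functions, as in the text.

```python
def rebalanced_w(X, A, M, R, d, pi_hat, theta_hat, r_hat, target=1):
    """Rebalanced DR estimate (s = 1) over the joint sample of size n:
    target=1 returns the W_1(d) analogue, target=0 the W_0(d) analogue.
    pi_hat: x -> P(A=1|x); theta_hat: (x, a) -> E[M|x, a];
    r_hat: (x, a, m) -> posterior sampling probability."""
    total = 0.0
    for x, a, m, r in zip(X, A, M, R):
        dx = d(x)
        p = pi_hat(x) if dx == 1 else 1.0 - pi_hat(x)
        rprob = r_hat(x, dx, m)
        w = r / rprob if target == 1 else (1 - r) / (1.0 - rprob)
        total += w * (1.0 if a == dx else 0.0) * (m - theta_hat(x, dx)) / p \
                 + theta_hat(x, dx)
    return total / len(M)
```

When the outcome model \code{theta\_hat} fits the intermediate outcomes exactly, the weighted residual term vanishes and both sub-sample estimates coincide, illustrating the mean-zero difference used in Lemma \ref{lem3_hete}.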
{It is immediate from Lemma \ref{lem2_hete} that the difference of rebalanced value estimators of intermediate outcomes, i.e., $\widehat{W}_{1}(d) - \widehat{W}_{0}(d)$, is a mean zero estimator. The following lemma establishes the asymptotic normality of the new estimator.
\begin{lemma}\label{lem3_hete}
Suppose the conditions in Lemma \ref{lem2_hete} hold.
We have
\begin{equation*}
\sqrt{n} \{\widehat{W}_{1}(d) - \widehat{W}_{0}(d)\} \overset{D}{\longrightarrow} N_s\Big\{\boldsymbol{0}_{s}, \Sigma_{R}(d)\Big\},
\end{equation*}
where $\Sigma_{R}(d) $ is an $s\times s$ asymptotic covariance matrix.
\end{lemma} }
Based on the results established in \eqref{lem1} and Lemma \ref{lem3_hete}, we have the following asymptotic joint distribution
\begin{equation}\label{eqn:joint_hete}
\begin{split}
&\sqrt{N_E} \begin{bmatrix}
\widehat{V}_{E}(d) - V(d) \\
\sqrt{{n\over N_E}}\{\widehat{W}_{1}(d) - \widehat{W}_{0}(d)\} \end{bmatrix}
\overset{D}{\longrightarrow} N_{s+1} \Bigg\{\boldsymbol{0}_{s+1}, \begin{bmatrix}
\sigma_Y^2(d) , \boldsymbol{\rho}_{R}(d)^\top\\
\boldsymbol{\rho}_{R}(d), \Sigma_{R}(d)
\end{bmatrix}\Bigg\}, \text{ for all } d(\cdot),
\end{split}
\end{equation}
where $\boldsymbol{\rho}_{R}(d)$ is the $s\times 1$ asymptotic covariance vector between the value estimator of the outcome of interest in the primary sample and the new rebalanced value difference estimator of intermediate outcomes between the two samples. It follows that the conditional distribution of $\sqrt{N_E}\{\widehat{V}_{E}(d) - V(d)\}$ given the estimated rebalanced value differences of intermediate outcomes is given by
\begin{equation*}
\begin{split}
&\sqrt{N_E}\Big[\widehat{V}_{E}(d) - V(d) - \sqrt{n/N_E}\boldsymbol{\rho}_{R}(d)^\top \Sigma_{R}^{-1}(d) \{\widehat{W}_{1}(d) - \widehat{W}_{0}(d)\} \Big]\Big| \sqrt{n}\Big\{\widehat{W}_{1}(d) - \widehat{W}_{0}(d) \Big\}\\\nonumber
& \overset{D}{\longrightarrow} N\Big\{0, \sigma_Y^2(d) - \boldsymbol{\rho}_{R}(d)^\top \Sigma_{R}^{-1} (d) \boldsymbol{\rho}_{R}(d) \Big\} , \text{ for all } d(\cdot).
\end{split}
\end{equation*}
This yields the calibrated value estimator of $V(d) $ under heterogeneous baseline covariates
\begin{equation}\label{value_CODA_hete}
\widehat{V}_{R}(d) = \widehat{V}_{E}(d) - \sqrt{n/N_E} \widehat{\boldsymbol{\rho}}_{R}(d)^\top \widehat{\Sigma}_{R}^{-1}(d) \{\widehat{W}_{1}(d) - \widehat{W}_{0}(d)\},
\end{equation}
where $ \widehat{\boldsymbol{\rho}}_{R}(d) $ is the estimator for $\boldsymbol{\rho}_{R}(d)$, and $\widehat{\Sigma}_{R}(d)$ is the estimator for $\Sigma_{R}(d)$. Note that under the homogeneous case where $f_E(x,a,m) = f_U(x,a,m)$, according to \eqref{sampling_prob}, we have $P(R_i=1|X_i=x,A_i=a,M_i=m) = P(R_i=1)$. Then the above projection estimator will reduce to the estimator considered in Section \ref{sec:CODA_homo}.}
{Therefore, the optimal decision rule under CODA for $X_E\not \sim X_U$, namely CODA-HE, is to maximize the new calibrated value estimator $ \widehat{V}_{R}(d)$ within a pre-specified class of decision rules $\Pi$ as
\begin{eqnarray*}
\widehat{d}_{R} =\argmax_{d \in \Pi} \widehat{V}_{R}(d),
\end{eqnarray*}
with the corresponding estimated value function as $\widehat{V}_{R}(\widehat{d}_{R})$.}
\subsection{Estimation on Variances}\label{sec:est_Eho}
{In this section, we present the estimators for ${\sigma}_Y^2 $, $\boldsymbol{\rho}$, $\boldsymbol{\rho}_{R}$, $\Sigma_M $, and $\Sigma_{R} $. }
Recall that $\widehat{\pi}_E$, $\widehat{\pi}_U$, {$\widehat{\pi}$, $ \widehat{\mu}_E $, $\widehat{\theta}$, and $\widehat{ r}$ are estimators for the propensity score functions $ \pi_E $, $\pi_U $, and $\pi$, the conditional mean functions $ \mu_E$ and $\theta$, and the posterior sampling probability $ r $}, respectively, using any parametric or nonparametric models such as random forests or deep learning. Our theoretical results still hold with these nonparametric estimators as long as the estimators have the desired convergence rates (see results established in \citet{wager2018estimation,farrell2021deep}). To introduce the variance estimators, we first define the value functions at the individual level. Given a decision rule $d(\cdot)$, define the value for the $i$-th individual in terms of the outcome of interest as
\begin{equation*}
\widehat{v}^{(i)}_E(d) := {\mathbb{I}\{A_{E,i}=d(X_{E,i})\} [Y_{E,i} - \widehat{\mu}_E\{X_{E,i},d(X_{E,i})\} ] \over{A_{E,i} \widehat{\pi}_E (X_{E,i})+(1-A_{E,i})\{1-\widehat{\pi}_E(X_{E,i})\}}} + \widehat{\mu}_E\{X_{E,i},d(X_{E,i})\},
\end{equation*}
in the primary sample, for $i \in \{1,\cdots ,N_E\}$. Similarly, the values for the $i$-th individual in terms of intermediate outcomes are
\begin{equation*}
\widehat{\boldsymbol{w}}^{(i)}_E(d) := {\mathbb{I}\{A_{E,i}=d(X_{E,i})\} [M_{E,i} - \widehat{\theta} \{X_{E,i},d(X_{E,i})\} ]\over{A_{E,i} \widehat{\pi}_E (X_{E,i})+(1-A_{E,i})\{1-\widehat{\pi}_E(X_{E,i})\}}} + \widehat{\theta} \{X_{E,i},d(X_{E,i})\} ,
\end{equation*}
in the primary sample for $i \in \{1,\cdots ,N_E\}$, and
\begin{equation*}
\widehat{\boldsymbol{w}}^{(i)}_U(d) := {\mathbb{I}\{A_{U,i}=d(X_{U,i})\} [M_{U,i} - \widehat{\theta} \{X_{U,i},d(X_{U,i})\} ]\over{A_{U,i} \widehat{\pi}_U (X_{U,i})+(1-A_{U,i})\{1-\widehat{\pi}_U(X_{U,i})\}}} + \widehat{\theta} \{X_{U,i},d(X_{U,i})\},
\end{equation*}
in the auxiliary sample for $i \in \{1,\cdots ,N_U\}$, where $\widehat{\boldsymbol{w}}^{(i)}_E(d)$ and $\widehat{\boldsymbol{w}}^{(i)}_U(d)$ are $s\times 1$ vectors.
Following the results in \eqref{lem1}, \eqref{lem2} and Lemma \ref{lem3}, we propose to estimate ${\sigma}_Y^2 (\cdot)$, $\boldsymbol{\rho}(\cdot)$ and $\Sigma_M (\cdot)$ by
\begin{eqnarray}\label{v_sd_est}
&& \widehat{\sigma}_Y^2 (d)={1\over N_E}\sum_{i=1}^{N_E}\{\widehat{v}_E^{(i)}(d) - \widehat{V}_E(d)\}^2,\\\nonumbe
&&\widehat{\boldsymbol{\rho}}(d) = {1\over N_E}\sum_{i=1}^{N_E} \Big\{\widehat{v}_E^{(i)}(d) -\widehat{V}_E(d) \Big\} \Big\{\widehat{\boldsymbol{w}}^{(i)}_E(d) - \widehat{W}_E(d) \Big\}, \\\nonumbe
&& \widehat{\Sigma}_M(d) ={1\over N_E}\sum_{i=1}^{N_E} \Big\{\widehat{\boldsymbol{w}}^{(i)}_E(d) -\widehat{W}_E(d) \Big\}{\Big\{\widehat{\boldsymbol{w}}^{(i)}_E(d) - \widehat{W}_E(d)\Big\}}^\top \\\nonumber
&&~~~~~~~~~~~~~+ t {1\over N_U}\sum_{i=1}^{N_U} \Big\{\widehat{\boldsymbol{w}}^{(i)}_U(d) -\widehat{W}_U(d) \Big\}{\Big\{\widehat{\boldsymbol{w}}^{(i)}_U(d) - \widehat{W}_U(d)\Big\}}^\top.
\end{eqnarray}
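The plug-in estimators in \eqref{v_sd_est} can be sketched for $s = 1$ as simple empirical moments of the individual-level values; the inputs are assumed to be computed from the formulas above.

```python
def variance_estimates(v_ind_E, w_ind_E, w_ind_U, t):
    """Plug-in estimates of sigma_Y^2(d), rho(d), and Sigma_M(d) for
    s = 1, from individual-level value contributions:
    v_ind_E, w_ind_E over the primary sample; w_ind_U over the auxiliary
    sample; t = N_E / N_U is the sample ratio."""
    nE, nU = len(v_ind_E), len(w_ind_U)
    V = sum(v_ind_E) / nE
    WE = sum(w_ind_E) / nE
    WU = sum(w_ind_U) / nU
    sigma2 = sum((v - V) ** 2 for v in v_ind_E) / nE
    rho = sum((v - V) * (w - WE) for v, w in zip(v_ind_E, w_ind_E)) / nE
    Sigma_M = (sum((w - WE) ** 2 for w in w_ind_E) / nE
               + t * sum((w - WU) ** 2 for w in w_ind_U) / nU)
    return sigma2, rho, Sigma_M
```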
{Similarly, based on the results in \eqref{eqn:joint_hete} and Lemma \ref{lem3_hete}, we define the rebalanced value for the $i$-th individual in terms of intermediate outcomes as
\begin{equation*}
\begin{split}
\widehat{\boldsymbol{w}}^{(i)}_1(d) := {R_i\over\widehat{r}\{X_i,d(X_i),M_i\} }{\mathbb{I}\{A_{i}=d(X_{i})\}[M_{i} - \widehat{\theta} \{X_{i},d(X_{i})\} ]\over{A_{i} \widehat{\pi}(X_{i})+(1-A_{i})\{1-\widehat{\pi}(X_{i})\}}} + \widehat{\theta} \{X_{i},d(X_{i})\},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\widehat{\boldsymbol{w}}^{(i)}_0(d) := {(1-R_i)\over1- \widehat{r}\{X_i,d(X_i),M_i\} }{\mathbb{I}\{A_{i}=d(X_{i})\} [M_{i} - \widehat{\theta} \{X_{i},d(X_{i})\} ]\over{A_{i} \widehat{\pi}(X_{i})+(1-A_{i})\{1-\widehat{\pi}(X_{i})\}}} + \widehat{\theta} \{X_{i},d(X_{i})\},
\end{split}
\end{equation*}
where $\widehat{\boldsymbol{w}}^{(i)}_0(d)$ and $\widehat{\boldsymbol{w}}^{(i)}_1(d)$ are $s\times 1$ vectors, for $i=1,\cdots, N_E+N_U$. Also, we define the correlated part for the $i$-th individual in terms of intermediate outcomes as
\begin{equation*}
\widehat{\boldsymbol{\psi}}^{(i)}(d) := {1\over\widehat{r}\{X_i,d(X_i),M_i\} }{\mathbb{I}\{A_{i}=d(X_{i})\}\over{A_{i} \widehat{\pi}(X_{i})+(1-A_{i})\{1-\widehat{\pi}(X_{i})\}}} [M_{i} - \widehat{\theta} \{X_{i},d(X_{i})\} ],
\end{equation*}
where $\widehat{\boldsymbol{\psi}}^{(i)}(d)$ is $s\times 1$ vectors, for $i=1,\cdots, N_E$. We then propose to estimate $\boldsymbol{\rho}_{R}(\cdot)$ and $\Sigma_{R} (\cdot)$ by
\begin{eqnarray}\label{v_sd_est_hete}
\widehat{\boldsymbol{\rho}}_{R} (d) = && {1\over N_E}\sum_{i=1}^{N_E} \Big\{\widehat{v}_E^{(i)}(d) -\widehat{V}_E(d) \Big\} \sqrt{N_E/n} \widehat{\boldsymbol{\psi}}^{(i)}(d) , \\\nonumbe
\widehat{\Sigma}_{R} (d) = && {1\over n}\sum_{i=1}^{n} \Big[ \{\widehat{\boldsymbol{w}}^{(i)}_1(d) -\widehat{\boldsymbol{w}}^{(i)}_0(d)\}\Big]^{\otimes 2},
\end{eqnarray}
where $\eta^{\otimes 2} \equiv \eta\eta^\top$ for $\eta$ as a vector.}
The consistency of the proposed variance estimators in \eqref{v_sd_est} and \eqref{v_sd_est_hete} is shown in Section \ref{sec:thms} under certain rate conditions for the estimators of the posterior sampling probability, the propensity scores, and conditional mean functions.
\subsection{Iterative Policy Tree Search Algorithm}\label{sec:impl}
We introduce the iterative policy tree search algorithm to implement CODA. We first elaborate on how to find the ODR that maximizes the calibrated value estimator for the homogeneous case, and then extend it to the heterogeneous case as well as parametric decision rules.
Following the tree-based policy learning algorithm proposed in \cite{athey2017efficient}, we define the reward of the $i$-th individual in the primary sample by $\widehat{v}^{(i)}_E(d)$. Specifically, the reward of the $i$-th individual is $\widehat{v}^{(i)}_E(1)$ under treatment 1 and $\widehat{v}^{(i)}_E(0)$ under treatment 0. The decision tree allocates individuals to different treatments, and receives the corresponding rewards. The ODR based on the primary sample solely is obtained by maximizing the sum of these rewards through the exhaustive search within $\Pi_1$, i.e.,
\begin{equation}\label{eqn:tree_base}
\widehat{d}_E =\underset{d\in \Pi_1}{\argmax} \sum_{i=1}^{N_E} \widehat{v}_E^{(i)} (d).
\end{equation}
To develop ODR by CODA-HO within the class $\Pi_1$, based on \eqref{value_CODA}, we construct the calibrated reward of the $i$-th individual in the primary sample by
\begin{equation}\label{tree_CODA_reward}
\widehat{v}^{(i)} (d) =\widehat{v}^{(i)}_E(d)- \widehat{\boldsymbol{\rho}}(d)^\top \widehat{\Sigma}_M^{-1}(d) \{\widehat{\boldsymbol{w}}^{(i)}_E(d) - \widehat{W}_{U}(d) \}.
\end{equation}
Here, notice that the sample mean of \eqref{tree_CODA_reward} over the primary sample yields \eqref{value_CODA}, i.e.,
\begin{equation*}
{1\over N_E}\sum_{i=1}^{N_E} \widehat{v}^{(i)} (d) =\widehat{V}(d).
\end{equation*}
Therefore, the decision tree that maximizes the sum of rewards defined in \eqref{tree_CODA_reward} also maximizes the calibrated value estimator in \eqref{value_CODA}. The finite-depth decision tree-based ODR under CODA-HO is then defined by
\begin{equation}\label{eqn:CODA_tree}
\widehat{d} =\underset{d\in \Pi_1}{\argmax} \sum_{i=1}^{N_E} \widehat{v}^{(i)} (d).
\end{equation}
Yet, the estimators $\widehat{\boldsymbol{\rho}}(d)$ and $ \widehat{\Sigma}_M^{-1}(d)$ defined in \eqref{tree_CODA_reward} are calculated using two samples based on \eqref{v_sd_est} given a specific decision rule $d$, and thus the tree-based policy learning in \cite{athey2017efficient} is not directly applicable to solve \eqref{eqn:CODA_tree}.
To address this difficulty,
we propose an {iterative policy tree search algorithm} consisting of four steps as follows.
\noindent \textbf{Step 1.} We first find the ODR based on the primary sample solely, i.e., $\widehat{d}_E$, as an initial decision rule, using \eqref{eqn:tree_base}.
\noindent \textbf{Step 2.} We next estimate $\boldsymbol{\rho}(\cdot)$ and $\Sigma_M (\cdot)$ based on \eqref{v_sd_est} by plugging in $d =\widehat{d}_E$. Thus, the calibrated reward for the $i$-th individual can be approximated by
\begin{equation*}
\widehat{v}^{(i)}_E(1)- \widehat{\boldsymbol{\rho}}(\widehat{d}_E)^\top \widehat{\Sigma}_M^{-1}(\widehat{d}_E) \{\widehat{\boldsymbol{w}}^{(i)}_E(1) - \widehat{W}_{U}(1) \},
\end{equation*}
under treatment 1, and
\begin{equation*}
\widehat{v}^{(i)}_E(0)- \widehat{\boldsymbol{\rho}}(\widehat{d}_E)^\top \widehat{\Sigma}_M^{-1}(\widehat{d}_E) \{\widehat{\boldsymbol{w}}^{(i)}_E(0) - \widehat{W}_{U}(0) \} ,
\end{equation*}
under treatment 0.
\noindent \textbf{Step 3.} We then search for the optimal decision tree within the class $\Pi_1$ to achieve a maximum overall calibrated reward, denoted as $\widehat{d}^{(1)}$. This step can be solved by applying the tree-based policy learning in \cite{athey2017efficient}, which is implemented in the \proglang{R} package \code{policytree}.
\noindent \textbf{Step 4.} We repeat steps 2 and 3 for $k=1,\cdots,K$, by replacing the previous estimated decision tree $\widehat{d}^{(k-1)}$ ($\widehat{d}^{(0)}=\widehat{d}_E$) with the new estimated decision tree $\widehat{d}^{(k)}$ until it converges or reaches the maximum number of iterations $K$. It is observed in our simulation studies (see Section \ref{sec:4}) that $\widehat{d}_E$ is fairly close to $\widehat{d}$, and thus one iteration is usually sufficient to find the ODR under CODA in practice.
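The four steps above can be sketched as a generic loop over a finite policy class; \code{reward\_fn} and \code{calibrated\_fn} are placeholders for the primary-sample value and the calibrated value with frozen covariance estimates, and the exhaustive \code{policytree} search is replaced by a plain maximum for illustration.

```python
def iterative_policy_search(policies, reward_fn, calibrated_fn, K=5):
    """Sketch of Steps 1-4 over a finite policy class.
    reward_fn(d): primary-sample value (Step 1);
    calibrated_fn(d, d_ref): calibrated value with rho and Sigma_M
    frozen at the reference rule d_ref (Steps 2-3)."""
    d_cur = max(policies, key=reward_fn)                    # Step 1
    for _ in range(K):                                      # Step 4
        d_new = max(policies, key=lambda d: calibrated_fn(d, d_cur))
        if d_new == d_cur:                                  # converged
            break
        d_cur = d_new
    return d_cur
```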
{Note that the above iterative policy search algorithm also works for finding ODR under CODA-HE,
by replacing the corresponding variance estimates from \eqref{v_sd_est} with that from \eqref{v_sd_est_hete}. Specifically, we can rewrite the calibrated value estimator in \eqref{value_CODA_hete} as
\begin{eqnarray*
\widehat{V}_{R}(d)
=&{1\over N_E}\sum_{i=1}^{N_E} \widehat{v}_E^{(i)}(d) - \sqrt{{n\over N_E}} \widehat{\boldsymbol{\rho}}_{R}(d)^\top \widehat{\Sigma}_{R}^{-1}(d) {1\over n}\sum_{i=1}^{n}\{\widehat{\boldsymbol{w}}^{(i)}_1(d) - \widehat{\boldsymbol{w}}^{(i)}_0(d)\}\\\nonumber
=&{1\over N_E}\sum_{i=1}^{N_E} \widehat{v}_E^{(i)}(d) - \sqrt{{n\over N_E}}\widehat{\boldsymbol{\rho}}_{R}(d)^\top \widehat{\Sigma}_{R}^{-1}(d) \left[ {1\over N_E}\sum_{i=1}^{N_E} {N_E\over n}\{\widehat{\boldsymbol{w}}^{(i)}_1(d) - \widehat{\boldsymbol{w}}^{(i)}_0(d)\}\right.\\\nonumber
&\left.+{1\over n}\sum_{i=N_E+1}^{n} \{\widehat{\boldsymbol{w}}^{(i)}_1(d) - \widehat{\boldsymbol{w}}^{(i)}_0(d)\}\right].
\end{eqnarray*}
This motivates the calibrated reward for the $i$-th individual in the heterogeneous case as
\begin{equation*}
\widehat{v}^{(i)}_E(1)- \sqrt{{n\over N_E}} \widehat{\boldsymbol{\rho}}_{R}(\widehat{d}_E)^\top \widehat{\Sigma}_{R}^{-1}(\widehat{d}_E) \left[ {N_E\over n}\{\widehat{\boldsymbol{w}}^{(i)}_1(1) - \widehat{\boldsymbol{w}}^{(i)}_0(1)\} +\widehat{\Delta}(1)\right],
\end{equation*}
under treatment 1, where $\widehat{\Delta}(1) = {n^{-1}}\sum_{i=N_E+1}^{n} \{\widehat{\boldsymbol{w}}^{(i)}_1(1) - \widehat{\boldsymbol{w}}^{(i)}_0(1)\}$, and
\begin{equation*}
\widehat{v}^{(i)}_E(0)- \sqrt{{n\over N_E}} \widehat{\boldsymbol{\rho}}_{R}(\widehat{d}_E)^\top \widehat{\Sigma}_{R}^{-1}(\widehat{d}_E) \left[ {N_E\over n}\{\widehat{\boldsymbol{w}}^{(i)}_1(0) - \widehat{\boldsymbol{w}}^{(i)}_0(0)\} +\widehat{\Delta}(0)\right],
\end{equation*}
under treatment 0, where $\widehat{\Delta}(0) = {n^{-1}}\sum_{i=N_E+1}^{n} \{\widehat{\boldsymbol{w}}^{(i)}_1(0) - \widehat{\boldsymbol{w}}^{(i)}_0(0)\}$. Then, using similar steps in the iterative policy tree search algorithm for CODA-HO, we can find the ODR under CODA-HE.
}
Finally, we extend the above iterative policy tree search to parametric decision rules, with an illustration in the homogeneous case. Suppose the decision rule $d(\cdot)$ falls in the class $\Pi_2$ that relies on a parametric model with parameters $\beta$. We use shorthand to write $\boldsymbol{\rho}(d)$ as $\boldsymbol{\rho}(\beta)$, $\Sigma_M(d)$ as $\Sigma_M(\beta)$, $\widehat{V}_{E}(d)$ as $\widehat{V}_{E}(\beta)$, $\widehat{W}_{E}(d)$ as $\widehat{W}_{E}(\beta)$, and $\widehat{W}_{U}(d)$ as $\widehat{W}_{U}(\beta)$, respectively. Then, the calibrated value estimator for $V(\beta) $ can be constructed by
\begin{equation*}
\widehat{V}(\beta) = \widehat{V}_{E}(\beta) - \widehat{\boldsymbol{\rho}}(\beta)^\top \widehat{\Sigma}_M^{-1}(\beta) \{\widehat{W}_{E}(\beta) - \widehat{W}_{U}(\beta) \},
\end{equation*}
where $ \widehat{\boldsymbol{\rho}}(\beta) $ is the estimator for $\boldsymbol{\rho}(\beta)$, and $\widehat{\Sigma}_M(\beta)$ is the estimator for $\Sigma_M(\beta)$.
We apply a similar iterative-updating procedure discussed above for CODA-HO but within parametric decision rules.
\noindent \textbf{Step 1*.} We first find the ODR in the primary sample by $\widehat{\beta}_E =\underset{\beta}{\argmax} \widehat{V}_E(\beta)$ as an initial decision rule. This can be solved using any global optimization algorithm, such as the heuristic algorithm provided in the \proglang{R} package \code{rgenoud}.
\noindent \textbf{Step 2*.} We estimate covariance $\boldsymbol{\rho}(\beta)$ and $\Sigma_M (\beta)$ based on \eqref{v_sd_est} with $\beta = \widehat{\beta}_E$.
Then, we search the ODR based on the calibrated value estimator within the class $\Pi_2$ that maximizes
\begin{equation*}
\widehat{V}(\beta) = \widehat{V}_{E}(\beta) - \widehat{\boldsymbol{\rho}}(\widehat{\beta}_E)^\top \widehat{\Sigma}_M^{-1}(\widehat{\beta}_E) \{\widehat{W}_{E}(\beta) - \widehat{W}_{U}(\beta) \}.
\end{equation*}
\noindent \textbf{Step 3*.} We repeat Step 2* for $k=1,\cdots,K$, replacing the previous estimate $\widehat{\beta}^{(k-1)}$ (with $\widehat{\beta}^{(0)}=\widehat{\beta}_E$) by the new estimate $\widehat{\beta}^{(k)}$, until the number of iterations reaches $K$ or $||\widehat{\beta}^{(k)}- \widehat{\beta}^{(k-1)}||_2 <\delta$, where $\delta$ is a pre-specified threshold and $||\cdot||_2$ is the $L_2$ norm.
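Steps 1*-3* can be sketched as follows, with toy stand-ins for $\widehat{V}_E(\beta)$ and $\widehat{W}_E(\beta) - \widehat{W}_U(\beta)$, and a crude grid search in place of a global optimizer such as \code{rgenoud}; every function and number here is hypothetical:

```python
import numpy as np

# Toy stand-ins (hypothetical) for V-hat_E(beta) and
# W-hat_E(beta) - W-hat_U(beta), with a scalar intermediate outcome.
def V_E(b0, b1):
    return -(b0 - 1.0) ** 2 - (b1 + 0.5) ** 2

def W_diff(b0, b1):
    return 0.1 * b0 - 0.05 * b1

rho_hat, Sigma_hat = 0.3, 1.2      # placeholder plug-in estimates
coef = rho_hat / Sigma_hat         # Sigma^{-1} rho in the scalar case

# Crude grid search as a stand-in for a global optimizer.
grid = np.linspace(-2, 2, 401)
B0, B1 = np.meshgrid(grid, grid)

def argmax_on_grid(vals):
    i, j = np.unravel_index(np.argmax(vals), vals.shape)
    return np.array([B0[i, j], B1[i, j]])

# Step 1*: initial rule from the primary sample alone.
beta = argmax_on_grid(V_E(B0, B1))

# Steps 2*-3*: maximize the calibrated objective until convergence.
# (In practice rho-hat and Sigma-hat would be re-estimated at the
# current beta in each iteration; they are held fixed in this toy.)
K, delta = 20, 1e-8
for _ in range(K):
    beta_new = argmax_on_grid(V_E(B0, B1) - coef * W_diff(B0, B1))
    if np.linalg.norm(beta_new - beta) < delta:
        beta = beta_new
        break
    beta = beta_new
```

With these toy inputs the calibration shifts the uncalibrated maximizer $(1, -0.5)$ slightly, toward roughly $(0.99, -0.49)$ on this grid.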
\section{Theoretical Properties}\label{sec:thms}
{In this section, we investigate the theoretical properties of the value estimator under CODA in \eqref{value_CODA} and \eqref{value_CODA_hete}, corresponding to two cases where the distributions of baseline covariates are the same or different, respectively. We first establish the consistency for the proposed value estimators as well as the variance estimators proposed in \eqref{v_sd_est} and \eqref{v_sd_est_hete}. We next derive the asymptotic normality of our proposed calibrated value estimators under CODA. All the proofs are provided in the appendix.}
As standard in statistical inference for personalized decision-making \citep{zhang2012robust,luedtke2016statistical,kitagawa2018should,rai2018statistical}, we introduce the following conditions to derive our theoretical results.
\noindent \textbf{(A6)}. Suppose the supports $\mathbb{X}_E$, $\mathbb{X}_U$, $\mathbb{M}_E$, $\mathbb{M}_U$, and $\mathbb{Y}_E$ are bounded.
\noindent \textbf{(A7)}. (i) Rate Double Robustness for $\widehat{V}_E$:
\begin{equation*} {\mathbb{E}}_{X \in \mathbb{X}_E} [\{\mu_E(X,a)-\widehat{\mu}_E(X,a)\}^2]^{1\over2} [\{\pi_E(X)-\widehat{\pi}_E(X)\}^2]^{1\over2} =o_p(N_E^{-1/ 2}), \text{ for } a = 0,1;
\end{equation*}
(ii) Rate Double Robustness for $\widehat{M}_E$:
\begin{equation*} {\mathbb{E}}_{X \in \mathbb{X}_E} [\{\theta(X,a)-\widehat{\theta}(X,a)\}^2]^{1\over2} [\{\pi_E(X)-\widehat{\pi}_E(X)\}^2]^{1\over2} =o_p(N_E^{-1/ 2}), \text{ for } a = 0,1;
\end{equation*}
(iii) Rate Double Robustness for $\widehat{M}_U$:
\begin{equation*} {\mathbb{E}}_{X \in \mathbb{X}_U} [\{\theta(X,a)-\widehat{\theta}(X,a)\}^2]^{1\over2} [\{\pi_U(X)-\widehat{\pi}_U(X)\}^2]^{1\over2} =o_p(N_U^{-1/ 2}), \text{ for } a = 0,1;\end{equation*}
(iv) Rate Double Robustness for $\widehat{W}_0$ and $\widehat{W}_1$:
\begin{equation*}
{\mathbb{E}}_{X \in \mathbb{X}_E\cup \mathbb{X}_U} [\{\theta(X,a)-\widehat{\theta}(X,a)\}^2]^{1\over2} [\{\pi(X)-\widehat{\pi}(X)\}^2]^{1\over2} =o_p(n^{-1/ 2}), \text{ for } a = 0,1;
\end{equation*}
(v) Rate Double Robustness for $\widehat{W}_0$ and $\widehat{W}_1$:
\begin{eqnarray*}
{\mathbb{E}}_{X \in \mathbb{X}_E\cup \mathbb{X}_U, M \in \mathbb{M}_E\cup \mathbb{M}_U} [\{\theta(X,a)-\widehat{\theta}(X,a)\}^2]^{1\over2} [\{r(X,a,M)-\widehat{r}(X,a,M)\}^2]^{1\over2} =o_p(n^{-1/ 2}), \\
~~~~~~~\text{ for } a = 0,1.
\end{eqnarray*}
\noindent \textbf{(A8)}. There exist some constants $\gamma,\lambda>0$ such that \begin{equation*} {\mbox{Pr}}\{0<|{\mathbb{E}}(Y_E|X_E,A_E=1)-{\mathbb{E}}(Y_E|X_E,A_E=0)|\le \xi \}=O(\xi^{\gamma}),\end{equation*} where the big-$O$ term is uniform in $0<\xi \le \lambda$.
Assumption (A6) is a technical assumption sufficient to establish the uniform convergence results. A similar assumption is frequently used in the literature on optimal treatment regime estimation \citep[see e.g., ][]{zhang2012robust,zhao2012estimating,zhou2017residual}. Assumption (A7) (i)-(iv) requires the estimated conditional mean outcomes and propensity score functions to converge at certain rates in each decision-making problem (i.e., for the primary outcome of interest $Y_E$, the intermediate outcomes $M_E$ and $M_U$, and the joint intermediate outcomes). This assumption is commonly imposed in the causal inference literature \citep[see e.g., ][]{athey2020combining, kallus2020role} to derive the asymptotic distribution of the estimated ATE with either parametric or non-parametric estimators \citep[see e.g., ][]{wager2018estimation,farrell2021deep}. {We extend it to (A7) (v), which requires that the estimators of the posterior sampling probability and of the conditional mean functions of intermediate outcomes converge at certain rates; together with (A7) (iv), this allows one to establish the asymptotic distribution of the rebalanced value estimators based on the joint sample.}
Finally, assumption (A8) is well known as the margin condition, which is often adopted in the literature to derive a sharp convergence rate for the value function under the estimated optimal decision rule \citep[see e.g., ][]{qian2011performance,luedtke2016statistical,kitagawa2018should}.
{We first establish the consistency of the proposed estimators in the following theorem.
\begin{theorem}{(Consistency)}\label{thm2}
Suppose conditions (A1)-(A7) hold.
We have \\
(i) $\widehat{\sigma}_Y^2 (d) ={\sigma}_Y^2 (d) +o_p(1);$ (ii) $\widehat{\boldsymbol{\rho}}(d) =\boldsymbol{\rho}(d)+o_p(1);$ (iii) $\widehat{\Sigma}_M(d) =\Sigma_M(d)+o_p(1);$
\\
(iv) $\widehat{\boldsymbol{\rho}}_{R}(d) =\boldsymbol{\rho}_{R}(d)+o_p(1);$ (v) $\widehat{\Sigma}_{R}(d) =\Sigma_{R}(d)+o_p(1);$
\\
(vi) if $X_E \sim X_U$, $\widehat{V}(d) =V(d)+o_p(1);$
(vii) $\widehat{V}_{R}(d) =V(d)+o_p(1).$
\end{theorem}}
Note that the result (vi) in Theorem \ref{thm2} additionally requires homogeneous baseline covariates, while the remaining results hold for either the homogeneous or the heterogeneous case. We remark that the key step of the proof is to decompose the variance estimators based on the true value estimator defined at the individual level in each sample. This allows replacing the estimators with their true models, based on the rate double robustness in assumption (A7), with a small order.
We next show the asymptotic normality of the proposed calibrated value estimator in the following theorem.
{\begin{theorem}{(Asymptotic Normality for Homogeneous Baseline Covariates)}\label{thm3}
Suppose $ \{d^{opt}, \widehat{d}\}\in \Pi_1$ or $ \{d^{opt}, \widehat{d}\} \in \Pi_2$. Assume conditions (A1)-(A6), (A7. i, ii, and iii), and (A8) hold. Under $X_E \sim X_U$, we have
\begin{equation*}
\sqrt{N_E}\Big\{\widehat{V}(\widehat{d})-V(d^{opt})\Big\} \overset{D}{\longrightarrow} N\Big\{ 0, \sigma^2(d^{opt}) \Big\},
\end{equation*}
where $\sigma^2(d^{opt}) = \sigma_Y^2(d^{opt}) - \boldsymbol{\rho}(d^{opt})^\top\Sigma_M^{-1}(d^{opt}) \boldsymbol{\rho}(d^{opt}) $.
\end{theorem} }
The condition in Theorem \ref{thm3} that $ \{d^{opt}, \widehat{d}\}\in \Pi_1$ or $ \{d^{opt}, \widehat{d}\} \in \Pi_2$ requires that the true ODR fall into the class of decision rules of interest, so that the decision rule estimated by CODA-HO is not far away from the true ODR. The proof of Theorem \ref{thm3} consists of three steps. First, we replace the estimated propensity score function and the estimated conditional mean function in $\widehat{V}(\widehat{d})$ by their true counterparts, based on the rate double robustness in assumption (A7), with a small order $o_p(N_E^{-1/2})$. Second, we show that the value estimator under the decision rule estimated by CODA-HO converges to the value estimator under the true ODR at a rate of $o_p(N_E^{-1/2})$, based on empirical process theory and the margin condition in (A8). This step is nontrivial relative to the existing literature on personalized decision-making, since the decision rule estimated by CODA-HO maximizes the newly proposed calibrated value estimator. Lastly, the asymptotic normality follows from the central limit theorem. Since the two samples ($E$ and $U$) are independently collected from two separate studies, we can explicitly derive the asymptotic variance of the calibrated value estimator under the decision rule estimated by CODA-HO.
According to the results in Theorem \ref{thm3}, when $\boldsymbol{\rho}(d^{opt})$ is a non-zero vector, we have $\sigma_Y^2 (d^{opt}) - \boldsymbol{\rho}(d^{opt})^\top\Sigma_M^{-1}(d^{opt})\boldsymbol{\rho}(d^{opt}) < \sigma_Y^2 (d^{opt})$, since $\Sigma_M(d^{opt})$ is positive definite. In other words, if the primary outcome of interest is correlated with one of the selected intermediate outcomes, the asymptotic variance of the calibrated value estimator under CODA-HO is strictly smaller than the asymptotic variance of the value estimator under the ODR obtained based on the primary sample solely. The larger the correlation, the smaller the variance we can achieve. Hence, the proposed calibrated value estimator is more efficient by integrating different data sources.
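The efficiency gain can be quantified directly from the asymptotic variance formula. A minimal numerical sketch, with hypothetical values of $\sigma_Y^2(d^{opt})$, $\boldsymbol{\rho}(d^{opt})$, and $\Sigma_M(d^{opt})$ (scalar intermediate outcome for clarity):

```python
import numpy as np

# Toy asymptotic quantities (hypothetical numbers).
sigma_Y2 = 4.0                   # sigma_Y^2(d_opt)
rho = np.array([1.2])            # rho(d_opt)
Sigma_M = np.array([[1.5]])      # Sigma_M(d_opt)

# Variance of the calibrated estimator and the implied gain in precision.
reduction = rho @ np.linalg.solve(Sigma_M, rho)   # rho^T Sigma_M^{-1} rho
sigma2_coda = sigma_Y2 - reduction                # strictly smaller when rho != 0
sd_gain = 1.0 - np.sqrt(sigma2_coda / sigma_Y2)   # relative reduction in SD
```

Here the calibrated variance drops from 4.0 to 3.04, i.e., a reduction of roughly 12.8\% in standard deviation, mirroring the "Improved Efficiency" rows reported in the simulation tables.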
Based on Theorem \ref{thm3}, by plugging in the estimates $\widehat{\sigma}_Y^2 (\widehat{d})$, $\widehat{\boldsymbol{\rho}}(\widehat{d})$, and $\widehat{\Sigma}_M(\widehat{d})$, the asymptotic variance of $\widehat{V}(\widehat{d})$ can be consistently estimated by
\begin{equation*}
\widehat{\sigma}^2(\widehat{d}):= \widehat{\sigma}_Y^2 (\widehat{d}) -\widehat{\boldsymbol{\rho}}(\widehat{d})^\top \widehat{\Sigma}_M^{-1}(\widehat{d}) \widehat{\boldsymbol{\rho}}(\widehat{d}).
\end{equation*}
Therefore, a two-sided $1-\alpha$ confidence interval (CI) for $V(d^{opt})$ under CODA-HO
is given by
\begin{equation}\label{ciaipw}
\Big [ \widehat{V}(\widehat{d})-z_{\alpha/2}\widehat{\sigma}/\sqrt{N_E},\quad \widehat{V}(\widehat{d})+z_{\alpha/2}\widehat{\sigma}/\sqrt{N_E} \Big],
\end{equation}
where $z_{\alpha/2}$ denotes the upper $\alpha/2$ quantile of the standard normal distribution.
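Putting the plug-in variance estimator and the interval \eqref{ciaipw} together, with illustrative (entirely hypothetical) estimates:

```python
import numpy as np

# Illustrative (hypothetical) estimates from a fitted CODA-HO rule.
V_hat = 1.03                              # calibrated value estimate
sigma_Y2_hat = 0.90                       # sigma-hat_Y^2(d-hat)
rho_hat = np.array([0.35])                # rho-hat(d-hat)
Sigma_M_hat = np.array([[1.10]])          # Sigma-hat_M(d-hat)
N_E = 1000

# Plug-in variance estimator and the two-sided 95% CI.
sigma2_hat = sigma_Y2_hat - rho_hat @ np.linalg.solve(Sigma_M_hat, rho_hat)
se = np.sqrt(sigma2_hat / N_E)
z = 1.959964                              # upper 2.5% standard-normal quantile
ci = (V_hat - z * se, V_hat + z * se)
```

The interval shrinks as the quadratic form $\widehat{\boldsymbol{\rho}}^\top \widehat{\Sigma}_M^{-1}\widehat{\boldsymbol{\rho}}$ grows, which is precisely the efficiency gain from calibration.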
{Similarly, we establish the asymptotic normality for heterogeneous baseline covariates as follows.
\begin{theorem}{(Asymptotic Normality for Heterogeneous Baseline Covariates)}\label{thm3_hete}
Suppose $ \{d^{opt}, \widehat{d}_{R}\}\in \Pi_1$ or $ \{d^{opt}, \widehat{d}_{R}\} \in \Pi_2$. Under conditions (A1)-(A6), (A7. i, iv, and v), and (A8), we have
\begin{equation*}
\sqrt{N_E}\Big\{\widehat{V}_{R}(\widehat{d}_R)-V(d^{opt})\Big\} \overset{D}{\longrightarrow} N\Big\{ 0, \sigma_{R}^2(d^{opt}) \Big\},
\end{equation*}
where $\sigma_{R}^2(d^{opt}) = \sigma_Y^2(d^{opt}) - \boldsymbol{\rho}_{R}(d^{opt})^\top\Sigma_{R}^{-1}(d^{opt}) \boldsymbol{\rho}_{R}(d^{opt}) $.
\end{theorem}
Here, we also require that the true ODR fall into the class of decision rules of interest, so that the decision rule estimated by CODA-HE is not far away from the true ODR. The proof of Theorem \ref{thm3_hete} is similar to that of Theorem \ref{thm3}. When $\boldsymbol{\rho}_R(d^{opt})$ is a non-zero vector, we have $\sigma_Y^2 (d^{opt}) - \boldsymbol{\rho}_R(d^{opt})^\top\Sigma_R^{-1}(d^{opt})\boldsymbol{\rho}_R(d^{opt}) < \sigma_Y^2 (d^{opt}) $ such that the proposed calibrated value estimator is more efficient. The corresponding two-sided $1-\alpha$ CI for $V(d^{opt})$ under CODA-HE is given by
\begin{equation}\label{ciaipw_hete}
\Big [ \widehat{V}_R(\widehat{d}_R)-z_{\alpha/2}\widehat{\sigma}_R/\sqrt{N_E},\quad \widehat{V}_R(\widehat{d}_R)+z_{\alpha/2}\widehat{\sigma}_R/\sqrt{N_E} \Big],
\end{equation}
where $\widehat{\sigma}_R^2 = \widehat{\sigma}_Y^2 (\widehat{d}_R) -\widehat{\boldsymbol{\rho}}_R(\widehat{d}_R)^\top \widehat{\Sigma}_R^{-1}(\widehat{d}_R) \widehat{\boldsymbol{\rho}}_R(\widehat{d}_R)$ is the estimator of $\sigma_{R}^2(d^{opt})$.
}
\section{Simulation Studies}\label{sec:4}
In this section, we investigate the finite sample performance of the proposed CODA, in comparison to the ODR obtained based on the primary sample solely. {We first evaluate the proposed CODA-HO method and the CI for the value based on \eqref{ciaipw} in Section \ref{simu_eval}, and demonstrate the improved efficiency using CODA-HO in various scenarios with high-dimensional covariates and multiple intermediate outcomes in Section \ref{simu_multi_mid}. We then examine the performance of CODA-HE where $X_E \not \sim X_U$ and its corresponding CI for the value based on \eqref{ciaipw_hete} in Section \ref{simu_eval_hete}.}
\subsection{Evaluation on Calibrated Value Estimator for {Homogeneous Baseline Covariates}}\label{simu_eval}
In the following, we generate two samples, i.e., the primary sample and the auxiliary sample.
Suppose
their baseline covariates $X=[X^{(1)},\cdots,X^{(r)}]^\top$, the treatment $A$, and intermediate outcomes $M=[M^{(1)}, \cdots,M^{(s)}]^\top$ are generated from the following model:
\begin{equation*}
\begin{split}
&A \overset{I.I.D.}{\sim} \text{Bernoulli} \{\pi(X)\}, \quad X^{(1)}, \cdots,X^{(r)} \overset{I.I.D.}{\sim} \text{Uniform}[-2,2],\\
&M=U^M(X)+ A C^M(X)+\epsilon^J,\quad J\in \{E,U\}\\
\end{split}
\end{equation*}
where $U^M(\cdot)$ is the baseline function of intermediate outcomes, $C^M(\cdot)$ is the contrast function that describes the treatment-covariates interaction effects for intermediate outcomes, and $\epsilon^J$ is the random error. Here, we consider a logistic regression for the propensity score, i.e., $\text{logit}\{\pi(X)\} = 0.4+0.2X^{(1)} - 0.2X^{(2)}$.
{The outcome of interest is generated in the primary sample only, according to the following regression model:
\begin{equation*}
Y=U^Y(X)+AC^Y(X)+\epsilon^Y,
\end{equation*}
where $U^Y(\cdot)$ is the baseline function for the outcome of interest, $C^Y(\cdot)$ is the contrast function that describes the treatment-covariates interaction effects for the outcome of interest, and $\epsilon^Y$ is the random error.}
We consider the case where the intermediate outcomes in the two samples are generated from different distributions to account for their heterogeneity. {Specifically,
the noises of intermediate outcomes in the auxiliary sample are set to be $\epsilon^{U}\overset{I.I.D.}{\sim} \text{Uniform}[-1,1]$. In addition, we let the noise of intermediate outcomes in the primary sample $\epsilon^{E}$ and the noise of the outcome of interest in the primary sample $\epsilon^Y$ be generated from a bivariate normal distribution with mean zero, variance vector $[2, 1.5]$, and a positive correlation of 0.7. Note that given the baseline covariates and the treatment, the conditional means of intermediate outcomes in the two samples are the same, and thus assumption (A4) is satisfied.}
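The data-generating mechanism above, combined with the Scenario 1 specification of $U^M$, $C^M$, $U^Y$, and $C^Y$, can be reproduced in a few lines. This is a sketch of the simulation design, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2023)
N_E, N_U, r = 500, 2000, 2

def gen_sample(N, primary):
    # Covariates Uniform[-2, 2]; logistic propensity as in the text.
    X = rng.uniform(-2, 2, size=(N, r))
    logit = 0.4 + 0.2 * X[:, 0] - 0.2 * X[:, 1]
    A = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    if primary:
        # (eps_E, eps_Y): mean zero, variances (2, 1.5), correlation 0.7.
        cov = 0.7 * np.sqrt(2.0 * 1.5)
        eps = rng.multivariate_normal([0.0, 0.0],
                                      [[2.0, cov], [cov, 1.5]], size=N)
        eps_M, eps_Y = eps[:, 0], eps[:, 1]
    else:
        eps_M, eps_Y = rng.uniform(-1, 1, size=N), None
    # Scenario 1: U^M = X1 + 2 X2, C^M = X1 X2; U^Y = 2 X1 + X2, C^Y = 2 X1 X2.
    M = X[:, 0] + 2 * X[:, 1] + A * X[:, 0] * X[:, 1] + eps_M
    Y = 2 * X[:, 0] + X[:, 1] + A * 2 * X[:, 0] * X[:, 1] + eps_Y if primary else None
    return X, A, M, Y

X_E, A_E, M_E, Y_E = gen_sample(N_E, primary=True)   # primary sample (Y observed)
X_U, A_U, M_U, _ = gen_sample(N_U, primary=False)    # auxiliary sample (no Y)
```

By construction, the conditional mean of $M$ given $(X, A)$ is identical in the two samples while the noise distributions differ, matching the heterogeneity described above.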
We set the dimension of covariates as $r=2$ and the dimension of the intermediate outcome as $s=1$, and consider the following two scenarios respectively.
\noindent \textbf{Scenario 1 (decision tree)}
\begin{eqnarray*}
\left\{\begin{array}{ll}
U^M(X)={X^{(1)}}+2X^{(2)}, C^M(X)=X^{(1)}\times X^{(2)};\\
U^Y(X)=2{X^{(1)}}+X^{(2)}, C^Y(X)=2X^{(1)}\times X^{(2)}.\\
\end{array}
\right.
\end{eqnarray*}
\noindent \textbf{Scenario 2 (linear rule)}
\begin{eqnarray*}
\left\{\begin{array}{ll}
U^M(X)={X^{(1)}}+2X^{(2)}, C^M(X)=X^{(1)}-X^{(2)};\\
U^Y(X)=2{X^{(1)}}+X^{(2)}, C^Y(X)=2\{{X^{(2)}}-X^{(1)}\}.\\
\end{array}
\right.
\end{eqnarray*}
For Scenario 1, the true ODR can be represented by a decision tree as $d^{opt}(X) = \mathbb{I}\{X^{(1)}{X^{(2)}}>0\}$, which is unique up to permutation. Its true value $V(d^{opt})$ can be calculated by Monte Carlo approximations, as 0.999. In Scenario 2, the true ODR takes a form of a linear rule as $d^{opt}(X) = \mathbb{I}\{{X^{(2)}}-X^{(1)}>0\}$,
with its true value $V(d^{opt})$ as 1.333. Since $d^{opt}$ in Scenario 2 cannot be represented by a decision tree, i.e., $d^{opt} \not \in \Pi_1$, we have $\max_{d\in \Pi_1}V(d) = 1.251$ for Scenario 2 as the true value under the optimal decision tree, which is smaller than $V(d^{opt})$.
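The Monte Carlo values above are easy to reproduce. For Scenario 2, since the covariates are symmetric around zero, $V(d^{opt}) = {\mathbb{E}}[U^Y(X)] + {\mathbb{E}}[2\{X^{(2)}-X^{(1)}\}\,\mathbb{I}\{X^{(2)}>X^{(1)}\}] = 0 + 4/3 \approx 1.333$, and a short simulation confirms it:

```python
import numpy as np

rng = np.random.default_rng(7)
n_mc = 1_000_000
X1, X2 = rng.uniform(-2, 2, size=(2, n_mc))

# Scenario 2: U^Y = 2 X1 + X2, C^Y = 2 (X2 - X1); the noise has mean zero.
U_Y = 2 * X1 + X2
C_Y = 2 * (X2 - X1)
d_opt = (X2 - X1 > 0)                  # d_opt(X) = I{X2 - X1 > 0}

V_opt = np.mean(U_Y + d_opt * C_Y)     # Monte Carlo estimate of V(d_opt)
```

The estimate concentrates around $4/3 \approx 1.333$, matching the value reported in the text.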
We consider $N_U=2000$ and allow $N_E$ to be chosen from the set $\{500,1000\}$. For each setting, we conduct 500 replications. Here,
we search the ODR using CODA-HO based on two samples and the ODR based on the primary sample solely, within the class of decision trees $\Pi_1$.
To illustrate the approximation error due to policy search, we directly plug the true decision rule $d^{opt}$ in the value estimators for comparison.
\begin{table}
\caption{Empirical results of the proposed CODA-HO method in comparison to the ODR based on the primary sample solely under Scenario 1.}\label{table:2}
\scalebox{0.9}{
\begin{tabular}{ccc|cc||cc|cc}
\hline
\hline
Method (Rule) &\multicolumn{2}{c|}{CODA ($d^{opt}$)} &\multicolumn{2}{c||}{CODA ($\widehat{d}$)} &\multicolumn{2}{c|}{ODR ($d^{opt}$)} &\multicolumn{2}{c}{ODR ($\widehat{d}_E$)} \\
\hline
$N_E=$ & $500$ & $1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$\\
\hline
\hline
True Value $V(\cdot)$ &\multicolumn{2}{c|}{0.999} & 0.963 & 0.976 & \multicolumn{2}{c|}{0.999} & 0.967 & 0.976 \\
\hline
\hline
Estimated Value $\widehat{V}(\cdot)$&0.998 & 0.996 & 1.053 & 1.030 & 1.006 & 0.994 & 1.095 & 1.050 \\
\hline
$SD\{\widehat{V}(\cdot)\}$&0.124 & 0.098 & 0.123 & 0.098 & 0.173 & 0.127 & 0.171 & 0.125 \\
\hline
${\mathbb{E}}\{\widehat{\sigma}\}$&0.129 & 0.096 & 0.129 & 0.095 & 0.182 & 0.128 & 0.181 & 0.128 \\
\hline
Coverage Probabilities &95.6\% & 95.0\% & 94.6\% & 94.6\% & 96.8\% & 95.2\% & 94.2\% & 94.4\% \\
\hline
\hline
Improved Efficiency &29.1\% & 25.0\% &28.7\% & 25.8\%& / & / & / & / \\
\hline
$\widehat{\boldsymbol{\rho}}(\cdot)$&12.4 & 12.3 &12.4 & 12.3 & / & / & / & / \\
\hline
$\widehat{\Sigma}_M(\cdot)$ &18.8 & 20.8 & 18.8 & 20.8 & / & / & / & / \\
\hline
\hline
\end{tabular}}
\end{table}
The empirical results are reported in Tables 1 and 2, for Scenarios 1 and 2, respectively, aggregated over 500 replications. We summarize the true value function $V(\cdot)$ of a given decision rule computed using the Monte Carlo simulation method, the estimated value $\widehat{V}(\cdot)$ with its standard deviation $SD\{\widehat{V}(\cdot)\}$, the averaged estimated standard error ${\mathbb{E}}\{\widehat{\sigma}\}$, and the coverage probability based on the $95\%$ CI in \eqref{ciaipw}, for both the proposed CODA-HO method and the ODR method based on the primary sample solely, under the true decision rule ($d^{opt}$) and the estimated rules ($\widehat{d}$ and $\widehat{d}_E$), respectively. In addition, we report the improved efficiency using CODA-HO, which is calculated as the relative reduction in the standard deviation of the CODA-HO value estimator with respect to that of the value estimator based on the primary sample solely,
the estimated asymptotic correlation $\widehat{\boldsymbol{\rho}}$, and the estimated asymptotic covariance $\widehat{\Sigma}_M$
for CODA-HO.
\begin{table}
\caption{Empirical results of the proposed CODA-HO method in comparison to the ODR based on the primary sample solely under Scenario 2, where $\max_{d\in \Pi_1}V(d) = 1.251$.}\label{table:3}
\scalebox{0.9}{
\begin{tabular}{ccc|cc||cc|cc}
\hline
\hline
Method (Rule) &\multicolumn{2}{c|}{CODA ($d^{opt}$)} &\multicolumn{2}{c||}{CODA ($\widehat{d}$)} &\multicolumn{2}{c|}{ODR ($d^{opt}$)} &\multicolumn{2}{c}{ODR ($\widehat{d}_E$)} \\
\hline
$N_E=$ & $500$ & $1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$\\
\hline
\hline
True Value $V(\cdot)$ &\multicolumn{2}{c|}{1.333} & 1.236 & 1.239 & \multicolumn{2}{c|}{1.333} & 1.227 & 1.232 \\
\hline
\hline
Estimated Value $\widehat{V}(\cdot)$&1.327 & 1.332 & 1.321 & 1.303 & 1.329 & 1.331 & 1.350 & 1.319 \\
\hline
$SD\{\widehat{V}(\cdot)\}$&0.110 & 0.085 & 0.108 & 0.085 & 0.156 & 0.112 & 0.154 & 0.110\\
\hline
${\mathbb{E}}\{\widehat{\sigma}\}$&0.115 & 0.085 & 0.116 & 0.086 & 0.162 & 0.114 & 0.161 & 0.114 \\
\hline
Coverage Probabilities &95.8\% & 95.2\% & 96.2\% & 94.6\% & 95.4\% & 95.2\% & 96.4\% & 95.2\% \\
\hline
\hline
Improved Efficiency &29.0\% & 25.4\% &28.0\% & 24.6\%& / & / & / & / \\
\hline
$\widehat{\boldsymbol{\rho}}(\cdot)$&10.7 & 10.6 &10.6 & 10.5 & / & / & / & / \\
\hline
$\widehat{\Sigma}_M(\cdot)$ &17.8 & 19.4 & 17.8& 19.5& / &/ & / & / \\
\hline
\hline
\end{tabular}}
\end{table}
Based on Tables 1 and 2, it is clear that the proposed CODA-HO method is more efficient than the ODR method based on the primary sample solely, in all cases.
{To be specific, CODA-HO improves efficiency by 28.7\% in Scenario 1 and 28.0\% in Scenario 2 for $N_E = 500$, and by 25.8\% in Scenario 1 and 24.6\% in Scenario 2 for $N_E = 1000$.
On the other hand, the values under CODA-HO approach the true value as the sample size $N_E$ increases in all scenarios. Specifically, the proposed method achieves $V (\widehat{d}) = 0.977$ in Scenario 1 ($V (d^{opt}) = 0.999$) and $V (\widehat{d}) = 1.240$ in Scenario 2 $(\max_{d\in \Pi_1}V(d) = 1.251)$ when $N_E = 1000$. These results are comparable to or slightly better than the values under the estimated ODR based on the primary sample solely. }
Two findings help to verify Theorem \ref{thm3}. First, the mean of the estimated standard error of the value function (${\mathbb{E}}\{\widehat{\sigma}\}$) is close to the standard deviation of the estimated value ($SD\{\widehat{V}\}$), and gets smaller as the sample size $N_E$ increases. Second, the empirical coverage probabilities of the proposed 95\% CI in \eqref{ciaipw} approach the nominal level in all settings. All these findings are further justified by directly plugging the true optimal decision rule into the proposed calibrated value estimator.
It can be observed in Tables 1 and 2 that the estimated asymptotic correlation $\widehat{\boldsymbol{\rho}}$ and the estimated asymptotic covariance $\widehat{\Sigma}_M$ under the decision rule estimated by CODA-HO are very close to those under the true rule $d^{opt}$, with only one iteration. This supports the implementation technique discussed in Section \ref{sec:impl}.
\subsection{Investigation with Multiple Intermediate Outcomes}\label{simu_multi_mid}
We next consider the dimension of covariates as $r=10$ and the dimension of intermediate outcomes as $s=2$. The datasets are generated from the following three scenarios, respectively.
\noindent \textbf{Scenario 3 }
\begin{eqnarray*}
\left\{\begin{array}{ll}
U^M(X)=\begin{bmatrix}
{X^{(1)}}+2X^{(2)},\\
0,
\end{bmatrix},
C^M(X)=\begin{bmatrix}
X^{(1)}\times X^{(2)},\\
0,
\end{bmatrix};\\
U^Y(X)=2{X^{(1)}}+X^{(2)},
C^Y(X)=2X^{(1)}\times X^{(2)}.\\
\end{array}
\right.
\end{eqnarray*}
\noindent \textbf{Scenario 4}
\begin{eqnarray*}
\left\{\begin{array}{ll}
U^M(X)=\begin{bmatrix}
0.5\{X^{(1)}\}^2+2X^{(2)},\\
0,
\end{bmatrix},
C^M(X)=\begin{bmatrix}
X^{(1)}\times X^{(2)},\\
0,
\end{bmatrix};\\
U^Y(X)=2{X^{(1)}}+X^{(2)},
C^Y(X)=2X^{(1)}\times X^{(2)}.\\
\end{array}
\right.
\end{eqnarray*}
\noindent \textbf{Scenario 5}
\begin{eqnarray*}
\left\{\begin{array}{ll}
U^M(X)=\begin{bmatrix}
{X^{(1)}}+2X^{(2)},\\
0.5\{X^{(1)}\}^2+2X^{(2)},\\
\end{bmatrix},
C^M(X)=\begin{bmatrix}
X^{(1)}\times X^{(2)},\\
X^{(1)}\times X^{(2)},\\
\end{bmatrix};\\
U^Y(X)=2\cos\{X^{(1)}\}+X^{(2)},
C^Y(X)=2 X^{(1)}\times X^{(2)}.\\
\end{array}
\right.
\end{eqnarray*}
The true ODR for Scenarios 3 to 5 is the same, $d^{opt}(X) = \mathbb{I}\{X^{(1)}{X^{(2)}}>0\}$, with the true value $V(d^{opt})$ equal to 0.999 for Scenarios 3 and 4 and to 1.909 for Scenario 5, based on Monte Carlo approximations. Using a similar procedure to that introduced in Section \ref{simu_eval}, we apply the proposed CODA-HO method, in comparison to the ODR method based on the primary sample solely. The empirical results are summarized in Table 3 for Scenarios 3 to 5, aggregated over 500 replications.
\begin{table}
\caption{Empirical results of the proposed CODA-HO method in comparison to the ODR based on the primary sample solely under Scenarios 3 to 5.}\label{table:4}
\scalebox{0.85}{
\begin{tabular}{cccc|cc||cc|cc}
\hline
\hline
Scen.&Method (Rule)&\multicolumn{2}{c|}{CODA ($d^{opt}$)} &\multicolumn{2}{c||}{CODA ($\widehat{d}$)} &\multicolumn{2}{c|}{ODR ($d^{opt}$)} &\multicolumn{2}{c}{ODR ($\widehat{d}_E$)} \\
\cline{2-10}
&$N_E=$& $500$ & $1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$\\
\hline
\hline
S3&Estimated $\widehat{V}(\cdot)$&0.983 & 0.986 & 1.037 & 1.021 & 0.984 & 0.980 & 1.072 & 1.038 \\
\cline{2-10}
$V(d^{opt})$& $SD\{\widehat{V}(\cdot)\}$&0.130 & 0.093 & 0.128 & 0.093 & 0.182 & 0.126 & 0.180 & 0.125 \\\cline{2-10}
=0.999&${\mathbb{E}}\{\widehat{\sigma}\}$&0.130 & 0.096 & 0.129 & 0.095 & 0.182 & 0.128 & 0.181 & 0.128 \\\cline{2-10}
&Coverage Probabilities &94.6\% & 95.8\% & 95.2\% & 94.8\% & 94.6\% & 94.6\% & 92.2\% & 94.8\% \\
\cline{2-10}
&Improved Efficiency & 28.6\% & 25.0\% & 28.7\% & 25.8\% & / & / & / &/ \\
\hline
\hline
S4&Estimated $\widehat{V}(\cdot)$&0.980 & 0.984 & 1.037 & 1.022 & 0.984 & 0.980 & 1.072 & 1.038 \\
\cline{2-10}
$V(d^{opt})$& $SD\{\widehat{V}(\cdot)\}$&0.148 & 0.104 & 0.145 & 0.103 & 0.182 & 0.126 & 0.180 & 0.125 \\\cline{2-10}
=0.999&${\mathbb{E}}\{\widehat{\sigma}\}$&0.148 & 0.107 & 0.148 & 0.107 & 0.182 & 0.128 & 0.181 & 0.128 \\\cline{2-10}
&Coverage Probabilities &95.0\% & 96.6\% & 95.8\% & 95.4\% & 94.6\% & 94.6\% & 92.2\% & 94.8\% \\
\cline{2-10}
&Improved Efficiency & 18.7\% & 16.4\% & 18.2\% & 16.4\% & / & / & / &/ \\
\hline
\hline
S5&Estimated $\widehat{V}(\cdot)$&1.898 & 1.895 & 1.977 & 1.948 & 1.898 & 1.889 & 2.004 & 1.962 \\
\cline{2-10}
$V(d^{opt})$ & $SD\{\widehat{V}(\cdot)\}$&0.116 & 0.083 & 0.113 & 0.080 & 0.147 & 0.102 & 0.142 & 0.099 \\\cline{2-10}
= 1.909&${\mathbb{E}}\{\widehat{\sigma}\}$&0.116 & 0.084 & 0.116 & 0.085 & 0.150 & 0.106 & 0.150 & 0.106 \\\cline{2-10}
&Coverage Probabilities &94.8\% & 96.0\% & 92.2\% & 93.8\% & 95.2\% & 95.6\% & 91.0\% & 92.2\% \\\cline{2-10}
&Improved Efficiency & 22.7\% & 20.8\% & 22.7\% & 19.8\% & / & / & / &/ \\
\hline
\hline
\end{tabular}}
\end{table}
{It can be seen from Table 3 that the proposed CODA-HO procedure performs consistently better than the baseline procedure in terms of smaller variance under all scenarios. Specifically, in Scenario 3 with the baseline function linear in $X$, CODA-HO achieves a standard deviation of 0.095, against the larger standard deviation of 0.128 under the ODR based on the primary sample solely, with an efficiency improvement of 25.8\%, under $N_E=1000$. In Scenarios 4 and 5 with more complex non-linear baseline functions, CODA-HO outperforms the original ODR method by reducing the standard deviation by 16.4\% and 19.8\%, respectively, under $N_E=1000$. In addition,
the estimated value function under the estimated ODR obtained by CODA-HO achieves better coverage probabilities in comparison to the corresponding estimators obtained using the primary sample solely under all scenarios when the sample size is small, $N_E = 500$, indicating a stronger capacity of the proposed method in handling high-dimensional covariates by incorporating more samples with multiple mediators.}
{\subsection{Evaluation on Calibrated Value Estimator for Heterogeneous Baseline Covariates}\label{simu_eval_hete}
We next consider samples generated from the following model with heterogeneous baseline covariates:
\begin{equation*}
\begin{split}
&A_E \overset{I.I.D.}{\sim} \text{Bernoulli} \{\pi(X_E)\}, \quad X_E^{(1)}, \cdots,X_E^{(r)} \overset{I.I.D.}{\sim} \text{Uniform}[-2,2],\\
&A_U \overset{I.I.D.}{\sim} \text{Bernoulli} \{\pi(X_U)\}, \quad X_U^{(1)}, \cdots,X_U^{(r)} \overset{I.I.D.}{\sim} \text{Uniform}[-1,1.5],\\%\text{Truncated Normal}[0, 1, -2, 2],\\
&M_E=U^M(X_E)+ A_E C^M(X_E)+\epsilon^{E},\quad M_U=U^M(X_U)+ A_U C^M(X_U)+\epsilon^{U}.
\end{split}
\end{equation*}
Here, we consider the same logistic regression for the propensity score, i.e., $\text{logit}\{\pi(X)\} = 0.4+0.2X^{(1)} - 0.2X^{(2)}$. Similar to Section \ref{simu_eval}, we generate the outcome of interest for the primary sample only by $Y_E=U^Y(X_E)+A_EC^Y(X_E)+\epsilon^Y$. The noises of intermediate outcomes in the auxiliary sample are set to be $\epsilon^{U}\overset{I.I.D.}{\sim} \text{Uniform}[-1,1]$, while $\epsilon^{E}$ and $\epsilon^Y$ are generated from a bivariate normal distribution with mean zero, variance vector $[2, 1.5]$, and a positive correlation of 0.7.}
\begin{table}
\caption{Empirical results of the proposed CODA-HE method in comparison to the ODR based on the primary sample solely under Scenario 1 with {heterogeneous} baseline covariates.}\label{table:hete2}
\scalebox{0.9}{
\begin{tabular}{ccc|cc||cc|cc}
\hline
\hline
Method (Rule) &\multicolumn{2}{c|}{CODA ($d^{opt}$)} &\multicolumn{2}{c||}{CODA ($\widehat{d}_R$)} &\multicolumn{2}{c|}{ODR ($d^{opt}$)} &\multicolumn{2}{c}{ODR ($\widehat{d}_E$)} \\
\hline
$N_E=$ & $500$ & $1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$\\
\hline
\hline
True Value $V(\cdot)$ &\multicolumn{2}{c|}{0.999} & 0.975 & 0.984 & \multicolumn{2}{c|}{0.999} & 0.967 & 0.977 \\
\hline
\hline
Estimated Value $\widehat{V}(\cdot)$&1.001 & 0.992 & 1.059 & 1.029 & 1.006 & 0.993 & 1.095 & 1.050 \\
\hline
$SD\{\widehat{V}(\cdot)\}$&0.162 & 0.112 & 0.161 & 0.112 & 0.173 & 0.122 & 0.171 & 0.121 \\
\hline
${\mathbb{E}}\{\widehat{\sigma}\}$&0.172 &0.120 &0.171 &0.120 & 0.182 & 0.128 & 0.181 & 0.128 \\
\hline
Coverage Probabilities &96.8\% & 95.8\% & 97.0\% & 94.8\% & 96.8\% & 96.0\% & 94.2\% & 94.2\% \\
\hline
\hline
Improved Efficiency &5.5\% & 6.3\% &5.5\% & 6.3\%& / & / & / & / \\
\hline
$\widehat{\boldsymbol{\rho}}(\cdot)$&2.43 & 3.08 &2.39 & 3.06 & / & / & / & / \\
\hline
$\widehat{\Sigma}_M(\cdot)$ &3.19 & 4.67 &3.17 & 4.65 & / & / & / & / \\
\hline
\hline
\end{tabular}}
\end{table}
{We set the dimension of covariates as $r=2$ and the dimension of the intermediate outcome as $s=1$, and consider Scenarios 1 and 2 in Section \ref{simu_eval}. Using a similar procedure to that introduced in Section \ref{simu_eval}, we apply the proposed CODA-HE method ($\widehat{d}_R$), in comparison to the ODR method based on the primary sample solely ($\widehat{d}_E$). The empirical results are summarized in Tables 4 and 5, for Scenarios 1 and 2, respectively, aggregated over 500 replications. The coverage probabilities are calculated based on the $95\%$ CI in \eqref{ciaipw_hete}.}
\begin{table}
\caption{Empirical results of the proposed CODA-HE method in comparison to the ODR based on the primary sample solely under Scenario 2 with {heterogeneous} baseline covariates, where $\max_{d\in \Pi_1}V(d) = 1.251$.}\label{table:hete3}
\scalebox{0.9}{
\begin{tabular}{ccc|cc||cc|cc}
\hline
\hline
Method (Rule) &\multicolumn{2}{c|}{CODA ($d^{opt}$)} &\multicolumn{2}{c||}{CODA ($\widehat{d}_R$)} &\multicolumn{2}{c|}{ODR ($d^{opt}$)} &\multicolumn{2}{c}{ODR ($\widehat{d}_E$)} \\
\hline
$N_E=$ & $500$ & $1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$& $ 500$ & $ 1000$\\
\hline
\hline
True Value $V(\cdot)$ &\multicolumn{2}{c|}{1.333} & 1.232 & 1.239 & \multicolumn{2}{c|}{1.333} & 1.226 & 1.235 \\
\hline
\hline
Estimated Value $\widehat{V}(\cdot)$&1.325 & 1.320 & 1.317 & 1.288 & 1.329 & 1.322 & 1.350 & 1.312 \\
\hline
$SD\{\widehat{V}(\cdot)\}$&0.140 & 0.103 & 0.139 & 0.102 & 0.156 & 0.118 & 0.154 & 0.116 \\
\hline
${\mathbb{E}}\{\widehat{\sigma}\}$&0.148 & 0.104 & 0.148 & 0.104 & 0.162 & 0.114 & 0.161 & 0.114 \\
\hline
Coverage Probabilities &96.0\% & 95.0\% & 95.8\% & 95.6\% & 95.4\% & 93.6\% & 94.2\% & 93.6\% \\
\hline
\hline
Improved Efficiency &8.6\% & 8.8\% &8.1\% & 8.8\%& / & / & / & / \\
\hline
$\widehat{\boldsymbol{\rho}}(\cdot)$&2.72 & 3.44 & 2.68 & 3.39 & / & / & / & / \\
\hline
$\widehat{\Sigma}_M(\cdot)$ &3.56 & 5.18 & 3.52 &5.12& / &/ & / & / \\
\hline
\hline
\end{tabular}}
\end{table}
{It can be observed from Tables 4 and 5 that the proposed CODA-HE method is more efficient than the ODR method based on the primary sample solely, in all scenarios, under heterogeneous baseline covariates. Specifically, CODA-HE improves efficiency by 6.3\% in Scenario 1 and 8.8\% in Scenario 2 for $N_E = 1000$. In addition, the values under CODA-HE approach the true values as the sample size $N_E$ increases, yielding $V(\widehat{d}) = 0.984$ in Scenario 1 ($V(d^{opt}) = 0.999$) and $V(\widehat{d}) = 1.238$ in Scenario 2 $(\max_{d\in \Pi_1}V(d) = 1.251)$ when $N_E = 1000$. Finally, the coverage probabilities of the value estimator obtained by CODA-HE are close to the nominal level, which supports
the theoretical results in Theorem \ref{thm3_hete} for heterogeneous baseline covariates.}
\section{Real Data Analysis}\label{sec:5}
We illustrate the proposed method by application to data sources from the MIMIC-III clinical database \citep{goldberger2000physiobank,johnson2016mimic,biseda2020prediction} and the eICU collaborative research database \citep{goldberger2000physiobank,pollard2018eicu}.
Specifically, we treat the MIMIC-III dataset as the primary sample, while using the eICU data source as the auxiliary sample where the primary outcome of interest is unrecorded. As introduced in Section \ref{sec:1}, both the MIMIC-III and the eICU data focused on the same type of patients who suffered from sepsis. Yet, the MIMIC-III data and the eICU data were collected from {different locations during different periods}, and thus are from two {heterogeneous studies}. Hence, the proposed CODA method is desired here to develop a more efficient optimal decision rule by integrating these two samples for treatment decision-making.
{Both the MIMIC-III data and the eICU data collected information from ICU patients with sepsis disease, and thus contain common baseline covariates, the treatment, and intermediate outcomes. We consider $r=11$ common baseline covariates in both samples after dropping variables with high missing rates}: age (years), gender (0=female, 1=male), admission weights (kg), admission temperature (Celsius), Glasgow Coma Score (GCS), sodium amount (meq/L), glucose amount (mg/dL), blood urea nitrogen amount (BUN, mg/dL), creatinine amount (mg/dL), white blood cell count (WBC, E9/L), and total input amount (mL). Here, the treatment is coded as 1 if receiving the vasopressor, and 0 if receiving other medical supervisions such as IV fluid resuscitation.
Intermediate outcomes include the total urine output (mL) and the accumulated balance (mL) of metabolism for both samples. The outcome of interest ($Y_E$) is 0 if a patient died due to sepsis and 1 if a patient is still alive, observed only in the primary sample. This accords with the definition of $Y_E$ such that a larger $Y_E$ is better. After deleting abnormal values in the two datasets, we form the primary sample of interest consisting of $N_E=10746$ subjects, where 2242 patients were treated with the vasopressor while the remaining 8504 subjects were treated with other medical supervisions. The auxiliary sample consists of $N_U=7402$ subjects without information on the outcome of interest, among which 2005 patients were treated with the vasopressor, while the rest were treated with other medical supervisions.
\begin{figure}
\centering
\begin{subfigure}
\centering
\includegraphics[width=0.85\textwidth]{figs/feature1_check_fin.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=.85\textwidth]{figs/feature2_check_fin.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=.55\textwidth]{figs/outcome_check_fin.pdf}
\end{subfigure}
\caption{The box-plots for the shared variables in the MIMIC-III data and the eICU data.}
\label{fig:3}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}
\centering
\includegraphics[width=.48\textwidth]{figs/output_total_cio_check_fin.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[width=.48\textwidth]{figs/cumulated_balance_cio_check_fin.pdf}
\end{subfigure}
\caption{The density plots for the conditional mean of two intermediate outcomes in the MIMIC-III data and the eICU data. Left: for the total urine output (mL). Right: for the accumulated balance (mL) of metabolism. }
\label{fig:4}
\end{figure}
We illustrate the shared variables in the MIMIC-III data and the eICU data in Figure \ref{fig:3}. It can be seen from Figure \ref{fig:3} that there exist multiple variables that have a distinct pattern in each sample, including GCS, WBC, glucose amount, BUN, total input, and total output. This shows some degree of heterogeneity in these two samples. To check the validity of the proposed CIO assumption, we fit a deep neural network regressing the two intermediate outcomes on the baseline covariates and the treatment, separately for each sample. For each treatment-covariates pair $(x,a)$ in the support of the two samples, we estimate ${\mathbb{E}}[M_E|X_E=x, A_E=a]$ based on the fitted deep neural network from the primary sample and ${\mathbb{E}}[M_U|X_U=x, A_U=a]$ based on the fitted deep neural network from the auxiliary sample. The relative mean square error of the difference between the fitted conditional means of the two samples over the set of $(x,a)$ is 5.55\% for the total output and 22.60\% for the accumulated balance. This indicates that the conditional mean estimators for intermediate outcomes in the two samples are close. In addition, we demonstrate the densities of the conditional means of the total output and the accumulated balance in Figure \ref{fig:4}.
The results in Figure \ref{fig:4} show that the densities of the conditional means of intermediate outcomes are fairly close in the two samples. These findings provide some evidence that the CIO assumption may hold for the MIMIC-III and eICU data.
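The CIO check described above can be sketched as follows. For illustration only, we replace the deep neural network with an ordinary least-squares fit; all function and variable names here are ours, and the linear model is merely a stand-in for the nonparametric regression actually used in the analysis.

```python
import numpy as np

def fit_conditional_mean(X, A, M):
    # Regress the intermediate outcome M on baseline covariates X and
    # treatment A (with an intercept); returns the coefficient vector.
    Z = np.column_stack([np.ones(len(A)), X, A])
    beta, *_ = np.linalg.lstsq(Z, M, rcond=None)
    return beta

def relative_mse(X, A, beta_primary, beta_auxiliary):
    # Relative mean squared error of the difference between the two
    # fitted conditional means over a common set of (x, a) pairs.
    Z = np.column_stack([np.ones(len(A)), X, A])
    m_p, m_a = Z @ beta_primary, Z @ beta_auxiliary
    return float(np.sum((m_p - m_a) ** 2) / np.sum(m_p ** 2))
```

A small relative MSE on the pooled support, as for the total urine output above, is consistent with (though of course does not prove) the CIO assumption.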
Next, we apply CODA on the two samples in comparison to the ODR from the primary sample only. Here, {we consider the CODA method for homogeneous covariates and heterogeneous covariates, respectively. All} the decision rules are searched within the class of decision trees, using a similar procedure introduced in Section \ref{simu_eval}. {We consider two different sample sizes of the primary sample as $N_E\in\{5000,10746\}$.}
The results of the estimated value $\widehat{V}$, the estimated standard error $\widehat{\sigma}$, the improved efficiency (defined as the relative reduction in the estimated standard error of the CODA value estimator with respect to that of the ODR value estimator),
and the number of assignments to each treatment are summarized in Table \ref{table:6}. In addition, the last row of Table \ref{table:6} reports the assignment matching rate between CODA and the ODR method based on the primary sample solely.
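As a minimal sketch of how the last two summary statistics can be computed (the function names are ours, not from our implementation): the improved efficiency is the relative reduction in the estimated standard error, and the matching rate is the fraction of subjects receiving the same assignment under both rules.

```python
import numpy as np

def improved_efficiency(se_coda, se_odr):
    # Relative reduction in the estimated standard error of the CODA
    # value estimator with respect to that of the ODR value estimator.
    return (se_odr - se_coda) / se_odr

def matching_rate(rule_a, rule_b):
    # Fraction of subjects assigned the same treatment by both rules.
    rule_a, rule_b = np.asarray(rule_a), np.asarray(rule_b)
    return float(np.mean(rule_a == rule_b))
```

For instance, with the standard errors 0.0065 (CODA-HO) and 0.0068 (ODR) reported for $N_E=10746$, the relative reduction is about 4.4\%.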
\begin{table}
\caption{The real data analysis under the proposed CODA method and the ODR method based on the primary sample solely.}\label{table:6}
\scalebox{0.9}{
\begin{tabular}{cccc|ccc}
\hline
\hline
Sample Size&\multicolumn{3}{c|}{ $N_E=5000$} &\multicolumn{3}{c}{ $N_E=10746$} \\
\hline
Method & CODA-HO & CODA-HE& ODR& CODA-HO & CODA-HE & ODR \\
\hline
\hline
Estimated $\widehat{V}(\cdot)$ &0.204 &0.194&0.184&0.203 &0.200&0.192 \\
\hline
Estimated $\widehat{\sigma}$ &0.0090&0.0094& 0.0097 &0.0065&0.0067& 0.0068 \\
\hline
Improved Efficiency &7.2\%&3.1\%&/&4.4\%&1.5\%&/\\
\hline
Treatment 0&2967&2805&2671&5853&5665&5477\\
\hline
Treatment 1&2033&2195&2329&4893&5081&5269\\
\hline
Matching Rate & 87.5\%&93.7\%& /& 86.5\%&98.3\%& / \\
\hline
\hline
\end{tabular}}
\end{table}
Based on Table \ref{table:6}, CODA performs reasonably better than the original ODR under each different $N_E$. Specifically, with the full primary dataset, i.e., $N_E=10746$, {the proposed CODA-HO achieves a value of 0.203 with a smaller standard error of 0.0065, compared to a value of 0.192 with a standard error of 0.0068 under the ODR.
The efficiency is improved by 4.4\% owing to the CODA-HO. The rate of making the same decision between these two rules is 86.5\%. The CODA-HO assigns 4893 patients to treatment 1 and 5853 patients to the control, which is consistent with the competitive nature of these two treatments. On the other hand, the proposed CODA-HE achieves a value of 0.200 with a slightly smaller standard error of 0.0067, in contrast to the ODR based on the primary sample solely. The efficiency is improved by 1.5\% based on the CODA-HE. The decision rule under CODA-HE is closer to the ODR. In addition, the proposed CODAs achieve a greater improvement in efficiency when the sample size is smaller, namely 7.2\% for CODA-HO and 3.1\% for CODA-HE under $N_E=5000$. These findings are consistent with what we have observed in simulations.}
\section{Discussion}\label{sec:6}
In this paper, we proposed a calibration approach for more efficient optimal treatment decision-making by integrating multiple data sources {from heterogeneous studies} in which the primary outcome is only partially observed. Our proposed CODA method is easy and flexible to implement in practice {and has good interpretability. We establish the consistency and asymptotic normality of the calibrated value estimator under the CODA, which is more efficient than the original ODR using the primary sample solely. Particularly, the proposed CODA-HE builds a new framework to incorporate heterogeneous baseline covariates across samples through rebalancing by the posterior sampling probability, which avoids the density estimation of baseline covariates and the missingness specification in the joint sample.}
{It is noted that the magnitude of efficiency gain of CODA depends on $\boldsymbol{\rho}(d)$ or $\boldsymbol{\rho}_R(d)$, the correlation between the value estimator of the outcome of interest and the value difference estimator of intermediate outcomes in the two samples. Specifically, for the case with homogeneous baseline covariates, we have
\begin{eqnarray}\nonumber
\boldsymbol{\rho}(d) &=& E\Bigl({\mathbb{I}\{A_E=d(X_E)\}[Y_E - \mu_E\{X_E,d(X_E)\} ]\over{[A_E \pi_E (X_E)+(1-A_E)\{1-\pi_E(X_E)\}]^2}} \times [M_E - \theta\{X_E,d(X_E)\}]\Bigr)
\\\label{theo_form_rho}
& &+ E\Bigl( [\mu_E\{X_E,d(X_E)\} - V(d)]\times [\theta\{X_E,d(X_E)\} - W(d)]\Bigr),
\end{eqnarray}
while for the case with heterogeneous baseline covariates, we have
\begin{eqnarray*}
\boldsymbol{\rho}_R(d) &=&E\Bigl({\mathbb{I}\{A_E=d(X_E)\}\over{[A_E \pi_E (X_E)+(1-A_E)\{1-\pi_E(X_E)\}]\times [A_E \pi (X_E)+(1-A_E)\{1-\pi(X_E)\}]}} \\
& & \times \frac{1}{r \{X_E,d(X_E),M_E\}} [Y_E - \mu_E\{X_E,d(X_E)\} ] \times [M_E - \theta\{X_E,d(X_E)\}]\Bigr).
\end{eqnarray*}
In general, $\boldsymbol{\rho}(d)$ tends to be larger than $\boldsymbol{\rho}_R(d)$ due to the second summation term in $\boldsymbol{\rho}(d)$ in \eqref{theo_form_rho}, which partly explains why CODA-HO has relatively larger efficiency gain than CODA-HE over the ODR obtained using the primary sample solely as observed in both simulations and the real data application.}
There are several possible extensions we may consider in future work. First, we only consider two treatment options in this paper, while in applications it is common to have more than two options for decision making. Thus, a more general method with multiple treatments or even continuous decisions is desirable. Second, we can extend the proposed CODA method to dynamic treatment decision making, where each subject successively receives a treatment followed by intermediate outcomes, while the primary outcome of interest is observed in the primary sample only. {Third, we only consider the setting where two samples share the same set of baseline covariates so that the ignorability assumption holds in both samples. In practice, different samples from heterogeneous studies may not have exactly the same set of baseline covariates. To be specific, let $X_E=[X_E^{(1)},\cdots, X_E^{(r_1)}]^\top$ denote the $r_1$-dimensional individual baseline covariates in the primary sample, and let $X_U=[X_U^{(1)},\cdots, X_U^{(r_2)}]^\top$ denote the $r_2$-dimensional individual baseline covariates in the auxiliary sample. The ignorability assumption holds for its own set of covariates in each sample. Suppose the two samples share a common subset of baseline covariates with dimension $r_3 \leq \min(r_1, r_2)$, denoted as $X_C$.
Suppose that $X_C$ has the same joint distribution in two samples and the comparable
intermediate outcomes assumption holds for this common set of covariates $X_C$.
Then, we can modify the proposed calibrated value estimator
by calibrating only with this common set of baseline covariates in the two samples, while still searching for the ODR based on all available baseline covariates in the primary sample. We leave this for future research.}
\newpage
\bibliographystyle{agsm}
We are interested in a model of alignment of unit vectors. Our interest comes from the mechanism of alignment of self-propelled particles presented by Degond and Motsch in~\cite{degond2008continuum}, which is a time-continuous model inspired by the Vicsek model~\cite{vicsek1995novel} (in which the alignment process is discrete in time). In these models, the velocities of the particles, considered as unit vectors, try to align towards the average orientation of their neighbors and are subject to some angular noise. We want to study the simple case without spatial dependence and without noise. More precisely, at the level of the particle dynamics, we consider the deterministic part of the spatially homogeneous model of~\cite{bolley2012meanfield}, which corresponds to a regularized version of~\cite{degond2008continuum}: the particles align with the average velocity of the others (instead of dividing this average vector by its norm to get an averaged orientation). It reads as
\begin{equation}
\label{ode-intro}\frac{\d v_i}{\d t}=P_{v_i^\perp}J, \quad\text{ with } J=\frac1{N}\sum_{j=1}^Nv_j,
\end{equation}
where~$(v_i)_{1\leqslant i\leqslant N}$ are~$N$ unit vectors belonging to~$\mathbb{S}$, the unit sphere of~$\mathbb{R}^n$, and~$P_{v^\perp}$ is the projection on the orthogonal of a unit vector~$v\in\mathbb{S}$, given by~$P_{v^\perp}u=u-(v\cdot u) v$ for~$u\in\mathbb{R}^n$. This projection ensures that the velocities stay of norm one for all positive times. This system of equations can be seen as alignment towards the unit vector pointing in the same direction as~$J$ (the average of all velocities). Indeed the term~$P_{v^\perp}J$ is equal to~$\nabla_v(J\cdot v)$, where~$\nabla_v$ is the gradient operator on the unit sphere~$\mathbb{S}$. Therefore the dynamics of a particle following the equation~$\frac{\d v}{\d t}=\nabla_v(v\cdot J)$ corresponds to the maximization of this quantity~$v\cdot J$, which is maximal when~$v$ is aligned in the same direction as~$J$.
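A minimal numerical illustration of the particle system~\eqref{ode-intro} (our own sketch, not code associated with this paper): an explicit Euler step followed by renormalization keeps the velocities on the unit sphere, and for a generic initial condition all particles align with the mean direction.

```python
import numpy as np

def simulate_alignment(V0, dt=0.01, n_steps=2000):
    # V0: (N, n) array of unit vectors; integrates dv_i/dt = P_{v_i^perp} J
    # with J the empirical mean velocity, using P_{v^perp} J = J - (v.J) v.
    V = V0.copy()
    for _ in range(n_steps):
        J = V.mean(axis=0)
        V = V + dt * (J - (V @ J)[:, None] * V)
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # project back onto S
    return V

rng = np.random.default_rng(1)
V0 = rng.normal(size=(40, 3))
V0 /= np.linalg.norm(V0, axis=1, keepdims=True)
V = simulate_alignment(V0)
# |J| is nondecreasing along the flow; for a generic initial condition all
# particles end up aligned, so |J| approaches 1 at the final time.
```

This also illustrates the monotonicity of $t\mapsto|J(t)|$ proved below.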
At the kinetic level, we are interested in the evolution of a probability measure~$f(t,\cdot)$ on~$\mathbb{S}$ given by
\begin{equation}
\label{pde-intro}
\partial_tf+\nabla_v\cdot(fP_{v^\perp}J_f)=0,\quad\text{ with } J_f=\int_\mathbb{S}v f \d v,
\end{equation}
where~$\nabla_v\cdot$ is the divergence operator on the sphere~$\mathbb{S}$. The link between this evolution equation and the system of ordinary differential equations~\eqref{ode-intro} is that if the measure~$f$ is the so-called empirical distribution of the particles~$(v_i)_{1\leqslant i\leqslant N}$, given by~$f=\frac1{N}\sum_{i=1}^N\delta_{v_i}$, then it is a weak solution of the kinetic equation~\eqref{pde-intro} if and only if the vectors~$(v_i)_{1\leqslant i\leqslant N}$ are solutions of the system~\eqref{ode-intro} (see Remark~\ref{remark-ode-pde}). This kinetic equation~\eqref{pde-intro} corresponds to the spatially homogeneous version of the mean-field limit of~\cite{bolley2012meanfield} in which the diffusion coefficient has been set to zero. The case with a positive diffusion has been treated in detail in~\cite{frouvelle2012dynamics} by the authors of the present paper, and it exhibits a phase transition phenomenon: when the diffusion coefficient is greater than a precise threshold, all the solutions converge exponentially fast towards the uniform measure on the sphere~$\mathbb{S}$, and when it is smaller, all solutions except those for which~$J_f$ is initially zero converge exponentially fast to a non-isotropic steady-state (a von Mises distribution). When the diffusion coefficient tends to zero, the von Mises distributions converge to Dirac measures concentrated at one point of~$\mathbb{S}$. Therefore, we can expect that the solutions of~\eqref{pde-intro} converge to a Dirac measure. The main object of this paper is to make this statement precise, by proving the following theorem:
\begin{theorem}\label{thm-pde}
Let~$f_0$ be a probability measure on~$\mathbb{S}$ of~$\mathbb{R}^n$, and~$f\in C(\mathbb{R}_+,\mathcal{P}(\mathbb{S}))$ be the solution of~\eqref{pde-intro} with initial condition~$f(0,v)=f_0(v)$.
If~$J_f(0)\neq0$, then~$t\mapsto|J_f(t)|$ is nondecreasing, so~$\Omega(t)=\frac{J_f(t)}{|J_f(t)|}\in\mathbb{S}$ is well-defined for all times~$t\geqslant0$. Furthermore there exists~$\Omega_\infty\in\mathbb{S}$ such that~$\Omega(t)$ converges to~$\Omega_\infty$ as~$t\to+\infty$.
Finally, there exists a unique~$v_{\mathrm{back}}\in\mathbb{S}$ such that the solution of the differential equation~$\frac{\d v}{\d t}=P_{v^\perp}J_f(t)$ with initial condition~$v(0)=v_{\mathrm{back}}$ is such that~$v(t)\to-\Omega_\infty$ as~$t\to\infty$. Then, if we denote by~$m$ the mass of the singleton~$\{v_{\mathrm{back}}\}$ with respect to the measure~$f_0$, we have~$m<\frac12$ and~$f(t,\cdot)$ converges weakly as~$t\to\infty$ towards the measure~$(1-m)\delta_{\Omega_\infty}+m\delta_{-\Omega_\infty}$.
\end{theorem}
In particular, this theorem shows that if the initial condition~$f_0$ has no atoms and satisfies~$J_{f_0}\neq0$, then the measure~$f$ converges weakly to a Dirac mass at some~$\Omega_\infty\in\mathbb{S}$. Let us mention that there is no rate of convergence in this theorem. In general, there is no hope to have such a rate for an arbitrary initial condition (see Proposition~\ref{prop-no-rate}), but under regularity assumptions, one can expect to have an exponential rate of convergence (this is the case when the initial condition has some symmetries implying that~$\Omega(t)$ is constant, see Proposition~\ref{prop-rates-regular}).
We will also study in detail the system of ordinary differential equations~\eqref{ode-intro}. Since this is a particular case of~\eqref{pde-intro} in the case where~$f=\frac1{N}\sum_{i=1}^N\delta_{v_i}$ (see Remark~\ref{remark-ode-pde}), we can apply the main theorem, but now the measure~$f$ has atoms, and actually we will see that working directly with the differential equations allows us to obtain more precise results such as exponential rates of convergence. For instance the quantity~$\Omega(t)$ plays the role of a nearly conserved quantity, as it converges to~$\Omega_\infty$ at a higher rate than the convergence of the~$(v_i)_{1\leqslant i\leqslant N}$. More precisely, we will prove the following theorem:
\begin{theorem}\label{thm-ode}
Given~$N$ positive real numbers~$(m_i)_{1\leqslant i\leqslant N}$ with~$\sum_{i=1}^Nm_i=1$, and~$N$ unit vectors~$v_i^0\in\mathbb{S}$ (for~$1\leqslant i\leqslant N$) such that~$v_i^0\neq v_j^0$ for all~$i\neq j$, let~$(v_i)_{1\leqslant i\leqslant N}$ be the solution of the following system of ordinary differential equations :
\begin{equation}
\label{ode-with-mi}
\frac{\d v_i}{\d t}=P_{v_i^\perp}J, \text{ with } J(t)=\sum_{i=1}^Nm_iv_i(t),
\end{equation}
with the initial conditions~$v_i(0)=v_i^0$ for~$1\leqslant i\leqslant N$, and where~$P_{v_i^\perp}$ denotes the projection on the orthogonal of~$v_i$.
If~$J(0)\neq0$, then~$t\mapsto|J(t)|$ is nondecreasing, so~$\Omega(t)=\frac{J(t)}{|J(t)|}\in\mathbb{S}$ is well-defined for all times~$t\geqslant0$. Furthermore there exists~$\Omega_\infty\in\mathbb{S}$ such that~$\Omega(t)$ converges to~$\Omega_\infty$ as~$t\to+\infty$, and there are only two types of possible asymptotic regimes, which are described below.
\begin{itemize}
\item[(i)] All the vectors~$v_i$ are converging to~$\Omega_\infty$. Then this convergence occurs at an exponential rate~$1$, and~$\Omega$ is converging to~$\Omega_\infty$ at an exponential rate~$3$. More precisely, there exists~$a_i\in\{\Omega_\infty\}^\perp\subset\mathbb{R}^n$, for~$1\leqslant i\leqslant N$ such that~$\sum_{i=1}^Nm_ia_i=0$ and that, as~$t\to+\infty$,
\begin{align*}
v_i(t)&=(1-|a_i|^2e^{-2t})\Omega_\infty+e^{-t}a_i +O(e^{-3t})\quad \text{for }1\leqslant i\leqslant N,\\
\Omega(t)&=\Omega_\infty+O(e^{-3t}).
\end{align*}
\item[(ii)] There exists~$i_0$ such that~$v_{i_0}$ converges to~$-\Omega_\infty$. Then~$m_{i_0}<\frac12$, and if we denote~$\lambda=1-2m_{i_0}$, the vector~$v_{i_0}$ converges to~$-\Omega_\infty$ at an exponential rate~$3\lambda$. Furthermore, all the other vectors~$v_i$ for~$i\neq i_0$ converge to~$\Omega_\infty$ at a rate~$\lambda$, and the vector~$\Omega$ converges to~$\Omega_\infty$ at a rate~$3\lambda$. More precisely, there exists~$a_i\in\{\Omega_\infty\}^\perp\subset\mathbb{R}^n$, for~$i\neq i_0$ such that~$\sum_{i\neq i_0}m_ia_i=0$ and that, as~$t\to+\infty$,
\begin{align*}
v_i(t)&=(1-|a_i|^2e^{-2\lambda t})\Omega_\infty+e^{-\lambda t}a_i +O(e^{-3\lambda t})\quad \text{for }i\neq i_0,\\
v_{i_0}(t)&=-\Omega_\infty+O(e^{-3\lambda t}),\\
\Omega(t)&=\Omega_\infty+O(e^{-3\lambda t}).
\end{align*}
\end{itemize}
\end{theorem}
Notice that the original system~\eqref{ode-intro} can be put as~\eqref{ode-with-mi} with~$m_i=\frac1N$, but the assumption~$v_i^0\neq v_j^0$ for~$i\neq j$ may not be satisfied. Up to renumbering particles and grouping those starting in the same position by setting~$m_i=\frac{k}N$ where~$k$ is the number of particles sharing the same initial condition, we can always fall into the framework of~\eqref{ode-with-mi} with distinct initial conditions. We can finally remark that this system~\eqref{ode-with-mi} is still a particular case of the kinetic equation~\eqref{pde-intro} for a measure given by~$f=\sum_{i=1}^Nm_i\delta_{v_i}$ (see once again Remark~\ref{remark-ode-pde}).
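The grouping remark above can be checked numerically. In the sketch below (again with our own names), duplicating a particle $k$ times in the unweighted system~\eqref{ode-intro} produces, up to floating-point error, the same mean velocity trajectory as giving it mass $m_i=\frac{k}{N}$ in~\eqref{ode-with-mi}.

```python
import numpy as np

def step(V, m, dt=0.01):
    # One Euler step of dv_i/dt = P_{v_i^perp} J with J = sum_i m_i v_i,
    # followed by renormalization to stay on the unit sphere.
    J = (m[:, None] * V).sum(axis=0)
    V = V + dt * (J - (V @ J)[:, None] * V)
    return V / np.linalg.norm(V, axis=1, keepdims=True)

rng = np.random.default_rng(2)
U = rng.normal(size=(4, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)

# Unweighted system with particle 0 repeated 3 times (N = 6, m_i = 1/6)...
V_dup = np.vstack([U[0], U[0], U[0], U[1:]])
m_dup = np.full(6, 1 / 6)
# ...versus the grouped system with masses (3/6, 1/6, 1/6, 1/6).
V_grp = U.copy()
m_grp = np.array([3, 1, 1, 1]) / 6

for _ in range(500):
    V_dup = step(V_dup, m_dup)
    V_grp = step(V_grp, m_grp)

J_dup = (m_dup[:, None] * V_dup).sum(axis=0)
J_grp = (m_grp[:, None] * V_grp).sum(axis=0)
```

Since the duplicated particles follow identical deterministic updates, their trajectories coincide, and the two average velocities agree.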
Let us conclude this introduction by saying that these models have also been introduced and studied in contexts different from that of self-propelled particles. Alignment on the sphere has been introduced as a model of opinion formation in~\cite{aydogdu2017opinion,caponigro2015nonlinear}. The kinetic equation~\eqref{pde-intro} with a diffusion term corresponds to the evolution of rodlike polymers with dipolar potential~\cite{fatkullin2005critical}. Finally the two-dimensional case, where~$\mathbb{S}$ is the unit circle, can correspond to the evolution of identical Kuramoto oscillators. The results we present here were first presented in detail (with the same proofs as in the present paper) by the first author in the CIMPA Summer School “Mathematical Modeling in Biology and Medicine” in June 2016. They are somewhat similar to those of~\cite{benedetto2015complete} in dimension two, in the context of Kuramoto oscillators, a work that was brought to our attention during the presentation of Bastien Fernandez in the workshop “Life Sciences” of the trimester “Stochastic Dynamics out of equilibrium” in May 2017. Very recently, a work~\cite{ha2018relaxation} on a generalization of Kuramoto oscillators in higher dimensions, the so-called Lohe oscillators, recovers the same kind of results, although not using exactly the same techniques and not obtaining the precise estimates of Theorem~\ref{thm-ode}. The estimates given by Proposition~\ref{prop-rates-regular} are also new, as far as we know.
This paper is divided into two main parts. After this introduction, Section~\ref{section-pde} is devoted to the kinetic equation~\eqref{pde-intro}. It is divided into two subsections, the first one being dedicated to the proof of Theorem~\ref{thm-pde}, and the second one giving more precise estimates of convergence in the case of symmetries in the initial condition. Section~\ref{section-ode} concerns the system of differential equations~\eqref{ode-with-mi} and the proof of Theorem~\ref{thm-ode}. Although some conclusions can be drawn using Theorem~\ref{thm-pde} thanks to~Remark~\ref{remark-ode-pde}, we try to make the two parts independent and the proofs self-contained, so the reader interested in Theorem~\ref{thm-ode} can directly jump to this last section.
\section{The continuum model}\label{section-pde}
\subsection{Proof of Theorem~\ref{thm-pde}}
We start with a proposition about well-posedness of the kinetic equation~\eqref{pde-intro}. We proceed for instance as in~\cite{spohn1991large}. We denote by~$\mathcal{P}(\mathbb{S})$ the set of probability measures on~$\mathbb{S}$. In this set we consider the Wasserstein distance~$W_1$ (also called bounded Lipschitz distance) given by~$W_1(\mu,\nu)=\sup_{\varphi\in\mathrm{Lip}_1(\mathbb{S})}|\int_{\mathbb{S}}\varphi\,\d \mu-\int_\mathbb{S}\varphi\,\d\nu|$ for~$\mu$ and~$\nu$ in~$\mathcal{P}(\mathbb{S})$, where~$\mathrm{Lip}_1$ is the set of functions~$\varphi$ such that for all~$u,v$ in~$\mathbb{S}$, we have~$|\varphi(u)-\varphi(v)|\leqslant|v-u|$. This distance corresponds to the weak convergence of probability measures:~$W_1(\mu_n,\mu)\to0$ if and only if for any continuous function~$\varphi:\mathbb{S}\to\mathbb{R}$, we have~$\int_\mathbb{S}\varphi\, \d \mu_n\to\int_\mathbb{S}\varphi\,\d \mu$. The well-posedness result is stated in the space~$C(\mathbb{R}_+,\mathcal{P}(\mathbb{S}))$ of families of probability measures weakly continuous with respect to time:
\begin{proposition} Given~$T>0$ and~$f_0\in\mathcal{P}(\mathbb{S})$, there exists a unique weak solution~$f\in C([0,T],\mathcal{P}(\mathbb{S}))$ to the equation~\eqref{pde-intro} with initial condition~$f_0$, in the sense that for all~$t\in[0,T]$, and for all~$\varphi\in C^1(\mathbb{S})$, we have
\begin{equation}
\frac{\d}{\d t}\int_\mathbb{S}\varphi(v)f(t,v)\,\d v=\int_\mathbb{S}J_{f(t,\cdot)}\cdot\nabla_v\varphi(v)f(t,v)\,\d v,\label{pde-weak}\end{equation}
where we use the notation~$f(t,v)\,\d v$ even if~$f(t,\cdot)$ is not absolutely continuous with respect to the Lebesgue measure on~$\mathbb{S}$, and~$J_{f(t,\cdot)}=\int_\mathbb{S}vf(t,v)\,\d v$.
\end{proposition}
\begin{proof} Notice that the term~$P_{v^\perp}J_f\cdot\nabla_v\varphi$ that we obtain when doing a formal integration by parts of~\eqref{pde-intro} against a test function~$\varphi$ is replaced by~$J_f\cdot\nabla_v\varphi$ in the weak formulation~\eqref{pde-weak}, since the gradient on the sphere at a point~$v$ is already orthogonal to~$v$. The proof of this proposition relies on the fact that the linear equation corresponding to~\eqref{pde-intro} when replacing~$J_f$ by a given external “alignment field”~$\mathcal{J}\in C(\mathbb{R}_+,\mathbb{R}^n)$ is also well-posed. Indeed the solution to this linear equation, namely
\begin{equation}\partial_tf+\nabla_v\cdot(P_{v^\perp}\mathcal{J}(t)f)=0\quad \text{with}\quad f(0,\cdot)=f_0,\label{pde-linear}
\end{equation}
is given by the image measure of~$f_0$ by the flow~$\Phi_t$ of the differential equation~$\frac{\d v}{\d t}=P_{v^\perp}\mathcal{J}(t)$. In detail, if~$\Phi_t$ is the solution of
\begin{equation}
\begin{cases}\frac{\d \Phi_t}{\d t}=P_{\Phi_t^\perp}\mathcal{J}(t),\\\Phi_0(v)=v,
\end{cases}\label{ode-flow}
\end{equation}
then the solution~$f(t,\cdot)=\Phi_t\#f_0$ is characterized by the fact that
\begin{equation}\label{pushforward}
\forall\varphi\in C(\mathbb{S}), \int_\mathbb{S}\varphi(v)f(t,v)\,\d v=\int_\mathbb{S}\varphi(\Phi_t(v))f_0(v)\,\d v.
\end{equation}
Since the differential equation~\eqref{ode-flow} satisfies the assumptions for which the Cauchy-Lipschitz theorem applies, it is well-known (see for instance~\cite{ambrosio2008existence}) that the solution of~\eqref{pde-linear} is unique and given by~$\Phi_t\#f_0$.
Therefore, if, given~$\mathcal{J}\in C([0,T],\mathbb{R}^n)$, we denote by~$\Psi(\mathcal{J})$ the solution of the linear equation~\eqref{pde-linear}, solving the nonlinear kinetic equation~\eqref{pde-intro} corresponds to finding a fixed point of the map~$f\in C([0,T],\mathcal{P}(\mathbb{S}))\mapsto\Psi(J_f)$, or equivalently of the map~$\mathcal{J}\in C([0,T],B)\mapsto J_{\Psi(\mathcal{J})}$, where~$B$ is the closed unit ball of~$\mathbb{R}^n$ (recall that if~$f\in\mathcal{P}(\mathbb{S})$, then~$|J_f|\leqslant1$). The space~$E=C([0,T],B)$ is a complete metric space if the distance is given by~$d_T(\mathcal{J},\bar{\mathcal{J}})=\sup_{t\in[0,T]}|\mathcal{J}(t)-\bar{\mathcal{J}}(t)|e^{-\beta t}$, for an arbitrary~$\beta>0$. Using the fact that~$|(P_{v^\perp}-P_{\bar{v}^\perp})u|\leqslant2|v-\bar{v}|$ if~$|u|\leqslant1$, by a simple Grönwall estimate, if~$\mathcal{J},\bar{\mathcal{J}}\in E$ and~$\Phi_t,\bar{\Phi}_t$ are the associated flows given by~\eqref{ode-flow}, we obtain
\[|\Phi_t-\bar{\Phi}_t|\leqslant\int_0^t|\mathcal{J}(s)-\bar{\mathcal{J}}(s)|e^{2(t-s)}\d s.\]
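In more detail (a standard computation, spelled out here for convenience), this estimate follows from subtracting the two flow equations~\eqref{ode-flow}:
\[\frac{\d}{\d t}\bigl(\Phi_t-\bar{\Phi}_t\bigr)=\bigl(P_{\Phi_t^\perp}-P_{\bar{\Phi}_t^\perp}\bigr)\mathcal{J}(t)+P_{\bar{\Phi}_t^\perp}\bigl(\mathcal{J}(t)-\bar{\mathcal{J}}(t)\bigr),\]
so that, using~$|(P_{v^\perp}-P_{\bar{v}^\perp})u|\leqslant2|v-\bar{v}|$ for~$|u|\leqslant1$ (recall that~$\mathcal{J}$ takes values in~$B$) and the fact that~$P_{\bar{\Phi}_t^\perp}$ is a contraction, we get~$\frac{\d}{\d t}|\Phi_t-\bar{\Phi}_t|\leqslant2|\Phi_t-\bar{\Phi}_t|+|\mathcal{J}(t)-\bar{\mathcal{J}}(t)|$; Grönwall's lemma with~$\Phi_0=\bar{\Phi}_0$ then yields the bound above.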
Finally, we get (using the notation~$J_{f}(t)=J_{f(t,\cdot)}$)
\[\begin{split}|J_{\Psi(\mathcal{J})}(t)-J_{\Psi(\bar{\mathcal{J}})}(t)|&=\left|\int_\mathbb{S} v\, \Psi(\mathcal{J})(t,v)\,\d v -\int_\mathbb{S} v\, \Psi(\bar{\mathcal{J}})(t,v)\,\d v\right|\\&=\left|\int_\mathbb{S}[\Phi_t(v)-\bar{\Phi}_t(v)]f_0(v)\, \d v\right|\\
&\leqslant\int_0^t|\mathcal{J}(s)-\bar{\mathcal{J}}(s)|e^{2(t-s)}\d s\leqslant d_t(\mathcal{J},\bar{\mathcal{J}})\int_0^te^{2(t-s)+\beta s}\d s.
\end{split} \]
Therefore when~$\beta>2$ we get~$|J_{\Psi(\mathcal{J})}(t)-J_{\Psi(\bar{\mathcal{J}})}(t)|e^{-\beta t}\leqslant\frac1{\beta-2}d_t(\mathcal{J},\bar{\mathcal{J}})$, so if we take~$\beta>3$, we get that the map~$\mathcal{J}\mapsto J_{\Psi(\mathcal{J})}$ is indeed a contraction mapping from~$E$ to~$E$, which gives the existence and uniqueness of the fixed point.
\qed
\end{proof}
\begin{remark}\label{remark-sobolev} The well-posedness of the kinetic equation~\eqref{pde-intro} can also be established in Sobolev spaces, by means of harmonic analysis on the sphere and standard Galerkin method (see~\cite{frouvelle2012dynamics}).
\end{remark}
\begin{remark}\label{remark-ode-pde} Using the weak formulation~\eqref{pde-weak} and the definition of the pushforward measure~\eqref{pushforward}, it is possible to show that a convex combination of Dirac masses, of the form~$f(t,\cdot)=\sum_{i=1}^Nm_i\delta_{v_i}(t)$ with~$m_i\geqslant0$ for~$1\leqslant i\leqslant N$ and~$\sum_{i=1}^Nm_i=1$ is a weak solution of~\eqref{pde-intro} if and only if the~$(v_i)_{1\leqslant i\leqslant N}$ are solutions of the system of differential equations~\eqref{ode-with-mi}.
\end{remark}
We are now ready to prove some qualitative properties of the solution to the kinetic equation~\eqref{pde-intro}. Without further notice, we will denote by~$f$ this solution, and by~$\Phi_t$ the flow~\eqref{ode-flow} associated to~$\mathcal{J}=J_f$. The first property is a simple lemma related to the monotonicity of~$|J_f|$.
\begin{lemma}\label{lem-increasing} If~$f$ is a solution of~\eqref{pde-intro}, then~$|J_f|$ is nondecreasing in time. Therefore if~$J_{f_0}\neq0$, the “average orientation”~$\Omega(t)=\frac{J_f(t)}{|J_f(t)|}$ is well defined and smooth. Furthermore its time derivative~$\dot{\Omega}$ tends to~$0$ as~$t\to\infty$.
\end{lemma}
\begin{proof} Notice that if~$J_{f_0}=0$, then~$f(t,\cdot)=f_0$ for all~$t$. To compute the evolution of~$J_f$, we use~\eqref{pde-weak} with~$\varphi(v)=v\cdot e$ for an arbitrary vector~$e$ in~$\mathbb{R}^n$. We obtain, using the fact that~$\nabla_v(v\cdot e)=P_{v^\perp}e$:
\[e\cdot\frac{\d J_f}{\d t}=J_f\cdot\int_\mathbb{S}P_{v^\perp}ef(t,v)\,\d v=e\cdot M_fJ_f,\]
where~$M_f$ is the matrix given by~$\int_\mathbb{S}P_{v^\perp}f(t,v)\,\d v$ (it is a symmetric matrix with eigenvalues in~$[0,1]$, as convex combination of orthogonal projections). Since~$M_f$ is continuous in time, then~$J_f$ is~$C^1$, and by the same procedure we can compute the evolution of~$M_f$, which will depend on higher moments of~$f$, to get that~$J_f$ is smooth. More precisely, since any moment is uniformly bounded (the sphere is compact and~$f(t,\cdot)$ is a probability density for all~$t$), we get that all derivatives of~$J_f$ are uniformly bounded in time. Since
\[\frac12\frac{\d |J_f|^2}{\d t}=J_f\cdot M_fJ_f=\int_\mathbb{S}[|J_f|^2-(v\cdot J_f)^2]f(t,v)\,\d v\geqslant0,\]
we get the first part of the lemma.
From now on we suppose that~$J_{f_0}\neq0$, so that~$\Omega(t)$ is well defined. The function~$\frac12\frac{\d |J_f|^2}{\d t}=|J_f|^2\,\Omega\cdot M_f\Omega$ is nonnegative, smooth, integrable on~$\mathbb{R}_+$ (since~$|J_f|$ is bounded by~$1$), and has bounded derivative; it is then a classical exercise to show that it must converge to~$0$ as~$t\to\infty$. This shows that~$\Omega\cdot M_f\Omega\to0$ as~$t\to\infty$. Let us now compute the evolution of~$\Omega$. We get
\begin{equation}\label{omegadot}
\dot{\Omega}=\frac{1}{|J_f|}\frac{\d J_f}{\d t}-\frac{\d |J_f|}{\d t}\frac{J_f}{|J_f|^2}=M_f\Omega-(\Omega\cdot M_f\Omega)\Omega=P_{\Omega^\perp}(M_f\Omega).
\end{equation}
Since~$M_f$ has eigenvalues in~$[0,1]$, we get that~$|M_f\Omega|^2=\Omega\cdot M_f^2\Omega\leqslant\Omega\cdot M_f\Omega$, therefore~$M_f\Omega\to0$ as~$t\to\infty$. So we get that~$\dot{\Omega}\to0$ as~$t\to\infty$. \qed
\end{proof}
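The sign identity at the heart of this proof, $J_f\cdot M_fJ_f=\int_\mathbb{S}[|J_f|^2-(v\cdot J_f)^2]f(t,v)\,\d v\geqslant0$, is easy to sanity-check on an empirical measure. The sketch below (illustrative, ours) assembles $M_f=I-\sum_im_iv_iv_i^T$ for random weighted points on the sphere and compares both sides.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 4, 50
v = rng.normal(size=(N, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)    # points on the unit sphere
m = rng.random(N)
m /= m.sum()                                      # convex weights

J = m @ v
M = np.eye(n) - (m[:, None] * v).T @ v            # M_f = I - sum_i m_i v_i v_i^T

lhs = J @ M @ J
rhs = np.sum(m * (J @ J - (v @ J) ** 2))
print(lhs, rhs)                                   # equal up to rounding, nonnegative

eig = np.linalg.eigvalsh(M)                       # eigenvalues lie in [0, 1]
print(eig.min(), eig.max())
```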
\begin{remark}\label{remark-gradient-flow} The fact that~$|J_f|$ is nondecreasing can be illuminated by the theory of gradient flows in probability spaces~\cite{ambrosio2008gradient}. Indeed, the kinetic equation~\eqref{pde-intro} corresponds to the gradient flow of the functional~$-\frac12|J_f|^2$ for the Wasserstein distance~$W_2$, so the evolution amounts to minimizing this quantity in time. We also remark that since~$|J_f|$ is nondecreasing, by an appropriate change of time we can recover the equation~$\partial_tf+\nabla_v\cdot(f P_{v^\perp}\Omega)=0$, which corresponds to the spatially homogeneous version of~\cite{degond2008continuum} without noise. This equation can also be interpreted as a gradient flow~\cite{figalli2018global}.
\end{remark}
The fact that~$\dot{\Omega}\to0$ is not sufficient to prove that~$\Omega$ converges to some~$\Omega_\infty$: we would need~$\dot{\Omega}\in L^1(\mathbb{R}_+)$, whereas so far we only have~$\dot{\Omega}\in L^2(\mathbb{R}_+)$ (since we have seen in the proof of Lemma~\ref{lem-increasing} that~$|J_f|^2\,\Omega\cdot M_f\Omega$ is integrable in time). To fill this gap, one solution is to compute the second derivative of~$\Omega$ and, more precisely, to obtain an estimate on~$|\dot{\Omega}|$ matching the assumption of the following lemma, which essentially says that any bounded solution of the differential equation~$y'=y+g$ with~$g$ integrable has to be integrable.
\begin{lemma}\label{lem-L1} Let~$y:\mathbb{R}_+\to\mathbb{R}$ be a nonnegative function such that~$y^2$ is~$C^1$ and bounded. We suppose that there exists a function~$g\in L^1(\mathbb{R}_+)$ such that for all~$t\in\mathbb{R}_+$, we have
\begin{equation}
\frac12\frac{\d}{\d t}y^2=y^2+y\,g.\label{eq-exploding-ode-L1}
\end{equation}
Then~$y\in L^1(\mathbb{R}_+)$.
\end{lemma}
\begin{proof}
Let~$t\geqslant0$ such that~$y(t)>0$. We set~$T=\sup\{s\geqslant t, y>0\text{ on }[t,s]\}$ (we may have~$T=+\infty$).
We have that~$y$ is~$C^1$, positive and bounded on~$[t,T)$, and satisfies the differential equation~$y'=y+g$, therefore by Duhamel’s formula we have, for~$s\in[t,T)$:
\[y(s)e^{-s}-y(t)e^{-t}=\int_t^sg(u)e^{-u}\d u.\]
Letting~$s\to T$ if~$T<+\infty$ (in which case~$y(T)=0$ by maximality of~$T$), or~$s\to+\infty$ otherwise (in which case~$y(s)e^{-s}\to0$ since~$y$ is bounded), we obtain
\[y(t)=-\int_t^Tg(u)e^{t-u}\d u\leqslant\int_t^\infty|g(u)|e^{t-u}\d u.\]
This estimate being valid for any~$t\in\mathbb{R}_+$ (trivially so if~$y(t)=0$), we have by Fubini’s theorem that
\[\int_0^\infty y(t)\d t\leqslant\int_0^\infty\int_t^\infty|g(u)|e^{t-u}\d u\, \d t=\int_0^\infty|g(u)|(1-e^{-u})\d u,\]
which is finite by integrability of~$g$.
\qed
\end{proof}
We are now ready to prove the convergence of~$\Omega$.
\begin{proposition}\label{prop-omega-converges}If~$J_{f_0}\neq0$, then~$\dot{\Omega}\in L^1(\mathbb{R}_+)$, and therefore there exists~$\Omega_\infty\in\mathbb{S}$ such that~$\Omega\to\Omega_\infty$ as~$t\to\infty$.
\end{proposition}
\begin{proof}
We first compute the derivative of~$M_f$. For convenience, we use the notation~$\langle\varphi(v)\rangle_f$ for~$\int_\mathbb{S}\varphi(v) f(t,v)\,\d v$. Therefore we have~$J_f=\langle v\rangle_f$ and~$M_f=\langle P_{v^\perp}\rangle_f$, and the weak formulation~\eqref{pde-weak} reads
\begin{equation*}\frac{\d}{\d t}\langle\varphi(v)\rangle_f=J_f\cdot\langle\nabla_v\varphi(v)\rangle_f.\label{pde-weak-short}
\end{equation*}
We have, for fixed~$e_1,e_2\in\mathbb{R}^n$:
\[e_1\cdot M_fe_2=\langle e_1\cdot P_{v^\perp}e_2\rangle_f=e_1\cdot e_2-\langle(e_1\cdot v)(e_2\cdot v)\rangle_f.\]
Therefore, since~$\nabla_v(e\cdot v)=P_{v^\perp}e$, we obtain
\begin{align*}
\frac{\d}{\d t}(e_1\cdot M_fe_2)&=-J_f\cdot\langle(e_2\cdot v)P_{v^\perp}e_1+(e_1\cdot v)P_{v^\perp}e_2\rangle_f\\&=e_1\cdot[-\langle(e_2\cdot v)P_{v^\perp}J_f\rangle_f-\langle J_f\cdot P_{v^\perp}e_2 \, v\rangle_f],
\end{align*}
so the term in between the brackets is the derivative of~$M_fe_2$. We then get
\begin{align}
\frac{\d}{\d t}(M_f\Omega)&=M_f\dot{\Omega}-|J_f|\langle(\Omega\cdot v)P_{v^\perp}\Omega\rangle_f-|J_f|\langle\Omega\cdot P_{v^\perp}\Omega\, v\rangle_f\nonumber\\
&=M_f\dot{\Omega}+2|J_f|\langle(\Omega\cdot v)^2v\rangle_f-|J_f|[\langle(\Omega\cdot v)\Omega+v\rangle_f]\nonumber\\
&=M_f\dot{\Omega}+2|J_f|\langle(\Omega\cdot v)^2v\rangle_f-2|J_f|^2\Omega.\label{ddtMOmega}
\end{align}
Thanks to~\eqref{omegadot}, we finally have
\begin{align*}\frac{\d}{\d t}\dot{\Omega}&=\frac{\d}{\d t}(M_f\Omega)-(\Omega\cdot M_f\Omega)\dot{\Omega}-(\dot{\Omega}\cdot M_f\Omega)\Omega-\Omega\cdot\frac{\d}{\d t}(M_f\Omega)\, \Omega\\
&=P_{\Omega^\perp}\frac{\d}{\d t}(M_f\Omega)-(\Omega\cdot M_f\Omega)\dot{\Omega}-(\dot{\Omega}\cdot M_f\Omega)\Omega.
\end{align*}
Since~$\Omega$ and~$\dot{\Omega}$ are orthogonal, we have some simplifications by taking the dot product with~$\dot{\Omega}$ and using~\eqref{ddtMOmega}:
\begin{align}
\dot{\Omega}\cdot\frac{\d}{\d t}\dot{\Omega}&=\dot{\Omega}\cdot\frac{\d}{\d t}(M_f\Omega)-(\Omega\cdot M_f\Omega)|\dot{\Omega}|^2\nonumber\\
&=\dot{\Omega}\cdot M_f\dot{\Omega}+2|J_f|\langle(\Omega\cdot v)^2\, \dot{\Omega}\cdot v\rangle_f-(\Omega\cdot M_f\Omega)|\dot{\Omega}|^2\nonumber\\
&=|\dot{\Omega}|^2-\langle(\dot{\Omega}\cdot v)^2\rangle_f-(\Omega\cdot M_f\Omega)|\dot{\Omega}|^2+2|J_f|\langle(\Omega\cdot v)^2\, \dot{\Omega}\cdot v\rangle_f.\label{ddtOmegadot2}
\end{align}
If we define~$u$ to be the unit vector~$\frac{\dot{\Omega}}{|\dot{\Omega}|}$ when~$|\dot{\Omega}|\neq0$ and to be zero if~$|\dot{\Omega}|=0$, and we set
\begin{equation}\label{defg}
g(t)=-|\dot{\Omega}|[\langle(u\cdot v)^2\rangle_f+(\Omega\cdot M_f\Omega)]+2|J_f|\langle(\Omega\cdot v)^2\,u\cdot v\rangle_f,
\end{equation}
we get that the formula~\eqref{ddtOmegadot2} takes the following form, corresponding to~\eqref{eq-exploding-ode-L1} with~$y=|\dot{\Omega}|$:
\[\frac12\frac{\d}{\d t}|\dot{\Omega}|^2=|\dot{\Omega}|^2+|\dot{\Omega}|\,g(t).\]
Our goal is to show that~$g\in L^1(\mathbb{R}_+)$ in order to apply Lemma~\ref{lem-L1}. Indeed, thanks to~\eqref{omegadot}, we have that~$|\dot{\Omega}|\leqslant1$ (recall that~$M_f$ is a symmetric matrix with eigenvalues in~$[0,1]$), and~$|\dot{\Omega}|^2$ is~$C^1$.
As remarked in the proof of Lemma~\ref{lem-increasing}, the quantity~$|J_f|^2\,\Omega\cdot M_f\Omega$ is integrable in time; since~$|J_f|\geqslant|J_{f_0}|>0$, this gives that~$\Omega\cdot M_f\Omega=\langle1-(\Omega\cdot v)^2\rangle_f$ is integrable. Since~$u$ is collinear with~$\dot{\Omega}$, which is orthogonal to~$\Omega$, we have that~$P_{\Omega^\perp}u=u$, and therefore we get (using the fact that~$|u|\leqslant1$, since~$|u|$ is~$1$ or~$0$)
\[\langle(u\cdot v)^2\rangle_f=\langle(u\cdot P_{\Omega^\perp}v)^2\rangle_f\leqslant\langle|P_{\Omega^\perp}v|^2\rangle_f=\langle1-(\Omega\cdot v)^2\rangle_f.\]
This gives that the first term in the definition~\eqref{defg} of~$g$ is integrable in time. Finally, since~$u\cdot\Omega=0$, we have that~$\langle u\cdot v\rangle_f=0$, and we get
\[|\langle(\Omega\cdot v)^2\,u\cdot v\rangle_f|=|\langle(1-(\Omega\cdot v)^2)u\cdot v\rangle_f|\leqslant\langle1-(\Omega\cdot v)^2\rangle_f,\]
since~$1-(\Omega\cdot v)^2\geqslant0$ and~$|u\cdot v|\leqslant1$ for all~$v\in\mathbb{S}$. This gives that the last term in the definition~\eqref{defg} of~$g$ is also integrable in time. By virtue of Lemma~\ref{lem-L1}, we then get that~$|\dot{\Omega}|$ is integrable. Therefore~$\Omega(t)=\Omega(0)+\int_0^t\dot{\Omega}(s)\,\d s$ converges as~$t\to+\infty$.
\qed
\end{proof}
In order to control the distance between~$f$ and~$\delta_{\Omega_\infty}$, we now need to understand the properties of the flow of the differential equation~$\frac{\d v}{\d t}=P_{v^\perp} J_f$.
\begin{proposition}\label{prop-vback}Let~$\mathcal{J}$ be a continuous function~$\mathbb{R}_+\to\mathbb{R}^n$ such that~$t\mapsto|\mathcal{J}(t)|$ is positive, bounded and nondecreasing, and~$\Omega(t)=\frac{\mathcal{J}(t)}{|\mathcal{J}(t)|}$ converges to~$\Omega_\infty\in\mathbb{S}$ as~$t\to\infty$.
Then there exists a unique~$v_{\mathrm{back}}\in\mathbb{S}$ such that the solution of the differential equation~$\frac{\d v}{\d t}=P_{v^\perp} \mathcal{J}$ with initial condition~$v(0)=v_{\mathrm{back}}$ satisfies~$v(t)\to-\Omega_\infty$ as~$t\to+\infty$. Furthermore, for all~$v_0\neq v_{\mathrm{back}}$, the solution of this differential equation with initial condition~$v(0)=v_0$ converges to~$\Omega_\infty$ as~$t\to+\infty$.
\end{proposition}
\begin{proof} The outline of the proof is the following: we first show that any solution satisfies either~$v(t)\to-\Omega_\infty$ or~$v(t)\to\Omega_\infty$, then we construct~$v_{\mathrm{back}}$, and finally we prove that it is unique. We still denote by~$\Phi_t$ the flow of the differential equation~\eqref{ode-flow}.
We first notice that~$|\mathcal{J}(t)|$ converges to some~$\lambda>0$, therefore~$\mathcal{J}(t)$ converges to~$\lambda\Omega_\infty$ as~$t\to\infty$. Therefore the solution of the equation~$\frac{\d v}{\d t}=P_{v^\perp} \mathcal{J}$ with initial condition~$v(0)=v_0$ is also the solution of a differential equation of the form
\begin{equation}
\frac{\d v}{\d t}=\lambda P_{v^\perp} \Omega_\infty + r_{v_0}(t),\label{ode-asymp}
\end{equation}
where the remainder term~$r_{v_0}(t)$ converges to~$0$ as~$t\to\infty$, uniformly in~$v_0\in\mathbb{S}$. Let us suppose that~$v(t)$ does not converge to~$-\Omega_\infty$ (that is to say~$v(t)\cdot\Omega_\infty$ does not converge to~$-1$), and let us prove that in this case~$v(t)\to\Omega_\infty$. Taking the dot product with~$\Omega_\infty$ in~\eqref{ode-asymp}, we obtain
\begin{equation}
\frac{\d}{\d t}(v\cdot\Omega_\infty)=\lambda[1-(v\cdot\Omega_\infty)^2] + \Omega_\infty\cdot r_{v_0}(t),\label{ode-asymp-vOmega}
\end{equation}
so we can use a comparison principle with the one-dimensional differential equation~$y'=\lambda(1-y^2)-\varepsilon$. Since~$\lambda(1-y^2)-\varepsilon$ is positive for~$|y|<\sqrt{1-\frac{\varepsilon}{\lambda}}$ and negative for~$|y|>\sqrt{1-\frac{\varepsilon}{\lambda}}$, any solution starting with~$y(t_0)>-\sqrt{1-\frac{\varepsilon}{\lambda}}$ converges to~$\sqrt{1-\frac{\varepsilon}{\lambda}}$ as~$t\to+\infty$. Since~$v(t)\cdot\Omega_\infty$ does not converge to~$-1$, there exists~$\delta>0$ such that~$v(t)\cdot\Omega_\infty>-1+\delta$ for arbitrarily large times~$t$. For any~$\varepsilon>0$ sufficiently small (such that~$-\sqrt{1-\frac{\varepsilon}{\lambda}}<-1+\delta$), there exists~$t_0\geqslant0$ such that~$v(t_0)\cdot\Omega_\infty>-1+\delta$ and~$|\Omega_\infty\cdot r_{v_0}(t)|\leqslant\varepsilon$ for all~$t\geqslant t_0$. By comparison principle, we then get that~$\liminf_{t\to+\infty} v(t)\cdot\Omega_\infty\geqslant\sqrt{1-\frac{\varepsilon}{\lambda}}$. Since this is true for any~$\varepsilon>0$ sufficiently small, we then get that~$v(t)\cdot\Omega_\infty$ converges to~$1$, that is to say~$v(t)\to\Omega_\infty$ as~$t\to+\infty$.
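As a quick numerical aside illustrating this comparison argument (our sketch, with the arbitrary values~$\lambda=1$,~$\varepsilon=0.1$): every solution of~$y'=\lambda(1-y^2)-\varepsilon$ started above the unstable equilibrium~$-\sqrt{1-\varepsilon/\lambda}$ indeed converges to~$\sqrt{1-\varepsilon/\lambda}$.

```python
import numpy as np

lam, eps = 1.0, 0.1
target = np.sqrt(1.0 - eps / lam)      # stable equilibrium of y' = lam(1-y^2) - eps

def solve(y0, T=30.0, dt=1e-3):
    """Explicit Euler integration of y' = lam*(1 - y^2) - eps."""
    y = y0
    for _ in range(int(T / dt)):
        y += dt * (lam * (1.0 - y * y) - eps)
    return y

# all starting points above -sqrt(1 - eps/lam) ~ -0.9487 flow to `target`
finals = {y0: solve(y0) for y0 in (-0.94, -0.5, 0.0, 0.99)}
print(finals)
```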
Let us now prove that if~$v(t)$ converges to~$\Omega_\infty$, then there exists a neighborhood of~$v_0$ such that the convergence to~$\Omega_\infty$ of solutions starting in this neighborhood is uniform in time. This is done thanks to the same comparison principle. We fix~$\delta>0$ and~$\varepsilon>0$ such that~$-1+\delta>-\sqrt{1-\frac{\varepsilon}{\lambda}}$. We take~$t_0\geqslant0$ such that~$v(t_0)\cdot\Omega_\infty>-1+\delta$ and~$|\Omega_\infty\cdot r_{\tilde{v}_0}(t)|\leqslant\varepsilon$ for any~$\tilde{v}_0\in\mathbb{S}$ and~$t\geqslant t_0$. By continuity of the flow of the equation~$\frac{\d v}{\d t}=P_{v^\perp}\mathcal{J}$, there exists a neighborhood~$B$ of~$v_0$ in~$\mathbb{S}$ such that for any~$\tilde{v}_0\in B$, the solution~$\tilde{v}(t)=\Phi_t(\tilde{v_0})$ of this equation with initial condition~$\tilde{v}_0$ satisfies~$\tilde{v}(t_0)\cdot\Omega_\infty>-1+\delta$. We now look at the equation~$y'=\lambda(1-y^2)-\varepsilon$ starting with~$y(t_0)=-1+\delta$, which converges to~$\sqrt{1-\frac{\varepsilon}{\lambda}}>1-\delta$. There exists~$T$ such that~$y(t)\geqslant1-\delta$ for all~$t\geqslant T$. Therefore, by comparison principle with~\eqref{ode-asymp-vOmega} (where~$v_0$ is replaced by~$\tilde{v}_0$), we get that for all~$\tilde{v}_0\in B$, the solution~$\tilde{v}$ satisfies~$\tilde{v}(t)\cdot\Omega_\infty\geqslant1-\delta$ for all~$t\geqslant T$.
We are now ready to construct~$v_{\mathrm{back}}$. We take~$(t_n)$ a sequence of increasing times such that~$t_n\to+\infty$ and define~$v_{\mathrm{back}}^n$ as the solution at time~$t=0$ of the backwards in time differential equation~$\frac{\d v^n}{\d t}=P_{(v^n)^\perp}\mathcal{J}$ with terminal condition~$v^n(t_n)=-\Omega_\infty$, that is to say~$v_{\mathrm{back}}^n=\Phi_{t_n}^{-1}(-\Omega_\infty)$. Up to extracting a subsequence, we can assume that~$v_{\mathrm{back}}^n$ converges to some~$v_{\mathrm{back}}\in\mathbb{S}$ and we set~$v(t)=\Phi_t(v_{\mathrm{back}})$. By the first part of the proof, we have that either~$v(t)\to\Omega_\infty$ or~$v(t)\to-\Omega_\infty$ as~$t\to+\infty$. The first case is incompatible with the uniform convergence in time. Indeed, in that case, we would have a neighborhood~$B$ of~$v_{\mathrm{back}}$ and a time~$T$ such that for all~$t\geqslant T$ and all~$\tilde{v}\in B$,~$\Phi_t(\tilde{v})\cdot\Omega_\infty\geqslant0$ (by taking~$\delta=1$ in the previous paragraph). Since we can take~$n$ such that~$t_n\geqslant T$ and~$v_{\mathrm{back}}^n\in B$, this is in contradiction with the fact that~$\Phi_{t_n}(v_{\mathrm{back}}^n)=-\Omega_\infty$.
It remains to prove that~$v_{\mathrm{back}}$ is unique (which implies that~$\Phi_t^{-1}(-\Omega_\infty)$ actually converges to~$v_{\mathrm{back}}$ as~$t\to+\infty$, thanks to the previous paragraph). This is due to a phenomenon of repulsion of two solutions~$v(t)$ and~$\tilde{v}(t)$ when they are close to~$-\Omega(t)$. Indeed, they satisfy
\[\frac{\d}{\d t}v\cdot\tilde{v}=v\cdot P_{\tilde{v}^\perp}\mathcal{J}+\tilde{v}\cdot P_{v^\perp}\mathcal{J}=\mathcal{J}\cdot(v+\tilde{v})(1-v\cdot\tilde{v}),\]
which can be written, since~$\|v-\tilde{v}\|^2=2\,(1-v\cdot\tilde{v})$ as
\begin{equation}
\frac{\d}{\d t}\|v-\tilde{v}\|^2=\gamma(t)\|v-\tilde{v}\|^2,\label{odevvtilde}
\end{equation}
where~$\gamma(t)=-\mathcal{J}(t)\cdot(v(t)+\tilde{v}(t))$. Let us suppose that both~$v(t)=\Phi_t(v_0)$ and~$\tilde{v}(t)=\Phi_t(\tilde{v}_0)$ converge to~$-\Omega_\infty$ as~$t\to+\infty$. Since~$\mathcal{J}(t)\to\lambda\Omega_\infty$ as~$t\to+\infty$, we have~$\gamma(t)\to2\lambda>0$ as~$t\to+\infty$. Therefore the only bounded solution of the linear differential equation~\eqref{odevvtilde} is the constant~$0$, therefore we have~$v=\tilde{v}$, and thus~$v_0=\tilde{v}_0$.
\qed
\end{proof}
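The dichotomy can also be observed numerically. In the sketch below (ours), we take a constant field~$\mathcal{J}=\lambda\Omega_\infty$, for which~$v_{\mathrm{back}}=-\Omega_\infty$ exactly: every other initial point flows to~$\Omega_\infty$.

```python
import numpy as np

Omega = np.array([0.0, 0.0, 1.0])          # Omega_inf
lam = 0.8
Jfield = lam * Omega                       # constant field J(t) = lam * Omega_inf

def flow(v0, T=30.0, dt=1e-3):
    """Euler + renormalization for dv/dt = P_{v^perp} J."""
    v = np.array(v0, dtype=float)
    for _ in range(int(T / dt)):
        v = v + dt * (Jfield - (v @ Jfield) * v)
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(2)
finals = []
for _ in range(5):
    v0 = rng.normal(size=3)
    v0 /= np.linalg.norm(v0)
    finals.append(flow(v0) @ Omega)        # ~ +1: generic convergence to Omega_inf
back = flow(-Omega) @ Omega                # = -1: the backward trajectory
print(finals, back)
```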
We are now ready to prove the last part of Theorem~\ref{thm-pde}.
\begin{proposition}Let~$v_{\mathrm{back}}$ be given by Proposition~\ref{prop-vback} with~$\mathcal{J}=J_f$ (we suppose~$J_{f_0}\neq0$). We denote by~$m=\int_\mathbb{S}\mathbf{1}_{v=v_{\mathrm{back}}}f_0(v)\d v$ the initial mass of~$\{v_{\mathrm{back}}\}$. Then~$m<\frac12$ and~$W_1(f(t,\cdot),(1-m)\delta_{\Omega_\infty}+m\delta_{-\Omega_\infty})\to0$ as~$t\to+\infty$.
\end{proposition}
\begin{proof}
We write~$f_\infty=(1-m)\delta_{\Omega_\infty}+m\delta_{-\Omega_\infty}$. Let~$\varphi\in\mathrm{Lip}_1(\mathbb{S})$. We have
\[\begin{split}\int_\mathbb{S}\varphi(v)f_\infty(v)\,\d v&=m\varphi(-\Omega_\infty)+(1-m)\varphi(\Omega_\infty)\\ &=m\varphi(-\Omega_\infty)+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}\varphi(\Omega_\infty)f_0(v)\,\d v,\end{split}\]
and~$\int_\mathbb{S}\varphi(v)f(t,v)\,\d v=\int_\mathbb{S}\varphi(\Phi_t(v))f_0(v)\,\d v$ (recall that~$f(t,\cdot)=\Phi_t\#f_0$ is characterized by~\eqref{pushforward}, where~$\Phi_t$, defined in~\eqref{ode-flow} is the flow of the differential equation~$\frac{\d v}{\d t}=P_{v^\perp}\mathcal{J}$). Therefore we get
\begin{equation}\label{eqphisplitted}\int_\mathbb{S}\varphi(v)f(t,v)\d v=m\varphi(\Phi_t(v_{\mathrm{back}}))+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}\varphi(\Phi_t(v))f_0(v)\,\d v.
\end{equation}
We then obtain
\[\begin{split}\Big|\int_\mathbb{S}&\varphi(v)f(t,v)\,\d v-\int_\mathbb{S}\varphi(v)f_\infty(v)\,\d v\Big|\\
&\leqslant m|\varphi(\Phi_t(v_{\mathrm{back}}))-\varphi(-\Omega_\infty)|+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}|\varphi(\Phi_t(v))-\varphi(\Omega_\infty)|f_0(v)\,\d v\\
&\leqslant m|\Phi_t(v_{\mathrm{back}})+\Omega_\infty|+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}|\Phi_t(v)-\Omega_\infty|f_0(v)\,\d v, \end{split} \]
since~$\varphi\in\mathrm{Lip}_1(\mathbb{S})$. We finally get \begin{equation}\label{estimateW1}W_1(f(t,\cdot),f_\infty)\leqslant m|\Phi_t(v_{\mathrm{back}})+\Omega_\infty|+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}|\Phi_t(v)-\Omega_\infty|f_0(v)\,\d v.
\end{equation}
Now, by Proposition~\ref{prop-vback}, as~$t\to+\infty$ we have~$\Phi_t(v)\to\Omega_\infty$ for all~$v\neq v_{\mathrm{back}}$, and~$\Phi_t(v_{\mathrm{back}})\to-\Omega_\infty$. Therefore by the dominated convergence theorem, the estimate~\eqref{estimateW1} gives that~$W_1(f(t,\cdot),f_\infty)\to0$ as~$t\to+\infty$. It remains to prove that~$m<\frac12$, which comes from Proposition~\ref{prop-omega-converges}, which gives that~$\frac{J_f}{|J_f|}\to\Omega_\infty$ as~$t\to+\infty$. Indeed, applying~\eqref{eqphisplitted} with~$\varphi(v)=v$, we get
\[J_f(t)=m\Phi_t(v_{\mathrm{back}})+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}\Phi_t(v)f_0(v)\,\d v,\] which gives by dominated convergence that, as~$t\to+\infty$, we have
\[J_f(t)\to-m\Omega_\infty+\int_\mathbb{S}\mathbf{1}_{v\neq v_{\mathrm{back}}}\Omega_\infty f_0(v)\,\d v=(1-2m)\Omega_\infty.\]
Since~$\frac{J_f(t)}{|J_f(t)|}\to\Omega_\infty$ as~$t\to+\infty$, we get~$1-2m>0$.
\qed
\end{proof}
\subsection{Symmetries and rates of convergence}
This subsection is dedicated to the study of rates of convergence, based on somewhat explicit solutions in the case where~$\Omega$ is constant in time, which happens when the initial condition has certain symmetries.
\begin{proposition} Let~$G$ be a group of orthogonal transformations under which~$f_0$ is invariant (that is to say~$f_0\circ g=f_0$ for all~$g\in G$) and such that the only common fixed points on~$\mathbb{S}$ of the elements of~$G$ are two opposite unit vectors that we call~$\pm e_n$. Then the solution~$f(t,\cdot)$ of the partial differential equation~\eqref{pde-intro} is also invariant under all elements of~$G$. Furthermore if~$J_{f_0}\neq0$, then~$J_{f}(t)=\alpha(t)e_n$ with~$\alpha$ positive (up to exchanging~$e_n$ and~$-e_n$), and~$\Omega(t)$ is constantly equal to~$e_n$.
\end{proposition}
\begin{proof} The first part of the proposition comes from the fact that~$t\mapsto f(t,\cdot)\circ g$ is also a solution of~\eqref{pde-intro} (which is well-posed) with the same initial condition. Then, by invariance and the change of variables~$v\mapsto g^{-1}v$, we have~$gJ_{f(t,\cdot)}=\int_\mathbb{S}gv\,f(t,v)\,\d v=\int_\mathbb{S}v\,f(t,g^{-1}v)\,\d v=J_{f(t,\cdot)}$ for all~$g\in G$. Therefore~$\Omega(t)$ is a common fixed point of the elements of~$G$ and must be equal to~$\pm e_n$; since~$\Omega$ is continuous, it is constant in time.\qed
\end{proof}
Let us mention two simple examples of this kind of symmetry: when~$f_0(v)$ only depends on~$v\cdot e_n$ ($G$ is then the set of isometries having~$e_n$ as fixed point), or when~$f_0(\sin{\theta}\,w+\cos{\theta}\,e_n)=f_0(-\sin{\theta}\,w+\cos{\theta}\,e_n)$ for all~$\theta$ and all unit vectors~$w\perp e_n$ ($G$ then consists of the identity and the reflection~$v\mapsto2(e_n\cdot v)e_n-v$).
Let us now do some preliminary computations in the case where~$\Omega$ is constant in time. We work in an orthonormal basis~$(e_1,\dots,e_n)$ of~$\mathbb{R}^n$ whose last vector is~$\Omega=e_n$, and we write~$J_f(t)=\alpha(t)e_n$, with~$t\mapsto\alpha(t)$ positive and nondecreasing. We will use the stereographic projection
\begin{equation}\label{eq-def-s}s:\quad \begin{matrix}\mathbb{S}\setminus\{-e_n\}&\to&\mathbb{R}^{n-1}\\v&\mapsto&s(v)=\frac{1}{1+v\cdot e_n}P_{e_n^\perp}v,
\end{matrix}
\end{equation}
where we identify~$P_{e_n^\perp}v$ with its first~$n-1$ coordinates. This is a diffeomorphism between~$\mathbb{S}\setminus\{-e_n\}$ and~$\mathbb{R}^{n-1}$, and its inverse is given by
\begin{equation}\label{eq-def-p}p:\quad
\begin{matrix}\mathbb{R}^{n-1}&\to&\mathbb{S}\setminus\{-e_n\}\subset\mathbb{R}^{n-1}\times\mathbb{R}\\z&\mapsto&p(z)=(\frac{2}{1+|z|^2}\,z,\frac{1-|z|^2}{1+|z|^2}).
\end{matrix}
\end{equation}
If~$\varphi$ is an integrable function on~$\mathbb{S}$, the change of variable for this diffeomorphism reads
\begin{equation}\label{chg-variable}\int_\mathbb{S}\varphi(v)\,\d v=c_n^{-1}\int_{\mathbb{R}^{n-1}}\frac{\varphi(p(z))}{(1+|z|^2)^{n-1}}\,\d z,
\end{equation}
where the normalization constant is~$c_n=\int_{\mathbb{R}^{n-1}}\frac{\d z}{(1+|z|^2)^{n-1}}$.
If~$v$ is a solution to the differential equation~$\frac{\d v}{\d t}=\alpha(t)P_{v^\perp}e_n$ with~$v\neq-e_n$, a simple computation shows that~$z=s(v)$ satisfies the differential equation~$\frac{\d z}{\d t}=-\alpha(t)z$. Therefore, if we write~$\lambda(t)=\int_0^t\alpha(\tau)\,\d \tau$, we have an explicit expression for the solution~$f$ of the aggregation equation~\eqref{pde-linear}: the pushforward formula~\eqref{pushforward} is given, when~$f_0$ has no atom at~$-e_n$, by
\begin{equation}\label{pushforward-p}\forall\varphi\in C(\mathbb{S}), \int_\mathbb{S}\varphi(v)f(t,v)\,\d v=c_n^{-1}\int_{\mathbb{R}^{n-1}}\frac{\varphi(p(ze^{-\lambda(t)}))f_0(p(z))}{(1+|z|^2)^{n-1}}\, \d z.
\end{equation}
In particular, we have
\begin{align}\nonumber1-\alpha(t)&=1-J_f(t)\cdot e_n=\int_\mathbb{S}(1-v\cdot e_n)f(t,v)\,\d v\\ \label{oneminusalpha}&=c_n^{-1}\int_{\mathbb{R}^{n-1}}\frac{2|z|^2e^{-2\lambda(t)}f_0(p(z))}{(1+|z|^2e^{-2\lambda(t)})(1+|z|^2)^{n-1}}\, \d z. \end{align}
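These formulas are easy to cross-check numerically (a sketch of ours): the maps~$s$ and~$p$ of~\eqref{eq-def-s}--\eqref{eq-def-p} are mutually inverse, and the stereographic image of a trajectory of~$\frac{\d v}{\d t}=\alpha(t)P_{v^\perp}e_n$ contracts like~$e^{-\lambda(t)}$.

```python
import numpy as np

def s(v):                       # stereographic projection from -e_n
    return v[:-1] / (1.0 + v[-1])

def p(z):                       # its inverse
    q = 1.0 + z @ z             # q = 1 + |z|^2
    return np.append(2.0 * z / q, (2.0 - q) / q)

rng = np.random.default_rng(3)
z0 = rng.normal(size=2)
roundtrip = s(p(z0))            # should give back z0

# integrate dv/dt = alpha P_{v^perp} e_n with constant alpha, so lambda(t) = alpha*t
alpha, dt, T = 0.5, 1e-4, 4.0
e_n = np.array([0.0, 0.0, 1.0])
v = p(z0)
for _ in range(int(T / dt)):
    v = v + dt * alpha * (e_n - v[-1] * v)       # P_{v^perp} e_n = e_n - (v.e_n) v
    v /= np.linalg.norm(v)
print(s(v), z0 * np.exp(-alpha * T))             # z(T) = z0 * e^{-lambda(T)}
```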
We are now ready to state the first proposition regarding the rate of convergence towards~$\Omega_\infty$: in the framework of Theorem~\ref{thm-pde}, there is no hope of obtaining a rate of convergence of~$f(t,\cdot)$ with respect to the~$W_1$ distance without further assumptions on the regularity of~$f_0$, even if it has no atoms (in which case~$f(t,\cdot)\to\delta_{\Omega_\infty}$ as~$t\to+\infty$). More precisely, the following proposition gives the construction of a solution decaying arbitrarily slowly to~$\delta_{\Omega_\infty}$, in contrast with results of local stability of Dirac masses for other models of alignment on the sphere~\cite{degond2014local}, for which, as long as the initial condition is close enough to~$\delta_{\Omega_\infty}$, the solution converges exponentially fast in Wasserstein distance.
\begin{proposition} \label{prop-no-rate} Given a smooth decreasing function~$t\mapsto g(t)$ converging to~$0$ (arbitrarily slowly) as~$t\to+\infty$, and such that~$g(0)<\frac12$, there exists a probability density function~$f_0$ such that the solution~$f(t,\cdot)$ of~\eqref{pde-intro} converges weakly to~$\delta_{\Omega_\infty}$, but such that~$W_1(f(t,\cdot),\delta_{\Omega_\infty})\geqslant g(t)$ for all~$t\geqslant0$.
\end{proposition}
\begin{proof}
We will construct~$f_0$ as a function of the form~$f_0(v)=h(|s(v)|)$, where the stereographic projection~$s$ is defined in~\eqref{eq-def-s}.
Let us prove that the following choice of~$h$ works for~$\varepsilon>0$ sufficiently small:
\[h(r)=b_n\tfrac{(1+r^2)^{n-1}}{r^{n-2}}\Big[\tfrac{1-g(0)}{\varepsilon}\,\mathbf{1}_{0<r<\varepsilon}-\tfrac{g'(\ln r)}{r}\,\mathbf{1}_{r\geqslant1}\Big],\]
where the normalization constant is~$b_n=\int_{\mathbb{R}_+}\frac{r^{n-2}\,\d r}{(1+r^2)^{n-1}}$. First of all,~$f_0$ is a probability density, since we have, thanks to~\eqref{chg-variable}
\[\begin{split}\int_\mathbb{S}f_0(v) \d v&=\frac{\int_{\mathbb{R}^{n-1}}\frac{h(|z|)\,\d z}{(1+|z|^2)^{n-1}}}{\int_{\mathbb{R}^{n-1}}\frac{\d z}{(1+|z|^2)^{n-1}}}=\frac{\int_0^{+\infty}\frac{h(r)r^{n-2}\,\d r}{(1+r^2)^{n-1}}}{\int_0^{+\infty}\frac{r^{n-2}\,\d r}{(1+r^2)^{n-1}}}=b_n^{-1}\int_0^{+\infty}\frac{h(r)r^{n-2}\,\d r}{(1+r^2)^{n-1}}\\
&=\int_0^\varepsilon\tfrac{1-g(0)}{\varepsilon}\,\d r-\int_1^{+\infty}\tfrac{g'(\ln r)}{r}\,\d r=1-g(0)-[g(\ln r)]_1^{+\infty}=1.
\end{split}\]
By symmetry, we have that~$J_f(t)=\alpha(t)e_n$. Let us check that~$\alpha(0)>0$. Proceeding as in formula~\eqref{oneminusalpha}:
\[1-\alpha(t)=b_n^{-1}\int_0^{+\infty}\frac{2r^2e^{-2\lambda(t)}h(r)r^{n-2}\,\d r}{(1+r^2e^{-2\lambda(t)})(1+r^2)^{n-1}}.\]
We therefore get
\[\begin{split}1-\alpha(0)&=\int_0^{\varepsilon}\frac{2(1-g(0))r^2 \d r}{(1+r^2)\varepsilon}-\int_1^{+\infty}g'(\ln r)\frac{2r}{1+r^2}\d r\\
&\leqslant\frac{2\varepsilon^2}3(1-g(0))-2\int_1^\infty g'(\ln r)\frac{\d r}{r}=2g(0)+\frac{2\varepsilon^2}3(1-g(0)),
\end{split}\]
which is strictly less than~$1$ as long as~$g(0)<\frac12$ and~$\varepsilon$ is sufficiently small. Therefore in this case we have~$\alpha(0)>0$. This means that~$\Omega(t)=e_n=\Omega_\infty$ for all time~$t$, and thanks to Theorem~\ref{thm-pde}, since~$f_0$ has no atoms, the solution~$f(t,\cdot)$ converges weakly to~$\delta_{\Omega_\infty}$ as~$t\to+\infty$.
Let us also remark that~$W_1(f(t,\cdot),\delta_{e_n})=\int_{\mathbb{S}}|v-e_n|f(t,v)\,\d v$ (see the proof of the forthcoming Proposition~\ref{prop-rates-regular}), and since we have~$1-v\cdot e_n\leqslant|v-e_n|$, we obtain~$1-\alpha(t)\leqslant W_1(f(t,\cdot),\delta_{e_n})$. Therefore, to prove that the convergence of~$f$ towards~$\delta_{\Omega_\infty}$ is as slow as~$g(t)$, it only remains to prove that~$1-\alpha(t)\geqslant g(t)$. We have~$\lambda(t)\leqslant t$, and so when~$r\geqslant e^{t}$, we get~$re^{-\lambda(t)}\geqslant re^{-t}\geqslant1$. Since~$x\mapsto\frac{2x}{1+x}$ is increasing, we get~$\frac{2r^2e^{-2\lambda(t)}}{1+r^2e^{-2\lambda(t)}}\geqslant1$. We therefore get
\[1-\alpha(t)\geqslant-\int_{e^t}^{+\infty}g'(\ln r)\frac{2re^{-2\lambda(t)}}{(1+r^2e^{-2\lambda(t)})}\d r\geqslant-\int_{e^t}^{+\infty}\frac{g'(\ln r)\d r}{r}=g(t),\]
which ends the proof.\qed
\end{proof}
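As a sanity check on the construction (a numerical sketch of ours, with the illustrative choice~$g(t)=\frac25e^{-t}$, which is smooth, decreasing, and satisfies~$g(0)<\frac12$), the total mass of~$f_0$ computed from the two pieces of~$h$ is indeed~$1$; the tail integral is evaluated after the substitution~$u=\ln r$ used in the proof.

```python
import numpy as np

# illustrative choice (ours): g(t) = 0.4 * exp(-t), decreasing with g(0) = 0.4 < 1/2
g0 = 0.4
gprime = lambda t: -0.4 * np.exp(-t)

# mass of f_0 = (bump on (0, eps)) + int_1^inf -g'(ln r)/r dr;
# the bump integrates exactly to 1 - g(0), and u = ln r turns the tail
# integral into int_0^inf -g'(u) du = g(0)
du = 1e-3
u = (np.arange(20000) + 0.5) * du        # midpoint rule on [0, 20]
mass = (1.0 - g0) + np.sum(-gprime(u)) * du
print(mass)                               # ~ 1: f_0 is a probability density
```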
We conclude this subsection with more precise estimates of the rate of convergence in various Wasserstein distances when~$\Omega$ is constant in time and the initial condition has a density with respect to the Lebesgue measure which is bounded above and below. We write~$a(t)\asymp b(t)$ whenever there exist two positive constants~$c_1,c_2$ such that~$c_1b(t)\leqslant a(t)\leqslant c_2b(t)$ for all~$t\geqslant0$.
We recall the definition of the Wasserstein distance~$W_2$, for two probability measures~$\mu$ and~$\nu$ on~$\mathbb{S}$:
\[W_2^2(\mu,\nu)=\inf_{\pi}\int_{\mathbb{S}\times\mathbb{S}}|v-w|^2\d\pi(v,w),\]
where the infimum is taken over the probability measures~$\pi$ on~$\mathbb{S}\times\mathbb{S}$ with first and second marginals respectively equal to~$\mu$ and~$\nu$.
\begin{proposition} \label{prop-rates-regular} Suppose that~$f_0$ has a density with respect to the Lebesgue measure satisfying~$m\leqslant f_0(v)\leqslant M$ for all~$v$ (for some~$0<m<M$), with~$J_{f_0}\neq0$ and such that~$\Omega(t)=e_n$ is constant in time. Then we have
\begin{align*}
W_1(f(t,\cdot),\delta_{e_n})&\asymp\begin{cases}(1+t)e^{-t}&\text{ if }n=2,\\e^{-t}&\text{ if }n\geqslant3,
\end{cases}\\
W_2(f(t,\cdot),\delta_{e_n})&\asymp\begin{cases}e^{-\frac12t}&\text{ if }n=2,\\\sqrt{1+t}\,e^{-t}&\text{ if }n=3,\\e^{-t}&\text{ if }n\geqslant4.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
Let us first give explicit formulas for~$W_1(f(t,\cdot),\delta_{e_n})$ and~$W_2(f(t,\cdot),\delta_{e_n})$. If~$\varphi\in\mathrm{Lip}_1(\mathbb{S})$, we have
\[\left|\int_\mathbb{S}\varphi(v)f(t,v)\,\d v-\varphi(e_n)\right|\leqslant\int_\mathbb{S}|\varphi(v)-\varphi(e_n)|f(t,v)\,\d v\leqslant\int_\mathbb{S}|v-e_n|f(t,v)\,\d v.\]
Therefore, by taking the supremum, we get~$W_1(f(t,\cdot),\delta_{e_n})\leqslant\int_\mathbb{S}|v-e_n|f(t,v)\,\d v$. Furthermore, by taking~$\varphi(v)=|v-e_n|$, we get that this inequality is an equality. The explicit expression of~$W_2(f(t,\cdot),\delta_{e_n})$ comes from the fact that the only probability measure on~$\mathbb{S}\times\mathbb{S}$ with marginals~$f(t,\cdot)$ and~$\delta_{e_n}$ is the product measure~$f(t,\cdot)\otimes\delta_{e_n}$, and therefore we have~$W_2^2(f(t,\cdot),\delta_{e_n})=\int_\mathbb{S}|v-e_n|^2f(t,v)\,\d v$. Using the fact that~$|v-e_n|^2=2-2v\cdot e_n$ and the definition~\eqref{eq-def-p} of~$p$, we get~$|p(z)-e_n|=\frac{2|z|}{\sqrt{1+|z|^2}}$. Finally, using~\eqref{pushforward-p}, we obtain
\begin{equation}\label{W1explicit}W_1(f(t,\cdot),\delta_{e_n})= c_n^{-1}\int_{\mathbb{R}^{n-1}}\frac{2|z|e^{-\lambda(t)}f_0(p(z))}{\sqrt{1+|z|^2e^{-2\lambda(t)}}(1+|z|^2)^{n-1}}\, \d z,
\end{equation}
and, as in~\eqref{oneminusalpha}:
\begin{equation}\label{W2explicit}W_2^2(f(t,\cdot),\delta_{e_n})=2(1-\alpha(t))= c_n^{-1}\int_{\mathbb{R}^{n-1}}\frac{4|z|^2e^{-2\lambda(t)}f_0(p(z))\, \d z}{(1+|z|^2e^{-2\lambda(t)})(1+|z|^2)^{n-1}}.
\end{equation}
Thanks to the assumptions on~$f_0$, from~\eqref{W1explicit} we immediately get
\[W_1(f(t,\cdot),\delta_{e_n})\asymp\int_0^{+\infty}\frac{r^{n-1}e^{-\lambda(t)}\,\d r}{\sqrt{1+r^2e^{-2\lambda(t)}}(1+r^2)^{n-1}},\]
and for~$n\geqslant3$, since~$\lambda(t)\geqslant0$, we get
\[\begin{split}0<\int_0^{+\infty}\frac{r^{n-1}\,\d r}{\sqrt{1+r^2}(1+r^2)^{n-1}}&\leqslant\int_0^{+\infty}\frac{r^{n-1}\,\d r}{\sqrt{1+r^2e^{-2\lambda(t)}}(1+r^2)^{n-1}}\\ &\leqslant\int_0^{+\infty}\frac{r^{n-1}\,\d r}{(1+r^2)^{n-1}}<+\infty,
\end{split}
\]
which gives~$W_1(f(t,\cdot),\delta_{e_n})\asymp e^{-\lambda(t)}$. For~$n=2$, we have
\[\begin{split}\int_0^{+\infty}\frac{re^{-\lambda(t)}\,\d r}{\sqrt{1+r^2e^{-2\lambda(t)}}(1+r^2)}&=\left[\tfrac{e^{-\lambda(t)}}{2\sqrt{1-e^{-2\lambda(t)}}}\ln\Big(\tfrac{\sqrt{1+r^2e^{-2\lambda(t)}}-\sqrt{1-e^{-2\lambda(t)}}}{\sqrt{1+r^2e^{-2\lambda(t)}}+\sqrt{1-e^{-2\lambda(t)}}}\Big)\right]_0^{+\infty}\\
&=\frac{e^{-\lambda(t)}}{2\sqrt{1-e^{-2\lambda(t)}}}\ln\Big(\frac{1+\sqrt{1-e^{-2\lambda(t)}}}{1-\sqrt{1-e^{-2\lambda(t)}}}\Big).
\end{split}\]
Since this last expression is equivalent to~$\lambda(t)e^{-\lambda(t)}$ as~$\lambda(t)\to+\infty$ and converges to~$1$ as~$\lambda(t)\to0$, we then get~$W_1(f(t,\cdot),\delta_{e_n})\asymp (1+\lambda(t))e^{-\lambda(t)}$.
We proceed similarly for the distance~$W_2$. From the assumptions on~$f_0$ and~\eqref{W2explicit} we get
\[W_2^2(f(t,\cdot),\delta_{e_n})\asymp1-\alpha(t)\asymp\int_0^{+\infty}\frac{r^{n}e^{-2\lambda(t)}\,\d r}{(1+r^2e^{-2\lambda(t)})(1+r^2)^{n-1}}.\]
By the same argument of integrability, when~$n\geqslant4$, since~$\int_0^{+\infty}\frac{r^n\, \d r}{(1+r^2)^{n-1}}<+\infty$, we obtain~$1-\alpha(t)\asymp e^{-2\lambda(t)}$. For~$n=2$ we have
\[\begin{split}\int_0^{+\infty}\frac{r^{2}e^{-2\lambda(t)}\,\d r}{(1+r^2e^{-2\lambda(t)})(1+r^2)}&=\left[\tfrac{e^{-\lambda(t)}\tan^{-1}(e^{-\lambda(t)}r)-e^{-2\lambda(t)}\tan^{-1}(r)}{1-e^{-2\lambda(t)}}\right]_0^{+\infty}\\ &=\frac{\pi\, e^{-\lambda(t)}}{2(1+e^{-\lambda(t)})},
\end{split}
\]
which gives~$1-\alpha(t)\asymp e^{-\lambda(t)}$. For~$n=3$ we have
\[\begin{split}\int_0^{+\infty}\frac{r^{2}e^{-2\lambda(t)}\,\d r}{(1+r^2e^{-2\lambda(t)})(1+r^2)^2}&=\tfrac{e^ {-2\lambda(t)}}{2(1-e^{-2\lambda(t)})^2}\left[\ln\Big(\tfrac{1+r^2}{1+r^2e^{-2\lambda(t)}}\Big)+\tfrac{1-e^{-2\lambda(t)}}{1+r^2}\right]_0^{+\infty}\\ &=\frac{e^ {-2\lambda(t)}}{2(1-e^{-2\lambda(t)})^2}(2\lambda(t)-1+e^{-2\lambda(t)}).
\end{split}
\]
Since this last expression is equivalent to~$\lambda(t)e^{-2\lambda(t)}$ as~$\lambda(t)\to+\infty$ and converges to~$\frac14$ as~$\lambda(t)\to0$, we then get~$1-\alpha(t)\asymp (1+\lambda(t))e^{-2\lambda(t)}$.
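The same kind of numerical sanity check works for the $W_2$ computation in dimension $n=3$, whose integrand is $r^{3}e^{-2\lambda}/\big((1+r^2e^{-2\lambda})(1+r^2)^2\big)$ by~\eqref{W2explicit}; an illustrative quadrature sketch (same substitution $r=\tan u$):

```python
import math

def w2_integral_n3(lam, n_steps=100_000):
    """Midpoint quadrature of int_0^inf r^3 e^{-2 lam} /
    ((1 + r^2 e^{-2 lam})(1 + r^2)^2) dr, via r = tan(u)."""
    a = math.exp(-2.0 * lam)
    h = (math.pi / 2.0) / n_steps
    total = 0.0
    for k in range(n_steps):
        u = (k + 0.5) * h
        r = math.tan(u)
        # one factor (1 + r^2) is cancelled by dr = (1 + r^2) du
        total += a * r ** 3 / ((1.0 + a * r * r) * (1.0 + r * r))
    return total * h

def w2_closed_form_n3(lam):
    a = math.exp(-2.0 * lam)
    return a / (2.0 * (1.0 - a) ** 2) * (2.0 * lam - 1.0 + a)

for lam in (0.5, 1.0, 3.0):
    assert abs(w2_integral_n3(lam) - w2_closed_form_n3(lam)) < 1e-6
assert abs(w2_closed_form_n3(0.01) - 0.25) < 0.005   # limit 1/4 as lam -> 0
```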
In all dimensions, since~$\lambda(t)=\int_0^t\alpha(\tau)\,\d \tau\geqslant\alpha(0)t$, the estimates above provide a constant~$C>0$ such that~$1-\alpha(t)\leqslant Ce^{-\alpha(0)t}$. Therefore, integrating in time, we obtain~$t-\lambda(t)\leqslant\widetilde{C}$ for some constant~$\widetilde{C}>0$. Since~$\lambda(t)\leqslant t$, this gives~$e^{-\lambda(t)}\asymp e^{-t}$ and~$1+\lambda(t)\asymp 1+t$. Combining this with all the estimates obtained so far (and recalling that~$W_2(f(t,\cdot),\delta_{e_n})\asymp\sqrt{1-\alpha(t)}$) ends the proof.
\qed
\end{proof}
Interestingly, the estimates given by Proposition~\ref{prop-rates-regular} depend on the dimension and on the chosen distance. We expect that these estimates still hold when~$\Omega$ depends on time and, as in the result of Theorem~\ref{thm-ode}, that the rate of convergence of~$\Omega$ towards~$\Omega_\infty$ is even better.
\section{The particle model}\label{section-ode}
The object of this section is to prove Theorem~\ref{thm-ode}, and we divide it into several propositions. We take~$N$ positive real numbers~$(m_i)_{1\leqslant i\leqslant N}$ with~$\sum_{i=1}^Nm_i=1$, and~$N$ unit vectors~$v_i^0\in\mathbb{S}$ (for~$1\leqslant i\leqslant N$) such that~$v_i^0\neq v_j^0$ for all~$i\neq j$. We denote by~$(v_i)_{1\leqslant i\leqslant N}$ the solution of the system of differential equations~\eqref{ode-with-mi}:
\[\frac{\d v_i}{\d t}=P_{v_i^\perp}J, \text{ with } J(t)=\sum_{i=1}^Nm_iv_i(t),\]
with the initial conditions~$v_i(0)=v_i^0$ for~$1\leqslant i\leqslant N$.
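Before turning to the proofs, the qualitative behaviour established below (monotonicity of~$|J|$ and alignment of the~$v_i$ with~$\Omega$) is easy to observe numerically. The following pure-Python sketch is only an illustration, not part of the argument: it uses explicit Euler steps with renormalisation onto the sphere, and the particle number, step size and hemispheric random initial data are arbitrary choices.

```python
import math, random

def simulate(v, m, dt=0.005, steps=6000):
    """Euler steps of dv_i/dt = P_{v_i^perp} J = J - (v_i . J) v_i,
    renormalising each v_i onto the unit sphere after every step."""
    Jnorms = []
    for _ in range(steps):
        J = [sum(mi * vi[k] for mi, vi in zip(m, v)) for k in range(3)]
        Jnorms.append(math.sqrt(sum(c * c for c in J)))
        new_v = []
        for vi in v:
            d = sum(vi[k] * J[k] for k in range(3))
            w = [vi[k] + dt * (J[k] - d * vi[k]) for k in range(3)]
            n = math.sqrt(sum(c * c for c in w))
            new_v.append([c / n for c in w])
        v = new_v
    return v, Jnorms

random.seed(0)
N = 8
m = [1.0 / N] * N
v0 = []
for _ in range(N):
    # all initial vectors in the upper hemisphere, so that J(0) != 0
    x = [random.gauss(0, 1) for _ in range(2)] + [abs(random.gauss(0, 1)) + 0.1]
    n = math.sqrt(sum(c * c for c in x))
    v0.append([c / n for c in x])

v, Jnorms = simulate(v0, m)
# |J| is (numerically) nondecreasing, and all v_i end up aligned with Omega
assert all(b >= a - 1e-9 for a, b in zip(Jnorms, Jnorms[1:]))
J = [sum(mi * vi[k] for mi, vi in zip(m, v)) for k in range(3)]
nJ = math.sqrt(sum(c * c for c in J))
assert all(sum(vi[k] * J[k] for k in range(3)) / nJ > 0.999 for vi in v)
```

Starting in a hemisphere guarantees (by the cone-invariance argument used later in this section) that the fully aligned outcome is the one observed.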
\begin{proposition}\label{unique-back-II}
If~$J(0)\neq0$, then~$|J|$ is nondecreasing, so~$\Omega(t)=\frac{J(t)}{|J(t)|}\in\mathbb{S}$ is well-defined for all times~$t\geqslant0$. We have one of the two following possibilities:
\begin{itemize}
\item For all~$1\leqslant i\leqslant N$,~$v_i(t)\cdot\Omega(t)\to1$ as~$t\to+\infty$.
\item There exists~$i_0$ such that~$v_i(t)\cdot\Omega(t)\to-1$ as~$t\to+\infty$, and for all~$i\neq i_0$, we have~$v_i(t)\cdot\Omega(t)\to1$ as~$t\to+\infty$.
\end{itemize}
Furthermore, if we denote by~$\lambda>0$ the limit of~$|J(t)|$ as~$t\to+\infty$, we have for all~$i,j$ in the first possibility (resp. for all~$i\neq i_0$, $j\neq i_0$ in the second possibility),~$\|v_i(t)-v_j(t)\|=O(e^{-(\lambda-\varepsilon)t})$ (for any~$\varepsilon>0$).
\end{proposition}
\begin{proof}
Let us see the differential system as a kind of gradient flow of the following interaction energy (this is reminiscent of the gradient flow structure of the kinetic equation~\eqref{pde-intro}, see Remark~\ref{remark-gradient-flow}):
\[\mathcal{E}=\frac12\sum_{i,j=1}^Nm_im_j\|v_i-v_j\|^2=\sum_{i,j=1}^Nm_im_j(1-v_i\cdot v_j)=1-|J|^2\geqslant0.\]
Indeed, we then get~$\nabla_{v_i}\mathcal{E}=-2\sum_{j=1}^Nm_im_jP_{v_i^\perp}v_j=-2m_iP_{v_i^\perp}J$ (using the formula~$\nabla_v(u\cdot v)=P_{v^\perp}u$).
We therefore have~$\frac{\d v_i}{\d t}=-\frac1{2m_i}\nabla_{v_i}\mathcal{E}$, and we obtain
\begin{equation}\label{dJ2dt}\frac{\d |J|^2}{\d t}=-\frac{\d \mathcal{E}}{\d t}=-\sum_{i=1}^N\nabla_{v_i}\mathcal{E}\cdot\frac{\d v_i}{\d t}=2\sum_{i=1}^Nm_i\left|\frac{\d v_i}{\d t}\right|^2\geqslant0.
\end{equation}
This gives that~$|J|$ is nondecreasing in time. So we can define~$\Omega(t)=\frac{J(t)}{|J(t)|}$ and
rewrite~\eqref{dJ2dt} as
\begin{equation}\label{dJ2dtbis}\frac{\d|J|^2}{\d t}=2\sum_{i=1}^Nm_i|P_{v_i^\perp}J|^2=2|J|^2\sum_{i=1}^Nm_i(1-(v_i\cdot\Omega)^2).
\end{equation}
Differentiating in time the right-hand side of~\eqref{dJ2dtbis}, we observe that all terms are uniformly bounded in time. Therefore, since~$\frac{\d|J|^2}{\d t}$ is a nonnegative, integrable function of time (because~$|J|^2\leqslant1$ is nondecreasing) with bounded derivative, it must converge to~$0$ as~$t\to+\infty$. Since~$|J(t)|\geqslant|J(0)|>0$, we then obtain that~$(v_i(t)\cdot\Omega(t))^2\to1$ for all~$1\leqslant i\leqslant N$.
Let us now take~$1\leqslant i,j\leqslant N$ and estimate~$\|v_i-v_j\|$. We have
\begin{align}\nonumber\frac12\frac{\d}{\d t}\|v_i-v_j\|^2&=-\frac{\d}{\d t}(v_i\cdot v_j)=-|J|(v_j\cdot P_{v_i^\perp}\Omega+v_i\cdot P_{v_j^\perp}\Omega)\\ \nonumber &=-|J|\,(\Omega\cdot v_i+\Omega\cdot v_j)(1-v_i\cdot v_j)\\ &=-|J|\,\Omega\cdot\frac{v_i+v_j}2\|v_i-v_j\|^2.\label{dvivj}
\end{align}
Therefore if~$v_i\cdot\Omega\to1$ and~$v_j\cdot\Omega\to1$, we get~$\frac12\frac{\d}{\d t}\|v_i-v_j\|^2\leqslant-(\lambda-\varepsilon)\|v_i-v_j\|^2$ for~$t$ sufficiently large, and therefore we obtain~$\|v_i-v_j\|^2=O(e^{-2(\lambda-\varepsilon)t})$.
Finally, if~$v_i\cdot\Omega\to-1$ and~$v_j\cdot\Omega\to-1$, for~$t$ sufficiently large (say~$t\geqslant t_0$) we obtain~$\frac12\frac{\d}{\d t}\|v_i-v_j\|^2\geqslant(\lambda-\varepsilon)\|v_i-v_j\|^2$. This is the same phenomenon of repulsion as~\eqref{odevvtilde} in the previous part, and the only bounded solution to this differential inequality is the one with~$v_i(t_0)=v_j(t_0)$, which means, by uniqueness, that~$v_i^0=v_j^0$ and therefore~$i=j$. This means that if there is an index~$i_0$ such that~$v_{i_0}(t)\cdot\Omega(t)\to-1$, then for all~$i\neq i_0$, we have~$v_{i}(t)\cdot\Omega(t)\to1$ as~$t\to\infty$, and this ends the proof.
\qed
\end{proof}
Let us now study the first possibility more precisely.
\begin{proposition}\label{prop-no-back}
Suppose that~$v_i(t)\cdot\Omega(t)\to1$ as~$t\to\infty$ for all~$1\leqslant i\leqslant N$. Then there exists~$\Omega_\infty\in\mathbb{S}$ and~$a_i\in\{\Omega_\infty\}^\perp\subset\mathbb{R}^n$, for~$1\leqslant i\leqslant N$ such that~$\sum_{i=1}^Nm_ia_i=0$ and that, as~$t\to+\infty$,
\begin{align*}
v_i(t)&=\big(1-\tfrac12|a_i|^2e^{-2t}\big)\Omega_\infty+e^{-t}a_i +O(e^{-3t})\quad \text{for }1\leqslant i\leqslant N,\\
\Omega(t)&=\Omega_\infty+O(e^{-3t}).
\end{align*}
\end{proposition}
\begin{proof}
We first have~$|J(t)|=J(t)\cdot\Omega(t)=\sum_{i}m_iv_i(t)\cdot\Omega(t)\to1$ as~$t\to\infty$. Therefore~$\lambda=1$, and thanks to the estimates of Proposition~\ref{unique-back-II} (first possibility), for all~$i,j$ we have~$1-v_i\cdot v_j=\frac12\|v_i-v_j\|^2=O(e^{-2(1-\varepsilon)t})$. Summing with weights~$m_j$, we obtain~$1-v_i\cdot J=O(e^{-2(1-\varepsilon)t})$. Plugging this back into~\eqref{dvivj}, we obtain
\[\frac12\frac{\d}{\d t}\|v_i-v_j\|^2=-\big(1+O(e^{-2(1-\varepsilon)t})\big)\|v_i-v_j\|^2.\]
We therefore obtain~$\|v_i-v_j\|^2=\|v_i^0-v_j^0\|^2e^{-\int_0^t(1+O(e^{-2(1-\varepsilon)\tau}))\d \tau}=O(e^{-2t})$. This is the same estimate as previously, but without the~$\varepsilon$. Therefore, similarly, we get~$1-v_i\cdot J=O(e^{-2t})$, which gives~$1-|J|^2=O(e^{-2t})$ by summing with weights~$m_i$. We finally obtain~$1-v_i\cdot\Omega=1-v_i\cdot J+(|J|-1)v_i\cdot\Omega=O(e^{-2t})$, therefore~$|P_{v_i^\perp}\Omega|^2=|P_{\Omega^\perp}v_i|^2=1-(v_i\cdot\Omega)^2=O(e^{-2t})$.
Let us now compute the evolution of~$\Omega$, as in~\eqref{omegadot}. Since~$\frac{\d J}{\d t}=\sum_{i}m_iP_{v_i^\perp}J$, we use~\eqref{dJ2dtbis} to get~$\frac{\d|J|}{\d t}=|J|\sum_{i}m_i|P_{v_i^\perp}\Omega|^2=O(e^{-2t})$, and we obtain
\begin{align*}\frac{\d \Omega}{\d t}=\frac{1}{|J|}&\frac{\d J}{\d t}-\frac{\d |J|}{\d t}\frac{J}{|J|^2} =\sum_{i}m_iP_{v_i^\perp}\Omega-\sum_{i}m_i|P_{v_i^\perp}\Omega|^2\Omega \\
&=-\sum_i m_i(v_i\cdot\Omega)(v_i-(v_i\cdot\Omega)\Omega) =-\sum_{i}m_i(v_i\cdot\Omega)P_{\Omega^\perp}v_i.
\end{align*}
Since~$\sum_{i}m_iP_{\Omega^\perp}v_i=P_{\Omega^\perp}J=0$, we can then add this quantity to the previous identity to get
\begin{equation}\label{dOmegadt}
\frac{\d \Omega}{\d t}=\sum_{i}m_i(1-v_i\cdot\Omega)P_{\Omega^\perp}v_i.
\end{equation}
We therefore get~$|\frac{\d \Omega}{\d t}|\leqslant\sum_im_i(1-v_i\cdot\Omega)|P_{\Omega^\perp}v_i|=O(e^{-3t})$. Therefore~$\Omega$ converges towards~$\Omega_\infty\in\mathbb{S}$ and we have~$\Omega=\Omega_\infty+O(e^{-3t})$.
Finally, to get the precise estimates for the~$v_i$, we compute their second derivative.
\begin{equation}
\label{d2vidt}\frac{\d^2 v_i}{\d t^2}=\frac{\d}{\d t}P_{v_i^\perp}J=P_{v_i^\perp}\frac{\d J}{\d t}-\frac{\d v_i}{\d t}\cdot J\, v_i-v_i\cdot J\,\frac{\d v_i}{\d t}.
\end{equation}
We have~$P_{v_i^\perp}\frac{\d J}{\d t}=\frac{\d |J|}{\d t}P_{v_i^\perp}\Omega+|J|P_{v_i^\perp}\frac{\d \Omega}{\d t}=O(e^{-3t})$, since~$P_{v_i^\perp}\Omega=O(e^{-t})$ and~$\frac{\d |J|}{\d t}=O(e^{-2t})$ thanks to~\eqref{dJ2dtbis}. Then we notice that~$\frac{\d v_i}{\d t}\cdot J=J\cdot P_{v_i^\perp}J=|\frac{\d v_i}{\d t}|^2$ and that~$v_i\cdot J\,\frac{\d v_i}{\d t}=\frac{\d v_i}{\d t}-(1-v_i\cdot J)P_{v_i^\perp}J=\frac{\d v_i}{\d t}+O(e^{-3t})$. At the end we obtain
\begin{equation}\label{dvi2dt2}
\frac{\d^2 v_i}{\d t^2}=-\frac{\d v_i}{\d t}-\Big|\frac{\d v_i}{\d t}\Big|^2\,v_i+O(e^{-3t}).
\end{equation}
Considering first that~$|\frac{\d v_i}{\d t}|^2=O(e^{-2t})$, the resolution of this differential equation gives~$\frac{\d v_i}{\d t}=-a_ie^{-t}+O(e^{-2t})$ with~$a_i\in\mathbb{R}^n$. Integrating in time, we therefore obtain~$v_i(t)=\Omega_\infty+a_ie^{-t}+O(e^{-2t})$ (we already know that~$v_i(t)$ converges to~$\Omega_\infty$ since~$v_i(t)\cdot\Omega(t)\to1$). The fact that~$|v_i(t)|=1$ gives us~$a_i\cdot\Omega_\infty e^{-t}=O(e^{-2t})$ and therefore~$a_i\in\{\Omega_\infty\}^\perp$. Summing all these estimates with weights~$m_i$ and using the fact that~$J-\Omega_\infty=O(e^{-2t})$, we obtain~$\sum_{i}m_ia_i=0$.
Finally, the more precise estimate for~$v_i(t)$ up to order~$O(e^{-3t})$ given in the proposition is obtained by plugging back~$|\frac{\d v_i}{\d t}|^2v_i=|a_i|^2e^{-2t}\Omega_\infty+O(e^{-3t})$ into~\eqref{dvi2dt2} and solving it again.
\qed
\end{proof}
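The $e^{-t}$ rate of Proposition~\ref{prop-no-back} can also be observed numerically. The sketch below (again explicit Euler on the sphere, for illustration only) uses hemispheric random data, so that the first possibility of Proposition~\ref{unique-back-II} occurs, and checks with deliberately loose tolerances that~$\max_i\|v_i-\Omega\|$ decays like~$e^{-t}$ while~$\Omega$ is essentially frozen.

```python
import math, random

def step(v, m, dt):
    # one Euler step of dv_i/dt = J - (v_i . J) v_i, renormalised to the sphere
    J = [sum(mi * vi[k] for mi, vi in zip(m, v)) for k in range(3)]
    out = []
    for vi in v:
        d = sum(vi[k] * J[k] for k in range(3))
        w = [vi[k] + dt * (J[k] - d * vi[k]) for k in range(3)]
        n = math.sqrt(sum(c * c for c in w))
        out.append([c / n for c in w])
    return out

def omega(v, m):
    J = [sum(mi * vi[k] for mi, vi in zip(m, v)) for k in range(3)]
    n = math.sqrt(sum(c * c for c in J))
    return [c / n for c in J]

def spread(v, m):
    om = omega(v, m)
    return max(math.sqrt(sum((vi[k] - om[k]) ** 2 for k in range(3))) for vi in v)

random.seed(1)
N, dt = 6, 0.002
m = [1.0 / N] * N
v = []
for _ in range(N):
    x = [random.gauss(0, 1) for _ in range(2)] + [abs(random.gauss(0, 1)) + 0.1]
    n = math.sqrt(sum(c * c for c in x))
    v.append([c / n for c in x])

for _ in range(int(8.0 / dt)):      # run to t = 8
    v = step(v, m, dt)
d8, om8 = spread(v, m), omega(v, m)
for _ in range(int(4.0 / dt)):      # run to t = 12
    v = step(v, m, dt)
d12, om12 = spread(v, m), omega(v, m)

# max_i |v_i - Omega| decays like e^{-t}, and Omega barely moves
assert math.exp(-4) / 2 < d12 / d8 < 2 * math.exp(-4)
assert sum((om12[k] - om8[k]) ** 2 for k in range(3)) < 1e-6
```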
Let us finally study the second possibility.
\begin{proposition}\label{prop-one-back}
Suppose there exists~$i_0$ such that~$v_{i_0}(t)\cdot\Omega(t)\to-1$ as~$t\to\infty$. Then we have~$\lambda=1-2m_{i_0}$ (which gives~$m_{i_0}<\frac12$), and there exists~$\Omega_\infty\in\mathbb{S}$ and~$a_i\in\{\Omega_\infty\}^\perp\subset\mathbb{R}^n$ for~$i\neq i_0$ such that~$\sum_{i\neq i_0}m_ia_i=0$ and that, as~$t\to+\infty$,
\begin{align*}
v_i(t)&=\big(1-\tfrac12|a_i|^2e^{-2\lambda t}\big)\Omega_\infty+e^{-\lambda t}a_i +O(e^{-3\lambda t})\quad \text{for }i\neq i_0,\\
v_{i_0}(t)&=-\Omega_\infty+O(e^{-3\lambda t}),\\
\Omega(t)&=\Omega_\infty+O(e^{-3\lambda t}).
\end{align*}
\end{proposition}
\begin{proof} First of all we have~$|J(t)|=\Omega(t)\cdot J(t)=\sum_im_iv_i(t)\cdot\Omega(t)$, which converges as~$t\to\infty$ towards~$\lambda=\sum_{i\neq i_0}m_i-m_{i_0}=1-2m_{i_0}$. The proof then closely follows that of Proposition~\ref{prop-no-back}, except for the case of~$v_{i_0}$. Indeed, Proposition~\ref{unique-back-II} only gives estimates on~$\|v_i-v_j\|$ (and therefore on~$v_i\cdot v_j$) when~$i\neq i_0$ and~$j\neq i_0$. To estimate more precisely the quantity~$v_{i_0}\cdot v_i$, let us prove that~$-v_{i_0}$ must be in the convex cone spanned by~$0$ and all the~$v_i$,~$i\neq i_0$. The idea is that a configuration which is in a convex cone stays in it for all time.
Let us suppose that all the~$v_i$ (including~$i=i_0$) satisfy~$e\cdot v_i(t_0)\geqslant c$ for some~$c>0$,~$t_0\geqslant0$ and~$e\in\mathbb{S}$ (the direction of the cone). We want to prove that~$e\cdot v_i(t)\geqslant c$ for all~$i$ and for all~$t\geqslant t_0$. If not, we denote by~$t_1>t_0$ a time such that~$e\cdot v_i(t)\geqslant0$ for all~$i$ on~$[t_0,t_1]$, but with~$e\cdot v_j(t_1)<c$ for some~$j$. On~$[t_0,t_1]$, we have
\begin{equation}\label{devidt}\frac{\d (e\cdot v_i)}{\d t}=e\cdot J-(e\cdot v_i)(v_i\cdot J)\geqslant e\cdot J-(e\cdot v_i),
\end{equation}
since~$v_i\cdot J\leqslant|J|\leqslant1$ and~$e\cdot v_i\geqslant0$ on~$[t_0,t_1]$. Summing with weights~$m_i$, we obtain~$\frac{\d (e\cdot J)}{\d t}\geqslant0$. Therefore, since~$e\cdot J(t_0)\geqslant c$, we obtain~$e\cdot J(t)\geqslant c$ on~$[t_0,t_1]$, and the estimate~\eqref{devidt} becomes~$\frac{\d(e\cdot v_i)}{\d t}\geqslant c-(e\cdot v_i)$. By the comparison principle, this tells us that~$e\cdot v_i\geqslant c$ on~$[t_0,t_1]$ for all~$i$, which is a contradiction.
Let us now fix~$t_0\geqslant0$. We want to prove that there exists~$\alpha_i\geqslant0$ for~$i\neq i_0$ such that~$-v_{i_0}=\sum_{i\neq i_0}\alpha_iv_i$. Using Farkas' Lemma, it is equivalent to prove that it is not possible to find~$e\in\mathbb{S}$ such that~$e\cdot v_i(t_0)\geqslant0$ for all~$i\neq i_0$ and~$e\cdot(-v_{i_0})<0$. By contradiction, if such an~$e$ exists, we would have~$e\cdot J(t_0)\geqslant m_{i_0}e\cdot v_{i_0}>0$ and for~$i\neq i_0$, as in~\eqref{devidt}, if~$e\cdot v_i(t_0)=0$ we get~$\frac{\d (e\cdot v_i)}{\d t}|_{t=t_0}=e\cdot J(t_0)>0$. Therefore for~$\delta>0$ sufficiently small, we have~$e\cdot v_i(t_0+\delta)>0$ for all~$i$ (including~$i_0$, and those for which~$e\cdot v_i(t_0)>0$). Therefore there exists~$c>0$ such that for all~$i$,~$e\cdot v_i(t_0+\delta)\geqslant c$, and by the previous paragraph, we get that~$e\cdot v_i(t)\geqslant c$ for all~$t\geqslant t_0+\delta$. In particular~$e\cdot J(t)\geqslant c$, and since~$|J(t)|\leqslant1$, we get~$e\cdot\Omega(t)=\frac{e\cdot J(t)}{|J(t)|}\geqslant c$ for all~$t\geqslant t_0+\delta$. Finally, since~$\|v_{i_0}(t)+\Omega(t)\|^2=2(1+v_{i_0}(t)\cdot\Omega(t))\to0$ as~$t\to\infty$, this is in contradiction with the fact that~$e\cdot(v_{i_0}(t)+\Omega(t))\geqslant2c>0$ for all~$t\geqslant t_0+\delta$.
In conclusion we have that for all~$t\geqslant0$, there exists~$\alpha_i(t)\geqslant0$ for~$i\neq i_0$ such that~$-v_{i_0}(t)=\sum_{i\neq i_0}\alpha_i(t)v_i(t)$. We thus obtain, for~$i\neq i_0$
\begin{equation}\label{vivi0}v_i(t)\cdot v_{i_0}(t)=-\sum_{j\neq i_0}\alpha_j+\sum_{j\neq i_0}\alpha_j(1-v_j(t)\cdot v_i(t))\leqslant-1+O(e^{-2(\lambda-\varepsilon)t}),
\end{equation}
since~$1=\|v_{i_0}(t)\|\leqslant\sum_{j\neq i_0}\alpha_j\|v_j(t)\|=\sum_{j\neq i_0}\alpha_j$, and thanks to Proposition~\ref{unique-back-II}. Since~$v_i(t)\cdot v_{i_0}(t)\geqslant-1$, this gives~$v_i(t)\cdot v_{i_0}(t)=-1+O(e^{-2(\lambda-\varepsilon)t})$. From there, we have, if~$i\neq i_0$,
\[\begin{split}v_i\cdot J&=\sum_{j\neq i_0}m_j\,v_i\cdot v_j+m_{i_0}v_i\cdot v_{i_0}\\ &=\sum_{j\neq i_0} \big(m_j+O(e^{-2(\lambda-\varepsilon)t})\big)-m_{i_0}+O(e^{-2(\lambda-\varepsilon)t})=\lambda+O(e^{-2(\lambda-\varepsilon)t}).\end{split}\]
Plugging this into~\eqref{dvivj}, for~$i\neq i_0$ and~$j\neq i_0$, we obtain
\[\frac12\frac{\d}{\d t}\|v_i-v_j\|^2=-\big(\lambda+O(e^{-2(\lambda-\varepsilon)t})\big)\|v_i-v_j\|^2.\]
We therefore obtain, as in the proof of Proposition~\ref{prop-no-back},~$1-v_i\cdot v_j=O(e^{-2\lambda t})$. As in~\eqref{vivi0}, we now get~$v_i\cdot v_{i_0}=-1+O(e^{-2\lambda t})$. Finally, by summing with weights~$m_j$, we obtain~$v_i\cdot J=\lambda+O(e^{-2\lambda t})$ for~$i\neq i_0$ and~$v_{i_0}\cdot J=-\lambda+O(e^{-2\lambda t})$. Therefore, by summing once again with weights~$m_i$, we get~$|J|^2=\lambda^2+O(e^{-2\lambda t})$. This allows us to get~$1-v_i\cdot\Omega=O(e^{-2\lambda t})$ and~$|P_{v_i^\perp}\Omega|=O(e^{-\lambda t})$ when~$i\neq i_0$, and~$1+v_{i_0}\cdot\Omega=O(e^{-2\lambda t})$. Unfortunately this is not enough to use~\eqref{dOmegadt} to obtain a decay at rate~$3\lambda$: we obtain
\begin{equation}\label{absdOmegadt}\Big|\frac{\d \Omega}{\d t}\Big|\leqslant O(e^{-3\lambda t})+m_{i_0}(1-v_{i_0}\cdot\Omega)|P_{v_{i_0}^\perp}\Omega|.
\end{equation}
However, since~$|P_{v_{i_0}^\perp}\Omega|^2=1-(v_{i_0}\cdot\Omega)^2=(1-v_{i_0}\cdot\Omega)(1+v_{i_0}\cdot\Omega)=O(e^{-2\lambda t})$, we obtain at least~$|\frac{\d \Omega}{\d t}\big|\leqslant O(e^{-\lambda t})$, which gives the existence of~$\Omega_\infty\in\mathbb{S}$ such that~$\Omega(t)=\Omega_\infty+O(e^{-\lambda t})$. To get the rate~$3\lambda$, we have to be a little bit more careful, and use the same kind of trick as in Lemma~\ref{lem-L1} of the first part: if we have a differential equation of the form~$y'=y+O(e^{-\beta t})$, and if we furthermore know that~$y$ is bounded, then we must have~$y=O(e^{-\beta t})$. Indeed, by Duhamel's formula, we get~$y=y_0e^t+O(e^{-\beta t})$ and the only bounded solution corresponds to~$y_0=0$. We apply this to~$y=\frac{\d v_{i_0}}{\d t}$. We have, as in~\eqref{d2vidt},
\begin{align}\nonumber\frac{\d^2 v_{i_0}}{\d t^2}&=P_{v_{i_0}^\perp}\frac{\d J}{\d t}-\frac{\d v_{i_0}}{\d t}\cdot J\, v_{i_0}-v_{i_0}\cdot J\,\frac{\d v_{i_0}}{\d t}\\
&=P_{v_{i_0}^\perp}\frac{\d J}{\d t}-\left|\frac{\d v_{i_0}}{\d t}\right|^2\, v_{i_0} + \lambda\,\frac{\d v_{i_0}}{\d t}+O(e^{-3\lambda t}).\label{d2vi0dt}
\end{align}
We have
\[ P_{v_{i_0}^\perp}\frac{\d J}{\d t}=P_{v_{i_0}^\perp}\Big[J-\sum_{i=1}^N m_i (v_i\cdot J)v_i\Big]=(1-\lambda)P_{v_{i_0}^\perp}J+\sum_{i=1}^Nm_i (\lambda-v_i\cdot J)P_{v_{i_0}^\perp}v_i.\]
The term for~$i=i_0$ in this last sum vanishes and we have~$\lambda-v_i\cdot J=O(e^{-2\lambda t})$ for~$i\neq i_0$, as well as~$|P_{v_{i_0}^\perp}v_i|^2=1-(v_{i_0}\cdot v_i)^2=O(e^{-2\lambda t})$. We therefore obtain~$P_{v_{i_0}^\perp}\frac{\d J}{\d t}=(1-\lambda)P_{v_{i_0}^\perp}J+O(e^{-3\lambda t})$, and writing~$y=P_{v_{i_0}^\perp}J=\frac{\d v_{i_0}}{\d t}$, the formula~\eqref{d2vi0dt} becomes~$y'=y-|y|^2\,v_{i_0}+O(e^{-3\lambda t})$. We of course have that~$y$ is bounded, and we even know that~$y=|J|\,P_{v_{i_0}^\perp}\Omega=O(e^{-\lambda t})$. We can then apply the result once by replacing~$|y|^2$ with~$O(e^{-2\lambda t})$ to get~$y=O(e^{-2\lambda t})$, and then apply it a second time to obtain~$y=O(e^{-3\lambda t})$. This already provides the result~$v_{i_0}(t)=-\Omega_\infty+O(e^{-3\lambda t})$, and looking back at~\eqref{absdOmegadt}, we get that~$\frac{\d \Omega}{\d t}=O(e^{-3\lambda t})$ and therefore~$\Omega(t)=\Omega_\infty+O(e^{-3\lambda t})$.
It remains to prove the more precise estimates for~$v_i$ when~$i\neq i_0$, and this is done exactly as in the proof of Proposition~\ref{prop-no-back}, from formula~\eqref{d2vidt} to the end of the proof, now that we know that~$\frac{\d \Omega}{\d t}=O(e^{-3\lambda t})$. The only difference is that~$v_i\cdot J$ converges to~$\lambda$ instead of~$1$, together with the fact that all rates are multiplied by~$\lambda$. For instance, the main estimate~\eqref{dvi2dt2} becomes
\[\frac{\d^2 v_i}{\d t^2}=-\lambda\frac{\d v_i}{\d t}-\Big|\frac{\d v_i}{\d t}\Big|^2\,v_i+O(e^{-3\lambda t}),\]
and the rest of the proof does not change.\qed
\end{proof}
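The elementary fact used in the proof above (a bounded solution of~$y'=y+O(e^{-\beta t})$ is itself~$O(e^{-\beta t})$) can be illustrated on the scalar model problem~$y'=y+e^{-\beta t}$, whose unique bounded solution is~$y(t)=-e^{-\beta t}/(1+\beta)$. The short sketch below (illustrative only; $\beta$, the final time and the step size are arbitrary) shows that this initial datum stays small while any other blows up like~$e^t$.

```python
import math

BETA = 2.0

def integrate(y0, T, dt=1e-4):
    """Heun (explicit trapezoidal) integration of y' = y + exp(-BETA*t), y(0) = y0."""
    t, y = 0.0, y0
    for _ in range(int(T / dt)):
        k1 = y + math.exp(-BETA * t)
        k2 = (y + dt * k1) + math.exp(-BETA * (t + dt))
        y += 0.5 * dt * (k1 + k2)
        t += dt
    return y

y0_star = -1.0 / (1.0 + BETA)          # the unique bounded initial datum
assert abs(integrate(y0_star, 5.0)) < 1e-3        # decays (exact value: -e^{-10}/3)
assert abs(integrate(y0_star + 0.01, 5.0)) > 1.0  # any other datum grows like e^t
```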
\section{Acknowledgments}
The authors wish to thank Athanasios Tzavaras and the University of Crete for their hospitality back in 2012, when this work was done with the support of the EU FP7-REGPOT project ``Archimedes
Center for Modeling, Analysis and Computation''.\\
A.F. acknowledges support from the EFI project ANR-17-CE40-0030 and the Kibord project ANR-13-BS01-0004 of the French National Research Agency (ANR), from the project Défi S2C3 POSBIO of the interdisciplinary mission of CNRS, and the project SMS co-funded by CNRS and the Royal Society.\\
J.-G. L. acknowledges support from the National Science Foundation under NSF Research Network Grant no. RNMS11-07444 (KI-Net) and grant DMS-1812573.
The GW170817 event~\cite{abbott17a,abbott17b} was arguably one of the most important astrophysical findings of the last decade, for several reasons. First, it demonstrated that binary neutron star (BNS) mergers can produce strong gravitational wave (GW) signals and power bright electromagnetic emission on a broad range of the spectrum~\cite{goldstein2017,savchenko2017,abbott17d,abbott17c,metzger17,davanzo2018,fong2019,dobie2018,mooley2018}. Second, the comparison of theoretical models with the observations allowed several physical properties of neutron stars (NSs), such as their radius, maximum mass, tidal deformability and equation of state (EoS), to be narrowed down (see, e.g.,~\cite{margalit17,shibata2017modeling,abbott2018}). In addition, it also served to measure the GW speed with unprecedented precision, setting strong constraints on many alternative theories of gravity (see, e.g.,~\cite{PhysRevLett.119.251301,2018EPJC...78..738G}).
Although the dynamics of BNS mergers is fairly well understood (e.g., \cite{ciolfi2020key}), there are still many open questions regarding the details of the physical processes taking place during and after the merger. Here we focus on one of them, namely the amplification and large-scale reorganization of the magnetic field, which is thought to be necessary in order to launch a magnetically dominated jet associated with the short gamma-ray burst (SGRB) (see e.g.~\cite{Mckinney2009,10.1093/mnras/staa955}). Although the BNS merger and the post-merger evolution of the remnant have been extensively studied through general-relativistic magnetohydrodynamics (GRMHD) simulations~\cite{palenzuela2013electromagnetic,kiuchi14,neilsen2014magnetized,kiuchi15,giacomazzo15,palenzuela15,ruiz16,kiuchi18,ciolfi2019,ciolfi2020collimated,ruiz2020,mosta2020}, the problem is not fully resolved yet. The impossibility of capturing the small (but dynamically important) scales induced by the Kelvin-Helmholtz instability (KHI), possibly combined with other instabilities, prevents a definite answer on the topology and intensity of the magnetic field in the remnant when the turbulent amplification approaches saturation. Moreover, these strong magnetic fields are thought to enhance the transport of angular momentum (see e.g.~\cite{ciolfi2020key} and references therein), a key factor in the fate of the remnant. In the presence of large-scale magnetic fields, the formation of the jet can be favored, and the amount of ejected mass could also change compared to a non-magnetized remnant.
Different GRMHD simulations have attempted to resolve all the relevant scales of the problem. The highest-resolution simulations so far~\cite{kiuchi18} showed that the average magnetic field of the remnant can be amplified from $10^{13}$G to $10^{16}$G during the first milliseconds after the merger. Unfortunately, even the very fine spatial grid size used there (i.e. $\Delta \sim 12.5$m) is still much larger than the estimated wavelength of the fastest-growing unstable modes of the KHI: insufficient to properly capture the small, highly dynamical scales and the magnetic field amplification at this stage. As a result, the saturation level of the magnetic field intensity was not converging to any clear value.
Nowadays (and arguably in the foreseeable future) it is not possible to perform direct numerical simulations of this scenario, since the range of the relevant scales (from the hundreds/thousands of km of the domain down to the sub-meter shear layer thickness) makes the computational cost unfeasible, even employing the most efficient numerical and parallelization methods currently available. In order to overcome this limitation, different strategies have been implemented to reproduce the under-resolved KHI field amplification. A commonly used one is to impose a purely poloidal magnetic field, with unrealistically large strengths $\sim10^{14-16}\,$G, either before or after the merger (e.g.,~\cite{ruiz16,kiuchi18,ciolfi2019,ciolfi2020collimated,ruiz2020,mosta2020}). This choice is hardly comparable to the real effects of the KHI-driven dynamo at small scales, for which a purely large-scale ordered field is not an expected outcome.
Other alternatives involve the use of large-eddy simulations (LES), which consist in including extra terms in the {\em discretized} version of the evolution equations to account for the unresolved sub-grid scale (SGS) dynamics (see e.g.~\cite{zhiyin15}). The main idea of LES is to reproduce the imprints of the sub-grid dynamics on the large-scale (numerically resolved) fields, thus providing a magnetic field growth with a realistic topology and spectrum. Following this line, we have recently extended and implemented the so-called gradient SGS model used in non-relativistic fluid dynamics~\cite{leonard75,muller02a} to the non-relativistic~\cite{vigano19b}, special~\cite{carrasco19} and general relativistic~\cite{vigano20} MHD, with excellent results in capturing the small-scale effects of the turbulent flow induced by the KHI. We have performed LES of BNS coalescence~\cite{aguilera20}, and found that the average magnetic field in the remnant is amplified with much fewer computational resources than in the higher-resolution simulations, leading to comparable results. In an accompanying paper~\cite{palenzuela21}, we show that high-resolution LES provide an amplification of the average magnetic field from $10^{11}$G to $10^{16}$G. More importantly, for the first time the magnetic field strength and its energy spectral distribution are converging to the same saturated level.
The work shown in this letter, which uses the same techniques as in~\cite{palenzuela21}, focuses on the role of the initial topology and intensity of the magnetic field in the post-merger remnant. Most (if not all) simulations of magnetized BNS mergers up to date start with mainly dipolar magnetic fields, often confined in each star, for simplicity. However, magnetic topologies are expected to be much richer, with relevant contributions from small scales and from the magnetospheric currents. This holds throughout a neutron star's life: at birth after the core-collapse supernova~\cite{reboul-salze21}, at middle (Myr) ages (as shown by magnetars' observations~\cite{tiengo13,borghese15} and simulations~\cite{gourgouliatos16}) and at late (Gyr) ages similar to those of merging NSs (as shown by NICER studies of old millisecond pulsars~\cite{riley19}). The key question we want to address here is the following: how does the choice of the pre-merger configuration affect the final magnetic field of the remnant? The results presented in this letter indicate that the memory of the initial magnetic field configuration is lost during the amplification phase induced by the KHI. For all the initial topologies considered, the bulk of the remnant (i.e., the regions with $\rho \geq 10^{13} ~\rm{g~cm^{-3}}$) is endowed with a very similar isotropic, turbulent-like configuration with an average magnetic field of approximately $10^{16}$G.
The paper is organized as follows: the evolution equations and the numerical setup are described briefly in \S\ref{sec:evol_and_setup}. The results of the simulations are presented and analysed in \S\ref{sec:results}. Conclusions are drawn in \S\ref{sec:conclusions}.
\section{Initial models}\label{sec:evol_and_setup}
We evolve the GRMHD equations using the same formalism and numerical methods presented in \cite{aguilera20,palenzuela21}: in particular, with the addition of an explicit SGS term in the induction equation, which is able to provide a more convergent amplified field. We employ the same hybrid EoS as in~\cite{palenzuela21} for the evolution, with a cold contribution given by a tabulated polytrope fit to the APR4 zero-temperature EoS~\cite{read09}, and thermal effects modeled by the ideal gas EoS with adiabatic index $\Gamma_{\rm th}= 1.8$~\cite{bauswein10}.
The conversion from the conserved fields to the primitive ones is performed by using the robust procedure given in~\cite{kastaun20}. To minimize further failures in the very tenuous regions outside the star, we impose a minimum density of $6.2 \times 10^{5}~\rm{g~cm^{-3}}$; the regions with such values are hereafter referred to as the atmosphere. Moreover, we apply the SGS terms only in regions where the density is higher than $6.2 \times 10^{13}~\rm{g~cm^{-3}}$, in order to avoid spurious effects near the stellar surface. Since the remnant's maximum density is above $10^{15} ~\rm{g~cm^{-3}}$, the SGS model is applied only in the densest regions of the star.
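Schematically, the two thresholds act cell by cell as in the following sketch (the names and the scalar, per-cell form are illustrative only, not the actual implementation):

```python
RHO_ATM = 6.2e5    # g cm^-3: density floor defining the "atmosphere"
RHO_SGS = 6.2e13   # g cm^-3: SGS terms are switched off below this density

def floor_density(rho):
    """Impose the atmosphere value as a minimum density."""
    return max(rho, RHO_ATM)

def apply_sgs(rho, flux, sgs_correction):
    """Add the sub-grid correction to a flux only in the dense bulk."""
    return flux + sgs_correction if rho >= RHO_SGS else flux
```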
\begin{figure*}
\centering
\includegraphics[width=0.285\linewidth, trim={0 4.5cm 0 0}, clip]{images/CM8_1}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/CM8_2}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/CM8_3}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/CM8_4}\\
\includegraphics[width=0.285\linewidth, trim={0 4.5cm 0 0}, clip]{images/Bhigh_1}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Bhigh_2}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Bhigh_3}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Bhigh_4}\\
\includegraphics[width=0.285\linewidth, trim={0 4.5cm 0 0}, clip]{images/Misal_1}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Misal_2}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Misal_3}
\includegraphics[width=0.233\linewidth, trim={6.5cm 4.5cm 0 0}, clip]{images/Misal_4}\\
\includegraphics[width=0.285\linewidth, trim={0 2cm 0 0}, clip]{images/Mult_1}
\includegraphics[width=0.233\linewidth, trim={6.5cm 2cm 0 0}, clip]{images/Mult_2}
\includegraphics[width=0.233\linewidth, trim={6.5cm 2cm 0 0}, clip]{images/Mult_3}
\includegraphics[width=0.233\linewidth, trim={6.5cm 2cm 0 0}, clip]{images/Mult_4}\\
\caption{\textit{Magnetic field evolution}. Values of the magnetic intensity $|\vec{B}|$ in the orbital plane for, from top to bottom, the {\ttfamily Dip}, {\ttfamily BHigh}, {\ttfamily Misal} and {\ttfamily Mult} simulations at, from left to right, $t=\{2,5,10,20\}$ ms after the merger. Outer and inner black lines mark the contours $\rho = 10^{13}$ and $10^{14}\ \rm{g~cm^{-3}}$, respectively. Lengths are given in geometrical units (1 unit corresponds to 1.47 km).}
\label{fig:slices_B2}
\end{figure*}
The initial data is created with the {\sc Lorene} package~\cite{lorene}, using the same tabulated polytropic EoS described above. We consider an equal-mass BNS in quasi-circular orbit with an irrotational configuration. The total mass of the system is $M=2.7~M_{\odot}$ and the initial separation is $45$km, corresponding to an initial angular frequency of $1775\ \rm{rad~s^{-1}}$.
The binary is evolved in a cubic domain of side
$\left[-1204,1204\right]$ km. The inspiral is fully covered by $7$ Fixed Mesh Refinement (FMR) levels, each being a cube doubling the resolution of the previous one, plus an Adaptive Mesh Refinement (AMR) level, achieving a maximum resolution of $60$ m in a domain covering at least the bulk of the remnant.
For each star, we consider a commonly-used initial axially-symmetric magnetic field, confined in the region where the fluid pressure $P$ is larger than a value $P_{cut}$, set to 100 times the atmospheric pressure. The azimuthal ($\phi$) component of the vector potential has a radial dependence $A_{\phi} \propto r^2 (P - P_{cut})$, where $r$ is the distance from the center of the star. We have considered four initial configurations that differ among themselves in the intensity and the co-latitude ($\theta$) dependence of the magnetic field, as follows (see also Table~\ref{tab:models}):
\begin{itemize}
\item Aligned Dipole-like ({\ttfamily Dip}):
A very ordered (large scale) poloidal field $A_\phi = A_0 r^2 \sin^2\theta (P - P_{cut})$, similar to a dipole (which would go $\propto \sin\theta$), with the magnetic moment aligned to the orbital axis, and a normalization value $A_0$ such that the maximum intensity (at the centre) is $10^{12}~\rm{G}$, orders of magnitude lower than the large initial fields of other simulations (e.g., \cite{kiuchi15,ruiz16,kiuchi18,ciolfi2019,ciolfi2020collimated,ruiz2020}).
\item Highly magnetized ({\ttfamily BHigh}): The same as the above {\ttfamily Dip} model, except that the intensity is 1000 times larger, reaching a maximum of $10^{15}$ \rm{G}.
\item Misaligned dipole ({\ttfamily Misal}): The same as the {\ttfamily Dip} model, except that the magnetic moment is orthogonal to the orbital axis.
\item Multipole ({\ttfamily Mult}): A more complex topology containing higher multipolar structures, with $A_{\phi} \propto r^2 \sin^6\theta\left(1+\cos\theta\right) (P - P_{cut})$, with the same maximum intensity as {\ttfamily Dip}.
\end{itemize}
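To make the construction concrete, the sketch below builds a {\ttfamily Dip}-like potential on a coarse Cartesian grid and takes a centred-difference curl. The parabolic pressure profile, the grid size and the (omitted) normalisation $A_0$ are toy choices, not the actual setup; the point is only that the field is confined to $P>P_{cut}$ and is numerically divergence-free, since centred-difference divergence and curl operators commute.

```python
import math

n, L = 25, 1.2                 # grid points per dimension, half-width (stellar R = 1)
h = 2 * L / (n - 1)
xs = [-L + i * h for i in range(n)]

def pressure(r):
    # toy parabolic pressure profile with central pressure 1 and radius 1
    return max(1.0 - r * r, 0.0)

P_CUT = 0.01                   # stands in for 100x the atmospheric pressure

def vec_potential(x, y, z):
    # A_phi = r^2 sin^2(theta) (P - P_cut)_+ ; in Cartesian components
    r = math.sqrt(x * x + y * y + z * z)
    f = max(pressure(r) - P_CUT, 0.0)
    rc = math.sqrt(x * x + y * y)          # r sin(theta)
    return (-y * rc * f, x * rc * f, 0.0)

A = [[[vec_potential(xs[i], xs[j], xs[k]) for k in range(n)]
      for j in range(n)] for i in range(n)]

def dA(comp, i, j, k, axis):               # centred difference of one A component
    di, dj, dk = [(1, 0, 0), (0, 1, 0), (0, 0, 1)][axis]
    return (A[i + di][j + dj][k + dk][comp] - A[i - di][j - dj][k - dk][comp]) / (2 * h)

B = {}                                     # B = curl A at interior points
for i in range(1, n - 1):
    for j in range(1, n - 1):
        for k in range(1, n - 1):
            B[i, j, k] = (dA(2, i, j, k, 1) - dA(1, i, j, k, 2),
                          dA(0, i, j, k, 2) - dA(2, i, j, k, 0),
                          dA(1, i, j, k, 0) - dA(0, i, j, k, 1))

def divB(i, j, k):                         # centred-difference divergence of B
    return ((B[i + 1, j, k][0] - B[i - 1, j, k][0])
            + (B[i, j + 1, k][1] - B[i, j - 1, k][1])
            + (B[i, j, k + 1][2] - B[i, j, k - 1][2])) / (2 * h)

bmax = max(math.sqrt(bx * bx + by * by + bz * bz) for bx, by, bz in B.values())
divmax = max(abs(divB(i, j, k)) for i in range(2, n - 2)
             for j in range(2, n - 2) for k in range(2, n - 2))
assert bmax > 0.0
assert divmax < 1e-12 * bmax / h           # div(curl) vanishes to round-off
assert B[2, 2, 2] == (0.0, 0.0, 0.0)       # field confined to the stellar interior
```

In practice the amplitude would then be rescaled so that the maximum of $|\vec{B}|$ matches the target value (e.g. $10^{12}$~G for {\ttfamily Dip}).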
\begin{table}[ht]
\begin{tabular}{ |c|c|c|c| }
\hline
Case
& Max $B$
& Orbit-magnetic
& Meridional\\
& (G) & misalignment (degrees) & topology
\\ \hline
{\tt Dip} & $10^{12}$ & 0 & Dipole-like \\
{\tt BHigh} & \cellcolor{gray!25}$10^{15}$ & 0 & Dipole-like \\
{\tt Misal} & $10^{12}$ & \cellcolor{gray!25}90 & Dipole-like \\
{\tt Mult} & $10^{12}$ & 0 & \cellcolor{gray!25}Multipole \\
\hline
\end{tabular}
\caption{{\em Configuration of the simulations.} We indicate the initial values of the maximum intensity of the magnetic field, the angular misalignment between the orbital and magnetic field axes, and the initial topology of the magnetic field. Cells highlighted in grey mark the differences with respect to the reference model {\ttfamily Dip}.}
\label{tab:models}
\end{table}
\section{Results}\label{sec:results}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{images/Energy_all}
\caption{\textit{Energy evolution}. (Top) Rotational (dashed lines) and thermal (solid lines) energies, integrated over the whole domain, for different simulations as a function of time. (Bottom) Magnetic energy for the same simulations.}
\label{fig:integrals}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{images/Average_Bpol_Btor}
\caption{\textit{Average intensity of magnetic field components}. Average intensity evolution of the poloidal (solid lines) and toroidal (dashed lines) components of the magnetic field for all cases in the bulk of the remnant where $\rho\geq10^{13}\ \rm{g~cm^{-3}}$.}
\label{fig:averagebbpolbtor}
\end{figure}
Our initial binary system evolves for $5$ orbits before merging. The merger produces a differentially rotating remnant that relaxes to a hypermassive neutron star (HMNS) in a few milliseconds. Before the merger (whose time we denote hereafter as $t=0$), we set a magnetic field topology on each star corresponding to one of the cases described before. We have thus considered four different simulations, as summarized in Table~\ref{tab:models}.
Fig.~\ref{fig:slices_B2} displays some snapshots, in the orbital plane $z=0$, of the {\ttfamily Dip}, {\ttfamily BHigh}, {\ttfamily Misal} and {\ttfamily Mult} simulations (from top to bottom) at $t=\{2,5,10,20\}$ ms (from left to right) after the merger. The orange scale represents the intensity of the magnetic field, while the two black thin lines are mass density contours corresponding to $\rho = 10^{14}~\rm{g~cm^{-3}}$ (inner line) and $10^{13}~\rm{g~cm^{-3}}$ (outer line). The shape of the remnant varies at the initial times, but clearly restructures itself at later ones, approaching an almost axisymmetric structure. There we can also see the KHI appearing at the merging layers, amplifying the local magnetic field up to a maximum of $10^{17}$~G.
Thus, the magnetic field changes from fully turbulent to partially ordered at $\sim 20$ ms after the merger, when we can see the systematic formation of azimuthal/spiraling filaments. This is due to the winding, which from this point on rules the growth of the magnetic field (see \cite{palenzuela21} for an in-depth discussion about the mechanisms contributing to the amplification).
In Fig.~\ref{fig:integrals} we represent the evolution of the volume-integrated thermal energy (top, solid line), rotational kinetic energy (top, dashed line) and magnetic energy (bottom) for the four models. The energies of all cases rise soon after the merger, at the expense of the large gravitational energy available. For all cases, the thermal energy keeps rising monotonically after the merger while the rotational one decreases, so that all remnants become hotter but more slowly rotating objects. We obtain comparable values of the rotational energy among all cases at $30$ ms after the merger. For the thermal energy, differences are around a factor $\sim1.2$ between {\ttfamily BHigh} and {\ttfamily Mult} at the same time.
After about 5 milliseconds, the magnetic energy growth saturates, with a maximum factor difference of $\sim3$ between {\ttfamily Dip} and {\ttfamily Misal}. The KHI that develops during the merger of the two stars, possibly combined with the Rayleigh-Taylor instability near the surface, is responsible for the amplification of the magnetic energy, which for all cases (except {\ttfamily BHigh}) increases by $10$ orders of magnitude, from $10^{40}\ \rm{erg}$ to $10^{50}\ \rm{erg}$. For the {\ttfamily BHigh} case, the initially large value implies a smaller amplification, by $4$ orders of magnitude, from $10^{46}\ \rm{erg}$ to a similar value of $10^{50}\ \rm{erg}$. Overall, differences in energies lie within a factor $\sim3$, much less than the orders-of-magnitude differences both in the initial magnetic energy and across the evolution.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{images/spectra_all}
\caption{\textit{Energy spectra}. Top: Kinetic (solid) and magnetic (dotted) energy spectra for different configurations as a function of the wavenumber at, from left to right, $t=\{5,10,20,30\}$ ms after the merger. The solid thin black line corresponds to the Kolmogorov slope, while the dotted one to the Kazantsev slope. The dots mark the spectra-weighted average wavenumber of each spectrum.}
\label{fig:spectra_all}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{images/tor_pol_spectra_all}
\caption{\textit{Magnetic energy spectra}. Poloidal (solid) and toroidal (dotted) components of the magnetic spectra for different configurations as a function of the wavenumber at, from left to right, $t=\{5,10,20,30\}$ ms after the merger.}
\label{fig:tor_pol_spectra_all}
\end{figure*}
Fig.~\ref{fig:averagebbpolbtor} shows the intensity of the poloidal (solid lines) and toroidal (dashed lines) components of the magnetic field, as a function of time, averaged in the bulk of the remnant. There we can see the amplification of both components due to the KHI in the first $5$ milliseconds after the merger. What is important to remark here is that: (i) during the exponential growth, both components are similar due to the isotropic character of turbulence, and (ii) after the rise, the toroidal component is the dominant one, as its shape is practically the same as the magnetic energy plot from Fig.~\ref{fig:integrals}. The toroidal component, in the saturation phase, keeps its value nearly constant for all cases, close to $10^{16}~\rm{G}$. The poloidal component, on the other hand, is slowly decreasing, probably due to energy transfer to the toroidal component, reaching values around $10^{15}~\rm{G}$. For the toroidal component of the magnetic field we can see almost the same behaviour for all cases where, again, the misaligned and the multipolar ones are below the others by a factor $\sim3$. Differences are less pronounced when focusing on the poloidal component of the magnetic field, where at the beginning the same cases (i.e. misaligned and multipolar) do not rise as much as the others, but at $30$ ms after the merger the differences are only about a factor $\sim2$.
Besides the volume-integrated quantities, we can analyze the evolving spectral energy distribution ${\cal E}(k)$ (for details on how it is calculated, see the appendix of \cite{vigano20}). This will allow us to see whether the different scales of the problem behave similarly or not across the cases we are considering.
The spectral distribution of the kinetic (solid lines) and magnetic (dotted lines) energies, as a function of the wavenumber $k$, is displayed in Fig.~\ref{fig:spectra_all}. The four plots correspond to, from left to right, $t=\{5,10,20,30\}$ ms after the merger. As a reference, Kolmogorov ($k^{-5/3}$, thin solid line) and Kazantsev ($k^{3/2}$, thin dotted line) slopes are also included in all plots. Large dots indicate the spectra-weighted average of the wavenumber, ${\bar k}=\frac{\int_k [k {\cal E}(k)]}{\int_k {\cal E}(k)}$, which gives the typical size ${\bar \lambda}= 2 \pi/{\bar k}$ of either the fluid or the magnetic (${\bar \lambda}_B$) structures.
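The spectra-weighted average defined above can be sketched in a few lines. The power-law spectrum used here is a toy stand-in for the measured magnetic spectra, not simulation data; the wavenumber range loosely mimics the box-to-grid scales of the runs.

```python
import numpy as np

# Sketch of the spectra-weighted average wavenumber defined in the text,
#   k_bar = int k E(k) dk / int E(k) dk,   lambda_bar = 2 pi / k_bar,
# evaluated on a toy Kolmogorov spectrum (a stand-in for the measured data).
def average_wavenumber(k, E):
    # on a uniform k-grid the dk factors cancel in the ratio of integrals
    k_bar = np.average(k, weights=E)
    return k_bar, 2.0 * np.pi / k_bar

k = np.linspace(2 * np.pi / 60e3, 2 * np.pi / 120.0, 4000)  # [1/m]
E = k ** (-5.0 / 3.0)                                       # toy Kolmogorov slope

k_bar, lam_bar = average_wavenumber(k, E)
print(f"k_bar = {k_bar:.3e} 1/m  ->  lambda_bar = {lam_bar:.0f} m")
```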
For all times represented, the kinetic energy spectra behave in the same way for all cases (having a Kolmogorov slope in the inertial range), regardless of the scale we are considering. The magnetic energy spectra initially follow the Kazantsev slope up to the numerical dissipative scale (intrinsically set by the discretization scheme). This is characteristic of the kinematic phase of the dynamo, which lasts until the dynamo approaches saturation at small scales (large $k$). At $t=10\ \rm{ms}$ all cases have roughly the same magnetic energy spectra. At $t=20\ \rm{ms}$ after the merger, small differences begin to appear, although they may be in part due to stochastic variations. At $t=30\ \rm{ms}$ after the merger, such differences are less evident. Moreover, the amplification has saturated and the magnetic spectra appear to be compatible with a Kolmogorov slope at intermediate scales.
We found that ${\bar \lambda}_B \sim 800$~m soon after the merger, increasing to almost $2$~km at $t=30$~ms. This confirms that larger coherent magnetic field structures are being formed in the remnant. Clearly, there is no significant difference (less than $7\%$ in ${\bar \lambda}_B$) between the simulations with different initial topologies considered here.
In Fig.~\ref{fig:tor_pol_spectra_all} we further analyze the magnetic energy spectra, identifying the contributions coming from the toroidal (dashed lines) and poloidal (solid) components. At $5$ ms after the merger, both components are similar for all simulations. As time passes, the poloidal component of all cases decreases by two orders of magnitude while the toroidal one increases by about one order of magnitude. However, for all times, both components are still comparable among the different simulations. Notice that, comparing both components in different models, the differences are up to a factor $3$ only, much smaller than the overall changes in time. Finally, at $t=30$ ms, the slope of the toroidal component (the dominant contribution to the magnetic energy) approaches a Kolmogorov slope at intermediate scales. An interesting difference at these times is that, starting with an (unrealistically) high magnetic field, there is an excess of large-scale magnetic fields (low $k$), an effect probably due to the winding acting on the already quite organized field.
\section{Conclusions}\label{sec:conclusions}
We have studied the influence of different initial magnetic configurations on the evolution of BNS mergers, using the high-resolution large-eddy simulations also employed in the accompanying paper~\cite{palenzuela21}, to which we refer for further details on the methods and amplification mechanisms at work. In particular, we have considered initial magnetic fields confined within the stars, varying the intensity, the misalignment of the magnetic moment with the orbital axis, and the poloidal topology. Looking at the evolution of the integrated energies and spectral distributions, we showed that the differences lie within a factor $3$ at most, and could be even smaller in more accurate simulations. This, then, ensures that the initial topology of the stars is not relevant at all because the turbulence, induced in the remnant mainly by the KHI, will erase any memory of realistic magnetic fields of $B \leq 10^{12}$~G in only a few milliseconds after the merger.
In this work we have explored only some of the infinitely many possible magnetic configurations. More choices could be explored, in particular: the presence of a toroidal field; non-axisymmetric topologies; magnetic fields extending outside the stars; more complex, small-scale dominated configurations. However, the results shown here already suggest that the expected dependence on the initial topology is basically negligible, compared to other much more uncertain issues. Among the latter, we mention the numerical ability to resolve the KHI and the physics involved in the post-merger phase (neutrino transport, a temperature-dependent equation of state, etc.).
This universality of the magnetic field outcome after the merger casts serious doubts on whether we could infer the initial magnetic field of the stars in a BNS merger through multimessenger astronomy. The only foreseeable possibility is through the presence of a precursor electromagnetic signal before the merger, which would still carry information on the initial topology and intensity of the magnetic field~\cite{palenzuela13a,palenzuela13b,ponce14}.
The final message is that the commonly used simplification of the topology of the magnetic field is acceptable in BNS mergers, as long as the magnetic field is not too large and enough turbulence develops to erase the seed and produce the correct spectral distribution. In those cases, the system will evolve to practically the same remnant regardless of its initial configuration. However, if one wants to focus on the realistic generation of a large-scale field in the post-merger, the rule-of-thumb is basically the following: be sure that the large-scale initial magnetic field is much smaller than the KHI-amplified average field that the numerical scheme is able to reproduce.
\subsection*{Acknowledgements}
We thank Riccardo Ciolfi, Wolfgang Kastaun and Jay Vijay Kalinani for providing us the EoS and for the useful discussions. This work was supported by European Union FEDER funds, the Ministry of Science, Innovation and Universities and the Spanish Agencia Estatal de Investigación grant PID2019-110301GB-I00. DV is funded by the European Research Council (ERC) Starting Grant IMAGINE (grant agreement No. [948582]) under the European Union’s Horizon 2020 research and innovation programme. The authors thankfully acknowledge the computer resources at
MareNostrum and the technical support provided by Barcelona Supercomputing Center (BSC) through Grant project Long-LESBNS by the $22^{\rm nd}$ PRACE regular call (Proposal 2019215177, P.I. CP and DV).
\bibliographystyle{utphys}
\section{Introduction and Summary}
Symmetry has always been a guiding principle in characterizing physical systems. While weakly coupled field theories are known to be tractable in terms of perturbation theory in the coupling, often the strongly coupled ones can only be constrained by symmetry arguments. For example, the physics of low-energy quantum chromodynamics (QCD) is captured by an effective theory of pions, whose low-energy interactions are fixed by the broken chiral symmetry.
Conformal field theories (CFTs) are especially beautiful examples of how one can leverage the symmetry group. While generically strongly coupled, conformal symmetry almost completely fixes the behavior of correlation functions and gives non-trivial insights into the structure of their Hilbert spaces. In some cases, the conformal bootstrap \cite{Rattazzi:2008pe}
can provide us with rich physics of such theories entirely based on symmetry principles. However, we are still lacking many concrete calculational tools for these theories. In CFTs with an additional global $U(1)$, recent progress has been made by constructing effective field theories for their large charge ($Q$) sector. Generically, the large charge sector can be horribly complicated in terms of elementary fields and their interactions, but one can set up a systematic $1/Q$ expansion to probe this strongly coupled regime. This has been useful in finding the scaling of operator dimensions, and many other meaningful physical quantities \cite{hellerman2015cft, hellerman2017note,
monin2017semiclassics,banerjee2018conformal,de2018large,Mukhametzhanov:2018zja}.
In this work, we will be dealing with systems with nonrelativistic scale and conformal invariance, i.e. systems invariant under the Schr\"odinger symmetry. While in a CFT one needs an external global symmetry to talk about a large charge expansion, nonrelativistic conformal field theories (NRCFTs) come with a ``natural" $U(1)$, the particle number symmetry. The Schr\"odinger symmetry group and its physical consequences have been studied in \cite{Mehen:1999nd,Nishida:2010tm,Nishida:2007pj,goldberger2015ope,Golkar:2014mwa,Pal:2018idc}. The physical importance of the Schr\"odinger symmetry lies in its varied realisations, ranging from fermions at unitarity\cite{Regal:2004zza,Zwierlein:2004zz} to examples including spin chain models \cite{Chen:2017tij}, systems consisting of deuterons \cite{Kaplan:1998tg,Kaplan:1998we}, ${}^{133}Cs$\cite{Chin:2001uan}, ${}^{85}Rb$ \cite{Roberts:1998zz}, and ${}^{39}K$ \cite{loftus2002resonant}.
Such theories, similar to CFTs, admit a state-operator correspondence\cite{nishida2007nonrelativistic, goldberger2015ope} in which the dimension of an operator corresponds to the energy of a state in a harmonic potential\footnote{This state-operator map is different from the one discussed in \cite{Pal:2018idc} to explore the neutral sector. In \cite{Pal:2018idc}, the map is more akin to the $(0+1)$ dimensional CFT.}. Specifically, the scaling generator $D$, which scales $\vec{x}\mapsto\lambda \vec{x}$ and $t\mapsto\lambda^2 t$ for $\lambda \in \mathbf{R}$, gets mapped to the Hamiltonian ($H_\omega$) in the harmonic trap, i.e. $H_\omega \equiv H+ \omega^2 C$ where $C=\frac{1}{2}\int d^dx ~ x^2 n(x)$ is the special conformal generator, $n(x)$ is the number density and $H$ is the time translation generator of the Schr\"odinger group. The parameter $\omega$ determines the strength of the potential and plays an analogous role to the radius of the sphere in the relativistic state-operator correspondence.\footnote{Here and also subsequently, we will be working in non-relativistic ``natural" units of $m=\hbar=1$}.
Given this set up, we consider an operator $\Phi$ with large number charge $Q$. For example, one can think of $\phi^\frac{N}{2}$ for $\phi(x)=\ \normord{\psi^\dag_{\uparrow}(x) \psi^\dag_{\downarrow}(x)}$ in the case of fermions at unitarity in $d=3$ dimensions. By the state-operator correspondence, the operator is related to a state $|\Phi\rangle$ with finite density of charge ($n$) in the harmonic trap. There's an energy scale set by the density $\Lambda_{UV} \sim \mu \sim n^{\frac{2}{d}}$, $\mu$ being the chemical potential which fixes the total charge to $Q$. There is also a scale set by the trap $\Lambda_{IR} \sim \omega$ which controls the level spacing of $H_\omega$. The limit of large charge $Q \gg 1$ then implies a parametric separation of these scales. This allows us to set up a perturbatively controlled expansion in $1/Q$ and probe the large charge sector of a theory invariant under Schr\"odinger symmetry.
In this limit it becomes appropriate to ask, what state of \emph{matter} describes the large charge sector? Such a state with finite density of charge necessarily breaks some of the space-time symmetries e.g. scale transformations, (Galilean) boosts, special-conformal transformations. That these symmetries are spontaneously broken also implies that they must be realized non-linearly in the effective field theory (EFT) describing the large charge sector. We expect the low-energy degrees of freedom to be Goldstones.
One possibility is that the $U(1)$ symmetry remains unbroken. This is the case for a system with a Fermi surface. There the low-energy degrees of freedom would also include fermionic matter in addition to any Goldstones. The simplest candidate EFT, Landau Fermi-Liquid theory, is incompatible with the non-linearly realized Schr\"odinger symmetry\cite{rothstein2018symmetry} and therefore this is a fairly exotic possibility.
Another possibility is that the $U(1)$ symmetry is also spontaneously broken, leading to superfluid behavior. This has been the case most studied in the literature and seems like the most obvious possibility for a bosonic NRCFT. Additionally, both unitary fermions and the scale invariant anyon gas at large density are suspected to be superfluids. Therefore we focus exclusively on this symmetry breaking pattern.
\subsection*{Summary of Results}
We compute the properties of the ground state $\ket{\Phi}$ with finite density of charge, under the assumption it describes a rotationally invariant superfluid, via an explicit path integral representation:
\begin{equation} \label{pathint}
\vev{\Phi|e^{-H_\omega T}|\Phi} = \int \mathcal{D}\chi ~ e^{-S_{eff}[\chi]+\mu \int d^dx ~n(x)}
\end{equation}
where $\chi$ is a Goldstone boson describing excitations above the ground state, $\mu$ is the chemical potential and $n(x)$ is the number density which is canonically conjugate to $\chi$. This integral can then be computed by saddle point in the large $\mu$ limit. The chemical potential $\mu$ can then be fixed semi-classically in terms of the charge $Q$. Thus self-consistently, we are obtaining a large $Q$ expansion. We employ the coset construction to write down the most general effective action for the Goldstone which is consistent with the non-linearly realized Schr\"odinger symmetry.
\begin{itemize}
\item
For the case with magnetic vector potential $\vec{A}=0$ (the one that is relevant for the NRCFT in harmonic trap), we find the effective Lagrangian given by
\begin{equation} \label{secondorderLefff}
\mathcal{L}_{eff}= c_0 X^{\frac{d}{2}+1} + c_1 \frac{X^{\frac{d}{2}+1}}{X^3} \partial_i X \partial^i X + c_2 \frac{X^{\frac{d}{2}+1}}{X^3} (\partial_i A_0)^2 + c_3 \frac{X^{\frac{d}{2}+1}}{X^2} \partial_i \partial^i A_0 + c_4 \frac{X^{\frac{d}{2}+1}}{X^2} (\partial_i \partial^i \chi)^2
\end{equation}
where $X=\partial_t \chi-A_0 -\frac{1}{2}\partial_i \chi \partial^i \chi $. However this is not the full set of constraints. It can be shown that imposing `general coordinate invariance' will reduce the number of independent Wilson coefficients even further\cite{son2006general}. In particular there are the additional constraints: $c_2 = 0$ and $c_3 = - d^2 c_4$.
Additionally, in $d=2$, one can have parity violating operator at this order:
\begin{equation}
c_5 \frac{1}{X} \epsilon^{ij} (\partial_i A_0)(\partial_j X)
\end{equation}
The details can be found in Section \ref{sec:effL}.
\item The dispersion relation of low-energy excitations above the ground state is found to be:
\begin{equation}
\epsilon(n,\ell) = \pm \omega \left(\frac{4}{d}n^2 + 4n + \frac{4}{d}\ell n - \frac{4}{d}n + \ell \right)^{\frac{1}{2}}
\end{equation}
where $\ell$ is the angular momentum, $n$ is a non-negative integer and $\epsilon(n,\ell)$ is the excitation energy. The dispersion determines the low-lying operator dimensions explicitly. Since $\epsilon(n=0,\ell=1) = \pm \omega$ and $\epsilon(n=1,\ell=0)=\pm 2\omega$, they can be identified with two different kinds of descendant operators appearing in the Schr\"odinger algebra. The details can be found in Section \ref{sec:exc}.
\item
At leading order in $Q$, we find the ground state energy, i.e. the dimension $\Delta_Q$ of the corresponding operator $\Phi$:
\begin{align}
\Delta_Q=\left(\frac{d}{d+1}\right)\xi Q^{1+\tfrac{1}{d}}\,,\quad\text{where}\quad \frac{1}{c_0}=\frac{\Gamma(\tfrac{d}{2}+2)}{\Gamma(d+1)}(2\pi \xi^2)^{\tfrac{d}{2}}\,.
\end{align}
where $c_0$ is a UV parameter of the theory, appearing in the Lagrangian \eqref{secondorderLefff}.
Specifically, we have
\begin{align}
\Delta_Q&=\frac{2}{3} \left(\xi Q^{3/2}\right) + c_1 \frac{4\pi}{3}\xi \left( Q^{\frac{1}{2}} \log Q \right)+\mathcal{O}\left(Q^{\frac{1}{2}} \right) \quad \text{for}\ d=2\,.\\
\Delta_Q&= \left(\frac{3}{4}\right)\xi Q^{4/3}-\left (c_1+\frac{c_3}{2}\right) (3\sqrt{2}\pi^2) \xi^2 Q^{2/3}+ O\left(Q^{5/9}\right)\, \quad \text{for}\ d=3\,.
\end{align}
The details can be found in Section \ref{sec:dim}.
\item We find that the structure function $F$, appearing in the three-point function of two operators with large charges $Q$ and $Q+q$ and one operator $\phi_q$ with small charge $q$, behaves as follows:
\begin{equation} \label{Ffunction}
F(v= {\bf i} \omega y^2) \propto Q^{\frac{\Delta_\phi}{2d}} \left(1- \frac{\omega y^2 }{2\xi }Q^{-1/d}\right)^{\frac{\Delta_\phi}{2}} e^{-\frac{1}{2} q \omega y^2}
\end{equation}
where $y$ is the insertion point of $\phi_q$ in the oscillator co-ordinates and $\Delta_\phi$ is the dimension of $\phi_q$. The details can be found in Section \ref{sec:3pt}.
\end{itemize}
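The closed-form expressions summarized above can be checked numerically. The sketch below evaluates the quoted dispersion relation and the leading term of $\Delta_Q$; it verifies the internal consistency of the formulas (the identification $\epsilon(0,1)=\omega$, $\epsilon(1,0)=2\omega$ and the leading coefficients $2/3$, $3/4$), not an independent derivation.

```python
import numpy as np

# Numerical check of the quoted large-charge results (the closed forms in
# the summary above, with omega = xi = 1 in the stated natural units).
def eps(n, ell, d, omega=1.0):
    """Phonon excitation energy epsilon(n, l) in the harmonic trap."""
    return omega * np.sqrt(4.0 * n**2 / d + 4.0 * n
                           + 4.0 * ell * n / d - 4.0 * n / d + ell)

def Delta_Q_leading(Q, d, xi):
    """Leading-order operator dimension Delta_Q = d/(d+1) * xi * Q^(1+1/d)."""
    return (d / (d + 1.0)) * xi * Q ** (1.0 + 1.0 / d)

# The two universal descendant energies hold in any dimension d:
for d in (2, 3):
    assert np.isclose(eps(0, 1, d), 1.0)   # epsilon(0, 1) = omega
    assert np.isclose(eps(1, 0, d), 2.0)   # epsilon(1, 0) = 2 omega

# Leading coefficients: 3/4 in d = 3 and 2/3 in d = 2:
assert np.isclose(Delta_Q_leading(1.0, 3, 1.0), 0.75)
assert np.isclose(Delta_Q_leading(1.0, 2, 1.0), 2.0 / 3.0)
```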
\textit{Note: While this work was being completed a paper appeared with some overlap\cite{Favrod:2018xov}. They identify many of the same operators we do, through different means and without couplings to the background gauge field. The primary tool we utilize is the state-operator correspondence for NRCFTs, and we therefore directly compute properties of NRCFTs in a harmonic trap in the large charge limit.}
\section{Lightning Review of Schr\"odinger Algebra}
The Schr\"odinger algebra has been extensively explored in \cite{Mehen:1999nd,Nishida:2010tm,Nishida:2007pj,goldberger2015ope,Golkar:2014mwa,Pal:2018idc}. Here we take the readers through a quick tour of the essential features of Schr\"odinger algebra, that we are going to use through out this paper. The most important subgroup of Schr\"odinger group is the Galilean group, generated by time translation generator $H$, spatial translation generators $P_i$, rotation generators $J_{ij}$ and boost generators $K_i$. One can centrally extend this group by appending another $U(1)$ generator $N$, which generates the particle number symmetry. As a whole, these generators constitute what we call Galilean algebra and they satisfy:
\begin{gather}
[J_{ij},N]=[P_i,N]=[K_i,N]=[H,N]=0\nonumber \\
[J_{ij}, P_{k}]={\bf i}
(\delta_{ik}P_{j}-\delta_{jk}P_{i})\,, \nonumber \\
[J_{ij},K_{k}]={\bf i}
(\delta_{ik}K_{j}-\delta_{jk}K_{i})\,, \nonumber\\
[J_{ij},J_{kl}]= {\bf i} (\delta_{ik}J_{jl}-\delta_{jk}J_{il}+\delta_{il}J_{kj}-\delta_{jl}J_{ki})\,,\nonumber \\
\label{eq:GalAlg1}
[P_{i},P_{j}]=[K_{i},K_{j}]=0\,, \qquad [K_{i},P_{j}]={\bf i}\delta_{ij}N\,,\\
[H,N]=[H,P_{i}]=[H,J_{ij}]=0\,, \quad [H,K_{i}]=-{\bf i} P_{i}\,. \nonumber
\end{gather}
The Galilean group is enhanced to Schr\"odinger group by appending a scaling generator $D$ and a special conformal generator $C$ such that they satisfy the following commutator relations:
\begin{gather}
[D,P_{i}]={\bf i} P_{i}\,, \quad [D,K_{i}]= -{\bf i} K_{i}\,, \\
\quad [D,H]=2{\bf i} H\,,\quad [D,C]=-2{\bf i} C\,,\quad [H,C]=-{\bf i} D\,,\\
\quad [J_{ij},D]=0\,,\quad [J_{ij},C]=0\,,\quad [N,D]=[N,C]=0\,.
\end{gather}
The state-operator correspondence for an NRCFT is based on the following definition \cite{goldberger2015ope}:
\begin{equation} \label{SOdef}
\ket{\mathcal{O}} \equiv e^{-\frac{H}{\omega}} \mathcal{O}^\dag (0) \ket{0} = \mathcal{O}^\dag\left(-\frac{{\bf i}}{\omega},0\right)\ket{0}
\end{equation}
where $\mathcal{O}^\dag$ is a primary operator of number charge $Q_{\mathcal{O}^\dag} = -Q_{\mathcal{O}} \geq 0$. By the Schr\"odinger algebra, this state satisfies:
\begin{equation} \label{SOconseq}
N \ket{\mathcal{O}} = Q_{\mathcal{O}^\dag}\ket{\mathcal{O}} ~~~~~ H_\omega \ket{\mathcal{O}} = \omega \Delta_\mathcal{O} \ket{\mathcal{O}}
\end{equation}
where $H_\omega = H + \omega^2 C$ is the Hamiltonian with the trapping potential.
It is natural to define a transformation from Galilean coordinates $x=(t,\vec{x})$ to the ``oscillator frame" $y=(\tau, \vec{y})$ where the time translation $\tau \rightarrow \tau + a$ is generated by $H_\omega$. Explicitly this is given by
\begin{equation} \label{coordinates}
\omega \tau = \arctan \omega t\,,\quad\quad \vec{y} = \frac{\vec{x}}{\sqrt{1+\omega^2 t^2}}
\end{equation}
and allows us to map primary operators and their correlation functions in the oscillator frame to the Galilean frame via the map\cite{goldberger2015ope}:
and allows us to map primary operators and their correlation functions in the oscillator frame to the Galilean frame via the map\cite{goldberger2015ope}:
\begin{align}\label{primopmap}
\mathcal{\tilde{O}}(y) &= \left(1+\omega^2 t^2\right)^{\frac{\Delta_\mathcal{O}}{2}} \exp\left[\frac{{\bf i}}{2} Q_{\mathcal{O}} \frac{ \omega^2 |\vec{x}|^2 t}{1+\omega^2 t^2}\right] \mathcal{O}(x)\\
\mathcal{O}(x)&=\left[\cos(\omega t)\right]^{\Delta_{\mathcal{O}}} \exp\left[-\frac{{\bf i}}{2} Q_{\mathcal{O}} \omega |\vec{y}|^2 \tan(\omega\tau)\right] \mathcal{\tilde{O}}(y)
\end{align}
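As a quick sanity check of the coordinate map \eqref{coordinates}, the following sketch implements it together with its inverse and verifies the round trip numerically (for a single spatial component; the map acts on $\vec{x}$ componentwise).

```python
import numpy as np

# Sketch of the Galilean <-> oscillator-frame map:
#   omega*tau = arctan(omega*t),  y = x / sqrt(1 + omega^2 t^2),
# together with its inverse, checked on random points.
def to_oscillator(t, x, omega):
    tau = np.arctan(omega * t) / omega
    y = x / np.sqrt(1.0 + (omega * t) ** 2)
    return tau, y

def to_galilean(tau, y, omega):
    t = np.tan(omega * tau) / omega
    x = y * np.sqrt(1.0 + (omega * t) ** 2)
    return t, x

rng = np.random.default_rng(0)
omega = 0.7
t = rng.uniform(-5, 5, 100)
x = rng.uniform(-5, 5, 100)

tau, y = to_oscillator(t, x, omega)
t2, x2 = to_galilean(tau, y, omega)
assert np.allclose(t, t2) and np.allclose(x, x2)

# The oscillator time is compactified: omega*tau lies in (-pi/2, pi/2),
# so the whole Galilean time axis fits in one trap period.
assert np.all(np.abs(omega * tau) < np.pi / 2)
```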
In this paper, we will be interested in matrix elements of the form:
\begin{equation} \label{genericC}
\vev{\Phi|\phi_1(y_1) \cdots \phi_n(y_n)|\Phi}
\end{equation}
where $\Phi^\dag$ is a primary of charge $Q \gg 1$ and $\phi_i$ are also charged \footnote{The state-operator correspondence breaks down for neutral operators as they act trivially on the vacuum and their representation theory is not well understood. \cite{Pal:2018idc} explores how to circumvent this issue.} primaries with $q_i \ll Q$.\footnote{Here we point out that if an operator is explicitly written as a function of the oscillator co-ordinate, it is to be understood that we have already employed the mapping \eqref{primopmap}. Thus $\phi_{i}(y_1)$ in \eqref{genericC} should technically be written as $\tilde{\phi}_{i}(y_1)$, albeit we omit the ``tilde" sign for notational simplicity.}
In the Galilean frame, the general form of a two point function is fixed to be
\begin{align}
\vev{\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)} = c \delta_{\Delta_1,\Delta_2} \delta_{Q_1,-Q_2} \frac{\exp\left[{\bf i} Q_{2}\frac{|\vec{x}_1-\vec{x}_2|^2}{2(t_1-t_2)}\right]}{(t_1-t_2)^{\Delta_1}}
\end{align}
where $c$ is a numerical constant, $\Delta_i$ is the dimension of the operator $\mathcal{O}_i$, and $Q_i$ is the charge of $\mathcal{O}_i$. The symmetry algebra constrains the general form of a three-point function up to an arbitrary function of the cross-ratio $v_{ijk}$ defined below:
\begin{align} \label{3ptfuncGal}
\nonumber &\vev{\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)\mathcal{O}_3(x_3)}\equiv G(x_1;x_2;x_3)\\
&= F(v_{123}) \exp\left[-{\bf i} \frac{Q_1}{2} \frac{\vec{x}_{13}^2}{t_{13}}-{\bf i} \frac{Q_2}{2} \frac{\vec{x}_{23}^2}{t_{23}}\right] \prod_{i<j} t_{ij}^{\frac{\Delta}{2}-\Delta_i -\Delta_j}
\end{align}
where $\Delta\equiv \sum_i \Delta_i $ , $x_{ij}\equiv x_i - x_j$ , and $F(v_{ijk})$ is a function of the cross-ratio $v_{ijk}$ defined:
\begin{equation} \label{crossratio}
v_{ijk} = \frac{1}{2}\left(\frac{\vec{x}_{jk}^2}{t_{jk}} - \frac{\vec{x}_{ik}^2}{t_{ik}} + \frac{\vec{x}_{ij}^2}{t_{ij}} \right)
\end{equation}
We note that the three-point function vanishes unless $\sum_i Q_i=0$.
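The invariance of the cross-ratio \eqref{crossratio} under Galilean boosts, $\vec{x}\to\vec{x}+\vec{u}\,t$ with times unchanged (consistent with its role as the one undetermined argument of the three-point function), can be verified numerically; the check below uses arbitrary insertion points and an arbitrary boost velocity.

```python
import numpy as np

# Numerical check that v_ijk = (1/2)(x_jk^2/t_jk - x_ik^2/t_ik + x_ij^2/t_ij)
# is invariant under Galilean boosts x -> x + u*t. Analytically this follows
# from x_jk - x_ik + x_ij = 0 and t_jk - t_ik + t_ij = 0.
def cross_ratio(ts, xs):
    def s(i, j):  # |x_ij|^2 / t_ij
        return np.dot(xs[i] - xs[j], xs[i] - xs[j]) / (ts[i] - ts[j])
    i, j, k = 0, 1, 2
    return 0.5 * (s(j, k) - s(i, k) + s(i, j))

rng = np.random.default_rng(1)
ts = rng.uniform(0, 1, 3) + np.array([0.0, 2.0, 4.0])  # three distinct times
xs = rng.uniform(-1, 1, (3, 3))                        # three points in d = 3
u = rng.uniform(-1, 1, 3)                              # boost velocity

v0 = cross_ratio(ts, xs)
v1 = cross_ratio(ts, xs + u * ts[:, None])             # boosted positions
assert np.isclose(v0, v1)
```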
\section{Lightning Review of Coset Construction}
A symmetry is said to be spontaneously broken if the lowest energy state, the ground state, is not an eigenstate of the associated charge. The low-energy effective action, describing the physics above the ground state, is still invariant under the full global symmetry group, but the broken symmetries are realized \emph{non-linearly}. Typically this means the effective action describes some number of Goldstones.
The coset construction gives a general method for constructing effective actions with appropriately non-linearly realized symmetry actions. It was developed for internal symmetries by CCWZ \cite{coleman177structure, callan1969cg} and later generalized to space-time symmetries\cite{ogievetsky1974nonlinear}. Here we give a nimble review of the method and its application to the superfluid. We refer to the original literature and the recent review \cite{delacretaz2014re} for more details. The primary objective of the coset construction is to write down the most general action invariant under a global symmetry group $G$, of which only the subgroup $G_0$ is linearly realized. Let us consider a symmetry group which contains the group of translations, generated by $P_a$. We denote the broken generators as $X_b$, with associated Goldstones $\pi_b(x)$, and the unbroken generators as $T_c$.
We can define the exponential map from space-time to the coset space $G / G_0$
\begin{equation} \label{expmap}
U \equiv e^{{\bf i} \bar{P}_a x^a} e^{{\bf i} X_b \pi^b(x)}
\end{equation}
With this map we can define the $1$-form, known as the Maurer-Cartan (henceforth we call it MC) form, on the coset space. Under a $G$-transformation \eqref{expmap} transforms as
\begin{equation} \label{exptrans}
g: U(x) \rightarrow e^{{\bf i} \bar{P}_a (x')^a} e^{{\bf i} X_b \pi^{'b}(x')} h(\pi(x),g)
\end{equation}
where $h(\pi(x),g)$ is some element in $G_0$, determined by the Goldstones and $g \in G$, that ``compensates" to bring $U(x)$ back to the form in \eqref{expmap}. This determines how the Goldstone fields transform\footnote{For space-time symmetries there's a translation piece even though $\bar{P}_a$ are unbroken. This is because, on coordinates, translations are always non-linearly realized as $x \rightarrow (x+a)$}.
Expanded in a basis of generators the MC form looks like:
\begin{equation} \label{MCformgen}
\Omega \equiv -{\bf i} U^{-1} \partial_\mu U \equiv E_\mu^a(\bar{P}_a + (\nabla_a \pi^b) X_b + A_a^c T_c )
\end{equation}
where each of the tensors $\{E_\mu^a , ~ \nabla_a \pi^b , ~ A_a^c \}$ is a function of the Goldstone fields $\pi_a$. Here $E_\mu^a$ is a vierbein, $\nabla_a \pi^b$ are the covariant Goldstone derivatives, and $A_a^c$ transforms like a connection.
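As a simple illustration, consider a single spontaneously broken internal $U(1)$ charge $X$ with Goldstone $\pi(x)$. Taking $U = e^{{\bf i} \pi(x) X}$, the MC form collapses to
\begin{equation}
\Omega = -{\bf i}\, U^{-1} \partial_\mu U = (\partial_\mu \pi)\, X\,,
\end{equation}
so the covariant derivative of the Goldstone is simply $\partial_\mu \pi$ and the lowest order invariant Lagrangian is $\propto (\partial \pi)^2$, plus higher derivative corrections.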
Several remarks are in order. Once space-time symmetries are broken the quantity $d^dx$ is no longer necessarily a scalar under those transformations; however, the quantity $d^dx \det E$ can be used to define an invariant measure for the action. Contractions of the objects $\nabla_a \pi^b$, performed in a way which manifestly preserves the $G_0$ symmetry, also provide $G$-invariants and form the Goldstone part of the effective action. The connection $A_a^c$ and the vierbein can be used to define the following ``higher" covariant derivative
\begin{equation} \label{highcov}
\nabla^H_a \equiv (E^{-1})_a^\mu \partial_\mu + {\bf i} A_a^c T_c
\end{equation}
An object like $\nabla^H_a \nabla_b \pi^c$ also transforms covariantly and $G_0$-invariant contractions with other tensors should be included. The other primary use of \eqref{highcov} is for defining covariant derivatives of ``matter fields". For example, suppose $\psi$ is a matter field transforming in a $k$-dimensional linear representation $r$ of $G_0$ as $\psi \rightarrow \psi' = r(h) \psi$. The coset construction provides multiple ways to uplift $G_0$ representations to full $G$ representations. The one of importance to us is when $r$ appears in the decomposition of a $K$-dimensional representation $R$ of $G$. Defining the field $\tilde{\psi} \equiv \left( \psi , ~ 0 \right)$ in the $K$-dimensional representation, one can show that the field $\Psi = R(\Omega) \tilde{\psi}$ transforms linearly under the full group $G$. If a subset of the symmetry is gauged then we just covariantly replace $\partial_\mu \rightarrow D_\mu = \partial_\mu + {\bf i} \bar{A}_\mu^d \bar{T}_d$ in the above. The tensors will then depend on the gauge fields $\bar{A}$ but otherwise everything goes through.
One last important aspect of space-time symmetry breaking is that not all the Goldstone bosons are necessarily independent \cite{low2002spontaneously}. This occurs when the associated currents differ only by functions of space-time: a localized Goldstone particle is created by a current times a function of space-time, so the resulting particles cannot be sharply distinguished. This redundancy also appears in the coset construction. Suppose $X$ and $X'$ are two different broken generators in different $G_0$-multiplets, with associated Goldstone bosons $\pi$ and $\pi'$, and let $\bar{P}_\nu$ be an unbroken translation generator. Let us also assume that there is a non-trivial commutator of the form $[P_\nu , X] \supseteq X'$. One can see, from calculating the Maurer-Cartan form via the BCH identity, that this implies an undifferentiated $\pi$ in the covariant Goldstone derivative $\nabla_\nu \pi'$. The quadratic term is then $(\nabla_\nu \pi')^2 \sim c^2 \pi^2$; this is an effective mass term for the $\pi$ Goldstone. Thus we are justified in integrating it out by imposing its equation of motion. A simpler constraint, equivalent up to redefinitions, is setting $\nabla_\nu \pi' = 0$. This is a covariant constraint, completely consistent with the symmetries; in the literature it is known as an ``inverse Higgs constraint".
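To see the undifferentiated Goldstone appear explicitly, suppose schematically that $[P_\nu , X] = {\bf i}\, c\, X'$ for some constant $c$. Expanding the MC form with the BCH identity then gives, up to normalization and higher order terms,
\begin{equation}
\nabla_\nu \pi' = \partial_\nu \pi' + c\, \pi + \ldots\,,
\end{equation}
so the quadratic Lagrangian indeed contains $(\nabla_\nu \pi')^2 \supset c^2 \pi^2$, and the constraint $\nabla_\nu \pi' = 0$ fixes $\pi = -\frac{1}{c}\partial_\nu \pi' + \ldots$ in terms of the surviving Goldstone.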
\section{Schr\"odinger Superfluid from Coset Construction}\label{sec:effL}
In this section, we will use the coset construction to construct the most general Goldstone action consistent with the broken symmetries of a rotationally invariant Schr\"odinger superfluid. For the purpose of determining local properties of the superfluid state in the trap we can first work in the thermodynamic limit defined by $\Lambda_{IR} \sim \omega \rightarrow 0$. The symmetry generators are then just those of the usual Schr\"odinger group.
The superfluid ground state $\ket{\Phi}$ spontaneously breaks the number charge $N$. As mentioned in the introduction, this state also breaks the conformal generators and boosts. It is simplest to describe such states in the grand canonical ensemble. We remark that in the thermodynamic limit, one can leverage the equivalence between the canonical ensemble with fixed charge and the grand canonical ensemble\footnote{As a result, one can always view the large charge expansion as a large chemical potential expansion.}. Thus, in what follows, we define the operator $\bar{H} = H-\mu N$ such that $\bar{H} \ket{\Phi} = 0$. The parameter $\mu$ plays the role of a chemical potential; it is a Lagrange multiplier to be determined by the charge density. By assumption, $\ket{\Phi}$ is not an eigenstate of $N$. It therefore cannot be an eigenstate of $H$ while satisfying $\bar{H}\ket{\Phi}=0$. The unbroken `time' translations are therefore generated by $\bar{H}$ \cite{nicolis2013more}. The symmetry breaking pattern is then given by:
\begin{equation} \label{symmetry}
\text{Unbroken:}~ \{ \bar{H} \equiv H - \mu N\,, P_i\,, J_{ij}\} ~~~~~ \text{Broken:}~ \{ N\,, K_i\,, C\,, D\}\,,
\end{equation}
for which we can parameterize the coset space as:
\begin{equation} \label{cosetelem}
U= e^{{\bf i} \bar{H} t} e^{-{\bf i} \vec{P}\cdot \vec{x}} e^{{\bf i} \vec{\eta} \cdot \vec{K}} e^{-{\bf i} \lambda C} e^{-{\bf i} \sigma D} e^{{\bf i} \pi N} = e^{{\bf i} H t} e^{-{\bf i} \vec{P}\cdot \vec{x}} e^{{\bf i} \vec{\eta} \cdot \vec{K}} e^{-{\bf i} \lambda C} e^{-{\bf i} \sigma D} e^{{\bf i} \chi N}\,.
\end{equation}
Here we use 4 distinct Goldstone fields:
\begin{itemize}
\item $\pi$ is the `phonon', the Goldstone for the charge. It defines the shifted field $\chi \equiv \pi + \mu t$.
\item $\vec{\eta}$ is the `framon', the Goldstone for (Galilean) boosts. It transforms as a vector.
\item $\lambda$ is the `trapon', the Goldstone for special conformal transformations.
\item $\sigma$ is the `dilaton', the Goldstone for dilations.
\end{itemize}
To allow for a background field $A_\mu$, we define the covariant derivative $D_\mu = \partial_\mu + {\bf i} A_\mu N$. From this group element we can calculate the MC form:
\begin{equation} \label{MCform}
-{\bf i} U^{-1} D_\mu U \equiv E_\mu^\nu[ \bar{P}_\nu + (\nabla_\nu \eta^i )K_i - (\nabla_\nu \lambda ) C - (\nabla_\nu \sigma) D + (\nabla_\nu \pi) Q]
\end{equation}
where $\bar{P}_\mu \equiv (-\bar{H}, \vec{P})$, and we've anticipated the absence of a gauge field for $J_{ij}$. We remark that the relativistic notation is just for ease of writing; because space and time are treated differently we have to treat those components of the MC form separately. Explicitly we have the following:
\begin{equation} \label{vierbein}
E_0^0 = e^{-2\sigma}\,, ~~~~~ E_0^i = -\eta^i e^{-\sigma}\,, ~~~~~ E_i^0 = 0\,, ~~~~~ E_i^j = \delta_i^j e^{-\sigma}\,,
\end{equation}
\begin{equation} \label{framon}
\nabla_0 \eta^j = e^{3 \sigma}( \dot{\eta}^j + \vec{\eta}\cdot \vec{\partial}\eta^j )\,, ~~~~~ \nabla_i \eta^j = e^{2 \sigma}( \partial_i \eta^j - \lambda \delta_i^j )\,,
\end{equation}
\begin{equation} \label{trapon}
\nabla_0 \lambda = e^{4\sigma}(\dot{\lambda}+ \vec{\eta}\cdot \vec{\partial}\lambda + \lambda^2)\,, ~~~~~ \nabla_i \lambda = e^{3\sigma}\partial_i \lambda\,,
\end{equation}
\begin{equation} \label{dilaton}
\nabla_0 \sigma = e^{2\sigma}(\dot{\sigma}+\vec{\eta}\cdot \vec{\partial}\sigma - \lambda)\,, ~~~~~ \nabla_i \sigma = e^\sigma \partial_i \sigma\,,
\end{equation}
\begin{equation} \label{phonon}
{\nabla_0 \pi} = e^{2\sigma}(\dot{\chi} - A_0 -\mu e^{-2\sigma}+ \vec{\eta}\cdot \vec{\partial}\chi + \frac{1}{2}\eta^2)\,, ~~~~~ \nabla_i \pi = e^\sigma ( \partial_i \chi - A_i + \eta_i)\,,
\end{equation}
which can be used to construct the effective action.
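As a cross-check of the spatial component of \eqref{phonon}, set $\lambda = \sigma = 0$ in \eqref{cosetelem}. The factor $e^{{\bf i}\chi N}$ contributes $(\partial_i \chi - A_i) N$ to $-{\bf i} U^{-1} D_i U$, while conjugating $P_i$ through the boost factor shifts it by a piece proportional to $N$,
\begin{equation}
e^{-{\bf i} \vec{\eta}\cdot \vec{K}}\, P_i\, e^{{\bf i} \vec{\eta}\cdot \vec{K}} = P_i + \eta_i N\,,
\end{equation}
where we used $[P_i , K_j] = -{\bf i}\delta_{ij} N$ and the fact that the series terminates because $N$ is central. Together these reproduce $\nabla_i \pi = \partial_i \chi - A_i + \eta_i$; restoring the dilaton supplies the overall factor $e^{\sigma}$.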
There are 4 commutators that each imply a different constraint
\begin{equation} \label{piIHC}
[P_i , K_j] = - {\bf i} \delta_{ij} N \implies \nabla_i \pi = 0\,, ~~~~~~ [\bar{H} , D ] = -2{\bf i} (\bar{H} + \mu N) \implies \nabla_0 \pi = 0\,,
\end{equation}
\begin{equation} \label{confIHC}
[\bar{H}, C ] = -{\bf i} D \implies \nabla_0 \sigma = 0\,, ~~~~~~ [P_i , C] = -{\bf i} K_j \delta_{ij} \implies \nabla_i \eta^j = 0\,.
\end{equation}
Imposing them allows everything to be written in terms of a single Goldstone field $\chi$. Upon defining the gauge invariant derivatives:
\begin{equation} \label{gaugeinv}
D_t \chi \equiv \partial_t \chi - A_0\,, ~~~~ D_i \chi \equiv \partial_i \chi - A_i\,,
\end{equation}
the simplest pair can be solved as:
\begin{eqnarray}\label{superfluid}
\nabla_i \pi = 0 &\implies & \eta_i = -D_i \chi\,,\\
\nabla_0 \pi = 0 &\implies & \mu e^{-2 \sigma} = D_t \chi - \frac{1}{2}D_i \chi D^i \chi \,.
\end{eqnarray}
The other two involve the trapon $\lambda$:
\begin{eqnarray}\label{confworked1}
\nabla_i \eta^j = 0 &\implies & \lambda \delta_i^j= \partial_i \eta^j= - \partial_i D^j \chi\,, \\\label{confworked2}
\nabla_0 \sigma = 0 &\implies& \lambda = \dot{\sigma}+ \vec{\eta}\cdot \vec{\partial}\sigma\,,
\end{eqnarray}
which can be written together as:
\begin{equation} \label{confcont}
\dot{\sigma}+\vec{\eta}\cdot \vec{\partial}\sigma - \frac{1}{d}\vec{\partial}\cdot \vec{\eta} = -\frac{1}{2} \frac{\partial_0 X}{X} + \frac{1}{2} \frac{D_i \chi \partial^i X}{X} + \frac{1}{d} \partial_i D^i \chi = 0\,.
\end{equation}
This is simply the leading order equation of motion for $\chi$ as we will show below.
The leading order action comes from the vierbein \eqref{vierbein} which can be expressed with $\chi$ as
\begin{equation} \label{expvierbein}
\det E = e^{-(d+2)\sigma} \propto \left(D_t \chi-\frac{1}{2}D_i \chi D^i \chi\right)^{\frac{d}{2}+1}\,.
\end{equation}
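The determinant is immediate from the triangular structure of \eqref{vierbein}: since $E_i^0 = 0$,
\begin{equation}
\det E = E_0^0 \det (E_i^j) = e^{-2\sigma}\, e^{-d\sigma} = e^{-(d+2)\sigma}\,,
\end{equation}
and the proportionality to $\left(D_t \chi - \frac{1}{2} D_i \chi D^i \chi\right)^{\frac{d}{2}+1}$ follows from the constraint $\mu e^{-2\sigma} = D_t \chi - \frac{1}{2} D_i \chi D^i \chi$ of \eqref{superfluid}.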
Defining the variable $X$ as
\begin{equation} \label{Xdef}
X = D_t \chi -\frac{1}{2}D_i \chi D^i \chi\,,
\end{equation}
we can write the leading order effective action as
\begin{equation} \label{leading order}
S_0 = \int dt d^dx ~c_0\ \mathcal{O}_0=\int dt d^dx ~ c_0 X^{\frac{d}{2}+1}\,,
\end{equation}
where $c_0$ is a dimensionless constant. The leading order theory \eqref{leading order} is invariant under time reversal, which acts as:
\begin{equation} \label{T reversal}
T: ~~~ t\rightarrow -t\,, ~~~ \pi \rightarrow - \pi \,,~~~ A_0 \rightarrow - A_0\,.
\end{equation}
Higher derivative terms can be constructed from contractions of the following objects:
\begin{equation} \label{highdir}
\nabla_0 \eta^i\,, ~~~~ \nabla_0 \lambda\,, ~~~~ \nabla_i \lambda\,, ~~~~ \nabla_i \sigma\,.
\end{equation}
as well as contractions of the `higher covariants'
\begin{equation} \label{highercov}
\nabla^H_0 = -e^{2\sigma} \partial_0 + e^\sigma \eta^i \partial_i\,, ~~~~~ \nabla^H_i = e^\sigma \partial_i\,,
\end{equation}
acting on the tensors \eqref{highdir}. All of these objects can be expressed in terms of $\chi$ via the constraints \eqref{piIHC} and \eqref{confIHC}. Although we are ultimately interested in the large-$Q$ expansion, to make contact with the EFT written in \cite{son2006general} we emphasize that the power counting assigns $X$ to $\mathcal{O}(p^0)$, which implies that objects like $[(\partial_i \chi)(\partial_i \chi)]^k$, $\partial_t\chi$ and $A_0$ are also order one. Additional derivatives then increase the order. In what follows, the field strengths $E_i$ and $F_{ij}$ are defined as
\begin{equation} \label{EandBd}
E_i \equiv \partial_0 A_i - \partial_i A_0 ~~~~ F_{ij} \equiv \partial_i A_j - \partial_j A_i\,.
\end{equation}
At $\mathcal{O}(p^2)$ we have the following operators:
\begin{align} \label{gradsig}
\mathcal{O}_1 & \equiv \det E\ \nabla_i \sigma \nabla^i \sigma\ \propto\ \frac{X^{\frac{d}{2}+1}}{X^3} \partial_i X \partial^i X\,, \\
\label{quadratic}
\mathcal{O}_2 & \equiv \det E\ (\nabla_0 \eta_i - 2 \nabla_i \sigma)^2\ \propto\ \frac{X^{\frac{d}{2}+1}}{X^3}[E^2 + 2 E_i F_{ij} (D_j \chi) + F_{ij} F_{ik} (D_j \chi) (D_k \chi) ]\,, \\
\label{derivE}
\mathcal{O}_3 & \equiv \det E\ \nabla_i \sigma(\nabla_0 \eta^i - 2 \nabla^i \sigma)\ \propto\ \frac{X^{\frac{d}{2}+1}}{X^2} [\partial_i E^i + [\partial_i F_{ij} ](D_j \chi) - \frac{1}{2}F_{ij}F^{ij}]\,, \\
\label{lambda2}
\mathcal{O}_4 & \equiv \det E\ \nabla_0 \lambda\ \propto\ \frac{X^{\frac{d}{2}+1}}{X^2} (\partial_i D^i \chi)^2\,,
\end{align}
where the second expression of \eqref{derivE} is obtained via integration by parts, and \eqref{lambda2} is obtained by a straightforward application of the identity \eqref{confcont} together with integration by parts. These operators were found in \cite{son2006general} for $d=3$ by very different means. Additionally, in $d=2$ one can construct the following parity-violating operators at this order:
\begin{equation} \label{parityvol1}
\mathcal{O}_5 \equiv \det E\ \epsilon^{ij} (\nabla_0 \eta_i ) (\nabla_j \sigma) \propto \frac{X^{\frac{d}{2}+1}}{X^3} \epsilon^{ij} \left[E_i -F_{jk} (D_k \chi)\right](\partial_j X)\,,
\end{equation}
\begin{equation} \label{parityvol2}
\mathcal{O}_6 \equiv \det E\ \epsilon^{ij} \nabla^H_i (\nabla_0 \eta_j - 2 \nabla_j \sigma) \propto \frac{X^{\frac{d}{2}+1}}{X^2} \epsilon^{ij} \partial_i (E_j-F_{jk} (D_k \chi) )\,.
\end{equation}
Similarly, in $d=3$ the relevant invariant tensor is $\epsilon^{ijk}$, which means the parity-violating operators appear at higher order in the derivative expansion.
\section{Superfluid Hydrodynamics}
In this section we study superfluid hydrodynamics. As a warm-up, we first consider the fluid without the trap, so there is no intrinsic length scale associated with the system. The leading order superfluid Lagrangian is known to take the form \cite{son2006general}:
\begin{equation} \label{pressureL}
\mathcal{L}= P(X)
\end{equation}
where $P$ stands for the `pressure' as a function of the chemical potential $\mu$ at zero temperature, and $X$ is the same as defined in the previous section. Due to the absence of any internal scale, dimensional analysis dictates that:
\begin{equation} \label{pressurescale}
P =c_0 \mu^{\frac{d}{2}+1}\,,
\end{equation}
which we get from \eqref{leading order} by evaluating on the ground-state solution $\chi_{cl} = \mu t$. The number density is conjugate to the Goldstone field $\chi$ and at leading order is:
\begin{equation} \label{numberdensity}
n \equiv \frac{\partial \mathcal{L}}{\partial \dot{\chi} } = P'(X)= c_0 \left(\frac{d}{2}+1\right) X^{\frac{d}{2}}\,.
\end{equation}
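Evaluated on the ground state solution $\chi_{cl} = \mu t$, where $X = \mu$, this reproduces the standard zero-temperature thermodynamics:
\begin{equation}
n = P'(\mu) = c_0 \left(\frac{d}{2}+1\right)\mu^{\frac{d}{2}}\,, ~~~~~ \varepsilon = \mu n - P = c_0\, \frac{d}{2}\, \mu^{\frac{d}{2}+1}\,,
\end{equation}
with $\varepsilon$ the energy density, so that $\varepsilon = \frac{d}{d+2}\,\mu\, n$, as expected for a scale-invariant non-relativistic gas.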
One can then define the superfluid velocity in terms of the Goldstone as:
\begin{equation} \label{velocity}
v_i \equiv -D_i \pi = -D_i \chi=\eta_i
\end{equation}
where we have used the inverse Higgs constraint \eqref{superfluid}. This gives a simple interpretation of the equation of motion:
\begin{equation} \label{EoMchi}
\partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \chi)} = \partial_t n + \partial_i ( n v^i) = 0\,,
\end{equation}
which is the continuity equation of superfluid hydrodynamics. Using equations \eqref{superfluid}, we can write:
\begin{equation} \label{number}
\partial_\mu n =c_0 \frac{d}{2}\left(\frac{d}{2}+1\right) X^{\frac{d}{2}-1} (\partial_\mu X )= - d n (\partial_\mu \sigma) ~~~~~ \partial_i v^i = -\partial_i D^i \chi = \vec{\partial}\cdot \vec{\eta}
\end{equation}
The equation of motion \eqref{EoMchi} thus comes out to be as follows:
\begin{equation} \label{tada}
\partial_t n + \partial_i ( n v^i) = -d n \dot{\sigma} - d n (\vec{\eta}\cdot \vec{\partial}\sigma) + n \vec{\partial}\cdot \vec{\eta}= 0
\end{equation}
and becomes equivalent to the constraint \eqref{confcont}. Thus the superfluid EFT is consistent with the symmetry breaking pattern we discussed in the previous section.
\subsection{Superfluid in a Harmonic Trap}
Now we turn on the harmonic trap and study this superfluid EFT in the trapping potential by taking:
\begin{equation} \label{trapcouple}
A_0 = \frac{1}{2} \omega^2 r^2\,, ~~~~~ \vec{A}=0\,.
\end{equation}
In the presence of a harmonic potential, the ground state density is no longer uniform. The number density is given by the conjugacy relation \eqref{numberdensity} and to leading order is:
\begin{equation} \label{trapdensity}
n(x) = c_0 \left(\frac{d}{2}+1\right) (\mu - \frac{1}{2} \omega^2 r^2)^{\frac{d}{2}}\,,
\end{equation}
which vanishes at the ``cloud radius" $R = \sqrt{\frac{2\mu}{\omega^2}}$. This defines an IR cutoff for the validity of our EFT in the trap. Semi-classically, we can fix $\mu$ in terms of the number charge $Q$ by imposing\footnote{This is equivalent to fixing $Q$ by differentiating the free energy given by the action}:
\begin{equation} \label{numberfix}
Q=\vev{Q|\hat{N}|Q} = \int d^dx \vev{Q|n(x)|Q} = \frac{c_0 (2 \pi )^{d/2} \Gamma \left(\frac{d}{2}+2\right) \left(\frac{\mu }{\omega }\right)^d}{\Gamma (d+1)} \implies \frac{\mu}{\omega}\equiv \xi Q^{\frac{1}{d}}
\end{equation}
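Inverting \eqref{numberfix} expresses $\xi$ in terms of the leading Wilson coefficient:
\begin{equation}
\xi = \left[\frac{\Gamma(d+1)}{c_0\, (2\pi)^{d/2}\, \Gamma\left(\frac{d}{2}+2\right)}\right]^{\frac{1}{d}}\,,
\end{equation}
which in $d=2$ reduces to $\xi = (2\pi c_0)^{-1/2}$. Since $c_0$ is not fixed by symmetry, $\xi$ is a theory-dependent number.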
The naive effective Lagrangian up to next-to-leading order is then:
\begin{equation} \label{secondorderLeff}
\mathcal{L}_{eff}= c_0 X^{\frac{d}{2}+1} + c_1 \frac{X^{\frac{d}{2}+1}}{X^3} \partial_i X \partial^i X + c_2 \frac{X^{\frac{d}{2}+1}}{X^3} (\partial_i A_0)^2 + c_3 \frac{X^{\frac{d}{2}+1}}{X^2} \partial_i \partial^i A_0 + c_4 \frac{X^{\frac{d}{2}+1}}{X^2} (\partial_i \partial^i \chi)^2
\end{equation}
For $d=2$ we have an additional parity violating operator at this order:
\begin{equation} \label{specialop}
\mathcal{L}_{eff} \supset c_5 \epsilon^{ij}\frac{ (\partial_i A_0) (\partial_j X)}{X}
\end{equation}
However, this is not the full set of constraints. It can be shown that imposing `general coordinate invariance' will reduce the number of independent Wilson coefficients even further\cite{son2006general}. In particular there are the additional constraints:
\begin{equation} \label{gencorcon}
c_2 = 0 ~~~~ c_3 = - d^2 c_4
\end{equation}
Obtaining these from the coset construction would require additionally gauging the space-time symmetries \cite{brauner2014general}. This requirement is expected, since the number operator is part of the space-time symmetry algebra and the number symmetry has already been gauged. We leave this refinement for future work. For reasons that will become clear in the next section, it is not necessary to work beyond this order in the derivative expansion.
\section{Operator Dimensions}
\subsection{Ground State Energy \& Scaling of Operator Dimension}\label{sec:dim}
The ground state energy is readily computed from a Euclidean path integral: in the limit of infinite Euclidean time separation, the path integral projects onto the ground state, from which the ground state energy can be read off. A pedagogical example of this technique, in the context of a fast-spinning rigid rotor, can be found in \cite{monin2017semiclassics}. By the state-operator correspondence, the ground state energy translates into the dimension of the corresponding operator. Thus, equipped with the effective Lagrangian \eqref{secondorderLeff}, the operator dimensions can be calculated via the path integral \eqref{pathint}:
\begin{equation} \label{pathintdim}
\lim_{T\rightarrow \infty} \vev{Q|e^{-H_\omega T}|Q} \sim e^{-S_{eff}[\chi_{cl}]-\mu T \int d^dx ~n(x)} \sim e^{-\Delta_Q \omega T}\,,
\end{equation}
where to leading order we have
\begin{equation} \label{actioneval}
-S_{eff}[\chi_{cl}] = c_0 \Omega_d T \int_0^{R} dr~ r^{d-1} \left(\mu -\frac{1}{2}\omega ^2 r^2 \right)^{\frac{d}{2}+1} = c_0\frac{ (2\pi) ^{d/2} \Gamma \left(\frac{d}{2}+2\right)}{\Gamma (d+2)} \left(\frac{\mu }{\omega }\right)^{d+1} \omega T\,.
\end{equation}
Here $\Omega_d = 2\pi^{d/2}/\Gamma(d/2)$ is the volume of the unit $(d-1)$-sphere. Combining the results of \eqref{actioneval} and \eqref{numberfix} then gives the leading order operator dimension:
\begin{align} \label{leadingopdim}
\Delta_Q = \frac{\mu}{\omega}Q-\left(-\frac{S_{eff}}{\omega T}\right)=\frac{d }{d+1}\xi Q^{1+\frac{1}{d}}\,.
\end{align}
This predicts $\Delta_Q \sim Q^{\frac{3}{2}}$ in $d=2$ and $\Delta_Q \sim Q^{\frac{4}{3}}$ in $d=3$, as in the relativistic case. That these leading order results are finite implies we can trust the EFT prediction. In general, however, the ground state energy in the trap is an infrared (IR) sensitive quantity. This becomes apparent at higher orders in the derivative expansion.
Consider, for example, the case $d=2$. The simplest operator at next-to-leading order is \eqref{gradsig}. To analyze its contribution, define the distance from the cloud edge, $s$, via $r=R-s$. Its contribution to the energy, and hence to the operator dimension via \eqref{actioneval}, goes like:
\begin{equation} \label{divd2}
\int \text{d}^3x ~ \frac{\partial_i X \partial^i X}{X} \sim \int_0^R \text{d}r ~ r \frac{\omega^4 r^2}{\mu - \frac{1}{2} \omega^2 r^2} \sim \mu \int \text{d}s ~ \frac{1}{s}\,,
\end{equation}
which is log divergent at small $s$, i.e. close to the edge. For $d=3$, as noticed in \cite{son2006general}, a divergence first appears at next-to-next-to-leading order, associated with the operator:
\begin{equation} \label{badopd3}
\det E (\nabla_i \sigma \nabla^i \sigma)^2 \propto \frac{(\partial_i X \partial^i X)^2}{X^{\frac{7}{2}}}\,.
\end{equation}
This leads to a power-law divergence, implying an even greater sensitivity to IR physics compared to $d=2$. Ultimately these divergences originate from the breakdown of our EFT as the superfluid gets less dense. This occurs in a small region before the edge of the cloud at radius $R^* \equiv R - \delta$ where $\delta$ is roughly the width of this region. Following \cite{son2006general}, we can estimate the size of this region as follows. One interpretation of \eqref{trapdensity} is that the chemical potential is now effectively space dependent. At the cutoff radius $R^*$, there is then an ``effective chemical potential"
\begin{equation} \label{effectivepot}
\mu(r) \equiv \mu- \frac{1}{2} \omega^2 r^2\,, ~~~~~ \mu_{eff} \equiv \mu( r=R^*) = \frac{1}{2} \delta(2 R - \delta) \omega^2 \approx R \omega^2 \delta\,.
\end{equation}
There is a length scale set by $\mu_{eff}$ which controls the EFT expansion parameter in this region. Once that length is comparable to the distance $\delta$ itself we cannot claim to control the calculation semi-classically. Using \eqref{effectivepot} this gives the estimate scaling:
\begin{equation} \label{cloudedgescale}
\delta \sim \sqrt{\frac{1}{\mu_{eff}}} \implies \delta \sim \frac{1}{(\omega^2 \mu)^{\frac{1}{6}}}
\end{equation}
We can estimate the contribution of this region to the energy by cutting off the divergent integrals at $R^*$. For $d=2$ the effective action contains a term:
\begin{equation} \label{d2NLOterm}
-S_{eff} \supset c_1 (2\pi) T \int_0^{R^*} dr ~ r \frac{\omega^4 r^2}{\mu - \frac{1}{2}\omega^2 r^2} = 4\pi T \mu c_1 \left( \frac{13}{8}- \log\left[ \frac{2\mu}{\mu_{eff}}\right]\right)+ \cdots
\end{equation}
where the $\cdots$ terms vanish as $\delta \rightarrow 0$.
Substituting the relations \eqref{numberfix} and \eqref{cloudedgescale} gives:
\begin{equation} \label{d2NLOterm2}
\Delta_Q \supset -4\pi \xi Q^{\frac{1}{2}} c_1 \left( \frac{13}{8} -\frac{1}{2}\log 2- \frac{1}{3}\log Q - \frac{2}{3}\log \xi\right)
\end{equation}
Changing the cutoff relation \eqref{cloudedgescale} by an overall factor can then change the $\mathcal{O}(Q^{\frac{1}{2}} )$ contribution, but not the coefficient of the logarithm, which is universal. This translates to an uncertainty of order $\mathcal{O}(Q^{\frac{1}{2}})$ in the operator dimension in $d=2$. A similar analysis \cite{son2006general} for $d=3$, using \eqref{badopd3}, translates to an uncertainty of order $\mathcal{O}(Q^{\frac{5}{9}})$.
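To make the $\mathcal{O}(Q^{\frac{5}{9}})$ estimate explicit, one can cut off the divergent integral of \eqref{badopd3} at distance $\delta$ from the edge. Dropping order-one factors, with $r = R - s$ one has $X \approx R\omega^2 s$ and $\partial_i X \partial^i X \approx \omega^4 R^2$, so
\begin{equation}
\int d^3x\, \frac{(\partial_i X \partial^i X)^2}{X^{\frac{7}{2}}} \sim \omega^8 R^6 \left(R \omega^2\right)^{-\frac{7}{2}} \int_\delta ds\, s^{-\frac{7}{2}} \sim \omega \left(\frac{R}{\delta}\right)^{\frac{5}{2}}\,.
\end{equation}
Using $R = \sqrt{2\mu}/\omega$ together with \eqref{cloudedgescale}, $R/\delta \sim (\mu/\omega)^{\frac{2}{3}} = \xi^{\frac{2}{3}} Q^{\frac{2}{9}}$, so the cutoff-dependent contribution to $\Delta_Q$ indeed scales as $(R/\delta)^{\frac{5}{2}} \sim Q^{\frac{5}{9}}$.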
Unlike in $d=2$, the operator \eqref{gradsig} gives a finite correction to the leading-order operator dimension in $d=3$. This can be found by computing its contribution to $S_{eff}$ [see Eq.~\eqref{secondorderLeff}]:
\begin{align}
-S_{eff}\supset c_1\int \text{d}\tau^E\ \int_0^{R}\ \text{d}r\ 4\pi r^2\ \left(\frac{\omega^4r^2}{\sqrt{\mu-\tfrac{1}{2}\omega^2r^2}}\right) =c_1 (3\sqrt{2}\pi^2) \left(\frac{\mu}{\omega }\right)^2\omega T
\end{align}
A similar contribution\footnote{A contribution should have come from \eqref{quadratic} as well but, as mentioned earlier, $c_2=0$ \cite{son2006general}.} comes from \eqref{derivE}:
\begin{align}
-S_{eff}\supset c_3\int \text{d}\tau^E\ \int_0^{R}\ \text{d}r\ 4\pi r^2(3\omega^2) \left(\mu-\tfrac{1}{2}\omega^2r^2\right)^{\tfrac{1}{2}} =c_3 \left(\frac{3\pi^2}{\sqrt{2}}\right) \left(\frac{\mu}{\omega }\right)^2\omega T
\end{align}
To summarize, using \eqref{leadingopdim}, we have
\begin{align}
\label{eq:m1}
\Delta_Q&=\frac{3}{4}\left(\xi Q^{4/3}\right) - \left(c_1+\frac{c_3}{2}\right) (3\sqrt{2}\pi^2) \xi^2 Q^{2/3}+\mathcal{O}(Q^{\frac{5}{9}}) \quad \text{for}\ d=3\,,\\
\label{eq:m2}
\Delta_Q&=\frac{2}{3} \left(\xi Q^{3/2}\right) + c_1 \frac{4\pi}{3}\xi \left( Q^{\frac{1}{2}} \log Q \right)+\mathcal{O}\left(Q^{\frac{1}{2}} \right) \quad \text{for}\ d=2\,.
\end{align}
Equations~\eqref{leadingopdim}, \eqref{eq:m1} and \eqref{eq:m2} constitute the main findings of this subsection.
\subsection{Excited State Spectrum}\label{sec:exc}
We can also analyze the low energy excitations above the ground state. These correspond to low lying operators in the spectrum at large charge. To compute their dimension, we expand the leading action \eqref{leading order} to quadratic order in fluctuations $\pi$ about the semi-classical saddle, $\chi = \mu t + \pi$. The spectrum of $\pi$ can then be found by linearizing the equation of motion \eqref{EoMchi}:
\begin{equation} \label{linearEOM}
\ddot{\pi} - \frac{2}{d}\left(\mu - \frac{1}{2} \omega^2 r^2\right)\partial^2 \pi + \omega^2 \vec{r}\cdot \vec{\partial}\pi = 0
\end{equation}
Expanding $\pi(t,x) = e^{{\bf i} \epsilon t} f(r) Y_\ell$ where $Y_\ell$ is a spherical harmonic, one can show \eqref{linearEOM} reduces to a hypergeometric equation. Details can be found in Appendix A. The dispersion relation is given by:
\begin{equation} \label{dispersion}
\epsilon(n,\ell) = \pm \omega \left(\frac{4}{d}n^2 + 4n + \frac{4}{d}\ell n - \frac{4}{d}n + \ell \right)^{\frac{1}{2}}
\end{equation}
where $\ell$ is the angular momentum and $n$ is a non-negative integer.
In the NRCFT state-operator correspondence, there are two different operators which generate descendants. In the Galilean frame, these are the operators $\vec{P}$ and $H$. While $\vec{P}$ raises the dimension by 1 and carries angular momentum, acting by $H$ raises the dimension by 2 and carries no angular momentum. In the oscillator frame, this corresponds to:
\begin{equation} \label{oscillatorraise}
\vec{P}_\pm = \frac{1}{\sqrt{2\omega}}\vec{P}\pm {\bf i} \sqrt{\frac{\omega}{2}} \vec{K} ~~~~~ L_\pm = \frac{1}{2}(\frac{1}{\omega}H - \omega C \pm {\bf i} D )
\end{equation}
which then satisfy
\begin{equation} \label{raiseit}
[H_\omega , \vec{P}_\pm ] = \pm \omega \vec{P}_\pm ~~~~~ [H_\omega , L_\pm ] = \pm 2 \omega L_\pm
\end{equation}
One can check by equation \eqref{dispersion} that $\epsilon(n=0,\ell=1) = \pm \omega$ and $\epsilon(n=1,\ell=0)=\pm 2\omega$. This allows us to identify these Goldstone modes with the descendant operators in \eqref{oscillatorraise} as $\pi_{(n=0,\ell=1)} \sim P_\pm$ and $\pi_{(n=1,\ell=0)} \sim L_\pm$. The other modes generate distinct primaries and descendants, including higher spin. We remark that in a strict sense, the above is the leading order result for the difference in dimensions between low-lying operators in this sector and the dimension of the ground state found in the previous section. It is also subject to corrections suppressed in $1/Q$ from subleading operators and loop effects.
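For instance, writing the oscillator-frame Hamiltonian as $H_\omega = H + \omega^2 C$ (the convention underlying the state-operator map), the first commutator in \eqref{raiseit} follows from the algebra relations $[H, K_i] = -{\bf i} P_i$ and $[P_i , C] = -{\bf i} K_i$:
\begin{equation}
[H_\omega , \vec{P}_\pm] = \pm {\bf i}\sqrt{\frac{\omega}{2}}\, [H, \vec{K}] + \frac{\omega^2}{\sqrt{2\omega}}\, [C, \vec{P}] = \pm \sqrt{\frac{\omega}{2}}\, \vec{P} + {\bf i}\, \omega \sqrt{\frac{\omega}{2}}\, \vec{K} = \pm \omega\, \vec{P}_\pm\,.
\end{equation}
The commutator for $L_\pm$ works out analogously using $[\bar{H}, C] = -{\bf i} D$ and $[H, D] = 2{\bf i} H$.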
\section{Correlation Functions}
In a relativistic CFT, the form of the two- and three-point correlators is entirely fixed by symmetry, while the four-point function depends on two conformally invariant cross ratios of the coordinates. The Schr\"odinger symmetry is less constraining, as there exists an invariant cross ratio even for a three-point function. This implies that only the two-point functions of (number) charged operators are completely determined by symmetry.
\subsection{Two Point Function}
Following \cite{monin2017semiclassics}, we start by analyzing the two-point function. In the path-integral approach, when the in and out states are well separated in time, we have
\begin{align}
\langle \Phi_{Q},\tau_2 | e^{-H_{\omega}(\tau^{(E)}_2-\tau^{(E)}_1)}|\Phi_{Q},\tau_1\rangle = e^{-\Delta_{\mathcal{O}}(\tau^{(E)}_2-\tau^{(E)}_1)}
\end{align}
where $\tau^{(E)}$ is the Euclideanized oscillator time, obtained from $\tau$ by the Wick rotation $\tau^{(E)}={\bf i} \tau$. This is evidently consistent with \eqref{eq:twopoint} upon doing the Wick rotation and taking $(\tau^{(E)}_2-\tau^{(E)}_1)\rightarrow\infty$. One subtle remark is in order: the Hamiltonian $H_{\omega}$ generates time ($\tau$) translations in the oscillator frame, so the states prepared by path integration correspond to operators in the oscillator frame.
\subsection{Three Point Function}\label{sec:3pt}
We consider the matrix element that defines the simplest charged\footnote{The additional charge of $\bra{\Phi}$ is required for the correlator to be overall neutral and therefore non-vanishing.} three-point function
\begin{equation} \label{3ptelem}
\vev{\Phi_{Q+q}|\phi_q(y)|\Phi_Q}
\end{equation}
where $\phi_q$ is a light charged scalar primary with charge $q$ and $\mathcal{O}(1)$ dimension $\Delta_\phi$, while $\Phi_Q$ and $\Phi_{Q+q}$ have dimensions $\Delta_Q$ and $\Delta_{Q+q}$. By assumption, $\phi_q$ transforms in a linear representation $R$ of the unbroken rotation group. To enable calculation in our EFT, we can extend this to a linear representation of the full Schr\"odinger group using the Goldstone fields. In what follows, we take $\phi_q$ as the ``dressed" operator \cite{monin2017semiclassics}:
\begin{equation} \label{dressing}
\phi_q(y) = R\left[e^{{\bf i} \vec{K}\cdot \vec{\eta}} e^{-{\bf i} \lambda C} e^{-{\bf i} \sigma D} e^{{\bf i} \chi N} \right] \hat{\phi}_q
\end{equation}
where, because $\phi_q$ is a scalar primary, $\hat{\phi}_q$ is acted on trivially by $\vec{K}$ and $C$. This, combined with \eqref{superfluid}, gives
\begin{equation} \label{opexpand}
\phi_q = c_q X^{\frac{\Delta_\phi}{2}} e^{{\bf i} \chi q}
\end{equation}
where $c_q$ is a constant which depends on UV physics. Upon evaluating \eqref{3ptelem} semi-classically about the saddle found before, the leading order result for the correlator is:
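To see how this arises, note that the dilaton and phase factors act multiplicatively on a scalar primary of dimension $\Delta_\phi$ and charge $q$,
\begin{equation}
R\left[e^{-{\bf i}\sigma D} e^{{\bf i}\chi N}\right] \hat{\phi}_q = e^{-\Delta_\phi \sigma}\, e^{{\bf i} q \chi}\, \hat{\phi}_q\,,
\end{equation}
up to convention-dependent signs in the exponents. The inverse Higgs relation $\mu e^{-2\sigma} = X$ of \eqref{superfluid} then gives $e^{-\Delta_\phi \sigma} = (X/\mu)^{\frac{\Delta_\phi}{2}} \propto X^{\frac{\Delta_\phi}{2}}$, with the $\mu$-dependence absorbed into the constant $c_q$.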
\begin{align} \label{semiclassic3pt}
\nonumber \vev{\Phi_{Q+q}(\tau_2)|\phi_q (\tau,\vec{y})|\Phi_Q(\tau_1)} &= c_q \left(\mu- \frac{1}{2}m \omega^2 y^2\right)^{\frac{\Delta_\phi}{2}} e^{{\bf i} \mu q (\tau-\tau_{2})}e^{-{\bf i}\Delta_{Q}(\tau_2-\tau_1)}\\
&=c_q \mu^{\frac{\Delta_\phi}{2}}\left(1-\frac{y^2}{R^2}\right)^{\frac{\Delta_\phi}{2}} e^{\mu q \tau^{(E)}} e^{\omega\left(-\Delta_{Q+q}\tau^{(E)}_2+\Delta_Q\tau^{(E)}_{1}\right)}
\end{align}
where we have used the following identity, which can be derived using the leading order operator dimension \eqref{leadingopdim} and \eqref{numberfix}:
\begin{equation} \label{opdiff}
\frac{\Delta_{Q+q} - \Delta_Q}{q} = \xi Q^{\frac{1}{d}} + \mathcal{O}\left(\frac{1}{Q}\right) \approx \frac{\partial \Delta_Q}{\partial Q} = \frac{\mu}{\omega}
\end{equation}
as expected, since $\mu$ is the chemical potential and $\omega \Delta_Q$ is the energy. We note that the operator should be inserted away from the edge of the cloud, $|y-R| \gg \delta$, where $\delta$ is the cutoff imposed to keep at bay the divergences coming from the $y\rightarrow R$ limit.
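The approximation in \eqref{opdiff} is simply the statement that for $q \sim \mathcal{O}(1)$ the finite difference equals the derivative of the leading result \eqref{leadingopdim}:
\begin{equation}
\frac{\partial \Delta_Q}{\partial Q} = \frac{d}{d+1}\left(1 + \frac{1}{d}\right) \xi\, Q^{\frac{1}{d}} = \xi\, Q^{\frac{1}{d}} = \frac{\mu}{\omega}\,,
\end{equation}
where the last equality is the definition of $\xi$ in \eqref{numberfix}.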
Now we use (the details can be found in appendix~[\ref{app:2pt}])
\begin{align}
\nonumber\lim_{\tau^{(E)}_2\rightarrow \infty}\frac{1}{(1+\omega^2t_2^2)^{\Delta_{Q+q}/2}}\exp\left(-\omega\Delta_{Q+q}\tau^{(E)}_2\right) &= 2^{-\Delta_{Q+q}}\omega^{\Delta_{Q+q}/2}\,,\\
\nonumber\lim_{\tau^{(E)}_1\rightarrow -\infty}\frac{1}{(1+\omega^2t_1^2)^{\Delta_{Q}/2}}\exp\left(\omega\Delta_{Q}\tau^{(E)}_1\right) &= 2^{-\Delta_{Q}}\omega^{\Delta_{Q}/2}\,,
\end{align}
to write down the correlator in terms of operators in the Galilean frame (we emphasize that the path integral in the oscillator frame prepares a state corresponding to an operator in the oscillator frame):
\begin{align}\label{eq:3ptlc}
\vev{\Phi_{Q+q}({\bf i}/\omega)|\phi_q (\tau,\vec{y})|\Phi_Q(-{\bf i}/\omega)}= c_q \mu^{\frac{\Delta_\phi}{2}}\left(1-\frac{y^2}{R^2}\right)^{\frac{\Delta_\phi}{2}} e^{\mu q \tau^{(E)}} 2^{-\Delta_{Q}-\Delta_{Q+q}}\omega^{(\Delta_{Q}+\Delta_{Q+q})/2}\,.
\end{align}
This can be matched onto the three point function, which is constrained by Schr\"odinger algebra:
\begin{align}\label{eq:3ptsym}
\vev{\Phi_{Q+q}|\phi_q (\tau,\vec{y})|\Phi_Q} &=F(v) \exp\left(\frac{q}{2} \omega y^2\right) (2)^{\Delta_{\phi}} \left(\frac{{\bf i}\omega}{2}\right)^{\frac{\Delta}{2}}e^{- i \omega \left(\Delta_{Q}-\Delta_{Q+q}\right)\tau}.
\end{align}
The appendix~[\ref{app:3pt}] has the necessary details. Now, upon comparing \eqref{eq:3ptsym} and \eqref{eq:3ptlc}, we deduce the universal behavior of $F(v)$ in the large charge sector:
\begin{equation} \label{Ffunction}
F(v= {\bf i} \omega y^2) \propto Q^{\frac{\Delta_\phi}{2d}} \left(1- \frac{\omega y^2 }{2\xi }Q^{-1/d}\right)^{\frac{\Delta_\phi}{2}} e^{-\frac{1}{2} q \omega y^2}
\end{equation}
which can be rewritten as follows, using \eqref{leading order}:
\begin{align} \label{Ffunction2}
F(v= {\bf i} \omega y^2) \propto \Delta_Q^{\frac{\Delta_\phi}{2(d+1)}}\left(1- \frac{\omega y^2}{2\xi}(\tfrac{d+1}{d\xi}\Delta_Q)^{-\tfrac{1}{d+1}} \right)^{\frac{\Delta_\phi}{2}} e^{-\frac{1}{2} q \omega y^2}
\end{align}
Equations \eqref{Ffunction} and \eqref{Ffunction2} are the main results of this subsection: they exhibit the universal scaling behavior of the structure function $F$ in the large charge sector.
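The passage from \eqref{Ffunction} to \eqref{Ffunction2} amounts to inverting the leading-order charge--dimension relation; explicitly, comparing the two expressions uses
\begin{equation*}
Q^{-\frac{1}{d}} \;=\; \left(\frac{d+1}{d\,\xi}\,\Delta_Q\right)^{-\frac{1}{d+1}}
\quad\Longleftrightarrow\quad
\Delta_Q \;=\; \frac{d\,\xi}{d+1}\,Q^{\frac{d+1}{d}},
\end{equation*}
so that, in particular, $Q^{\frac{\Delta_\phi}{2d}}\propto \Delta_Q^{\frac{\Delta_\phi}{2(d+1)}}$.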
\section{Conclusions and Future Directions}
We have studied the large charge ($Q$) sector of theories invariant under the Schr\"odinger group. We have employed the coset construction to write down an effective field theory (EFT) describing the large $Q$ sector in arbitrary dimension $d\geq2$, assuming superfluidity and rotational invariance. The effective Lagrangian is given by
\begin{equation}
\nonumber \mathcal{L}_{eff}= c_0 X^{\frac{d}{2}+1} + c_1 \frac{X^{\frac{d}{2}+1}}{X^3} \partial_i X \partial^i X + c_2 \frac{X^{\frac{d}{2}+1}}{X^3} (\partial_i A_0)^2 + c_3 \frac{X^{\frac{d}{2}+1}}{X^2} \partial_i \partial^i A_0 + c_4 \frac{X^{\frac{d}{2}+1}}{X^2} (\partial_i \partial^i \chi)^2
\end{equation}
where $X=\partial_t \chi-A_0 -\frac{1}{2}\partial_i \chi \partial^i \chi $ and $\chi$ is the Goldstone excitation of the superfluid ground state. We emphasize that general coordinate invariance, as discussed in \cite{son2006general}, will put further constraints on the Wilson coefficients; we leave this for future work. The EFT is then studied perturbatively as an expansion in $1/Q$. This is to be contrasted with the EFT written down in \cite{son2006general}: while the EFT of \cite{son2006general} is controlled by a small momentum parameter, ours is controlled by the $1/Q$ expansion, which enables us to probe and derive universal results and scaling behaviors in the large $Q$ sector. In particular, when $Q$ is very large, we find the scaling behavior of the operator dimension with charge, consistent with that found very recently in \cite{Favrod:2018xov}. We also find that in the large charge sector the structure function of the three-point correlator has a universal form. Last but not least, we derived the dispersion relation for the low energy excitations above this large-$Q$ state and identified the two different kinds of descendants as two different modes of excitation. A summary of the results can be found in the introduction.
The theory of conformal, and even superconformal, anyons has been studied before in great detail \cite{doroud2018conformal, doroud2016superconformal,Nishida:2007pj,Jackiw:1991au}. In these systems there exists a simple $n$-particle operator $\mathcal{O}=(\Phi^\dag)^n$ whose dimension is given as
\begin{equation} \label{dimshortanyon}
\Delta_\mathcal{O} = n + n(n-1)\theta
\end{equation}
where $\theta$ is the statistics parameter, which arises from the Chern-Simons term of level $k$ as $\theta=\frac{1}{2k}$ for bosonic theories. For $k$ large relative to $n$, close to the bosonic limit, this is known to be the ground state in the trap. It is known as the ``linear solution'' in the literature, due to the linear dependence on $\theta$. For the superconformal theories it is a BPS operator and the dimension \eqref{dimshortanyon} is exact. A state corresponding to such an operator is not a superfluid, and our theory cannot capture the physics of the system in that regime. However, it is known that there is a level crossing for smaller $k$, where the ground state corresponds to an operator whose dimension is not protected by the BPS bound. For those operators the classical dimension scales as $n^{\frac{3}{2}}$, in agreement with our results. We are then led to believe that the effective field theory we have constructed may apply to anyon NRCFTs in that regime.
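Indeed, specializing the large-charge scaling $\Delta_Q \sim Q^{\frac{d+1}{d}}$ found above to the anyon case $d=2$ reproduces this exponent:
\begin{equation*}
\Delta_Q \;\sim\; Q^{\frac{d+1}{d}}\Big|_{d=2} \;=\; Q^{\frac{3}{2}}.
\end{equation*}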
Another family of NRCFTs can be defined by the holographic constructions of McGreevy and Balasubramanian \cite{Balasubramanian:2008dm} and of Son \cite{son2008toward}. It would be interesting to study these on the gravitational side in the large charge limit, as there might exist a regime where both the EFT and gravity descriptions are valid. The analog of this for the relativistic case was carried out recently \cite{loukas2018ads}.
One can envision extending our results in several ways. One possible extension would be to study operators with large spin as well as large charge. If the superfluid EFT remains valid, then for sufficiently large spin one naively expects such operators to correspond to vortex configurations in the trap. This was studied in $CFT_3$, where multiple distinct scaling regimes were shown to exist \cite{Cuomo:2017vzg}. Moreover, one can generalize these results to NRCFTs with a larger internal global symmetry group, or study systems where the symmetry breaking pattern is different. Potentially interesting examples include ``chiral'' superfluids \cite{hoyos2014effective}, where the rotational symmetry is additionally broken by the superfluid order parameter, or the vortex lattice \cite{moroz2018effective}, where translation symmetry is spontaneously broken.
\section*{Acknowledgements}
The authors acknowledge useful comments from John McGreevy. This work was in part supported by the US Department of Energy (DOE) under cooperative research agreement DE-SC0009919.
\section{Introduction}\label{introduction}
Fano manifolds, i.e., smooth complex projective varieties with ample anticanonical class,
play an important role in the classification of complex projective varieties.
In \cite{mori79}, Mori showed that Fano manifolds are uniruled, i.e.,
they contain rational curves through every point.
Then he studied minimal dominating families of rational curves on Fano manifolds, and used them to
characterize projective spaces as the only smooth projective varieties having ample tangent bundle. Since then,
minimal dominating families of rational curves have been extensively investigated and proved to be a useful tool in the
study of uniruled varieties.
Let $X$ be a smooth complex projective uniruled variety, and $x\in X$ a general point.
A \emph{minimal family of rational curves through $x$} is a \emph{smooth} and \emph{proper}
irreducible component $H_x$
of the scheme $\rat(X,x)$ parametrizing rational curves on $X$ passing through $x$.
There always exists such a family.
For instance, one can take $H_x$ to be an irreducible component of $\rat(X,x)$ parametrizing
rational curves through $x$ having minimal degree with respect to some fixed
ample line bundle on $X$.
While we view $X$ as an abstract variety,
$H_x$ comes with a natural polarization $L_x$, which can be defined as follows
(see Section~\ref{section:rat_curves} for details).
By \cite{kebekus}, there is a \emph{finite morphism} $\tau_x: \ H_x \to \p(T_xX^{^{\vee}})$
that sends a point parametrizing a curve smooth at $x$ to its tangent direction at $x$.
We set $L_x=\tau_x^*\o(1)$.
We call the pair $(H_x, L_x)$ a \emph{polarized minimal family of rational curves through $x$}.
The image ${\mathcal {C}}_x$ of $\tau_x$ is called the \emph{variety of minimal rational tangents} at $x$.
The natural projective embedding ${\mathcal {C}}_x\subset \p(T_xX^{^{\vee}})$ has been
successfully explored to investigate the geometry of Fano manifolds.
See \cite{hwang} and \cite{hwang_ICM} for an overview of applications of the
variety of minimal rational tangents.
In this paper we view $(H_x, L_x)$ as a \emph{smooth polarized variety}.
We start by giving a formula for all the Chern characters of the variety $H_x$ in terms of the
Chern characters of $X$ and $c_1(L_x)$.
This illustrates the general principle that the pair $(H_x, L_x)$ encodes many
properties of the ambient variety $X$.
In what follows $\pi_x:U_x\to H_x$ and $\ev_x:U_x\to X$
denote the universal family morphisms introduced in section~\ref{section:rat_curves}, and the $B_j$'s
denote the Bernoulli numbers, defined by the formula
$\frac{x}{e^x-1}=\sum_{j=0}^{\infty} \frac{B_j}{j!} x^j$.
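For reference, the values entering the low-order cases below are (a standard computation from this generating function):
\begin{equation*}
B_0=1,\qquad B_1=-\tfrac{1}{2},\qquad B_2=\tfrac{1}{6},\qquad B_3=0,\qquad B_4=-\tfrac{1}{30}.
\end{equation*}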
\begin{prop} \label{chern_characters}
Let $X$ be a smooth complex projective uniruled variety.
Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$.
For any $k\geq1$, the $k$-th Chern character of $H_x$ is given by the formula:
\begin{equation}\label{ch_k for H_x}
ch_k(H_x)=\sum_{j=0}^{k}\frac{(-1)^jB_j}{j!}c_1(L_x)^j{\pi_x}_*\ev_x^*\big(\text {ch}_{k+1-j}(X)\big)-\frac{1}{k!}c_1(L_x)^k.
\end{equation}
When $k$ is $1$ or $2$ this becomes:
\begin{equation}\label{c_1 of H_x}
c_1(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)+\frac{d}{2}c_1(L_x), \ \ and
\end{equation}
\begin{equation}\label{ch_2 of H_x}
\text {ch}_2(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_3(X)\big)+\frac{1}{2}{\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)\cdot c_1(L_x)+\frac{d-4}{12}c_1(L_x)^2.
\end{equation}
\end{prop}
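As a sanity check of \eqref{c_1 of H_x} (a sketch), take $X=\p^n$ and $H_x$ the family of lines through $x$, so that $(H_x,L_x)\cong\big(\p^{n-1},\o(1)\big)$ and $d=n-1$. Since $\text {ch}_2(\p^n)=\frac{n+1}{2}H^2$ for the hyperplane class $H$, and ${\pi_x}_*\ev_x^*(H^2)=c_1(L_x)$ (apply Lemma~\ref{Tk_preserves_positivity}(1) with $a=1$), formula \eqref{c_1 of H_x} gives
\begin{equation*}
c_1(H_x) \;=\; \tfrac{n+1}{2}\,c_1(L_x)+\tfrac{n-1}{2}\,c_1(L_x) \;=\; n\,c_1(L_x),
\end{equation*}
recovering $c_1(\p^{n-1})=n\,c_1(L_x)$, as expected.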
Formulas for the first Chern class $c_1(H_x)$ were previously obtained in \cite[Proposition 4.2]{druel_chern_classes} and
\cite[Theorem 1.1]{dJ-S:Chern_classes}. However, $c_1(L_x)$ appears disguised in those formulas.
\medskip
Next we turn our attention to Fano manifolds $X$ whose Chern characters satisfy some positivity
conditions.
In order to state our main theorem, we introduce some notation.
See section~\ref{section:rat_curves} for details.
Given a positive integer $k$, we denote by $N_k(X)_{\r}$ the
real vector space of $k$-cycles on $X$ modulo numerical equivalence.
We denote by $\eff_k(X)\subset N_k(X)_{\r}$ the closure of the cone generated by effective $k$-cycles.
There is a linear map ${\ev_x}_*\pi_x^*: N_k(H_x)_{\r} \to N_{k+1}(X)_{\r}$.
A codimension $k$ cycle $\alpha\in A^k(X)\otimes_\z\q$ is
\emph{weakly positive} (respectively \emph{nef}) if $\alpha\cdot \beta>0$ (respectively $\alpha\cdot \beta \geq 0$)
for every effective integral $k$-cycle $\beta\neq 0$.
In this case we write $\alpha>0$ (respectively $\alpha\geq0$).
One easily checks that the only del Pezzo surface satisfying $\text {ch}_2>0$ is $\p^2$.
In \cite{2Fano_3folds}, we go through the classification of Fano threefolds, and
check that the only ones satisfying $\text {ch}_2>0$ are $\p^3$ and the smooth quadric hypersurface in $\p^4$.
In higher dimensions, Proposition~\ref{chern_characters} above
allows us to translate positivity properties of the Chern characters of $X$ into those of $H_x$, and
classify polarized varieties $(H_x, L_x)$ associated to
Fano manifolds $X$ with $\text {ch}_2(X)\geq0$ and $\text {ch}_3(X)\geq0$.
The following is our main theorem.
\begin{thm}\label{thm1} \label{thm2}
Let $X$ be a Fano manifold.
Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$.
Set $d=\dim H_x$.
\begin{enumerate}
\item If $\text {ch}_2(X)>0$ (respectively $\text {ch}_2(X)\geq0$), then $-2K_{H_x}-dL_x$ is ample (respectively nef).
This necessary condition is also sufficient provided that ${\ev_x}_*\pi_x^*\big(\eff_1(H_x)\big)=\eff_2(X)$.
\item If $\text {ch}_2(X)>0$, then $H_x$ is a Fano manifold with $\rho(H_x)=1$
except when $(H_x, L_x)$ is isomorphic to one of the following:
\begin{enumerate}
\item $\Big(\p^m\times \p^m, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m$,
\item $\Big(\p^{m+1}\times\p^{m} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$,
with $d=2m+1$,
\item $\Big(\p_{\p^{m+1}}\Big(\o(2)\oplus \o(1)^{^{\oplus m}}\Big) \ , \ \o_{\p}(1)\Big)$,
with $d=2m+1$,
\item $\Big(\p^{m}\times Q^{m+1} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$,
with $d=2m+1$, or
\item $\Big(\p_{\p^{m+1}}\big(T_{\p^{m+1}}\big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$.
\end{enumerate}
Moreover, each of these exceptional pairs occurs as $(H_x, L_x)$ for some Fano manifold $X$ with
$\text {ch}_2(X)>0$.
\item If $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq0$ and $d\geq 2$, then $H_x$ is a Fano manifold
with $\rho(H_x)=1$ and $\text {ch}_2(H_x)>0$.
\end{enumerate}
\end{thm}
\begin{rems}
\noindent {\bf (i)} Fano manifolds $X$ with $\text {ch}_2(X)\geq 0$ were introduced in \cite{dJ-S:2fanos_1} and \cite{dJ-S:2fanos_2}.
In \cite{dJ-S:2fanos_1} de Jong and Starr described a few examples and many non-examples of such manifolds.
Roughly, the only examples of Fano manifolds with $\text {ch}_2(X)>0$ in their list are
complete intersections of type $(d_1,\cdots, d_m)$ in $\p^n$,
with $\sum d_i^2\leq n$, and the Grassmannians $G(k,2k)$ and $G(k,2k+1)$.
Theorem~\ref{thm1} explains why, as pointed out in \cite{dJ-S:2fanos_1}, other examples are difficult to find.
\smallskip
\noindent {\bf (ii)}
Eventually, one would hope to classify all Fano manifolds with weakly positive (or even nef) higher Chern characters.
Theorem~\ref{thm1} is a step in this direction.
In fact, many homogeneous spaces $X$ are characterized by their variety of minimal rational tangents
${\mathcal {C}}_x=\tau_x(H_x)\subset \p(T_xX^{^{\vee}})$ among Fano manifolds with Picard number one.
This is the case when $X$ is a Hermitian symmetric space or a homogeneous contact manifold
{\cite[Main Theorem]{mok_05}}, or $X$ is the quotient of a complex simple Lie group by
a maximal parabolic subgroup associated to a long simple root {\cite[Main Theorem]{hong_hwang}}.
Notice, however, that $(H_x,L_x)$ carries less information than the embedding ${\mathcal {C}}_x\subset \p(T_xX^{^{\vee}})$.
For instance, in Example~\ref{G2/P},
$X$ is the $5$-dimensional homogeneous space $G_2/P$,
$(H_x, L_x)\cong \big(\p^1,\o(3)\big)$, and ${\mathcal {C}}_x$ is a twisted cubic in $\p(T_xX^{^{\vee}})\cong \p^4$,
and thus degenerate.
\smallskip
\noindent {\bf (iii)}
In a forthcoming paper we classify polarized varieties $(H_x, L_x)$ associated to
Fano manifolds $X$ satisfying $\text {ch}_2(X)\geq0$. In this case, the list of pairs
$(H_x, L_x)$ with $\rho(H_x)>1$ is
much longer than the one in Theorem~\ref{thm1}(2).
\end{rems}
By \cite{mori79}, Fano manifolds are covered by rational curves.
In \cite{dJ-S:2fanos_2}, de Jong and Starr considered the question whether there
is a rational surface through a general point of a Fano manifold $X$
satisfying $\text {ch}_2(X)\geq 0$.
They showed that the answer is positive if the pseudoindex $i_X$ of $X$ is at least $3$.
The condition $i_X\geq 3$ implies that $\dim H_x\geq1$, and so
Theorem~\ref{thm1}(1) recovers their result.
In fact, we can say a bit more:
\begin{thm}\label{thm3}
Let $X$ be a Fano manifold, and
$(H_x, L_x)$ a polarized minimal family of rational curves through a general point $x\in X$.
Suppose that $\text {ch}_2(X)\geq0$ and $d=\dim H_x\geq 1$.
\begin{enumerate}
\item (\cite{dJ-S:2fanos_2}) There is a rational surface through $x$.
\item If $\text {ch}_2(X)> 0$ and $(H_x, L_x)\not\cong$
$\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$, then
there is a generically injective morphism $g:(\p^{2},p)\to (X,x)$
mapping lines through $p$ to curves parametrized by $H_x$.
\end{enumerate}
Suppose moreover that $\text {ch}_2(X)> 0$, $\text {ch}_3(X)\geq0$ and $d\geq 2$.
\begin{enumerate}
\setcounter{enumi}{2}
\item There is a rational $3$-fold through $x$, except possibly if $(H_x, L_x)\cong \big(\p^2,\o(2)\big)$
and ${\mathcal {C}}_x=\tau_x(H_x)$ is singular.
\item Let $(W_h,M_h)$ be a polarized minimal family of rational curves through a general point $h\in H_x$.
Suppose that $(H_x, L_x)\not\cong \big(\p^d,\o(2)\big)$ and
$(W_h,M_h)\not\cong$ $\big(\p^k,\o(2)\big)$, $\big(\p^1,\o(3)\big)$.
Then there is a generically injective morphism $h:(\p^{3},q)\to (X,x)$ mapping
lines through $q$ to curves parametrized by $H_x$.
\end{enumerate}
\end{thm}
\begin{rems}
\noindent {\bf (i)} We believe that the exception in Theorem~\ref{thm3}(3) does not occur.
\smallskip
\noindent {\bf (ii)}
If $H_x$ parametrizes lines under some projective embedding $X\hookrightarrow \p^N$,
then the morphisms $g$ and $h$ from Theorem~\ref{thm3}(2) and (4)
map lines of $\p^{2}$ and $\p^3$ to lines of $\p^N$.
Hence, they are isomorphisms onto their images.
\end{rems}
This paper is organized as follows. In section~\ref{section:rat_curves} we
introduce polarized minimal families of rational curves and study some of their basic
properties.
In section~\ref{section:Chern} we make a
Chern class computation to prove Proposition~\ref{chern_characters}.
This is a key ingredient to the proof of Theorem~\ref{thm1}.
Theorems~\ref{thm1}
and \ref{thm3} are proved in section~\ref{section:proofs}.
In section~\ref{examples},
we give new examples of Fano manifolds satisfying $\text {ch}_2(X)\geq 0$.
In particular,
we exhibit Fano manifolds $X$ with $\text {ch}_2(X)>0$ realizing
each of the exceptional pairs
in Theorem~\ref{thm1}.
\bigskip
\noindent {\bf Notation.}
Throughout this paper we work over the field of complex
numbers.
We often identify vector bundles with their corresponding locally free subsheaves.
We also identify a divisor on a smooth projective variety $X$ with
its corresponding line bundle and its class in $\pic(X)$.
Let $E$ be a vector bundle on a variety $X$.
We denote the Grothendieck projectivization
$\operatorname{Proj}_X(Sym (E))$ by $\p(E)$, and the tautological line bundle on $\p(E)$
by $\o_{\p(E)}(1)$, or simply $\o_{\p}(1)$.
By a \emph{rational curve} we mean a \emph{proper rational curve}, unless otherwise noted.
\bigskip
\noindent {\it Acknowledgments.}
Most of this work was developed while we were research members at the Mathematical Sciences Research Institute (MSRI) during
the 2009 program in Algebraic Geometry. We are grateful to MSRI and the organizers of the program
for providing a very stimulating environment for our research and for the financial support.
This work has benefitted from ideas and suggestions by
J\'anos Koll\'ar and Jaros\l aw Wi\'sniewski. We thank them for their comments and interest in our work.
We thank Izzet Coskun, Johan de Jong, Jason Starr and Jenia Tevelev
for fruitful discussions on the subject of this paper.
The first named author was partially supported by CNPq-Brazil Research Fellowship and
L'Or\'eal-Brazil For Women in Science Fellowship.
\section{Polarized minimal families of rational curves}\label{section:rat_curves}
We refer to \cite[Chapters I and II]{kollar} for the basic theory of rational curves on complex projective varieties.
See also \cite{debarre}.
Let $X$ be a smooth complex projective uniruled variety of dimension $n$.
Let $x\in X$ be a \emph{general} point.
There is a scheme $\rat(X,x)$ parametrizing rational curves on $X$ passing through $x$.
This scheme is constructed as the normalization of a certain
subscheme of the Chow variety $\chow(X)$ parametrizing effective $1$-cycles on $X$.
We refer to \cite[II.2.11]{kollar} for details on the construction of $\rat(X,x)$.
An irreducible component $H_x$ of $\rat(X,x)$ is called a \emph{family of rational curves through $x$}.
It can also be described as follows.
There is an irreducible open subscheme $V_x$ of the Hom scheme $\Hom(\p^1,X,o\mapsto x)$ parametrizing
morphisms $f:\p^1\to X$ such that $f(o)=x$ and $f_*[\p^1]$ is parametrized by $H_x$.
Then $H_x$ is the quotient of $V_x$ by the natural action of the automorphism
group $\aut(\p^1,o)$.
Given a morphism $f:\p^1\to X$ parametrized by $V_x$,
we use the same symbol $[f]$ to denote the element of $V_x$ corresponding to $f$,
and its image in $H_x$.
Since $x\in X$ is a general point, both $V_x$ and $H_x$ are smooth, and
every morphism $f:\p^1\to X$ parametrized by $V_x$ is \emph{free}, i.e.,
$f^*T_X\ \cong \ \bigoplus_{i=1}^{\dim X}\o_{\p^1}(a_i)$, with all $a_i\geq 0$.
From the universal properties of $\Hom(\p^1,X,o\mapsto x)$ and $\chow(X)$,
we get a commutative diagram:
\begin{equation} \label{diagram_Hx}
\xymatrix{
\p^1 \times V_x \ar[d] \ar[r] \ar@/^0.8cm/[rrr]^{(t,[f])\mapsto f(t)} & U_x \ar[d]_{\pi_x} \ar[rr]^{\ev_x} & & X, \\
V_x \ar[r] \ar@/^0.4cm/[u]^{\{o\}\times \id} & H_x \ar@/_0.4cm/[u]_{\sigma_x}
}
\end{equation}
where $\pi_x$ is a $\p^1$-bundle and $\sigma_x$ is
the unique section of $\pi_x$ such that
$\ev_x\big(\sigma_x(H_x)\big)=x$.
We denote by the same symbol both $\sigma_x$ and its image in $U_x$,
which equals the image of $\{o\}\times V_x$ in $U_x$.
In \cite[Proposition 3.7]{druel_chern_classes}, Druel gave the following description of the tangent
bundle of $H_x$:
\begin{equation} \label{druel}
T_{H_x}\ \cong \ (\pi_x)_* \Big( \big( (\ev_x^*T_X)/T_{\pi_x} \big)(-\sigma_x) \Big),
\end{equation}
where the relative tangent sheaf $T_{\pi_x}=T_{U_x/H_x}$ is identified with its image
under the map $d\ev_x: T_{U_x}\to \ev_x^*T_X$.
When $H_x$ is proper,
we call it a \emph{minimal family of rational curves through $x$}.
\begin{say}[Minimal families of rational curves]\label{Hx}
Let $H_x$ be a minimal family of rational curves through $x$.
For a general point $[f]\in H_x$, we have
$f^*T_X \cong \o(2)\oplus \o(1)^{\oplus d}\oplus \o^{\oplus n-d-1}$,
where $d=\dim H_x=\deg(f^*T_X)-2\leq n-1$ (see \cite[IV.2.9]{kollar}).
Moreover, $d=n-1$ if and only if $X\cong \p^n$ by \cite{CMSB}
(see also \cite{kebekus_on_CMSB}).
Let $H_x^{\text{Sing},x}$ denote the subvariety of $H_x$ parametrizing curves that are singular at $x$.
By a result of Miyaoka (\cite[V.3.7.5]{kollar}), if $Z\subset H_x\setminus H_x^{\text{Sing},x}$ is a \emph{proper}
subvariety, then $\ev_x\big|_{\pi_x^{-1}(Z)}: \pi_x^{-1}(Z)\to X$ is generically injective.
In particular, if $H_x^{\text{Sing},x}=\emptyset$, then $\ev_x$ is birational onto its image.
By \cite[Theorem 3.3]{kebekus}, $H_x^{\text{Sing},x}$ is at most finite,
and every curve parametrized by $H_x$ is immersed at $x$.
\end{say}
\begin{say}[Polarized minimal families of rational curves]\ \label{describing_Lx}
Now we describe a natural polarization $L_x$ associated to a
minimal family $H_x$ of rational curves through $x$.
There is an inclusion of sheaves
\begin{equation}\label{tau}
\sigma_x^*\big(T_{\pi_x}\big)\ \hookrightarrow \ \sigma_x^*\ev_x^*T_X\ \cong \ T_xX\otimes \o_{H_x}.
\end{equation}
By \cite[Theorems 3.3 and 3.4]{kebekus}, the cokernel of this map is locally free,
and defines a finite morphism $\tau_x: \ H_x \to \p(T_xX^{^{\vee}})$.
By \cite{hwang_mok_birationality}, $\tau_x$ is birational onto its image.
Notice that $\tau_x$ sends a curve that is smooth at $x$ to its tangent direction at $x$.
Set $L_x=\tau_x^*\o(1)$. It is an ample and globally generated line bundle on $H_x$.
We call the pair $(H_x, L_x)$ a \emph{polarized minimal family of rational curves through $x$}.
The following description of $L_x$ from \cite[4.2]{druel_chern_classes} is very useful for computations.
Set $E_x=(\pi_x)_*\o_{U_x}(\sigma_x)$. Then $U_x\cong \p(E_x)$ over $H_x$,
and under this isomorphism $\o_{U_x}(\sigma_x)$ is identified with the tautological line bundle
$\o_{\p(E_x)}(1)$.
Notice also that $\sigma_x^*\big(\o_{U_x}(\sigma_x)\big) \cong \sigma_x^* \big({\mathcal {N}}_{\sigma_x/U_x}\big)
\cong \sigma_x^*\big(T_{\pi_x}\big)$.
Therefore \eqref{tau} induces an isomorphism
\begin{equation}\label{Lx=-normal}
L_x\cong \sigma_x^*\big({T_{\pi_x}}\big)^{-1}\cong \sigma_x^*\o_{U_x}(-\sigma_x) \cong \sigma_x^*\o_{\p(E_x)}(-1).
\end{equation}
By pulling back by $\sigma_x$, the Euler sequence
\begin{equation}\label{euler}
0 \ \to \ \ \o_{\p(E_x)}(1)\otimes\big({T_{\pi_x}}\big)^{-1} \ \to \ \pi_x^*E_x \ \to \ \o_{\p(E_x)}(1) \ \to \ 0
\end{equation}
induces an exact sequence
\begin{equation}\label{L}
0 \ \to \ \o_{H_x} \ \to \ E_x \ \to \ L_x^{-1} \ \to \ 0.
\end{equation}
This description of $L_x$ and the projection formula
yield the following identities of cycles on $U_x$:
\begin{itemize}
\item[(i) ] $\ev_x^*(c_1(X))=(d+2)(\sigma_x+\pi_x^*c_1(L_x))$,
\item[(ii) ] $\sigma_x\cdot\ev_x^*(\gamma)=0$ for any $\gamma\in A^k(X)$, $k\geq1$, and
\item[(iii) ] $\sigma_x^2=-\sigma_x\cdot\pi_x^*c_1(L_x)$,
\end{itemize}
where, as before, $d=\dim H_x=\deg(f^*T_X)-2$ for any $[f]\in H_x$.
\end{say}
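For instance, (ii) holds because $\ev_x\circ\sigma_x$ is the constant map to $x$, while (iii) follows from \eqref{Lx=-normal} by restricting the class of the section to itself:
\begin{equation*}
\sigma_x^2 \;=\; \sigma_x\cdot \pi_x^*\,c_1\Big(\sigma_x^*\o_{U_x}(\sigma_x)\Big)
\;=\; \sigma_x\cdot \pi_x^*\,c_1\big(L_x^{-1}\big)
\;=\; -\,\sigma_x\cdot\pi_x^*c_1(L_x).
\end{equation*}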
\begin{lemma} \label{f:P^k->X}
Let $X$ be a smooth complex projective uniruled variety.
Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$.
Suppose there is a subvariety $Z\subset H_x$ such that
$(Z, L_x|_Z)\cong (\p^k,\o_{\p^k}(1))$.
\begin{enumerate}
\item Then there is a finite morphism $g:(\p^{k+1},p)\to (X,x)$
that maps lines through $p$ birationally
to curves parametrized by $H_x$.
\item If moreover $Z\subset H_x\setminus H_x^{\text{Sing},x}$, then
$g$ is generically injective.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $Z\subset H_x$ be a subvariety such that $(Z, L_x|_Z)\cong (\p^k,\o_{\p^k}(1))$.
Set $U_Z:=\pi_x^{-1}(Z)$, $\sigma_Z:=\sigma_x\cap U_Z$, and $E_Z:=E_x|_{Z}$.
By \ref{describing_Lx}, $U_Z$ is isomorphic to $\p(E_Z)$ over $Z$,
and under this isomorphism $\o_{U_Z}(\sigma_Z)$ is identified with the tautological line bundle
$\o_{\p(E_Z)}(1)$.
By \eqref{L}, $E_Z\cong \o_{\p^k}\oplus \o_{\p^k}(-1)$. Thus $U_Z$ is isomorphic to the blowup
of $\p^{k+1}$ at a point $p$, and under this isomorphism $\sigma_Z$ is identified with the exceptional divisor.
Since $\ev_x|_{U_Z}: U_Z\to X$ maps $\sigma_Z$ to $x$ and contracts nothing else,
it factors through a finite morphism $g:\p^{k+1}\to X$ mapping $p$ to $x$.
The lines through $p$ on $\p^{k+1}$ are images of fibers of $\pi_x$ over $Z$, and thus are mapped
birationally to curves parametrized by $Z\subset H_x$.
If $Z\subset H_x\setminus H_x^{\text{Sing},x}$, then
$g$ is generically injective by \cite[V.3.7.5]{kollar}.
\end{proof}
\begin{say}\label{H}
There is a scheme $\rat(X)$ parametrizing rational curves on $X$.
A \emph{minimal dominating family of rational curves on $X$} is an
irreducible component $H$ of $\rat(X)$ parametrizing a
family of rational curves that sweeps out a dense open subset of $X$ and satisfies the following condition:
For a general point $x\in X$, the (possibly reducible) subvariety $H(x)$ of $H$
parametrizing curves through $x$ is proper.
In this case, for each irreducible component $H(x)^i$ of $H(x)$, there is a
minimal family $H_x^i$ of rational curves through $x$ parametrizing the same curves as $H(x)^i$.
Moreover, $H_x^i$ is naturally isomorphic to the normalization of $H(x)^i$.
This follows from the construction of $\rat(X)$ and $\rat(X,x)$ in \cite[II.2.11]{kollar}.
If in addition $H$ is proper, then we say that it is an \emph{unsplit covering family of rational curves}.
This is the case, for instance, when the curves parametrized by $H$ have degree $1$ with respect to
some ample line bundle on $X$.
\end{say}
We end this section by investigating the relationship between the Chow ring of a
Fano manifold and that of its minimal families of rational curves.
\begin{defn}\label{defn_cycles}
Let $X$ be a projective variety, and $k$ a non-negative integer.
We denote by $A_k(X)$ the group of $k$-cycles on $X$ modulo rational equivalence, and
by $A^k(X)$ the $k^{\text{th}}$ graded piece of the
Chow ring $A^*(X)$ of $X$.
Let $N_k(X)$ (respectively $N^k(X)$) be the quotient of $A_k(X)$
(respectively $A^k(X)$) by numerical equivalence.
Then $N_k(X)$ and $N^k(X)$ are finitely generated Abelian groups, and intersection
product induces a perfect pairing $N^k(X)\times N_k(X)\to \z$.
For every $\z$-module $B$, set $N_k(X)_B:=N_k(X)\otimes B$ and $N^k(X)_B:=N^k(X)\otimes B$.
We denote by $\eff_k(X)\subset N_k(X)_{\r}$ the closure of the cone generated by effective $k$-cycles.
Let $\alpha \in N^k(X)_{\r}$. We say that $\alpha$ is
\begin{enumerate}
\item[$\bullet$] \emph{ample} if $\alpha=A^k$ for some ample $\r$-divisor $A$ on $X$;
\item[$\bullet$] \emph{positive} if $\alpha\cdot \beta>0$ for every $\beta\in \eff_k(X)\setminus \{0\}$;
\item[$\bullet$] \emph{weakly positive} if $\alpha\cdot \beta>0$ for every effective integral
$k$-cycle $\beta\neq 0$;
\item[$\bullet$] \emph{nef} if $\alpha\cdot \beta\geq0$ for every $\beta\in \eff_k(X)$.
\end{enumerate}
We write $\alpha>0$ for $\alpha$ weakly positive and $\alpha\geq 0$ for $\alpha$ nef.
\end{defn}
\begin{defn}\label{defn_Tk}
Let $X$ be a smooth projective uniruled variety, and
$H_x$ a minimal family of rational curves through a general point $x\in X$.
Let $\pi_x$ and $\ev_x$ be as in \eqref{diagram_Hx}.
For every positive integer $k$,
we define linear maps
\begin{align}
T^k: \ & N^k(X)_{\r} \ \to \ N^{k-1}(H_x)_{\r}\ , & \ T_k: \ & N_k(H_x)_{\r} \ \to \ N_{k+1}(X)_{\r}. \notag \\
& \ \ \ \ \ \alpha \ \mapsto \ {\pi_x}_*\ev_x^*\alpha & & \ \ \ \ \ \beta \ \mapsto \ {\ev_x}_*\pi_x^*\beta \notag
\end{align}
This is possible because $\ev_x$ is proper and $\pi_x$ is a $\p^1$-bundle, and thus flat.
We remark that in general these maps are neither injective nor surjective.
\end{defn}
\begin{lemma}\label{Tk_preserves_positivity}
Let $X$ be a smooth projective uniruled variety, and
$(H_x, L_x)$ a polarized minimal family of rational curves through a general point $x\in X$.
\begin{enumerate}
\item Let $A$ be an $\r$-divisor on $X$, and set $a=\deg f^*A$, where $[f]\in H_x$.
Then $T^k(A^k)= a^k c_1(L_x)^{k-1}$.
\item $T_k$ maps $\eff_k(H_x)\setminus \{0\}$ into $\eff_{k+1}(X)\setminus \{0\}$.
\item $T^k$ preserves the properties of being ample, positive,
weakly positive and nef.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (1), let $A$ be an $\r$-divisor on $X$, and set $a=\deg f^*A$, where $[f]\in H_x$.
Using \eqref{Lx=-normal} and \ref{describing_Lx}(ii),
it is easy to see that $\ev_x^*A= a(\sigma_x+\pi_x^*L_x)$ in $N^1(U_x)$.
By \ref{describing_Lx}(iii) and the projection formula,
{\small \begin{align}
T^k(A^k)={\pi_x}_*\ev_x^*(A^k)&=
a^k{\pi_x}_*\left[\sum_{i=0}^k{{k}\choose{i}}\sigma_x^{k-i}\cdot \pi_x^*c_1(L_x)^i\right] \notag \\
&=a^k{\pi_x}_*\left[\left(\sum_{i=0}^{k-1}{{k}\choose{i}}(-1)^{k-i-1}\right)\sigma_x \cdot
\pi_x^*c_1(L_x)^{k-1} + \pi_x^*c_1(L_x)^{k}\right] \notag \\
&=a^k{\pi_x}_*\Big[\sigma_x\cdot \pi_x^*c_1(L_x)^{k-1} + \pi_x^*c_1(L_x)^{k}\Big]
= a^k c_1(L_x)^{k-1}.\notag
\end{align}}
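The binomial sum appearing in the second equality above evaluates to $1$ for every $k\geq1$; explicitly:
\begin{equation*}
\sum_{i=0}^{k-1}{{k}\choose{i}}(-1)^{k-i-1}
\;=\; -\left[\sum_{i=0}^{k}{{k}\choose{i}}(-1)^{k-i}\;-\;1\right]
\;=\; -\big[(1-1)^k-1\big] \;=\; 1.
\end{equation*}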
Notice that $T_k$ maps effective cycles to effective cycles, inducing a linear map
$T_k: \eff_k(H_x) \to\eff_{k+1}(X)$. By taking $A$ an ample divisor in (1) above, we see that
$T_k$ maps $\eff_k(H_x)\setminus \{0\}$ into $\eff_{k+1}(X)\setminus \{0\}$.
By the projection formula, $T^{k+1}(\alpha)\cdot\beta=\alpha\cdot T_k(\beta)$ for every
$\alpha\in N^{k+1}(X)_{\r}$ and $\beta\in N_k(H_x)_{\r}$. Together with (1) and (2) above, this
implies that $T^k$ preserves the properties of being ample, positive,
weakly positive and nef.
\end{proof}
\section{A Chern class computation}\label{section:Chern}
In this section we prove Proposition~\ref{chern_characters}.
We refer to \cite{fulton} for basic results about intersection theory.
In particular,
$\text {ch}(F)$ denotes the Chern character of the sheaf $F$, and $\text {td}(F)$ denotes its Todd class.
We follow the lines of the proof of \cite[Proposition 4.2]{druel_chern_classes}.
\begin{proof}[Proof of Proposition~\ref{chern_characters}]
We use the notation introduced in Section \ref{section:rat_curves}.
By Grothendieck-Riemann-Roch
\begin{gather*}
\text {ch}\Big({\pi_x}_!\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\Big)= \\
{\pi_x}_*\Big(\text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})\Big)\in A(H_x)_{\q}.
\end{gather*}
Since $f:\p^1\to X$ is free for any $[f]\in H_x$, $R^1{\pi_x}_*\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)=0$, and thus
$
{\pi_x}_!\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)={\pi_x}_*\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big).
$
It follows from \eqref{druel} that
$$
\text {ch}_k(H_x)=\text {ch}_k(T_{H_x})={\pi_x}_*\Big(\Big[\text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})\Big]_{k+1}\Big).
$$
Denote by $W_k$ the codimension $k$ part of the cycle
\begin{gather*}
\text {ch}\big(\ev_x^*T_X/T_{\pi_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x})= \\
\big(\ev_x^*\text {ch}(T_X)-\text {ch}(T_{\pi_x})\big)\cdot\text {ch}\big(\o_{U_x}(-\sigma_x)\big)\cdot\text {td}(T_{\pi_x}).
\end{gather*}
Denote by $Z_k$ the codimension $k$ part of the cycle
\begin{equation*}
\big(\ev_x^*\text {ch}(T_X)-\text {ch}(T_{\pi_x})\big)\cdot\text {ch}\big(\o_{U_x}(-\sigma_x)\big).
\end{equation*}
Then $\text {ch}_k(H_x)={\pi_x}_*\big(W_{k+1}\big)$, and $W_{k+1}=\sum_{j=0}^{k+1}Z_{k+1-j}\cdot\big[\text {td}(T_{\pi_x})\big]_j$.
We have:
$$\ev_x^*\big(\text {ch}(T_X)\big)=\ev_x^*\big(n+c_1(X)+\text {ch}_2(X)+\text {ch}_3(X)+\cdots\big),$$
$$\text {ch}\big(\o_{U_x}(-\sigma_x)\big)=\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\sigma_x^k,$$
$$\text {ch}(T_{\pi_x})=\sum_{k=0}^{\infty}\frac{1}{k!}c_1(T_{\pi_x})^k,\quad
\text {td}(T_{\pi_x})=\sum_{k=0}^{\infty}\frac{(-1)^kB_k}{k!}c_1(T_{\pi_x})^k.$$
From \eqref{euler} and \eqref{L}, $c_1(T_{\pi_x})=2\sigma_x+\pi_x^*c_1(L_x)$.
By repeatedly using \ref{describing_Lx}(iii), we have the following identities:
\begin{itemize}
\item[(iv) ] $\pi_x^*c_1(L_x)^i\cdot\sigma_x^j=(-1)^i\sigma_x^{i+j}$, for any $j\geq1$,
\item[(v) ] $c_1(T_{\pi_x})^i\cdot\sigma_x^j=\sigma_x^{i+j}$, for any $j\geq1$,
\item[(vi) ] $c_1(T_{\pi_x})^i=\left\{
\begin{aligned}
&\pi_x^*c_1(L_x)^i, &\text { if } i \text{ is even}\\
&2\sigma_x^i+\pi_x^*c_1(L_x)^i, &\text { if } i \text{ is odd.}
\end{aligned}
\right.$
\end{itemize}
\begin{claim}\label{Z formula}
For any $k\geq1$ we have the following formulas:
\begin{equation}\label{Z}
Z_k=\ev_x^*\text {ch}_k(X)+\frac{(n+1)(-1)^k}{k!}\sigma_x^k-\frac{1}{k!}\pi_x^*c_1(L_x)^k,
\end{equation}
\begin{equation}\label{Z times sigma}
Z_k\cdot\sigma_x=\frac{(n+1)(-1)^k}{k!}\sigma_x^{k+1}-\frac{1}{k!}\sigma_x\cdot\pi_x^*c_1(L_x)^k,
\end{equation}
\begin{equation}\label{push forward Z}
{\pi_x}_*Z_k={\pi_x}_*\ev_x^*\text {ch}_k(X)-\frac{(n+1)}{k!}c_1(L_x)^{k-1},
\end{equation}
\begin{equation}\label{push forward Z times sigma}
{\pi_x}_*\big(Z_k\cdot\sigma_x\big)=\frac{n}{k!}c_1(L_x)^k.
\end{equation}
\end{claim}
In addition, $Z_0=n-1$ (formula (\ref{Z}) does not hold for $k=0$).
\begin{proof}[Proof of Claim~\ref{Z formula}]
For $k\geq1$ we have:
$$Z_k=\sum_{j=0}^k\big(\ev_x^*\text {ch}_j(X)-\frac{1}{j!}c_1(T_{\pi_x})^j\big)\cdot\frac{(-1)^{k-j}}{(k-j)!}\sigma_x^{k-j}.$$
By identity \ref{describing_Lx}(ii), the terms $\ev_x^*\text {ch}_j(X)\cdot\sigma_x^{k-j}$ with $1\leq j\leq k-1$ vanish, so
$$Z_k=\ev_x^*\text {ch}_k(X)+\frac{n(-1)^k}{k!}\sigma_x^k-\sum_{j=0}^k\frac{(-1)^{k-j}}{j!(k-j)!}c_1(T_{\pi_x})^j\cdot\sigma_x^{k-j}.$$
Formula (\ref{Z}) follows now from identities (v), (vi) and $\sum_{j=0}^k\frac{(-1)^{k-j}}{j!(k-j)!}=0$. Formula (\ref{Z times sigma}) follows immediately from
(\ref{Z}) and \ref{describing_Lx}(ii).
Using the identity (iv) and the projection formula, we have
\begin{itemize}
\item[(vii) ] ${\pi_x}_*\sigma_x^{k}=(-1)^{k-1}c_1(L_x)^{k-1}$, for any $k\geq1$, and
\item[(viii) ] ${\pi_x}_*\pi_x^*(\gamma)=0$ for any class $\gamma\in A(H_x)$.
\end{itemize}
Formulas (\ref{push forward Z}) and (\ref{push forward Z times sigma}) now follow from (vii) and (viii).
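To spell out (vii): by (iv) with $j=1$ we have $\sigma_x^{k}=(-1)^{k-1}\sigma_x\cdot\pi_x^*c_1(L_x)^{k-1}$, so the projection formula, together with the fact that $\sigma_x$ is the class of a section of $\pi_x$ (hence ${\pi_x}_*\sigma_x=[H_x]$), gives
$$
{\pi_x}_*\sigma_x^{k}=(-1)^{k-1}c_1(L_x)^{k-1}\cdot{\pi_x}_*\sigma_x=(-1)^{k-1}c_1(L_x)^{k-1}.
$$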
\end{proof}
For simplicity, we denote by $A_j$ the coefficient of $c_1(M)^j$ in the formula for the Todd class $\text {td}(M)$ of a line bundle $M$, i.e.,
$A_j=\frac{(-1)^j}{j!}B_j$. Recall that $A_0=1$, $A_1=1/2$, $A_2=1/12$, $A_3=0$, $A_4=-1/720$, etc.
We have
$W_{k+1}=\sum_{j=0}^{k+1}A_{k+1-j}Z_j\cdot c_1(T_{\pi_x})^{k+1-j}$.
Since $A_l=0$ for all odd $l\geq3$, by identity (vi) the formula for $W_{k+1}$ becomes:
$$W_{k+1}=\sum_{j=0}^{k+1}A_{k+1-j}Z_j\cdot \pi_x^*c_1(L_x)^{k+1-j}+Z_k\cdot\sigma_x.$$
By the projection formula,
$${\pi_x}_*W_{k+1}=\sum_{j=1}^{k+1}A_{k+1-j}\big({\pi_x}_*Z_j\big)\cdot c_1(L_x)^{k+1-j}+{\pi_x}_*\big(Z_k\cdot\sigma_x\big).$$
Using (\ref{push forward Z}), (\ref{push forward Z times sigma}), we have
\begin{gather*}
{\pi_x}_*W_{k+1}=\sum_{j=1}^{k+1}A_{k+1-j}{\pi_x}_*\ev_x^*\text {ch}_j(X)\cdot c_1(L_x)^{k+1-j}\\
-(n+1)\Big(\sum_{j=1}^{k+1}\frac{A_{k+1-j}}{j!}\Big)c_1(L_x)^k+\frac{n}{k!}c_1(L_x)^k.
\end{gather*}
It is easy to see that the identity $\sum_{l=0}^m B_l {{m+1}\choose{l}}=0$ implies the identity
\begin{equation*}
\sum_{j=1}^{k+1}\frac{A_{k+1-j}}{j!}=\frac{1}{k!}
\end{equation*}
(use $A_l=0$ for all odd $l\geq3$) and now \eqref{ch_k for H_x} follows.
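In detail: substituting $l=k+1-j$, and using $(-1)^lB_l=B_l$ for $l\neq1$ together with $B_1=-\frac{1}{2}$,
\begin{align*}
\sum_{j=1}^{k+1}\frac{A_{k+1-j}}{j!}&=\sum_{l=0}^{k}\frac{(-1)^lB_l}{l!\,(k+1-l)!}=\frac{1}{(k+1)!}\sum_{l=0}^{k}(-1)^lB_l{{k+1}\choose{l}}\\
&=\frac{1}{(k+1)!}\Big[\sum_{l=0}^{k}B_l{{k+1}\choose{l}}-2B_1(k+1)\Big]=\frac{k+1}{(k+1)!}=\frac{1}{k!}.
\end{align*}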
To prove \eqref{c_1 of H_x} and \eqref{ch_2 of H_x} from \eqref{ch_k for H_x},
observe that ${\pi_x}_*\ev_x^*c_1(X)=d+2$.
\end{proof}
\section{Higher Fano manifolds} \label{section:proofs}
We start this section by recalling some results about the index and pseudoindex of a Fano manifold
and extremal rays of its Mori cone.
We refer to \cite{kollar_mori} for basic definitions and results about the minimal model program.
\begin{defn}\label{index}
Let $X$ be a Fano manifold. The \emph{index} of $X$ is the largest integer
$r_X$ that divides $-K_X$ in $\pic(X)$.
The \emph{pseudoindex} of $X$ is the integer
$i_{X}=\min\big\{-K_X\cdot C\ \big| \ C\subset X \text{ rational curve }\big\}$.
\end{defn}
Notice that $1\leq r_X \leq i_X$.
Moreover
$i_X\leq \dim X+1$, and $i_X=\dim X+1$ if and only if $X\cong \p^n$
(see \ref{Hx}).
By \cite{wisniewski_90}, if $\rho(X)>1$, then $i_X\leq \frac{\dim X}{2}+1$.
\begin{defn}\label{index&rays}
Let $X$ be a smooth complex projective variety.
Let $R$ be an extremal ray of the Mori cone $\nec{X}$, and let $f:X\to Z$ be the corresponding contraction.
The \emph{exceptional locus} $E(R)$ of $R$ is the closed subset of $X$ where $f$ fails to be a local isomorphism.
Given a divisor $L$ on $X$, we set
$L\cdot R= \min\big\{L\cdot C\ \big| \ C\subset X \text{ rational curve such that } [C]\in R\big\}$.
In particular, the \emph{length} of $R$ is
$l(R)=-K_X\cdot R=\min\big\{-K_X\cdot C\ \big| \ C\subset X \text{ rational curve such that } [C]\in R\big\}\geq i_X$.
\end{defn}
Let $X$ be a Fano manifold, and
$R$ an extremal ray of $\nec{X}$.
By the theorem on lengths of extremal rays, $l(R)\leq \dim X+1$.
By \cite{AO_long_R}, if $\rho(X)>1$, then,
\begin{equation} \label{long_R}
i_X\ +\ l(R)\ \leq \ \dim E(R) \ + \ 2.
\end{equation}
\begin{lemma}\label{adjunction}
Let $Y$ be a $d$-dimensional Fano manifold. Let $L$ be an ample divisor on $Y$ such
that $-2K_Y-dL$ is ample.
\begin{enumerate}
\item Suppose that $(Y,L) \not\cong$ $\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$,
and let $R$ be any extremal ray of $\nec{Y}$. Then $L\cdot R=1$.
\item Suppose that $\rho(Y)>1$. Then $(Y,L)$ is isomorphic to one of the following:
\begin{enumerate}
\item $\Big(\p^m\times \p^m, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$, with $d=2m$,
\item $\Big(\p^{m+1}\times\p^{m} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$,
with $d=2m+1$,
\item $\Big(\p_{\p^{m+1}}\Big(\o(2)\oplus \o(1)^{^{\oplus m}}\Big) \ , \ \o_{\p}(1)\Big)$,
with $d=2m+1$,
\item $\Big(\p^{m}\times Q^{m+1} \ ,\ p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\Big)$,
with $d=2m+1$, or
\item $\Big(\p_{\p^{m+1}}\big(T_{\p^{m+1}}\big) \ , \ \o_{\p}(1)\Big)$, with $d=2m+1$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{rem}
In (c) above, $Y$ can also be described as the blowup of $\p^{2m+1}$ along a linear subspace of dimension $m-1$.
In (e) above, $Y$ can also be described as a smooth divisor of type $(1,1)$ on $\p^{m+1}\times\p^{m+1}$.
\end{rem}
\begin{proof}[{Proof of Lemma~\ref{adjunction}}]
Suppose that $\rho(Y)=1$.
Then $\nec{Y}$ consists of a single extremal ray $R$, and there is an ample divisor
$L'$ on $Y$ such that $\pic(Y)=\z\cdot [L']$.
Let $\lambda$ be the positive integer such that $L\sim \lambda L'$.
If $i_Y=d+1$, then $(Y,L')\cong \big(\p^d, \o_{\p^d}(1)\big)$, and
$-2K_Y-dL\sim \big(d(2-\lambda) +2\big)L'$.
Since this is ample, either $\lambda\leq 2$ or $(d,\lambda)=(1,3)$.
If $i_Y\leq d$, then
$
1\leq (-2K_Y-dL)\cdot R = 2i_Y-d\lambda (L'\cdot R)\leq d\big(2-\lambda (L'\cdot R)\big).
$
Hence, $\lambda=L'\cdot R=1$.
From now on we assume that $\rho(Y)>1$.
Then $d>1$ and $i_Y\geq \frac{d+1}{2}$.
Moreover, by \cite{wisniewski_90}, $r_Y\leq i_Y\leq \frac{d}{2}+1$.
Let $R$ be any extremal ray of $\nec{Y}$. We claim that $L\cdot R=1$.
Indeed, if $L\cdot R\geq 2$, then $l(R)=d+1$, contradicting \eqref{long_R}.
Suppose that $d=2m$ is even. Then $i_Y= m+1$.
Set $A=-K_Y-mL$. By assumption $A$ is ample.
For any extremal ray $R\subset \nec{Y}$, \eqref{long_R} implies that $l(R)=m+1$,
and thus $A\cdot R= 1=L\cdot R$.
Hence, $A\equiv L$, and so $A\sim L$ since $Y$ is Fano.
In particular $-K_Y\sim (m+1)L$, and thus $r_Y=m+1$.
By \cite[Theorem B]{wisniewski_90}, this implies that $Y\cong \p^{m}\times \p^{m}$.
Now suppose that $d=2m+1$ is odd. Then $i_Y= m+1$.
Set $A'=-2K_Y-(2m+1)L$. By assumption $A'$ is ample.
Let $R$ be an extremal ray of $\nec{Y}$.
Then $l(R)\geq m+1$, and $\dim E(R)\leq 2m+1$.
By \eqref{long_R}, there are three possibilities:
\begin{enumerate}
\item[(a)] $l(R)= m+2$, $E(R)=Y$, and equality holds in \eqref{long_R};
\item[(b)] $l(R)= m+1$, $\dim E(R) =2m$, and equality holds in \eqref{long_R}; or
\item[(c)] $l(R)= m+1$, $E(R)=Y$, and equality in \eqref{long_R} fails by $1$.
\end{enumerate}
In \cite{AO_long_R}, Andreatta and Occhetta classify the cases in which equality holds in \eqref{long_R}, assuming
$\dim Y-1\leq\dim E(R)\leq \dim Y$. They show that in this case either $Y$ is a product of projective spaces, or a blowup
of $\p^{2m+1}$ along a linear subspace of dimension at most $m-1$.
From this we see that in case (a) we must have $Y\cong \p^{m+1}\times\p^{m}$,
while in case (b) $Y$ must be isomorphic to
the blowup of $\p^{2m+1}$ along a linear subspace of dimension $m-1$.
From now on we assume that every extremal ray $R$ of $\nec{Y}$ falls into case (c) above,
which implies that $A'\cdot R= 1=L\cdot R$. Thus $A'\sim L$,
$-K_Y\sim (m+1)L$, and thus $r_Y=m+1$.
By \cite{wisniewski_91}, this implies that either $Y\cong \p^{m}\times Q^{m+1}$, or
$Y\cong \p_{\p^{m+1}}\big(T_{\p^{m+1}}\big)$.
\end{proof}
\medskip
\begin{proof}[{Proof of Theorem~\ref{thm1}}]
Let $X$ be a Fano manifold with $\text {ch}_2(X)\geq 0$.
Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$.
Set $d=\dim H_x$.
By Proposition~\ref{chern_characters},
$$
c_1(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)+\frac{d}{2}c_1(L_x).
$$
By Lemma~\ref{Tk_preserves_positivity},
${\pi_x}_*\ev_x^*$ preserves the properties of being weakly positive and nef.
Thus $-K_{H_x}$ is ample and $-2K_{H_x}-dL_x$ is nef.
Since $H_x$ is Fano, ${\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)$ is ample if and only if it is weakly positive.
Hence, $\text {ch}_2(X)> 0$ implies that $-2K_{H_x}-dL_x$ is ample.
If ${\ev_x}_*\pi_x^*\big(\eff_1(H_x)\big)=\eff_2(X)$, and $-2K_{H_x}-dL_x$ is ample (respectively nef), then
clearly $\text {ch}_2(X)$ is positive (respectively nef).
This proves the first part of the theorem.
The second part follows from Lemma~\ref{adjunction}(2).
Examples of Fano manifolds $X$ with $\text {ch}_2(X)>0$ realizing
each of the exceptional pairs are given in section~\ref{examples}.
Finally, suppose that $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq 0$ and $d\geq 2$.
We already know from part (1) that $-2K_{H_x}-dL_x$ is ample.
We want to prove that $\text {ch}_2(H_x)>0$ and $\rho(H_x)=1$.
For that purpose we may assume that $(H_x, L_x)\not\cong \big(\p^d,\o(2)\big)$.
Let $R\subset \eff_1(H_x)$ be an extremal ray.
By Lemma~\ref{adjunction}(1), there is a rational curve $\ell\subset H_x$ such that $R=\r_{\geq 0}[\ell]$
and $L_x\cdot \ell=1$. Moreover,
$$
{\pi_x}_*\ev_x^*\big(2 \ \text {ch}_2(X)\big)\cdot [\ell]=2\ \text {ch}_2(X)\cdot {\ev_x}_*\pi_x^*[\ell]
= \big(c_1(T_X)^2-2c_2(T_X)\big)\cdot {\ev_x}_*\pi_x^*[\ell]
$$
is a positive integer, and thus $\geq 1$.
Therefore $\eta:={\pi_x}_*\ev_x^*\big(\text {ch}_2(X)\big)-\frac{1}{2}c_1(L_x)\in N^1(H_x)_{\q}$ is nef.
We rewrite formula \eqref{ch_2 of H_x} of Proposition~\ref{chern_characters} as
$$
\text {ch}_2(H_x)={\pi_x}_*\ev_x^*\big(\text {ch}_3(X)\big)+\frac{1}{2}\eta\cdot c_1(L_x)+\frac{d-1}{12}c_1(L_x)^2.
$$
Since ${\pi_x}_*\ev_x^*$ preserves the properties of being nef,
we conclude that $\text {ch}_2(H_x)$ is positive.
By \ref{non-examples}, none of the exceptional pairs $(H_x, L_x)$ from part (2)
satisfy $\text {ch}_2(H_x)>0$.
Hence, $\rho(H_x)=1$.
\end{proof}
\begin{lemma} \label{Hx_covered_by_lines}
Let $X$ be a Fano manifold.
Let $(H_x, L_x)$ be a polarized minimal family of rational curves through a general point $x\in X$.
Set $d=\dim H_x$.
\begin{enumerate}
\item Suppose that $\text {ch}_2(X)>0$, $d\geq 1$ and $(H_x, L_x)\not\cong$
$\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$.
Then any minimal dominating family of rational curves on $H_x$
parametrizes smooth rational curves of $L_x$-degree equal to $1$.
\item Suppose that $\text {ch}_2(X)>0$, $\text {ch}_3(X)\geq0$, $d\geq 2$ and $(H_x, L_x)\not\cong$
$\big(\p^d,\o(2)\big)$.
Let $(W_h,M_h)$ be a polarized minimal family of rational curves through a general point $h\in H_x$.
Suppose that
$(W_h,M_h)\not\cong$ $\big(\p^k,\o(2)\big)$, $\big(\p^1,\o(3)\big)$.
Then there is an isomorphism $g:\p^2\to S\subset H_x$ mapping a point $p\in \p^2$ to $h\in H_x$,
sending lines through $p$ to curves parametrized by $W_h$, and such that
$g^*L_x\cong \o_{\p^2}(1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $\text {ch}_2(X)>0$, $d\geq 1$ and $(H_x, L_x)\not\cong$
$\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$.
Let $W$ be a minimal dominating family of rational curves on $H_x$.
We will show that the curves parametrized by $W$ have $L_x$-degree equal to $1$.
If $\rho(H_x)>1$, then this can be checked directly from the list in Theorem~\ref{thm1}(2).
So we assume $\rho(H_x)=1$.
Let $\ell\subset H_x$ be a curve parametrized by $W$.
Then, as in \ref{Hx},
$-K_{H_x}\cdot \ell\leq d+1$, and $-K_{H_x}\cdot \ell= d+1$ if and only if $H_x\cong \p^d$.
By Theorem~\ref{thm1}(1), $-K_{H_x}\cdot \ell>\frac{d}{2} L_x\cdot \ell$.
If $ L_x\cdot \ell >1$, then $(H_x, L_x)\cong$
$\big(\p^d,\o(2)\big)$ or $\big(\p^1,\o(3)\big)$,
contradicting our assumptions. We conclude that $L_x\cdot \ell =1$.
The generically injective morphism
$\tau_x: H_x \to \p(T_xX^{^{\vee}})$ defined in \ref{describing_Lx}
maps curves parametrized by $W$ to lines.
So all curves parametrized by $W$ are smooth.
Now suppose we are under the assumptions of the second part of the lemma.
By Theorem~\ref{thm1}(3), $H_x$ is a Fano manifold with $\text {ch}_2(H_x)>0$ and $\rho(H_x)=1$.
If $d=2$, then $H_x\cong \p^2$, and by our assumptions $L_x\cong \o(1)$.
Now suppose $d=3$.
Recall that the only Fano threefolds satisfying $\text {ch}_2>0$ are $\p^3$ and the smooth quadric hypersurface $Q^3\subset \p^4$ (\cite{2Fano_3folds}).
The polarized minimal family of rational curves through a general point of $Q^3$ is isomorphic to $\big(\p^1,\o(2)\big)$.
So our assumptions imply that $(H_x, L_x)\cong \big(\p^3,\o(1)\big)$, and the conclusion of the lemma is clear.
Finally assume that $d\geq 4$.
By the first part of the lemma,
$L_x\cdot \ell =1$
for any curve $\ell$ parametrized by $W_h$, and $W_h^{\text{Sing},h}=\emptyset$.
Theorem~\ref{thm1}(1) implies that $i_{H_x}>\frac{d}{2}\geq 2$,
and thus $\dim W_h\geq i_{H_x}-2\geq 1$.
By the first part of the lemma, now applied to the variety $H_x$, $W_h$ is covered by smooth
rational curves of $M_h$-degree equal to $1$.
By Lemma~\ref{f:P^k->X}, applied to the variety $H_x$,
there is a generically injective morphism $g:(\p^2,p)\to (H_x, h)$ mapping
lines through $p$ to curves on $H_x$ parametrized by $W_h$.
Since these curves have $L_x$-degree equal to $1$, they
are mapped to lines by the generically injective morphism
$\tau_x:H_x \to \p(T_xX^{^{\vee}})$.
We conclude that the composition $\tau_x\circ g:\p^2 \to \p(T_xX^{^{\vee}})$ is an isomorphism onto its image
and $(\tau_x\circ g)^*\o_{\p}(1) \cong \o_{\p^2}(1)$.
This proves the second part of the lemma.
\end{proof}
\begin{proof}[{Proof of Theorem~\ref{thm3}}]
Let the notation and assumptions be as in Theorem~\ref{thm3}.
Part (1) was proved in \cite{dJ-S:2fanos_2}. It also follows from Theorem~\ref{thm1}(1):
the conditions $\text {ch}_2(X)\geq 0$ and $d\geq 1$ imply that $H_x$ is a positive dimensional
Fano manifold, and thus covered by rational curves. For any rational curve $\ell\subset H_x$,
$S=\ev_x\big(\pi_x^{-1}(\ell)\big)$ is a rational surface on $X$ through $x$.
Notice that $S$ is covered by rational curves parametrized by $H_x$.
For part (2), assume $\text {ch}_2(X)> 0$ and $(H_x, L_x)\not\cong$
$\big(\p^d,\o(2)\big)$, $\big(\p^1,\o(3)\big)$.
If $d=1$, then $(H_x, L_x)\cong \big(\p^1,\o(1)\big)$. In this case the morphism
$\tau_x: H_x \to \p(T_xX^{^{\vee}})$
is an isomorphism onto its image, and thus $H_x^{\text{Sing},x}=\emptyset$ by
\cite[Corollary 2.8]{artigo_tese}.
If $d>1$, let $W$ be a minimal dominating family of rational curves on $H_x$.
By Lemma~\ref{Hx_covered_by_lines}(1),
the curves parametrized by $W$ are smooth and have $L_x$-degree equal to $1$.
Moreover, since $H_x^{\text{Sing},x}$ is at most finite,
a general curve parametrized by $W$
is contained in $H_x\setminus H_x^{\text{Sing},x}$ by \cite[II.3.7]{kollar}.
Part (2) now follows from Lemma~\ref{f:P^k->X}.
From now on assume that $\text {ch}_2(X)> 0$, $\text {ch}_3(X)\geq 0$ and $d\geq 2$.
Then $H_x$ is a Fano manifold with $\rho(H_x)=1$ and $\text {ch}_2(H_x)>0$
by Theorem~\ref{thm1}(3).
If $d=2$, then $H_x\cong\p^2$ and $U_x=\p(E_x)$ is a rational $3$-fold.
Hence, $\ev_x(U_x)$ is a rational $3$-fold through $x$ except possibly if $\ev_x$
fails to be birational onto its image.
This can only occur if ${\mathcal {C}}_x$ is singular by \cite[Corollary 2.8]{artigo_tese},
in which case $L_x\cong \o(2)$.
Now suppose $d\geq 3$.
We claim that there is a rational surface $S\subset H_x\setminus H_x^{\text{Sing},x}$.
From this it follows that $\ev_x\big|_{\pi_x^{-1}(S)}$ is generically injective and $\ev_x\big(\pi_x^{-1}(S)\big)$ is a rational
$3$-fold through $x$.
If $d=3$, then $H_x\cong \p^3$ or $Q^3\subset \p^4$,
and we can find a rational surface $S\subset H_x\setminus H_x^{\text{Sing},x}$.
Now assume $d\geq 4$, and let $W_h$ be a minimal family of rational curves through a general point $h\in H_x$.
Then $W_h$ is a Fano manifold and $\dim W_h\geq i_{H_x}-2\geq 1$.
As in the proof of part (1), now applied to $H_x$,
each rational curve on $W_h$ yields a rational surface $S$ on $H_x$ through $h$.
Recall that $H_x^{\text{Sing},x}$ is at most finite.
If $S \cap H_x^{\text{Sing},x} \neq \emptyset$, then there is a rational curve on $H_x$ parametrized by $W_h$
meeting $H_x^{\text{Sing},x}$.
If this holds for a general point $h\in H_x$, then there is a point $h_0\in H_x^{\text{Sing},x}$
that can be connected
to a general point of $H_x$ by a curve parametrized
by a suitable minimal dominating family of rational curves on $H_x$.
But this implies that a curve from this minimal dominating family has
$-K_{H_x}$-degree equal to $d+1$, and so we must have $H_x\cong \p^d$ (see \ref{Hx}).
Since $d>2$, we can find a rational surface $S'\subset H_x\setminus H_x^{\text{Sing},x}$.
This proves part (3).
Finally, suppose we are under the assumptions of part (4).
By Lemma~\ref{Hx_covered_by_lines}(2), $H_x$ is covered by surfaces
$S$ such that $(S, L_x|_S)\cong (\p^2,\o_{\p^2}(1))$.
Exactly as in the proof of part (3) above,
we can take such a surface $S\subset H_x\setminus H_x^{\text{Sing},x}$.
Part (4) now follows from Lemma~\ref{f:P^k->X}.
\end{proof}
\section{Examples}\label{examples}
In this section we discuss examples of Fano manifolds $X$ with $\text {ch}_2(X)\geq 0$.
Theorem~\ref{thm1} provides a new way of checking positivity of $\text {ch}_2(X)$,
enabling us to find new examples.
Examples \ref{C.I.} and \ref{G} below appear in \cite{dJ-S:2fanos_1}.
Example \ref{H_in_G} does not appear explicitly in \cite{dJ-S:2fanos_1}, but it can be inferred
from \cite[Theorem 1.1(3)]{dJ-S:2fanos_1}.
Examples \ref{OG}, \ref{SG}, \ref{degenerate SG} and \ref{G2/P} are new.
\begin{say}[Complete Intersections]\label{C.I.}
Let $X$ be a complete intersection of type $(d_1,\ldots, d_c)$ in $\p^n$.
Standard Chern class computations show that $\text {ch}_k(X)>0$ (respectively $\geq0$)
if and only if $\sum d_i^k\leq n$ (respectively $\leq n+1$). See for instance
\cite[2.1 and 2.4]{dJ-S:2fanos_1}.
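A sketch of that computation: write $h$ for the restriction to $X$ of the hyperplane class. The Euler sequence and the conormal sequence of $X\subset \p^n$ give
$$
\text {ch}(T_X)=\text {ch}\big(T_{\p^n}|_X\big)-\sum_{i=1}^{c}\text {ch}\big(\o_X(d_i)\big)=(n+1)e^{h}-1-\sum_{i=1}^{c}e^{d_ih},
$$
and hence $\text {ch}_k(X)=\big(n+1-\sum_{i=1}^{c}d_i^k\big)\frac{h^k}{k!}$ for $k\geq1$.
Since $h^k$ is the class of a codimension $k$ linear section, the positivity of $\text {ch}_k(X)$ is governed by the sign of $n+1-\sum d_i^k$.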
Let $x\in X$ be a general point, and let $H_x$ be the variety of
lines through $x$ on $X$.
Then $H_x$ is a complete intersection of type
$(1,2,\ldots, d_1,\ldots,1,2,\ldots, d_c)$ in $\p^{n-1}$, and $L_x\cong \o(1)$.
The condition from Theorem~\ref{thm1}(1) of $-2K_{H_x}-dL_x$ being ample
(respectively nef) is clearly equivalent to
$\sum d_i^2\leq n$ (respectively $\leq n+1$).
\end{say}
\begin{say}[Grassmannians] \label{G}
Let $X=G(k,n)$ be the Grassmannian of $k$-dimensional linear subspaces of an $n$-dimensional vector
space $V$, with $2\leq k\leq\frac{n}{2}$.
As computed in \cite[2.2]{dJ-S:2fanos_1}, the second Chern character of $X$ is given by
$$
\text {ch}_2(X)=\frac{n+2-2k}{2}\sigma_2-\frac{n-2-2k}{2}\sigma_{1,1},
$$
where $\sigma_2$ and $\sigma_{1,1}$ are the usual Schubert cycles of codimension $2$.
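This can be checked directly from the isomorphism $T_X\cong{\mathcal {S}}^{*}\otimes {\mathcal {Q}}$, where ${\mathcal {S}}$ and ${\mathcal {Q}}$ denote the universal sub- and quotient bundles: since $c_1({\mathcal {S}}^{*})=c_1({\mathcal {Q}})=\sigma_1$, $c_2({\mathcal {S}}^{*})=\sigma_{1,1}$ and $c_2({\mathcal {Q}})=\sigma_2$, one gets
\begin{align*}
\text {ch}_2(X)&=k\,\text {ch}_2({\mathcal {Q}})+\sigma_1^2+(n-k)\,\text {ch}_2({\mathcal {S}}^{*})\\
&=k\,\frac{\sigma_{1,1}-\sigma_2}{2}+(\sigma_2+\sigma_{1,1})+(n-k)\,\frac{\sigma_2-\sigma_{1,1}}{2}=\frac{n+2-2k}{2}\sigma_2-\frac{n-2-2k}{2}\sigma_{1,1},
\end{align*}
using $\sigma_1^2=\sigma_2+\sigma_{1,1}$.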
Recall that $\eff_{2}(X)$ is generated by the dual Schubert cycles $\sigma_{1,1}^{*}$ and $\sigma_2^{*}$.
Thus $\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $2k\leq n \leq2k+1$ (respectively $2k\leq n \leq2k+2$).
Given $x\in X$, let $H_x$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding.
As explained in \cite[1.4.4]{hwang},
$(H_x,L_x)\cong \big(\p^{k-1}\times\p^{n-k-1}, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\big)$.
Indeed, if $x$ parametrizes a linear subspace $[W]$, then
a line through $x$ corresponds to subspaces $U$ and $U'$ of $V$, of dimension $k-1$ and $k+1$,
such that $U\subset W\subset U'$. So there is a natural identification $H_x\cong\p(W)\times\p(V/W)^*$.
The condition of $-2K_{H_x}-dL_x$ being ample (respectively nef) is clearly equivalent to $2k\leq n\leq 2k+1$ (respectively $2k\leq n\leq 2k+2$).
Notice also that the map $T_1: \eff_1(H_x) \to\eff_{2}(X)$ sends
lines on fibers of $p_1$ and $p_2$ to the dual Schubert cycles $\sigma_2^{*}$ and $\sigma_{1,1}^{*}$.
In particular it is surjective.
Exceptional pairs (a), (b) in Theorem~\ref{thm1}(2) occur in this case.
\end{say}
\begin{say}[Hyperplane sections of Grassmannians] \label{H_in_G}
Let $X$ be a general hyperplane section of the Grassmannian $G(k,n)$ under the Pl\"ucker embedding,
where $2\leq k\leq\frac{n}{2}$.
Let $x\in X$ be a general point, and $H_x$ the variety of lines through $x$ on $X$.
Then $H_x$ is a smooth divisor of type $(1,1)$
in $\p^{k-1}\times\p^{n-k-1}$ and $L_x$ is the restriction to $H_x$ of $p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)$.
Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=2k$
(respectively $2k\leq n\leq 2k+1$).
In these cases $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective,
and thus Theorem~\ref{thm1}(1) applies. We conclude that
$\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=2k$ (respectively $2k\leq n\leq 2k+1$).
This example occurs as the exceptional case (e) in Theorem~\ref{thm1}(2).
\end{say}
\begin{say}[Orthogonal Grassmannians]\label{OG}
We fix $Q$ a nondegenerate symmetric bilinear form on the $n$-dimensional vector
space $V$, and $k$ an integer satisfying $2\leq k<\frac{n}{2}-1$.
Let $X=OG(k,n)$ be the subvariety of the
Grassmannian $G(k,n)$ parametrizing linear subspaces that are isotropic with respect to $Q$.
Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k-1)}{2}$ and $\rho(X)=1$.
Notice that $X$ is the zero locus in $G(k,n)$ of a global section of the vector bundle
$\sym^2({\mathcal {S}}^*)$, where ${\mathcal {S}}$ is the universal subbundle on $G(k,n)$.
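In particular, since the section is regular, the dimension can be read off from this description:
$$
\dim X=\dim G(k,n)-\rank \sym^2({\mathcal {S}}^*)=k(n-k)-\frac{k(k+1)}{2}=\frac{k(2n-3k-1)}{2}.
$$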
Using this description and the formula for $\text {ch}_2\big(G(k,n)\big)$ described in \ref{G},
standard Chern class computations show that
$$
\text {ch}_2(X)=\frac{n-1-3k}{2}\sigma_2-\frac{n-3-3k}{2}\sigma_{1,1},
$$
where we denote by the same symbols $\sigma_2$ and $\sigma_{1,1}$ the restriction to $X$
of the corresponding Schubert cycles on $G(k,n)$.
Given $x\in X$, let $H_x$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding.
We claim that $(H_x,L_x)\cong \big(\p^{k-1}\times Q^{n-2k-2}, p_{_1}^*\o(1)\otimes p_{_2}^*\o(1)\big)$.
Indeed, if $x$ parametrizes a linear subspace $[W]$, then
a line through $x$ on $X$ corresponds to a pair $(U,U')\in\p(W)\times\p(V/W)^*$ such that
$U'\subset U^{\perp}$ and $Q(v,v)=0$ for any $v\in U'$.
This is equivalent to the condition that $U'\subset W^{\perp}$ and $Q(v,v)=0$ for any $v\in U'$.
The form $Q$ induces a nondegenerate quadratic form on $W^{\perp}/W$, which
defines a smooth quadric $Q^{n-2k-2}$ in $\p(W^{\perp}/W)^*\cong \p^{n-2k-1}$.
The condition then becomes $U'\subset W^{\perp}$ and $[U'/W]\in Q^{n-2k-2}$, proving the claim.
Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=3k+2$
(respectively $3k+1\leq n\leq 3k+3$).
In these cases there are lines on fibers of $p_1$ and $p_2$ contained in $H_x\subset \p^{k-1}\times\p^{n-k-1}$, and thus
the composite map $\eff_1(H_x) \to\eff_{2}(X)\DOTSB\lhook\joinrel\rightarrow \eff_{2}\big(G(k,n)\big)$ is surjective.
Thus $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective, and Theorem~\ref{thm1}(1) applies.
We conclude that
$\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=3k+2$
(respectively $3k+1\leq n\leq 3k+3$).
The exceptional pair (d) in Theorem~\ref{thm1}(2) occurs in this case.
\end{say}
\begin{say}[Symplectic Grassmannians] \label{SG}
We fix $\omega$ a non-degenerate antisymmetric bilinear form on the $n$-dimensional vector
space $V$, $n$ even, and $k$ an integer satisfying $2\leq k\leq\frac{n}{2}$.
Let $X=SG(k,n)$ be the subvariety of the
Grassmannian $G(k,n)$ parametrizing linear subspaces that are isotropic with respect to $\omega$.
Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k+1)}{2}$ and $\rho(X)=1$.
Notice that $X$ is the zero locus in $G(k,n)$ of a global section of the vector bundle
$\wedge^2({\mathcal {S}}^*)$, where ${\mathcal {S}}$ is the universal subbundle on $G(k,n)$.
Using this description and the formula for $\text {ch}_2\big(G(k,n)\big)$ described in \ref{G},
standard Chern class computations show that
$$
\text {ch}_2(X)=\frac{n+3-3k}{2}\sigma_2-\frac{n+1-3k}{2}\sigma_{1,1},
$$
where we denote by the same symbols $\sigma_2$ and $\sigma_{1,1}$ the restriction to $X$
of the corresponding Schubert cycles on $G(k,n)$.
Given $x\in X$, let $H_x\subset \p^{k-1}\times\p^{n-k-1}$ be the variety of lines through $x$ on $X$ under the Pl\"ucker embedding.
By \cite[1.4.7]{hwang},
$(H_x,L_x)\cong \big( \p_{\p^{k-1}}(\o(2)\oplus\o(1)^{n-2k}), \o_{\p}(1) \big)$.
When $n=2k$ this becomes $(H_x,L_x)\cong \big(\p^{k-1},\o(2)\big)$.
When $n>2k$, $H_x$ can also be described as
the blow-up of $\p^{n-k-1}$ along a linear subspace $\p^{n-2k-1}$, and $L_x$ as $2H-E$, where
$H$ is the hyperplane class in $\p^{n-k-1}$ and $E$ is the exceptional divisor.
Thus $-2K_{H_x}-dL_x$ is ample (respectively nef) if and only if $n=2k$ or $n=3k-2$
(respectively $n=2k$ or $3k-3\leq n\leq 3k-1$).
In these cases $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective.
Indeed, if $n=2k$, then $b_4(X)=1$.
If $3k-3\leq n\leq 3k-1$, then there are lines on fibers of $p_1$ and $p_2$ contained in $H_x\subset \p^{k-1}\times\p^{n-k-1}$, and thus
the composite map $\eff_1(H_x) \to\eff_{2}(X)\DOTSB\lhook\joinrel\rightarrow \eff_{2}\big(G(k,n)\big)$ is surjective.
So Theorem~\ref{thm1}(1) applies, and we conclude that
$\text {ch}_2(X)>0$ (respectively $\geq0$) if and only if $n=2k$ or $n=3k-2$
(respectively $n=2k$ or $3k-3\leq n\leq 3k-1$).
When $m$ is even, the exceptional pair (c) in Theorem~\ref{thm1}(2) occurs for
$X=SG(m+2,3m+4)$.
The exceptional pair $(H_x,L_x)\cong (\p^d, \o(2))$ in Theorem~\ref{thm3}(2)
occurs for $X=SG(d+1,2d+2)$.
\end{say}
\begin{say}[A two-orbit variety]\label{degenerate SG}
We fix $\omega$ an antisymmetric bilinear form of maximum rank $n-1$ on the $n$-dimensional vector
space $V$, $n$ odd, and $k$ an integer satisfying $2\leq k<\frac{n}{2}$.
Let $X$ be the subvariety of the
Grassmannian $G(k,n)$ parametrizing linear subspaces
that are isotropic with respect to $\omega$.
Then $X$ is a Fano manifold of dimension $\frac{k(2n-3k+1)}{2}$ and $\rho(X)=1$.
Note that $X$ is not homogeneous.
The same argument presented in \ref{SG} above, taking $x\in X$ a general point,
shows that $\text {ch}_2(X)>0$ (respectively $\text {ch}_2(X)\geq0$)
if and only if $n=3k-2$ (respectively $3k-3\leq n\leq 3k-1$).
When $m$ is odd, the exceptional pair (c) in Theorem~\ref{thm1}(2) occurs for such $X$, with
$k=m+2$ and $n=3m+4$.
\end{say}
\begin{say}[The $5$-dimensional homogeneous space $G_2/P$] \label{G2/P}
Let $X$ be the $5$-dimensional homogeneous space $G_2/P$.
Then $X$ is a Fano manifold with $\rho(X)=1$, and
$(H_x,L_x)\cong\big(\p^1,\o(3)\big)$, as explained in \cite[1.4.6]{hwang}.
Since $b_4(X)=1$, the map $T_1: \eff_1(H_x) \to\eff_{2}(X)$ is surjective,
and thus Theorem~\ref{thm1}(1) applies. We conclude that
$\text {ch}_2(X)>0$.
The exceptional pair $(H_x,L_x)\cong (\p^1, \o(3))$ in Theorem~\ref{thm3}(2)
occurs in this case.
\end{say}
\begin{say}[Non-Examples]\label{non-examples}
By \cite[Theorem 1.2]{dJ-S:2fanos_1}, the following smooth projective varieties do not satisfy $\text {ch}_2(X)>0$.
\begin{itemize}
\item Products $X\times Y$, with $\dim X, \dim Y>0$.
\item Projective space bundles $\p(E)$ over a base $X$ with $\dim X>0$ and $\rank E \geq2$.
\item Blowups of $\p^n$ along smooth centers of codimension $2$.
\end{itemize}
By Theorem \ref{thm2}(1),
if $X$ is a Fano manifold and $H_x$ is not Fano, then $\text {ch}_2(X)$ is not nef.
This is the case, for instance, when $X$ is the moduli space of rank $2$ vector bundles with fixed determinant
of odd degree on a smooth curve $C$ of genus $\geq2$.
In this case $H_x$ is the family of Hecke curves through $x=[E]\in X$, which are
conics with respect to the ample generator of $\pic(X)$.
As explained in \cite[1.4.8]{hwang}, $H_x\cong\p_C(E)$, which is not Fano.
\end{say}
\bibliographystyle{amsalpha}
\object{PSR B0656+14} is one of three nearby pulsars in the middle-age range in
which pulsed high-energy emission has been detected. These are
commonly known as ``The Three Musketeers'' \citep{bt97}, the other two
being Geminga and PSR B1055--52. PSR B0656+14 was included in a recent
extensive survey of subpulse modulation in pulsars in the northern sky
at the Westerbork Synthesis Radio Telescope (WSRT) by
\citet{wes06}. In the single pulses analysed for this purpose, the
unusual nature of this pulsar's emission was very evident, especially
the brief, yet exceptionally powerful bursts of radio emission.
These extreme bursts of radio emission from PSR B0656+14 are similar to
those detected in the recently discovered population of bursting
neutron stars. These Rotating RAdio Transients (RRATs;
\citealt{mll+06}) typically emit detectable radio emission for less
than one second per day, causing standard periodicity searches to fail
in detecting the rotation period. From the greatest common divisor of
the intervals between bursts, a period has been found for ten out of
eleven sources. The periods (between 0.4 and 7 s) suggest these
sources may be related to the radio-quiet X-ray populations of neutron
stars, such as magnetars \citep{wt06} and isolated neutron stars
\citep{hab04}. However, \citet{ptp06} have shown that the estimated
formation rate of magnetars is too low to account for the RRAT
population. Furthermore, the spectrum of
the only RRAT for which an X-ray counterpart has so far been detected
\citep{rbg+06} seems to be too cool, too thermal and too dim for a
magnetar, but is consistent with a cooling middle-aged neutron star
like PSR B0656+14
\citep{szk+06}. Also the pulse period and the slowdown-rate of PSR
B0656+14, as well as the derived surface magnetic field strength and
characteristic age, are within the range of measured values for RRATs.
\section{Observations}
\label{SctObs}
The results in this paper are based on an archival and a new
observation made using the 305-meter Arecibo telescope on 20 July 2003
and 30 April 2005 respectively. Both observations had a
centre-frequency of 327 MHz and a bandwidth of 25 MHz. Almost 25,000
pulses with a sampling time of 0.5125 ms were recorded in 2003 and
almost 17,000 pulses with a sampling time of 0.650 ms in 2005, using
the Wideband Arecibo Pulsar Processor
(WAPP\footnote{{\tt http://www.naic.edu/{\tiny{$\!\sim$}}wapp}}). The Stokes parameters have
been corrected off-line for dispersion, Faraday rotation and various
instrumental polarization effects.
The data were in some instances affected by Radio Frequency
Interference, but this could relatively easily be removed by excluding
the pulses with the highest root-mean-square (RMS) of the off-pulse noise
(about 1\% of the data in both observations) from further
analysis. The results derived from both observations (of which the one
from 2005 is relatively clean) are very similar, making us confident
in the results.
The observations are not flux calibrated, but are sufficiently long to
get a pulse profile with high precision. From its shape it follows
that the peak-flux of the profile is about 17 times the flux-density
averaged over the entire pulse period. The average flux
at our observing frequency is estimated to be 7.2 mJy, based on the
measurement of the spectral index ($-0.5$) and flux-density by
\citet{lylg95} at 408 MHz. Therefore the peak-flux of the profile is
approximately 0.12 Jy. The scintillation bandwidth of PSR B0656+14 is
much smaller than the observing bandwidth, so no intensity
fluctuations appear in the data as a function of time due to
interstellar scintillation.
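As a sanity check, the flux bookkeeping above can be sketched numerically. The power-law spectral scaling $S_\nu \propto \nu^{\alpha}$ with $\alpha = -0.5$ and the peak-to-mean ratio of 17 come from the text; the 408-MHz value is only implied by these numbers, not quoted.

```python
# Hedged sketch of the flux estimate for PSR B0656+14, assuming a
# power-law spectrum S_nu ∝ nu^alpha with alpha = -0.5.

def scale_flux(s_ref_mjy, nu_ref_mhz, nu_mhz, alpha=-0.5):
    """Scale a flux density from nu_ref to nu assuming S ∝ nu^alpha."""
    return s_ref_mjy * (nu_mhz / nu_ref_mhz) ** alpha

# The text quotes 7.2 mJy at 327 MHz; the implied 408-MHz value is then:
s_408 = scale_flux(7.2, 327.0, 408.0)        # ≈ 6.4 mJy (back-computed)

# Peak-flux of the profile: 17 times the mean flux-density.
s_peak_jy = 17.0 * 7.2 / 1000.0              # ≈ 0.12 Jy
```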
\section{The radio bursts of PSR B0656+14}
\begin{figure}
\epsscale{.99}
\plotone{f1.eps}
\caption{\label{Enhists}The pulse-energy distribution of
the 2005 observation of PSR B0656+14 (solid line) and the off-pulse
distribution (dashed line). The brightest pulse is about 116 times
stronger than the average, which is outside the plotted energy-range.}
\end{figure}
To characterize the bright pulses of PSR B0656+14, the pulse-energy
distribution is calculated (see Fig. \ref{Enhists}). In this plot the
energies are normalized to the average pulse-energy {\Eav}. The
brightest measured pulse is 116 {\Eav}, which is exceptional for
regular radio pulsars. Based on the energy of these pulses alone, PSR
B0656+14 would fit into the class of pulsars that emit so-called giant
pulses (e.g. \citealt{cai04}). About 0.3\% of the pulses of PSR
B0656+14 are brighter than 10 {\Eav}, which is the working definition
of giant pulses. Nevertheless, there are important differences between
giant pulses and the bright bursts of PSR B0656+14. The bursts of PSR
B0656+14 have timescales that are much longer than the nano-second
timescale observed for giant pulses (e.g. \citealt{spb+04,hkwe03}), do
not show a power-law energy-distribution, are not confined to a narrow
pulse window and are not associated with an X-ray component. This
suggests differing emission mechanisms for the classical giant pulses
and the bursts of PSR B0656+14. Also the possible correlation between
emission of giant pulses and high magnetic field strengths at the
light cylinder (around $10^5$ Gauss;
\citealt{cst+96}) clearly fails for PSR B0656+14 (766 Gauss).
However, giant pulses have been claimed in other (slow) pulsars that
also easily fail this test (e.g. \citealt{ke04,kkg+03}) and for
millisecond pulsars a high magnetic field strength at the light
cylinder seems to be a poor indicator of the rate of emission of giant
pulses \citep{kbmo+06}.
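The light-cylinder field quoted for PSR B0656+14 follows from the standard dipole estimate $B_{\rm lc} = B_{\rm s}(R/R_{\rm lc})^3$ with $R_{\rm lc} = cP/2\pi$. A sketch, assuming catalogue spin parameters ($P \approx 0.385$ s, $\dot{P} \approx 5.5\times10^{-14}$) and a $10^6$ cm stellar radius, none of which are given in the text:

```python
import math

def b_surface(p_s, pdot):
    """Conventional dipole surface-field estimate in gauss."""
    return 3.2e19 * math.sqrt(p_s * pdot)

def b_light_cylinder(p_s, pdot, r_star_cm=1.0e6):
    """Dipole field extrapolated to the light cylinder (B ∝ r^-3)."""
    c = 2.998e10                      # speed of light in cm/s
    r_lc = c * p_s / (2.0 * math.pi)  # light-cylinder radius
    return b_surface(p_s, pdot) * (r_star_cm / r_lc) ** 3

# Assumed catalogue values for PSR B0656+14 (not quoted in the text);
# the result comes out at a few hundred gauss, consistent with the
# ~766 G quoted above.
b_lc = b_light_cylinder(0.3849, 5.5e-14)
```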
\begin{figure}
\epsscale{.80}
\plotone{f2.eps}
\caption{\label{lrced}
The dashed line is the average profile of our 2005 observation. The
solid line shows the ratio between the peak-flux of the brightest
burst at each longitude and the average peak-flux of the profile.
}
\end{figure}
The bursts of PSR B0656+14 are even more extreme when we consider
their peak-fluxes (see Fig. \ref{lrced}). The highest measured
peak-flux of a burst is 420 times the average peak-flux of the pulsed
emission, which is an order of magnitude brighter than the giant
micropulses observed for the Vela pulsar \citep{jvkb01} and PSR
B1706--44 \citep{jr02}. Giant micropulses are not necessarily extreme
in the sense of having a large integrated energy (as required for
giant pulses), but their peak-flux densities are very large. Not only
are the bursts of PSR B0656+14 much brighter (both in peak-flux and
integrated energy) than those found for giant micropulses, they are
also not confined in pulse longitude and they do not show a power-law
energy-distribution as the giant pulses and micropulses do.
\begin{figure*}
\begin{center}
\resizebox{!}{0.29\hsize}{\includegraphics[angle=0,trim=0 -15 0 0,clip=true]{f3a.eps}}
\resizebox{!}{0.29\hsize}{\includegraphics[angle=0,trim=0 -30 0 0,clip=true]{f3b.ps}}
\resizebox{!}{0.3\hsize}{\includegraphics[angle=0,trim=0 0 0 0,clip=true]{f3c.eps}} \vspace*{-5mm}
\end{center}
\caption{\label{megapulse}The bright radio burst detected at the
leading edge of the pulse profile in the 2003 observation. {\bf Left:}
The burst (solid line) compared with the average pulse profile (dashed
line). {\bf Middle:} The same burst, but now with frequency
resolution. The data in this plot is not de-dispersed and its
dispersion track matches exactly what is expected for the known
dispersion measure (DM) of this pulsar (dashed line). {\bf Right: }
The longitude-resolved energy-distribution at the longitude of the
peak of the strong pulse (solid line) and the off-pulse distribution
(dashed line). The peak-fluxes ($F_i$) are normalized to the average
peak-flux of the profile ($\left<F_p\right>$).}
\end{figure*}
At the leading edge of the profile we detected a burst with an
integrated pulse-energy of 12.5 {\Eav}. What makes this pulse
so special is that it has a peak-flux that is 2000 times that of the
average emission at that pulse longitude (left panel of
Fig. \ref{megapulse}). Its dispersion track exactly matches what is
expected for this pulsar (middle panel of Fig. \ref{megapulse}),
proving that this radio burst is produced by the pulsar. Notice that
the effect of interstellar scintillation is also clearly visible
(different frequency channels have different intensities) and that the
dispersion track is the same for the two pulses in the centre of the
profile. This burst demonstrates that the emission mechanism operating
in this pulsar is capable of producing intense sporadic bursts of
radio emission even at early phases of the profile. There are only two
bursts with a peak-flux above the noise level detected at the
longitude of the peak of this pulse out of the total of almost 25,000
pulses (see right panel of Fig. \ref{megapulse}). This implies either
that these two bursts belong to an extremely long tail of the
distribution, or that there is no emission at this longitude other
than such sporadic bursts.
\section{The RRAT connection}
It is unclear how the extreme pulses of PSR B0656+14 fit into the zoo
of apparently different emission types of radio pulsars. They are
brighter than the giant micropulses, and not constrained to a
particular pulse longitude and, despite being energetic enough, they
are too broad to be characterized as classical giant pulses. However,
the observational facts are that PSR B0656+14 occasionally emits
extremely bright bursts of radio emission which are short in
duration. One cannot help but see the similarities with the RRATs.
\begin{table}
\caption{\label{LuminositiesTable}Comparison of the peak-flux of
the brightest bursts of PSR B0656+14 and those of the RRATs. Here
$S_\mathrm{peak}$ is the peak-flux of the brightest detected burst for
each source, $D$ the distance and $L_\mathrm{peak}=S_\mathrm{peak}D^2$
the peak-luminosity of the brightest detected burst. }
\begin{center}
\begin{tabular}{lrcc}
\hline
\hline
Name & $S_\mathrm{peak}$ & $D$ & $L_\mathrm{peak}$ \\
& mJy & kpc & Jy kpc$^2$ \\
\hline
B0656+14 & 59000 & 0.288 & 4.1\\
J0848--43 & 100 & 5.5 & 3.0\\
J1317--5759 & 1100 & 3.2 & 11\\
J1443--60 & 280 & 5.5 & 8.5\\
J1754--30 & 160 & 2.2 & 0.77\\
J1819--1458 & 3600 & 3.6 & 47\\
J1826--14 & 600 & 3.3 & 6.5\\
J1839--01 & 100 & 6.5 & 4.2\\
J1846--02 & 250 & 5.2 & 6.8\\
J1848--12 & 450 & 2.4 & 2.6\\
J1911+00 & 250 & 3.3 & 2.7\\
J1913+1333 & 650 & 5.7 & 21\\
\hline
\end{tabular}
\end{center}
\end{table}
One important question is whether the luminosities of the bursts of
the relatively nearby PSR B0656+14 (288 pc; \citealt{btg+03}) and
those of the RRATs are comparable (there is no indication that the
spatial distributions of RRATs and PSRs are different;
\citealt{mll+06}). The brightest burst we found in the centre of the
profile has a peak-flux that is 420 times the average peak-flux. With
an average peak-flux of 0.12 Jy (see Sect. \ref{SctObs}), this
corresponds to a peak flux of 50 Jy. If one compares luminosities
(Table \ref{LuminositiesTable}), one can see that the brightest burst
of PSR B0656+14 is as luminous as those of four of the eleven RRATs,
and therefore very typical for these sources.
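The luminosity entries in Table~\ref{LuminositiesTable} follow from a simple inverse-square scaling. A sketch using the numbers quoted in the text:

```python
def peak_luminosity(s_peak_jy, d_kpc):
    """Peak pseudo-luminosity L = S_peak * D^2, in Jy kpc^2."""
    return s_peak_jy * d_kpc ** 2

# Brightest central burst of PSR B0656+14: 420x the 0.12 Jy average peak-flux.
s_peak = 420 * 0.12                          # ≈ 50 Jy
l_b0656 = peak_luminosity(s_peak, 0.288)     # ≈ 4.2 Jy kpc^2

# For comparison, RRAT J1819-1458 from the table (3600 mJy at 3.6 kpc):
l_rrat = peak_luminosity(3.6, 3.6)           # ≈ 47 Jy kpc^2
```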
It is interesting to compare not only the luminosities of the bursts,
but also their peak-flux distributions. Although the slope of the top
end of the distribution of PSR B0656+14 is in the range of the giant
pulses (between $-2$ and $-3$), it is better described by a lognormal
than by a power-law distribution. This again suggests that the bright
bursts of PSR B0656+14 are different from the classical giant
pulses. The top end of the RRAT distribution with the highest number
of detections seems to be harder (with a slope $-1$), but for the
other RRATs this is as yet unclear. For instance, the tail of the
distribution of PSR B0656+14 seems to be consistent with the
distribution of the RRAT with the second highest number of detections.
Normal periodicity searches failed to detect the RRATs, which places
an upper limit of about 1:200 on the ratio of the peak-flux density of
any underlying weak pulses to that of the detected bursts
\citep{mll+06}. Because the
brightest burst of PSR B0656+14 exceeds the underlying peak-flux by a
much greater factor, PSR B0656+14 could have been identified as an
RRAT, were it not so nearby. Were it located twelve times farther away
(thus farther than five of the RRATs), we estimate that only one burst
per hour would be detectable (the RRATs have burst rates ranging from
one burst every 4 minutes to one every 3 hours). The typical burst
duration (about 5 ms) of PSR B0656+14 also matches that of the RRATs
(between 2 and 30 ms).
\begin{figure*}
\epsscale{0.55}
\plotone{f4a.ps}
\epsscale{0.27}
\plotone{f4b.ps}
\caption{\label{distribution}{\bf Left:} The power-spectrum of
the first 35 minutes of the 2005 observation of PSR B0656+14 with
artificially increased noise corresponding to a twelve times larger
distance of this pulsar. The dashed line indicates the rotation
frequency of this pulsar. {\bf Right:} The brightest recorded pulse in
the same piece of data, again for a twelve times larger distance.}
\end{figure*}
Were PSR B0656+14 twelve times more distant, the RMS of the noise
would increase by a factor 144 relative to the strength of the pulsar
signal. When we artificially add gaussian-distributed noise at this
level to the observation, we find no sign of the pulsar's (2.6-Hz)
rotation frequency in 35-minute segments of the data\footnote{For
telescopes with lower sensitivity than Arecibo (e.g. Parkes), PSR
B0656+14 would not reveal its periodicity in a similarly long
observation even if it were considerably closer.} (left panel of
Fig. \ref{distribution}), yet the brightest pulse (right panel of
Fig. \ref{distribution}) is easily detected ($18\sigma$). Only in the
spectrum of the whole 1.8-hour observation --- which is significantly
longer than the continuous observations of the Parkes Multibeam Survey
--- is the periodicity marginally detectable. This means that a
distant PSR B0656+14 could only be found as a RRAT in a survey using
Arecibo, unless the pointings were unusually long.
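The detectability argument reduces to simple inverse-square arithmetic. In the sketch below the near-distance noise level is a hypothetical placeholder; only the relative factors (a factor 144 in flux for twelve times the distance, and 420 for the brightest burst) come from the text.

```python
def snr_at_distance(flux, sigma_near, distance_factor):
    """Single-pulse S/N after moving the source distance_factor farther
    away: the flux drops as 1/d^2 while the noise level stays fixed."""
    return (flux / distance_factor ** 2) / sigma_near

# Hypothetical near-distance noise, in units of the mean pulse peak-flux:
sigma = 0.2

snr_mean_far = snr_at_distance(1.0, sigma, 12)     # mean pulse: buried in noise
snr_burst_far = snr_at_distance(420.0, sigma, 12)  # brightest burst: ~15 sigma,
                                                   # still an easy detection
```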
\section{Implications and discussion}
We have shown that PSR B0656+14 could have been identified as an RRAT,
had it been at the typical distance of the known RRATs. We have no way
of telling whether its capacity to produce intense bursts of emission
right across its profile is related to its age, period, inclination,
or even its immediate galactic environment, since this behavior has
been found in no other pulsar. The pulse-energy distribution is not a
power-law, but is better fitted by a lognormal distribution and such
distributions are thought to be common for pulsars
(e.g. \citealt{cjd04}).
In a study of 32 pulsars, \citet{rit76} found that PSR B0950+08 shows
the highest degree of pulse-to-pulse intensity
variation. Nevertheless, the brightest pulse found in an extensive
study of its field statistics by
\citet{cjd04} is approximately 5 {\Eav}. Vela does not show pulses
brighter than 10 {\Eav} and only 0.5\% of the pulses are brighter than
3 {\Eav} \citep{jvkb01}. For PSR B0656+14, 4\% of the pulses are
brighter than 3 {\Eav}. Therefore the emission of PSR B0656+14 appears
to be extremely erratic compared with both normal pulsars and pulsars
with giant micropulses.
One may wonder whether other known pulsars show a similar kind of sporadic
bright emission. To answer this question one should analyse the pulse
energy distribution of a sample of pulsars. A complication would be
that longer than typical observations are required to detect the
presence of a tail of strong pulses. In a large survey for subpulse
modulation by \citet{wes06}, this pulsar was the only pulsar that
showed clear evidence for this kind of sporadic emission.
Our identification of PSR B0656+14 with RRATs implies that at least
some RRATs could be sources which emit pulses continuously, but over
an extremely wide range of energies. This is in contrast to a picture
(predicted by \citealt{zgd06}) of infrequent powerful pulses with
otherwise no emission. Therefore, if it indeed turns out that PSR
B0656+14 (despite its relatively short period) is a true prototype for
an RRAT, we can expect future studies to demonstrate that
RRATs emit much weaker pulses among their occasional bright bursts. We
would also predict that their integrated profiles will be found to be
far broader than the widths of the individual bursts, and will need
many thousands of bursts to stabilize.
Hopefully, radio observations of RRATs will soon be able to test these
predictions. These, together with the detection of more RRATs and
potentially their high-energy counterparts, will shed light on their
true nature. The transient nature of these sources makes them
difficult to detect. However it is likely that the Galactic population
exceeds that of the ``normal'' radio pulsars \citep{mll+06}. Thus
surveys with long pointings, such as those planned with LOFAR, or many
observations of the same region of sky are required. Surveys at low
frequencies will also be more sensitive to nearer RRATs as the greater
degree of dispersion will allow them to be more easily distinguished
from radio frequency interference.
\acknowledgments
GAEW and JMR thank the Netherlands Foundation for Scientific Research
(NWO) and the Anton Pannekoek Institute, Amsterdam, for their kind
hospitality, and GAEW the University of Sussex for a Visiting
Fellowship. Portions of this work (for JMR) were carried out with
support from US National Science Foundation Grants AST 99-87654 and
00-98685. Arecibo Observatory is operated by Cornell University under
contract to the US National Science Foundation. The Westerbork
Synthesis Radio Telescope is operated by the ASTRON (Netherlands
Foundation for Research in Astronomy) with support from NWO.
\section{Introduction}
Direct simulations of star clusters have a long history.
As algorithms and hardware have improved, larger numbers of stars could be simulated, allowing a more realistic representation of the dynamical evolution of globular star clusters.
\nbody \citep{Aarseth2003} is a state-of-the-art direct $N$-body simulation code specifically designed for star clusters.
It uses several algorithms to enhance the computing speed and accuracy, especially for strong interactions that arise from a large fraction of binaries and relatively short relaxation timescales ($\le 100$ Myr for typical open clusters and $\le 1$ Gyr for typical globular clusters) in collisional and dense stellar systems.
Here, the terms ``collisional'' and ``dense'' are not well defined in the literature.
The classical two-body relaxation time, as defined e.g. by \cite{Chandrasekhar1942,Spitzer1987}, describes how important distant gravitational two-body encounters are for the orbital motion of stars.
If the relaxation time is very long, a system is denoted as ``collisionless'' (for example, galactic disks or bulges);
the motion of stars is entirely determined by the smooth mean gravitational field of the system.
If the relaxation time is short (e.g., shorter than the lifetime of the system) we denote the cluster as ``collisional'' (e.g., globular and open star clusters, nuclear star clusters).
If the stellar density is high enough, close two-body gravitational encounters and stellar collisions may occur.
This aspect is crucial when studying ``dense'' star clusters.
In dense and collisional star clusters a correct integration of stellar motions requires pairwise gravitational interactions to be included between many if not all stars in the cluster.
This is the situation for which codes such as \nbody are designed.
Direct \nb simulation of star clusters can be very time consuming.
In a system with $N$ particles, the full force calculation cost of one particle scales with $O(N)$.
With individual time steps for each particle, the cost per crossing time ($t_{\rm cr}$) depends on the number of steps per particle ($N_{\rm s}$) which varies with different time step criteria, integration methods and star cluster properties.
\cite{Makino1988} and \cite{Makino1992} found that for the Hermite scheme with a time step criterion based on relative force change \citep{Aarseth1985}, $N_{\rm s}$ is roughly proportional to $N^{1/3}$ for systems with homogeneous density.
Thus, when using individual time steps the total computational cost per crossing time of $N$ particles scales with $O(N^{7/3})$.
For systems with a power-law density distribution $\rho \propto r^{-\alpha}$, $N_{\rm s}$ depends on the power index $\alpha$.
Then the cost per crossing time scales with $O(N^{7/3})$ for $\alpha < 24/11$ and $O(N^{(6-\alpha)/(6-2\alpha)})$ for $\alpha \ge 24/11$ \citep{Makino1988}.
Since the half-mass relaxation timescale $t_{\rm rh}$ is proportional to $t_{\rm cr} N/\ln{N}$ \citep{Spitzer1987,Sugimoto1990}, the computational cost per $t_{\rm rh}$ is $O(N^{10/3}/\ln{N})$ for homogeneous systems and for power-law systems with $\alpha < 24/11$, and $O(N^{(12-3\alpha)/(6-2\alpha)}/\ln{N})$ for $\alpha \ge 24/11$.
Thus, an efficient parallelization of a direct \nb code is necessary for large particle numbers.
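The scalings above can be collected into a toy cost model. This is only an order-of-magnitude sketch of the relations quoted from \cite{Makino1988,Makino1992}, for the homogeneous case with arbitrary prefactors:

```python
import math

def cost_per_crossing(n):
    """O(N) force cost per particle, N particles, O(N^{1/3}) steps per
    particle per crossing time -> O(N^{7/3}) per crossing time."""
    return n ** (7.0 / 3.0)

def cost_per_relaxation(n):
    """t_rh ∝ t_cr * N / ln N, so the cost per t_rh is O(N^{10/3}/ln N)."""
    return cost_per_crossing(n) * n / math.log(n)

# Doubling N from 5e5 to 1e6 raises the cost per relaxation time by
# roughly a factor 2^{10/3} ≈ 10 (slightly less, due to the ln N term):
ratio = cost_per_relaxation(1.0e6) / cost_per_relaxation(5.0e5)
```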
\cite{Sugimoto1990} discussed the fundamental problem that direct numerical simulations of globular star clusters could not be completed for decades if extrapolating the standard evolution of computational hardware (Moore's law).
They called for the construction of a special-purpose computer GRAPE, which finally was successfully initiated and completed by their team \citep{Makino1993,Makino1998,Makino2003}.
In the following years, graphical processing units (GPU) widely replaced GRAPE \citep[e.g.,][]{Harfst2007,Hennebelle2007,PZ2007,Belleman2008,Schive2008} and much of the GRAPE software could be ported to GPU \citep{Gaburov2009}.
\cite{Spurzem1999,Spurzem2008} and \cite{Hemsendorf2003} discussed several different types of hardware for parallelization and extended \nbody to \nbodypp for general parallel supercomputers.
Later, \cite{Nitadori2012} developed a GPU-based parallel force calculation for NBODY6.
As a result, large $N$-body simulations ($N\sim 10^5$) became possible on a single desktop computer or workstation with GPU hardware.
They also implemented the parallel force calculation based on Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) for recent CPU architectures.
\cite{Spurzem2011,Berczik2013a} and \cite{Berczik2013b} discussed the performance of large \nb simulations with the GPU-accelerated codes $\phi$GPU and a provisional version of NBODY6++GPU.
With these parallelization methods, we can now study star clusters with a number of stars exceeding $10^5$.
\cite{Hurley2012} simulated $200,000$ stars including $5000$ primordial binaries with initial half-mass radius $4.7$~pc using NBODY4 on a GRAPE-6 based computer to investigate core collapse and core oscillation.
Later, \cite{Sippel2013} studied the multiple stellar-mass black holes in globular clusters by simulating $262,500$ stars including $12,500$ primordial binaries with initial half-mass radius $6.2$~pc using NBODY6-GPU.
The current largest direct \nb simulation modeling the globular cluster M4 used one computing node including 12 Intel Xeon X5650 cores (2.66 GHz per core) and 2 NVIDIA TESLA C2050 GPUs with 448 cores each (1.15 GHz per core) with \nbodygpu \citep{Heggie2014}.
This simulation contained $\sim$500,000 stars with $7\%$ binaries and a small half-mass radius of $0.58$ pc.
Nowadays, we can make an effort to reach one million stars by using parallel supercomputers with GPUs.
In this paper, we first introduce the parallel algorithm used by \nbodygpu and \nbodypp in Section~\ref{sec:feature}.
Then we describe the new version of \nbodyppgpu with a hybrid parallel method and also the new algorithms that are necessary for large number of particle parallelization in Section~\ref{sec:new}.
Performance tests are carried out in Section~\ref{sec:performance}.
In Section~\ref{sec:app} we show an application to a globular cluster with one million stars.
In Section~\ref{sec:discussion}, we discuss the parallelization limit and future development of NBODY6++GPU.
Finally, we present our conclusions in Section~\ref{sec:conclusion}.
\section{The features of NBODY6/6++}
\label{sec:feature}
\nbody uses the fourth-order Hermite integration method.
\cite{Makino1991} presented a careful analysis of the performance and energy error of the Hermite integrator. He showed that it exhibits a similar asymptotic error behaviour to the standard Aarseth scheme (a fourth-order method; see \citealp{Aarseth1985}) but has some advantages in the time step choice and data structure.
The hierarchical block time steps method is used together with the Hermite integrator \citep{McMillan1986,Makino1991} in NBODY6,
which avoids the overheads of particle position and velocity prediction in an individual time step method.
In this method, particle time steps are adjusted to quantized values, usually an integer power of $0.5$.
Then at each time step, active particles (the particles that satisfy the time step criterion) are integrated together.
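A minimal sketch of this bookkeeping (a schematic illustration, not the actual NBODY6 implementation): each natural time step is rounded down to a power of $0.5$, and all particles whose next update time has arrived are integrated together.

```python
import math

def quantize_step(dt_natural, dt_max=1.0):
    """Largest block step 0.5^k (k >= 0) not exceeding the natural step."""
    k = max(0, math.ceil(-math.log2(dt_natural / dt_max)))
    return dt_max * 0.5 ** k

def active_particles(next_times, t_block):
    """Indices of particles due for integration at the current block time."""
    return [i for i, t in enumerate(next_times) if t <= t_block]

dt = quantize_step(0.013)   # -> 0.5^7 = 0.0078125
```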
To speed up the force calculation, \nbody uses the Ahmad-Cohen (AC) neighbor scheme \citep{Ahmad1973}.
The basic idea is to employ a neighbor list for each particle.
The integration is separated into two parts: regular force integration for large time steps (regular steps) and irregular force integration for small time steps (irregular steps).
The regular force is the summation of the forces from particles outside the neighbor radius and the irregular force accumulates only the neighbor forces.
During the irregular step, the regular force and its first order derivative calculated at the last regular step are used for position and velocity prediction.
The AC scheme gains efficiency with sequential computing (without parallelization).
The speed gained by the AC scheme is roughly proportional to $N^{1/4}$ \citep{Makino1988,Makino1992}.
However, in parallel computing, this gain is limited by the complexity of the implementation of this algorithm (see Section~\ref{sec:performance}).
Also, the benefit of reducing overheads of particle prediction in the block time step method is strongly limited in the neighbor scheme.
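The regular/irregular split can be illustrated with a schematic sketch (illustrative only, not NBODY6's actual force routines; units with $G=1$ and no softening are assumed):

```python
def split_forces(pos, mass, i, r_nb):
    """Split the gravitational force on particle i into an irregular
    (neighbor) part and a regular (distant) part, Ahmad-Cohen style."""
    f_irr, f_reg, neighbors = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], []
    xi = pos[i]
    for j, xj in enumerate(pos):
        if j == i:
            continue
        d = [xj[k] - xi[k] for k in range(3)]
        r2 = sum(c * c for c in d)
        f = [mass[j] * c / r2 ** 1.5 for c in d]   # G = 1, no softening
        if r2 < r_nb * r_nb:                       # inside neighbor sphere
            neighbors.append(j)
            for k in range(3):
                f_irr[k] += f[k]
        else:
            for k in range(3):
                f_reg[k] += f[k]
    return f_reg, f_irr, neighbors

# During an irregular step only f_irr would be recomputed; f_reg is
# predicted from the last regular step.
```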
\cite{PZ2014} discussed the integration accuracy requirement for self-gravitating systems simulated with direct \nb codes.
They found that for three-body systems the integration should conserve the total energy to better than about one part in ten.
Although this accuracy requirement is uncertain when the simulation is extended to large particle number systems, this work indicates the importance of careful integration treatment for direct \nb systems.
One important feature of \nbody is that it uses the algorithms of \cite{Kustaanheimo1965} (hereafter KS) and chain regularization \citep{Mikkola1993} to deal with an accurate solution of close encounters, binaries and multiple systems, which play a significant role in star cluster dynamical evolution.
These strong interactions require very small time steps during integration and may produce large errors with standard integrators such as the Hermite scheme.
Using KS and chain regularization is also the most important feature of \nbody for star cluster simulations.
Here, we have briefly introduced the main algorithm used in NBODY6/6++.
In the next section we will focus on the parallelization of the codes.
\section{Parallelization of NBODY6++GPU}
\label{sec:new}
\subsection{MPI parallelization of NBODY6++}
\cite{Spurzem1999} and \cite{Hemsendorf2003} developed \nbodypp based on \nbody using MPI parallelization with the copy algorithm.
Both regular and irregular forces were parallelized.
Here different MPI processors calculate different subsets of the active particles.
Each MPI processor has the complete particle dataset.
Another available parallel algorithm is the ring algorithm which splits the full particle dataset for different MPI processors. It reduces the memory cost in each MPI process.
The benefit of the copy algorithm compared to the ring algorithm is that there is no requirement for extra communication of the neighbor particle data which is not in the same MPI process during the irregular force calculation.
The disadvantage is the particle number limit due to memory size on the computing node.
The MPI communication with the copy algorithm has constant time cost (independent of MPI processor number except for latency).
The scaling of the regular force with different MPI processors is very good.
Since the regular force dominates the calculation, this results in a good scaling of the total computing time.
\cite{Dorband2003} provided a detailed discussion of these communication algorithms.
\cite{Lippert1998} and \cite{Makino2002} suggested an efficient communication algorithm (hypersystolic) for extremely large processor numbers.
\subsection{Basic NBODY6-GPU implementation}
After the GPU computing (CUDA) became popular, the shared memory parallel \nbodygpu code was developed for workstation and desktop computers \citep{Nitadori2012}.
The OpenMP, GPU (CUDA) and AVX/SSE parallel methods are used to make the code as fast as possible.
However, \nbodygpu can only be used in a single node (no massively parallel MPI implementation) so the number of particles is limited for a reasonable simulation time.
\subsubsection{Regular force and potential (GPU)}
The GPU library of \cite{Nitadori2012} is used for calculating the regular force, which dominates the direct integration, and potential energy calculation.
The cost for regular force calculation per particle scales with $O(N)$ and for potential energy calculation scales with $O(N^2)$.
The performance of GPU force calculation is very good since the pure force calculation is easy to parallelize.
GPUs also help to accumulate the neighbor list very efficiently during the regular force calculation.
\subsubsection{Prediction and irregular force (AVX/SSE)}
Once the GPU accelerates the regular force so efficiently, the irregular force becomes the relatively expensive part.
However, this part is hard to parallelize on GPUs due to the complexity of the AC neighbor scheme.
Thus, \cite{Nitadori2012} developed the AVX/SSE and OpenMP parallel library for neighbor particle prediction and irregular force calculation.
AVX/SSE is an instruction set for CPUs developed in recent years, which supports vector calculation in the specific cache.
The advantage of AVX/SSE with OpenMP is that there is no extra memory copy compared to GPU.
For both the AVX/SSE and GPU libraries, the data need to be copied once into a different data structure to obtain computing efficiency;
\nbody has a very long development history, and completely rewriting its data structures to match the AVX/SSE and GPU libraries would be very time consuming.
For the GPU, however, there is an additional copy from host memory on the motherboard to device memory on the GPU.
Moreover, since the neighbor force calculation is hard to parallelize efficiently with the distributed-memory (MPI) approach (see discussion below),
this kind of shared-memory parallelization is more efficient for it.
\subsection{Code improvements in NBODY6++GPU}
In this subsection we describe our new implementations in NBODY6++GPU.
The GPU acceleration, especially of the long-range (regular) gravitational forces, is very efficient so this part does not dominate the computational time any more, as we show below.
Secondly, the AVX/SSE implementation accelerates prediction and neighbor (irregular) forces, which is the next most time consuming part of the code.
We have combined the GPU and AVX/SSE acceleration, which was done for a single node in NBODY6-GPU, with the MPI parallelized NBODY6++ designed for multi-node computing clusters for the new version NBODY6++GPU.
This work requires additional efforts to keep the code consistent (see below).
In addition, we have worked on remaining bottlenecks, such as time step scheduling and stellar evolution, which become important for million-body simulations because the traditionally dominant tasks are now accelerated very effectively by GPU and AVX/SSE.
\subsubsection{New algorithm of selecting active particles for block time steps}
\label{sec:initb}
For the block time step method, active particles should be selected at every time step.
It is very expensive to search all particles for the active ones, especially for the irregular force calculation.
In this case, for one block time step the cost of selecting active particles scales with $O(N)$ while the irregular force calculation cost scales with $O(N_{\rm i} \langle N_{\rm b} \rangle)$, where $N_{\rm i}$ is the number of active particles and $\langle N_{\rm b} \rangle$ the average number of neighbors.
If $N \gg N_{\rm i} \langle N_{\rm b} \rangle$, the former can be more expensive.
When the simulation reaches millions of particles, the block time step levels can be quite deep (the smallest time step can reach $0.5^{20}-0.5^{22}$) and the deep blocks with few particles and small time steps can easily satisfy this condition.
One may consider using a temporary list to store particles with small time steps, searching all particles only at selected time intervals and otherwise selecting active particles from the temporary list at each time step.
However, this method is still expensive when there are many particles with small time steps (such as wide binaries that are not KS regularized).
Indeed, we find that the time of selecting active particles can be much larger than the irregular integration time, even with this temporary list algorithm for one million particles including $5\%$ primordial binaries.
Another reason that forces us to deal with this issue is that the active particle selection is very difficult to parallelize efficiently (its cost is almost independent of the number of processors) and would be prohibitive for a million-body simulation.
Thus, we propose a better algorithm that uses a time step sorting list (hereafter sorting list algorithm; see Figures~\ref{fig:sortchart} and \ref{fig:sortlist}).
\cite{Zhong2014} implemented a similar algorithm for $\phi$-GRAPE+GPU and evaluated its performance.
The basic idea is that when we have the index list sorted by particle time step from smallest to largest, together with the boundary offsets $I_{\rm off}(i)$ between blocks of equal-step particles (the largest position in the list of a particle with step $0.5^i$),
we only need to find the correct offset at each block time step by using the algorithm shown in Figure~\ref{fig:sortchart} to select active particles (shown as black squares in Figure~\ref{fig:sortlist}).
After integration, we adjust the sorted list by sorting the active particles' new time steps.
The specific sorting method for this adjustment can be optimized to $O(N_{\rm i})$
if we ignore the stability of sorting (stability meaning that the order of particles with the same step is preserved) and assume that many active particles keep the same step as before or change it only slightly.
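The selection step can be illustrated with a minimal Python sketch (our own schematic toy, not the actual Fortran implementation; names and data layout are invented for clarity). Because block steps are powers of $1/2$ and commensurate, a particle with step $dt$ is due exactly when the current time is a multiple of $dt$, so the active set is always a prefix of the step-sorted list and only the correct boundary offset has to be located:

```python
def select_active(t_now, sorted_idx, offsets):
    """Return the indices of particles active at time t_now.

    sorted_idx : particle indices sorted by time step, smallest first
    offsets    : offsets[i] = number of particles with step <= 0.5**i
                 (the boundary behind the block of level-i particles)
    """
    # find the coarsest step level 0.5**i that divides t_now;
    # powers of 1/2 are exact binary floats, so the test is exact
    i = 0
    while (t_now / 0.5 ** i) % 1.0 != 0.0:
        i += 1
    return sorted_idx[:offsets[i]]

# toy system: particle index -> time step
steps = {0: 0.25, 1: 0.5, 2: 0.25, 3: 1.0}
sorted_idx = sorted(steps, key=steps.get)   # smallest steps first
offsets = [sum(s <= 0.5 ** i for s in steps.values()) for i in range(3)]

print(select_active(0.25, sorted_idx, offsets))  # only the 0.25-step block
print(select_active(1.0, sorted_idx, offsets))   # all particles
```

Locating the boundary in this way costs only $O(\mathrm{levels})$ per block step, while adjusting the sorted list after integration scales with the number of active particles.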
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,height=!]{sorting_flowchart.eps}
\caption{The flow chart for obtaining the correct offset in the sorting list algorithm.
$T_{\rm new}$ is the time after the next integration, $T_{\rm now}$ is the current time, and $dT_{\rm min}$ is the current smallest time step (the time step of the first particle in the sorting list).
$i_{\rm prev}$ is the previous offset. }
\label{fig:sortchart}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,height=!]{sorting_step.eps}
\caption{Diagrammatic sketch of sorting list algorithm for selecting active particles.
The time step of each particle block is $0.5^i$ separated by boundary indicators $I_{\rm off}(i)$ (vertical lines).
The integration advances vertically in the chart. Active particles are shown as black squares.}
\label{fig:sortlist}
\end{figure}
\subsubsection{ The initialization }
\label{sec:init}
The initialization of a simulation in \nbody is relatively expensive.
We improve it with MPI, GPU and OpenMP parallelization and a better algorithm.
The initial model for million-body simulations is very important and needs to be carefully tested.
This improvement is very useful for fast testing of the initial models with large particle numbers, especially for a large number of primordial binaries.
The initialization of \nbody can be divided into four parts:
\begin{enumerate}
\item reading or generating masses, positions, velocities and stellar evolution parameters of all stars;
\item scaling all parameters into \nb units (the \nb units\footnote{It has been suggested to name the \nb time unit the H\'{e}non time unit, to honour M. H\'{e}non (D.C. Heggie, private communication).} are defined in \citealp{Heggie1986});
\item initialization of forces, neighbor lists and time steps of all stars;
\item initialization of primordial KS binaries.
\end{enumerate}
In the second part of the initialization, the total potential energy of the system is needed, at a cost of $O(N^2)$.
\nbodygpu actually does this calculation twice for scaling purposes in the case of an external tidal field.
The GPU is used in \nbodygpu to speed up this part and it is very efficient.
Our new improvements are for the third and fourth parts.
In the traditional \nbodygpu version forces and neighbor lists are initialized separately without parallelization.
NBODY6++ parallelizes the scaling and initialization of the force parts, but only through MPI.
For million-body simulations this is very slow and requires hours to finish.
We improved it by using GPU based force and neighbor list calculations (the same as for the regular force calculation).
In the traditional \nbodygpu, the fourth part is very costly (several hours) when there are more than $5\%$ primordial KS binaries.
During initialization of KS binaries, the force and its three derivatives (Hermite scheme) need to be renewed for center-of-mass particles.
All neighbor lists that contain KS binary component indices also need to be replaced by the center-of-mass particle indices.
The cost is approximately $O(N \langle N_{\rm b} \rangle N_{\rm KS})$ where $N_{\rm KS}$ is the number of primordial KS binaries.
We find a much simpler way to initialize KS binaries (with a cost scaling as $O(N_{\rm KS})$) by simply switching the order of the third and fourth parts:
first initialize the KS binaries without recalculating forces, their derivatives and neighbor lists (only the KS transformation is needed), and then perform the third part using the new center-of-mass particle data generated in the previous step, instead of the individual binary components as in the old scheme.
In this case there is no need to update the forces and neighbor lists.
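Schematically, the new ordering looks as follows (a Python toy of our own, carrying only masses and positions for brevity; `init_forces_and_neighbors` stands in for the real $O(N^2)$ force and neighbor-list initialization):

```python
def ks_transform(binary):
    """Replace one binary by its center-of-mass particle, O(1) per pair."""
    (m1, x1), (m2, x2) = binary
    m = m1 + m2
    x = [(m1 * a + m2 * b) / m for a, b in zip(x1, x2)]
    return (m, x)

def initialize(singles, binaries, init_forces_and_neighbors):
    """New ordering: KS-transform all primordial binaries first (O(N_KS)),
    then run the force/neighbor-list initialization once over singles plus
    center-of-mass particles, so nothing has to be rebuilt afterwards."""
    com = [ks_transform(b) for b in binaries]
    return init_forces_and_neighbors(singles + com)

# usage with a stand-in for the real initialization
particles = initialize(
    singles=[(1.0, [0.0, 0.0, 0.0])],
    binaries=[((1.0, [1.0, 0.0, 0.0]), (3.0, [2.0, 0.0, 0.0]))],
    init_forces_and_neighbors=lambda plist: plist,
)
print(particles)  # the binary appears as one particle of mass 4 at x = 1.75
```

In the old ordering, each of the $N_{\rm KS}$ binaries would instead trigger an $O(N)$ force update and an $O(N \langle N_{\rm b} \rangle)$ neighbor-list scan after the fact.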
\subsubsection{Position and velocity prediction}
During the force calculation, the predicted positions and velocities are used to calculate the force and its first derivative for the Hermite integrator.
In principle, we can avoid prediction of the same particles with the AC neighbor scheme and block time steps.
However, in practice we need to search all neighbors of each active particle and the search itself is computationally expensive.
Thus, it does not save much time to avoid neighbor prediction overlap and it is much simpler to predict all neighbors and do the force calculation within one loop.
The disadvantage of this method is that, compared to predicting all particles as in a non-AC block time step scheme, it costs more when the average neighbor number $\langle N_{\rm b} \rangle$ multiplied by the active particle number $N_{\rm i}$ exceeds the total particle number $N$.
One solution is to try predicting all particles once instead of predicting each neighbor when $\langle N_{\rm b} \rangle N_{\rm i} > N$.
But the mixture of predicting only neighbors and predicting all particles increases the complexity of the code.
We therefore use only neighbor prediction in the code.
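The prediction step itself is the standard Hermite extrapolation from the last force evaluation; a minimal sketch of our own (not the library code):

```python
def predict(x, v, a, adot, dt):
    """Predict position and velocity a time dt after the last force
    evaluation (Hermite scheme, using force and its first derivative):
        x_p = x + v*dt + a*dt^2/2 + adot*dt^3/6
        v_p = v + a*dt + adot*dt^2/2
    x, v, a, adot are 3-vectors (lists); adot is the jerk."""
    x_p = [xi + dt * (vi + dt * (ai / 2.0 + dt * ji / 6.0))
           for xi, vi, ai, ji in zip(x, v, a, adot)]
    v_p = [vi + dt * (ai + dt * ji / 2.0)
           for vi, ai, ji in zip(v, a, adot)]
    return x_p, v_p
```

In the AC scheme, this extrapolation is applied to every neighbor of each active particle inside the force loop.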
However, there is a major complication for the parallel neighbor prediction in NBODY6++GPU, which does not exist in NBODY6-GPU.
Since we use AVX/SSE and GPU and the code mixes the \texttt{CUDA}, \texttt{C++} and \texttt{Fortran~77} programming languages, the AVX/SSE and GPU libraries keep individual copies of the particle datasets.
Thus, the predictions of particles have overlaps and are usually inconsistent for different copies distributed on MPI processors.
Due to the complexity of NBODY6/6++ (e.g., using predicted positions for regularization) this leads to problems of synchronization later on, such as differences of time steps for the same particle on different processors.
The safest but very costly way is to always predict all particles at every irregular integration step, which is the case in the older versions of NBODY6++.
To solve this problem, much effort has been made to ensure that every particle is predicted to the current time before it is used in stellar evolution, KS and hierarchical regularization, because these parts are not parallelized and should have the same computing results on every MPI processor.
\subsubsection{ Stellar evolution and neighbor force correction}
The neighbor scheme also leads to performance losses for the calculation of stellar evolution. When a star experiences mass loss, other stars feel a smaller force.
In the neighbor scheme, the regular force is predicted from the value calculated at the last regular time step,
thus if a particle outside the neighbor radius loses mass between the previous and next regular time steps, the regular force becomes inconsistent afterwards.
The regular force should therefore be corrected for all particles that have the mass-losing particle outside their neighbor radius.
To avoid large values of the third and fourth derivatives of the force, the irregular force also needs to be updated if the mass-losing particle is inside the neighbor radius.
When mass loss is frequent, the calculation performance will be reduced significantly.
We currently use OpenMP to speed up the force correction, but it cannot completely solve this issue since the force correction with cost of $O(N)$ per particle cannot be avoided.
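The correction per affected particle is simply the change in the Newtonian contribution of the mass-losing star; a minimal sketch (our own illustration, with $G = 1$ in \nb units):

```python
def mass_loss_correction(x_i, x_k, dm):
    """Correction to the force per unit mass on particle i when particle k
    (outside i's neighbor sphere, i.e. part of i's regular force) loses
    mass dm: its old contribution m_k * r_ik / |r_ik|^3 shrinks by
    dm * r_ik / |r_ik|^3."""
    r = [b - a for a, b in zip(x_i, x_k)]       # r_ik = x_k - x_i
    r3 = sum(c * c for c in r) ** 1.5
    return [-dm * c / r3 for c in r]
```

Because every particle with the mass-losing star outside its neighbor radius must be corrected, the total cost is $O(N)$ per mass-loss event, which is what limits the OpenMP speed-up mentioned above.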
\subsection{ Hybrid MPI parallelization}
Based on the above parallel methods, we develop a new version of \nbodyppgpu to include hybrid parallel procedures.
The parallel structure of \nbodyppgpu is shown in Figure~\ref{fig:structure}.
In computer clusters, each computing node uses one MPI process.
Each MPI process opens multiple threads via OpenMP for the irregular force calculation.
GPUs inside one node are controlled by OpenMP threads.
Each GPU handles a similarly sized share of the particle dataset for the regular force and potential energy calculations.
GPUs on different nodes are isolated and do not communicate,
while all GPUs within one node together access the complete particle dataset.
The best code configuration is to use multiple CPU cores (such as $8-16$ cores) and several GPUs (such as $1-4$ GPUs with a few thousand cores) per node, and choose node numbers based on the total number of particles.
\begin{table*}
\caption{Definitions of the abbreviations used in all figures}
\begin{tabular}{l|l}
\hline
Abbreviation & Definition \\\hline
Reg. & Regular integration (force) and neighbor list determination \\
Irr. & Irregular integration (force, prediction (AVX/SSE version) and correction)\\
Pred. & Neighbor (Non-AVX/SSE version) and all particles (for regular force) prediction \\
Init.B & Initialization of active particle list for block time step \\
Move & Particle data copy prepared for MPI communication \\
Comm.I. & MPI communication for irregular integration \\
Comm.R. & MPI communication for regular integration \\
Send.I. & Particle data copy for AVX/SSE irregular force calculation\\
Send.R. & Particle data copy for GPU regular force calculation \\
Adjust & Energy checking, adjustment of parameters and data results \\
KS & KS regularization calculation (binary and hierarchical systems) \\
Barr. & MPI communication barrier waiting time due to the imbalance and network traffic between different nodes \\
\hline
\end{tabular}
\label{tab:def}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth,height=!]{nbodystructure.eps}
\caption{\nbodyppgpu code structure. It shows one cycle of simulation. Based on the time steps, the integration can be divided into three hierarchical parts (see Table~\ref{tab:def}): KS calculation (KS), irregular integration (Irr.) and regular integration (Reg.).
The KS part has the smallest time steps.
Thus, between two nearest Irr. block time steps there are several KS steps.
Similarly, between two Reg. block time steps there are several Irr. time steps.
After several Reg. time steps there is one ``Adjust'' (see Table~\ref{tab:def}).
Inside one node, Reg. and Adjust are parallelized by multiple GPUs and Irr. is parallelized by AVX/SSE with OpenMP.
MPI parallelization is applied to all four parts across different nodes.
}
\label{fig:structure}
\end{figure*}
\section{Performance test}
\label{sec:performance}
\subsection{ Pure MPI and hybrid MPI}
Figure~\ref{fig:mpivshybrid} shows significant improvement of \nbodyppgpu by using hybrid MPI including GPU, as compared to the pure MPI case.
We see that the GPU gives about $33$ times faster regular force integration.
This is to be expected since GPUs are designed for massive parallelization, using many computing cores and a large memory bandwidth within one card.
Using AVX/SSE with OpenMP gives about $3$ times faster irregular integration including predictions.
OpenMP reduces the MPI communication cost by a factor of $5-10$.
The individual MPI communication operations are not directly sped up by OpenMP.
However, when MPI parallelization is combined with OpenMP, the irregular and regular force calculations inside one node are handled by multiple OpenMP threads instead of MPI processes; we can therefore enlarge the block particle number threshold for MPI parallelization by a factor of the OpenMP thread number and reduce the total MPI communication frequency.
This results in a shorter total MPI communication time.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth,height=!]{mpivshyd.eps}
\caption{Comparison of performance between pure MPI and hybrid MPI (GPU + AVX/SSE + OpenMP + MPI) on the ``Kepler'' cluster at ARI, Heidelberg University.
The test uses 256k particles with a Plummer model, IMF from Kroupa (2001) with mass range $0.08 - 100 M_\odot$.
The hybrid MPI test uses $4$ nodes and each node includes $32$ Intel Xeon E5-2650 cores ($2.00$ GHz per core) and $4$ NVIDIA K20m with $2496$ cores each ($706$ MHz per core).
The pure MPI test uses the same configuration of nodes and CPU cores.
The label ``Total'' means total time cost for $1$ \nb unit and ``Init.'' denotes the initialization time of the simulations. }
\label{fig:mpivshybrid}
\end{figure*}
\subsection{Scaling with different particle numbers and processors}
\label{sec:scale}
The scaling with different particle numbers and processors demonstrates the feasibility of using large computing resources for simulations.
We test hybrid parallel \nbodyppgpu scaling with different node numbers $N_{\rm node}$ (1, 2, 4, 8 and 16; up to 320 CPU cores and about 80k GPU cores) and different particle numbers (16k, 32k, 64k, 128k, 256k and 1024k) on the ``Hydra'' cluster of the Max-Planck Supercomputing Centre (RZG) Germany.
Each node is completely controlled (no other tasks on the node) and has two NVIDIA K20X with 2688 cores each (732 MHz per core) and 20 Intel Ivy Bridge cores (2.8 GHz per core).
The total computing time for one \nb time unit $T_{\rm tot}/T_{\rm NB}$ is shown in Figure~\ref{fig:scaling}.
The irregular and regular force integration computing time ($T_{\rm irr}$ and $T_{\rm reg}$) are shown in Figure~\ref{fig:regirr}.
All three times are averages over the first two \nb time units of each simulation.
We test two basic initial models.
One has no primordial binaries and the other has $5\%$ binaries.
Both use a Plummer sphere \citep{Plummer1911} and initial mass function (IMF) from \cite{Kroupa1993} with mass range $0.08-20 M_\odot$ and no stellar evolution.
In the non-binary case, the scaling with different $N_{\rm node}$ for the total time is not ideal because of the communication cost.
Here the speed-up saturates at about $8-16$ nodes depending on the particle number.
But if we consider the number of cores per node (20 CPU cores and 5376 GPU cores), the scaling with cores is excellent, since with $16$ nodes $320$ CPU cores and $86016$ GPU cores are used.
With one node, the performance of NBODY6++GPU is similar to that of NBODY6-GPU. \cite{Nitadori2012} showed that NBODY6-GPU gives about $100$ times speed-up compared to the sequential NBODY6 with two NVIDIA GeForce GTX 560 Ti with 384 cores each (822 MHz per core) and 4 Intel i7-2600K cores (3.40 GHz per core).
On the ``Hydra'' cluster node, the speed-up can reach $100-500$ depending on the particle number.
Thus, with $16$ nodes for one million particles, we can reach a factor of $400-2000$ speed-up compared to the sequential NBODY6.
Moreover, the absolute time cost is very good: for the million-body case, the total time is about $800$~s with $N_{\rm node} = 16$.
For the $\phi$GPU code tested on the ``Laohu'' cluster with 32 NVIDIA K20 GPUs, one million particles take about $1500$~s.
Although we cannot compare the two codes directly given the different computing cluster specifications, with a similar GPU type and number \nbodyppgpu reaches better performance.
CPU cores are ignored here because the time fractions spent on the CPU are very different for the two codes: $\phi$GPU spends about $90\%$ of the computing time on the GPU, while \nbodyppgpu has a much smaller GPU time fraction (Section~\ref{sec:tfrac}).
In the case with $5\%$ primordial binaries, the scaling is not as good as for the case with no binaries due to the KS calculation. We discuss this issue in more detail in Section~\ref{sec:discussion}.
The regular and irregular integration times in Figure~\ref{fig:regirr} are close to ideal for million-body simulations.
This means that, ignoring MPI communication, the MPI parallelization speeds up the regular and irregular calculations excellently for large particle numbers.
For small particle numbers ($< 10^5$) the scalings of both regular and irregular integrations depart from the ideal parallel limit.
The reason is that the number of operations on each node (on the GPU for the regular integration) is small.
Thus, the cost of internal memory access and modification during integration, which does not scale with the number of computing cores or nodes, dominates the time.
\begin{figure}
\centering
without primordial binaries\\
\includegraphics[width=0.5\textwidth,height=!]{ttotal.0.eps}\\
with $5\%$ primordial binaries\\
\includegraphics[width=0.5\textwidth,height=!]{ttotal.0.05.eps}
\caption{Performance of \nbodyppgpu with hybrid MPI on the ``Hydra'' cluster as a function of the node number $N_{\rm node}$.
$T_{\rm tot}/T_{\rm NB}$ shows the computing time cost per \nb time unit.
The configurations of each node are indicated in the panels. The dashed line shows the
ideal parallel limit with zero communication cost. Different colors represent different particle numbers.}
\label{fig:scaling}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,height=!]{tirr.0.eps}\\
\includegraphics[width=0.5\textwidth,height=!]{treg.0.eps}
\caption{Performance of the regular and irregular integration on the ``Hydra'' cluster as a function of $N_{\rm node}$. The same node configurations and line types as in Figure~\ref{fig:scaling} are used. $T_{\rm irr}/T_{\rm NB}$ and $T_{\rm reg}/T_{\rm NB}$ show the irregular and regular integration computing time per \nb time unit, respectively.}
\label{fig:regirr}
\end{figure}
\subsection{Time fraction for different parts}
\label{sec:tfrac}
We show the fraction of time spent on different parts of \nbodyppgpu in Figure~\ref{fig:frac}.
In the model without binaries, MPI communication and data moving consumes about half of the total time in the case of $1024k$ particles with $N_{\rm node} = 16$ and $128k$ particles with $N_{\rm node} = 8$,
which means the scaling reaches the MPI parallelization speed-up break-even point.
For the $1024k$ particles with $5\%$ binaries, the KS takes about half of the calculation time when $N_{\rm node} \ge 8$.
Thus, the KS procedures become the performance bottleneck.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=0.22\textwidth,height=!]{piechart_n1024k_b0.eps} &
\includegraphics[width=0.22\textwidth,height=!]{piechart_n1075.2k_b0.05.eps} \\
{\tiny 1024k without primordial binaries} & {\tiny 1024k singles + 51.2k primordial binaries} \\
\includegraphics[width=0.22\textwidth,height=!]{piechart_n128k_b0.eps} &
\includegraphics[width=0.20\textwidth,height=!]{piechart_legend.eps} \\
{\tiny 128k without primordial binaries} & \\
\end{tabular}
\caption{Pie charts for the same test as in Figure~\ref{fig:scaling}, showing the time
fraction of the different components in the hybrid MPI parallel NBODY6++. In each
chart the rings correspond, from inside to outside, to $N_{\rm node} = 1$, $2$, $4$, $8$ and $16$.
The two pie charts on the left show the model without primordial binaries and the
pie chart on the right shows the model with $5\%$ binaries. The models in the top
two pie charts include $1024k$ particles (singles + binaries) and the model in the
bottom pie chart includes $128k$ single particles. The legends are explained in
Table~\ref{tab:def}.}
\label{fig:frac}
\end{figure}
\subsection{Sorting list algorithm for selecting active particles}
In Figure~\ref{fig:initblock}, we compare the performance of the sorting list algorithm and temporary list algorithm described in Section~\ref{sec:initb}.
The star cluster in our test simulation is modelled as a King sphere \citep{King1966} with $W_0 = 6$, using $1024k$ stars, $5\%$ primordial binaries and $8$ nodes with the same node configuration as in Figure~\ref{fig:scaling}.
To show that the sorting itself is very fast, the time of the pure sorting part of this algorithm is also shown.
The sorting list algorithm is about $5$ times faster than the temporary list algorithm.
The time fraction of the active particle selection with the new algorithm is shown as the yellow part (Init.B.) in Figure~\ref{fig:frac}. Init.B. costs more for simulations with larger particle numbers; even with the new method, it is close to the irregular integration cost for the one-million-particle case ($\sim7\%$).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,height=!]{tiblock.eps}
\caption{Comparison of performance between the sorting list algorithm and the temporary list algorithm.
``Pure Sorting'' denotes the time cost of the sorting step within the sorting list algorithm. }
\label{fig:initblock}
\end{figure}
\section{Application}
\label{sec:app}
The main task for \nbodyppgpu is to simulate large star clusters.
For a typical globular cluster, the total mass is $10^5-10^6$~$M_\odot$, and thus the total number of particles is of the order of $10^6$.
The typical age is about $12$~Gyr.
In our globular cluster model with $1M$ stars and $5\%$ primordial binaries (the same as shown in Figure~\ref{fig:initblock}), we choose parameters similar to those of NGC~$4372$ \citep{Harris1996}.
The initial half-mass radius is $7.5$~pc and the tidal radius is $89.2$~pc with a circular orbit around a point-mass galactic potential.
One \nb time unit corresponds to $0.622$~Myr.
The computing time and number of particles are shown in Figure~\ref{fig:tevolve}.
Initially, the computing time per \nb time unit was about $3000$~s, and it increased when several particles with small time steps formed.
After several adjustments, the simulation sped up to about $1500$~s per \nb time unit.
The number of particles decreased only slightly during $4500$ \nb time units, yet the computing speed increased at later stages.
The reason for the early slow speed was two unsuitable criteria for triggering or terminating the two-body KS regularization.
The first is the separation criterion $R_{\rm cl}$ and the second is the time step criterion $\Delta t_{\rm cl}$.
If the auto-adjustment of $R_{\rm cl}$ and $\Delta t_{\rm cl}$ is used, they are determined following \cite{Aarseth2003}
\begin{equation}
\label{eq:rtmin}
\begin{aligned}
R_{\rm cl} & = \frac{4 R_{\rm h}}{N (\rho_{\rm d} / \rho_{\rm h})^{1/3}}, \\
\Delta t_{\rm cl} & \simeq 0.04 \left( \frac{\eta_{\rm I}}{0.02} \right)^{1/2} \left( \frac{R_{\rm cl}^3}{\langle m \rangle} \right)^{1/2},
\end{aligned}
\end{equation}
where $\rho_{\rm d}/\rho_{\rm h}$ is the central density contrast, $\eta_{\rm I}$ is the standard irregular time step coefficient and $\langle m \rangle$ is the average mass.
The factor $4 R_{\rm h} / N$ is the impact parameter for a $90$-degree deflection in a two-body encounter.
The auto-adjustment results in $R_{\rm cl} = 1.4\times 10^{-6}$ and $\Delta t_{\rm cl} = 6.8 \times 10^{-8}$ \nb units at the beginning of this simulation.
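For orientation, Eq.~\ref{eq:rtmin} can be evaluated numerically. The sketch below is our own illustration in \nb units; the half-mass radius (the Plummer value) and the central density contrast are assumed values chosen so that the result lands roughly at the quoted numbers:

```python
import math

def ks_criteria(N, r_h, rho_contrast, eta_I=0.02, m_mean=None):
    """Auto-adjusted KS criteria of Eq. (rtmin), in N-body units
    (total mass = 1, so the mean mass defaults to 1/N)."""
    if m_mean is None:
        m_mean = 1.0 / N
    r_cl = 4.0 * r_h / (N * rho_contrast ** (1.0 / 3.0))
    dt_cl = 0.04 * math.sqrt(eta_I / 0.02) * math.sqrt(r_cl ** 3 / m_mean)
    return r_cl, dt_cl

# assumed illustrative inputs: r_h ~ 0.78 (Plummer, N-body units),
# central density contrast ~ 11 for a concentrated million-body model
r_cl, dt_cl = ks_criteria(N=1_000_000, r_h=0.78, rho_contrast=11.0)
print(f"R_cl = {r_cl:.1e}, dt_cl = {dt_cl:.1e}")
```

This makes the scaling explicit: $R_{\rm cl} \propto 1/N$ and $\Delta t_{\rm cl} \propto R_{\rm cl}^{3/2} N^{1/2}$, so both criteria shrink rapidly for large $N$ and high central density contrast.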
But these values are too small, and many wide binaries, including some unperturbed ones, are not regularized.
Thus we switched off auto-adjustment and used $R_{\rm cl} = 5.0\times 10^{-6}$ and $\Delta t_{\rm cl} \le 2.0 \times 10^{-7}$ before about $2800$ time units.
We found that the $R_{\rm cl}$ and $\Delta t_{\rm cl}$ parameters were still too small, thus we enlarged $R_{\rm cl}$ to $1.0 \times 10^{-5} $ and $\Delta t_{\rm cl}$ to $5.0 \times 10^{-7}$.
Then the computing sped up after $2800$ time units.
The auto-adjusted parameters are small because Eq.~\ref{eq:rtmin} was originally designed for small particle numbers ($N = 10^2-10^3$).
For the million-body simulation, the central density is usually high and $\rho_{\rm d}/\rho_{\rm h}$ is large.
The criterion from Eq.~\ref{eq:rtmin} is only suitable for the central region of the cluster but too small for the outer region.
There were several jumps in the computing time after the auto-restart with reduced time steps.
These happened when a large energy error appeared due to specific events, such as difficult triple systems or the sudden change of force caused by large mass loss or premature perturbation of the neighbor sphere (such as neutron stars with high kick velocities).
After we restored the normal time step parameters, the computing time was reduced again.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,height=!]{timeevolve.eps}\\
\caption{The evolution of the computing time per \nb time unit and of the number of particles for the $1M$ globular cluster simulation, as a function of \nb time.}
\label{fig:tevolve}
\end{figure}
A few more models are currently in progress, and we will report the results from these simulations in detail in a future publication.
\section{Discussion}
\label{sec:discussion}
While standard Hermite codes report a high efficiency using up to 700,000 cores on hundreds if not thousands of GPUs \citep{Berczik2013a,Berczik2013b}, we find that our performance saturates at about 86,000 GPU cores and 320 CPU cores.
This is not surprising, because \nbody and \nbodypp are inherently more efficient than standard Hermite codes (fewer operations for the same physical result).
A more detailed scaling analysis of \nbodyppgpu will be published separately (Huang et al., private communication).
As discussed in Sections~\ref{sec:scale} and \ref{sec:tfrac}, the data movement and MPI communications become the bottleneck when the node number $N_{\rm node}$ is large, since their cost does not decrease with the node number, and the KS integration dominates the calculation when there are many primordial binaries.
For the data copying and communication limit, a better communication algorithm (such as the non-blocking communication suggested by \citealp{Dorband2003}, which we plan to work on in the future), a higher network bandwidth between nodes and faster memory access are required.
On today's common computer architectures, pure arithmetic operations on the CPU are about two orders of magnitude faster than accesses to host memory.
For non-shared-memory parallelization such as MPI, if the data communication consumption exceeds the calculation cost,
parallelization cannot improve the performance and sometimes even reduces the speed.
Table~\ref{tab:cost} compares the calculation and communication costs for the regular force, the irregular force and the KS perturbation calculations.
The ratio of calculation cost to communication cost, $R_{\rm c}$, for the regular force is proportional to the full particle number $N$.
Thus, the GPU and MPI parallelization for the regular force gives a very good scaling.
For the irregular force, $R_{\rm c}$ is proportional to the average neighbor number.
When there are many neighbors, the MPI parallelization is good.
For typical star cluster simulations, the neighbor number $N_{\rm b}$ is a few hundred, thus it is acceptable.
In NBODY6++GPU, the data movement and MPI communication is significant for the irregular force (Figure~\ref{fig:frac}).
For KS perturbation calculation, $R_{\rm c}$ is proportional to the average perturber number $N_{\rm p}$, which is usually quite small (less than $100$).
Thus, MPI parallelization for KS can be inefficient.
The reason for the small $N_{\rm p}$ is that usually in star cluster simulations, a large fraction of the KS binaries is unperturbed with $N_{\rm p} = 0$, and perturbed KS binaries also tend to have small $N_{\rm p}$ (otherwise they would be terminated or transformed to hierarchical systems).
Therefore, to get good performance of KS parallelization, the unperturbed and perturbed KS parts should be treated separately, since unperturbed KS only needs few operations and should avoid communication when parallelized (shared memory parallelization such as OpenMP or {\protect MPI-3}).
We are working on this and will show our KS parallelization method and benchmarks in a future publication.
There is also another effort to parallelize KS with block time steps (Nitadori 2014, private communication).
\begin{table}
\caption{Estimation of calculation and communication cost}
\label{tab:cost}
\begin{tabular}{llll}
\hline
Cost & Regular force & Irregular force & KS perturbation \\\hline
Calculation & $O(N_{\rm i} N)$ & $O(N_{\rm i} \langle N_{\rm b} \rangle)$ & $O(N_{\rm i} \langle N_{\rm p} \rangle)$ \\
Communication & $O(N_{\rm i})$ & $O(N_{\rm i})$ & $O(N_{\rm i})$ \\\hline
\end{tabular}\\
*$N_{\rm i}$: Active particle number \\
*$N$: Full particle number \\
*$\langle N_{\rm b} \rangle$: Average neighbor number \\
*$\langle N_{\rm p} \rangle$: Average perturber number for KS \\
\end{table}
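The implication of Table~\ref{tab:cost} can be made concrete with a toy comparison (the numbers below are illustrative assumptions of ours, not measurements): MPI parallelization pays off only where the calculation-to-communication ratio $R_{\rm c}$ exceeds the per-particle communication penalty.

```python
# assumed cost of communicating one particle's data, measured in units
# of one pairwise force calculation (an illustrative value)
COMM_PENALTY = 100

ratios = {
    "regular force":   1_000_000,  # R_c ~ N
    "irregular force": 300,        # R_c ~ <N_b>, a few hundred
    "KS perturbation": 10,         # R_c ~ <N_p>, usually small
}

verdict = {part: ("parallelize" if r > COMM_PENALTY else "keep local")
           for part, r in ratios.items()}
for part, v in verdict.items():
    print(f"{part:16s}: {v}")
```

The toy reproduces the conclusion of the text: regular and irregular forces profit from distributed parallelization, whereas the KS perturbation part, with its small $\langle N_{\rm p} \rangle$, does not.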
We also find that the KS initialization and termination can be costly when there are wide binaries that frequently switch between KS and Hermite solutions.
As discussed in Section~\ref{sec:init}, during the KS initialization and termination, the force and its first three derivatives need to be renewed for center-of-mass particles or two components (cost of $O(N)$)
and the neighbor list of every particle and perturber list of KS pairs should be updated with new particle index (cost of $O(N \langle N_{\rm b} \rangle)$).
The former can be improved by reusing existing values instead of a direct recalculation; the latter by maintaining a reverse neighbor list that quickly identifies which particles have the KS pair in their neighbor lists.
However, this requires a large memory cost and coding effort.
When testing the code performance on computer clusters, we usually use empty nodes on which no other tasks run simultaneously.
For production runs, however, whether whole nodes can be scheduled exclusively depends on the task management system of the cluster.
Some clusters, such as ``Laohu'' at NAOC and ``Milkyway'' at the J\"ulich Computing center, only allow very few CPU cores for GPU tasks ($1-2$ CPU cores per GPU) and all other CPU cores in the same nodes are reserved for pure CPU tasks.
\nbodyppgpu is not suitable for these kinds of clusters since it relies on heavy calculation on CPU (irregular and KS integration, data movements; see Figure~\ref{fig:frac}).
Moreover, in the shared nodes, different tasks compete with each other for network bandwidth, CPU loading and host memory.
This sometimes results in a serious load imbalance:
the MPI barrier time (Table~\ref{tab:def}) can cover almost half of the total computing time.
The only solution is to use computing clusters in which GPU nodes can be fully occupied by one GPU task at a time.
Both \nbody and \nbodypp have been developed over a long time.
The codes have become more and more complicated, which makes them difficult for beginners.
Therefore, we also present documentation for the new version of NBODY6++GPU.
The document includes a detailed description of all input parameters and output data and will be updated with more details and new implementations.
We also show several important differences between \nbodyppgpu and \nbodygpu in the Appendix.
Future improvements of the codes and hardware may allow even larger particle numbers; for example, simulating nuclear star clusters using more GPU nodes appears feasible.
The key to keep total wall clock times reasonable will be further optimization of communication and data management, especially for particles with very small time steps near a central black hole.
Also, improved bandwidth and latency of the communication hardware may help to gain one more order of magnitude, but not to reach the Exaflop/s regime.
For the latter, hybrid codes seem more appropriate, which treat a large number of particles in the outskirts self-consistently, but not with full $N^2$ accuracy of the force computation (see, for recent examples, e.g., \citealp{Meiron2014,Karl2015}).
\section{Conclusions}
\label{sec:conclusion}
Direct numerical simulations of star clusters contribute significantly to the theoretical understanding of star cluster dynamics.
Due to hardware and software limits, direct \nb simulations of real globular clusters with large numbers of particles have been a major challenge for many years.
\cite{Sugimoto1990} pointed out that direct numerical simulations of globular star clusters could not be completed within the next decades unless there were breakthroughs in parallel computing going beyond Moore's law.
After that, many efforts were made to reach this goal by using specially designed acceleration hardware (GRAPE and GPU).
In this paper, we present NBODY6++GPU.
It combines for the first time the massively parallel multi-node code (MPI parallelized) \nbodypp \citep{Spurzem1999, Hemsendorf2003} with the GPU and AVX/SSE acceleration on each node, using the libraries of \cite{Nitadori2012}.
We discuss the performance tests (Figures~\ref{fig:mpivshybrid}, \ref{fig:scaling}, \ref{fig:regirr} and \ref{fig:frac}) and the new algorithms (Figures~\ref{fig:sortchart}, \ref{fig:sortlist} and \ref{fig:initblock}) used to accelerate \nbodyppgpu.
For the non-binary case, the overall scaling is good up to $16$ nodes ($320$ CPU cores and $32$ NVIDIA K20X GPUs comprising $86016$ GPU cores), with a speed-up of $400$ to $2000$ depending on the particle number.
The speed-up is mainly achieved by using GPUs to accelerate the long-range (regular) gravitational forces, which makes the force calculation about $33$ times faster (Figure~\ref{fig:mpivshybrid}).
The AVX/SSE instructions speed up the prediction of positions and velocities and the neighbor (irregular) forces by a factor of $3$.
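At its core, the regular (long-range) force that the GPUs accelerate is a direct summation over all particle pairs. A minimal Python sketch of this $N^2$ kernel follows; it is illustrative only (the production code implements a Hermite-scheme Fortran/CUDA kernel, and all names here are ours):

```python
def direct_forces(pos, mass, eps2=0.0):
    """Direct O(N^2) summation of pairwise gravitational
    accelerations (units with G = 1); this is the structure of
    the 'regular' force kernel that the GPUs accelerate.
    eps2 is an optional softening length squared."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc
```

Each of the $N$ regular forces costs $N-1$ independent interactions, which is exactly the embarrassingly parallel workload that maps well onto GPU threads.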
We also ensured the consistency and stability of the code when combining several parallel methods.
Once GPU and AVX/SSE accelerate the force calculation, other parts, such as time step scheduling and stellar evolution, become performance bottlenecks; we designed new algorithms to improve these parts.
We have demonstrated how \nbodyppgpu can simulate a realistic globular cluster with one million particles, stellar evolution and 5\% primordial binaries for several Gyr (one half-mass crossing time requiring about an hour computational time; see Figure~\ref{fig:tevolve}). A few more models are currently in progress and we will report the detailed results of these simulations in future publications.
With our final code version, which is publicly available\footnote{We use Subversion and Github to manage \nbodyppgpu. The beta version can be downloaded by commands ``svn co http://silkroad.bao.ac.cn/repos/betanb6'' or ``git clone https://github.com/lwang-astro/betanb6pp.git''}, we can claim to have finally reached the goal of Sugimoto's ``dream'' of 1990.
A million-body cluster can be simulated for about $20$ crossing times in one day on $320$ cores with $32$ GPUs.
In the future, with faster bandwidth and lower latency of the hardware as well as optimizations of communication and data management, even larger systems, such as nuclear star clusters, may be simulated by direct \nb codes.
The previous paragraphs show that our contribution would have been impossible without the achievements of our predecessors and collaborators; in particular, the current dominance of GPU hardware was aided by the development of GRAPE software over the last few decades, which could finally be ported to GPUs without fundamental problems.
\section*{Acknowledgments}
This work has been partly funded by National Natural Science Foundation of China, No. 11073025 (RS).
We acknowledge support through the Silk Road Project at National Astronomical Observatories of China (NAOC, http://silkroad.bao.ac.cn).
R.S. and P.B. are grateful for support by the Chinese Academy of Sciences Visiting Professorship for Senior International Scientists, Grant Number 2009S1$-$5, and through the ``Qianren'' special foreign experts program of China, both at NAOC.
Most of the numerical simulations have been done on the ``Hydra'' GPU cluster of the Max-Planck Supercomputing Centre (RZG) Germany.
R.S. and P.B. and L.W. are grateful for kind hospitality and support during several visits at the Max-Planck-Institute for Astrophysics.
Other resources used for numerical simulations in the preparation of this paper are:
``Laohu'' GPU cluster at the Center of Information and Computing at NAOC, ``Kepler'' GPU cluster at ARI/ZAH, University of Heidelberg, Germany (funded by Volkswagen Foundation) and the ``MilkyWay'' cluster of SFB 881 ``The Milky Way System'' at the University of Heidelberg, Germany, hosted and co-funded by the J\"ulich Supercomputing Center (JSC).
S.A. and K.N. are grateful for support during their visits at Kavli Institute for Astronomy and Astrophysics, Peking University and NAOC.
P.B. acknowledges the special support by the NASU under the Main Astronomical Observatory GRID/GPU computing cluster project.
M.B.N.K. was supported by the Peter and Patricia Gruber Foundation through the PPGF fellowship, by the Peking University One Hundred Talent Fund (985), and by the National Natural Science Foundation of China (grants 11010237, 11050110414, 11173004).
This publication was made possible through the support of a grant from the John Templeton Foundation and NAOC.
The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation or NAOC.
The funds from John Templeton Foundation were awarded in a grant to The University of Chicago which also managed the program in conjunction with NAOC.
TN acknowledges support by the DFG cluster of excellence ``Origin and Structure of the Universe''.
We thank the anonymous referee for many useful comments that helped to improve the paper.
The Fr\'echet distance and its variants provide a versatile class of distance measures for parametrized curves as they occur in application areas such as trajectories of moving objects (e.g., vehicles, animals, or robots), outlines of shapes, signatures, gestures, and other types of time series from sensor data~\cite{acmsurvey20, su2020survey}. This distance measure is very similar to the Hausdorff distance, which is defined for sets, except that it takes the ordering of points along the curve into account. At the same time, by assuming the equivalence class of all reparametrizations of a curve, it is robust to local irregularities in the parametrization of the curves (e.g., errors due to local time delays or irregular measurements). An intuitive definition of the distance measure is given as follows. Imagine traversing the two curves at independent and varying speeds from the beginning to the end, and consider the maximum (Euclidean) distance that the two positions can maintain throughout the traversal without backtracking along the curves. Minimizing over all possible traversals yields the Fr\'echet distance of the two curves.
Due to the popularity of the distance measure for trajectory analysis and data analysis applications, many heuristics and algorithm engineering solutions have been proposed to speed up the distance computation and similarity retrieval~\cite{BergGM17, BaldusB2017,DutschV17,BuchinDDM17,BringmannKN19, GHPS20}.
A fundamental task in this area is near neighbor searching: Preprocess $n$ curves into a data structure, such that we can query this data structure with a curve and retrieve an input curve that has small distance to the query curve.
This problem has been studied intensively~\cite{BergCG13, BergMO17, I02, AD18, EP20, FiltserFK20, DS17, M20, DP21} and, for the discrete version of the Fr\'echet distance, these efforts led to a simple and likely optimal data structure~\cite{FiltserFK20}. However, for the more classic continuous version of the Fr\'echet distance, the computational complexity of near neighbor searching is still largely open, and seems very challenging to resolve.
Therefore, in this paper we focus on the special case of one-dimensional curves, which we also refer to as time series. We aim to resolve approximate near neighbor searching for this special case of the continuous Fr\'echet distance. We obtain strong lower bounds based on the Orthogonal Vectors Hypothesis in the regime of small approximation factors. More specifically, we differentiate a range of lower bounds for different approximation factors and preprocessing/query time. We show that our bounds are tight by devising data structures that asymptotically match the lower bounds in all cases considered. The new data structures improve upon the state of the art in several ways. For the same preprocessing and query time, we can improve the approximation factor from $(2+\varepsilon)$ to $(1+\varepsilon)$. For the same approximation factor $(2+\varepsilon)$, we get a better time complexity---in some cases we can even achieve linear preprocessing time and space.
\subsection{Problem Definition}
Let us first formally define the distance measure considered in this work.
\begin{definition}[Fr\'{e}chet distance]
Given two curves $P,Q:~ [0,1] \mapsto \mathbb{R}^d $, their Fr\'{e}chet distance is
\[
\df(P,Q) \coloneqq \min\limits_{f,g \in \mathcal{T}} \max_{t \in [0,1]} \|P(f(t)) - Q(g(t))\|_2,
\]
where $\mathcal{T}$ is the set of all monotone and surjective functions from $[0,1]$ to $[0,1]$. For functions $f$ and $g$ that realize the minimum above, we define $\phi: [0,1] \to [0,1]^2$ with $\phi(t) = (f(t), g(t)), t \in [0,1]$ and we refer to $\phi$ as a realizing traversal of the two curves.
\end{definition}
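For intuition, the \emph{discrete} variant of this distance (reviewed in the next subsection) restricts the traversal to the two vertex sequences and is computable by a classical quadratic-time dynamic program; a sketch for one-dimensional curves (the function name is ours):

```python
def discrete_frechet(P, Q):
    """Discrete Frechet distance of two 1D vertex sequences via
    the classical O(|P||Q|) dynamic program: D[i][j] is the
    smallest achievable maximum distance over all discrete
    traversals of the prefixes P[0..i] and Q[0..j]."""
    INF = float("inf")
    n, m = len(P), len(Q)
    D = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                best = 0.0
            else:
                best = min(
                    D[i - 1][j] if i > 0 else INF,
                    D[i][j - 1] if j > 0 else INF,
                    D[i - 1][j - 1] if i > 0 and j > 0 else INF,
                )
            D[i][j] = max(best, d)
    return D[n - 1][m - 1]
```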
The central problem of this work is then defined as follows.
\begin{definition}[$c$-Approximate Near Neighbors problem ($c$-ANN)]\label{Dgenann}
The input consists of a set ${\mathcal P}$ of $n$ curves in $\mathbb{R}^d$, each of complexity $m$, and a number $2 \le k \le m$. Given a distance threshold $\delta>0$ and an approximation factor $c>1$, preprocess ${\mathcal P}$ into a data structure such that for any query curve $Q$ of complexity $k$, the data structure reports as follows:
\begin{compactitem}
\item if $\exists P\in {\mathcal P}$ such that $\df(P,Q)\leq \delta$, then it returns $P'\in {\mathcal P}$ such that $\df(P',Q)\leq c\delta$,
\item if $\forall P \in {\mathcal P}$, $\df(P,Q)\geq c\delta$ then it returns ``no'',
\item otherwise, it either returns a curve $P\in {\mathcal P}$ such that $\df(P,Q)\leq c\delta$, or ``no''.
\end{compactitem}
\end{definition}
The assumption that all input curves have the same number of vertices $m$ and that the queries have $k$ vertices is mostly to simplify presentation; all our data structures are easily generalized to allow input curves of complexity \emph{at most} $m$ and query curves of complexity \emph{at most} $k$. Note, however, that we assume the input has size in $\Omega(nm)$ and that $2\leq k\leq m$. The case $k=1$ is a boundary case that is easier to solve; we ignore it throughout this paper.
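As a point of reference, the three cases of Definition~\ref{Dgenann} are met by a trivial linear scan with an exact distance oracle (no preprocessing, but $n$ distance computations per query); a Python sketch, where \texttt{dist} stands in for an exact Fr\'echet distance computation:

```python
def c_ann_query(curves, Q, delta, c, dist):
    """Brute-force reference for the c-ANN semantics: a linear
    scan with an exact distance oracle `dist`. If some curve is
    within delta, some curve within c*delta is returned; if all
    curves are at distance >= c*delta, the answer is "no"."""
    for P in curves:
        if dist(P, Q) < c * delta:   # strict: at >= c*delta we may say "no"
            return P                 # covers cases 1 and 3
    return None                      # the answer "no" (case 2)
```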
\subsection{State of the Art}
We start by reviewing the state of the art for the \emph{discrete} variant of the Fr\'echet distance. In the discrete Fr\'echet distance, the continuous traversal $\phi$ is replaced by a discrete traversal of the two point sequences; we refer to~\cite{FiltserFK20} for the exact definition. The currently best known data structure for $(1+\varepsilon)$-ANN under the discrete Fr\'echet distance is by
Filtser et al.~\cite{FiltserFK20}. Their data structure uses space in $ n \cdot \ensuremath{\mathcal{O}}(1/\varepsilon)^{kd} + \ensuremath{\mathcal{O}}(nm)$ and query time in $\ensuremath{\mathcal{O}}(k d)$, where $k$ denotes the complexity of the query (measured in the number of vertices), $m$ denotes the complexity of an input curve and $n$ denotes the number of input curves. It is an interesting question whether the same bounds can be obtained for the continuous Fr\'echet distance.
At first glance, the discrete and continuous variants of the Fr\'echet distance seem very similar, but there is an important difference: while the metric space of bounded complexity curves under the discrete Fr\'echet distance has bounded doubling dimension, this does not hold in the continuous case, even when restricted to polygonal curves of constant complexity~\cite{DKS16}. (A metric space has doubling dimension at most $d$ if any ball of any radius $r$ can be covered by $2^d$ balls of radius $\frac{r}{2}$.) This immediately shows that the technique employed by Filtser et al., which effectively applies a doubling oracle to the metric balls centered at input curves (more specifically, simplifications thereof), does not directly extend to the continuous Fr\'echet distance, since such a doubling oracle cannot exist in this case.
So the \emph{discrete} Fr\'echet distance has a simple ANN that seems optimal, but there is indication that for the \emph{continuous} Fr\'echet distance resolving the time complexity of ANN is more challenging.
Note that it is possible to reduce the ANN problem for the continuous Fr\'echet distance to the ANN problem for the discrete Fr\'echet distance by subsampling along the continuous curves. However, it seems that this approach introduces an (otherwise avoidable) dependency on the arclength.
In 2018, Driemel and Afshani \cite{AD18} described data structures based on multi-level partition trees (using semi-algebraic range searching techniques) which can also be used for exact near neighbor searching under the continuous Fr\'echet distance. For $n$ curves of complexity $m$ in $\mathbb{R}^2$, their data structure uses space bounded by $n \cdot (\log \log n)^{\ensuremath{\mathcal{O}}(m^2)}$ and the query time is bounded by $\sqrt{n} \cdot (\log n)^{\ensuremath{\mathcal{O}}(m^2)}$. (If the input is restricted to curves in $\mathbb{R}$, these bounds can be slightly improved.)
Recently, Driemel and Psarros~\cite{DP21} obtained bounds for the continuous Fr\'echet distance that are similar to the bounds of Filtser et al., albeit at the expense of a higher approximation factor and only for curves in $\mathbb{R}$. They present a $(5+\varepsilon)$-ANN data structure which uses space in $n\cdot \ensuremath{\mathcal{O}}\left({\frac 1 \varepsilon}\right)^{k} + \ensuremath{\mathcal{O}}(nm)$
and has query time in $\ensuremath{\mathcal{O}}\left(k\right)$, and a $(2+\varepsilon)$-ANN data structure, which uses space in
$n\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k\varepsilon}\right)^{k} + \ensuremath{\mathcal{O}}(nm)$
and has query time in $\ensuremath{\mathcal{O}}\left(k\cdot 2^k\right)$.
Even more efficient data structures can be obtained at the expense of an even larger approximation factor, see the work of Driemel, Silvestri, and Psarros~\cite{DS17} and~\cite{DP21} which uses locality-sensitive hashing.
In these results neither the space nor the query time is exponential in the complexity of the curves (neither input nor query), but the approximation factor is linear in the query complexity $k$.
\paragraph{(Unconditional) lower bounds} Given these results, one may ask whether the cited bounds are optimal for the respective approximation factor that they guarantee. We review some efforts in answering this question and discuss the limitations of the current techniques.
Driemel and Psarros~\cite{DP21, DP20} approach this question using a technique by Miltersen~\cite{miltersen94} for proving cell-probe lower bounds. Their results indicate that any data structure answering a query for a near neighbor under the continuous Fr\'echet distance by using only a constant number of probes to memory cells cannot have a space usage that is independent of the arclength of the input curves (assuming a query radius of $1$).
In addition, their bounds indicate that, in some cases, space exponential in the complexity of the query $k$ is necessary.
However, these bounds hold only for data structures that use a constant number of probes to memory cells for answering a query, while we would also be interested in data structures that use higher query time, such as $\ensuremath{\mathcal{O}}(k)$ or $\ensuremath{\mathcal{O}}(\log n)$.
A different lower bound technique was used by Driemel and Afshani \cite{AD18}. They show a lower bound in the pointer model of computation on the space-time tradeoff for \emph{range reporting} under the Fr\'echet distance. In this problem, all curves contained inside the query radius need to be output by the query. The resulting lower bound matches the upper bounds cited above, even up to the number of $\log(n)$ factors.
The proof uses a construction of input curves in $\mathbb{R}^2$ and a set of queries, such that the intersection of any two query results has small volume while the queries themselves have large volume. The main drawback of this technique is that, being a volume argument, it inherently uses the fact that all curves inside the query need to be returned and therefore it cannot easily be applied in the near neighbor setting.
\paragraph{Conditional lower bounds} The recent rise of fine-grained complexity has also led to a renewed interest in conditional lower bounds for nearest neighbor data structure problems, see, e.g.~\cite{AlmanW15,AbboudRW17,Rubinstein18,CGLRR19,DBLP:conf/soda/ChenW19}. These lower bounds are for the offline version of the data structure problem, by considering the total time needed for preprocessing and performing a number of queries. They are obtained in a similar way as NP-hardness, specifically via reductions from some fine-grained hypothesis such as the \emph{Strong Exponential Time Hypothesis (SETH)}~\cite{IP01} or the \emph{Orthogonal Vectors Hypothesis (OVH)}~\cite{Williams05}. In the Orthogonal Vectors problem we are given two sets of vectors $A, B \subseteq \{0,1\}^d$ of size $n$ and ask whether there exist two vectors $a \in A, b \in B$ such that $\langle a, b \rangle = 0$. The hypothesis postulates that for any constant $\varepsilon > 0$ there exists a constant $c>0$ such that there is no algorithm solving the Orthogonal Vectors problem in time $\ensuremath{\mathcal{O}}(n^{2-\varepsilon})$ in dimension $d = c \log n$. It should be noted that OVH is at least as believable as SETH, because SETH implies OVH~\cite{Williams05}.
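For concreteness, the obvious $\ensuremath{\mathcal{O}}(n^2 d)$ baseline for Orthogonal Vectors, which OVH asserts cannot be improved to $\ensuremath{\mathcal{O}}(n^{2-\varepsilon})$ for $d = c \log n$, reads as follows (Python sketch):

```python
def has_orthogonal_pair(A, B):
    """Quadratic-time baseline for Orthogonal Vectors: test all
    pairs a in A, b in B for <a, b> = 0 over 0/1 vectors."""
    for a in A:
        for b in B:
            if all(x * y == 0 for x, y in zip(a, b)):
                return True
    return False
```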
As an example, based on the OV-hardness of bichromatic Euclidean closest pair~\cite{AlmanW15} and reducing via a variant of OV with unbalanced size $|A| \ll |B|$~\cite{AbboudW14}, one can show that for any
$\varepsilon,\beta > 0$
there is no data structure for Euclidean nearest neighbors on $n$ points in $\mathbb{R}^d$ with preprocessing time
$\ensuremath{\mathcal{O}}(n^{\beta})$
and query time
$\ensuremath{\mathcal{O}}(n^{1-\varepsilon})$, in some dimension $d = c \log n$.
This rules out any sublinear query time for any data structure with polynomial preprocessing time, unless OVH fails.
For computing the Fr\'echet distance of two polygonal curves there is a tight conditional lower bound~\cite{Bringmann14}, also for the one-dimensional case~\cite{BringmannM16,BuchinOS19}. However, thus far, there seems to be no comprehensive study of conditional lower bounds for the corresponding data structure problem. We want to close this gap and show tight bounds for the case of one-dimensional curves. These are similar in spirit to the Euclidean nearest neighbor lower bounds discussed above.
\subsection{Our Results}
\begin{table}[t]
\centering
\def\arraystretch{1.5}
\caption{Known upper bounds and our results. For the discrete case we only cite the best known result. The space complexity is implicitly bounded by the preprocessing time in each case. Our preprocessing time is randomized; the bounds can be derandomized at the cost of a factor $\log n$ in preprocessing and query time (by using search trees instead of perfect hashing).}
\smallskip
\label{table:1}
\begin{tabular}{||c||c| c| c| c||}
\hline
Fr\'echet distance & Approximation & Preprocessing Time & Query Time & Reference \\ [0.5ex]
\hline\hline
discrete, dD & $(1+\varepsilon)$-ANN &
$nm \cdot\left( \ensuremath{\mathcal{O}}(\frac{1}{\varepsilon})^{dk} + \ensuremath{\mathcal{O}}( d\log m) \right)$
& $\ensuremath{\mathcal{O}}(dk)$ & \cite{FiltserFK20} \\
\hline\hline
continuous, 1D & $(2+\varepsilon)$-ANN & $n \cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$ & $\ensuremath{\mathcal{O}}(1)^k$ & \cite{DP21} \\
& $(5+\varepsilon)$-ANN &
$n \cdot \ensuremath{\mathcal{O}}(\frac{1}{\varepsilon})^{k} + \ensuremath{\mathcal{O}}(n m)$
& $\ensuremath{\mathcal{O}}(k)$ & \cite{DP21} \\
\hline\hline
& $(1+\varepsilon)$-ANN & $n \cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$ & $\ensuremath{\mathcal{O}}(1)^k$ & Theorem~\ref{thm:onepluseps} \\
& $(2+\varepsilon)$-ANN & $n \cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$ & $\ensuremath{\mathcal{O}}(k)$ & Theorem~\ref{thm:twopluseps_one} \\
continuous, 1D & $(2+\varepsilon)$-ANN &
$n \cdot \ensuremath{\mathcal{O}}(\frac{1}{\varepsilon})^{k} + \ensuremath{\mathcal{O}}(n m)$
& $\ensuremath{\mathcal{O}}(1)^k$
& Theorem~\ref{thm:twopluseps_two} \\
& $(2+\varepsilon)$-ANN &
$\ensuremath{\mathcal{O}}(nm)$ & $\ensuremath{\mathcal{O}}(\frac{1}{\varepsilon})^{k+2}$
& Theorem~\ref{thm:twopluseps_three} \\
& $(3+\varepsilon)$-ANN &
$n \cdot \ensuremath{\mathcal{O}}(\frac{1}{\varepsilon})^{k} + \ensuremath{\mathcal{O}}(n m)$
& $\ensuremath{\mathcal{O}}(k)$ & Theorem~\ref{thm:threepluseps} \\ [1ex]
\hline
\end{tabular}
\caption{Our conditional lower bounds. Each row gives an approximation ratio and a setting of $k$ and $m$ where any $\ensuremath{\text{poly}}(n)$ preprocessing time and $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$ query time cannot be achieved simultaneously. The constants $\varepsilon,\varepsilon',c$ are quantified as $\forall \varepsilon, \varepsilon'>0\colon \exists c>0$. By $f(n) \ll g(n)$ we mean $f(n) = o(g(n))$.
We refer to the respective theorems in Section~\ref{section:lowerbounds} for the exact statements.}
\smallskip
\label{table:2}
\begin{tabular}{||c||c|c|c|c|c||}
\hline
Fr\'echet dist. & Approx. & Preproc. & Query & Parameter Setting & Reference \\ [0.5ex]
\hline\hline
continuous, 1D & $2-\varepsilon$ & $\ensuremath{\text{poly}}(n)$ & $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$ & $1 \ll k \ll \log n$ and $m = k \cdot n^{c/k}$ & Thm.~\ref{thm:2minusepshard1d} \\
& $3-\varepsilon$ & $\ensuremath{\text{poly}}(n)$ & $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$ & $m=k=c \log n$ & Thm.~\ref{thm:3minuseps_lb_1d} \\ [1ex]
\hline
continuous, 2D & $3-\varepsilon$ & $\ensuremath{\text{poly}}(n)$ & $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$ & $1 \ll k \ll \log n$ and $m = k \cdot n^{c/k}$ & Thm.~\ref{thm:3minuseps_lb_2d} \\ [1ex]
\hline
\end{tabular}
\end{table}
For the discrete Fr\'echet distance the ANN problem is by now well understood, but the continuous Fr\'echet distance remains very challenging. Therefore, in this paper we focus on the important special case of one-dimensional curves, which arise in various domains such as finance and signal processing, where they are typically called ``time series''. We give several new data structure bounds for the problem of approximate near neighbor searching for one-dimensional curves under the continuous Fr\'echet distance.
Table~\ref{table:1} provides an overview of our upper bounds, compared to known results.
In the second part of our paper, we show that most of these upper bounds are tight under the Orthogonal Vectors Hypothesis, when viewed as offline problems where the input and the set of queries are given in advance. To obtain these lower bounds, we introduce a novel OV-hard variant of Orthogonal Vectors in which one set contains sparse vectors, i.e., vectors that only contain few 1s; this problem may be of independent interest.
Table~\ref{table:2} gives an overview of our lower bound results.
To argue that most of our upper bounds are tight, we consider the following general scenario:
\begin{quote}
\emph{Suppose we have an $\alpha$-ANN for some fixed constant $\alpha$, we run its preprocessing on a data set of $n$ curves, and then we run $n$ queries.}
\end{quote}
In particular, consider this scenario for the following three ranges of $\alpha$.
\begin{itemize}
\item $1 < \alpha < 2$: Using our $(1+\varepsilon)$-ANN, this scenario takes total time $n \cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$, which simplifies to $n \cdot \ensuremath{\mathcal{O}}(\frac{m}{k})^k$ since $\varepsilon = \alpha-1$ is fixed. Assuming OVH, our first lower bound shows that this running time cannot be improved to $n \cdot f(k) \cdot (\frac m k)^{o(k)}$ for any function $f$, for the following reason. Pick $k = k(n)$ sufficiently small such that $f(k) = n^{o(1)}$. Pick $m = k \cdot n^{c/k}$, so that $(\frac m k)^{o(k)} = (\frac{k \cdot n^{c/k}}{k})^{o(k)} = n^{o(1)}$. Then the total running time would be $n \cdot f(k) \cdot (\frac m k)^{o(k)} = n^{1+o(1)}$, which contradicts that either the preprocessing time is superpolynomial or the query time near-linear, as stated in Theorem~\ref{thm:2minusepshard1d}. This shows that the factor $(\frac m k)^{\Theta(k)}$ in our running time is necessary.
Our second lower bound shows that the running time cannot be improved to $n \cdot (\frac m k)^{f(k)} \cdot 2^{o(k)}$ for any function~$f$, as for $m = k = c \log n$ the total time would become $n \cdot (\frac m k)^{f(k)} \cdot 2^{o(k)} = n \cdot 1^{f(k)} \cdot n^{o(1)} = n^{1+o(1)}$, which contradicts that either the preprocessing time is superpolynomial or the query time near-linear, as stated in Theorem~\ref{thm:3minuseps_lb_1d}. This shows that the factor $\ensuremath{\mathcal{O}}(1)^k$ in our query time is necessary. In this sense, the running time of our $(1+\varepsilon)$-ANN is tight.
\item $2 < \alpha < 3$: By using our second or third $(2+\varepsilon)$-ANN (Theorem~\ref{thm:twopluseps_two} or~\ref{thm:twopluseps_three}) we solve this scenario in total time $\ensuremath{\mathcal{O}}(nm) + n \cdot \ensuremath{\mathcal{O}}(\frac 1 \varepsilon)^{k+2}$, which simplifies to $\ensuremath{\mathcal{O}}(nm) + n \cdot \ensuremath{\mathcal{O}}(1)^k$ since $\varepsilon = \alpha-2$ is fixed. Assuming OVH, our second lower bound shows that this cannot be improved to time $n \cdot (\frac m k)^{f(k)} \cdot 2^{o(k)}$ for any function~$f$, as for $m = k = c \log n$ we would obtain a total time of $n \cdot (\frac m k)^{f(k)} \cdot 2^{o(k)} = n \cdot 1^{f(k)} \cdot n^{o(1)} = n^{1+o(1)}$, which contradicts that either the preprocessing time is superpolynomial or the query time near-linear, as stated in Theorem~\ref{thm:3minuseps_lb_1d}. This shows that the factor $\ensuremath{\mathcal{O}}(1)^k$ in our running time is necessary. In this sense, the running times of our $(2+\varepsilon)$-ANNs from Theorems~\ref{thm:twopluseps_two} and~\ref{thm:twopluseps_three} are tight. (Our $(2+\varepsilon)$-ANN from Theorem~\ref{thm:twopluseps_one} is not tight in this sense, but it realizes a different tradeoff between preprocessing and query time.)
\item $\alpha>3$: In this range, our ANNs still require exponential time in terms of $k$, but we cannot hope for a tight lower bound using the current techniques. This is due to a fundamental limitation of proving inapproximability factor $>3$ for a metric problem, cf.~e.g.~\cite[Open Question 3]{Rubinstein18}. For this reason, we have no tight lower bounds in this range.
\end{itemize}
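As a numeric sanity check on the parameter setting $m = k \cdot n^{c/k}$ used in the first bullet: it gives $(\frac m k)^k = n^c$ exactly, so full enumeration is polynomial, while any $(\frac m k)^{o(k)}$ term is $n^{o(1)}$. A short Python illustration (values chosen arbitrarily):

```python
def m_for(n, k, c):
    """The parameter setting m = k * n^(c/k) from the lower
    bound discussion, under which (m/k)^k = n^c exactly."""
    return k * n ** (c / k)
```

For instance, $n = 10^6$, $k = 10$, $c = 2$ gives $m \approx 158.5$, and $(m/k)^{10} = 10^{12} = n^2$ up to floating-point error.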
\subsection{Technical Overview}\label{section:overview}
The high-level view of our data structures employs a well-known technique: exhaustively enumerate a strategic subset of the query space with a set of ``candidate'' query curves during preprocessing, and store the answers to these candidate queries in a dictionary. During query time, we apply a simple transformation to the query curve (such as rounding vertices to a scaled integer grid) and look up the answer in the dictionary. Filtser et al.~\cite{FiltserFK20} used this technique for the discrete Fr\'echet distance and Driemel and Psarros~\cite{DP21} showed that it can also be applied for the continuous Fr\'echet distance of one-dimensional curves. A particular challenge that appears in the continuous case is that the doubling dimension can be unbounded, even if the complexity of the curves is small. Intuitively, what can happen is that the query contains some small noise that appears in the middle of a long edge. The continuous Fr\'echet distance---being robust to this noise---may match these short edges to the interior of a long edge on the near neighbor input curve. However, we cannot afford to generate all possible noisy query curves of this type, since this would introduce a dependency on the arclength in our time and space bounds.
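The high-level technique can be sketched as follows. This toy Python version replaces the Fr\'echet distance by vertex-wise distance purely to keep the code short, and enumerates candidates naively; the actual data structures prune the candidate set far more carefully:

```python
from itertools import product

def build_dictionary(curves, delta, eps):
    """Toy candidate enumeration: for each input curve, insert
    every curve on the grid of width w = eps*delta whose i-th
    vertex lies within delta (up to rounding) of the i-th input
    vertex into a hash table keyed by the grid coordinates."""
    w = eps * delta
    table = {}
    for P in curves:
        ranges = []
        for v in P:
            lo = int((v - delta) // w)
            hi = int((v + delta) // w) + 1
            ranges.append(range(lo, hi + 1))
        for key in product(*ranges):
            table.setdefault(key, P)
    return table, w

def query(table, w, Q):
    """Round the query's vertices to the grid and look up."""
    return table.get(tuple(int(round(v / w)) for v in Q))
```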
Driemel and Psarros overcome this challenge with the use of signatures, which allow them to ``guess'' the approximate shape of a query curve within some approximation factor. The idea is that the signature acts as a ``low-pass'' filter that eliminates the noisy short edges. However, this is a delicate process, as the signature may eliminate too many edges on one of the curves (either on the near neighbor or on the query curve), leading to the near neighbor being missed during query time. In addition, the process may introduce false positives, hence the high approximation factor of $(5+\varepsilon)$ in the result of~\cite{DP21}.
We see our contributions as three-fold:
\begin{enumerate}
\item Our first contribution is to improve the approximation factors of Driemel and Psarros~\cite{DP21} while staying within the same time bounds, cf.~Table~\ref{table:1} for a comparison.
\begin{enumerate}
\item For Theorem~\ref{thm:threepluseps}, we use almost the same algorithm as Driemel and Psarros, but combine this with a more careful analysis based on new observations on the Fr\'echet distance of approximately monotone curves. As a result, we can achieve a $(3+\varepsilon)$-approximation within the same time bounds as the previous $(5+\varepsilon)$-ANN.
\item In Theorem~\ref{thm:onepluseps} we even achieve an approximation factor of $(1+\varepsilon)$ within the same time bounds as the previous $(2+\varepsilon)$-ANN. To achieve this result, we introduce the concept of straightenings\xspace{} in Section~\ref{section:lemmas}. Straightenings\xspace{} share some properties of signatures, but they provide a more refined approximation, leading to fewer false positives. They allow us to ``guess'' the shape of a query curve up to approximation factor $(1+\varepsilon)$.
\end{enumerate}
We derive useful properties of both signatures and straightenings\xspace{}. Central to our analysis is the concept of $\delta$-visiting orders, which we introduce in Section~\ref{section:lemmas} and analyze in Section~\ref{section:missingproofs}.
\item Our second contribution is a range of data structures for the $(2+\varepsilon)$-ANN which together provide a tradeoff between preprocessing time and query time (see Theorems~\ref{thm:twopluseps_one}, \ref{thm:twopluseps_two}, and~\ref{thm:twopluseps_three}). In each case, the preprocessing time implicitly bounds the number of candidates that are generated and therefore the size of the dictionary used by the data structure. Thus, these data structures also achieve a tradeoff between space and query time. An important observation that leads to this result is that the enumeration of candidates can be ``dualized'' and then be shifted from the preprocessing time to the query time. In the extreme case, this allows us to design a data structure that has linear preprocessing time and space, by performing most of the candidate generation during query time, see Theorem~\ref{thm:twopluseps_three} for the exact result.
\item Given the diverse range of upper bounds, it is natural to ask if these bounds can be improved. Our third main contribution is to show that most of our upper bounds are tight under the Orthogonal Vectors Hypothesis.
All known OV-based hardness results for the Fr\'echet distance encode each of the dimensions using at least one vertex, thus transforming $d$-dimensional vectors into curves of length $k=\Omega(d)$. Since OVH postulates a lower bound in dimension $d = c \log n$, it is natural to prove OV-based lower bounds for curves of length $k=c \log n$. Our lower bound in Theorem \ref{thm:3minuseps_lb_1d} handles this setting, cf.~Table~\ref{table:2}.
However, for some of our lower bounds we require $k = o(\log n)$, as this is necessary to rule out time $(m/k)^{o(k)}$.
Surprisingly, we overcome the barrier of using at least one vertex per dimension. Specifically, we prove OV-based lower bounds for any $1 \ll k \ll \log n$, see Theorem~\ref{thm:2minusepshard1d}.
For this, we use two crucial observations: (i) it is possible to only encode the 1s of one vector set, while the 0s do not require any additional vertices on the curve, and (ii) we can show hardness of a variant of OV where one set contains only sparse vectors, i.e., vectors with a very small number of 1s. See Theorem \ref{thm:2minusepshard1d} for the hardness result we obtain in this case.
Interestingly, a similar construction is also possible for $(3-\varepsilon)$-ANN for two-dimensional curves, see Theorem \ref{thm:3minuseps_lb_2d}.
\end{enumerate}
\paragraph{Organization}
In Section~\ref{section:prelims} we define the notation and state some known facts and observations. In Section~\ref{section:lemmas} we define key concepts, and we present their properties and our main technical lemmas.
Our data structures are described and analyzed in Sections \ref{section:datastructure1apprx}, \ref{section:datastructure2apprx}, and \ref{section:datastructure3apprx}.
In Section~\ref{section:missingproofs} we prove our main technical lemmas.
In Section~\ref{section:lowerbounds} we present our conditional lower bounds.
\section{Preliminaries}\label{section:prelims}
For any positive integer $n$, we define $[n] \coloneqq \{1,\ldots,n\}$. For any two points $p,q\in \mathbb{R}^d$, $\overline{pq}$ denotes the directed line segment connecting $p$ with $q$ in the direction from $p$ to $q$.
Any sequence of points $p_1,\ldots,p_m \in \mathbb{R}^d$ defines a polygonal curve formed by the ordered line segments $\overline{p_i p_{i+1}}$. We call the points $p_i$ the \emph{vertices} of the curve and the line segments $\overline{p_i p_{i+1}}$ the \emph{edges}. The resulting curve can be viewed as a continuous function $P\colon [0,1]\mapsto \mathbb{R}^d$. For $d=1$, we may refer to the curve as a \emph{one-dimensional curve} or as a \emph{time series}. We define the \emph{complexity} of a polygonal curve $P$ as the number of its vertices and we denote it by $|P|$.
We say a polygonal curve is \emph{degenerate} if there are three consecutive vertices $p,q,r$, such that $q$ lies on the line segment $\overline{p r}$. In this case, we call $q$ a degenerate vertex of this curve.
Given a sequence of points $p_1,\ldots,p_m$, we can define a non-degenerate curve by omitting degenerate vertices. We denote the resulting curve by $\seqtocurve{ p_1,\ldots,p_m}$. Note that for one-dimensional curves, the vertices of the resulting non-degenerate curve are the extrema of the function.
For any two $0\leq t_a<t_b\leq1$ and any curve $P$, we denote by $P[t_a,t_b]$ the subcurve of $P$ starting at $P(t_a)$ and ending at $P(t_b)$. For any two curves $P$, $Q$, with vertices $p_1,\ldots,p_a$ and $p_b,\ldots,p_m$, respectively,
$P \circ Q $ denotes the polygonal curve $\seqtocurve{ p_1,\ldots,p_a,p_b, \ldots, p_m } $, that is, the \emph{concatenation} of $P$ and $Q$. For $n$ polygonal curves $P_1,\ldots,P_n$, we denote by $\bigcirc_{i=1}^n P_i$ the concatenation $P_1\circ P_2 \circ \cdots \circ P_n$.
Given a polygonal curve $P = \seqtocurve{p_1, \dots, p_m}$ and a point $\tau$ in $\mathbb{R}^d$, we define the translated curve as $P + \tau \coloneqq \seqtocurve{p_1 + \tau, \dots, p_m + \tau}$.
For a point $x\in \mathbb{R}^d$ and a polygonal curve $P$, we use the notation $x \in P$ to indicate that there exists a $t\in[0,1]$ such that $P(t)=x$. Let ${\mathcal G}_{\varepsilon}:=\{i\cdot \varepsilon \mid i\in {\mathbb Z}\}$ be the regular grid with side-length $\varepsilon>0$.
We will use the following known observations~(see also \cite{buchin2008computing} and \cite{driemel2013jaywalking}).
\begin{observation}\label{observation:linesegment}
For any two line segments $X=\overline{ab}$, $Y=\overline{cd}$ it holds that
$\df(X,Y)=\max\{\|a-c\|,\|b-d\|\}$.
\end{observation}
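As a quick numerical illustration (this sketch is not part of the formal development), Observation~\ref{observation:linesegment} can be checked for one-dimensional segments: the distance along the linear matching is convex in the parameter and hence attains its maximum at an endpoint.

```python
# Illustrative check of the observation for one-dimensional segments:
# the Frechet distance of two segments X = ab, Y = cd equals the larger
# endpoint distance, since |(a + t(b-a)) - (c + t(d-c))| is convex in t.
def seg_frechet(a, b, c, d):
    return max(abs(a - c), abs(b - d))

def sampled_linear_matching(a, b, c, d, steps=1000):
    # brute-force maximum of the distance along the linear matching
    return max(abs((a + t / steps * (b - a)) - (c + t / steps * (d - c)))
               for t in range(steps + 1))
```

Sampling the linear matching agrees with the closed-form value, in line with the convexity argument.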
\begin{observation}
\label{observation:concatenation}
Let two polygonal curves $Q : [0, 1]\mapsto \mathbb{R}^d$ and
$P : [0, 1] \mapsto \mathbb{R}^d$ be the concatenations of two subcurves
each, $Q = Q_1\circ Q_2$ and $P= P_1\circ P_2$. Then it holds that
$\df (P, Q) \leq \max \{\df (Q_1, P_1), \df (Q_2, P_2)\}$.
\end{observation}
\begin{observation}\label{observation:shortcut}
Let $Q$ be a line segment and let $P$ be a curve with $\df(P,Q) \leq \delta$. Let $P'$ be a curve that is formed from a subsequence of the vertex sequence of $P$ including the first and last vertex of $P$. Then, $\df(P',Q) \leq \delta$.
\end{observation}
We also make use of an algorithm by Alt and Godau~\cite{AltG95} for deciding whether the Fr\'echet distance between two polygonal curves exceeds a given threshold.
\begin{theorem}[\cite{AltG95}]\label{theorem:frechetdecision}
There is an algorithm which, given polygonal curves $P$, $Q$ and a threshold parameter $\delta>0$, decides in $\ensuremath{\mathcal{O}}(|P|\cdot |Q|)$ time whether $\df(P,Q)\leq \delta$.
\end{theorem}
Our data structures can be implemented to work on the Word-RAM and under certain assumptions on the Real-RAM, as discussed next. Central to our approach is the use of a dictionary, which we define as follows.
\begin{definition}[Dictionary]
A \emph{dictionary} is a data structure which stores a set of (key, value) pairs and when presented with a key, either returns the corresponding value, or returns that the key is not stored in the dictionary.
\end{definition}
In the Word-RAM model, such a dictionary can be implemented using perfect hashing. For storing a set of $n$ (key,value) pairs, where the keys come from a universe $U^k$, perfect hashing provides us with a dictionary using $\ensuremath{\mathcal{O}}(n)$ space and $\ensuremath{\mathcal{O}}(k)$ query time which can be constructed in $\ensuremath{\mathcal{O}}(n)$ expected time~\cite{FKS84}. During look-up, we compute the hash function in $\ensuremath{\mathcal{O}}(k)$ time, we access the corresponding bucket in the hashtable in $\ensuremath{\mathcal{O}}(1)$ time and check if the key stored there is equal to the query in $\ensuremath{\mathcal{O}}(k)$ time. This gives an efficient randomized implementation of dictionaries. Alternatively, we can use balanced binary search trees and pay an additional $\log n$ factor in preprocessing and query time of the dictionary. This deterministic algorithm also works in the Real-RAM model, if we assume that the floor function can be computed in constant time---a model which is often used in the literature~\cite{H11}.
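In a high-level language, such a dictionary can be sketched with a built-in hash map keyed by tuples of snapped vertex coordinates. The sketch below is purely illustrative (the names are ours, and our analysis assumes perfect hashing or balanced search trees as described above).

```python
# Minimal dictionary over curve keys, illustrating the role the dictionary
# plays in our data structures. A Python dict over tuples of grid indices
# stands in for the perfect-hashing dictionary (illustrative names only).
def grid_key(vertices, cell):
    # snap each vertex to the regular grid with side length `cell`
    return tuple(int(v // cell) for v in vertices)

store = {}

def insert(curve_vertices, cell, value):
    # keep the first value stored under a key, as in the preprocessing
    store.setdefault(grid_key(curve_vertices, cell), value)

def lookup(curve_vertices, cell):
    return store.get(grid_key(curve_vertices, cell))  # None = not stored
```

Keys are compared componentwise, which is why a probe costs time proportional to the key length $k$.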
In the Word-RAM model, we use the standard assumption that the word size is logarithmic in the size of the input, and we ensure that all numbers (vertices of the time series, results of intermediate computations, etc.) are restricted to be of the form $a/b$ where $a$ is an integer in $[-(nm)^{\ensuremath{\mathcal{O}}(1)},(nm)^{\ensuremath{\mathcal{O}}(1)}]$ and $b=(nm)^{\ensuremath{\mathcal{O}}(1)}$.
\section{Simplifications, signatures, and straightenings\xspace}\label{section:lemmas}
In this section we state the main definitions and lemmas that we use to describe our algorithms and prove their correctness. To make our results easier to follow, we use these lemmas as black boxes in the algorithmic sections and defer their proofs to Section~\ref{section:missingproofs}.
\subsection{Definitions}
Let us start with two basic definitions.
\begin{definition}
We say a curve $P:[0,1]\rightarrow \mathbb{R}$ is \emph{$\delta$-monotone\xspace{}} if one of the following statements holds:
\begin{compactenum}[(i)]
\item $\forall~ t<t'\in [0,1]:$ $P(t')\geq P(t) -\delta $,
\item $\forall~ t<t'\in [0,1]:$ $P(t')\leq P(t) +\delta $.
\end{compactenum}
More specifically, we say the curve is $\delta$-\emph{monotone increasing} in case (i) and $\delta$-\emph{monotone decreasing} in case (ii). Note that a curve can be both $\delta$-monotone increasing and decreasing at the same time. In addition, we may say $P$ is $\delta$-monotone with respect to a directed edge $\overline{ab}$, if $a \leq b$ in case (i) and if $b \leq a$ in case (ii).
\end{definition}
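Since a one-dimensional polygonal curve attains its extreme values at vertices, $\delta$-monotonicity can be decided from the vertex sequence alone. The following linear-time sketch illustrates this (an aid to intuition, not used in our algorithms).

```python
# Checking delta-monotonicity of a 1D polygonal curve via its vertices
# (illustrative helper). Condition (i) fails exactly when some earlier
# vertex exceeds a later vertex by more than delta.
def is_delta_monotone_increasing(vertices, delta):
    running_max = float("-inf")
    for v in vertices:
        if running_max - v > delta:  # an earlier point exceeds v by > delta
            return False
        running_max = max(running_max, v)
    return True

def is_delta_monotone(vertices, delta):
    decreasing = is_delta_monotone_increasing([-v for v in vertices], delta)
    return is_delta_monotone_increasing(vertices, delta) or decreasing
```

The running maximum tracks the largest value seen so far, so each vertex is compared against all earlier points implicitly.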
\begin{definition}
The $\delta$-range of a point $p \in \mathbb{R}$ is the interval $B(p,\delta)=[p-\delta,p+\delta]$. The $\delta$-range of a curve $P$ is the interval $B(P,\delta)=\bigcup_{x\in P} B(x,\delta)$.
\end{definition}
We now define the notion of \emph{simplification} that we use in this work.
\begin{definition}[$\delta$-simplification]
Given a curve $ P:~ [0, 1] \mapsto \mathbb{R}^d$, a $\delta$-simplification is a curve $P' :~ [0, 1] \mapsto \mathbb{R}^d$ that is given as $P'= \langle P(t_1),\dots,P(t_{\ell}) \rangle$ for a sequence of values $0=t_1 < \dots < t_{\ell}=1$, such that each $P(t_i)$ is a vertex of $P$,
$P'$ is non-degenerate, and
\begin{equation}\label{locality-property}
\df(\overline{P(t_i) P(t_{i+1})}, P[t_i,t_{i+1}]) \leq \delta,\ \text{for all } 1 \leq i < \ell.
\end{equation}
\end{definition}
We also refer to (\ref{locality-property}) as the \emph{locality property}. Furthermore, note that if $P'$ is a $\delta$-simplification of $P$, then $\df(P,P') \leq \delta$ and the complexity of $P'$ is at most the complexity of $P$.
Note that the vertices of a $\delta$-simplification $P'$ give us a natural partition of $P$. Furthermore, we want to highlight that our definition of a simplification is one out of many used in the literature. In particular, in other works, curves that are degenerate or not vertex-restricted are also called simplifications.
We now define several properties of simplifications; the first one holds for every $\delta$-simplification, while the others are additional requirements.
\begin{observation}[direction-preserving property]
For any $\delta$-simplification $P' = \langle P(t_1),\dots,P(t_{\ell}) \rangle$ of a curve $ P:~ [0, 1] \mapsto \mathbb{R}$ and any index $i$, the subcurve $P[t_i,t_{i+1}]$ is $2\delta$-monotone with respect to $\overline{P(t_i)P(t_{i+1})}$.
\end{observation}
\begin{definition}[vertex-range-preserving property]
Let $P' = \langle P(t_1),\dots,P(t_{\ell}) \rangle$ be a $\delta$-simplification of a curve $P:~[0,1]\mapsto \mathbb{R}$.
We say $P'$ is vertex-range-preserving on the vertex $P(t_i)$ if the following holds:
\begin{enumerate}[(i)]
\item if $P(t_i)$ is a local maximum on $P'$, then $P(t) \leq P(t_i)$ for all $t$ in $[t_{i-1},t_{i+1}]$, and
\item if $P(t_i)$ is a local minimum on $P'$, then $P(t) \geq P(t_i)$ for all $t$ in $[t_{i-1},t_{i+1}]$.
\end{enumerate}
We say $P'$ is vertex-range-preserving, if it is vertex-range-preserving on all interior vertices.
\end{definition}
\begin{definition}[edge-range-preserving property]
Let $P' = \langle P(t_1),\dots,P(t_{\ell}) \rangle$ be a $\delta$-simplification of $P:~[0,1]\mapsto \mathbb{R}$.
We say that $P'$ is edge-range-preserving on edge $\overline{P(t_i)P(t_{i+1})}$ if for any $t \in [t_i, t_{i+1}]$ it holds that $P(t) \in \overline{P(t_i)P(t_{i+1})}$. We say $P'$ is edge-range-preserving if this condition holds for all edges of $P'$.
\end{definition}
Note that the vertex-range-preserving property is implied by the edge-range-preserving property, but not the other way around. However, the vertex-range-preserving property implies the edge-range-preserving property on all edges except the first and the last edge.
\begin{definition}[$\delta$-edge-length property]
We say that a one-dimensional curve $P = \langle p_1, \dots, p_m \rangle$ has the $\delta$-edge-length property if
\begin{itemize}
\item $|p_1 - p_2| > \delta$ and $|p_{m-1} - p_m| > \delta$, and
\item $|p_i - p_{i+1}| > 2\delta$ for all $i \in \{2,\ldots,m-2\}$.
\end{itemize}
\end{definition}
Finally, we can define two of the main concepts that we use in our algorithms: $\delta$-signatures and $\delta$-straightenings\xspace. These two definitions help us to preprocess the input set of one-dimensional curves and the query curve in ways such that an efficient retrieval is possible.
\begin{definition}[$\delta$-signature]
A $\delta$-simplification $P'$ of a one-dimensional curve $P$ is a $\delta$-signature if it has the $\delta$-edge-length property and is vertex-range-preserving.
\end{definition}
\begin{definition}[$\delta$-straightening\xspace]
A $\delta$-simplification $P'$ of a one-dimensional curve $P$ is a $\delta$-straightening\xspace if it is edge-range-preserving.
\end{definition}
The above definition of a $\delta$-signature is equivalent to the definition given in~\cite{DKS16}.
For any $\delta>0$ and any curve $P:~[0,1]\mapsto \mathbb{R}$ of complexity $m$, a $\delta$-signature of $P$ can be computed in $\ensuremath{\mathcal{O}}(m)$ time \cite{DKS16}.
The $\delta$-signature of a curve is unique under certain general-position assumptions; however, we do not explicitly use this property in our proofs.
Note that $\delta$-straightenings\xspace are \emph{not} unique. In fact, there can be many different $\delta$-straightenings\xspace of the same curve, e.g., $P$ itself is a $\delta$-straightening\xspace of $P$ for any $\delta > 0$.
We give an example of a signature and different straightenings\xspace of the same curve in Figure~\ref{fig:defexamples}.
\begin{figure}
\centering
\includegraphics[scale=1.1]{figures/example_definitions.pdf}
\caption{$P_1$ is a $1$-signature of $P_0$, whereas $P_2$ and $P_3$ are $1$-straightenings\xspace of $P_0$.}
\label{fig:defexamples}
\end{figure}
We introduce the notion of visiting orders, which we will use to prove correctness of our data structures.
\begin{definition}\label{def:visitingorder}
Let $P:[0,1] \rightarrow \mathbb{R}$ and $Q:[0,1] \rightarrow \mathbb{R}$ be curves. Let $u_1,\dots,u_{\ell}$ denote the ordered vertices of $Q$ and let $v_1,\dots,v_{m}$ denote the ordered vertices of $P$. A (partial) \textbf{$\delta$-visiting order} of $Q$ on $P$ is a sequence of indices $i_1 \leq \dots \leq i_{\ell}$, such that $|u_j - v_{i_{j}}| \leq \delta$ for each vertex $u_{j}$ of $Q$.
\end{definition}
In particular, if we know that there exists a $\delta$-visiting order of $Q$ on $P$, then we can approximately ``guess'' $Q$ from the vertex sequence of $P$, by enumerating all possible visiting orders of the vertices of $P$ and for any fixed visiting order, enumerating all eligible grid sequences within the $\delta$-ranges of these vertices.
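The existence of a $\delta$-visiting order can be decided by a greedy left-to-right scan: match each vertex of $Q$ to the earliest vertex of $P$, at or after the previously used one, within distance $\delta$. The sketch below illustrates this (it is an aid to intuition and not part of our data structures).

```python
# Deciding whether a delta-visiting order of Q on P exists (illustrative
# helper). Greedily match each vertex of Q to the earliest usable vertex
# of P; since the indices must be nondecreasing, staying as far left as
# possible is always safe (a standard exchange argument).
def has_visiting_order(q_vertices, p_vertices, delta):
    i = 0
    for u in q_vertices:
        while i < len(p_vertices) and abs(u - p_vertices[i]) > delta:
            i += 1
        if i == len(p_vertices):
            return False
        # keep i: the next vertex of Q may reuse the same vertex of P
    return True
```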
Driemel, Krivosija and Sohler proved the following lemma (rephrased using $\delta$-visiting orders).
\begin{lemma}[Lemma 3.2 \cite{DKS16}]\label{lemma:signatures2}
Let $P:[0,1] \rightarrow \mathbb{R}$ and $Q:[0,1] \rightarrow \mathbb{R}$ be curves and let $P'$ be a $\delta$-signature of $P$. If $\df (P,Q) \leq \delta$, then there exists a $\delta$-visiting order of $P'$ on $Q$.
\end{lemma}
\subsection{Main lemmas}
In this section we present the main lemmas for signatures and straightenings\xspace that we will use in Sections~\ref{section:datastructure1apprx}~to~\ref{section:datastructure3apprx}. Their proofs are deferred to Section~\ref{section:missingproofs}.
Most of our lemmas improve the basic triangle inequality $\df(P,Q) \le \df(P,X) + \df(X,Q)$ in some situations involving signatures and straightenings.
\begin{restatable}{lemma}{lemmastraightenings}\label{lemma:simplproxy}\label{lemmastraightenings}
Let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be two curves and let $Q'$ be any $\delta$-straightening\xspace of $Q$. If $\df(P,Q') \leq \delta$ then $\df(P,Q)\leq \delta$.
\end{restatable}
We would like to show an equivalent statement of Lemma~\ref{lemma:simplproxy} for signatures. However, as the example in Figure~\ref{fig:counterexample} shows, this is not possible. Instead, we show a slightly weaker bound in the following lemma.
\begin{restatable}{lemma}{lemmasignatureproxy}\label{lemma:signatureproxy}\label{lemmasignatureproxy}
Let $\delta=\delta'+\delta''$ for $\delta,\delta',\delta''\geq 0$ and let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be two curves. Let $Q'$ be any $\delta'$-signature of $Q$. If $\df(Q',P) \leq \delta$, $|Q(0)-P(0)| \leq \delta''$, and $|Q(1)-P(1)| \leq \delta''$, then $\df(P,Q)\leq \delta$.
\end{restatable}
Note that Lemma~\ref{lemma:simplproxy} is much stronger than what we would get by merely applying the triangle inequality on the Fr\'echet distances on the curves $P$, $Q$ and $Q'$. Lemma~\ref{lemma:signatureproxy}, although weaker, is still stronger than the bound we would get from the triangle inequality.
To illustrate this, we include the following corollary. Note that merely using the triangle inequality would yield $\df(P,Q) \le 6\delta$, instead of $\df(P,Q) \le 3\delta$.
\begin{corollary}
For one-dimensional curves $P,Q$ let $P'$ be a $\delta$-signature of $P$, and let $Q'$ be a $2\delta$-signature of $Q$. If $\df(P',Q') \le 3\delta$ and $|P'(0)-Q'(0)| \le \delta, |P'(1)-Q'(1)| \le \delta$, then $\df(P,Q) \le 3\delta$.
\end{corollary}
\begin{proof}
This follows by applying Lemma~\ref{lemma:signatureproxy} twice.
We first apply the lemma to $P'$, $Q'$ and $P$ and obtain $\df(P,Q')\leq 3\delta$. In the second step, we apply the lemma to $P$, $Q'$ and $Q$ and obtain $\df(P,Q)\leq 3\delta$.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=1.1]{counterexample.pdf}
\caption{This example shows that an equivalent statement of Lemma~\ref{lemma:simplproxy} for signatures is not true. The curve $X=\seqtocurve{-1,2}$ is a $1$-signature of $Q=\seqtocurve{-1,-2,2}$ and the curve $P=\seqtocurve{0,1,-1,2}$ has Fr\'echet distance $1$ to $X$, but the Fr\'echet distance of $P$ to $Q$ is $2$.}
\label{fig:counterexample}
\end{figure}
The following lemma is used to show correctness for our $(1+\varepsilon)$-ANN and $(2+\varepsilon)$-ANN.
\begin{restatable}{lemma}{lemmagoodsimplificationranges}
\label{lemma:goodsimplificationranges2}
Let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be curves such that $\df(Q,P)\leq \delta$. Then there exists a $\delta$-straightening\xspace $Q'$ of $Q$ which satisfies the following properties:
\begin{compactenum}[(i)]
\item there exists a $11\delta$-visiting order of $Q'$ on $P$, and
\item $\df(Q',P)\leq \delta$.
\end{compactenum}
\end{restatable}
We use the following lemma to show correctness for our $(3+\varepsilon)$-ANN. One part of the lemma statement, the existence of a $2\delta$-visiting order, was already used in~\cite{DP20}. However, the resulting approximation factor of the ANN obtained there was $(5+\varepsilon)$. In order to show correctness of our $(3+\varepsilon)$-ANN, it is necessary to prove the bound of $3\delta$ on the resulting Fr\'echet distance of the two signature curves. Note that the triangle inequality implies a bound of $4\delta$---which would not be sufficient for us.
\begin{restatable}{lemma}{lemmasignatures}
\label{lemma:signatures3}
For one-dimensional curves $P,Q$ let $P'$ be a $\delta$-signature of $P$, and let $Q'$ be a $2\delta$-signature of $Q$. If $\df(P,Q) \le \delta$ then $\df(P',Q') \le 3\delta$ and there exists a $2\delta$-visiting order of $Q'$ on~$P'$.
\end{restatable}
\section{$(1+\varepsilon)$-Approximation}
\label{section:datastructure1apprx}
In this section, we show that there exists a $(1+\varepsilon)$-ANN data structure for one-dimensional curves under the Fr\'echet distance, with space in $n\cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$, expected preprocessing time in $nm\cdot \ensuremath{\mathcal{O}}(\frac{m}{k\varepsilon})^k$ and query time in $\ensuremath{\mathcal{O}}(k \cdot 2^k)$. We describe the data structure in Section~\ref{subsection:datastructure1apprx_ds} and we analyze its performance in Section~\ref{subsection:datastructure1apprx_an}.
\subsection{The data structure}
\label{subsection:datastructure1apprx_ds}
\paragraph*{Data structure}
We are given as input a set of one-dimensional curves $\mathcal{P}$, as sequences of vertices, the distance threshold $\delta>0$, the approximation error $\varepsilon>0$, and the complexity of the supported queries $k$. To discretize the query space, we use the grid ${\mathcal G}_{\varepsilon\delta/2}$ (recall that ${\mathcal G}_{\varepsilon}:=\{i\cdot \varepsilon \mid i\in {\mathbb Z}\}$ is the regular grid with side-length $\varepsilon$). Let ${\mathcal H}$ be a dictionary which is initially empty.
For each input one-dimensional curve $P \in \mathcal{P}$ we compute a set ${\mathcal C}':={\mathcal C}'(P)$ which contains all curves $Q$ such that:
\begin{inparaenum}[i)]
\item $Q$ has complexity at most $k$,
\item all vertices of $Q$ belong to ${\mathcal G}_{\varepsilon\delta/2}$, and
\item there is an $((11+\varepsilon/2)\delta)$-visiting order of $Q$ on $P$.
\end{inparaenum}
Formally,
\begin{multline*}
{\mathcal C}'=\{ \seqtocurve{u_1,\ldots,u_{\ell}} \mid \ell \leq k \text{ and }\exists (i_1,\ldots,i_{\ell}) ( i_1\leq \dots \leq i_{\ell} \text{ and }\\ (\forall j\in [\ell]) (u_j \in B(p_{i_j},(11+\varepsilon/2)\delta)\cap {\mathcal G}_{{\varepsilon\delta}/{2}})) \}.
\end{multline*}
Next, we filter ${\mathcal C}'$ to obtain
the set ${\mathcal C}(P)=\{Q\in {\mathcal C}' \mid \df(Q,P)\leq (1+\varepsilon/2)\delta \}$. We store ${\mathcal C}(P)$ in ${\mathcal H}$ as follows: for each $Q\in {\mathcal C}(P)$,
if $Q$ is not already stored in ${\mathcal H}$, then we insert $Q$
into ${\mathcal H}$, associated with a pointer to $P$.
The complete pseudocode for the preprocessing algorithm can be found in Algorithms~\ref{alg:generateorders} and \ref{alg:preprocessing1apprx}. To achieve approximation factor $(1+\varepsilon)$, we run \texttt{preprocess}$(P,\delta,\varepsilon/2,k)$.
\paragraph*{Query algorithm}
Let $Q$ be the query curve with vertices $q_1,\ldots,q_k$ and let $\varepsilon>0$ be the approximation error. The query algorithm first enumerates all curves $Q'$ such that $Q' \in \{\seqtocurve{q_1,S,q_k} \mid \text{$S$ is a subsequence of $q_2,\ldots,q_{k-1}$} \}.$ For each such $Q'$ we test whether it is a $\delta$-straightening\xspace of $Q$. To this end, we first test if each shortcut taken in $Q'$ is within distance $\delta$ from the corresponding subcurve of $Q$. Then we check for each shortcut if the corresponding subcurve of $Q$ stays within range by testing all vertices of the subcurve one by one.
If $Q'$ is a $\delta$-straightening\xspace of $Q$, then we snap the vertices of $Q'$ to ${\mathcal G}_{\varepsilon\delta/2}$, to obtain a new curve $Q''$ and we
probe ${\mathcal H}$: if $Q''$ is stored in ${\mathcal H}$, then we return its associated input curve $P\in \mathcal{P}$. If $Q''$ is not stored in ${\mathcal H}$, then we return ``no''.
The complete pseudocode for the query algorithm can be found in
Algorithm~\ref{alg:query1apprx}. To achieve approximation factor $(1+\varepsilon)$, we run \texttt{query}$(Q,\delta,\varepsilon/2)$.
\label{section:pseudocode1apprx}
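The enumeration step of the query can be sketched compactly: the first and last vertex of the query are always kept, each subset of interior vertices yields one candidate (at most $2^{k-2}$ in total), and snapping follows the floor rule of Algorithm~\ref{alg:query1apprx}. The following Python sketch is for illustration only; the names are ours.

```python
from itertools import combinations

# Enumerate the candidate vertex subsequences used by the query algorithm:
# keep the first and last vertex, drop any subset of interior vertices.
# This yields at most 2^(k-2) candidates for a query of complexity k.
def candidate_subsequences(q):
    k = len(q)
    interior = range(1, k - 1)
    for r in range(k - 1):  # r = number of interior vertices kept
        for chosen in combinations(interior, r):
            yield [q[0]] + [q[i] for i in chosen] + [q[-1]]

def snap(curve, cell):
    # snap each vertex to the grid with side length `cell` (floor rule)
    return [int(v // cell) * cell for v in curve]
```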
\begin{algorithm}
\caption{A call to \texttt{generate\_orders}($m$, $k$)
returns all $(i_1,\ldots,i_{\ell}) \in [m]^\ell$ with $2\leq \ell\leq k$ and $1=i_1\leq \cdots \leq i_{\ell}=m$. We assume $k \ge 2$.\label{alg:generateorders}}
\begin{algorithmic}[1]
\Procedure{\texttt{generate\_orders}}{$m \in {\mathbb N}$, $k \in {\mathbb N}$}
\State ${\mathcal I}_2\gets \{(1,m)\}$
\For{\textbf{each} $\ell = 3,\ldots, k$}
\State ${\mathcal I}_{\ell}\gets \emptyset$
\For{\textbf{each} $(i_1,\ldots,i_{\ell-1})\in {\mathcal I}_{\ell-1}$} \Comment{$i_{\ell-1}=m$}
\For{\textbf{each} $j=i_{\ell-2},\ldots,m$}
\State \label{alg:generate_orders:update} ${\mathcal I}_{\ell} \gets {\mathcal I}_{\ell} \cup
\{(i_1,\ldots,i_{\ell-2},j,m)\}$
\EndFor
\EndFor
\EndFor
\State \Return $\bigcup_{2 \le \ell \le k} {\mathcal I}_{\ell}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Preprocessing algorithm. We call \texttt{preprocess} to build the data structure. \label{alg:preprocessing1apprx}}
\begin{algorithmic}[1]
\Procedure{\texttt{preprocess}}{input set ${\mathcal P}$, $\delta>0$, $\varepsilon>0$, $k \in {\mathbb N}$}
\State Initialize empty dictionary ${\mathcal H}$
\For {{\bf each} $P \in {\mathcal P}$}
\State ${\mathcal C}(P)\gets$ \texttt{generate\_keys}$(P, \delta , \varepsilon, k)$
\For{{\bf each} $Q\in {\mathcal C}(P)$}
\If{$Q$ not in ${\mathcal H}$}
\State insert key $Q$ in ${\mathcal H}$, associated with a pointer to $P$
\EndIf
\EndFor
\EndFor
\EndProcedure
\Procedure{\texttt{generate\_keys}}{curve $P$, $\delta>0$, $\varepsilon>0$, $k \in {\mathbb N}$}
\State ${\mathcal C}'\gets$\texttt{generate\_candidates}$(P,\delta, (11+\varepsilon),\varepsilon,k)$
\State ${\mathcal C} \gets \emptyset$
\For{{\bf each} $Q\in {\mathcal C}'$}
\If{$\df( P , {Q} )\leq (1+\varepsilon)\delta$}
\State ${\mathcal C} \gets {\mathcal C} \cup \{ Q\}$
\EndIf
\EndFor
\State {\bf return} ${\mathcal C} $
\EndProcedure
\Procedure{\texttt{generate\_candidates}}{curve $P$ with vertices $p_1,\ldots,p_m $, $\delta>0$, $r>0$, $\varepsilon>0$, $k \in {\mathbb N}$}
\State $\mathcal{S}\gets \emptyset$, ${\mathcal C}'\gets \emptyset$
\State $\mathcal{I}\gets$\texttt{generate\_orders}$(m,k)$
\For{\textbf{each} $(i_1,\ldots,i_{\ell}) \in \mathcal{I}$}
\State $\mathcal{S}\gets \mathcal{S}\cup \prod_{j=1}^{\ell} B(p_{i_j},r\delta)\cap {\mathcal G}_{\varepsilon\delta}$\label{algoline:candidates}
\EndFor
\For{\textbf{each} $\sigma \in {\mathcal S}$}
\State ${\mathcal C}'\gets {\mathcal C}'\cup \{ \seqtocurve{\sigma} \} $
\EndFor
\State \Return ${\mathcal C}'$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Query algorithm \label{alg:query1apprx}}
\begin{algorithmic}[1]
\Procedure{\texttt{query}}{curve $Q$ with vertices $q_1,\ldots,q_k$, $\delta>0$, $\varepsilon>0$}
\State $\mathcal{I}\gets$\texttt{generate\_orders}$(k,k)$
\For{\textbf{each} $(i_1,\ldots,i_{\ell})\in \mathcal{I}$}
\State $flag \gets 1 $
\For{$j=1,\ldots,\ell-1$}
\If{$\df(\overline{q_{i_j}q_{i_{j+1}}},\seqtocurve{ q_{i_j},\ldots,q_{i_{j+1}}})> \delta$}\Comment{test $\delta$-simplification property}
\State $flag \gets 0$
\EndIf
\For{\textbf{each} $t=i_j,\ldots , i_{j+1}$}
\Comment{test edge-range-preserving property}
\If{$q_t \notin \overline{q_{i_j}q_{i_{j+1}}}$}
\State $flag \gets 0$
\EndIf
\EndFor
\EndFor
\If{$flag =1$}
\State $Q' \gets \seqtocurve{{q_{i_1}}, \ldots, {q_{i_{\ell}}}} $ \Comment{a $\delta$-straightening\xspace of $Q$}
\State $Q'' \gets \seqtocurve{\left \lfloor \frac{q_{i_1}}{\varepsilon\delta} \right\rfloor\cdot (\varepsilon\delta), \ldots, \left \lfloor \frac{q_{i_{\ell}}}{\varepsilon\delta} \right\rfloor\cdot (\varepsilon\delta) }$ \Comment{snap $Q'$ to ${\mathcal G}_{\varepsilon\delta}$}
\If{$Q''$ in ${\mathcal H}$}
\State \Return input curve $P$ associated with $Q''$ in ${\mathcal H}$
\EndIf
\EndIf
\EndFor
\State \Return ``no''
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Analysis}
\label{subsection:datastructure1apprx_an}
In this section, we analyze the performance of our data structure.
\begin{lemma}
\label{lemma:numberofcandidates}
For any curve $P$ with vertices $p_1,\ldots,p_m$, $\delta>0$, $\varepsilon>0$, $r\geq\varepsilon$, $k\in {\mathbb N}$, the procedure \texttt{generate\_candidates}$(P,\delta, r,\varepsilon,k)$ has running time in \[ {{m+k-2}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{ r}{\varepsilon}\right)^k.\]
\end{lemma}
\begin{proof}
The set $\mathcal{I}$ contains all
sequences of indices $(i_1,\ldots,i_{\ell}) \in [m]^{\ell}$ such that $2\leq \ell\leq k$ and $1=i_1\leq \cdots \leq i_{\ell}=m$.
Let ${\mathcal I}_{\ell}$ be the subset of ${\mathcal I}$ containing the sequences of length $\ell$ as denoted in \texttt{generate\_orders}.
We first claim that \texttt{generate\_orders}$(m,k)$ runs in time $\ensuremath{\mathcal{O}}(|{\mathcal I}| \cdot k)$. To see that, consider any sequence of indices $s \in {\mathcal I}$. During the execution of \texttt{generate\_orders}, $s$ is added to the sets of indices (Line~\ref{alg:generate_orders:update}) only once. This step costs $\ensuremath{\mathcal{O}}(k)$, therefore the running time of
\texttt{generate\_orders}$(m,k)$ is in $\ensuremath{\mathcal{O}}(|{\mathcal I}| \cdot k)$. Now,
let ${\mathcal S}'$ be a multiset which contains all sequences (including duplicates) which are generated and inserted to ${\mathcal S}$ in all executions of Line~\ref{algoline:candidates} of \texttt{generate\_candidates}.
The running time of \texttt{generate\_candidates}$(P,\delta,r,\varepsilon,k)$ is upper bounded by $\ensuremath{\mathcal{O}}(|{\mathcal S}'|\cdot k)$, because $|{\mathcal S}'|\geq |{\mathcal I}| $ and
computing ${\mathcal C}'$ costs $\ensuremath{\mathcal{O}}(|{\mathcal S}'|\cdot k)$ time. We proceed by showing an upper bound on $|{\mathcal S}'|$.
Any sequence $(x_1,\ldots,x_{\ell})\in {\mathcal G}_{\varepsilon\delta}^{\ell}$, which is included in ${\mathcal S}'$, may appear in the computation taking place in Line~\ref{algoline:candidates} multiple times: once for each sequence of indices $(i_1,\ldots,i_{\ell})\in \mathcal{I}$ such that for each $j\in [\ell]$, $x_j\in B(p_{i_j},r\delta)$. Notice that $|\mathcal{I}_{\ell}|$ is equal to the number of combinations of $\ell-2$ objects taken (with repetition) from a set of size $m$, i.e. $|\mathcal{I}_{\ell}|={{m+\ell-3}\choose{\ell-2} }$.
Hence, by the Hockey-stick identity,
\begin{align}
\label{eq:hockeystick}
| {\mathcal I}|=
\sum_{\ell=2}^k |\mathcal{I}_{\ell}|=
\sum_{\ell=2}^k {{m+\ell-3}\choose{\ell-2} }=
\sum_{\ell=0}^{k-2} {{m+\ell-1}\choose{\ell} }=
{{m+k-2}\choose{k-2}}. \end{align}
Using (\ref{eq:hockeystick}), we can bound $|{\mathcal S}'|$ as follows:
\begin{align*}
|{\mathcal S}'| &\leq \sum_{\ell=2}^k \sum_{(i_1,\ldots, i_{\ell}) \in \mathcal{I}_{\ell}} \left| \prod_{j=1}^{\ell} B(p_{i_j},r\delta)\cap {\mathcal G}_{\varepsilon\delta} \right|
\leq \sum_{\ell=2}^k |\mathcal{I}_{\ell}|\cdot \ensuremath{\mathcal{O}}\left( \frac{r}{\varepsilon}\right)^{\ell}\\
&\leq |\mathcal{I}| \cdot \ensuremath{\mathcal{O}}\left(\frac{r}{\varepsilon} \right)^{k}
\leq {{m+k-2}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{r}{\varepsilon} \right)^{k}.
\end{align*}
Hence, the running time is $\ensuremath{\mathcal{O}}(|{\mathcal S}'|\cdot k)={{m+k-2}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{r}{\varepsilon} \right)^{k}$.
\end{proof}
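The counting argument in the proof, including the application of the hockey-stick identity, can be verified by brute force for small parameters. The sketch below is a verification aid only and not part of any algorithm.

```python
from math import comb

# Brute-force check: the number of index sequences 1 = i_1 <= ... <= i_l = m,
# summed over all 2 <= l <= k, equals binom(m + k - 2, k - 2) by the
# hockey-stick identity (verification sketch for small m, k).
def count_orders(m, k):
    def count(l):
        total = 0
        def rec(pos, low):
            nonlocal total
            if pos == l - 2:          # all interior indices chosen
                total += 1
                return
            for v in range(low, m + 1):
                rec(pos + 1, v)       # nondecreasing interior indices in [m]
        rec(0, 1)
        return total
    return sum(count(l) for l in range(2, k + 1))
```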
\begin{lemma}
\label{lemma:querycorrectness1}
If \texttt{query}$(Q,\delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then $\df(Q,P)\leq (1+\varepsilon)\delta$. If \texttt{query}$(Q,\delta, \varepsilon/2)$ returns ``no'', then there is no $P\in \mathcal{P}$ such that $\df(Q,P)\leq \delta$.
\end{lemma}
\begin{proof}
When \texttt{query}$(Q, \delta,\varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, there must be a $\delta$-straightening\xspace $Q'$ of $Q$ whose snapped curve $Q''$ is associated with $P$ in ${\mathcal H}$. This implies that $\df( {Q''}, P )\leq (1+\varepsilon/2)\delta$. By the triangle inequality, \[\df( {Q'} ,P)\leq \df( { Q''},{ Q'})+\df( {Q''}, P)\leq (1+\varepsilon)\delta.\]
Since $ {Q'} $ is a $\delta$-straightening\xspace of $ Q $, we have that $\df({Q'}, Q )\leq \delta$. Hence, by Lemma~\ref{lemma:simplproxy} applied on $P,Q,{Q'}$ for distance threshold $(1+\varepsilon)\delta$, we obtain $\df(Q,P)\leq (1+\varepsilon)\delta$.
If \texttt{query}$(Q,\delta, \varepsilon/2)$ returns ``no'', then there is no $\delta$-straightening\xspace ${Q'}$ of $Q$ whose snapped curve $Q''$ is associated with an input curve in ${\mathcal H}$. Suppose, for the sake of contradiction, that there exists a curve $P\in \mathcal{P}$ such that $\df(Q,P)\leq \delta$. By Lemma~\ref{lemma:goodsimplificationranges2}, there exists a $\delta$-straightening\xspace ${{Q}'}$ of $Q$ such that \begin{inparaenum}[i)]
\item there exists an $11\delta$-visiting order of ${{Q}'}$ on $P$ and
\item $\df({{Q}'},P)\leq \delta$.
\end{inparaenum}
Let ${{Q}''}$ be the curve obtained by snapping vertices of ${{Q}'}$ to the grid ${\mathcal G}_{\varepsilon\delta/2}$.
By the triangle inequality, there exists a $((11+\varepsilon/2)\delta)$-visiting order of ${{Q}''}$ on $P$ and \[\df({{Q}''},P)\leq \df({Q''},{Q'})+\df({Q'},P)\leq (1+\varepsilon/2)\delta.\] Hence, $Q''\in {\mathcal C}(P)$ and $Q''$ is associated with some input curve $P'$ in ${\mathcal H}$. This leads to a contradiction, and we conclude that if \texttt{query}$(Q,\delta, \varepsilon/2)$ returns ``no'', then there is no curve $P\in \mathcal{P}$ such that $\df(P,Q)\leq \delta$.
\end{proof}
\begin{lemma}
\label{lemma:querytime1}
For any query curve $Q$ of complexity $k$, $\delta>0$, $\varepsilon>0$, \texttt{query}$(Q,\delta,\varepsilon)$ runs in time $\ensuremath{\mathcal{O}}(k\cdot 2^{k})$.
\end{lemma}
\begin{proof}
Let $q_1,\ldots,q_k$ be the vertices of $Q$. We enumerate all sequences starting with $q_1$, followed by any possible subsequence of $q_2,\ldots,q_{k-1}$ and ending with $q_k$. There are at most $2^{k-2}$ such sequences, and for each one of them we test whether it defines a $\delta$-straightening\xspace of $Q$. This is done in two steps: we first test if each shortcut is within distance $\delta$ from the corresponding subcurve, and then we decide if the edge-range-preserving property is satisfied. Computing the Fr\'echet distance between a shortcut and the original subcurve costs linear time in the complexity of the subcurve by Theorem~\ref{theorem:frechetdecision}. Hence, we can decide in $\ensuremath{\mathcal{O}}(k)$ time if the sequence in question defines a $\delta$-simplification of $Q$. To decide if the edge-range-preserving property is satisfied, we check for each shortcut if the corresponding subcurve stays within range by testing all of its vertices one by one. Therefore, this step also costs $\ensuremath{\mathcal{O}}(k)$ time. Since we employ perfect hashing, each probe to ${\mathcal H}$ costs $\ensuremath{\mathcal{O}}(k)$ time.
We can also check in $\ensuremath{\mathcal{O}}(k)$ time if the answer returned by ${\mathcal H}$ is the one we are searching for.
Hence, the overall query time is in $\ensuremath{\mathcal{O}}(k\cdot 2^k)$.
\end{proof}
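As a small illustration of the enumeration step in the proof above, the following Python sketch (illustrative only, not the paper's implementation; the straightening and Fr\'echet distance tests are omitted) lists all vertex sequences that keep $q_1$ and $q_k$ and an arbitrary subset of the interior vertices:

```python
from itertools import combinations

def candidate_sequences(vertices):
    """All vertex sequences that keep the first and the last vertex
    and an arbitrary subset of the interior vertices; for k vertices
    there are at most 2**(k-2) of them."""
    first, last = vertices[0], vertices[-1]
    interior = vertices[1:-1]
    out = []
    for r in range(len(interior) + 1):
        for subset in combinations(interior, r):
            out.append((first,) + subset + (last,))
    return out

seqs = candidate_sequences([0.0, 2.0, 1.0, 3.0, 2.5])  # k = 5, so 2**3 = 8 sequences
```

Each emitted sequence would then be tested against the $\delta$-straightening conditions and, if it passes, snapped and probed in the dictionary.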
\begin{theorem} \label{thm:onepluseps}
Let $\varepsilon\in(0,1]$. There is a data structure for the $(1+\varepsilon)$-ANN problem,
which stores $n$ one-dimensional curves of complexity $m$ and supports query curves of complexity $k$, uses space in $n\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$, needs $\ensuremath{\mathcal{O}}(nm)\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$ expected preprocessing time and answers a query in $\ensuremath{\mathcal{O}}(k\cdot 2^{k})$ time.
\end{theorem}
\begin{proof}
The data structure is described in Section~\ref{subsection:datastructure1apprx_ds}. Correctness follows from Lemma~\ref{lemma:querycorrectness1}. The bound on the query time follows from Lemma~\ref{lemma:querytime1}. It remains to analyze the running time of \texttt{preprocess}$(\mathcal{P},\delta,\varepsilon/2,k)$ and the space complexity of the data structure.
By Lemma~\ref{lemma:numberofcandidates},
for any $P\in {\mathcal P}$,
the running time needed to compute ${\mathcal C}'$ is upper bounded by $ {{m+k-2}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{\varepsilon} \right)^{k}=\ensuremath{\mathcal{O}}\left( \frac{m}{k\varepsilon}\right)^k$.
Hence, for each $P\in \mathcal{P}$, $|{\mathcal C}(P)| = \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$. Therefore, the space required for each input curve $P\in \mathcal{P}$ is upper bounded by $\ensuremath{\mathcal{O}}(|{\mathcal C}(P)|\cdot k)$.
Computing ${\mathcal C}(P)$ costs $\ensuremath{\mathcal{O}}(|{\mathcal C}'|\cdot mk)=\ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}\cdot \ensuremath{\mathcal{O}}(m)$ time, because we need to decide for each curve $Q\in {\mathcal C}'$, whether its Fr\'echet distance from $P$ is at most $(1+\varepsilon/2)\delta$, which can be done in $\ensuremath{\mathcal{O}}(|Q|\cdot |P|)$ time using Theorem~\ref{theorem:frechetdecision}. Assuming perfect hashing for ${\mathcal H}$, the overall expected preprocessing time is in $\ensuremath{\mathcal{O}}(nm)\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$ and the space usage is in $\ensuremath{\mathcal{O}}(n)\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$.
\end{proof}
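The mechanism underlying this data structure is a dictionary keyed by snapped vertex sequences. The following Python sketch is illustrative only (the names and the flat input format are our own); it mirrors the floor-snapping of vertices to a grid and the rule of the preprocessing algorithm that an existing key is never overwritten:

```python
import math

def snap(curve, width):
    """Floor-snap each vertex of a one-dimensional curve to the grid
    of the given width."""
    return tuple(math.floor(q / width) * width for q in curve)

def build_dictionary(candidates_per_curve, width):
    """Store every candidate key, associated with (a pointer to) its
    input curve; an existing key is never overwritten, mirroring the
    preprocessing algorithm."""
    H = {}
    for curve_id, candidates in candidates_per_curve.items():
        for cand in candidates:
            key = snap(cand, width)
            if key not in H:  # first insertion wins
                H[key] = curve_id
    return H

H = build_dictionary({"P1": [[0.0, 1.1, 2.2]], "P2": [[0.05, 1.1, 2.2]]}, width=0.5)
# both candidates snap to the key (0.0, 1.0, 2.0); only P1 is stored
```

Storing only one input curve per key is sufficient because any stored curve passed the same distance filter, which is all the approximation guarantee requires.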
\section{\boldmath $(2+\varepsilon)$-Approximation}\label{section:datastructure2apprx}
In this section we present three $(2+\varepsilon)$-ANN data structures with different tradeoffs between preprocessing and query time.
\subsection{Fast query algorithm}
\label{section:datastructure2apprxfastquery}
In this section, we propose a data structure for the $(2+\varepsilon)$-ANN problem, with query time in $\ensuremath{\mathcal{O}}(k)$. The space complexity and the preprocessing time are the same as in the $(1+\varepsilon)$-ANN data structure of Theorem~\ref{thm:onepluseps}.
\paragraph*{Data structure}
We are given as input a set of one-dimensional curves $\mathcal{P}$, as sequences of vertices, the distance threshold $\delta>0$, the approximation error $\varepsilon>0$ and the complexity of the supported queries $k$. The data structure is exactly the same as in Section~\ref{section:datastructure1apprx}. To build it, we call \texttt{preprocess}$(\mathcal{P},\delta, \varepsilon/2,k)$, as defined in Algorithm~\ref{alg:preprocessing1apprx}, in Section~\ref{section:pseudocode1apprx}. Let ${\mathcal H}$ be the resulting dictionary, constructed by \texttt{preprocess}$(\mathcal{P},\delta,\varepsilon/2,k)$.
\paragraph*{Query algorithm}
Let $Q$ be the query curve with vertices $q_1,\ldots,q_k$ and let $\varepsilon>0$ be the approximation error. The query algorithm first computes a $\delta$-signature $Q'$ of $Q$, and then it snaps the vertices of $Q'$ to the grid ${\mathcal G}_{\varepsilon\delta/2}$, to obtain a curve $Q''$. If $Q''$ is stored in ${\mathcal H}$, then we return its associated input curve $P\in\mathcal{P}$, otherwise we return “no”. The query algorithm is implemented in \texttt{query2}, which can be found in Algorithm~\ref{alg:query2apprx}. To achieve approximation factor $2+\varepsilon$, we run \texttt{query2}$(Q,\delta,\varepsilon/2)$.
\begin{algorithm}[H]
\caption{Query algorithm\label{alg:query2apprx}}
\begin{algorithmic}[1]
\Procedure{\texttt{query2}}{curve $Q$ with vertices $q_1,\ldots,q_k$, $\delta>0$, $\varepsilon>0$}
\State $Q' \gets $ $\delta$-signature of $Q$
\State $q_1',\ldots,q_{\ell}' \gets $ vertices of $Q'$
\State $Q'' \gets \seqtocurve{\left \lfloor \frac{q_{1}'}{\varepsilon\delta} \right\rfloor\cdot (\varepsilon\delta), \ldots, \left \lfloor \frac{q_{{\ell}}'}{\varepsilon\delta} \right\rfloor\cdot (\varepsilon\delta) }$ \Comment{snap $Q'$ to ${\mathcal G}_{\varepsilon\delta}$}
\If{$Q''$ in ${\mathcal H}$}
\State \Return input curve $P$ associated with $Q''$ in ${\mathcal H}$
\EndIf
\State \Return “no”
\EndProcedure
\end{algorithmic}
\end{algorithm}
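Floor-snapping as in line 4 of \texttt{query2} moves every vertex forward by less than the grid width $\varepsilon\delta$, which is exactly the slack consumed by the triangle-inequality steps in the proofs below. A quick numerical check (illustrative, not from the paper):

```python
import math

def snap_vertex(q, eps, delta):
    """Floor-snap one vertex to the grid of width eps*delta,
    mirroring line 4 of query2."""
    w = eps * delta
    return math.floor(q / w) * w

eps, delta = 0.25, 2.0
for q in [-3.7, -0.1, 0.0, 0.49, 5.03]:
    moved = q - snap_vertex(q, eps, delta)
    # floor-snapping never moves a vertex by the full grid width
    assert 0.0 <= moved < eps * delta
```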
\begin{lemma}
\label{lemma:querycorrectness2apprxfastquery}
If \texttt{query2}$(Q, \delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then $\df(Q,P)\leq (2+\varepsilon)\delta$. If \texttt{query2}$(Q, \delta,\varepsilon/2)$ returns “no” then there is no $P\in \mathcal{P}$ such that $\df(Q,P)\leq \delta$.
\end{lemma}
\begin{proof}
If \texttt{query2}$(Q,\delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then it must be that $Q''$ is stored in ${\mathcal H}$, and $P$ is its associated input curve. By the construction of ${\mathcal H}$, it must be that $\df(P, Q'' )\leq (1+\varepsilon/2)\delta$. By the definition of signatures we know that $\df(Q,{Q'})\leq \delta$, and by the triangle inequality we obtain \[\df( Q,{Q''} )\leq \df(Q, {Q'} )+\df({ Q''} ,{Q'})\leq (1+\varepsilon/2)\delta.\]
Hence, by the triangle inequality we obtain
\[
\df(P,Q)\leq \df(P,{Q''} )+\df( Q,{Q''} ) \leq (2+\varepsilon)\delta.
\]
Now suppose that \texttt{query2}$(Q,\delta, \varepsilon/2)$ returns “no”. This means that $Q''$ is not stored in ${\mathcal H}$. Suppose that there exists a $P\in\mathcal{P}$ such that $\df(P,Q)\leq \delta$. Then by
Lemma~\ref{lemma:signatures2} there exists a $\delta$-visiting order of $Q'$ on $P$. Therefore, by the triangle inequality, there exists a $((1+\varepsilon/2)\delta)$-visiting order of $ Q''$ on $P$, which implies that $Q''\in {\mathcal C}(P)$, and hence $Q''$ is stored in ${\mathcal H}$. This leads to a contradiction, since we have assumed that $Q''$ is not stored in ${\mathcal H}$. Hence, if \texttt{query2}$(Q, \delta,\varepsilon/2)$ returns “no” then there is no $P\in\mathcal{P}$ such that $\df(P,Q)\leq \delta$.
\end{proof}
\begin{theorem} \label{thm:twopluseps_one}
Let $\varepsilon\in(0,1]$. There is a data structure for the $(2+\varepsilon)$-ANN problem,
which stores $n$ one-dimensional curves of complexity $m$ and supports query curves of complexity $k$, uses space in $n\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$, needs $\ensuremath{\mathcal{O}}(nm)\cdot \ensuremath{\mathcal{O}}\left(\frac{m}{k \varepsilon}\right)^{k}$ expected preprocessing time and answers a query in $\ensuremath{\mathcal{O}}(k)$ time.
\end{theorem}
\begin{proof}
Correctness of the data structure follows from Lemma~\ref{lemma:querycorrectness2apprxfastquery}. The space complexity and the preprocessing time are analyzed in the proof of Theorem~\ref{thm:onepluseps}. It remains to show that \texttt{query2}$(Q, \delta,\varepsilon/2)$ runs in $\ensuremath{\mathcal{O}}(k)$ time.
To compute a $\delta$-signature of $Q$, we use the algorithm of Driemel, Krivosija and Sohler \cite{DKS16}, which runs in $\ensuremath{\mathcal{O}}(k)$ time.
Since we employ perfect hashing and we assume that the floor function can be computed in constant time, each probe to ${\mathcal H}$ costs $\ensuremath{\mathcal{O}}(k)$ time, and we can also check at the same time if the answer returned by ${\mathcal H}$ is the one we are searching for.
We conclude that \texttt{query2}$(Q,\delta, \varepsilon/2)$ runs in $\ensuremath{\mathcal{O}}(k)$ time.
\end{proof}
\subsection{Improved preprocessing time}
\label{section:twopluseps_two}
In this section, we show that there exists a data structure for the $(2+\varepsilon)$-ANN problem, with space complexity and preprocessing time in $n\cdot \ensuremath{\mathcal{O}}(1/\varepsilon)^k+\ensuremath{\mathcal{O}}(nm)$. The query time is in $\ensuremath{\mathcal{O}}(k\cdot 2^k)$. This avoids the factor $(m/k)^k$ of our previous data structures.
\paragraph*{Data structure}
We are given as input a set of one-dimensional curves $\mathcal{P}$, as sequences of vertices, the distance threshold $\delta>0$, the approximation error $\varepsilon>0$, and the complexity of the supported queries $k$. To build the data structure, we use a modified version of the preprocessing algorithm in Section~\ref{section:datastructure1apprx}. For each input curve $P \in \mathcal{P}$, we compute a $\delta$-signature $P'$ of $P$. If the complexity of $P'$ is at most $k+2$ then
we compute a set ${\mathcal C}':={\mathcal C}'(P')$ which contains all curves $Q$ such that:
\begin{inparaenum}[i)]
\item $Q$ has complexity at most $k$,
\item all vertices of $Q$ belong to ${\mathcal G}_{\varepsilon\delta/2}$, and
\item there is a $((22+\varepsilon/4)\delta)$-visiting order of $Q$ on $P'$.
\end{inparaenum}
This step is similar to the one in the preprocessing algorithm in Section~\ref{section:datastructure1apprx}, although here we consider signatures of the input curves instead of the original curves.
The filtering process is also slightly different.
We filter ${\mathcal C}'$ to obtain a set ${\mathcal C}(P)$ which contains only those curves of ${\mathcal C}'$ with: \begin{inparaenum}[i)]
\item Fr\'echet distance at most $(2+\varepsilon/4)\delta$ from $P'$,
\item their first point within distance $(1+\varepsilon/4)\delta$ from $P'(0)$, and
\item their last point within distance $(1+\varepsilon/4)\delta$ from $P'(1)$.
\end{inparaenum} Let ${\mathcal H}$ be a dictionary which is initially empty. For each $P\in {\mathcal P}$, we store ${\mathcal C}(P)$ in ${\mathcal H}$ as follows: for each $Q\in {\mathcal C}(P)$,
if $Q$ is not already stored in ${\mathcal H}$, then we insert $Q$ into ${\mathcal H}$, associated with a pointer to $P$. The preprocessing algorithm is implemented in \texttt{preprocess2}, which can be found in Algorithm~\ref{alg:preprocessing2apprx}. We also make use of the subroutine \texttt{generate\_candidates} described in Algorithm \ref{alg:preprocessing1apprx}, in Section~\ref{section:pseudocode1apprx}. To achieve approximation factor $(2+\varepsilon)$, we run \texttt{preprocess2}$({\mathcal P},\delta,22,2,\varepsilon/4,k)$.
\paragraph*{Query algorithm}
Let $Q$ be the query curve with vertices $q_1,\ldots,q_k$ and let $\varepsilon>0$ be the approximation error. The query algorithm is the same as in the data structure of Section~\ref{section:datastructure1apprx}, but we run it with different input parameters.
In particular, we run \texttt{query}$(Q,2\delta,\varepsilon/4)$ (see Algorithm~\ref{alg:query1apprx}) on the dictionary ${\mathcal H}$ which is constructed by \texttt{preprocess2}$({\mathcal P},\delta,22,2,\varepsilon/4,k)$.
\begin{algorithm}[h]
\caption{Preprocessing algorithm. We call \texttt{preprocess2} to build the data structure. \label{alg:preprocessing2apprx}}
\begin{algorithmic}[1]
\Procedure{\texttt{preprocess2}}{input set ${\mathcal P}$, $\delta>0$, $r>0$, $t>0$, $\varepsilon>0$, $k\in{\mathbb N}$}
\State Initialize empty dictionary ${\mathcal H}$
\For {{\bf each} $P \in {\mathcal P}$}
\State $P' \gets $ $\delta$-signature of $P$
\If{$|P'|\leq k+2$}
\State ${\mathcal C}(P)\gets$ \texttt{generate\_keys2}$(P', \delta, r, t, \varepsilon, k)$
\For{{\bf each} $Q\in {\mathcal C}(P)$}
\If{$Q$ not in ${\mathcal H}$}
\State insert key $Q$ in ${\mathcal H}$, associated with a pointer to $P$
\EndIf
\EndFor
\EndIf
\EndFor
\EndProcedure
\Procedure{\texttt{generate\_keys2}}{curve $P$, $\delta>0$, $r>0$, $t>0$, $\varepsilon>0$, $k$}
\State ${\mathcal C}'\gets$\texttt{generate\_candidates}$(P,\delta,r+\varepsilon,\varepsilon,k)$
\State ${\mathcal C} \gets \emptyset$
\For{{\bf each} $Q\in {\mathcal C}'$}
\If{$\df( P , {Q} )\leq (t+\varepsilon)\delta$ \textbf{and} $|P(0)-{Q}(0)|\leq (1+\varepsilon)\delta$ \textbf{and} $|P(1)-{Q}(1)|\leq (1+\varepsilon)\delta$ }
\State ${\mathcal C} \gets {\mathcal C} \cup \{ Q\}$
\EndIf
\EndFor
\State {\bf return} ${\mathcal C} $
\EndProcedure
\end{algorithmic}
\end{algorithm}
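The filtering step of \texttt{generate\_keys2} can be sketched as follows. As an illustrative stand-in for the decision procedure of Theorem~\ref{theorem:frechetdecision} we use the textbook quadratic dynamic program for the \emph{discrete} Fr\'echet distance on the vertex sequences (the paper uses the continuous distance and a faster decision algorithm); the two endpoint tests match conditions ii) and iii) of the filter:

```python
def discrete_frechet(P, Q):
    """Quadratic dynamic program for the discrete Frechet distance of
    two one-dimensional vertex sequences (an illustrative stand-in for
    the continuous decision procedure used in the paper)."""
    n, m = len(P), len(Q)
    dp = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                best = 0.0
            elif i == 0:
                best = dp[0][j - 1]
            elif j == 0:
                best = dp[i - 1][0]
            else:
                best = min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
            dp[i][j] = max(best, d)
    return dp[n - 1][m - 1]

def passes_filter(P, Q, t, eps, delta):
    """The three tests of generate_keys2: distance threshold and the
    two endpoint range conditions."""
    return (discrete_frechet(P, Q) <= (t + eps) * delta
            and abs(P[0] - Q[0]) <= (1 + eps) * delta
            and abs(P[-1] - Q[-1]) <= (1 + eps) * delta)
```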
\begin{lemma}
\label{lemma:querycorrectness2apprxmed}
If \texttt{query}$(Q, 2\delta,\varepsilon/4)$ returns an input curve $P\in \mathcal{P}$, then $\df(Q,P)\leq (2+\varepsilon)\delta$. If \texttt{query}$(Q, 2\delta, \varepsilon/4)$ returns “no” then there is no $P\in \mathcal{P}$ such that $\df(Q,P)\leq \delta$.
\end{lemma}
\begin{proof}
When \texttt{query}$(Q,2\delta, \varepsilon/4)$ returns an input curve $P\in \mathcal{P}$, it must be that there is a $2\delta$-straightening\xspace $Q'$ of $Q$ such that $P$ is associated with $Q''$ in ${\mathcal H}$, where $Q''$ denotes the curve produced by snapping the vertices of $Q'$ to ${\mathcal G}_{\varepsilon\delta/4}$. This implies that $Q''\in {\mathcal C}(P)$, and therefore $\df(P',{Q''} )\leq (2+\varepsilon/4)\delta$, $| P'(0)-{Q''}(0)|\leq (1+\varepsilon/4)\delta $ and $| P'(1)-{Q''}(1)|\leq (1+\varepsilon/4)\delta $, where $P'$ is the $\delta$-signature of $P$ computed by \texttt{preprocess2}. By the triangle inequality,
\[
\df(P',{Q'} )\leq \df(P',{Q''} )+\df( {Q'} , {Q''} ) \leq (2+\varepsilon/2)\delta.
\]
Similarly, by the triangle inequality, $| P'(0)-{Q'}(0)|\leq (1+\varepsilon/2)\delta $, $| P'(1)-{Q'}(1)|\leq (1+\varepsilon/2)\delta$. Lemma~\ref{lemma:signatureproxy} implies that $\df(P,{Q'})\leq (2+\varepsilon)\delta$, because $P'$ is a $\delta$-signature of $P$, $\df(P',Q')\leq (2+\varepsilon/2)\delta $, $|P(0)-Q'(0)|\leq (1+\varepsilon/2)\delta$ and
$|P(1)-Q'(1)|\leq (1+\varepsilon/2)\delta$.
Then, by Lemma~\ref{lemma:simplproxy}, we conclude that $\df(P,Q)\leq (2+\varepsilon)\delta$.
If \texttt{query}$(Q, 2\delta, \varepsilon/4)$ returns “no”, then there is no $2\delta$-straightening\xspace $Q'$ of $Q$ whose snapped curve is associated with an input curve in ${\mathcal H}$. Suppose for the sake of contradiction that there is an input curve $P\in {\mathcal P}$ such that $\df(Q,P)\leq \delta$, and let $P'$ be the $\delta$-signature of $P$ computed by \texttt{preprocess2}. By the triangle inequality and the fact that $\df(P,P')\leq \delta$, we obtain $\df(Q,P')\leq 2\delta$. In addition, by Lemma~\ref{lemma:signatures2} there is a $\delta$-visiting order of $P'$ on $Q$. Since $P'$ satisfies the $\delta$-edge-length property, any two consecutive interior vertices lie at distance at least $2\delta$ from each other. Thus, no two consecutive interior vertices can belong to the same $\delta$-range. Hence, $|P'|\leq |Q|+2\leq k+2$, so $P$ is not ignored by the preprocessing algorithm.
By Lemma~\ref{lemma:goodsimplificationranges2}, there exists a $2\delta$-straightening\xspace ${Q'}$ of $Q$ which satisfies \begin{compactenum}[i)]
\item there exists a $22\delta$-visiting order of $ {Q'}$ on $P'$,
\item $\df({Q'},P')\leq 2\delta$.
\end{compactenum}
By the definition of signatures, we have $P(0)=P'(0)$ and $P(1)=P'(1)$, and since $\df(P,Q)\leq \delta$, we have $|P'(0)-Q(0)|\leq \delta$ and $|P'(1)-Q(1)|\leq \delta$.
By the definition of straightenings\xspace, we have ${Q'}(0)=Q(0)$ and ${Q'}(1)=Q(1) $ and therefore $|P'(0)-{Q'}(0)|\leq \delta$ and $|P'(1)-{Q'}(1)|\leq \delta$.
Let ${Q''}$ be the curve obtained by snapping the vertices of ${Q'}$ to the grid ${\mathcal G}_{\varepsilon\delta/4}$. By the triangle inequality, there exists a $((22+\varepsilon/4)\delta)$-visiting order of ${Q''}$ on $P'$, $\df({Q''},P')\leq (2+\varepsilon/4)\delta$, $|P'(0)-{Q''}(0)|\leq (1+\varepsilon/4)\delta$ and $|P'(1)-{Q''}(1)|\leq (1+\varepsilon/4)\delta$. This implies that $Q''\in{\mathcal C}(P)$, which leads to a contradiction.
\end{proof}
\begin{theorem} \label{thm:twopluseps_two}
Let $\varepsilon\in(0,1]$. There is a data structure for the $(2+\varepsilon)$-ANN problem,
which stores $n$ one-dimensional curves of complexity $m$ and supports query curves of complexity $k$, uses space in $n\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{ \varepsilon}\right)^{k} +\ensuremath{\mathcal{O}}(nm)$, needs $n\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{ \varepsilon}\right)^{k}+\ensuremath{\mathcal{O}}(nm)$ expected preprocessing time and answers a query in $\ensuremath{\mathcal{O}}(k\cdot 2^k)$ time.
\end{theorem}
\begin{proof}
Correctness follows from Lemma~\ref{lemma:querycorrectness2apprxmed}. The bound on the query time follows from Lemma~\ref{lemma:querytime1}. It remains to bound the space complexity and the preprocessing time of the data structure.
Computing one $\delta$-signature for each $P \in {\mathcal P}$ takes linear time $\ensuremath{\mathcal{O}}(mn)$ in total, using the algorithm of Driemel, Krivosija and Sohler \cite{DKS16}.
Let $P'$ be the $\delta$-signature of some curve $P\in {\mathcal P}$ as computed during preprocessing. If $|P'|>k+2$ we ignore $P$.
By Lemma~\ref{lemma:numberofcandidates}, for the $\delta$-signature $P'$ of any $P\in {\mathcal P}$, the running time needed to compute ${\mathcal C}'$ is upper bounded by $ {{|P'|+k-2}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{\varepsilon} \right)^{k}=\ensuremath{\mathcal{O}}\left( \frac{1}{\varepsilon}\right)^k$.
The space required for $P'$ is upper bounded by $\ensuremath{\mathcal{O}}(|{\mathcal C}(P')|\cdot k + m)=\ensuremath{\mathcal{O}}(|{\mathcal C}'|\cdot k + m)=\ensuremath{\mathcal{O}}\left( \frac{1}{\varepsilon}\right)^k+\ensuremath{\mathcal{O}}(m)$.
Computing ${\mathcal C}(P')$ costs $\ensuremath{\mathcal{O}}(|{\mathcal C}'|\cdot k)=\ensuremath{\mathcal{O}}\left(\frac{1}{ \varepsilon}\right)^{k}$ time, since we decide for each curve in ${\mathcal C}'$ whether its Fr\'echet distance from $P'$ is within the threshold, using Theorem~\ref{theorem:frechetdecision}. Assuming perfect hashing for ${\mathcal H}$, the overall expected preprocessing time is in $n\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{ \varepsilon}\right)^{k}+\ensuremath{\mathcal{O}}(nm)$ and the space usage is in $n\cdot \ensuremath{\mathcal{O}}\left(\frac{1}{ \varepsilon}\right)^{k}+\ensuremath{\mathcal{O}}(nm)$.
\end{proof}
\subsection{Linear preprocessing time}
In this section we present a data structure for the $(2+\varepsilon)$-ANN problem with space and preprocessing time in $\ensuremath{\mathcal{O}}(nm)$ and with query time in $\ensuremath{\mathcal{O}}(1/\varepsilon)^{k+2}$.
\paragraph*{Data structure}
We are given as input a set of one-dimensional curves $\mathcal{P}$, as sequences of vertices, a distance threshold $\delta>0$, the approximation error $\varepsilon>0$ and the complexity of the supported queries $k$. For each input curve $P\in {\mathcal P}$, we compute a $\delta$-signature $P'$ of $P$. If $|P'|>k+2$ then we ignore $P$, otherwise we snap it to ${\mathcal G}_{\varepsilon\delta/2}$ to obtain a curve $P''$. Let ${\mathcal H}$ be a dictionary which is initially empty. For each $P\in {\mathcal P}$, we store $P''$ in ${\mathcal H}$ as follows:
if $P''$ is not already stored in ${\mathcal H}$, then we insert $P''$ into ${\mathcal H}$, associated with a pointer to $P$. To achieve approximation factor $2+\varepsilon$, we run \texttt{preprocess3}$({\mathcal P},\delta,\varepsilon/2,k)$, as defined in Algorithm~\ref{alg:preprocessing2apprx3}.
\paragraph*{Query algorithm}
Let $Q$ be a query curve of complexity $k$.
We compute a set ${\mathcal C}':={\mathcal C}'(Q)$ which contains all curves $P$ such that:
\begin{inparaenum}[i)]
\item $P$ has complexity at most $k$,
\item all vertices of $P$ belong to ${\mathcal G}_{\varepsilon\delta/2}$, and
\item there is a $((1+\varepsilon/2)\delta)$-visiting order of $P$ on $Q$.
\end{inparaenum}
We filter ${\mathcal C}'$ to obtain a set ${\mathcal C}(Q)$ which contains only those curves of ${\mathcal C}'$ with: \begin{inparaenum}[i)]
\item Fr\'echet distance at most $(2+\varepsilon/2)\delta$ from $Q$,
\item their first point within distance $(1+\varepsilon/2)\delta$ from $Q(0)$, and
\item their last point within distance $(1+\varepsilon/2)\delta$ from $Q(1)$.
\end{inparaenum}
We probe ${\mathcal H}$ for each key $P\in{\mathcal C}(Q) $: if we find a $P\in {\mathcal C}(Q)$ stored in ${\mathcal H}$ then we return the associated input curve. If there is no $P\in {\mathcal C}(Q)$ stored in ${\mathcal H}$ then we return “no”. To achieve the desired approximation, we run \texttt{query3}$(Q,\delta,\varepsilon/2)$, as defined in Algorithm~\ref{alg:query2apprx3}.
\begin{algorithm}
\caption{Preprocessing algorithm \label{alg:preprocessing2apprx3}}
\begin{algorithmic}[1]
\Procedure{\texttt{preprocess3}}{input set ${\mathcal P}$, $\delta>0$, $\varepsilon>0$, $k$}
\State Initialize empty dictionary ${\mathcal H}$
\For{\textbf{each} $P \in {\mathcal P}$}
\State $P' \gets $ $\delta$-signature of $P$
\If{$|P'|\leq k+2$}
\State $p_1,\ldots ,p_{\ell} \gets $ vertices of $P'$
\State $P''\gets \seqtocurve{\left\lfloor \frac{p_1}{\varepsilon\delta}\right\rfloor \cdot (\varepsilon\delta) ,\ldots ,\left\lfloor \frac{p_{\ell}}{\varepsilon\delta}\right\rfloor \cdot (\varepsilon\delta)} $
\If{$P''$ not in ${\mathcal H}$}
\State insert key $P''$ in ${\mathcal H}$, associated with a pointer to $P$
\EndIf
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Query algorithm \label{alg:query2apprx3}}
\begin{algorithmic}[1]
\Procedure{\texttt{query3}}{curve $Q$ with vertices $q_1,\ldots,q_k$, $\delta>0$, $\varepsilon>0$}
\State ${\mathcal C}(Q)\gets $ \texttt{generate\_keys2}$(Q, \delta, 1,2, \varepsilon, k+2)$
\For{\textbf{each} $P'' \in {\mathcal C}(Q)$}
\If{$P''$ in ${\mathcal H}$}
\State \Return input curve $P$ associated with $P''$ in ${\mathcal H}$
\EndIf
\EndFor
\State \Return “no”
\EndProcedure
\end{algorithmic}
\end{algorithm}
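The query cost of \texttt{query3} is dominated by the number of candidate grid curves. In one dimension, each candidate vertex must be a grid point of ${\mathcal G}_{\varepsilon\delta/2}$ close to the query curve, leaving only $\ensuremath{\mathcal{O}}(1/\varepsilon)$ choices per vertex position. The following Python sketch counts a crude upper bound on these candidates; it is illustrative only and ignores the visiting-order constraint that \texttt{generate\_candidates} additionally enforces:

```python
import math

def grid_points_near(value, radius, width):
    """All grid points of the grid with the given width that lie
    within the given radius of value."""
    lo = math.ceil((value - radius) / width)
    hi = math.floor((value + radius) / width)
    return [g * width for g in range(lo, hi + 1)]

eps, delta = 0.5, 1.0
Q = [0.0, 3.0, 1.0]
# one candidate set per vertex position; their product crudely upper
# bounds the number of curves that could be enumerated at query time
choices = [grid_points_near(q, (1 + eps / 2) * delta, eps * delta / 2) for q in Q]
num_curves = 1
for c in choices:
    num_curves *= len(c)  # O(1/eps) choices per vertex position
```

With constant $\varepsilon$ this product is a constant per vertex, which is the source of the $\ensuremath{\mathcal{O}}(1/\varepsilon)^{k+2}$ query bound.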
\begin{lemma}
\label{lemma:querycorrectness2apprx3}
If \texttt{query3}$(Q,\delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then $\df(Q,P)\leq (2+\varepsilon)\delta$. If \texttt{query3}$(Q,\delta, \varepsilon/2)$ returns “no” then there is no $P\in \mathcal{P}$ such that $\df(Q,P)\leq \delta$.
\end{lemma}
\begin{proof}If \texttt{query3}$(Q,\delta, \varepsilon/2)$ returns an input curve, then it must be that there is a curve $P''\in {\mathcal C}(Q)$ which is stored in ${\mathcal H}$, associated with a pointer to $P$. Since $P''$ is stored in ${\mathcal H}$, there is a curve $P\in {\mathcal P}$ with a $\delta$-signature $P'$ such that $\df(P',P'')\leq \varepsilon\delta/2$. Moreover, since $P''\in {\mathcal C}(Q)$, we have that $\df(Q,P'')\leq (2+\varepsilon/2)\delta$, $|Q(0)-P''(0)|\leq (1+\varepsilon/2)\delta$, $|Q(1)-P''(1)|\leq (1+\varepsilon/2)\delta$. By the triangle inequality we obtain,
$\df(Q,P')\leq (2+\varepsilon)\delta$, $|Q(0)-P'(0)|\leq (1+\varepsilon)\delta$ and $|Q(1)-P'(1)|\leq (1+\varepsilon)\delta$. Since $P'$ is a $\delta$-signature of $P$, Lemma~\ref{lemma:signatureproxy} then yields $\df(Q,P)\leq (2+\varepsilon)\delta$.
If \texttt{query3}$(Q, \delta, \varepsilon/2)$ returns “no”, then it must be that no curve $P''\in {\mathcal C}(Q)$ is stored in ${\mathcal H}$. Suppose for the sake of contradiction that there is an input curve $P\in {\mathcal P}$ such that $\df(Q,P)\leq \delta$. Let $P'$ be the $\delta$-signature of $P$, as computed during preprocessing. By Lemma~\ref{lemma:signatures2}, there is a $\delta$-visiting order of $P'$ on $Q$ and therefore $|P'|\leq k+2$. Let $P''$ be the curve produced by snapping the vertices of $P'$ to the grid ${\mathcal G}_{\varepsilon\delta/2}$. By the triangle inequality there is a $((1+\varepsilon/2)\delta)$-visiting order of $P''$ on $Q$. Therefore, $P''$ must be included in ${\mathcal C}(Q)$, which leads to a contradiction.
\end{proof}
\begin{theorem} \label{thm:twopluseps_three}
Let $\varepsilon\in(0,1]$. There is a data structure for the $(2+\varepsilon)$-ANN problem,
which stores $n$ one-dimensional curves of complexity $m$ and supports query curves of complexity $k$, uses space in $\ensuremath{\mathcal{O}}(nm)$, needs $\ensuremath{\mathcal{O}}(nm)$ expected preprocessing time and answers a query in $\ensuremath{\mathcal{O}}(1/\varepsilon)^{k+2}$ time.
\end{theorem}
\begin{proof}
Correctness follows from Lemma~\ref{lemma:querycorrectness2apprx3}. It remains to bound the space complexity, the preprocessing time and the query time.
Using the algorithm of Driemel, Krivosija and Sohler \cite{DKS16}, we can compute a signature in linear time. Since we assume that the floor function can be computed in $\ensuremath{\mathcal{O}}(1)$ time, and that ${\mathcal H}$ is implemented using perfect hashing, \texttt{preprocess3}$({\mathcal P},\delta,\varepsilon/2,k)$ has running time $\ensuremath{\mathcal{O}}(nm)$. Therefore, the space usage is also in $\ensuremath{\mathcal{O}}(nm)$.
To bound the query time, we first upper bound the running time of \texttt{generate\_keys2}$(Q,\delta, 1,2,\varepsilon/2,k+2)$, because the last part of \texttt{query3} is an enumeration over all curves returned by \texttt{generate\_keys2} and probing ${\mathcal H}$ for each one of them. To bound the running time of \texttt{generate\_keys2}$(Q,\delta, 1,2,\varepsilon/2,k+2)$, it suffices to bound the running time of \texttt{generate\_candidates}$(Q,\delta, (1+\varepsilon/2),\varepsilon/2,k+2)$. By Lemma~\ref{lemma:numberofcandidates}, this running time is upper bounded by ${{2k}\choose{k-2} }\cdot \ensuremath{\mathcal{O}}\left(\frac{ 1}{\varepsilon}\right)^{k+2} = \ensuremath{\mathcal{O}}\left(\frac{ 1}{\varepsilon}\right)^{k+2}$. Recall that we employ perfect hashing and we assume that the floor function can be computed in constant time. Hence each probe to ${\mathcal H}$ costs $\ensuremath{\mathcal{O}}(k)$ time, and we can also check in $\ensuremath{\mathcal{O}}(k)$ time if ${\mathcal H}$ returns the correct answer.
We conclude that \texttt{query3}$(Q,\delta, \varepsilon/2)$ runs in time $ \ensuremath{\mathcal{O}}\left(\frac{ 1}{\varepsilon}\right)^{k+2}$.
\end{proof}
\section{\boldmath $(3+\varepsilon)$-Approximation}\label{section:datastructure3apprx}
In this section, we present a data structure for the $(3+\varepsilon)$-ANN problem with preprocessing time and space complexity in $n\cdot \ensuremath{\mathcal{O}}(1/\varepsilon)^k+\ensuremath{\mathcal{O}}(nm)$ and query time in $\ensuremath{\mathcal{O}}(k)$.
\paragraph*{Data structure} We are given as input a set of one-dimensional curves $\mathcal{P}$, as sequences of vertices, a distance threshold $\delta>0$, the approximation error $\varepsilon>0$ and the complexity of the supported queries $k$. To build the data structure, we use the preprocessing algorithm of the data structure in Section~\ref{section:twopluseps_two}.
Let ${\mathcal H}$ be the dictionary, constructed by \texttt{preprocess2}$({\mathcal P}, \delta ,2,3,\varepsilon/2,k)$.
\paragraph*{Query algorithm}Let $Q$ be a query curve. We run the query algorithm of the data structure in Section~\ref{section:datastructure2apprxfastquery}. In particular, we run \texttt{query2}$(Q,2\delta,\varepsilon/2)$ on ${\mathcal H}$.
\begin{lemma}
\label{lemma:querycorrectness3apprx}
If \texttt{query2}$(Q,2\delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then $\df(Q,P)\leq (3+\varepsilon)\delta$. If \texttt{query2}$(Q, 2\delta, \varepsilon/2)$ returns “no” then there is no $P\in {\mathcal P}$ such that $\df(Q,P)\leq \delta$.
\end{lemma}
\begin{proof}
Let $Q'$ be the $2\delta$-signature of $Q$ and let $Q''$ be the curve obtained by snapping vertices of $Q'$ to ${\mathcal G}_{\varepsilon\delta/2}$, as computed in \texttt{query2}.
If \texttt{query2}$(Q,2\delta, \varepsilon/2)$ returns an input curve $P\in \mathcal{P}$, then it must be that $Q''\in {\mathcal C}(P)$, where ${\mathcal C}(P)$ is the result of \texttt{generate\_keys2}$(P',\delta, 2, 3, \varepsilon/2, k)$ and $P'$ is a $\delta$-signature of $P$, as computed by \texttt{preprocess2}. By the construction of ${\mathcal C}(P)$, it must be that $\df(P',Q'')\leq (3+\varepsilon/2)\delta$, $|P'(0)-Q''(0)|\leq (1+\varepsilon/2)\delta$ and $|P'(1)-Q''(1)|\leq (1+\varepsilon/2)\delta$. Hence, by the triangle inequality $\df(Q',P')\leq (3+\varepsilon)\delta$, $|P'(0)-Q'(0)|\leq (1+\varepsilon)\delta$ and $|P'(1)-Q'(1)|\leq (1+\varepsilon)\delta $. We now apply Lemma~\ref{lemma:signatureproxy} twice. We first apply it on $P'$, $Q'$, $Q$. Since $\df(P',Q')\leq (3+\varepsilon)\delta$, $|P'(0)-Q'(0)|\leq (1+\varepsilon)\delta$, $|P'(1)-Q'(1)|\leq (1+\varepsilon)\delta $ and $Q'$ is a $2\delta$-signature of $Q$,
we obtain $\df(P',Q)\leq (3+\varepsilon)\delta$. Then, we apply it on $P'$, $P$, $Q$. Since $\df(P',Q)\leq (3+\varepsilon)\delta$, $|P'(0)-Q(0)|=|P'(0)-Q'(0)|\leq (1+\varepsilon)\delta \leq (2+\varepsilon)\delta$, $|P'(1)-Q(1)|=|P'(1)-Q'(1)|\leq (1+\varepsilon)\delta \leq (2+\varepsilon)\delta$, and $P'$ is a $\delta $-signature of $P$,
we obtain $\df(P,Q)\leq (3+\varepsilon)\delta$.
If \texttt{query2}$(Q,2\delta, \varepsilon/2)$ returns “no” then $Q''$ is not stored in ${\mathcal H}$ as a key. For the sake of contradiction, we assume that there exists an input curve $P\in {\mathcal P}$ such that $\df(P,Q)\leq \delta$. Then by definition, $|P'(0)-Q'(0)|\leq \delta$ and $|P'(1)-Q'(1)|\leq \delta$. In addition, by Lemma~\ref{lemma:signatures3}, $\df(P',Q')\leq 3\delta$ and there is a $2\delta$-visiting order of $Q'$ on $P'$. By the triangle inequality we obtain $\df(P',Q'')\leq (3+\varepsilon/2)\delta$, $|P'(0)-Q''(0)|\leq (1+\varepsilon/2)\delta$, $|P'(1)-Q''(1)|\leq (1+\varepsilon/2)\delta$, and that there is a $((2+\varepsilon/2)\delta)$-visiting order of $Q''$ on $P'$. Hence, by the construction of ${\mathcal C}(P)$, it must be that $Q''\in {\mathcal C}(P)$, which implies that $Q''$ is stored as a key in ${\mathcal H}$. This is a contradiction.
\end{proof}
\begin{theorem}\label{thm:threepluseps}
Let $\varepsilon\in(0,1]$. There is a data structure for the $(3+\varepsilon)$-ANN problem,
which stores $n$ one-dimensional curves of complexity $m$ and supports query curves of complexity $k$, uses space in $n\cdot \ensuremath{\mathcal{O}}(1/\varepsilon)^k +\ensuremath{\mathcal{O}}(nm)$, needs $n\cdot \ensuremath{\mathcal{O}}(1/\varepsilon)^k+\ensuremath{\mathcal{O}}(nm)$ expected preprocessing time and answers a query in $\ensuremath{\mathcal{O}}(k)$ time.
\end{theorem}
\begin{proof}
Correctness follows from Lemma~\ref{lemma:querycorrectness3apprx}. The bounds on the preprocessing time and space complexity follow from Theorem~\ref{thm:twopluseps_two}. The bound on the query time follows from Theorem~\ref{thm:twopluseps_one}. \end{proof}
\section{Proofs of main lemmas}
\label{section:missingproofs}
In this section we give full proofs of the lemmas stated in Section~\ref{section:lemmas}.
We start by proving a fundamental observation and lemma on the Fr\'echet distance of approximately monotone one-dimensional curves.
\begin{observation}\label{lemma:montonecurvesimple}
Let $Q$ be a directed line segment and let $ P:~[0, 1] \mapsto \mathbb{R}$ be a curve.
It holds that $\df(P,Q) \leq \delta$ if and only if the following conditions are satisfied:
\begin{compactenum}[(i)]
\item $P$ is $2\delta$-monotone with respect to $Q$, and
\item $|P(0)-Q(0)|\leq \delta$, $|P(1)-Q(1)|\leq \delta$, and
\item $P \subseteq B(Q,\delta)$.
\end{compactenum}
\end{observation}
\begin{proof}
We assume that $Q(0) \leq Q(1)$ as the other case is symmetric.
Now, assume first that $\df(P,Q) \leq \delta$. Then (ii) holds because start and end points are matched in any traversal, and (iii) holds as the Hausdorff distance is a lower bound for the Fr\'echet distance. Finally, (i) holds as otherwise there exist two indices $s,t \in [0,1]$ with $s < t$ and $P(t) < P(s) - 2\delta$. As $Q$ is increasing, no traversal can match $P(s)$ and $P(t)$ in distance at most $\delta$.
Second, assume that (i), (ii), and (iii) hold.
Then $\df(P,Q) \leq \delta$ is implied by Lemma \ref{lemma:montonecurves}, below, but to provide some intuition we give a simpler proof here.
The following traversal with position $s$ on $P$ and position $t$ on $Q$ stays within distance $\delta$. We start in $P(0), Q(0)$, then we continue on $P$ until $P(s) = Q(0)+\delta$. Then we always choose $t$ such that $Q(t) = \min_{s' \geq s} P(s') + \delta$ while traversing $P$, i.e., continuously increasing $s$. When we reach the end of $Q$, we can traverse $P$ until the end while staying in $Q(1)$. It is easy to check that properties (i), (ii), and (iii) ensure distance $\delta$ during the described traversal.
\end{proof}
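Since a one-dimensional polyline attains its extreme values at vertices, the three conditions of the observation can be checked directly from the vertex values. The following is a minimal sketch of such a decision procedure (not part of the paper; all function names are ours), assuming $P$ is given as the list of its vertex values and $Q$ as the directed segment from $q_0$ to $q_1$:

```python
def is_2delta_monotone_increasing(vals, delta):
    """Check that no later vertex drops more than 2*delta below an earlier one.
    For piecewise-linear 1D curves, checking the vertices suffices."""
    running_max = float("-inf")
    for v in vals:
        running_max = max(running_max, v)
        if running_max - v > 2 * delta:
            return False
    return True

def within_frechet_of_segment(p_vals, q0, q1, delta):
    """Decide d_F(P, Q) <= delta for a polyline P (vertex values p_vals)
    and a directed segment Q from q0 to q1, via conditions (i)-(iii)."""
    if q0 > q1:  # mirror at the origin so the segment is increasing
        p_vals, q0, q1 = [-v for v in p_vals], -q0, -q1
    return (is_2delta_monotone_increasing(p_vals, delta)             # (i)
            and abs(p_vals[0] - q0) <= delta                         # (ii)
            and abs(p_vals[-1] - q1) <= delta
            and all(q0 - delta <= v <= q1 + delta for v in p_vals))  # (iii)
```

The mirroring step realizes the symmetric case $Q(0) > Q(1)$ mentioned at the start of the proof.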
The following lemma statement is similar to the above observation with the important difference that the line segment $Q$ is replaced by a $2\delta$-monotone curve. The proof works by constructing a traversal greedily and showing correctness of the greedy algorithm.
\begin{lemma}\label{lemma:montonecurves}
Let $P$ and $Q$ be $2\delta$-monotone curves with
\begin{compactenum}[(i)]
\item\label{mc_c4} $P$ is $2\delta$-monotone with respect to $\overline{Q(0)Q(1)}$, and
\item\label{mc_c1} $|P(0)-Q(0)|\leq \delta$, $|P(1)-Q(1)|\leq \delta$, and
\item\label{mc_c3} $P \subseteq B(Q,\delta)$, and
\item\label{mc_c2} $Q \subseteq \overline{Q(0)Q(1)}$.
\end{compactenum}
It holds that $\df(P,Q) \leq \delta$.
\end{lemma}
\begin{proof}
We assume that $Q(0) \leq Q(1)$ as the other case is symmetric.
If $Q$ is not $2\delta$-monotone increasing, then it also cannot be $2\delta$-monotone decreasing:
if there are two points $s,t \in [0,1]$ with $s < t$ such that $Q(t) < Q(s) - 2\delta$, then, as $Q(t) \geq Q(0)$ by condition (\ref{mc_c2}), we have that $Q(s) > Q(t) + 2\delta \geq Q(0) + 2\delta$ and thus $Q$ is not $2\delta$-monotone decreasing.
However, as $Q$ is $2\delta$-monotone, it has to be $2\delta$-monotone \emph{increasing}.
Due to condition~(\ref{mc_c4}), $P$ is also $2\delta$-monotone \emph{increasing}.
We give a traversal of $P, Q$ with distance at most $\delta$ --- denoting the position during the traversal with $(s,t) \in [0,1]^2$ --- that tries to maintain two invariants:
\begin{enumerate}[(1)]
\item\label{invariant1} $P$ and $Q$ are in a position $(s,t) \in [0,1]^2$ such that $P(s) = Q(t) + \delta$.
\item\label{invariant2} The suffix of $Q$ is strictly greater than the current value $Q(t)$, i.e., $\forall t' > t: Q(t') > Q(t)$.
\end{enumerate}
In general, both invariants may be violated at the very beginning of the traversal, that is, for $s=t=0$.
Let us first describe how we traverse from the beginning of $P, Q$ to a position $(s,t) \in [0,1]^2$ such that these invariants are fulfilled.
We first traverse $P$ until it first reaches $Q(0) + \delta$, while in $Q$ we stay in $Q(0)$. Note that by condition~(\ref{mc_c1}), we cannot have $P(0) > Q(0) + \delta$. Furthermore, this traversal is feasible as the traversed prefix of $P$ is in the range $[Q(0) - \delta, Q(0) + \delta]$, by condition~(\ref{mc_c3}), and thus within distance $\delta$ to $Q(0)$. If we reach $P(1)$ before reaching $Q(0) + \delta$, then we know that $Q \subseteq [P(1) - \delta, P(1) + \delta]$ and we can thus traverse all of $Q$, so $\df(P,Q) \leq \delta$. If we did not reach $P(1)$, we now traverse $Q$ until its last point with value $Q(0)$, which is possible as the traversed prefix of $Q$ lies in $[Q(0), Q(0) + 2\delta]$, due to condition~(\ref{mc_c2}) and as $Q$ is $2\delta$-monotone increasing, and the position on $P$ is currently $Q(0) + \delta$.
From now on, we traverse $P$ and $Q$ with the same speed in image space, unless one of the two invariants would be violated by continuing the traversal.
If both invariants would be violated at the same time, we break ties by restoring Invariant~(\ref{invariant1}) before Invariant~(\ref{invariant2}).
Now, let $s$ be the position on $P$ and $t$ the position on $Q$ when an invariant would be violated. When Invariant~(\ref{invariant1}) would be violated, we continue traversing $P$ while staying in $Q(t)$ on $Q$ until the next time we reach a position $s'$ on $P$ with value $P(s') = P(s)$. Note that we might not reach such a position $s'$ because we reached the end of $P$. However, if we did not reach the end of $P$, the invariant is restored. This traversal keeps the two positions at distance $\delta$ as $P(s) = Q(t) + \delta$ and as $P$ is 2$\delta$-monotone\xspace increasing.
In case Invariant~(\ref{invariant2}) would be violated, we continue traversing $Q$ until we reach the largest position $t' > t$ such that $Q(t') = Q(t)$. Note that afterwards, both invariants hold (as we restore Invariant~(\ref{invariant1}) before Invariant~(\ref{invariant2})), and, in particular, we cannot reach the end of $Q$ due to the existence of $Q(t')$ which we reach at the end of restoring Invariant~(\ref{invariant2}). This traversal also keeps the two positions at distance $\delta$ as initially $Q(t) = P(s) - \delta$ and $Q$ is 2$\delta$-monotone\xspace increasing and there is no position $t''$ on $Q$ with $Q(t'') < Q(t)$, i.e., all the points before reaching position $t'$ on $Q$ have to be in the range $[Q(t), Q(t) + 2\delta]$.
In all of the above cases we are guaranteed to make progress in our traversal. Furthermore, we will reach the end of $P$ before or at the same time as we reach the end of $Q$ because, first, while restoring invariants we can only reach the end of $P$ but not of $Q$ as argued above and, second, if we reach the end of $Q$ while both invariants would continue to hold, we also have to reach the end of $P$ at the same time as otherwise we would violate condition~(\ref{mc_c3}) of the lemma.
When we reach the end of $P$, we know that $P(1) \in [Q(1)-\delta, Q(1)+\delta]$ due to condition~(\ref{mc_c1}), and the remaining $Q$ is in $[P(1)-\delta, Q(1)]$. Thus, the remaining $Q$ is in $[P(1)-\delta, P(1)+\delta]$ and consequently $Q$ can be traversed until the end.
It follows from the traversal constructed thereby that $\df(P,Q) \leq \delta$.
\end{proof}
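Conditions (i)--(iv) of the lemma can likewise be tested at vertex granularity, using that the image of a connected one-dimensional curve is an interval, so $B(Q,\delta)$ is simply $[\min Q - \delta, \max Q + \delta]$. The sketch below (our own helper, not from the paper) is a sufficient test for $\df(P,Q)\leq\delta$: it verifies the hypotheses of the lemma, which may fail even when the distance bound itself holds.

```python
def frechet_at_most_delta(p, q, delta):
    """Sufficient test for d_F(P, Q) <= delta via conditions (i)-(iv) of
    the lemma, for 1D polylines given as vertex-value lists."""
    if q[0] > q[-1]:  # mirror both curves so that Q(0) <= Q(1)
        p, q = [-v for v in p], [-v for v in q]

    def monotone_2delta(vals):
        # 2*delta-monotone increasing, checked at the vertices
        running_max = float("-inf")
        for v in vals:
            running_max = max(running_max, v)
            if running_max - v > 2 * delta:
                return False
        return True

    return (monotone_2delta(p) and monotone_2delta(q)             # (i), Q 2d-monotone
            and abs(p[0] - q[0]) <= delta
            and abs(p[-1] - q[-1]) <= delta                       # (ii)
            and min(q) - delta <= min(p)
            and max(p) <= max(q) + delta                          # (iii)
            and min(q) >= q[0] and max(q) <= q[-1])               # (iv)
```

Condition (iv) reduces to checking that the minimum and maximum vertex values of $Q$ are its endpoints, which is exactly range preservation.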
\subsection{Proofs of lemmas for straightenings\xspace}\label{sec:lemmatasim}
Next, we want to prove Lemma~\ref{lemmastraightenings} from Section~\ref{section:lemmas}.
We first prove a simpler statement, which can be thought of as a special case where the straightening\xspace consists of only one edge.
\begin{lemma}
\label{lemma:strongertriangleineq}
Let $X=\overline{ab}\subset \mathbb{R}$ be a line segment and let $Q:~[0,1]\mapsto \mathbb{R}$ be a curve such that:
$Q(0)=X(0)$, $Q(1)=X(1)$, for all $t \in [0,1]: Q(t) \in \overline{ab}$ and $\df(Q,X) \leq \delta$.
For any curve $P:~[0,1]\mapsto \mathbb{R}$ with $\df(P,X) \leq \delta$, it holds that $\df(P,Q) \leq \delta$.
\end{lemma}
\begin{proof}
To show the lemma statement, we want to apply Lemma~\ref{lemma:montonecurves} to $P$ and $Q$. For this, we need to show that the conditions on $Q$ and $P$ from the lemma statement are met. By Observation~\ref{lemma:montonecurvesimple} applied to $Q$ and the line segment $X$, it follows that $Q$ must be $2\delta$-monotone with respect to $X$, and by our assumptions, $Q$ is range-preserving (condition (\ref{mc_c2})). By Observation~\ref{lemma:montonecurvesimple} applied to $P$ and $X$, it also follows that $P$ is $2\delta$-monotone, and conditions (\ref{mc_c1}), (\ref{mc_c3}) and (\ref{mc_c4}) are satisfied. Therefore, Lemma~\ref{lemma:montonecurves} can be applied to $P$ and $Q$ and the claim is implied.
\end{proof}
\lemmastraightenings*
\begin{proof}Let $q_1,\ldots,q_{\ell}$ be the parameters corresponding to the vertices of $Q'$ in $Q$, i.e., the vertices of $Q'$ are $Q(q_1),\ldots,Q(q_{\ell})$.
Let $\phi:[0,1] \rightarrow [0,1]^2$ be a $\delta$-traversal between $P$ and $Q'$. Let $0=t_1 \leq \dots \leq t_{\ell}=1$ be a partition of the parameter space of $P$ such that for any $1 \leq i \leq \ell-1$, the edge $\overline{Q(q_i)Q(q_{i+1})}$ is mapped to $P[t_i,t_{i+1}]$ under $\phi$. As such, we have
\[ \df (P[t_i,t_{i+1}],\overline{Q(q_i)Q(q_{i+1})}) \leq \delta \]
By the locality property of $\delta$-simplifications, we also have that
\[ \df (Q[q_i,q_{i+1}],\overline{Q(q_i)Q(q_{i+1})}) \leq \delta \]
Now, Lemma \ref{lemma:strongertriangleineq} implies that
\[
\df(P[t_i,t_{i+1}], Q[q_i,q_{i+1}]) \leq \delta.
\]
Finally, we apply Observation \ref{observation:concatenation} on $P=\bigcirc_{i=1}^{\ell}P[t_i,t_{i+1}]$ and $Q=\bigcirc_{i=1}^{\ell}Q[q_i,q_{i+1}]$, and we obtain \[\df(P,Q)\leq \max_{i \in [\ell]}\df\left(P[t_i,t_{i+1}], Q[q_i,q_{i+1}] \right) \leq \delta.\]
\end{proof}
\subsection{Proofs of lemmas for signatures}\label{sec:lemmatasig}
Next, we want to prove Lemma~\ref{lemmasignatureproxy} from Section~\ref{section:lemmas}.
We first prove an auxiliary statement for signature edges in Lemma~\ref{lemma:signatureedge1}. In particular, we need to take care of the first and last edge of the signature. For the other edges we can use Lemma~\ref{lemma:strongertriangleineq}. Technically, we will also need the symmetric statement of this lemma for $a > b$; this follows by mirroring at the origin.
The proof of this lemma turns out to be technically involved. For the proof of Lemma~\ref{lemmasignatureproxy} we can then use the same approach as for Lemma~\ref{lemmastraightenings} above.
\begin{lemma}\label{lemma:signatureedge1}
Let $\delta=\delta'+\delta''$ for $\delta,\delta',\delta''\geq 0$. Let $X=\overline{ab}\subset \mathbb{R}$ be a line segment with $a \leq b$ and let $Q:~[0,1]\mapsto \mathbb{R}$ be a curve such that:
$Q(0)=X(0)$, $Q(1)=X(1)$ and $\df(Q,X) \leq \delta'$. Let $P:~[0,1]\mapsto \mathbb{R}$ be a curve with $\df(P,X) \leq \delta$.
If either
\begin{compactenum}[(i)]
\item $|Q(0)-P(0)| \leq \delta''$ and $|Q(1)-P(1)| \leq \delta''$, or
\item $|Q(0)-P(0)| \leq \delta''$ and $\max_{t \in [0,1]}(Q(t)) \leq Q(1)$, or
\item $\min_{t \in [0,1]}(Q(t)) \geq Q(0)$ and $|Q(1)-P(1)| \leq \delta''$,
\end{compactenum}
then it holds that $\df(P,Q) \leq \delta$.
\end{lemma}
\begin{proof}
Let $t_{\min} = \argmin_{t \in [0,1]} Q(t)$ and $t_{\max} = \argmax_{t \in [0,1]} Q(t)$. In case the minimum (resp.\ maximum) is not unique, we choose any of them.
By Observation~\ref{lemma:montonecurvesimple}, we have that $\forall t\in [0,1]~ Q(t)\in [Q(0)-\delta',Q(1)+\delta']$ and by assumption of case (i) $|P(0)-Q(0)| \leq \delta''$ and $|P(1)-Q(1)| \leq \delta''$. Therefore, by triangle inequality, we have in case (i), that
\[ |P(0)-Q(t_{\min})| \leq \delta \quad\text{ and }\quad |P(1)-Q(t_{\max})| \leq \delta \]
It is easy to see that this holds in cases (ii) and (iii) as well, since $|P(0)-Q(0)| \leq \delta$ and $|P(1)-Q(1)| \leq \delta$ hold in any case, as we assume $\df(P,X) \leq \delta$.
Now, define
\[ t_1 = \min \{t \in [0,1] \mid P(t) \geq Q(t_{\min}) + \delta\} \]
\[ t_2 = \max \{t \in [0,1] \mid P(t) \leq Q(t_{\max}) - \delta\} \]
If such a $t_1$ does not exist, then we set $t_1=1$. If $t_2$ does not exist, then we set $t_2=0$.
Note that by construction and Observation~\ref{lemma:montonecurvesimple} we have
\begin{equation}\label{eq:signatureedge1:eq1}
\df(P[0,t_1],Q(0)) \leq \delta \quad\text{ and }\quad \df(P[t_2,1],Q(1)) \leq \delta
\end{equation}
Indeed, (\ref{eq:signatureedge1:eq1}) holds true since
$Q(t_{\min}) \leq Q(0) \leq Q(t_{\min})+\delta'$
and, likewise,
$Q(t_{\max}) \geq Q(1) \geq Q(t_{\max})-\delta'$, and, moreover, the image of the subcurve
$P[0,t_1]$ is contained in the interval $[Q(0)-\delta,Q(t_{\min})+\delta]$ and the image of the subcurve $P[t_2,1]$ is contained in the interval $[Q(t_{\max})-\delta, Q(1)+\delta]$.
In addition, we have
\begin{equation}\label{eq:signatureedge1:eq2} \df(P(t_1),Q[0,t_{\min}]) \leq \delta \quad\text{ and }\quad \df(P(t_2),Q[t_{\max},1]) \leq \delta
\end{equation}
Indeed, (\ref{eq:signatureedge1:eq2}) holds true, since $\delta'\leq \delta$ and by Observation~\ref{lemma:montonecurvesimple}, $Q$ is $2\delta'$-monotone increasing, and therefore the image of the subcurve $Q[0,t_{\min}]$ is contained in the interval $[Q(t_{\min}), Q(t_{\min})+2\delta']$ which by construction is equal to $[P(t_1)-\delta', P(t_1)+\delta']$ and the image of the subcurve $Q[t_{\max}, 1]$ is contained in the interval $[Q(t_{\max})-2\delta', Q(t_{\max})]$, which by construction is equal to $[P(t_2)-\delta', P(t_2)+\delta']$.
Now, assume that $t_1 \leq t_2$ and $t_{\min} \leq t_{\max}$. In this case, the subcurves $P[t_1,t_2]$ and $Q[t_{\min},t_{\max}]$ are well-defined.
By construction, $|P(t_1)-Q(t_{\min})| \leq \delta$, $|P(t_2)-Q(t_{\max})| \leq \delta$ and $Q[t_{\min},t_{\max}]\subseteq \overline{Q(t_{\min}) Q(t_{\max})}$. By Observation~\ref{lemma:montonecurvesimple}, $P$ and $Q$ are both $2\delta$-monotone with respect to $X$, and, by definition, $X=\overline{Q(0)Q(1)}$. Moreover, by the definition of $t_1,t_2$, we have $P[t_1,t_2]\subseteq B(Q[t_{\min},t_{\max}],\delta)$.
Therefore all conditions of Lemma~\ref{lemma:montonecurves} are satisfied, which implies that
\begin{equation}\label{eq:signatureedge1:eq3} \df({P[t_1,t_2],Q[t_{\min}, t_{\max}]}) \leq \delta
\end{equation}
In summary, we have by (\ref{eq:signatureedge1:eq1}),(\ref{eq:signatureedge1:eq2}), and (\ref{eq:signatureedge1:eq3}) that
\[ \max \begin{pmatrix}
\df(P[0,t_1],Q(0))\\
\df(P(t_1),Q[0,t_{\min}])\\
\df({P[t_1,t_2],Q[t_{\min}, t_{\max}]})\\
\df(P(t_2),Q[t_{\max},1])\\
\df(P[t_2, 1],Q(1))
\end{pmatrix} \leq \delta \]
Now, by Observation~\ref{observation:concatenation} we can concatenate these subcurves and $\df(P,Q) \leq \delta$ is implied.
If the assumption $t_1 \leq t_2$ fails, then, in fact, a simpler decomposition works.
Indeed, if $t_1 > t_2$, then it holds by (\ref{eq:signatureedge1:eq1}) and (\ref{eq:signatureedge1:eq2}) that
\[ \max \begin{pmatrix}
\df(P[0,t_1],Q(0))\\
\df(P(t_1),Q)\\
\df(P[t_1, 1],Q(1))
\end{pmatrix} \leq \delta \]
Therefore, also in this case, $\df(P,Q) \leq \delta$ holds true.
Finally, we need to consider the case that the assumption
$t_{\min} \leq t_{\max}$ fails. We may assume that $t_1 \leq t_2$, as we covered the case $t_1 > t_2$ above. We will consider the different cases from the lemma statement separately.
First, note that if $t_{\min} > t_{\max}$, then $|Q(t_{\max})-Q(t_{\min})| \leq 2\delta$, since $Q$ is $2\delta$-monotone, and therefore, $Q$ is contained in the interval $[P(t_1)-\delta, P(t_1)+\delta]$. By a similar argument, $Q$ is contained in the interval $[P(t_2)-\delta, P(t_2)+\delta]$.
Now, assume case (ii) from the lemma statement. In this case, we have by the above and by Lemma~\ref{lemma:montonecurves}
\[ \max \begin{pmatrix}
\df(P[0,t_1],Q(0))\\
\df(P(t_1),Q[0,t_{\min}])\\
\df({P[t_1,1],Q[t_{\min}, 1]})
\end{pmatrix} \leq \delta \]
Assume case (iii) from the lemma statement. In this case, we have symmetrically
\[ \max \begin{pmatrix}
\df({P[0,t_2],Q[0, t_{\max}]})\\
\df(P(t_2),Q[t_{\max},1])\\
\df(P[t_2, 1],Q(1))
\end{pmatrix} \leq \delta \]
Now, for case (i), we claim that there exist $0 \leq q_1 \leq q_2 \leq 1$, such that
\[ \max \begin{pmatrix}
\df(P[0,t_1],Q(0))\\
\df(P(t_1),Q[0,q_1])\\
\df(P[t_1,t_2],Q[q_1,q_2])\\
\df(P(t_2),Q[q_2,1])\\
\df(P[t_2, 1],Q(1))
\end{pmatrix} \leq \delta \]
Indeed, from what we derived, $\df(P(t_1),Q[0,q_1])\leq \delta$ and $\df(P(t_2),Q[q_2,1])\leq \delta$ hold for any choice of $q_1, q_2 \in [0,1]$. The first and last line hold by (\ref{eq:signatureedge1:eq1}). It remains to show that we can choose $q_1,q_2$ so that $\df(P[t_1,t_2],Q[q_1,q_2]) \leq \delta$ holds. Since $\df(P,X)\leq \delta$, there must be a subsegment $X[x_1,x_2]$ of $X$, such that $\df(P[t_1,t_2],X[x_1,x_2]) \leq \delta$. Recall that $\overline{Q(0)Q(1)}=X$; by the intermediate value theorem we can define suitable $q_1,q_2$ as follows
\[ q_1 = \max \{q \in [0,1] \mid Q(q)=X(x_1) \} \]
\[ q_2 = \min \{q \in [q_1,1] \mid Q(q)=X(x_2) \} \]
Now, we can apply Lemma~\ref{lemma:montonecurves} and conclude that $\df(P[t_1,t_2],Q[q_1,q_2]) \leq \delta$. Therefore, also in case (i), we have $\df(P,Q) \leq \delta$.
\end{proof}
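The parameters $t_1$ and $t_2$ used in the proof are plain threshold crossings and can be computed exactly on a polyline by linear interpolation. A sketch with our own (hypothetical) helper names, parameterizing $P$ by edge index plus fraction along that edge; a return value of \texttt{None} corresponds to the fallback $t_1=1$ (resp.\ $t_2=0$) from the proof:

```python
def first_time_at_least(vals, thresh):
    """Smallest parameter t (edge index + fraction along the edge) with
    P(t) >= thresh on the polyline through vals; None if never reached."""
    if vals[0] >= thresh:
        return 0.0
    for i in range(len(vals) - 1):
        a, b = vals[i], vals[i + 1]
        if a < thresh <= b:
            # linear interpolation on the crossing edge
            return i + (thresh - a) / (b - a)
    return None

def last_time_at_most(vals, thresh):
    """Largest parameter t with P(t) <= thresh, obtained by mirroring."""
    t = first_time_at_least([-v for v in reversed(vals)], -thresh)
    return None if t is None else (len(vals) - 1) - t
```

With these, $t_1 = $ \texttt{first\_time\_at\_least}$(P,\, Q(t_{\min})+\delta)$ and $t_2 = $ \texttt{last\_time\_at\_most}$(P,\, Q(t_{\max})-\delta)$.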
Now we are ready to prove Lemma~\ref{lemma:signatureproxy}.
\lemmasignatureproxy*
\begin{proof}
This follows by a modification of the proof of Lemma~\ref{lemma:simplproxy}. Although the two proofs are very similar, the differences are subtle. Therefore, we give the full proof for the sake of completeness.
Let $q_1,\ldots,q_{\ell}$ be the parameters corresponding to the vertices of $Q'$ in $Q$, i.e., the vertices of $Q'$ are $Q(q_1),\ldots,Q(q_{\ell})$.
Let $\phi:[0,1] \rightarrow [0,1]^2$ be a $\delta$-traversal between $P$ and $Q'$. Let $0=t_1 \leq \dots \leq t_{\ell}=1$ be a partition of the parameter space of $P$ such that for any $1 \leq i \leq \ell-1$, the edge $\overline{Q(q_i)Q(q_{i+1})}$ is mapped to $P[t_i,t_{i+1}]$ under $\phi$. As such, we have
\begin{equation}\label{eq:PQ1}
\df (P[t_i,t_{i+1}],\overline{Q(q_i)Q(q_{i+1})}) \leq \delta
\end{equation}
By the definition of $\delta$-simplifications, we also have that
\begin{equation}\label{eq:PQ2}
\df (Q[q_i,q_{i+1}],\overline{Q(q_i)Q(q_{i+1})}) \leq \delta' \leq \delta
\end{equation}
Now, if the edge $\overline{Q(q_i)Q(q_{i+1})}$ of $Q'$ is range-preserving, then Lemma \ref{lemma:strongertriangleineq} implies that
\begin{equation}\label{eq:PQ3}
\df(P[t_i,t_{i+1}], Q[q_i,q_{i+1}]) \leq \delta.
\end{equation}
Otherwise, it must be (by the definition of signatures) that either $i=1$ or $i+1=\ell$ or both (the edge is the first or last edge of the signature $Q'$ or $Q'$ consists of just one edge). In any of those cases, Lemma~\ref{lemma:signatureedge1} implies $\df(P[t_i,t_{i+1}], Q[q_i,q_{i+1}]) \leq \delta$.
Finally, we apply Observation \ref{observation:concatenation} on $P=\bigcirc_{i=1}^{\ell}P[t_i,t_{i+1}]$ and $Q=\bigcirc_{i=1}^{\ell}Q[q_i,q_{i+1}]$, and we obtain \[\df(P,Q)\leq \max_{i \in [\ell]}\df\left(P[t_i,t_{i+1}], Q[q_i,q_{i+1}] \right) \leq \delta.\]
\end{proof}
\subsection{Proofs of lemmas for visiting orders}\label{sec:visiting}
In order to prove the existence of $\delta'$-visiting orders for some $\delta' \in O(\delta)$ as claimed in Lemma~\ref{lemma:goodsimplificationranges2}, we introduce the concept of a visiting sequence. A visiting sequence is not necessarily monotonically increasing, while visiting orders according to Definition~\ref{def:visitingorder} are. Nonetheless, this definition of visiting sequence will turn out to be useful. It is important that a $\delta$-visiting sequence is derived from a monotone traversal.
We will show (Lemmas~\ref{lemma:nonmonotone} and~\ref{lem:visitingorder}) that any non-monotonic visiting sequence can be turned into a monotonic one at the expense of a constant factor in the radius of the visiting relationship.
\begin{definition}\label{def:visit}
Let $P:[0,1] \rightarrow \mathbb{R}$ and $Q:[0,1] \rightarrow \mathbb{R}$ be curves, let $\delta > 0$, and let $\phi: [0,1] \rightarrow [0,1]^2$ be a monotone traversal. We say a vertex $w$ of $Q$ $\delta$-\textbf{visits} a vertex $v$ of $P$ \textbf{under} $\phi$ if the following holds: \begin{compactenum}[(i)]
\item $|w-v|\leq \delta$ and
\item at least one of the following holds:
\begin{compactenum}
\item $\phi$ associates $w$ with $v$, or
\item $\phi$ associates $w$ with the interior of an edge of $P$ that is incident to $v$, or
\item $\phi$ associates $v$ with the interior of an edge of $Q$ that is incident to $w$.
\end{compactenum}
\end{compactenum}
Note that the induced relation on the vertices is symmetric for any fixed $\delta$ and $\phi$.
\end{definition}
\begin{definition}\label{def:visitingseq}
Let $P:[0,1] \rightarrow \mathbb{R}$ and $Q:[0,1] \rightarrow \mathbb{R}$ be curves and let $\phi: [0,1] \rightarrow [0,1]^2$ be a monotone traversal. Let $S$ be a subsequence of the vertices of $Q$ of length $\ell$. Let $u_1,\dots,u_{\ell}$ denote the ordered vertices of $S$ and let $v_1,\dots,v_{m}$ denote the ordered vertices of $P$. A \textbf{$\delta$-visiting sequence} of $S$ on $P$ \textbf{under} $\phi$ is a sequence of indices $i_1,\dots, i_{\ell}$, such that each $u_{j}$ of $S$ $\delta$-visits the vertex $v_{i_{j}}$ of $P$ under $\phi$.
\end{definition}
\begin{lemma}\label{lemma:nonmonotone}
Let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be curves such that $\df(Q,P)\leq \delta$ and let $\phi$ be a monotone traversal realizing this distance. Let $v_i,v_j$ be two vertices of $P$ with $i < j$ in the ordering along $P$. Assume $v_i$ $\delta$-visits a vertex $w_a$ of $Q$ under $\phi$ and $v_j$ $\delta$-visits a vertex $w_b$ of $Q$ under $\phi$ such that $a > b$ in the ordering along $Q$. Then, it must be that $v_i$ $3\delta$-visits $w_b$ under $\phi$ and that $v_j$ $3\delta$-visits $w_a$ under $\phi$.
\end{lemma}
\begin{proof}
Since $a > b$, but under $\phi$ a point on an edge incident to $w_a$ is matched earlier than a point on an edge incident to $w_b$, the monotonicity of $\phi$ implies that $\overline{w_b w_a}$ is an edge in $Q$.
Let $P(t)$ and $P(t')$ be the points that $v_i$ and $v_j$ are mapped to on $\overline{w_b w_a}$ under $\phi$, respectively.
By the monotonicity of $\phi$ we have $t \leq t'$. See Figure~\ref{fig:caseP1aP2a} for an illustration.
Assume that $w_a < w_b$, as the case $w_a > w_b$ is symmetric. Since $P(t)$ and $P(t')$ are both on the edge $\overline{w_b w_a}$, the fact that $t \leq t'$ implies that $P(t') \leq P(t)$. Using the facts that $|v_i - w_a| \leq \delta$ and $|v_j - w_b| \leq \delta$, we obtain
\[ v_i - \delta \leq w_a \leq P(t') \leq P(t) \leq w_b \leq v_j + \delta.\]
At the same time we have
\[ v_j - \delta \leq P(t') \leq P(t) \leq v_i + \delta.\]
It follows that $|v_i-v_j| \leq 2\delta$.
Thus, the claim that $v_i$ is contained in the $3\delta$-range of $w_b$ is then implied by triangle inequality, as well as the symmetric claim that $v_j$ is contained in the $3\delta$-range of $w_a$. As $v_i$ and $v_j$ are both matched to the edge $\overline{w_b w_a}$, we also have that $v_i$ and $v_j$ visit the $3\delta$-ranges of $w_b$ and $w_a$, respectively.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics{vertex-ordering-3monotone.pdf}
\caption{Illustration to the proof of Lemma~\ref{lemma:nonmonotone}. Assuming $w_a<w_b$ as in the proof, $v_i$ visits $w_a$ and $v_j$ visits $w_b$, but $i<j$ and $a>b$, so the visiting sequence is not monotone. }
\label{fig:caseP1aP2a}
\end{figure}
\begin{lemma}\label{lem:visitingorder}
Let $P:[0,1] \rightarrow \mathbb{R}$ and $Q:[0,1] \rightarrow \mathbb{R}$ be curves and let $\phi:[0,1] \rightarrow [0,1]^2$ be a monotone traversal that maps them within distance $\delta$. Let $S$ be a subsequence of the vertices of $Q$. Any $\delta$-visiting sequence of $S$ on $P$ under $\phi$ implies a $3\delta$-visiting order of $S$ on $P$.
\end{lemma}
\begin{proof}
Let $u_1,\dots,u_{\ell}$ denote the vertices of $S$ and let $i_1,\dots,i_{\ell}$ denote the visiting sequence.
We generate a monotonically increasing sequence as follows. For every $u_{j}$, we set $i_{j}$ to the minimum of the suffix sequence $i_{j},\dots,i_{\ell}$. If $i_{j}$ was already the minimum of this suffix, then nothing changes. Otherwise, let $i_k$ be an index at which this minimum is attained. By Lemma~\ref{lemma:nonmonotone}, the vertex $u_{j}$ $3\delta$-visits the vertex $v_{i_k}$ under $\phi$. After applying this to all elements of the sequence, starting with $j=1$ and ending with $j=\ell$, the sequence $i_1,\dots,i_{\ell}$ is monotonically increasing.
\end{proof}
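The suffix-minimum construction in this proof can be stated in a few lines of code (a sketch; the function name is ours):

```python
def monotonize_visiting_sequence(seq):
    """Replace each entry by the minimum of its suffix, as in the proof of
    the lemma. The result is monotone, and by Lemma "nonmonotone" each
    changed entry still 3*delta-visits its new target vertex."""
    out = list(seq)
    suffix_min = float("inf")
    for j in range(len(out) - 1, -1, -1):  # scan right to left
        suffix_min = min(suffix_min, out[j])
        out[j] = suffix_min
    return out
```

Scanning from the right makes each position the minimum of its suffix in a single pass, so the whole monotonization takes linear time.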
The next two lemmas are used in the proof of Lemma~\ref{lemma:goodsimplificationranges2}.
\begin{lemma}
\label{lemma:notvisiting}
Let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be curves such that $\df(Q,P)\leq \delta$ and let $\phi$ be a monotone traversal realizing this distance.
If none of the inner vertices of $P$ and $Q$ $\delta$-visit each other under $\phi$, then $P$ and $Q$ are $2\delta$-monotone.
\end{lemma}
\begin{proof}
We prove the lemma by induction. We reconstruct the matching $\phi$ and use \enquote{matched} as shorthand for \enquote{matched under $\phi$}.
Recall that we denote the ordered vertices of $P$ and $Q$ by $p_1, p_2, \dots$ and $q_1, q_2, \dots$, respectively. Note that if either $P$ or $Q$ consist of a single vertex or single segment, then the claim immediately follows from Observation~\ref{lemma:montonecurvesimple}.
Otherwise, either $p_2$ is matched to a point on $\overline{q_1 q_2}$ or $q_2$ is matched to a point on $\overline{p_1 p_2}$ and $p_2, q_2$ are inner vertices. As the lemma statement is symmetric with respect to $P$ and $Q$, we assume without loss of generality that $p_2$ is matched to $\overline{q_1 q_2}$. As $p_2$ and $q_2$ are inner vertices, they cannot $\delta$-visit each other, and thus either $p_2 < q_2 - \delta$ or $p_2 > q_2 + \delta$. By mirroring the curves $P$ and $Q$ at the origin, these two cases are symmetric, and we thus assume $p_2 < q_2 - \delta$ without loss of generality. As $p_2$ is matched to $\overline{q_1 q_2}$, it follows that $q_1 < q_2$. Thus, $\overline{q_1 q_2}$ is increasing and $\overline{p_1 p_2}$ has to be 2$\delta$-monotone\xspace increasing as otherwise the matching would have distance larger than $\delta$.
Now, for the inductive step, assume that $\seqtocurve{ p_1, \dots, p_i }$ and $\seqtocurve{ q_1, \dots, q_j }$ are $2\delta$-monotone increasing curves, $p_i, q_j$ are inner vertices, and $p_i$ is matched to a point on $\overline{q_{j-1}q_j}$ with $p_i < q_j - \delta$. Note that this again implies $q_{j-1} < q_j$.
Let us now prove the inductive step. If $p_{i+1}$ is an inner vertex, then either (i) $p_{i+1}$ is also matched to a point on $\overline{q_{j-1}q_j}$ or (ii) $q_j$ is matched to a point on $\overline{p_i p_{i+1}}$.
In case (i), $p_{i+1}$ extends a subcurve $\seqtocurve{p_{i'}, \dots, p_i}$ with $i' \geq 1$ that is completely matched to a part of the increasing segment $\overline{q_{j-1}q_j}$. The subcurve $\seqtocurve{p_{i'}, \dots, p_i}$ has to be 2$\delta$-monotone\xspace increasing according to Observation~\ref{lemma:montonecurvesimple}. Either $p_{i'}$ is the start of $P$ (i.e, $i' = 1$) and thus $\seqtocurve{p_1, \dots, p_{i+1}}$ is 2$\delta$-monotone\xspace increasing, or $p_{i'-1}$ has to be matched to a part of $Q$ before $q_{j-1}$ and thus $q_{j-1}$ is an inner vertex.
As $q_{j-1}$ was already matched, it follows that either $p_{i'-1}$ is the start of $P$ (i.e., $i'-1 = 1$) and $p_{i'-1} \leq q_{j-1} + \delta$, or $p_{i'-1}$ is an inner vertex and $p_{i'-1} < q_{j-1} - \delta$ as they do not $\delta$-visit each other.
In both cases $\seqtocurve{p_1, \dots, p_{i'-1}}$ is contained in $[-\infty, q_{j-1}+\delta)$; for the first case this holds as $\seqtocurve{p_1, \dots, p_{i'-1}}$ is 2$\delta$-monotone\xspace increasing by induction. Consequently, the concatenation of $\seqtocurve{p_1, \dots, p_{i'-1}}$ and $\seqtocurve{p_{i'}, \dots, p_{i+1}}$ is also 2$\delta$-monotone\xspace increasing.
Now consider case (ii), i.e., $q_j$ is matched to a point on $\overline{p_i p_{i+1}}$. In this case $\overline{p_i p_{i+1}}$ is increasing as $p_i < q_j$ and $q_j < p_{i+1}$, which is the case because $q_j$ is matched to $\overline{p_i p_{i+1}}$ and $p_i < q_j - \delta$.
Therefore, also in this case it holds that $\seqtocurve{p_1, \dots, p_{i+1}}$ is 2$\delta$-monotone\xspace increasing.
Note that after exchanging $P$ and $Q$, we again fulfill the inductive hypothesis. In particular, since $q_j$ is matched to $\overline{p_i p_{i+1}}$ but $p_{i+1}$ and $q_j$ do not $\delta$-visit each other as both are inner vertices, we must have $q_j < p_{i+1} - \delta$.
Now consider the case that $p_{i+1}$ is not an inner vertex, i.e., it is the last vertex of $P$. In this case, part of $\overline{p_i p_{i+1}}$ has to be matched to $q_j$ as no previous part of $P$ was matched to $q_j$. This implies that $\overline{p_i p_{i+1}}$ again is increasing, as $p_i < q_j - \delta$ and $p_{i+1} \geq q_j -\delta$. Hence $\seqtocurve{p_1 \dots p_{i+1}}$ is $2\delta$-monotone increasing. Moreover, the remainder of $Q$, starting from $q_j$, has to be matched to part of $\overline{p_i p_{i+1}}$, and is therefore $2\delta$-monotone increasing by Observation~\ref{lemma:montonecurvesimple}. Since $\seqtocurve{q_1, \dots, q_{j-1}}$ is 2$\delta$-monotone\xspace increasing by induction and also contained in $[-\infty,p_i-\delta)$, as $p_i$ is matched to the increasing segment $\overline{q_{j-1} q_j}$, it follows that the whole curve $Q$ is 2$\delta$-monotone\xspace increasing.
\end{proof}
\begin{lemma}
\label{lemma:shortcut}
Let $P:[0,1]\mapsto \mathbb{R}$ and $Q:[0,1]\mapsto \mathbb{R}$ be curves such that $\df(Q,P)\leq \delta$ and let $\phi$ be a monotone traversal realizing this distance. Further assume that for all $t \in [0,1]$ we have $Q(t) \in \overline{Q(0)Q(1)}$.
If none of the inner vertices of $Q$ $\delta$-visit an inner vertex of $P$ under $\phi$, then the line segment $Q'=\overline{Q(0) Q(1)}$ is a range-preserving $\delta$-simplification of $Q$ with $\df(Q',P)\leq \delta$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:notvisiting}, $Q$ and $P$ must be $2\delta$-monotone. Moreover, $Q'$ is range-preserving by assumption. Therefore, $Q'$ is a range-preserving $\delta$-simplification of $Q$.
It remains to show the bound on the Fr\'echet distance of $P$ and $Q'$.
To this end, we want to invoke Observation~\ref{lemma:montonecurvesimple}. Indeed, it must be that
\[ \forall t\in [0,1]: P(t) \in \bigcup_{s \in [0,1]} B(Q(s),\delta), \]
since $\df(P,Q) \leq \delta$ and since $Q'$ is range-preserving. Therefore, the conditions of Observation~\ref{lemma:montonecurvesimple} are satisfied and the bound is implied.
\end{proof}
We are now ready to prove Lemma~\ref{lemma:goodsimplificationranges2} from Section~\ref{section:lemmas}.
\lemmagoodsimplificationranges*
\begin{proof}
Let $\phi$ be a monotone traversal that realizes the Fr\'echet distance between $P$ and $Q$. We will construct a $\delta$-straightening\xspace $Q'$ together with a $\ensuremath{\mathcal{O}}(\delta)$-visiting order of $Q'$ on $P$. To this end, consider the subset of vertices of $Q$ that each $\delta$-visit \emph{some} vertex of $P$ under $\phi$ (Definition~\ref{def:visit}). Denote this subset by $S$. Lemma~\ref{lem:visitingorder} implies that there exists a $3\delta$-visiting order of $S$ on $P$.
We denote this visiting order by the function $\kappa: S \rightarrow [m]$ that assigns every vertex of $S$ the index of a vertex of $P$ (where $m$ denotes the number of vertices of $P$).
It is quite possible that $S$ is not a $\delta$-simplification of $Q$ with the desired properties. In a second phase of the construction we will therefore add more vertices of $Q$ to $S$.
Consider any maximal subcurve $Q[s,s']$ of $Q$, such that none of the inner vertices of $Q[s,s']$ $\delta$-visit a vertex of $P$ under $\phi$. It must be that $Q(s)$ corresponds to some vertex $w$ of $S$ and $Q(s')$ corresponds to some vertex $w'$ of $S$. Moreover, $w'$ comes directly after $w$ along $Q$ among the vertices included in $S$. Assume that $Q[s,s']$ has at least one inner vertex. We distinguish two cases:
\begin{enumerate}[(C1)]
\item $B(v_{\kappa(w)},3\delta) \cap B(v_{\kappa(w')}, 3\delta) \neq \emptyset $,
\item otherwise
\end{enumerate}
In the first case (C1), we will add all inner vertices of $Q[s,s']$ to $S$ and assign them the index $\kappa(w)$ in the constructed visiting order $\kappa$.
In the second case (C2), we will only add a specific subset of vertices, which we define as follows.
Define $\alpha$ and $\beta$ as follows:
\[ \alpha = \max \{ t ~\mid~ t \in [s,s'] \text{ and } Q(t) \in B(v_{\kappa(w)},3\delta) \} \]
\[ \beta = \min \{ t ~\mid~ t \in [\alpha,s'] \text{ and } Q(t) \in B(v_{\kappa(w')}, 3\delta) \} \]
Since the $3\delta$-ranges of $v_{\kappa(w)}$ and $v_{\kappa(w')}$ are disjoint, $\alpha$ and $\beta$ are well-defined and it follows by definition that $s \leq \alpha \leq \beta \leq s'$. Therefore, the subcurves $Q[s,\alpha]$, $Q[\alpha,\beta]$, and $Q[\beta,s']$ are well-defined. Now we proceed as follows: we add the inner vertices of $Q[s,\alpha]$ to $S$ and assign them the index $\kappa(w)$ in the constructed visiting order $\kappa$. Secondly, we add the inner vertices of $Q[\beta,s']$ to $S$ and assign them the index $\kappa(w')$ in the constructed visiting order $\kappa$.
We apply this to all such maximal subcurves $Q[s,s']$ (note that these are pairwise disjoint), thereby constructing the sequence $S$ along with the visiting order $\kappa$.
Let $u_1,\dots,u_{\ell}$ be the sequence of vertices of the resulting $S$ in their order along $Q$. Denote with $Q'$ the curve that results from linearly interpolating $u_1,\dots,u_{\ell}$. Note that it is different from $Q$ only in the sections where we omitted the vertices of the subcurve $Q[\alpha,\beta]$ in case (C2).
We claim that $Q'$ is an edge-range-preserving $\delta$-simplification of $Q$. To see this, consider a subcurve $Q[s,s']$ and assume we are in case (C2). By construction, the subcurve $Q[\alpha,\beta]$ is range-preserving (for all $x \in [\alpha,\beta]$ we have $Q(x) \in \overline{Q(\alpha) Q(\beta)}$). Let $P[t,t']$ be a subcurve of $P$ mapped to $Q[\alpha,\beta]$ under $\phi$. Now, Lemma~\ref{lemma:shortcut} applied to the subcurves $P[t,t']$ and $Q[\alpha,\beta]$ implies that $\overline{Q(\alpha)Q(\beta)}$ is an edge-range-preserving $\delta$-simplification of $Q[\alpha,\beta]$ with $\df(\overline{Q(\alpha)Q(\beta)},P[t,t']) \leq \delta$.
Therefore, by Observation~\ref{observation:concatenation}, when removing all vertices of $Q$ in the parameter range $(\alpha,\beta)$ for each such maximal subcurve $Q[s,s']$, we obtain a $\delta$-straightening\xspace $Q'$ of $Q$ with $\df(Q',P)\leq \delta$.
Finally, we argue that the constructed visiting order $\kappa(u_1), \dots, \kappa(u_{\ell})$ is an $11\delta$-visiting order of $Q'$ on $P$. Clearly it is monotonically increasing by construction. Also, it is clear that any vertex added in the first phase is contained in the $3\delta$-range of its assigned vertex of $P$. It remains to argue for any vertex added to $S$ in the second phase, that it is contained in the $11\delta$-range of its assigned vertex in $P$.
Consider a subcurve $Q[s,s']$ from above and assume we are in case (C1). We have that $Q(s) \in B(v_{\kappa(w)},3\delta)$ and $Q(s') \in B(v_{\kappa(w')},3\delta)$. By the case distinction, these two ranges are not disjoint. Therefore, the subcurve starts and ends in the $9\delta$-range of the assigned vertex $v_{\kappa(w)}$.
Moreover, by Lemma~\ref{lemma:notvisiting}, $Q[s,s']$ has to be $2\delta$-monotone. This implies that the entire subcurve lies in the $11\delta$-range of $v_{\kappa(w)}$ and this is also the vertex that we assigned to all of its inner vertices.
A similar argument can be applied in case (C2). By the way we chose $\alpha$, we have that $Q(\alpha)$ is contained in the $3\delta$-range of $v_{\kappa(w)}$, which is also the vertex assigned to the entire subcurve. Since also the subcurve $Q[s,\alpha]$ is $2\delta$-monotone, all remaining vertices in the range $[s,\alpha]$ are contained in the $5\delta$-range of the same vertex. A symmetric argument can be applied to show that all remaining vertices in the range $[\beta, s']$ are contained in the $5\delta$-range of their assigned vertex.
\end{proof}
Finally, we also prove Lemma~\ref{lemma:signatures3} from Section~\ref{section:lemmas}.
\lemmasignatures*
\begin{proof}
By the triangle inequality we have that $\df(P',Q) \leq \df(P',P)+\df(P,Q) \leq 2\delta$. Now Lemma~\ref{lemma:signatures2} applied to $P'$ and the $2\delta$-signature of $Q$ implies that there exists a $2\delta$-visiting order of $Q'$ on $P'$.
It remains to argue that $\df(P',Q') \le 3\delta$. Let $\phi:[0,1] \rightarrow [0,1]^2$ be a $\delta$-traversal of $P$ and $Q$. Consider an edge $X$ of $Q'$ and let $Q[\alpha,\beta]$ be the subcurve of $Q$ that corresponds to $X$.
Let $P[\alpha',\beta']$ be a subcurve of $P$ that is mapped to $Q[\alpha,\beta]$ under $\phi$.
By the triangle inequality
\[ \df(P[\alpha',\beta'],X) \leq \df(P[\alpha',\beta'],Q[\alpha,\beta]) + \df(Q[\alpha,\beta],X) \leq 3\delta. \]
Assume that $P'$ is range-preserving for now (we will treat the general case below) and let $P'[\alpha'',\beta'']$ be the corresponding subcurve of $P'$ starting at $P(\alpha')$, ending at $P(\beta')$, and with inner vertices being the $\delta$-signature vertices of $P$ in the parametrization interval $[\alpha',\beta']$. Note that $P'[\alpha'',\beta'']$ is well-defined since $P'$ is range-preserving as assumed above.
By Observation~\ref{observation:shortcut} it follows that
$\df(P'[\alpha'',\beta''],X) \le 3\delta$.
To show the claim for the case of range-preserving $P'$, we now want to use Observation~\ref{observation:concatenation} to concatenate the corresponding subcurves of $P'$ and $Q'$ and obtain that $\df(P',Q') \le 3\delta$. For this, we can choose the values of $\alpha'$ and $\beta'$ in the above argument such that we obtain a decomposition of $P$ into subcurves. Concretely, let $X_1,\dots,X_s$ be the edges of $Q'$ in their order along $Q'$, with $X_i=\overline{Q(\alpha_i) Q(\beta_i)}$.
Then, we can choose the corresponding subcurves of $P$ as $P[\alpha_i',\beta_i']$, with \[ \alpha_{i-1}' \leq \beta_{i-1}'=\alpha_i' \leq \beta_i' \]
for any $1 < i \leq s$, with $\alpha'_1 = 0$ and $\beta'_s = 1$. Thus, we obtain a decomposition of $P$.
Now, if $P'$ is a range-preserving simplification of $P$, then the above construction induces a decomposition of $P'$ into subcurves $P'[\alpha_i'',\beta_i'']$ and we can apply Observation~\ref{observation:concatenation}.
As noted above, $P'$ is not necessarily range-preserving on all edges since it is a signature. In particular, it may not be range-preserving on the first edge (or the last edge, or both). This could lead to $P(\alpha_2')$ (resp. $P(\alpha_s')$ for the last edge) not being included in the image of the signature edge of $P'$ that corresponds to the subcurve of $P$ containing $\alpha_2'$ (resp., $\alpha_s'$).
Note that if $P(\alpha_2')$ is not contained in the image of the first signature edge, then it must be that $ |P(\alpha_2') - P(0)| \leq \delta $, and in fact, it must be that this holds for the entire subcurve, that is $|P(t) - P(0)| \leq \delta $ for any $t \in [0,\alpha_2']$.
We claim that in this case we can simply set $\alpha_2''$ and $\beta_1''$ to $0$ (resp., we can set $\beta_{s-1}''$ and $\alpha_s''$ to $1$). We argue that this way of choosing the decomposition leads to $\df(P'[\alpha_1'',\beta_1''],X_1) \leq 3\delta$ and $\df(P'[\alpha_2'',\beta_2''],X_2) \leq 3\delta$ so that the above arguments can be applied (for the last two edges of $Q'$ a symmetric argument can be applied and we will omit the explicit analysis).
By the triangle inequality, we have that
\[ |P'(0)-Q(\alpha_2)|\leq |P(0)-P(\alpha_2')|+| P(\alpha_2')-Q(\alpha_2)| \leq 2\delta.\]
Together with \[|P'(0)-Q(\alpha_1)| = |P(0)-Q(0)| \leq \delta \] this implies by Observation~\ref{observation:linesegment} that $\df(P'(0), X_1) \leq 3\delta$ since $X_1$ is a line segment and $X_1=\overline{Q(0)Q(\alpha_2)}$.
Applying the triangle inequality again, we obtain for any $t \in [0,\alpha_2']$ that
\[ |P(t) - Q(\alpha_2)| \leq |P(t)-P(0)| + |P(0)-Q(\alpha_2)| \leq 3\delta\]
By Observation~\ref{observation:concatenation} and since $X_2=\overline{Q(\alpha_2)Q(\beta_2)}$, this implies that \[
\df(P[0,\beta_2'],X_2) \leq \max \left(~
\df(P[0,\alpha_2'],Q(\alpha_2))~,~ \df(P[\alpha_2',\beta_2'], \overline{Q(\alpha_2)Q(\beta_2)})~ \right) \leq 3\delta\]
By Observation~\ref{observation:shortcut} it follows that
$\df(P'[0,\beta_2''],X_2) \le 3\delta$.
\end{proof}
\section{Lower Bounds}\label{section:lowerbounds}
In this section we show several conditional lower bounds for $(2-\varepsilon)$- and $(3-\varepsilon)$-approximate nearest neighbor data structures. We base our hardness results on the well-known Orthogonal Vectors problem.
\begin{definition}[Orthogonal Vectors (OV)]
Given two sets of vectors $A, B \subseteq \{0,1\}^d$, do there exist two vectors $a \in A, b \in B$ such that $\langle a, b \rangle = 0$?
\end{definition}
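For reference, the quadratic-time baseline that OVH conjectures cannot be beaten substantially can be sketched in a few lines of Python (the function name is ours, for illustration only):

```python
def has_orthogonal_pair(A, B):
    # Check every pair (a, b); two 0/1 vectors are orthogonal iff they
    # share no coordinate in which both are 1. Runs in O(|A| * |B| * d).
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)
```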
\begin{definition}[Orthogonal Vectors Hypothesis (OVH)]
For all $\varepsilon > 0$ there exists a $c > 0$ such that there is no algorithm solving OV instances $A, B \subset \{0,1\}^d$ with $|A| = |B|$ and $d = c \log |A|$ in time $\ensuremath{\mathcal{O}}(|A|^{2-\varepsilon})$.
\end{definition}
The above hypothesis is also sometimes called the Low-Dimensional Orthogonal Vectors Hypothesis and it is implied by the Strong Exponential Time Hypothesis \cite{Williams05}.
We use this version of the Orthogonal Vectors Hypothesis as it allows us to rule out running times using an arbitrarily small $\varepsilon$ while still reducing from an instance where vectors have a logarithmic dimension.
It is well known that \emph{unbalanced} OV is as hard as \emph{balanced} OV with sets of the same size \cite{AbboudW14, BringmannK18}.
\begin{lemma}[Unbalanced Orthogonal Vectors Hypothesis] \label{lem:unbalancedOVH}
Assume OVH holds true. For every $\alpha \in (0,1)$ and $\varepsilon > 0$ there exists a $c > 0$ such that there is no algorithm solving OV instances $A, B \subset \{0,1\}^d$ with $|B| = |A|^\alpha$ and $d = c \log |A|$ in time $\ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon})$.
\end{lemma}
\begin{proof}[Proof Sketch.]
We briefly outline why this hardness holds. To that end, assume that we can solve the unbalanced case in time $\ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon})$ for some $\varepsilon > 0$. Then we could solve the balanced case by splitting $B$ into $|A|^{1-\alpha}$ parts of size $|A|^\alpha$, solve these instances in time $\ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon})$, and thus solve the balanced problem in time $\ensuremath{\mathcal{O}}(|A|^{1-\alpha} \cdot |A|^{1+\alpha-\varepsilon}) = \ensuremath{\mathcal{O}}(|A|^{2-\varepsilon})$.
\end{proof}
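The splitting step in this sketch is mechanical; the following Python fragment (helper names are ours) illustrates it, taking the assumed fast unbalanced solver as a parameter:

```python
def solve_balanced_via_unbalanced(A, B, unbalanced_solver, alpha=0.5):
    # Split B into about |A|^(1-alpha) parts of size about |A|^alpha and
    # feed each unbalanced instance (A, chunk) to the assumed solver.
    chunk = max(1, round(len(A) ** alpha))
    return any(unbalanced_solver(A, B[i:i + chunk])
               for i in range(0, len(B), chunk))
```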
Leveraging this insight, we later reduce from unbalanced OV instances to show stronger hardness results.
For convenience, we introduce some additional notation. For a vector $a \in \{0,1\}^d$, we use $a[i]$ to refer to its $i$th entry, where the entries are 0-indexed, i.e., $a = (a[0], \dots, a[d-1])$. Recall that we use the \enquote{$\circ$} operator to concatenate curves and that the curve $P$ where each point is translated by $\tau$ is denoted as $P + \tau$.
Instead of reducing directly from OV, we introduce a novel problem called \textsc{OneSidedSparseOV}\xspace and show that it is hard under OVH. Subsequently, we reduce from this problem to the ANN problems introduced above.
\subsection{\textsc{OneSidedSparseOV}\xspace}
This problem can be thought of as a variant of OV with an additional restriction on one of the input sets. More precisely, for one set we allow at most $k$ non-zero entries in each vector.
\begin{definition}[\textsc{OneSidedSparseOV}\xspace]
Given a value $k \in \mathbb{N}$ and two sets of vectors $A, B \subseteq \{0,1\}^d$ where each $a \in A$ contains at most $k$ non-zero entries, do there exist two vectors $a \in A, b \in B$ such that $\langle a, b \rangle = 0$?
\end{definition}
We also refer to \textsc{OneSidedSparseOV}\xspace with parameter $k$ as $\textsc{OneSidedSparseOV}\xspace(k)$. We now show that this problem is hard under OVH; interestingly, this is already the case for $k \in \omega(1)$.
\begin{lemma}\label{lem:ossov}
Assume OVH holds true. For every $\alpha \in (0,1)$, $\varepsilon > 0$ there is a $c > 0$ such that for any $k \in \omega(1) \cap o(\log |A|)$ there is no algorithm solving $\textsc{OneSidedSparseOV}\xspace(k)$ instances $A,B \subset \{0,1\}^d$ with $|B| = |A|^\alpha$ and $d = k \cdot |A|^{c/k}$ in time $\ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon})$. \end{lemma}
\begin{proof}
For any $\alpha \le 1, \varepsilon > 0$, let $c > 0$ be the constant from Lemma~\ref{lem:unbalancedOVH}. Thus, unless OVH fails, we cannot solve OV instances $A, B \subset \{0,1\}^d$ with $|B| = |A|^\alpha$ and $d = c \log |A|$ in time $\ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon})$. For any $k \in \omega(1) \cap o(\log |A|)$, we now reduce to $\textsc{OneSidedSparseOV}\xspace(k)$ as follows.
We convert $A$ to a \emph{set of sparse vectors} $A'$ and $B$ to a set $B'$ such that $A', B'$ is an equivalent $\textsc{OneSidedSparseOV}\xspace(k)$ instance. To achieve this, we increase the dimensionality of the vectors in the \textsc{OneSidedSparseOV}\xspace instance.
Given a vector $a \in A$, partition the dimensions of $a$ into $k$ blocks of size $d/k$.\footnote{If $d$ is not divisible by $k$, increase the dimension until this is the case and fill these dimensions with zeros.}
More precisely, let
\[
a_i = \left(a\left[i \cdot \frac{d}{k}\right],\, a\left[i \cdot \frac{d}{k} + 1\right],\, \dots,\, a\left[i \cdot \frac{d}{k} + \frac{d}{k}-1\right]\right)
\]
for $i \in \{0,\dots,k-1\}$.
Let $\hat{a}_i \in \left[2^{d/k}\right]$ be defined as the binary vector $a_i$ interpreted as a binary number.
We now construct the corresponding $a' \in A'$ as follows.
We choose the dimension of the vectors in $A', B'$ as $d' = k \cdot 2^{d/k}$ --- note that this equals $k \cdot |A|^{c/k}$ as stated in the lemma.
For each $i \in \{0,\dots,k-1\}$, we set $a'[i \cdot 2^{d/k} + \hat{a}_i] = 1$.
All other entries of $a'$ are set to 0.
Thus, each vector $a' \in A'$ contains exactly $k$ $1$-entries.
We construct the vectors $b' \in B'$ as follows.
Given a vector $b \in B$, we also partition its dimensions the same way as we did for $a \in A$ and obtain vectors $b_0, \dots, b_{k-1}$.
For each $i \in \{0,\dots,k-1\}$ and all $\beta \in \{0,1\}^{d/k}$, where we again interpret $\beta$ as a vector or as a binary number $\hat{\beta}$, we set $b'[i \cdot 2^{d/k} + \hat{\beta}] = 1$ if $\langle b_i, \beta \rangle > 0$, otherwise we set it to zero. This completes the description of the reduction. Note that while we changed the dimension of the vectors, the size of the sets remained the same, that is $|A'| = |A|$ and $|B'| = |B|$.
Note that for any vectors $a \in A$ and $b \in B$ with $\langle a, b \rangle > 0$ there exist parts $a_i, b_i$ and a coordinate $\ell$ such that $a_i[\ell] = b_i[\ell] = 1$, and thus $\langle a_i,b_i \rangle > 0$. Hence, by construction of $b'$, there exists a dimension in $a'$ and $b'$ where both have a 1. On the other hand, if $a'$ and $b'$ contain a 1 in the same dimension, then by construction of $b'$ there have to be two parts $a_i, b_i$ such that $\langle a_i,b_i \rangle > 0$ and thus $\langle a,b \rangle > 0$.
The total running time of this reduction consists of constructing the vectors in $A'$ --- which takes time proportional to the number of entries --- and the inner product computation between vectors of dimensionality $d/k$ for each of the $k \cdot 2^{d/k}$ dimensions of each vector in $B'$:
\[
\ensuremath{\mathcal{O}}\left(|A'| \cdot k \cdot 2^{d/k} + |B'| \cdot k \cdot 2^{d/k} \cdot \frac{d}{k}\right) = \ensuremath{\mathcal{O}}\left(|A| \cdot 2^{c \log |A|/k} \cdot c \log |A|\right) = \ensuremath{\mathcal{O}}\left( |A|^{1 + c/k} \cdot c \log |A|\right),
\]
which simplifies to $|A|^{1+o(1)}$.
Thus, if we can solve $\textsc{OneSidedSparseOV}\xspace(k)$ in time $\ensuremath{\mathcal{O}}(|A'|^{1+\alpha-\varepsilon})$ and add the running time of the reduction, then we can solve unbalanced OV in time
\[
\ensuremath{\mathcal{O}}(|A'|^{1+\alpha-\varepsilon}) + |A|^{1+o(1)} = \ensuremath{\mathcal{O}}(|A|^{1+\alpha-\varepsilon}),
\]
which would refute OVH.
\end{proof}
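The block encoding used in the proof is easy to make concrete. The following Python sketch (function names are ours; for simplicity it assumes that $k$ divides $d$) computes $a'$ and $b'$ and preserves orthogonality exactly as argued:

```python
from itertools import product

def encode_a(a, k):
    # One block of 2^(d/k) positions per part a_i; mark the position whose
    # index is a_i read as a binary number. Exactly k ones in the output.
    d = len(a); blk = d // k
    out = [0] * (k * 2 ** blk)
    for i in range(k):
        val = int("".join(map(str, a[i * blk:(i + 1) * blk])), 2)
        out[i * 2 ** blk + val] = 1
    return out

def encode_b(b, k):
    # Mark position (i, beta) iff <b_i, beta> > 0, so that a' and b' share
    # a 1-entry exactly when some parts a_i, b_i are non-orthogonal.
    d = len(b); blk = d // k
    out = [0] * (k * 2 ** blk)
    for i in range(k):
        bi = b[i * blk:(i + 1) * blk]
        for val, beta in enumerate(product((0, 1), repeat=blk)):
            if sum(x * y for x, y in zip(bi, beta)) > 0:
                out[i * 2 ** blk + val] = 1
    return out
```

On all $0/1$ vectors, $\langle a', b' \rangle = 0$ holds if and only if $\langle a, b \rangle = 0$, and each $a'$ has exactly $k$ non-zero entries.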
Using this insight, we now proceed to proving hardness results for different approximation ratios for ANN under the continuous Fréchet distance.
\subsection{\boldmath Hardness of $(2-\varepsilon)$-Approximation in 1D}
In this section we present our first hardness result. We note that the gadgets that we use to encode our vectors are inspired by \cite{DP20}.
\begin{theorem}\label{thm:2minusepshard1d}
Assume OVH holds true. For any $\varepsilon,\varepsilon'>0$ there is a $c > 0$, such that there is no $(2-\varepsilon)$-ANN for the continuous Fréchet distance supporting query curves of any complexity $k \in \omega(1) \cap o(\log n)$ and storing $n$ one-dimensional curves of complexity $m = k \cdot n^{c/k}$ with preprocessing time $\ensuremath{\text{poly}}(n)$ and query time $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$.
\end{theorem}
\begin{proof}
We show the hardness by a reduction from $\textsc{OneSidedSparseOV}\xspace(k)$. To that end, let $A, B \subset \{0,1\}^d$ be a $\textsc{OneSidedSparseOV}\xspace(k)$ instance with $|B| = |A|^\alpha$ for a constant $\alpha \leq 1$ that we specify later, $k \in \omega(1) \cap o(\log |A|)$, and $d = k \cdot |A|^{c/k}$ with a constant $c > 0$ that we later choose sufficiently large.
Recall that, by Lemma \ref{lem:ossov}, there exists a $c > 0$ such that $\textsc{OneSidedSparseOV}\xspace(k)$ is OV-hard in this regime.
The goal is to use the $k$-sparsity of the vectors in $A$ to obtain short query curves of length $\ensuremath{\mathcal{O}}(k)$.
Let us first give the reduction. To that end, we define the following subcurves:
\[
0_A \coloneqq \seqtocurve{0,6}, \quad
1_A \coloneqq \seqtocurve{0,6,2,6}, \quad
0_B \coloneqq \seqtocurve{0,5,3,6}, \quad
1_B \coloneqq \seqtocurve{0,6}
\]
Now, given a $\textsc{OneSidedSparseOV}\xspace(k)$ instance $A,B$, we create the input set $\mathcal{P}$ and the query set $\mathcal{Q}$ of a $(2-\varepsilon)$-ANN instance with distance threshold $\delta = 1$ as follows. For each vector $a \in A$, we add the curve $Q_a$ to $\mathcal{Q}$ which is defined as
\[
Q_a \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_a^i \quad \text{ with } \quad V_a^i \coloneqq a[i]_A + 6i,
\]
where $a[i]_A$ is either $0_A$ or $1_A$, depending on the value of $a[i]$, and the \enquote{$+6i$} is a translation of each point of the curve by $6i$. For each vector $b \in B$, we add the curve $P_b$ to $\mathcal{P}$ which is defined as
\[
P_b \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_b^i \quad \text{ with } \quad V_b^i \coloneqq b[i]_B + 6i.
\]
It is crucial that we make the resulting curves non-degenerate by removing all degenerate vertices. In particular, all connecting vertices between gadget curves will be removed and any sequence of consecutive gadgets $0_A$ will be turned into a single line segment. Thus, the curves in $\mathcal{Q}$ will have complexity $\ensuremath{\mathcal{O}}(k)$. See Figure \ref{fig:2minuseps_lb_1d} for an example of the construction.
\begin{figure}
\centering
\includegraphics[scale=1.1]{figures/2_lb_1d.pdf}
\caption{Visualization of the $2-\varepsilon$ lower bound in 1D.}
\label{fig:2minuseps_lb_1d}
\end{figure}
We now show correctness of the reduction. Let $P_b \in \mathcal{P}$ and $Q_a \in \mathcal{Q}$ be any curves in these sets. Note that if $\df(P_b, Q_a) < 2$ and the traversal has advanced a distance of $2$ into the gadget $V_a^i$, then it also has to be in the gadget $V_b^i$, as there is no other gadget within distance less than $2$. The same statement holds with the roles of $V_a^i$ and $V_b^i$ exchanged. Thus, we traverse the gadgets synchronously. Now consider the case $\langle a,b \rangle = 0$. As $\df(0_A, 0_B), \df(0_A, 1_B), \df(1_A, 0_B) \leq 1$, also $\df(P_b, Q_a) \leq 1$, as there is no $i \in \{0,\dots,d-1\}$ for which the gadget $V_a^i$ is of type $1_A$ and $V_b^i$ is of type $1_B$. Conversely, consider the case $\langle a,b \rangle > 0$. Then there exists an $i \in \{0,\dots,d-1\}$ such that $V_a^i$ is of type $1_A$ and $V_b^i$ is of type $1_B$. As we traverse the gadgets synchronously and as $\df(V_b^i, V_a^i) = 2$, we have $\df(P_b, Q_a) \geq 2$. Thus, if we have a $(2-\varepsilon)$-ANN, then we can use it to check whether there exist orthogonal vectors $a \in A$ and $b \in B$ via the above reduction.
It remains to show that this reduction implies the claimed lower bound. The time to compute the reduction is linear in the output size and thus negligible.
Recall that $\mathcal{P}$ is the input set, i.e., it is the set that we preprocess, and we run a query for each curve in $\mathcal{Q}$.
Note that by the construction of the above reduction we have $|\mathcal{P}| = |A|^\alpha$, and $|\mathcal{Q}| = |A|$.
Towards a contradiction, assume that we can solve $(2-\varepsilon)$-ANN with preprocessing time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{\alpha'})$ for some $\alpha' > 0$ and query time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{1-\varepsilon'})$ for some $\varepsilon' > 0$.
Choosing $\alpha = 1/\alpha'$, we obtain preprocessing time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{\alpha'}) = \ensuremath{\mathcal{O}}(|A|^{\alpha \alpha'}) = \ensuremath{\mathcal{O}}(|A|)$ and total query time
\[
\ensuremath{\mathcal{O}}(|\mathcal{Q}| \cdot |\mathcal{P}|^{1-\varepsilon'}) = \ensuremath{\mathcal{O}}(|A| \cdot |A|^{\alpha (1-\varepsilon')}) = \ensuremath{\mathcal{O}}(|A|^{1 + \alpha - \varepsilon' \alpha}).
\]
Thus, we could solve $\textsc{OneSidedSparseOV}\xspace(k)$ in time $\ensuremath{\mathcal{O}}(|A|^{1 + \alpha - \varepsilon' \alpha})$. However, by Lemma \ref{lem:ossov}, there exists a $c > 0$ such that this contradicts OVH.
\end{proof}
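To make the degeneracy removal concrete, the following Python sketch (ours, not part of the paper) builds $Q_a$ from the gadgets above and keeps only the vertices where the direction of travel reverses; the resulting complexity is linear in the number of 1-entries of $a$:

```python
def simplify(pts):
    # Keep only vertices where the direction of travel reverses; in 1D all
    # other inner vertices are degenerate and the curve is unchanged.
    out = [pts[0]]
    for prev, cur, nxt in zip(pts, pts[1:], pts[2:]):
        if (cur - prev) * (nxt - cur) < 0:
            out.append(cur)
    out.append(pts[-1])
    return out

def build_query_curve(a):
    # Concatenate the translated gadgets 0_A = <0,6>, 1_A = <0,6,2,6>;
    # consecutive gadgets share the endpoint 6i, so only new vertices are added.
    pts = [0.0]
    for i, bit in enumerate(a):
        if bit:
            pts += [6 * i + 6, 6 * i + 2, 6 * i + 6]   # 1_A + 6i
        else:
            pts += [6 * i + 6]                         # 0_A + 6i
    return simplify(pts)
```

For an all-zeros vector the whole curve collapses to a single segment, and each 1-entry contributes only a constant number of surviving vertices.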
\subsection{\boldmath Hardness of $(3-\varepsilon)$-Approximation in 1D}
We now show the first of two hardness results that rule out certain preprocessing and query times for $(3-\varepsilon)$-approximations. Note that ruling out higher approximation ratios is not possible using gadgets that encode individual coordinates, as the distance between the gadgets that encode 1-entries can be at most 3 times the threshold distance due to the triangle inequality via the other gadgets; for details see \cite{BuchinOS19}.
For one-dimensional curves we obtain the following lower bound.
We note that the gadgets that we use to encode our vectors are inspired by \cite{BuchinOS19}.
\begin{theorem}\label{thm:3minuseps_lb_1d}
Assume OVH holds true. For any $\varepsilon,\varepsilon'>0$ there is a $c > 0$, such that there is no $(3-\varepsilon)$-ANN for the continuous Fréchet distance storing $n$ one-dimensional curves of complexity $m$ and supporting query curves of complexity $k$ with $m = k = c \log n$ such that we have preprocessing time $\ensuremath{\text{poly}}(n)$ and query time $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$.
\end{theorem}
\begin{proof}
We show the hardness by a reduction from OV. To that end, let $A, B \subset \{0,1\}^d$ be an OV instance with $|B| = |A|^\alpha$ for a constant $\alpha \leq 1$ that we specify later and $d = c \log |A|$ for a constant $c > 0$ that we later choose sufficiently large.
Recall that, by Lemma \ref{lem:unbalancedOVH}, there exists a $c > 0$ such that this problem is OV-hard.
We now create the input set $\mathcal{P}$ and query set $\mathcal{Q}$ of a $(3-\varepsilon)$-ANN instance with distance threshold $\delta = 1$ as follows.
For convenience, we define the curves
\[
1_A \coloneqq \seqtocurve{0,6,0},\quad 0_B \coloneqq \seqtocurve{0,7,0},\quad 0_A \coloneqq \seqtocurve{0,8,0},\quad 1_B \coloneqq \seqtocurve{0,9,0}.
\]
First, for each vector $a \in A$ we create a new curve $Q_a \in \mathcal{Q}$ defined as
\[
Q_a \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_a^i \quad \text{ with } \quad V_a^i \coloneqq a[i]_A,
\]
where $a[i]_A$ is either $1_A$ or $0_A$.
Second, for each vector $b \in B$ we create a new curve $P_b \in \mathcal{P}$ defined as
\[
P_b \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_b^i \quad \text{ with } \quad V_b^i \coloneqq b[i]_B.
\]
See Figure \ref{fig:3minuseps_lb_1d} for examples of these curves.
\begin{figure}
\centering
\includegraphics[scale=1.1]{figures/3_lb_1d.pdf}
\caption{Visualization of the $3-\varepsilon$ lower bound in 1D.}
\label{fig:3minuseps_lb_1d}
\end{figure}
We now prove the correctness of the reduction. Consider any $Q_a \in \mathcal{Q}$ and $P_b \in \mathcal{P}$. We first show that if $\df(Q_a, P_b) < 3$, then any traversal realizing this distance has to visit vertices of both curves synchronously. More precisely, a traversal can be in the gadgets $V_a^i$ and $V_b^j$ with $i \neq j$ only if the positions on both curves are strictly less than $6$ in image space. Towards a contradiction, consider the first point in the traversal where this occurs and without loss of generality let the traversal be at position $6$ in $Q_a$. As the traversal on $Q_a$ visited $0$ before, the traversal on $P_b$ has to be below $3$, and thus the positions on $Q_a$ and $P_b$ are at distance more than $3$, which is a contradiction. Thus, whenever the traversal is at position $6$ or above in gadgets $V_a^i$ and $V_b^j$, we have $i = j$.
We now proceed with showing that for all $a \in A$ and $b \in B$ it holds that $\df(Q_a,P_b) \leq 1$ if and only if $\langle a, b \rangle = 0$, and $\df(Q_a,P_b) \geq 3$ otherwise. Assume that $\langle a,b \rangle = 0$, then, by traversing all $V_a^i, V_b^i$ for $i \in \{0,\dots,d-1\}$ synchronously, they can always stay within distance at most 1, as $\df(0_A, 0_B) = \df(0_A, 1_B) = \df(1_A, 0_B) = 1$. However, if $\langle a,b \rangle > 0$, then there exists an index $i \in \{0,\dots,d-1\}$ such that $a[i] = b[i] = 1$. If $\df(Q_a,P_b) < 3$, then we have to traverse these $V_a^i$ and $V_b^i$ synchronously but as $\df(1_A, 1_B) = 3$, there is a point in the traversal where the curves have distance at least $3$ and thus $\df(Q_a,P_b) \geq 3$. It follows that, if we have a $(3-\varepsilon)$-ANN, then we can determine whether there exist orthogonal vectors in $A$ and $B$ by querying each $Q \in \mathcal{Q}$.
Let us now show that this implies the desired lower bounds. The time to compute the reduction is linear in the output size and thus negligible.
Note that by construction we have $m = k = \ensuremath{\mathcal{O}}(c \log |A|) \le c' \log |A|$ for some constant $c' > 0$. By adding dummy vertices, say many points close to the starting point, we can ensure $m = k = c' \log |A|$ (we could also achieve any intended value $m \ge k$, but this is not necessary for the theorem statement). Moreover, $|\mathcal{P}| = |A|^\alpha$ and $|\mathcal{Q}| = |A|$.
Towards a contradiction, assume that we can solve $(3-\varepsilon)$-ANN with preprocessing time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{\alpha'})$ for some $\alpha' > 0$ and query time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{1-\varepsilon'})$ for some $\varepsilon' > 0$.
Choosing $\alpha = 1/\alpha'$, we obtain preprocessing time $\ensuremath{\mathcal{O}}(|\mathcal{P}|^{\alpha'}) = \ensuremath{\mathcal{O}}(|A|^{\alpha \alpha'}) = \ensuremath{\mathcal{O}}(|A|)$ and total query time
\[
\ensuremath{\mathcal{O}}(|\mathcal{Q}| \cdot |\mathcal{P}|^{1-\varepsilon'}) = \ensuremath{\mathcal{O}}(|A| \cdot |A|^{\alpha (1-\varepsilon')}) = \ensuremath{\mathcal{O}}(|A|^{1 + \alpha - \varepsilon' \alpha}).
\]
Thus, we could solve unbalanced OV in time $\ensuremath{\mathcal{O}}(|A|^{1 + \alpha - \varepsilon' \alpha})$. However, by Lemma \ref{lem:unbalancedOVH}, there exists a $c > 0$ such that this contradicts OVH.
\end{proof}
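The claimed gadget distances can be checked mechanically. The following Python sketch (ours) implements the standard free-space decision procedure of Alt and Godau for the continuous Fréchet distance and tests the construction on small vectors; we query at thresholds slightly off the critical values $1$ and $3$ to sidestep floating-point boundary cases:

```python
import math

def _free(p, a, b, delta):
    # Parameter interval {t in [0,1] : |a + t*(b-a) - p| <= delta}, or None.
    d = [y - x for x, y in zip(a, b)]
    w = [x - y for x, y in zip(a, p)]
    A = sum(x * x for x in d)
    B = 2 * sum(x * y for x, y in zip(d, w))
    C = sum(x * x for x in w) - delta * delta
    if A == 0:                                  # degenerate edge
        return (0.0, 1.0) if C <= 0 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None
    r = math.sqrt(disc)
    lo = max((-B - r) / (2 * A), 0.0)
    hi = min((-B + r) / (2 * A), 1.0)
    return (lo, hi) if lo <= hi else None

def frechet_at_most(P, Q, delta):
    # Free-space reachability (Alt-Godau): True iff d_F(P, Q) <= delta.
    n, m = len(P) - 1, len(Q) - 1
    if math.dist(P[0], Q[0]) > delta or math.dist(P[n], Q[m]) > delta:
        return False
    LF = [[_free(P[i], Q[j], Q[j + 1], delta) for j in range(m)] for i in range(n + 1)]
    BF = [[_free(Q[j], P[i], P[i + 1], delta) for i in range(n)] for j in range(m + 1)]
    LR = [[None] * m for _ in range(n + 1)]     # reachable vertical boundaries
    BR = [[None] * n for _ in range(m + 1)]     # reachable horizontal boundaries
    for j in range(m):                          # left edge of the diagram
        if LF[0][j] is None:
            break
        LR[0][j] = LF[0][j]
        if LF[0][j][1] < 1.0:
            break
    for i in range(n):                          # bottom edge of the diagram
        if BF[0][i] is None:
            break
        BR[0][i] = BF[0][i]
        if BF[0][i][1] < 1.0:
            break
    for i in range(n):
        for j in range(m):
            left, bottom = LR[i][j], BR[j][i]
            f, g = LF[i + 1][j], BF[j + 1][i]
            if f is not None:                   # right boundary of cell (i, j)
                if bottom is not None:
                    LR[i + 1][j] = f
                elif left is not None and left[0] <= f[1]:
                    LR[i + 1][j] = (max(f[0], left[0]), f[1])
            if g is not None:                   # top boundary of cell (i, j)
                if left is not None:
                    BR[j + 1][i] = g
                elif bottom is not None and bottom[0] <= g[1]:
                    BR[j + 1][i] = (max(g[0], bottom[0]), g[1])
    end_v, end_h = LR[n][m - 1], BR[m][n - 1]
    return (end_v is not None and end_v[1] >= 1.0) or \
           (end_h is not None and end_h[1] >= 1.0)

PEAK_A = {1: 6.0, 0: 8.0}     # 1_A = <0,6,0>, 0_A = <0,8,0>
PEAK_B = {0: 7.0, 1: 9.0}     # 0_B = <0,7,0>, 1_B = <0,9,0>

def encode(vec, peaks):
    # Concatenate the per-coordinate gadgets; the shared 0-endpoints merge.
    pts = [(0.0,)]
    for x in vec:
        pts += [(peaks[x],), (0.0,)]
    return pts
```

Orthogonal pairs pass the threshold-$1$ test and non-orthogonal pairs fail the threshold-just-below-$3$ test, matching the case analysis in the proof.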
\subsection{\boldmath Hardness of $(3-\varepsilon)$-Approximation in 2D}
While until here we only considered algorithmic and hardness results for one-dimensional curves, we now show a hardness result for \emph{two-dimensional} curves. This is the only technical section in this paper where we consider two-dimensional curves. Note that in Section \ref{section:prelims} we defined most of our notation for curves in $\mathbb{R}^d$ and thus the notation of the previous hardness results carries over. For two-dimensional curves we obtain the following lower bound.
\begin{theorem}\label{thm:3minuseps_lb_2d}
Assume OVH holds true. For any $\varepsilon,\varepsilon'>0$ there is a $c > 0$, such that there is no $(3-\varepsilon)$-ANN for the continuous Fréchet distance supporting query curves of any complexity $k \in \omega(1) \cap o(\log n)$ and storing $n$ two-dimensional curves of complexity $m = k \cdot n^{c/k}$ with preprocessing time $\ensuremath{\text{poly}}(n)$ and query time $\ensuremath{\mathcal{O}}(n^{1-\varepsilon'})$.
\end{theorem}
\begin{proof}
This proof is very similar to the proof of Theorem \ref{thm:2minusepshard1d}. The significant difference is the gadgets that we construct. To this end, consider a $\textsc{OneSidedSparseOV}\xspace(k)$ instance $A,B$, where we again use the $k$-sparsity of the vectors in $A$ to obtain short query curves of length $\ensuremath{\mathcal{O}}(k)$.
We define the generic subcurve
\[
V(y) \coloneqq \seqtocurve{(0,0), (3,0), (3,y), (6,y), (6,0)}
\]
to then define the usual gadgets
\[
0_A \coloneqq V(0),\quad
1_A \coloneqq V(2),\quad
0_B \coloneqq V(1),\quad
1_B \coloneqq V(-1).
\]
Now, given a $\textsc{OneSidedSparseOV}\xspace(k)$ instance $A,B$, we create the input set $\mathcal{P}$ and query set $\mathcal{Q}$ of a $(3-\varepsilon)$-ANN with distance threshold $\delta = 1$ as follows. For each vector $a \in A$, we add the curve $Q_a$ to $\mathcal{Q}$ which is defined as
\[
Q_a \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_a^i \quad \text{ with } \quad V_a^i \coloneqq a[i]_A + (6i,0),
\]
where $a[i]_A$ is either $0_A$ or $1_A$, depending on the value of $a[i]$, and the \enquote{$+(6i,0)$} is a translation of each point of the curve by $(6i,0)$. For each vector $b \in B$, we add the curve $P_b$ to $\mathcal{P}$ which is defined as
\[
P_b \coloneqq \mathop{\bigcirc}_{i=0}^{d-1} \; V_b^i \quad \text{ with } \quad V_b^i \coloneqq b[i]_B + (6i,0).
\]
It is crucial that we make the resulting curves non-degenerate by removing all degenerate vertices.
In particular, any sequence of consecutive gadgets $0_A$ will be turned into a single line segment. Thus, the curves in $\mathcal{Q}$ will have complexity $\ensuremath{\mathcal{O}}(k)$. See Figure \ref{fig:3minuseps_lb_2d} for an example of the construction.
\begin{figure}
\centering
\includegraphics[scale=1.1]{figures/3_lb_2d.pdf}
\caption{Visualization of the $3-\varepsilon$ lower bound in 2D.}
\label{fig:3minuseps_lb_2d}
\end{figure}
We now prove correctness of the reduction. Consider the case of two non-orthogonal vectors $a \in A$ and $b \in B$ such that there is an $i \in \{0,\dots,d-1\}$ with $a[i] = b[i] = 1$. Note that for $\df(Q_a, P_b) < 3$, there has to be a point in the traversal where we are in some point $(x_a, 2)$ in $Q_a$ and in some point $(x_b, -1)$ in $P_b$ as otherwise the distance of the $x$-coordinate would be at least 3. However, the $y$-distance of these points is 3 and thus $\df(Q_a, P_b) \geq 3$. On the other hand, if $a \in A$ and $b \in B$ are orthogonal, then we can traverse the two curves with the same speed in $x$-direction --- i.e., staying at the same $x$-coordinate at every point in time --- and obtain a Fréchet distance at most 1 as $\df(0_A, 0_B) = \df(0_A, 1_B) = \df(1_A, 0_B) = 1$, where the described traversal realizes these distances.
The remainder of the proof, i.e., the derivation of the claimed lower bound, is the same as in the proof of Theorem \ref{thm:2minusepshard1d} and we thus omit it for brevity.
\end{proof}
\section{Conclusions and Open Problems}\label{sec:discussion}
In this work we largely resolve the $\alpha$-ANN problem under the continuous Fréchet distance for one-dimensional curves from a fine-grained perspective for $1 < \alpha < 3$.
We show that, in general, most of the running times presented in this work cannot be improved significantly, however, other tradeoffs between preprocessing time and query time are still possible, and other parameter regimes might be shown hard or more tractable, e.g., for $k \in \ensuremath{\mathcal{O}}(1)$. Indeed, there is a line of work on related data structure problems using the continuous Fr\'echet distance for the specific value of $k=2$, which corresponds to queries with line segments, see~\cite{BergCG13, DBLP:journals/corr/abs-2102-05844}.
It also remains a fundamental problem to show fine-grained lower bounds for approximation factors larger than 3 for a metric problem, which seems to require fundamentally different techniques, cf.~\cite{Rubinstein18}.
As for the continuous Fr\'echet distance, our new upper and lower bounds show that the case of one-dimensional curves provides a kaleidoscopic view into the computational complexity and the underlying challenges posed by the general problem for polygonal curves in $\mathbb{R}^d$.
The obvious way forward in this line of research is to show upper and lower bounds for dimension~2 and higher. Some of our ideas might translate directly, such as the idea to generate candidate curves at query time in order to achieve a tradeoff between preprocessing and query time.
While our lower bounds also hold in higher dimension, it is conceivable that higher lower bounds can be shown already in the plane.
In fact, we already initiate this line of work by showing an equally high lower bound for $(3-\varepsilon)$-ANN in the plane as we have for $(2-\varepsilon)$-ANN for one-dimensional curves.
This lower bound already hints at techniques that can potentially achieve a matching upper bound. We leave this as an open problem.
Our notions of straightenings and signatures, which capture the approximate shape of one-dimensional curves in a best-possible way, currently do not exist in dimension 2 or higher. Extending these notions to the plane by itself would be very interesting.
\typeout{}
\bibliographystyle{alpha}
\section{Introduction}
In this paper, we focus our attention on a family of linear singularly perturbed equations that involve linear fractional
transforms and partial derivatives, of the form
\begin{equation}
\mathcal{P}(t,z,\epsilon,\{m_{k,t,\epsilon}\}_{k \in I},\partial_{t},\partial_{z})y(t,z,\epsilon)=0 \label{SPEm_intro}
\end{equation}
where $\mathcal{P}(t,z,\epsilon,\{U_{k}\}_{k \in I},V_{1},V_{2})$ is a polynomial in $V_{1},V_{2}$, linear in
$U_{k}$, with holomorphic coefficients relying on $t,z,\epsilon$ in the vicinity of the origin in
$\mathbb{C}^{2}$, where $m_{k,t,\epsilon}$ stands for the Moebius operator acting on the time variable
$m_{k,t,\epsilon}y(t,z,\epsilon) = y(\frac{t}{1 + k \epsilon t},z,\epsilon)$ for $k$ belonging to some finite
subset $I$ of $\mathbb{N}$.
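Two elementary properties of these Moebius operators can be checked by direct computation and are useful to keep in mind: they compose additively in the parameter $k$, and the change of variable $t=1/s$ converts them into translations in $s$ (the shift operators $T_{k,\epsilon}$ recalled below).

```latex
% Composition: for all k,k' \in \mathbb{N},
m_{k',t,\epsilon}\big(m_{k,t,\epsilon}y\big)(t,z,\epsilon)
  = y\Big(\frac{t/(1+k'\epsilon t)}{1+k\epsilon\, t/(1+k'\epsilon t)},z,\epsilon\Big)
  = y\Big(\frac{t}{1+(k+k')\epsilon t},z,\epsilon\Big)
  = m_{k+k',t,\epsilon}\,y(t,z,\epsilon).
% Change of variable: with t = 1/s,
\frac{t}{1+k\epsilon t} = \frac{1/s}{1+k\epsilon/s} = \frac{1}{s+k\epsilon},
% so that, for Y(s,z,\epsilon) := y(1/s,z,\epsilon), the operator m_{k,t,\epsilon}
% corresponds to the shift T_{k,\epsilon}Y(s,z,\epsilon) = Y(s+k\epsilon,z,\epsilon).
```

The second identity is the reason why the present problem is the counterpart, in the variable $t$, of the small shift problems studied in \cite{ma}.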
More precisely, we assume that the operator $\mathcal{P}$ can be factorized in the following manner
$\mathcal{P} = \mathcal{P}_{1}\mathcal{P}_{2}$ where $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ are linear operators with the
specific shapes
\begin{multline*}
\mathcal{P}_{1}(t,z,\epsilon,\{ m_{k,t,\epsilon} \}_{k \in I},\partial_{t},\partial_{z}) =
P(\epsilon t^{2}\partial_{t})\partial_{z}^{S}
- \sum_{\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}}
c_{\underline{k}}(z,\epsilon) m_{k_{2},t,\epsilon} (t^{2}\partial_{t})^{k_0} \partial_{z}^{k_1},\\
\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z}) =
P_{\mathcal{B}}(\epsilon t^{2}\partial_{t}) \partial_{z}^{S_{\mathcal{B}}} -
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}} d_{\underline{l}}(z,\epsilon)
t^{l_0}\partial_{t}^{l_1}\partial_{z}^{l_{2}}.
\end{multline*}
Here, $\mathcal{A}$ and $\mathcal{B}$ are finite subsets of $\mathbb{N}^{3}$ and $S,S_{\mathcal{B}} \geq 1$ are integers
that are subject to the constraints (\ref{cond_SPCP_first}) and (\ref{SB_underline_l_constraints})
with (\ref{d_l01_defin}). Moreover, $P(X)$ and $P_{\mathcal{B}}(X)$ represent polynomials with complex coefficients that are not identically
vanishing and satisfy the property that their roots belong to the open right half-plane
$\mathbb{C}_{+} = \{ z \in \mathbb{C} / \mathrm{Re}(z) > 0 \}$ and avoid a finite set of suitable unbounded sectors
$S_{d_p} \subset \mathbb{C}_{+}$, $0 \leq p \leq \iota-1$, centered at 0 with bisecting directions $d_{p} \in \mathbb{R}$.
The coefficients $c_{\underline{k}}(z,\epsilon)$ and $d_{\underline{l}}(z,\epsilon)$ for
$\underline{k} \in \mathcal{A}$, $\underline{l} \in \mathcal{B}$ define holomorphic functions on some polydisc centered
at the origin in $\mathbb{C}^{2}$. We consider the equation (\ref{SPEm_intro}) together with a set of initial
Cauchy data
\begin{equation}
(\partial_{z}^{j}y)(t,0,\epsilon) = \begin{cases}
\psi_{j,k}(t,\epsilon) & \text{if } k \in \llbracket -n,n \rrbracket\\
\psi_{j,d_{p}}(t,\epsilon) & \text{if } 0 \leq p \leq \iota-1
\end{cases} \label{SPEm_ic_1}
\end{equation}
for $0 \leq j \leq S_{\mathcal{B}}-1$ and
\begin{equation}
(\partial_{z}^{h}\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z})y)(t,0,\epsilon) =
\begin{cases}
\varphi_{h,k}(t,\epsilon) & \text{if } k \in \llbracket -n,n \rrbracket\\
\varphi_{h,d_{p}}(t,\epsilon) & \text{if } 0 \leq p \leq \iota-1 \label{SPEm_ic_2}
\end{cases}
\end{equation}
for $0 \leq h \leq S-1$ and some integer $n \geq 1$. We write $\llbracket -n,n \rrbracket$ for the set of integers $m$ such that $-n\le m\le n$. For $0 \leq j \leq S_{\mathcal{B}}-1$, $0 \leq h \leq S-1$,
the functions $\psi_{j,k}(t,\epsilon)$ and $\varphi_{h,k}(t,\epsilon)$
(resp. $\psi_{j,d_{p}}(t,\epsilon)$ and $\varphi_{h,d_{p}}(t,\epsilon)$) are holomorphic
on products $\mathcal{T} \times \mathcal{E}_{HJ_n}^{k}$ for $k \in \llbracket -n,n \rrbracket$
(resp. on $\mathcal{T} \times \mathcal{E}_{S_{d_p}}$ for $0 \leq p \leq \iota-1$), where $\mathcal{T}$ is a fixed open
bounded sector centered at 0 with bisecting direction $d=0$ and
$\underline{\mathcal{E}} = \{ \mathcal{E}_{HJ_n}^{k} \}_{k \in \llbracket -n,n \rrbracket}
\cup \{ \mathcal{E}_{S_{d_p}} \}_{0 \leq p \leq \iota-1}$ represents a collection of open bounded sectors centered
at 0 whose union forms a covering of $\mathcal{U} \setminus \{ 0 \}$, where $\mathcal{U}$ stands for some
neighborhood of 0 in $\mathbb{C}$ (the complete list of constraints attached to $\underline{\mathcal{E}}$ is provided
at the beginning of Subsection 3.3).
This work is a continuation of a study carried out in the paper \cite{ma} dealing with small step size difference-differential
Cauchy problems of the form
\begin{equation}
\epsilon \partial_{s} \partial_{z}^{S}X_{i}(s,z,\epsilon) =
\mathcal{Q}(s,z,\epsilon,\{ T_{k,\epsilon} \}_{k \in J}, \partial_{s},\partial_{z})X_{i}(s,z,\epsilon)
+ P(z,\epsilon,X_{i}(s,z,\epsilon)) \label{SPE_shift_intro}
\end{equation}
for given initial Cauchy conditions $(\partial_{z}^{j}X_{i})(s,0,\epsilon) = x_{j,i}(s,\epsilon)$, for
$0 \leq i \leq \nu - 1$, $0 \leq j \leq S-1$, where $\nu,S \geq 2$ are integers, $\mathcal{Q}$ is some differential
operator which is polynomial in time $s$, holomorphic near the origin in $z,\epsilon$, that includes shift operators
acting on time, $T_{k,\epsilon}X_{i}(s,z,\epsilon) = X_{i}(s + k \epsilon,z,\epsilon)$ for $k \in J$ that represents
a finite subset of $\mathbb{N}$ and $P$ is some polynomial. Indeed, by performing the change of variable $t=1/s$,
the equation (\ref{SPEm_intro}) is mapped into a singularly perturbed linear PDE combined with small shifts $T_{k,\epsilon}$,
$k \in I$. The initial data $x_{j,i}(s,\epsilon)$ were supposed to define holomorphic functions on products
$(\mathcal{S} \cap \{ |s| > h \} ) \times \mathcal{E}_{i} \subset \mathbb{C}^{2}$ for some $h > 0$ large enough,
where $\mathcal{S}$ is a fixed open unbounded sector centered at 0 and
$\overline{\mathcal{E}} = \{ \mathcal{E}_{i} \}_{0 \leq i \leq \nu-1}$ forms a set of sectors which covers the vicinity
of the origin. Under appropriate restrictions regarding the shape of (\ref{SPE_shift_intro}) and the inputs
$x_{j,i}(s,\epsilon)$, we have built up bounded actual holomorphic solutions written as Laplace transforms
$$ X_{i}(s,z,\epsilon) = \int_{L_{e_i}} V_{i}(\tau,z,\epsilon) \exp( - \frac{s \tau}{\epsilon} ) d\tau $$
along halflines $L_{e_i} = \mathbb{R}_{+} e^{\sqrt{-1}e_{i}}$ contained in $\mathbb{C}_{+} \cup \{ 0 \}$ and, following
an approach by G. Immink (see \cite{im1}), written as truncated Laplace transforms
$$ X_{i}(s,z,\epsilon) = \int_{0}^{\Gamma_{i}\log(\Omega_{i}s/\epsilon)} V_{i}(\tau,z,\epsilon)
\exp( - \frac{s \tau}{\epsilon} ) d\tau $$
provided that $\Gamma_{i} \in \mathbb{C}_{-} = \{ z \in \mathbb{C}/ \mathrm{Re}(z) < 0 \}$, for well chosen
$\Omega_{i} \in \mathbb{C}^{\ast}$. In general, these truncated Laplace transforms do not fulfill the equation
(\ref{SPE_shift_intro}) but they are constructed in a way that all differences $X_{i+1} - X_{i}$ define flat
functions w.r.t $s$ on the intersections $\mathcal{E}_{i+1} \cap \mathcal{E}_{i}$. We have shown the existence of a
formal power series $\hat{X}(s,z,\epsilon) = \sum_{l \geq 0} h_{l}(s,z) \epsilon^{l}$ with coefficients
$h_{l}$ determining bounded holomorphic functions on $(\mathcal{S} \cap \{ |s| > h \}) \times D(0,\delta)$ for some
$\delta>0$, which solves (\ref{SPE_shift_intro}) and represents the $1-$Gevrey asymptotic expansion of each
$X_{i}$ w.r.t $\epsilon$ on $\mathcal{E}_{i}$, $0 \leq i \leq \nu-1$ (see Definition 7). Besides, a precise
hierarchy involving actually two levels of asymptotics has been uncovered. Namely, each function $X_{i}$
can be split into a sum of a convergent series, a piece $X_{i}^{1}$ which possesses an asymptotic expansion
of Gevrey order 1 w.r.t $\epsilon$ and a part $X_{i}^{2}$ whose asymptotic expansion is of Gevrey order $1^{+}$ as
$\epsilon$ tends to 0 on $\mathcal{E}_{i}$ (see Definition 8). However, two major drawbacks of this result may be
pointed out. Namely, some members of the family $\{ X_{i} \}_{0 \leq i \leq \nu-1}$ do not define solutions of
(\ref{SPE_shift_intro}), and no uniqueness information was obtained concerning the $1^{+}-$Gevrey asymptotic expansion
(related to so-called $1^{+}-$summability as defined in \cite{im1}, \cite{im2}, \cite{im3}).
In this work, our objective is similar to the former one in \cite{ma}. Namely, we plan to construct actual holomorphic
solutions $y_{k}(t,z,\epsilon)$, $k \in \llbracket -n,n \rrbracket$ (resp. $y_{d_p}(t,z,\epsilon)$,
$0 \leq p \leq \iota-1$ ) to the problem (\ref{SPEm_intro}), (\ref{SPEm_ic_1}), (\ref{SPEm_ic_2}) on domains
$\mathcal{T} \times D(0,\delta) \times \mathcal{E}_{HJ_n}^{k}$
(resp. $\mathcal{T} \times D(0,\delta) \times \mathcal{E}_{S_{d_p}}$) for some small radius $\delta>0$ and to analyze
the nature of their asymptotic expansions as $\epsilon$ approaches 0. The main novelty is that we can now build solutions
to (\ref{SPEm_intro}), (\ref{SPEm_ic_1}), (\ref{SPEm_ic_2}) on a full covering
$\underline{\mathcal{E}}$ of a neighborhood of 0 w.r.t $\epsilon$. Besides, a structure with two levels of
Gevrey $1$ and $1^{+}$ asymptotics is also observed, and uniqueness information leading to $1^{+}-$summability
is achieved according to a refined version of the Ramis-Sibuya Theorem obtained in \cite{ma} and to the recent progress
on so-called summability for a strongly regular sequence obtained by the authors and J. Sanz in \cite{lamasa} and \cite{sa}.
The construction of the solutions $y_{k}$ and $y_{d_p}$ is divided into two main parts and can be outlined as follows.
We first set the problem
\begin{equation}
\mathcal{P}_{1}(t,z,\epsilon,\{ m_{k,t,\epsilon} \}_{k \in I},\partial_{t},\partial_{z})u(t,z,\epsilon) = 0
\label{SPEm_1_intro}
\end{equation}
for the given Cauchy inputs
\begin{equation}
(\partial_{z}^{h}u)(t,0,\epsilon) =
\begin{cases}
\varphi_{h,k}(t,\epsilon) & \text{if } k \in \llbracket -n,n \rrbracket\\
\varphi_{h,d_{p}}(t,\epsilon) & \text{if } 0 \leq p \leq \iota-1 \label{SPEm_1_ic}
\end{cases}
\end{equation}
for $0 \leq h \leq S-1$. Under the restriction (\ref{cond_SPCP_first}) and suitable control on the initial data
(displayed through (\ref{xi_larger_sigma}), (\ref{norm_w_initial_small}) and (\ref{norm_Sdp_initial_wj})), one can
build a first collection of actual solutions to (\ref{SPEm_1_intro}), (\ref{SPEm_1_ic}) as special Laplace transforms
$$ u_{k}(t,z,\epsilon) = \int_{P_k} w_{HJ_n}(u,z,\epsilon) \exp( - \frac{u}{\epsilon t} ) \frac{du}{u} $$
which are bounded holomorphic on $\mathcal{T} \times D(0,\delta) \times \mathcal{E}_{HJ_n}^{k}$, where
$w_{HJ_n}$ defines a holomorphic function on a domain
$HJ_{n} \times D(0,\delta) \times D(0,\epsilon_{0}) \setminus \{ 0 \}$ for some radii $\delta,\epsilon_{0}>0$ and
$HJ_{n}$ represents the union of two sets of consecutively overlapping horizontal strips
$$ H_{k} = \{ z \in \mathbb{C} / a_{k} \leq \mathrm{Im}(z) \leq b_{k}, \ \ \mathrm{Re}(z) \leq 0 \} \ \ , \ \
J_{k} = \{ z \in \mathbb{C} / c_{k} \leq \mathrm{Im}(z) \leq d_{k}, \ \ \mathrm{Re}(z) \leq 0 \}$$
as described at the beginning of Subsection 3.1 and $P_{k}$ is the union of a segment joining 0 and some
well chosen point $A_{k} \in H_{k}$ and the horizontal halfline $\{ A_{k} - s \ / \ s \geq 0 \}$, for
$k \in \llbracket -n,n \rrbracket$. Moreover, $w_{HJ_n}(\tau,z,\epsilon)$ has (at most) super exponential decay w.r.t $\tau$
on $H_{k}$ (see (\ref{bds_WHJn_Hk})) and (at most) super exponential growth w.r.t $\tau$ along
$J_{k}$ (see (\ref{bds_WHJn_Jk})), uniformly in $z \in D(0,\delta)$, provided that
$\epsilon \in D(0,\epsilon_{0}) \setminus \{ 0 \}$ (Theorem 1).
The idea of considering function spaces sharing both super exponential growth and decay on strips and Laplace
transforms along piecewise linear paths originates from the following example worked out by B. Braaksma, B. Faber and
G. Immink in \cite{brfaim} (see also \cite{fab}),
\begin{equation}
h(s+1) - as^{-1}h(s) = s^{-1} \label{DE_BFI}
\end{equation}
for a real number $a>0$, for which solutions are given as special Laplace transforms
$$ h_{n}(s) = \int_{C_{n}} e^{-s \tau} e^{\tau - a} e^{ae^{\tau}} d\tau $$
for each $n \in \mathbb{Z}$, where $C_{n}$ is a path connecting 0 and $+\infty + i\theta$ for some
$\theta \in ( \frac{\pi}{2} + 2n\pi, \frac{3\pi}{2} + 2n\pi)$ built up with the help of a segment and a horizontal
halfline as above for the path $P_{k}$. The function $\tau \mapsto e^{\tau-a} e^{ae^{\tau}}$ has super exponential decay
(resp. growth) on a set of strips $-H_{k}$ (resp. $-J_{k}$) as explained in the example after Definition 3.
Furthermore, the functions $h_{n}(s)$ possess an asymptotic expansion of Gevrey order 1,
$\hat{h}(s) = \sum_{l \geq 1} h_{l} s^{-l}$ that formally solves (\ref{DE_BFI}), as $s \rightarrow \infty$ on
$\mathbb{C}_{+}$.
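The specific kernel $e^{\tau-a}e^{ae^{\tau}}$ can be motivated by a formal computation (a sketch; we disregard the contribution of the far endpoint of $C_{n}$, where the factor $e^{-s\tau}$ supplies the decay). Since $w(\tau) = e^{\tau-a}e^{ae^{\tau}}$ satisfies $w(0)=1$ and $\frac{d}{d\tau}\big(e^{-\tau}w(\tau)\big) = \frac{d}{d\tau}\big(e^{-a}e^{ae^{\tau}}\big) = a\,w(\tau)$, an integration by parts yields

```latex
s\,h_{n}(s+1) = s\int_{C_{n}} e^{-s\tau}\,e^{-\tau}w(\tau)\,d\tau
  = \Big[-e^{-s\tau}\,e^{-\tau}w(\tau)\Big]_{C_{n}}
    + \int_{C_{n}} e^{-s\tau}\,\frac{d}{d\tau}\big(e^{-\tau}w(\tau)\big)\,d\tau
  = 1 + a\,h_{n}(s),
```

which is precisely the equation (\ref{DE_BFI}) multiplied by $s$.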
On the other hand, a second set of solutions to (\ref{SPEm_1_intro}), (\ref{SPEm_1_ic}) can be found as usual
Laplace transforms
$$ u_{d_p}(t,z,\epsilon) = \int_{L_{\gamma_{d_p}}} w_{d_p}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} $$
along halflines $L_{\gamma_{d_p}}=\mathbb{R}_{+}e^{\sqrt{-1}\gamma_{d_p}} \subset S_{d_p} \cup \{ 0 \}$, that define
bounded holomorphic functions on $\mathcal{T} \times D(0,\delta) \times \mathcal{E}_{S_{d_p}}$, where
$w_{d_p}(\tau,z,\epsilon)$ represents a holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta) \times
D(0,\epsilon_{0}) \setminus \{ 0 \}$ with (at most) exponential growth w.r.t $\tau$ on $S_{d_p}$, uniformly
in $z \in D(0,\delta)$, whenever $\epsilon \in D(0,\epsilon_{0}) \setminus \{ 0 \}$, $0 \leq p \leq \iota-1$ (Theorem 1).
In a second stage, we focus on both problems
\begin{equation}
\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z})y(t,z,\epsilon) = u_{k}(t,z,\epsilon) \label{SPE_2_1}
\end{equation}
with Cauchy data
\begin{equation}
(\partial_{z}^{j}y)(t,0,\epsilon) = \psi_{j,k}(t,\epsilon) \label{SPE_2_1_i_c}
\end{equation}
for $0 \leq j \leq S_{\mathcal{B}}-1$, $k \in \llbracket -n,n \rrbracket$ and
\begin{equation}
\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z})y(t,z,\epsilon) = u_{d_p}(t,z,\epsilon) \label{SPE_2_2}
\end{equation}
under the conditions
\begin{equation}
(\partial_{z}^{j}y)(t,0,\epsilon) = \psi_{j,d_{p}}(t,\epsilon) \label{SPE_2_2_i_c}
\end{equation}
for $0 \leq j \leq S_{\mathcal{B}}-1$, $0 \leq p \leq \iota-1$. We first observe that the coupling of the problems
(\ref{SPEm_1_intro}), (\ref{SPEm_1_ic}) together with (\ref{SPE_2_1}), (\ref{SPE_2_1_i_c}) and
(\ref{SPE_2_2}), (\ref{SPE_2_2_i_c}) is equivalent to our initial question of searching for solutions to
(\ref{SPEm_intro}) under the requirements (\ref{SPEm_ic_1}), (\ref{SPEm_ic_2}).
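Indeed, this equivalence is an immediate consequence of the factorization $\mathcal{P} = \mathcal{P}_{1}\mathcal{P}_{2}$: if $u$ solves (\ref{SPEm_1_intro}), (\ref{SPEm_1_ic}) and $y$ solves $\mathcal{P}_{2}y = u$ with the data (\ref{SPE_2_1_i_c}) or (\ref{SPE_2_2_i_c}), then

```latex
\mathcal{P}\,y = \mathcal{P}_{1}\big(\mathcal{P}_{2}\,y\big) = \mathcal{P}_{1}\,u = 0,
```

and the mixed Cauchy conditions (\ref{SPEm_ic_2}) for $y$ are nothing but the conditions (\ref{SPEm_1_ic}) imposed on $u = \mathcal{P}_{2}y$.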
The approach of considering equations presented in factorized form follows from a series of works by the same authors
\cite{lama1}, \cite{lama2}, \cite{lama3}. In our situation, the operator $\mathcal{P}_{1}$ can contain neither arbitrary
polynomials in $t$ nor general derivatives $\partial_{t}^{l_1}$, $l_{1} \geq 1$, since $w_{HJ_n}(\tau,z,\epsilon)$
would solve some equation of the form (\ref{1_aux_CP}) with exponential coefficients which would also contain convolution operators like those appearing in equation (\ref{ACP_forcterm_w}). But the spaces of functions with super exponential decay
are not stable under the action of these integral transforms. Those specific Banach spaces are however crucial to get
bounded (or at least with exponential growth) solutions $w_{HJ_n}(\tau,z,\epsilon)$ to (\ref{1_aux_CP})
leading to the existence of the special Laplace transforms $u_{k}(t,z,\epsilon)$ along the paths $P_{k}$. In order
to deal with more general sets of equations, we compose $\mathcal{P}_{1}$ with suitable differential operators
$\mathcal{P}_{2}$ which do not involve Moebius transforms. In this work, we have decided to focus only on linear problems.
We postpone the study of nonlinear equations for future investigation.
Assuming that the constraints (\ref{SB_underline_l_constraints}) and (\ref{d_l01_defin}) hold, under
adequate control of the Cauchy inputs (\ref{SPE_2_1_i_c}), (\ref{SPE_2_2_i_c}) (detailed in
(\ref{normRHJ_vj_Ivj}), (\ref{normSdp_vj_Ivj})), one can exhibit a first set of actual solutions to
(\ref{SPE_2_1}), (\ref{SPE_2_1_i_c}) as special Laplace transforms
$$ y_{k}(t,z,\epsilon) = \int_{P_{k}} v_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} $$
that define bounded holomorphic functions on $\mathcal{T}\times D(0,\delta) \times \mathcal{E}_{HJ_n}^{k}$
where $v_{HJ_n}(\tau,z,\epsilon)$ represents a holomorphic function on $HJ_{n} \times D(0,\delta) \times
D(0,\epsilon_{0}) \setminus \{ 0 \}$ with (at most) exponential growth w.r.t $\tau$ along $H_{k}$
(see (\ref{bds_vHJn_Hk})) and exhibiting (at most) super exponential growth w.r.t $\tau$ within $J_{k}$
(see (\ref{bds_vHJn_Jk})), uniformly in $z \in D(0,\delta)$ when $\epsilon \in D(0,\epsilon_{0}) \setminus \{ 0 \}$,
$k \in \llbracket -n,n \rrbracket$ (Theorem 2).
Furthermore, a second group of solutions to (\ref{SPE_2_2}), (\ref{SPE_2_2_i_c}) is achieved through usual Laplace
transforms
$$ y_{d_p}(t,z,\epsilon) = \int_{L_{\gamma_{d_p}}} v_{d_p}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} $$
defining holomorphic bounded functions on $\mathcal{T} \times D(0,\delta) \times \mathcal{E}_{S_{d_p}}$, where
$v_{d_p}(\tau,z,\epsilon)$ stands for a holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta)
\times D(0,\epsilon_{0}) \setminus \{ 0 \}$ with (at most) exponential growth w.r.t $\tau$ on
$S_{d_p}$, uniformly in $z \in D(0,\delta)$, for all $\epsilon \in D(0,\epsilon_{0}) \setminus \{ 0 \}$,
$0 \leq p \leq \iota-1$ (Theorem 2).
As a result, the merged families $\{ y_{k} \}_{k \in \llbracket -n,n \rrbracket}$ and
$\{ y_{d_p} \}_{0 \leq p \leq \iota - 1}$ define a set of solutions on a full covering
$\underline{\mathcal{E}}$ of some neighborhood of 0 w.r.t $\epsilon$. It remains to describe the structure of their
asymptotic expansions as $\epsilon$ tends to 0. As in our previous work, we observe that a double layer of Gevrey
asymptotics arises. Namely, each function $y_{k}(t,z,\epsilon)$, $k \in \llbracket -n,n \rrbracket$
(resp. $y_{d_p}(t,z,\epsilon)$, $0 \leq p \leq \iota-1$) can be decomposed as a sum of a convergent power series
in $\epsilon$, a piece $y_{k}^{1}(t,z,\epsilon)$ (resp. $y_{d_p}^{1}(t,z,\epsilon))$
that possesses an asymptotic expansion $\hat{y}^{1}(t,z,\epsilon) =
\sum_{l \geq 0} y_{l}^{1}(t,z) \epsilon^{l}$ of Gevrey order 1 w.r.t $\epsilon$ on $\mathcal{E}_{HJ_n}^{k}$
(resp. on $\mathcal{E}_{S_{d_p}}$) and a last tail $y_{k}^{2}(t,z,\epsilon)$ (resp. $y_{d_p}^{2}(t,z,\epsilon)$)
whose asymptotic expansion
$\hat{y}^{2}(t,z,\epsilon) = \sum_{l \geq 0} y_{l}^{2}(t,z) \epsilon^{l}$ is of Gevrey order $1^{+}$ as
$\epsilon$ becomes close to 0 on $\mathcal{E}_{HJ_n}^{k}$ (resp. on $\mathcal{E}_{S_{d_p}}$).
Furthermore, the functions $y_{\pm n}^{2}(t,z,\epsilon)$ and $y_{d_p}^{2}(t,z,\epsilon)$ are the restrictions of a
common holomorphic function $y^{2}(t,z,\epsilon)$ on $\mathcal{T} \times D(0,\delta)
\times ( \mathcal{E}_{HJ_n}^{-n} \cup \mathcal{E}_{HJ_n}^{n} \cup_{p=0}^{\iota-1} \mathcal{E}_{S_{d_p}} )$,
which is the unique function admitting $\hat{y}^{2}(t,z,\epsilon)$ as asymptotic expansion of order $1^{+}$, called the
$1^{+}-$sum in this work, and which can be reconstructed through an analog of a Borel/Laplace transform in the
framework of $\mathbb{M}-$summability for the strongly regular sequence
$\mathbb{M} = (M_{n})_{n \geq 0}$ with $M_{n} = (n/\mathrm{Log}(n+2))^{n}$ (Definition 8). On the other hand, the functions $y_{d_p}^{1}(t,z,\epsilon)$ represent $1-$sums of $\hat{y}^{1}$ w.r.t $\epsilon$ on $\mathcal{E}_{S_{d_p}}$ whenever its aperture is strictly larger than $\pi$, in the classical sense as defined in reference books such as \cite{ba1}, \cite{ba2} or \cite{co}
(Theorem 3). This information regarding Gevrey asymptotics, complemented by uniqueness features, is obtained through
a refinement of a version of the Ramis-Sibuya theorem obtained in \cite{ma} (Proposition 23) and the flatness properties
(\ref{log_flat_difference_yk_plus_1_minus_yk_HJn}), (\ref{exp_flat_difference_yk_plus_1_minus_yk_Sdp}),
(\ref{difference_y_HJn_Sd0}) and (\ref{difference_y_HJn_Sdiota}) for the differences of neighboring functions among the
two families $\{ y_{k} \}_{k \in \llbracket -n,n \rrbracket}$ and $\{ y_{d_p} \}_{0 \leq p \leq \iota-1}$.\bigskip
\noindent The paper is organized as follows.\medskip
In Section 2, we consider a first ancillary Cauchy problem with exponentially growing coefficients. We construct holomorphic
solutions belonging to the Banach space of functions with super exponential growth (resp. decay) on horizontal strips and
exponential growth on unbounded sectors. These Banach spaces and their properties under the action of linear continuous maps are
described in Subsections 2.1 and 2.2.
In Section 3, we provide solutions to the problem (\ref{SPEm_1_intro}), (\ref{SPEm_1_ic}) with the help of the problem solved in
Section 2. Namely, in Section 3.1, we construct the solutions $u_{k}(t,z,\epsilon)$ as special Laplace transforms, along piecewise
linear paths, on the sectors $\mathcal{E}_{HJ_n}^{k}$ w.r.t $\epsilon$, $k \in \llbracket -n,n \rrbracket$. In Section 3.2,
we build up the solutions $u_{d_p}(t,z,\epsilon)$ as usual Laplace transforms along halflines provided that $\epsilon$ belongs
to the sectors $\mathcal{E}_{S_{d_p}}$, $0 \leq p \leq \iota-1$. In Section 3.3, we combine both families
$\{ u_{k} \}_{k \in \llbracket -n,n \rrbracket}$ and $\{ u_{d_p} \}_{0 \leq p \leq \iota-1}$ in order to get a set of
solutions on a full covering $\underline{\mathcal{E}}$ of the origin in $\mathbb{C}^{\ast}$ and we provide bounds for the
differences of consecutive solutions (Theorem 1).
In Section 4, we concentrate on a second auxiliary convolution Cauchy problem with polynomial coefficients and forcing term that solves
the problem stated in Section 2. We establish the existence of holomorphic solutions which are part of the Banach spaces of
functions with super exponential (resp. exponential) growth on $L-$shaped domains and exponential growth on unbounded sectors.
A description of these Banach spaces and the action of integral operators on them are provided in Subsections 4.1, 4.2 and 4.3.
In Section 5, we present solutions for the problems (\ref{SPE_2_1}), (\ref{SPE_2_1_i_c}) and (\ref{SPE_2_2}), (\ref{SPE_2_2_i_c})
displayed as special and usual Laplace transforms forming a collection of functions on a full covering
$\underline{\mathcal{E}}$ of the origin in $\mathbb{C}^{\ast}$ (Theorem 2).
In Section 6, the structure of the asymptotic expansions of the solutions $u_{k}$, $y_{k}$ and
$u_{d_p}$, $y_{d_p}$ w.r.t $\epsilon$ (stated in Theorem 3) is described with the help of a version of the Ramis-Sibuya Theorem which entails
two Gevrey levels 1 and $1^{+}$ disclosed in Subsection 6.1.
\section{A first auxiliary Cauchy problem with exponential coefficients}
\subsection{Banach spaces of holomorphic functions with super-exponential decay on horizontal strips}
Let $\bar{D}(0,r)$ be the closed disc centered at 0 and with radius $r>0$ and let
$\dot{D}(0,\epsilon_{0}) = D(0,\epsilon_{0}) \setminus \{ 0 \}$ be the punctured disc centered at 0 with radius
$\epsilon_{0}>0$ in $\mathbb{C}$. We consider a
closed horizontal strip $H$ described as
\begin{equation}
H = \{ z \in \mathbb{C} / a \leq \mathrm{Im}(z) \leq b, \ \ \mathrm{Re}(z) \leq 0 \} \label{defin_strip_H}
\end{equation}
for some real numbers $a<b$. For any open set $\mathcal{D} \subset \mathbb{C}$, we denote
$\mathcal{O}(\mathcal{D})$ the vector space of holomorphic functions on $\mathcal{D}$. Let
$b>1$ be a real number. We define $\zeta(b) = \sum_{n=0}^{+\infty} 1/(n+1)^{b}$ and let $M$ be a
positive real number such that $M > \zeta(b)$. We introduce the sequences $r_{b}(\beta) =
\sum_{n = 0}^{\beta} \frac{1}{(n+1)^{b}}$ and $s_{b}(\beta) = M - r_{b}(\beta)$
for all $\beta \geq 0$.
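Both sequences are monotonic, and since $M > \zeta(b)$ they satisfy the uniform bounds (immediate from the definitions)

```latex
0 < r_{b}(\beta) \leq \zeta(b), \qquad
s_{b}(\beta) = M - r_{b}(\beta) \geq M - \zeta(b) > 0, \qquad \beta \geq 0,
```

with $r_{b}(\beta)$ nondecreasing and $s_{b}(\beta)$ nonincreasing in $\beta$; these bounds are used repeatedly in the estimates below.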
\begin{defin} Let $\underline{\sigma} = (\sigma_{1},\sigma_{2},\sigma_{3})$, where
$\sigma_{1},\sigma_{2},\sigma_{3}$ are positive real numbers, and let $\beta \geq 0$ be an
integer. Let $\epsilon \in \dot{D}(0,\epsilon_{0})$. We denote
$SED_{(\beta,\underline{\sigma},H,\epsilon)}$ the vector space of holomorphic functions $v(\tau)$ on
$\mathring{H}$ (which stands for the interior of $H$) and continuous on $H$
such that
$$ ||v(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)} = \sup_{\tau \in H} \frac{|v(\tau)|}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| + \sigma_{2} s_{b}(\beta) \exp( \sigma_{3}|\tau| ) \right) $$
is finite. Let $\delta>0$ be a real number. We define
$SED_{(\underline{\sigma},H,\epsilon,\delta)}$
to be the vector space of all formal series $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$
with coefficients $v_{\beta}(\tau) \in SED_{(\beta,\underline{\sigma},H,\epsilon)}$, for
$\beta \geq 0$ and such that
$$ ||v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} = \sum_{\beta \geq 0}
||v_{\beta}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)} \frac{\delta^{\beta}}{\beta !}$$
is finite. One can check that
$SED_{(\underline{\sigma},H,\epsilon,\delta)}$ equipped with the norm
$||.||_{(\underline{\sigma},H,\epsilon,\delta)}$ is a Banach space.
\end{defin}
In the next proposition, we show that the formal series belonging to the latter Banach spaces define
actual holomorphic functions that are convergent on a disc w.r.t $z$ and with super exponential decay on
the strip $H$ w.r.t $\tau$.
\begin{prop} Let $v(\tau,z) \in SED_{(\underline{\sigma},H,\epsilon,\delta)}$. Let
$0 < \delta_{1} < 1$. Then, there
exists a constant $C_{0}>0$ (depending on $||v||_{(\underline{\sigma},H,\epsilon,\delta)}$ and
$\delta_{1}$) such that
\begin{equation}
|v(\tau,z)| \leq C_{0} |\tau| \exp \left(\frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau|
- \sigma_{2}(M-\zeta(b)) \exp( \sigma_{3} |\tau| ) \right) \label{v_up_bds}
\end{equation}
for all $\tau \in H$, all $z \in \mathbb{C}$ with $\frac{|z|}{\delta} < \delta_{1}$.
\end{prop}
\begin{proof}
Let $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta! \in
SED_{(\underline{\sigma},H,\epsilon,\delta)}$.
By construction, there exists a constant $c_{0}>0$ (depending on
$||v||_{(\underline{\sigma},H,\epsilon,\delta)}$) with
\begin{equation}
|v_{\beta}(\tau)| \leq c_{0} |\tau| \exp( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| -
\sigma_{2} s_{b}(\beta) \exp( \sigma_{3} |\tau| ) ) \beta!
(\frac{1}{\delta})^{\beta}
\end{equation}
for all $\beta \geq 0$, all $\tau \in H$. Take $0 < \delta_{1} < 1$. Since $r_{b}(\beta) \leq \zeta(b)$ and
$s_{b}(\beta) = M - r_{b}(\beta) \geq M - \zeta(b)$ for all $\beta \geq 0$, and since
$\sum_{\beta \geq 0} \delta_{1}^{\beta} = \frac{1}{1-\delta_{1}}$, we deduce that
\begin{multline}
|v(\tau,z)| \leq c_{0} |\tau| \sum_{\beta \geq 0}
\exp(\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| - \sigma_{2} s_{b}(\beta) \exp( \sigma_{3} |\tau|) )
(\delta_{1})^{\beta}\\
\leq c_{0}|\tau| \exp( \frac{\sigma_{1}}{|\epsilon|}\zeta(b)|\tau| - \sigma_{2}(M - \zeta(b))
\exp( \sigma_{3} |\tau|) ) \frac{1}{1 - \delta_{1}}
\label{v_up_bds_in_proof}
\end{multline}
for all $z \in \mathbb{C}$ such that $\frac{|z|}{\delta} < \delta_{1} < 1$, all $\tau \in H$. Therefore
(\ref{v_up_bds}) is a consequence of (\ref{v_up_bds_in_proof}).
\end{proof}
In the next three propositions, we study the action of linear operators constructed as multiplication
by exponential and polynomial functions and by bounded holomorphic functions on the Banach spaces introduced above.
\begin{prop} Let $k_{0},k_{2} \geq 0$ and $k_{1} \geq 1$ be integers. Assume that the following condition
\begin{equation}
k_{1} \geq bk_{0} + \frac{bk_{2}}{\sigma_{3}} \label{multipl_operators_continuity_cond_SED}
\end{equation}
holds. Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the operator
$v(\tau,z) \mapsto \tau^{k_0}\exp(-k_{2} \tau) \partial_{z}^{-k_1}v(\tau,z)$ is a bounded linear
operator from
$(SED_{(\underline{\sigma},H,\epsilon,\delta)}, ||.||_{(\underline{\sigma},H,\epsilon,\delta)})$ into itself.
Moreover, there exists a constant $C_{1}>0$ (depending on $k_{0},k_{1},k_{2},\underline{\sigma},b$),
independent of $\epsilon$, such that
\begin{equation}
|| \tau^{k_0} \exp(-k_{2} \tau) \partial_{z}^{-k_{1}}v(\tau,z) ||_{(\underline{\sigma},H,\epsilon,\delta)}
\leq C_{1}|\epsilon|^{k_0} \delta^{k_1} ||v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}
\label{multipl_operators_exp_continuity_SED}
\end{equation}
for all $v \in SED_{(\underline{\sigma},H,\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} Let $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ belonging to
$SED_{(\underline{\sigma},H,\epsilon,\delta)}$. By definition,
\begin{equation}
|| \tau^{k_0} \exp(-k_{2} \tau) \partial_{z}^{-k_{1}}v(\tau,z) ||_{(\underline{\sigma},H,\epsilon,\delta)}
= \sum_{\beta \geq k_{1}}
|| \tau^{k_0} \exp(-k_{2} \tau) v_{\beta - k_{1}}(\tau) ||_{(\beta,\underline{\sigma},H,\epsilon)}
\frac{\delta^{\beta}}{\beta!}. \label{norm_exp_int_zk1_v}
\end{equation}
\begin{lemma} There exists a constant $C_{1.1}>0$ (depending on
$k_{0},k_{1},k_{2},\underline{\sigma},b$) such that
\begin{equation}
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\underline{\sigma},H,\epsilon)}
\leq C_{1.1}|\epsilon|^{k_0}(\beta + 1)^{bk_{0} + \frac{k_{2}b}{\sigma_{3}}}
||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\underline{\sigma},H,\epsilon)}
\label{bds_norm_exp_v_beta_k1}
\end{equation}
for all $\beta \geq k_{1}$.
\end{lemma}
\begin{proof} First, we perform the following factorization
\begin{multline}
|\tau^{k_0}\exp(-k_{2}\tau) v_{\beta - k_{1}}(\tau)| \frac{1}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| + \sigma_{2} s_{b}(\beta)
\exp( \sigma_{3}|\tau| ) \right)\\
= \frac{|v_{\beta - k_{1}}(\tau)|}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta-k_{1})|\tau| + \sigma_{2} s_{b}(\beta-k_{1})
\exp( \sigma_{3}|\tau| ) \right)\\
\times \left( |\tau^{k_0}\exp( -k_{2} \tau)|
\exp( - \frac{\sigma_{1}}{|\epsilon|}( r_{b}(\beta) - r_{b}(\beta - k_{1}) )|\tau| )
\exp( \sigma_{2}( s_{b}(\beta) - s_{b}(\beta - k_{1}) ) \exp(\sigma_{3}|\tau|) ) \right)
\label{factor_v_beta_k1}
\end{multline}
On the other hand, by construction, we observe that
\begin{equation}
r_{b}(\beta) - r_{b}(\beta - k_{1}) \geq \frac{k_{1}}{(\beta + 1)^{b}} \ \ , \ \
s_{b}(\beta) - s_{b}(\beta - k_{1}) \leq -\frac{k_{1}}{(\beta + 1)^{b}} \label{difference_s_b_r_b}
\end{equation}
for all $\beta \geq k_{1}$. According to (\ref{factor_v_beta_k1}) and
(\ref{difference_s_b_r_b}), we deduce that
\begin{equation}
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\underline{\sigma},H,\epsilon)}
\leq A(\beta) ||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\underline{\sigma},H,\epsilon)}
\label{norm_exp_v_beta_k1<norm_v_beta_k1}
\end{equation}
where
\begin{multline*}
A(\beta) = \sup_{ \tau \in H} |\tau|^{k_0} \exp(k_{2}|\tau|)
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} |\tau| )\\
\times
\exp( -\sigma_{2} \frac{k_{1}}{(\beta + 1)^{b}} \exp( \sigma_{3} |\tau| ) ) \leq A_{1}(\beta)A_{2}(\beta)
\end{multline*}
with
$$ A_{1}(\beta) = \sup_{x \geq 0} x^{k_0}
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} x )
$$
and
$$ A_{2}(\beta) = \sup_{x \geq 0} \exp(k_{2}x)
\exp( -\sigma_{2} \frac{k_{1}}{(\beta + 1)^{b}} \exp( \sigma_{3} x ) )
$$
for all $\beta \geq k_{1}$. In the next step, we provide estimates for $A_{1}(\beta)$. Namely, from the
classical bounds for exponential functions
\begin{equation}
\sup_{x \geq 0} x^{m_1} \exp( -m_{2} x) \leq (\frac{m_1}{m_2})^{m_1} \exp( -m_{1} ) \label{xm1_expm2x<}
\end{equation}
for any integer $m_{1} \geq 0$ and any real number $m_{2} > 0$, we get that
\begin{multline}
A_{1}(\beta) = |\epsilon|^{k_0} \sup_{x \geq 0} (\frac{x}{|\epsilon|})^{k_0}
\exp( - \frac{\sigma_{1} k_{1}}{(\beta + 1)^{b}} \frac{x}{|\epsilon|} )\\
\leq |\epsilon|^{k_0} \sup_{X \geq 0} X^{k_0} \exp( - \frac{\sigma_{1}k_{1}}{(\beta + 1)^{b}} X ) =
|\epsilon|^{k_0} (\frac{k_0}{\sigma_{1}k_{1}})^{k_0} \exp( -k_{0} ) (\beta + 1)^{bk_{0}} \label{A_1_bounds}
\end{multline}
for all $\beta \geq k_{1}$. In the last part, we focus on the sequence $A_{2}(\beta)$. First of all,
if $k_{2} = 0$, we observe that $A_{2}(\beta) \leq 1$ for all $\beta \geq k_{1}$.
Now, we assume that $k_{2} \geq 1$. Again, we need the
help of classical bounds for exponential functions
$$ \sup_{x \geq 0} cx - a\exp(bx) \leq \frac{c}{b}( \log(\frac{c}{ab}) - 1) $$
for all real numbers $a,b,c>0$ provided that $c > ab$. We deduce that
$$
A_{2}(\beta) \leq \exp \left( \frac{k_2}{\sigma_{3}} \left(
\log( \frac{k_{2}(\beta + 1)^{b}}{\sigma_{3} \sigma_{2} k_{1}} ) - 1 \right) \right) =
\exp( -\frac{k_2}{\sigma_{3}} + \frac{k_2}{\sigma_3}\log( \frac{k_2}{\sigma_{3}\sigma_{2}k_{1}} ) )
(\beta + 1)^{\frac{k_{2}b}{\sigma_{3}}}
$$
whenever $\beta \geq k_{1}$ and $(\beta + 1)^{b} > \sigma_{2}\sigma_{3}k_{1}/k_{2}$. Besides, we also get a
constant $C_{1.0}>0$ (depending on $k_{2},\sigma_{2},k_{1},b,\sigma_{3}$) such that
$$
A_{2}(\beta) \leq C_{1.0}(\beta + 1)^{\frac{k_{2}b}{\sigma_3}}
$$
for all $\beta \geq k_{1}$ with $(\beta + 1)^{b} \leq \sigma_{2}\sigma_{3}k_{1}/k_{2}$. In summary, we get
a constant $\tilde{C}_{1.0}>0$ (depending on $k_{2},\sigma_{2},k_{1},b,\sigma_{3}$) with
\begin{equation}
A_{2}(\beta) \leq \tilde{C}_{1.0}(\beta + 1)^{\frac{k_{2}b}{\sigma_3}} \label{A_2_bounds}
\end{equation}
for all $\beta \geq k_{1}$. Finally, gathering (\ref{norm_exp_v_beta_k1<norm_v_beta_k1}),
(\ref{A_1_bounds}) and (\ref{A_2_bounds}) yields (\ref{bds_norm_exp_v_beta_k1}).
\end{proof}
Bearing in mind the definition of the norm (\ref{norm_exp_int_zk1_v}) and the upper bounds
(\ref{bds_norm_exp_v_beta_k1}), we deduce that
\begin{multline}
|| \tau^{k_0} \exp(-k_{2} \tau) \partial_{z}^{-k_{1}}v(\tau,z) ||_{(\underline{\sigma},H,\epsilon,\delta)}
\leq \sum_{\beta \geq k_{1}} C_{1.1}|\epsilon|^{k_0}(1 + \beta)^{bk_{0} + \frac{bk_{2}}{\sigma_{3}}}\\
\times
\frac{(\beta - k_{1})!}{\beta !} ||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\underline{\sigma},H,\epsilon)}
\delta^{k_1} \frac{\delta^{\beta - k_{1}}}{(\beta - k_{1})!}. \label{bds_norm_exp_intz_v}
\end{multline}
In accordance with the assumption (\ref{multipl_operators_continuity_cond_SED}), we get a constant $C_{1.2}>0$
(depending on $k_{0},k_{1},k_{2},b,\sigma_{3}$) such that
\begin{equation}
(1 + \beta)^{bk_{0} + \frac{bk_{2}}{\sigma_{3}}} \frac{(\beta - k_{1})!}{\beta!} \leq C_{1.2}
\label{power_beta_factorial_C12}
\end{equation}
for all $\beta \geq k_{1}$. Lastly, combining (\ref{bds_norm_exp_intz_v}) and
(\ref{power_beta_factorial_C12}) furnishes (\ref{multipl_operators_exp_continuity_SED}).
\end{proof}
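\noindent {\bf Remark.} For the convenience of the reader, we indicate how the elementary bounds used in the proof
above can be recovered; the explicit constants below are one admissible choice. The bound (\ref{xm1_expm2x<}) follows
by observing that $f(x) = x^{m_1}\exp(-m_{2}x)$ satisfies $f'(x) = x^{m_{1}-1}\exp(-m_{2}x)(m_{1} - m_{2}x)$ and hence
reaches its maximum on $[0,+\infty)$ at $x = m_{1}/m_{2}$. Likewise, $g(x) = cx - a\exp(bx)$ satisfies
$g'(x) = c - ab\exp(bx)$, which vanishes at $x_{0} = \frac{1}{b}\log( \frac{c}{ab} )$, a positive number as long as
$c > ab$, and
$$ g(x_{0}) = \frac{c}{b}\log( \frac{c}{ab} ) - \frac{c}{b} = \frac{c}{b}( \log( \frac{c}{ab} ) - 1 ). $$
Finally, the estimate (\ref{power_beta_factorial_C12}) can be checked with $C_{1.2} = (k_{1}+1)^{k_{1}}$. Indeed,
the assumption (\ref{multipl_operators_continuity_cond_SED}) grants $bk_{0} + \frac{bk_{2}}{\sigma_{3}} \leq k_{1}$
and we have $\beta - j \geq \frac{\beta + 1}{k_{1}+1}$ for all $0 \leq j \leq k_{1}-1$ whenever $\beta \geq k_{1}$,
whence
$$ (1 + \beta)^{bk_{0} + \frac{bk_{2}}{\sigma_{3}}} \frac{(\beta - k_{1})!}{\beta!} \leq
\frac{(1 + \beta)^{k_{1}}}{\prod_{j=0}^{k_{1}-1} (\beta - j)} \leq (k_{1}+1)^{k_{1}}. $$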
\begin{prop} Let $k_{0},k_{2} \geq 0$ be integers. Let $\underline{\sigma} =
(\sigma_{1},\sigma_{2},\sigma_{3})$ and $\underline{\sigma}' = (\sigma_{1}',\sigma_{2}',\sigma_{3}')$ with
$\sigma_{j} > 0$ and $\sigma_{j}'>0$ for $j=1,2,3$, such that
\begin{equation}
\sigma_{1} > \sigma_{1}' \ \ , \ \ \sigma_{2} < \sigma_{2}' \ \ , \ \ \sigma_{3} = \sigma_{3}'. \label{cond_sigma_sigma'}
\end{equation}
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the operator $v(\tau,z)
\mapsto \tau^{k_0}\exp(-k_{2}\tau)v(\tau,z)$ is a bounded linear map from
$(SED_{(\underline{\sigma}',H,\epsilon,\delta)},||.||_{(\underline{\sigma}',H,\epsilon,\delta)})$ into
$(SED_{(\underline{\sigma},H,\epsilon,\delta)},||.||_{(\underline{\sigma},H,\epsilon,\delta)})$. Moreover,
there exists a constant $\check{C}_{1}>0$ (depending on
$k_{0},k_{2},\underline{\sigma},\underline{\sigma}',M,b$) such that
\begin{equation}
|| \tau^{k_0}\exp(-k_{2}\tau)v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq
\check{C}_{1}|\epsilon|^{k_0}||v(\tau,z)||_{(\underline{\sigma}',H,\epsilon,\delta)}
\end{equation}
for all $v \in SED_{(\underline{\sigma}',H,\epsilon,\delta)}$.
\end{prop}
\begin{proof} Take $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) \frac{z^{\beta}}{\beta!}$ within
$SED_{(\underline{\sigma}',H,\epsilon,\delta)}$. According to Definition 1, we see that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}
= \sum_{\beta \geq 0} ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)}
\frac{\delta^{\beta}}{\beta !}
$$
\begin{lemma}
There exists a constant $\check{C}_{1}>0$ (depending on $k_{0},k_{2},\underline{\sigma},\underline{\sigma}',M,b$) such that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)} \leq
\check{C}_{1}|\epsilon|^{k_0}||v_{\beta}(\tau)||_{(\beta,\underline{\sigma}',H,\epsilon)}$$
\end{lemma}
\begin{proof} We perform the following factorization
\begin{multline*}
|\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)| \frac{1}{|\tau|}\exp \left(-\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
+ \sigma_{2}s_{b}(\beta)\exp(\sigma_{3}|\tau|) \right)\\
= |v_{\beta}(\tau)|\frac{1}{|\tau|} \exp \left(-\frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau|
+ \sigma_{2}'s_{b}(\beta)\exp(\sigma_{3}'|\tau|) \right)\\
\times |\tau^{k_0}\exp(-k_{2}\tau)| \exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| )
\exp \left( (\sigma_{2} - \sigma_{2}')s_{b}(\beta) \exp( \sigma_{3}|\tau| ) \right).
\end{multline*}
We deduce that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)} \leq
\check{A}(\beta)||v_{\beta}(\tau)||_{(\beta,\underline{\sigma}',H,\epsilon)}$$
where
\begin{multline*}
\check{A}(\beta) = \sup_{\tau \in H} |\tau^{k_0}\exp(-k_{2}\tau)|
\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| )
\exp \left( (\sigma_{2} - \sigma_{2}')s_{b}(\beta) \exp( \sigma_{3}|\tau| ) \right) \\
\leq \check{A}_{1}(\beta) \check{A}_{2}(\beta)
\end{multline*}
with
$$ \check{A}_{1}(\beta) = \sup_{x \geq 0}
x^{k_0}\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)x ) \ \ , \ \
\check{A}_{2}(\beta) = \sup_{x \geq 0} \exp(k_{2}x)
\exp \left( (\sigma_{2} - \sigma_{2}')s_{b}(\beta) \exp( \sigma_{3}x ) \right).$$
Since $r_{b}(\beta) \geq 1$ for all $\beta \geq 0$, we deduce from (\ref{xm1_expm2x<}) that
\begin{equation}
\check{A}_{1}(\beta) \leq |\epsilon|^{k_0} \sup_{x \geq 0} (\frac{x}{|\epsilon|})^{k_0}
\exp( - (\sigma_{1} - \sigma_{1}') \frac{x}{|\epsilon|} )
\leq |\epsilon|^{k_0}( \frac{k_{0} e^{-1}}{\sigma_{1} - \sigma_{1}'})^{k_0}. \label{checkA1_bds}
\end{equation}
In order to handle the sequence $\check{A}_{2}(\beta)$, we observe that
$s_{b}(\beta) \geq M - \zeta(b) > 0$, for all $\beta \geq 0$. Therefore, we see that
$$ \check{A}_{2}(\beta) \leq \sup_{x \geq 0} \exp \left( k_{2}x + (\sigma_{2} - \sigma_{2}')(M - \zeta(b))
\exp(\sigma_{3}x) \right) $$
which is a finite upper bound for all $\beta \geq 0$.
\end{proof}
As a consequence, Proposition 3 follows directly from Lemma 2.
\end{proof}
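\noindent {\bf Remark.} Inspecting the proof of Lemma 2, one may for instance take the explicit constant
$$ \check{C}_{1} = ( \frac{k_{0} e^{-1}}{\sigma_{1} - \sigma_{1}'} )^{k_0}
\sup_{x \geq 0} \exp \left( k_{2}x + (\sigma_{2} - \sigma_{2}')(M - \zeta(b)) \exp( \sigma_{3}x ) \right), $$
which is finite owing to the conditions $\sigma_{2} < \sigma_{2}'$ and $M > \zeta(b)$.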
\begin{prop} Let $c(\tau,z,\epsilon)$ be a holomorphic function on $\mathring{H} \times D(0,\rho) \times
D(0,\epsilon_{0})$,
continuous on $H \times D(0,\rho) \times D(0,\epsilon_{0})$, for some $\rho>0$, bounded by a constant $M_{c}>0$ on
$H \times D(0,\rho) \times D(0,\epsilon_{0})$. Let $0 < \delta < \rho$. Then, the linear map
$v(\tau,z) \mapsto c(\tau,z,\epsilon)v(\tau,z)$ is bounded from
$(SED_{(\underline{\sigma},H,\epsilon,\delta)},||.||_{(\underline{\sigma},H,\epsilon,\delta)})$ into itself, for
all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Furthermore, one can choose a constant $\breve{C}_{1}>0$ (depending on
$M_{c},\delta,\rho$) independent of $\epsilon$ such that
\begin{equation}
||c(\tau,z,\epsilon)v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq \breve{C}_{1}
||v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}
\end{equation}
for all $v \in SED_{(\underline{\sigma},H,\epsilon,\delta)}$.
\end{prop}
\begin{proof} We expand $c(\tau,z,\epsilon) = \sum_{\beta \geq 0} c_{\beta}(\tau,\epsilon) z^{\beta}/\beta!$ as a
convergent Taylor series w.r.t $z$ on $D(0,\rho)$, where $M_{c}>0$ stands for the bound
$$ \sup_{\tau \in H, z \in \bar{D}(0,\rho),\epsilon \in D(0,\epsilon_{0})} |c(\tau,z,\epsilon)| \leq M_{c}. $$
Let $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ be an element of
$SED_{(\underline{\sigma},H,\epsilon,\delta)}$. According to Definition 1, we get that
\begin{equation}
||c(\tau,z,\epsilon)v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq
\sum_{\beta \geq 0} \left( \sum_{\beta_{1} + \beta_{2} = \beta}
||c_{\beta_{1}}(\tau,\epsilon)v_{\beta_{2}}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)}
\frac{\beta!}{\beta_{1}!\beta_{2}!} \right) \frac{\delta^{\beta}}{\beta!}.
\label{maj_norm_c_bded_v}
\end{equation}
Besides, the Cauchy formula implies the next estimates
$$ \sup_{\tau \in H,\epsilon \in D(0,\epsilon_{0})} |c_{\beta}(\tau,\epsilon)| \leq M_{c} (\frac{1}{\delta'})^{\beta} \beta!
$$
for any $\delta < \delta' < \rho$, for all $\beta \geq 0$. By construction of the norm, since
$r_{b}(\beta) \geq r_{b}(\beta_{2})$ and $s_{b}(\beta) \leq s_{b}(\beta_{2})$ whenever $\beta_{2} \leq \beta$, we
deduce that
\begin{equation}
||c_{\beta_{1}}(\tau,\epsilon)v_{\beta_{2}}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)}
\leq M_{c} \beta_{1}!(\frac{1}{\delta'})^{\beta_{1}}||v_{\beta_{2}}(\tau)||_{(\beta,\underline{\sigma},H,\epsilon)}
\leq M_{c} \beta_{1}!(\frac{1}{\delta'})^{\beta_{1}}||v_{\beta_{2}}(\tau)||_{(\beta_{2},\underline{\sigma},H,\epsilon)}
\label{norm_beta_c_bded_v}
\end{equation}
for all $\beta_{1},\beta_{2} \geq 0$ with $\beta_{1} + \beta_{2}=\beta$. Gathering (\ref{maj_norm_c_bded_v})
and (\ref{norm_beta_c_bded_v}) yields the desired bounds
$$ ||c(\tau,z,\epsilon)v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq M_{c}
(\sum_{\beta \geq 0} (\frac{\delta}{\delta'})^{\beta} )||v(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}. $$
\end{proof}
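\noindent {\bf Remark.} Since $0 < \delta < \delta' < \rho$, the geometric series in the last bound converges, so
that one may take the explicit constant
$$ \breve{C}_{1} = M_{c} \sum_{\beta \geq 0} ( \frac{\delta}{\delta'} )^{\beta} =
\frac{M_{c} \delta'}{\delta' - \delta} $$
for any fixed $\delta < \delta' < \rho$.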
\subsection{Banach spaces of holomorphic functions with super exponential growth on horizontal strips and exponential
growth on sectors}
We keep the notations of the previous subsection 2.1. We consider a closed horizontal strip
\begin{equation}
J = \{ z \in \mathbb{C} / c \leq \mathrm{Im}(z) \leq d, \ \ \mathrm{Re}(z) \leq 0 \} \label{defin_strip_J}
\end{equation}
for some real numbers $c<d$. We denote by $S_{d}$ an unbounded open sector with bisecting direction $d \in \mathbb{R}$
centered at 0 such that $S_{d} \subset \mathbb{C}_{+} = \{ z \in \mathbb{C} / \mathrm{Re}(z) > 0 \}$.
\begin{defin}
Let $\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ where
$\sigma_{1},\varsigma_{2},\varsigma_{3}>0$ be positive real numbers and $\beta \geq 0$ be an
integer. Take $\epsilon \in \dot{D}(0,\epsilon_{0})$. We designate
$SEG_{(\beta,\underline{\varsigma},J,\epsilon)}$ as the vector space of holomorphic functions $v(\tau)$ on
$\mathring{J}$ and continuous on $J$
such that
$$ ||v(\tau)||_{(\beta,\underline{\varsigma},J,\epsilon)} = \sup_{\tau \in J} \frac{|v(\tau)|}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| - \varsigma_{2} r_{b}(\beta)
\exp( \varsigma_{3}|\tau| ) \right) $$
is finite. Similarly, we denote by $EG_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}$ the vector space of
holomorphic functions $v(\tau)$ on
$S_{d} \cup D(0,r)$ and continuous on $\bar{S_d} \cup \bar{D}(0,r)$
such that
$$ ||v(\tau)||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)} = \sup_{\tau \in \bar{S_d} \cup \bar{D}(0,r)}
\frac{|v(\tau)|}{|\tau|}
\exp ( -\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| ) $$
is finite. Let us choose a real number $\delta>0$. We define
$SEG_{(\underline{\varsigma},J,\epsilon,\delta)}$
to be the vector space of all formal series $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$
with coefficients $v_{\beta}(\tau) \in SEG_{(\beta,\underline{\varsigma},J,\epsilon)}$, for
$\beta \geq 0$ and such that
$$ ||v(\tau,z)||_{(\underline{\varsigma},J,\epsilon,\delta)} = \sum_{\beta \geq 0}
||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma},J,\epsilon)} \frac{\delta^{\beta}}{\beta !}$$
is finite. Likewise, we set $EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$
as the vector space of all formal series $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$
with coefficients $v_{\beta}(\tau) \in EG_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}$, for
$\beta \geq 0$, such that
$$ ||v(\tau,z)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} = \sum_{\beta \geq 0}
||v_{\beta}(\tau)||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)} \frac{\delta^{\beta}}{\beta !}$$
is finite.
\end{defin}
\noindent {\bf Remark.} These Banach spaces are slight modifications of those introduced in the former work \cite{ma}
of the second author. The next proposition is stated without proof since it follows exactly the same steps as
Proposition 1 above. It states that the formal series belonging to the latter Banach spaces turn out to be
holomorphic functions on some disc w.r.t $z$ and with super exponential growth (resp. exponential growth) w.r.t $\tau$
on the strip $J$ (resp. on the domain $S_{d} \cup D(0,r)$).
\begin{prop}
1) Let $v(\tau,z) \in SEG_{(\underline{\varsigma},J,\epsilon,\delta)}$. Take some real number $0 < \delta_{1} < 1$.
Then, there exists a constant $C_{2}>0$ depending on
$||v||_{(\underline{\varsigma},J,\epsilon,\delta)}$ and $\delta_{1}$ such that
\begin{equation}
|v(\tau,z)| \leq C_{2}|\tau| \exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| +
\varsigma_{2} \zeta(b) \exp( \varsigma_{3} |\tau| ) \right)
\end{equation}
for all $\tau \in J$, all $z \in \mathbb{C}$ with $\frac{|z|}{\delta} < \delta_{1}$.
\noindent 2) Let us take $v(\tau,z) \in EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$. Choose some real number
$0 < \delta_{1} < 1$.
Then, there exists a constant $C'_{2}>0$ depending on
$||v||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$
and $\delta_{1}$ such that
\begin{equation}
|v(\tau,z)| \leq C'_{2}|\tau| \exp ( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| )
\end{equation}
for all $\tau \in S_{d} \cup D(0,r)$, all $z \in \mathbb{C}$ with $\frac{|z|}{\delta} < \delta_{1}$.
\end{prop}
In the upcoming propositions, we study the same linear operators as defined in Propositions 2, 3 and 4 but acting on
the Banach spaces described in Definition 2.
\begin{prop} Let us choose integers $k_{0},k_{2} \geq 0$ and $k_{1} \geq 1$.
\noindent 1) We assume that the next
constraint
\begin{equation}
k_{1} \geq bk_{0} + \frac{bk_{2}}{\varsigma_{3}} \label{multipl_operators_continuity_cond_SEG}
\end{equation}
holds. Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear map
$v(\tau,z) \mapsto \tau^{k_0}\exp(-k_{2} \tau) \partial_{z}^{-k_1}v(\tau,z)$ is bounded from
$(SEG_{(\underline{\varsigma},J,\epsilon,\delta)}, ||.||_{(\underline{\varsigma},J,\epsilon,\delta)})$ into itself.
Moreover, there exists a constant $C_{3}>0$ (depending on
$k_{0},k_{1},k_{2},\underline{\varsigma},b$),
independent of $\epsilon$, such that
\begin{equation}
|| \tau^{k_0} \exp(-k_{2} \tau) \partial_{z}^{-k_{1}}v(\tau,z) ||_{(\underline{\varsigma},J,\epsilon,\delta)}
\leq C_{3}|\epsilon|^{k_0} \delta^{k_1} ||v(\tau,z)||_{(\underline{\varsigma},J,\epsilon,\delta)}
\label{multipl_operators_exp_continuity_SEG}
\end{equation}
for all $v(\tau,z) \in SEG_{(\underline{\varsigma},J,\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\noindent 2) We suppose that the next restriction
$$
k_{1} \geq bk_{0}
$$
holds. Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear map
$v(\tau,z) \mapsto \tau^{k_0}\exp(-k_{2} \tau) \partial_{z}^{-k_1}v(\tau,z)$ is bounded
from $EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$ into itself.
Moreover, there exists a constant $C'_{3}>0$ (depending on
$k_{0},k_{1},k_{2},\sigma_{1},r,b$),
independent of $\epsilon$, such that
\begin{equation}
|| \tau^{k_0} \exp(-k_{2} \tau) \partial_{z}^{-k_{1}}v(\tau,z) ||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}
\leq C'_{3}|\epsilon|^{k_0} \delta^{k_1} ||v(\tau,z)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}
\label{multipl_operators_exp_continuity_EG}
\end{equation}
for all $v(\tau,z) \in EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} We only sketch the proof since the arguments are very close to those used in Proposition 2.
For the first point 1), it suffices to show the next lemma
\begin{lemma} Let $v_{\beta - k_{1}}(\tau)$ in $SEG_{(\beta - k_{1},\underline{\varsigma},J,\epsilon)}$, for all
$\beta \geq k_{1}$. There exists a constant $C_{3.1}>0$ (depending on $k_{0},k_{1},k_{2},\underline{\varsigma},b$)
such that
$$
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\underline{\varsigma},J,\epsilon)}
\leq C_{3.1}|\epsilon|^{k_0}(\beta + 1)^{bk_{0} + \frac{k_{2}b}{\varsigma_{3}}}
||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\underline{\varsigma},J,\epsilon)}
$$
for all $\beta \geq k_{1}$.
\end{lemma}
\begin{proof} We use the factorization
\begin{multline*}
|\tau^{k_0}\exp(-k_{2}\tau) v_{\beta - k_{1}}(\tau)| \frac{1}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| - \varsigma_{2} r_{b}(\beta)
\exp( \varsigma_{3}|\tau| ) \right)\\
= \frac{|v_{\beta - k_{1}}(\tau)|}{|\tau|}
\exp \left(-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta-k_{1})|\tau| - \varsigma_{2} r_{b}(\beta-k_{1})
\exp( \varsigma_{3}|\tau| ) \right)\\
\times \left( |\tau^{k_0}\exp( -k_{2} \tau)|
\exp( - \frac{\sigma_{1}}{|\epsilon|}( r_{b}(\beta) - r_{b}(\beta - k_{1}) )|\tau| )
\exp( -\varsigma_{2}( r_{b}(\beta) - r_{b}(\beta - k_{1}) ) \exp(\varsigma_{3}|\tau|) ) \right).
\end{multline*}
In accordance with (\ref{difference_s_b_r_b}), we get that
$$
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\underline{\varsigma},J,\epsilon)}
\leq B(\beta) ||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\underline{\varsigma},J,\epsilon)}
$$
where
\begin{multline*}
B(\beta) = \sup_{ \tau \in J} |\tau|^{k_0} \exp(k_{2}|\tau|)
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} |\tau| )\\
\times
\exp( -\varsigma_{2} \frac{k_{1}}{(\beta + 1)^{b}} \exp( \varsigma_{3} |\tau| ) ) \leq B_{1}(\beta)B_{2}(\beta)
\end{multline*}
with
$$ B_{1}(\beta) = \sup_{x \geq 0} x^{k_0}
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} x )
$$
and
$$ B_{2}(\beta) = \sup_{x \geq 0} \exp(k_{2}x)
\exp( -\varsigma_{2} \frac{k_{1}}{(\beta + 1)^{b}} \exp( \varsigma_{3} x ) )
$$
for all $\beta \geq k_{1}$. From the estimates (\ref{A_1_bounds}), we deduce that
$$B_{1}(\beta) \leq |\epsilon|^{k_0} (\frac{k_0}{\sigma_{1}k_{1}})^{k_0} \exp( -k_{0} ) (\beta + 1)^{bk_{0}}$$
for all $\beta \geq k_{1}$. Bearing in mind the estimates (\ref{A_2_bounds}), we get a constant
$\tilde{C}_{3.0}>0$ (depending on $k_{2},\varsigma_{2},k_{1},b,\varsigma_{3}$) with
$$ B_{2}(\beta) \leq \tilde{C}_{3.0}(\beta + 1)^{\frac{k_{2}b}{\varsigma_{3}}} $$
for all $\beta \geq k_{1}$, provided that $k_{2} \geq 1$. When $k_{2}=0$, we obviously see that
$B_{2}(\beta) \leq 1$ for all $\beta \geq k_{1}$. Lemma 3 follows.
\end{proof}
\noindent In order to prove the second point 2), we need to check the next lemma
\begin{lemma} Let $v_{\beta - k_{1}}(\tau)$ in $EG_{(\beta - k_{1},\sigma_{1},S_{d} \cup D(0,r),\epsilon)}$, for all
$\beta \geq k_{1}$. There exists a constant $C'_{3.1}>0$ (depending on $k_{0},k_{1},k_{2},\sigma_{1},r,b$)
such that
$$
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq C'_{3.1}|\epsilon|^{k_0}(\beta + 1)^{bk_{0}}
||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
$$
for all $\beta \geq k_{1}$.
\end{lemma}
\begin{proof} We need the help of the factorization
\begin{multline*}
|\tau^{k_0}\exp(-k_{2}\tau) v_{\beta - k_{1}}(\tau)| \frac{1}{|\tau|}
\exp (-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau| )
= \frac{|v_{\beta - k_{1}}(\tau)|}{|\tau|}
\exp (-\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta-k_{1})|\tau| ) \\
\times |\tau^{k_0}\exp( -k_{2} \tau)|
\exp( - \frac{\sigma_{1}}{|\epsilon|}( r_{b}(\beta) - r_{b}(\beta - k_{1}) )|\tau| ).
\end{multline*}
Due to the fact that there exists a constant $C'_{3.2}>0$ (depending on $k_{2},r$) such that
$|\exp(-k_{2}\tau)| \leq C'_{3.2}$ for all $\tau \in S_{d} \cup D(0,r)$ and according to (\ref{difference_s_b_r_b}),
we obtain that
$$
|| \tau^{k_0}\exp(-k_{2} \tau)v_{\beta - k_{1}}(\tau) ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq C(\beta) ||v_{\beta - k_{1}}(\tau)||_{(\beta - k_{1},\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
$$
where
$$
C(\beta) = C'_{3.2}\sup_{ \tau \in S_{d} \cup D(0,r)} |\tau|^{k_0}
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} |\tau| )
\leq C'_{3.2}C_{1}(\beta)
$$
with
$$ C_{1}(\beta) = \sup_{x \geq 0} x^{k_0}
\exp( -\frac{\sigma_{1}}{|\epsilon|} \frac{k_1}{(\beta + 1)^{b}} x )
$$
for all $\beta \geq k_{1}$. Again, in view of the estimates (\ref{A_1_bounds}), we deduce that
$$C_{1}(\beta) \leq |\epsilon|^{k_0} (\frac{k_0}{\sigma_{1}k_{1}})^{k_0} \exp( -k_{0} ) (\beta + 1)^{bk_{0}}$$
for all $\beta \geq k_{1}$. Lemma 4 follows.
\end{proof}
\end{proof}
\begin{prop} Let $k_{0},k_{2} \geq 0$ be integers.\\
1) We select $\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ and
$\underline{\varsigma}' = (\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ with $\sigma_{1},\sigma_{1}'>0$, $\varsigma_{j},\varsigma_{j}'>0$ for
$j=2,3$ in order that
\begin{equation}
\sigma_{1} > \sigma_{1}' \ \ , \ \ \varsigma_{2} > \varsigma_{2}' \ \ , \ \ \varsigma_{3} = \varsigma_{3}'.
\end{equation}
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the map $v(\tau,z) \mapsto \tau^{k_0}\exp(-k_{2}\tau)v(\tau,z)$
is a bounded linear operator from
$(SEG_{(\underline{\varsigma}',J,\epsilon,\delta)},||.||_{(\underline{\varsigma}',J,\epsilon,\delta)})$ into
$(SEG_{(\underline{\varsigma},J,\epsilon,\delta)},||.||_{(\underline{\varsigma},J,\epsilon,\delta)})$. Furthermore,
there exists a constant $\check{C}_{3}>0$ (depending on $k_{0},k_{2},\underline{\varsigma},\underline{\varsigma}'$)
such that
\begin{equation}
|| \tau^{k_0}\exp( -k_{2} \tau)v(\tau,z) ||_{(\underline{\varsigma},J,\epsilon,\delta)}
\leq \check{C}_{3} |\epsilon|^{k_0} || v(\tau,z) ||_{(\underline{\varsigma}',J,\epsilon,\delta)}
\end{equation}
for all $v \in SEG_{(\underline{\varsigma}',J,\epsilon,\delta)}$.\\
2) Let $\sigma_{1},\sigma_{1}'>0$ such that
\begin{equation}
\sigma_{1} > \sigma_{1}'.
\end{equation}
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear map $v(\tau,z) \mapsto \tau^{k_0}\exp(-k_{2}\tau)v(\tau,z)$
is bounded from the Banach space $(EG_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)},
||.||_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)})$ into $(EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)},
||.||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)})$. Besides, there exists a constant $\check{C}_{3}'>0$
(depending on $k_{0},k_{2},r,\sigma_{1},\sigma_{1}'$) such that
\begin{equation}
|| \tau^{k_0}\exp( -k_{2} \tau)v(\tau,z) ||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}
\leq \check{C}_{3}' |\epsilon|^{k_0} || v(\tau,z) ||_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)}
\end{equation}
for all $v \in EG_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)}$.\\
\end{prop}
\begin{proof}
As in Proposition 6, we only provide an outline of the proof since it stays very close to that of Proposition 3.
Concerning the first item 1), it suffices to show the next lemma
\begin{lemma}
There exists a constant $\check{C}_{3}>0$ (depending on $k_{0},k_{2},\underline{\varsigma},\underline{\varsigma}'$)
such that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\underline{\varsigma},J,\epsilon)}
\leq \check{C}_{3}|\epsilon|^{k_0}||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma}',J,\epsilon)} $$
\end{lemma}
\begin{proof}
We perform the factorization
\begin{multline*}
|\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)| \frac{1}{|\tau|}\exp \left(-\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
- \varsigma_{2}r_{b}(\beta)\exp(\varsigma_{3}|\tau|) \right)\\
= |v_{\beta}(\tau)|\frac{1}{|\tau|} \exp \left(-\frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau|
- \varsigma_{2}'r_{b}(\beta)\exp(\varsigma_{3}'|\tau|) \right)\\
\times |\tau^{k_0}\exp(-k_{2}\tau)| \exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| )
\exp \left( -(\varsigma_{2} - \varsigma_{2}')r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right).
\end{multline*}
We get that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\underline{\varsigma},J,\epsilon)} \leq
\check{B}(\beta)||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma}',J,\epsilon)}$$
where
\begin{multline*}
\check{B}(\beta) = \sup_{\tau \in J} |\tau|^{k_0}\exp(k_{2}|\tau|)
\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| )
\exp \left( -(\varsigma_{2} - \varsigma_{2}')r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right) \\
\leq \check{B}_{1}(\beta) \check{B}_{2}(\beta)
\end{multline*}
with
$$ \check{B}_{1}(\beta) = \sup_{x \geq 0}
x^{k_0}\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)x ) \ \ , \ \
\check{B}_{2}(\beta) = \sup_{x \geq 0} \exp(k_{2}x)
\exp \left( -(\varsigma_{2} - \varsigma_{2}')r_{b}(\beta) \exp( \varsigma_{3}x ) \right).$$
With the help of (\ref{checkA1_bds}), we check that
$$ \check{B}_{1}(\beta)
\leq |\epsilon|^{k_0}( \frac{k_{0} e^{-1}}{\sigma_{1} - \sigma_{1}'})^{k_0}$$
and since $r_{b}(\beta) \geq 1$ for all $\beta \geq 0$, we deduce
$$ \check{B}_{2}(\beta) \leq \sup_{x \geq 0} \exp \left( k_{2}x - (\varsigma_{2} - \varsigma_{2}')
\exp(\varsigma_{3}x) \right) $$
which is a finite majorant for all $\beta \geq 0$. The lemma follows.
\end{proof}
Regarding the second item 2), it boils down to the next lemma
\begin{lemma}
There exists a constant $\check{C}_{3}'>0$ (depending on $k_{0},k_{2},r,\sigma_{1},\sigma_{1}'$)
such that
$$ ||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq \check{C}_{3}'|\epsilon|^{k_0}||v_{\beta}(\tau)||_{(\beta,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)} $$
\end{lemma}
\begin{proof}
Again we need to factorize the next expression
\begin{multline*}
|\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)| \frac{1}{|\tau|}\exp (-\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| )
= |v_{\beta}(\tau)|\frac{1}{|\tau|} \exp ( -\frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| )\\
\times |\tau^{k_0}\exp(-k_{2}\tau)| \exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| ).
\end{multline*}
By construction, we can select a constant $\check{C}_{3.1}'>0$ (depending on $k_{2},r$) such that
$|\exp(-k_{2}\tau)| \leq \check{C}_{3.1}'$ for all $\tau \in S_{d} \cup D(0,r)$. We deduce that
\begin{equation}
||\tau^{k_0}\exp(-k_{2}\tau)v_{\beta}(\tau)||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq \check{C}(\beta) ||v_{\beta}(\tau)||_{(\beta,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)}
\end{equation}
where
$$ \check{C}(\beta) \leq \check{C}_{3.1}' \sup_{\tau \in S_{d} \cup D(0,r)} |\tau|^{k_0}
\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)|\tau| ) \leq
\check{C}_{3.1}'\check{C}_{1}(\beta) $$
with
$$ \check{C}_{1}(\beta) = \sup_{x \geq 0}
x^{k_0}\exp( - \frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|}r_{b}(\beta)x ).$$
Through (\ref{checkA1_bds}) we notice that
$$ \check{C}_{1}(\beta)
\leq |\epsilon|^{k_0}( \frac{k_{0} e^{-1}}{\sigma_{1} - \sigma_{1}'})^{k_0}$$
for all $\beta \geq 0$. This yields the lemma.
\end{proof}
\end{proof}
The next proposition is stated without proof since it follows from exactly the same steps
and arguments as in Proposition 4.
\begin{prop}
1) Consider a holomorphic function $c(\tau,z,\epsilon)$ on $\mathring{J} \times D(0,\rho) \times D(0,\epsilon_{0})$,
continuous
on $J \times D(0,\rho) \times D(0,\epsilon_{0})$, for some $\rho>0$, bounded by a constant $M_{c}>0$ on
$J \times D(0,\rho) \times D(0,\epsilon_{0})$. We set $0 < \delta < \rho$. Then, the operator
$v(\tau,z) \mapsto c(\tau,z,\epsilon)v(\tau,z)$ is bounded from
$(SEG_{(\underline{\varsigma},J,\epsilon,\delta)},||.||_{(\underline{\varsigma},J,\epsilon,\delta)})$ into itself, for
all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Besides, one can select a constant $\breve{C}_{3}>0$
(depending on $M_{c},\delta,\rho$) such that
$$ ||c(\tau,z,\epsilon)v(\tau,z)||_{(\underline{\varsigma},J,\epsilon,\delta)} \leq
\breve{C}_{3}||v(\tau,z)||_{(\underline{\varsigma},J,\epsilon,\delta)} $$
for all $v \in SEG_{(\underline{\varsigma},J,\epsilon,\delta)}$.\\
2) Let us take a function $c(\tau,z,\epsilon)$ holomorphic on $(S_{d} \cup D(0,r)) \times D(0,\rho) \times
D(0,\epsilon_{0})$,
continuous on $(\bar{S_{d}} \cup \bar{D}(0,r)) \times D(0,\rho) \times D(0,\epsilon_{0})$, for some $\rho>0$ and
bounded by a constant $M_{c}>0$ on $(\bar{S_{d}} \cup \bar{D}(0,r)) \times D(0,\rho) \times D(0,\epsilon_{0})$. Let
$0 < \delta < \rho$. Then, the linear map $v(\tau,z) \mapsto c(\tau,z,\epsilon)v(\tau,z)$ is bounded from
$(EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)},||.||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)})$
into itself, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Furthermore, one can select a constant $\breve{C}_{3}'>0$
(depending on $M_{c},\delta,\rho$) with
$$ ||c(\tau,z,\epsilon)v(\tau,z)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} \leq
\breve{C}_{3}'||v(\tau,z)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} $$
for all $v \in EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$.\\
\end{prop}
\subsection{An auxiliary Cauchy problem whose coefficients suffer exponential growth on strips
and polynomial growth on unbounded sectors}
We start this subsection by introducing some notations. Let $\mathcal{A}$ be a finite subset of
$\mathbb{N}^{3}$. For all $\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}$, we consider a bounded
holomorphic function $c_{\underline{k}}(z,\epsilon)$ on a polydisc $D(0,\rho) \times D(0,\epsilon_{0})$ for some
radii $\rho,\epsilon_{0}>0$. Let $S \geq 1$ be an integer and let $P(\tau)$ be a polynomial (not identically equal to 0) with complex coefficients
whose roots belong to the open right halfplane $\mathbb{C}_{+} = \{ z \in \mathbb{C} / \mathrm{Re}(z) > 0 \}$.
We consider the following equation
\begin{equation}
\partial_{z}^{S}w(\tau,z,\epsilon) = \sum_{\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}}
\frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0}\tau^{k_0} \exp(- k_{2}\tau) \partial_{z}^{k_1}
w(\tau,z,\epsilon) \label{1_aux_CP}
\end{equation}
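To fix ideas, here is a minimal instance of this equation; the data $S=3$, $\mathcal{A}=\{(1,1,1)\}$ and the coefficient $c_{(1,1,1)}$ are chosen for illustration only:

```latex
% Illustrative instance (hypothetical data): S = 3, A = {(1,1,1)}
\partial_{z}^{3}w(\tau,z,\epsilon) =
  \frac{c_{(1,1,1)}(z,\epsilon)}{P(\tau)}\,
  \epsilon^{-1}\tau \exp(-\tau)\,\partial_{z}w(\tau,z,\epsilon).
```

In this case, the constraint $S \geq k_{1} + bk_{0} + bk_{2}/\sigma_{3}$ imposed below reads $3 \geq 1 + b(1 + 1/\sigma_{3})$, which holds for instance with $b=3/2$ and $\sigma_{3}=3$.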
Let us now state the main result of this subsection.
\begin{prop} 1) We impose the following requirements.\\
a) There exist real numbers $\sigma_{1},\sigma_{2},\sigma_{3}>0$ and $b>1$, with
$\underline{\sigma} = (\sigma_{1},\sigma_{2},\sigma_{3})$, such that for all $\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}$, we have
\begin{equation}
S \geq k_{1} + bk_{0} + \frac{b k_{2}}{\sigma_{3}} \ \ , \ \ S > k_{1} \label{cond_ex_uniq_sol_1_aux_CP_SED_H}
\end{equation}
b) For all $0 \leq j \leq S-1$, we consider a function $\tau \mapsto w_{j}(\tau,\epsilon)$ that belongs to
the Banach space $SED_{(0,\underline{\sigma}',H,\epsilon)}$ for all $\epsilon \in \dot{D}(0,\epsilon_{0})$,
for some closed horizontal strip $H$ described in (\ref{defin_strip_H}) and for a tuple
$\underline{\sigma}'=(\sigma_{1}',\sigma_{2}',\sigma_{3}')$ with $\sigma_{1}>\sigma_{1}'>0$,
$\sigma_{2}<\sigma_{2}'$ and $\sigma_{3}=\sigma_{3}'$.
Then, there exist some constants $I,R>0$ and $0 < \delta < \rho$ (independent of $\epsilon$) such that if one assumes that
\begin{equation}
\sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\sigma}',H,\epsilon)} \frac{\delta^{j}}{j!} \leq I
\label{initial_data_1_aux_CP_small_H}
\end{equation}
for all $0 \leq h \leq S-1$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the equation
(\ref{1_aux_CP}) with initial data
\begin{equation}
(\partial_{z}^{j}w)(\tau,0,\epsilon) = w_{j}(\tau,\epsilon) \ \ , \ \ 0 \leq j \leq S-1,
\label{1_aux_CP_initial_data}
\end{equation}
has a unique solution $w(\tau,z,\epsilon)$ in the space $SED_{(\underline{\sigma},H,\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$ and satisfies furthermore the estimates
\begin{equation}
||w(\tau,z,\epsilon)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq \delta^{S}R + I
\label{norm_w_bd_in_epsilon_H}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2) We impose the following restrictions.\\
a) There exist real numbers $\sigma_{1},\varsigma_{2},\varsigma_{3}>0$ and $b>1$, with
$\underline{\varsigma}=(\sigma_{1},\varsigma_{2},\varsigma_{3})$, such that for all
$\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}$ we have
\begin{equation}
S \geq k_{1} + bk_{0} + \frac{bk_{2}}{\varsigma_{3}} \ \ , \ \ S > k_{1}.
\end{equation}
b) For all $0 \leq j \leq S-1$, we choose a function $\tau \mapsto w_{j}(\tau,\epsilon)$ belonging to
the Banach space $SEG_{(0,\underline{\varsigma}',J,\epsilon)}$ for all $\epsilon \in \dot{D}(0,\epsilon_{0})$,
for some closed horizontal strip $J$ displayed in (\ref{defin_strip_J}) and for a tuple
$\underline{\varsigma}'=(\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ with $\sigma_{1}>\sigma_{1}'>0$,
$\varsigma_{2}>\varsigma_{2}'>0$ and $\varsigma_{3}=\varsigma_{3}'$.
Then, there exist some constants $I,R>0$ and $0 < \delta < \rho$ (independent of $\epsilon$) such that if one
assumes that
\begin{equation}
\sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\varsigma}',J,\epsilon)} \frac{\delta^{j}}{j!} \leq I
\label{initial_data_1_aux_CP_small_J}
\end{equation}
for all $0 \leq h \leq S-1$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the equation
(\ref{1_aux_CP}) with initial data (\ref{1_aux_CP_initial_data})
has a unique solution $w(\tau,z,\epsilon)$ in the space $SEG_{(\underline{\varsigma},J,\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$, which furthermore satisfies the estimates
\begin{equation}
||w(\tau,z,\epsilon)||_{(\underline{\varsigma},J,\epsilon,\delta)} \leq \delta^{S}R + I
\label{norm_w_bd_in_epsilon_J}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
3) We require the following conditions.\\
a) We fix some real number $\sigma_{1}>0$ and assume the existence of $b>1$ a real number such that for all
$\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}$ we have
\begin{equation}
S \geq k_{1} + bk_{0} \ \ , \ \ S > k_{1}.
\end{equation}
b) For all $0 \leq j \leq S-1$, we select a function $\tau \mapsto w_{j}(\tau,\epsilon)$ that belongs to
the Banach space $EG_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)}$ for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$,
for some open unbounded sector $S_{d}$ with bisecting direction $d$ with $S_{d} \subset \mathbb{C}_{+}$
and $D(0,r)$ a disc centered at 0 with radius $r$, for some $0 < \sigma_{1}' < \sigma_{1}$. The sector
$S_{d}$ and the disc $D(0,r)$ are chosen in a way that $\bar{S}_{d} \cup \bar{D}(0,r)$ does not contain any root of
the polynomial $P(\tau)$.
Then, one can choose constants $I,R>0$ and $0 < \delta < \rho$ (independent of $\epsilon$) such that if one
assumes that
\begin{equation}
\sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)} \frac{\delta^{j}}{j!} \leq I
\label{initial_data_1_aux_CP_small_S}
\end{equation}
for all $0 \leq h \leq S-1$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the equation
(\ref{1_aux_CP}) with initial data (\ref{1_aux_CP_initial_data})
has a unique solution $w(\tau,z,\epsilon)$ in the space $EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$, with the bounds
\begin{equation}
||w(\tau,z,\epsilon)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} \leq \delta^{S}R + I
\label{norm_w_bd_in_epsilon_S}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
\end{prop}
\begin{proof}
Within the proof, we only provide a detailed description of point 1), since the same arguments
apply to points 2) and 3) by making use of Propositions 6, 7 and 8 instead of Propositions 2, 3 and 4. We consider
the function
$$ W_{S}(\tau,z,\epsilon) = \sum_{j=0}^{S-1} w_{j}(\tau,\epsilon) \frac{z^j}{j!} $$
where $w_{j}(\tau,\epsilon)$ is displayed in 1)b) above. We introduce a map $A_{\epsilon}$ defined as
\begin{multline*}
A_{\epsilon}(U(\tau,z)) := \sum_{\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}}
\frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0}\tau^{k_0} \exp(- k_{2}\tau) \partial_{z}^{k_{1}-S}
U(\tau,z)\\
+ \sum_{\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}}
\frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0}\tau^{k_0} \exp(- k_{2}\tau) \partial_{z}^{k_1}
W_{S}(\tau,z,\epsilon).
\end{multline*}
In the forthcoming lemma, we show that $A_{\epsilon}$ is a contractive map from a small ball centered at the origin
in the space $SED_{(\underline{\sigma},H,\epsilon,\delta)}$ into itself.
\begin{lemma}
Under the constraint (\ref{cond_ex_uniq_sol_1_aux_CP_SED_H}), let us consider a positive real number $I>0$ such that
$$ \sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\sigma}',H,\epsilon)} \frac{\delta^j}{j!}
\leq I
$$
for all $0 \leq h \leq S-1$, for $\epsilon \in \dot{D}(0,\epsilon_{0})$. Then, for an appropriate
choice of $I$,\\
a) There exists a constant $R>0$ (independent of $\epsilon$) such that
\begin{equation}
||A_{\epsilon}(U(\tau,z))||_{(\underline{\sigma},H,\epsilon,\delta)} \leq R \label{A_epsilon_ball_in_ball}
\end{equation}
for all $U(\tau,z) \in B(0,R)$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, where
$B(0,R)$ is the closed ball centered at 0 with radius $R$ in $SED_{(\underline{\sigma},H,\epsilon,\delta)}$.\\
b) The next inequality
\begin{equation}
||A_{\epsilon}(U_{1}(\tau,z)) - A_{\epsilon}(U_{2}(\tau,z))||_{(\underline{\sigma},H,\epsilon,\delta)} \leq
\frac{1}{2}||U_{1}(\tau,z) - U_{2}(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}
\label{A_epsilon_shrink}
\end{equation}
holds for all $U_{1},U_{2} \in B(0,R)$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{lemma}
\begin{proof} Since $r_{b}(\beta) \geq r_{b}(0)$ and $s_{b}(\beta) \leq s_{b}(0)$ for all $\beta \geq 0$, we notice
that for any $0 \leq h \leq S-1$ and $0 \leq j \leq S-1-h$,
$$ ||w_{j+h}(\tau,\epsilon)||_{(j,\underline{\sigma}',H,\epsilon)} \leq
||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\sigma}',H,\epsilon)} $$
holds. We deduce that $\partial_{z}^{h}W_{S}(\tau,z,\epsilon)$ belongs to
$SED_{(\underline{\sigma}',H,\epsilon,\delta)}$ and moreover that
\begin{equation}
||\partial_{z}^{h}W_{S}(\tau,z,\epsilon)||_{(\underline{\sigma}',H,\epsilon,\delta)}
\leq \sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\sigma}',H,\epsilon)} \frac{\delta^j}{j!}
\leq I, \label{norm_partial_z_WS}
\end{equation}
for all $0 \leq h \leq S-1$. We first focus our attention on the estimates (\ref{A_epsilon_ball_in_ball}). Let $U(\tau,z)$ belong
to $SED_{(\underline{\sigma},H,\epsilon,\delta)}$ with $||U(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq R$.
Assume that $0 < \delta < \rho$. We put
$$ M_{\underline{k}} = \sup_{\tau \in H,z \in D(0,\rho), \epsilon \in D(0,\epsilon_{0})}
\left| \frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \right|
$$
for all $\underline{k} \in \mathcal{A}$. Under the assumption (\ref{cond_ex_uniq_sol_1_aux_CP_SED_H}) and
according to Propositions 2 and 4, for all $\underline{k} \in \mathcal{A}$, we get two constants $C_{1}>0$
(depending on $k_{0},k_{1},k_{2},S,\underline{\sigma},b$) and $\breve{C}_{1}>0$ (depending on $M_{\underline{k}}$,
$\delta$,$\rho$) such that
\begin{multline}
|| \frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0} \tau^{k_0} \exp( -k_{2}\tau )
\partial_{z}^{k_{1}-S}U(\tau,z) ||_{(\underline{\sigma},H,\epsilon,\delta)} \\
\leq
\breve{C}_{1}C_{1} \delta^{S-k_{1}}
|| U(\tau,z) ||_{(\underline{\sigma},H,\epsilon,\delta)} = \breve{C}_{1}C_{1} \delta^{S-k_{1}}R
\label{A_epsilon_ball_in_ball_lin_part}
\end{multline}
On the other hand, in agreement with Propositions 3 and 4 and with the help of
(\ref{norm_partial_z_WS}), we obtain two constants $\check{C}_{1}>0$ (depending on
$k_{0},k_{2},\underline{\sigma},\underline{\sigma}',M,b$) and $\breve{C}_{1}>0$ (depending on
$M_{\underline{k}},\delta,\rho$) with
\begin{multline}
|| \frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0} \tau^{k_0} \exp( -k_{2}\tau )
\partial_{z}^{k_1}W_{S}(\tau,z,\epsilon) ||_{(\underline{\sigma},H,\epsilon,\delta)} \\
\leq
\breve{C}_{1}\check{C}_{1}
|| \partial_{z}^{k_1}W_{S}(\tau,z,\epsilon) ||_{(\underline{\sigma}',H,\epsilon,\delta)} \leq \breve{C}_{1}\check{C}_{1}I
\label{A_epsilon_ball_in_ball_nonhomog_part}
\end{multline}
Now, we choose $\delta,R,I>0$ in such a way that
\begin{equation}
\sum_{\underline{k} \in \mathcal{A}} (\breve{C}_{1}C_{1} \delta^{S-k_{1}}R + \breve{C}_{1}\check{C}_{1}I) \leq R
\label{cond_A_epsilon_ball_in_ball}
\end{equation}
holds. Combining (\ref{A_epsilon_ball_in_ball_lin_part}) and (\ref{A_epsilon_ball_in_ball_nonhomog_part}) with
(\ref{cond_A_epsilon_ball_in_ball}) yields (\ref{A_epsilon_ball_in_ball}).
In a second part, we turn to the estimates (\ref{A_epsilon_shrink}). Let $R>0$ with $U_{1},U_{2}$ belonging
to $SED_{(\underline{\sigma},H,\epsilon,\delta)}$ inside the ball $B(0,R)$. By means of
(\ref{A_epsilon_ball_in_ball_lin_part}), we see that
\begin{multline}
|| \frac{c_{\underline{k}}(z,\epsilon)}{P(\tau)} \epsilon^{-k_0} \tau^{k_0} \exp( -k_{2}\tau )
\partial_{z}^{k_{1}-S}(U_{1}(\tau,z) - U_{2}(\tau,z)) ||_{(\underline{\sigma},H,\epsilon,\delta)} \\
\leq
\breve{C}_{1}C_{1} \delta^{S-k_{1}}
||U_{1}(\tau,z) - U_{2}(\tau,z)||_{(\underline{\sigma},H,\epsilon,\delta)}
\label{A_epsilon_ball_in_ball_shrink}
\end{multline}
where $C_{1},\breve{C}_{1}>0$ are given above. We select $\delta>0$ small enough in order that
\begin{equation}
\sum_{\underline{k} \in \mathcal{A}} \breve{C}_{1}C_{1}\delta^{S-k_{1}} \leq 1/2. \label{cond_A_epsilon_shrink}
\end{equation}
Therefore, (\ref{A_epsilon_ball_in_ball_shrink}) together with (\ref{cond_A_epsilon_shrink}) implies that
(\ref{A_epsilon_shrink}) holds.
Finally, we choose $\delta,R,I$ in such a way that both (\ref{cond_A_epsilon_ball_in_ball}) and (\ref{cond_A_epsilon_shrink})
hold at the same time. Lemma 7 follows.
\end{proof}
Let the constraint (\ref{cond_ex_uniq_sol_1_aux_CP_SED_H}) be fulfilled. We choose the constants $I,R,\delta$ as in Lemma 7.
We select the initial data $w_{j}(\tau,\epsilon)$, $0 \leq j \leq S-1$ and a tuple $\underline{\sigma}'$ in a way that
the restriction (\ref{initial_data_1_aux_CP_small_H}) holds. Owing to Lemma 7 and to the classical contractive mapping
theorem on complete metric spaces, we deduce that the map $A_{\epsilon}$ has a unique fixed point called
$U(\tau,z,\epsilon)$ (depending analytically on $\epsilon \in \dot{D}(0,\epsilon_{0})$) in the closed
ball $B(0,R) \subset SED_{(\underline{\sigma},H,\epsilon,\delta)}$, for all $\epsilon \in
\dot{D}(0,\epsilon_{0})$. This means that
$A_{\epsilon}(U(\tau,z,\epsilon)) = U(\tau,z,\epsilon)$ with
$||U(\tau,z,\epsilon)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq R$. As a result, we get that the next expression
$$ w(\tau,z,\epsilon) = \partial_{z}^{-S}U(\tau,z,\epsilon) + W_{S}(\tau,z,\epsilon) $$
solves the equation (\ref{1_aux_CP}) with initial data (\ref{1_aux_CP_initial_data}). It remains to show that
$w(\tau,z,\epsilon)$ belongs to $SED_{(\underline{\sigma},H,\epsilon,\delta)}$ and to check the bounds
(\ref{norm_w_bd_in_epsilon_H}). By application of Proposition 2 for $k_{0}=k_{2}=0$ and $k_{1}=S$ we check that
\begin{equation}
||\partial_{z}^{-S}U(\tau,z,\epsilon)||_{(\underline{\sigma},H,\epsilon,\delta)} \leq
\delta^{S}||U(\tau,z,\epsilon)||_{(\underline{\sigma},H,\epsilon,\delta)}
\label{norm_partial_z_minus_S_U}
\end{equation}
Gathering (\ref{norm_partial_z_WS}) and (\ref{norm_partial_z_minus_S_U}) shows that
$w(\tau,z,\epsilon)$ belongs to $SED_{(\underline{\sigma},H,\epsilon,\delta)}$ and satisfies the bounds
(\ref{norm_w_bd_in_epsilon_H}).
\end{proof}
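The last step of the proof rests on the classical Banach fixed point theorem. The following numerical sketch (not part of the proof) illustrates the mechanism; the map \texttt{A} below is a hypothetical $\frac{1}{2}$-Lipschitz map on the real line standing in for $A_{\epsilon}$, not the actual operator.

```python
# Illustration (toy data): a 1/2-Lipschitz map on a complete metric space has
# a unique fixed point, reached by iterating U -> A(U), exactly as in the
# contractive mapping argument applied to A_epsilon above.

def fixed_point(A, U0, tol=1e-12, max_iter=200):
    """Iterate U -> A(U) until successive iterates differ by less than tol."""
    U = U0
    for _ in range(max_iter):
        U_next = A(U)
        if abs(U_next - U) < tol:
            return U_next
        U = U_next
    raise RuntimeError("no convergence within max_iter iterations")

A = lambda u: 0.5 * u + 1.0   # Lipschitz constant 1/2, fixed point u* = 2
u_star = fixed_point(A, 0.0)
```

The geometric contraction rate $1/2$ matches the constant obtained in (\ref{A_epsilon_shrink}).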
\section{Sectorial analytic solutions in a complex parameter of a singular perturbed Cauchy problem involving fractional linear transforms}
Let $\mathcal{A}$ be a finite subset of
$\mathbb{N}^{3}$. For all $\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}$, we denote
$c_{\underline{k}}(z,\epsilon)$ a bounded holomorphic function on a polydisc $D(0,\rho) \times D(0,\epsilon_{0})$ for
given radii $\rho,\epsilon_{0}>0$. Let $S \geq 1$ be an integer and let $P(\tau)$ be
a polynomial (not identically equal to 0) with complex coefficients selected in a way that its
roots belong to the open right halfplane $\mathbb{C}_{+} = \{ z \in \mathbb{C} / \mathrm{Re}(z) > 0 \}$. We focus on the
following singularly perturbed Cauchy problem that incorporates fractional linear transforms
\begin{equation}
P(\epsilon t^{2}\partial_{t})\partial_{z}^{S}u(t,z,\epsilon)
= \sum_{\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}}
c_{\underline{k}}(z,\epsilon) \left((t^{2}\partial_{t})^{k_0}\partial_{z}^{k_1}u \right)( \frac{t}{1 + k_{2}\epsilon t},
z,\epsilon) \label{SPCP_first}
\end{equation}
for given initial data
\begin{equation}
(\partial_{z}^{j}u)(t,0,\epsilon) = \varphi_{j}(t,\epsilon) \ \ , \ \ 0 \leq j \leq S-1. \label{SPCP_first_i_d}
\end{equation}
We put the next assumption on the set $\mathcal{A}$. There exist two real numbers $\xi>0$ and $b>1$ such that for
all $\underline{k}=(k_{0},k_{1},k_{2}) \in \mathcal{A}$,
\begin{equation}
S \geq k_{1} + bk_{0} + \frac{bk_{2}}{\xi} \ \ , \ \ S > k_{1}. \label{cond_SPCP_first}
\end{equation}
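As an illustration (the data $S=2$ and $\mathcal{A}=\{(1,0,1)\}$ are hypothetical), problem (\ref{SPCP_first}) then takes the form

```latex
% Illustrative instance (hypothetical data): S = 2, A = {(1,0,1)}
P(\epsilon t^{2}\partial_{t})\,\partial_{z}^{2}u(t,z,\epsilon) =
  c_{(1,0,1)}(z,\epsilon)\,
  \bigl( (t^{2}\partial_{t})u \bigr)\Bigl( \frac{t}{1+\epsilon t},z,\epsilon \Bigr),
```

and (\ref{cond_SPCP_first}) becomes $2 \geq b(1 + 1/\xi)$, which holds for instance with $b=3/2$ and $\xi=3$.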
\subsection{Construction of holomorphic solutions on a prescribed sector with respect to $\epsilon$ using Banach spaces of
functions with super exponential growth and decay on strips}
Let $n \geq 1$ be an integer. We denote by $\llbracket -n,n \rrbracket$ the set of integers
$\{ j \in \mathbb{Z}, -n \leq j \leq n \}$. We consider two sets of closed horizontal strips
$\{ H_{k} \}_{k \in \llbracket -n,n
\rrbracket }$ and $\{ J_{k} \}_{k \in \llbracket -n,n \rrbracket }$ fulfilling the next conditions. If one displays
the strips $H_{k}$ and $J_{k}$ as follows,
$$ H_{k} = \{ z \in \mathbb{C} / a_{k} \leq \mathrm{Im}(z) \leq b_{k}, \ \ \mathrm{Re}(z) \leq 0 \} \ \ , \ \
J_{k} = \{ z \in \mathbb{C} / c_{k} \leq \mathrm{Im}(z) \leq d_{k}, \ \ \mathrm{Re}(z) \leq 0 \}$$
then, the real numbers $a_{k},b_{k},c_{k},d_{k}$ are required to satisfy the following constraints.\\
1) The origin 0 belongs to $(c_{0},d_{0})$.\\
2) We have $c_{k} < a_{k} < d_{k}$ and $c_{k+1} < b_{k} < d_{k+1}$ for $-n \leq k \leq n-1$ together with
$c_{n} < a_{n} < d_{n}$ and $b_{n} > d_{n}$. In other words the strips
$J_{-n},H_{-n},J_{-n+1},\ldots,J_{n-1},H_{n-1},J_{n},H_{n}$ are consecutively overlapping.\\
3) We have $a_{k+1}>b_{k}$ and $c_{k+1}>d_{k}$ for $-n \leq k \leq n-1$. Namely, the strips $H_{k}$ (resp. $J_{k}$),
$k \in \llbracket -n,n \rrbracket$, are pairwise disjoint.
We denote $HJ_{n} = \{ z \in \mathbb{C} / c_{-n} \leq \mathrm{Im}(z) \leq b_{n}, \mathrm{Re}(z) \leq 0 \}$. We notice
that $HJ_{n}$ can be written as the union $\bigcup_{k \in \llbracket -n,n \rrbracket} (H_{k} \cup J_{k})$.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{strips.pdf}
\caption{Example of configuration for the sets $H_{k}$ and $J_k$}
\label{fig1}
\end{figure}
An example of configuration is shown in Figure~\ref{fig1}.
\begin{defin}
Let $n \geq 1$ be an integer. Let $w(\tau,\epsilon)$ be a holomorphic function on
$\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$ (where $\mathring{HJ}_{n}$ denotes the interior
of $HJ_{n}$), continuous on $HJ_{n} \times \dot{D}(0,\epsilon_{0})$. Assume that for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$, for all $k \in \llbracket -n,n \rrbracket$, the function
$\tau \mapsto w(\tau,\epsilon)$ belongs to the Banach spaces $SED_{(0,\underline{\sigma}',H_{k},\epsilon)}$
and $SEG_{(0,\underline{\varsigma}',J_{k},\epsilon)}$ with
$\underline{\sigma}' = (\sigma_{1}',\sigma_{2}',\sigma_{3}')$ and
$\underline{\varsigma}' = (\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ for some $\sigma_{1}'>0$ and
$\sigma_{j}',\varsigma_{j}'>0$ for $j=2,3$. Moreover, there exists a constant $I_{w}>0$ independent of
$\epsilon$, such that
\begin{equation}
||w(\tau,\epsilon)||_{(0,\underline{\sigma}',H_{k},\epsilon)} \leq I_{w} \ \ , \ \
||w(\tau,\epsilon)||_{(0,\underline{\varsigma}',J_{k},\epsilon)} \leq I_{w}, \label{bounds_w_initial}
\end{equation}
for all $k \in \llbracket -n,n \rrbracket$ and all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\noindent Let $\mathcal{E}_{HJ_{n}}$ be an open sector centered at 0 inside the disc $D(0,\epsilon_{0})$ with aperture
strictly less than $\pi$ and $\mathcal{T}$ be a bounded open sector centered at 0 with bisecting
direction $d=0$ chosen in a way that
\begin{equation}
\pi - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) \in (-\frac{\pi}{2} + \delta_{HJ_{n}},\frac{\pi}{2} - \delta_{HJ_{n}})
\label{rel_t_epsilon_mathcal_E_T}
\end{equation}
for some small $\delta_{HJ_{n}}>0$, for all $\epsilon \in \mathcal{E}_{HJ_{n}}$ and $t \in \mathcal{T}$.
\noindent We say that the set $(w(\tau,\epsilon),\mathcal{E}_{HJ_{n}},\mathcal{T})$ is
$(\underline{\sigma}',\underline{\varsigma}')-$admissible.
\end{defin}
\noindent {\bf Example:} Let $w(\tau,\epsilon) = \tau \exp(a\exp(-\tau))$ for some real number $a>0$. One can notice that
$$ |w(\tau,\epsilon)| \leq |\tau| \exp \left( a \cos(\mathrm{Im}(\tau)) \exp(-\mathrm{Re}(\tau)) \right) $$
for all $\tau \in \mathbb{C}$, all $\epsilon \in \mathbb{C}$. For all $k \in \mathbb{Z}$, let $H_{k}$ be the closed
strip defined as
$$ H_{k} = \{ z \in \mathbb{C} / \ \ \frac{\pi}{2} + \eta + 2k\pi \leq \mathrm{Im}(z) \leq
\frac{3\pi}{2} - \eta + 2k \pi, \ \ \mathrm{Re}(z) \leq 0 \} $$
for some real number $\eta>0$ and let $J_{k}$ be the closed strip described as
$$ J_{k} = \{ z \in \mathbb{C} / \ \ \frac{3\pi}{2} - \eta - \eta_{1} + 2(k-1)\pi \leq \mathrm{Im}(z)
\leq \frac{\pi}{2} + \eta + \eta_{1} + 2k\pi, \ \ \mathrm{Re}(z) \leq 0 \} $$
for some $\eta_{1}>0$. Provided that $\eta$ and $\eta_{1}$ are small enough, we can check that all the constraints
1) to 3) listed above are fulfilled for any fixed $n \geq 1$, for $k \in \llbracket -n,n \rrbracket$.
By construction, we get a constant $\Delta_{\eta}>0$ (depending on $\eta$) with
$\cos(\mathrm{Im}(\tau)) \leq -\Delta_{\eta}$ provided that $\tau \in H_{k}$, for all $k \in \mathbb{Z}$.
Let $m>0$ be a fixed real number. We first show that there exists $K_{m,k}>0$ (depending on $m$ and $k$)
such that
$$ -\mathrm{Re}(\tau) \geq K_{m,k}|\tau| $$
for all $\mathrm{Re}(\tau) \leq -m$ provided that $\tau \in H_{k}$. Indeed, if one puts
$$ y_{k} = \max \{ |y| / y \in [ \frac{\pi}{2} + \eta + 2k\pi, \frac{3\pi}{2} - \eta + 2k\pi ] \} $$
then the next inequality holds
$$ \frac{-\mathrm{Re}(\tau)}{|\tau|} \geq \min_{x \geq m}
\frac{x}{(x^{2} + y_{k}^{2})^{1/2}} = \frac{m}{(m^{2} + y_{k}^{2})^{1/2}} = K_{m,k} > 0 $$
for all $\tau \in \mathbb{C}$ such that $\mathrm{Re}(\tau) \leq -m$ and $\tau \in H_{k}$ (the minimum is attained at
$x=m$ since $x \mapsto x/(x^{2}+y_{k}^{2})^{1/2}$ is increasing on $(0,+\infty)$). Now, we set
$K_{m;n} = \min_{k \in \llbracket -n,n \rrbracket} K_{m,k}$. As a result, we deduce the
existence of a constant $\Omega_{m,k}>0$ (depending on $m$,$k$ and $a$) such that
$$ |w(\tau,\epsilon)| \leq \Omega_{m,k}|\tau|\exp( -a \Delta_{\eta} \exp( K_{m;n}|\tau| ) ) $$
for all $\tau \in H_{k}$.
On the
other hand, we only have the upper bound $\cos(\mathrm{Im}(\tau)) \leq 1$ when $\tau \in J_{k}$, for all
$k \in \mathbb{Z}$. Since $-\mathrm{Re}(\tau) \leq |\tau|$, for all $\tau \in \mathbb{C}$, we deduce that
$$ |w(\tau,\epsilon)| \leq |\tau| \exp( a \exp(|\tau|))$$
whenever $\tau$ belongs to $J_{k}$, for all $\epsilon \in \mathbb{C}$. As a result, the function
$w(\tau,\epsilon)$ fulfills all the requirements asked in
Definition 3 for
$$ \underline{\sigma}'=(\sigma_{1}',a \Delta_{\eta}/(M-1),K_{m;n}) \ \ , \ \
\underline{\varsigma}'=(\sigma_{1}',a,1)$$
for any given $\sigma_{1}'>0$.
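The constraints 1) to 3) on the strip endpoints can be checked mechanically for the configuration of this example. The following sketch (illustration only, with the endpoint formulas taken from the definitions of $H_{k}$ and $J_{k}$ above) verifies them numerically for small $\eta,\eta_{1}$:

```python
from math import pi

# Check (numerically) that the strips H_k = [a_k, b_k] and J_k = [c_k, d_k]
# (imaginary parts) of the example satisfy constraints 1)-3) above.
def strip_constraints_hold(n, eta, eta1):
    a = lambda k: pi / 2 + eta + 2 * k * pi
    b = lambda k: 3 * pi / 2 - eta + 2 * k * pi
    c = lambda k: 3 * pi / 2 - eta - eta1 + 2 * (k - 1) * pi
    d = lambda k: pi / 2 + eta + eta1 + 2 * k * pi
    # 1) the origin lies in (c_0, d_0)
    ok = c(0) < 0 < d(0)
    # 2) consecutive strips overlap
    for k in range(-n, n):
        ok = ok and c(k) < a(k) < d(k) and c(k + 1) < b(k) < d(k + 1)
    ok = ok and c(n) < a(n) < d(n) and b(n) > d(n)
    # 3) the H_k (resp. J_k) are pairwise disjoint
    for k in range(-n, n):
        ok = ok and a(k + 1) > b(k) and c(k + 1) > d(k)
    return ok
```

For instance, the constraints hold with $n=2$, $\eta=\eta_{1}=0.1$, but fail for large $\eta$ (the condition $b_{n}>d_{n}$ requires $\pi - 2\eta - \eta_{1} > 0$).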
Let $n \geq 1$ be an integer. For each $0 \leq j \leq S-1$ and each integer $k \in \llbracket -n,n \rrbracket$, let
$(w_{j}(\tau,\epsilon), \mathcal{E}_{HJ_{n}}^{k},\mathcal{T})$ be a
$(\underline{\sigma}',\underline{\varsigma}')-$admissible set. As initial data (\ref{SPCP_first_i_d}), we set
\begin{equation}
\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}(t,\epsilon) = \int_{P_k} w_{j}(u,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{SPCP_first_i_d_k}
\end{equation}
where the integration path $P_{k}$ is built as the union of two paths $P_{k,1}$ and $P_{k,2}$ described as follows.
$P_{k,1}$ is a segment joining the origin 0 and a prescribed point $A_{k} \in H_{k}$ and
$P_{k,2}$ is the horizontal line $\{ A_{k} - s / s \geq 0 \}$. According to (\ref{rel_t_epsilon_mathcal_E_T}),
we choose the point $A_{k}$ with
$|\mathrm{Re}(A_k)|$ suitably large in a way that
\begin{equation}
\mathrm{arg}(A_{k}) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) \in ( -\frac{\pi}{2} + \eta_{k},
\frac{\pi}{2} - \eta_{k} ) \label{choice_a_k}
\end{equation}
for some $\eta_{k}>0$ close to 0, provided that $\epsilon$ belongs to the sector
$\mathcal{E}_{HJ_{n}}^{k}$.
\begin{lemma} The function $\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}(t,\epsilon)$ defines a bounded holomorphic function
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times \mathcal{E}_{HJ_{n}}^{k}$
for some well selected radius $r_{\mathcal{T}}>0$.
\end{lemma}
\begin{proof} We set
$$ \varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{1}(t,\epsilon) = \int_{P_{k,1}} w_{j}(u,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
$$
Since the path $P_{k,1}$ crosses the domains $H_{q},J_{q}$ for some $q \in \llbracket -n,n \rrbracket$, due to
(\ref{bounds_w_initial}), we have the coarse
upper bounds
$$ |w_{j}(\tau,\epsilon)| \leq I_{w_{j}}|\tau| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}|\tau| +
\varsigma_{2}'\exp( \varsigma_{3}' |\tau| ) \right)
$$
for all $\tau \in P_{k,1}$. We deduce the next estimates
\begin{multline*}
|\int_{P_{k,1}} w_{j}(u,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u}| \leq
\int_{0}^{|A_{k}|} I_{w_j} \rho \exp \left( \frac{\sigma_{1}'}{|\epsilon|}\rho +
\varsigma_{2}'\exp( \varsigma_{3}' \rho ) \right)\\
\times
\exp( -\frac{\rho}{|\epsilon t|} \cos( \mathrm{arg}(A_{k}) - \mathrm{arg}(\epsilon t) ) ) \frac{d\rho}{\rho}.
\end{multline*}
From the choice of $A_{k}$ fulfilling (\ref{choice_a_k}), we can find some
real number $\delta_{1}>0$ with
$\cos( \mathrm{arg}(A_{k}) - \mathrm{arg}(\epsilon t) ) \geq \delta_{1}$
for all $\epsilon \in \mathcal{E}_{HJ_n}^{k}$. We choose $\delta_{2}>0$ and take $t \in \mathcal{T}$
with $|t| \leq \delta_{1}/(\delta_{2} + \sigma_{1}')$. Then, we get
$$
|\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{1}(t,\epsilon)| \leq
I_{w_j} \int_{0}^{|A_{k}|} \exp( \varsigma_{2}'\exp( \varsigma_{3}' \rho ) ) \exp( -\frac{\rho}{|\epsilon|}\delta_{2} )
d \rho
$$
which implies that $\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{1}(t,\epsilon)$ is bounded holomorphic on
$(\mathcal{T} \cap D(0,\frac{\delta_{1}}{\delta_{2} + \sigma_{1}'})) \times \mathcal{E}_{HJ_{n}}^{k}$.
In a second part, we put
$$ \varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{2}(t,\epsilon) = \int_{P_{k,2}} w_{j}(u,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
$$
Since the path $P_{k,2}$ is enclosed in the strip $H_{k}$, using the hypothesis (\ref{bounds_w_initial}), we check the
next estimates
\begin{multline}
|\int_{P_{k,2}} w_{j}(u,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u}| \\
\leq
\int_{0}^{+\infty} I_{w_j}|A_{k}-s|
\exp \left( \frac{\sigma_{1}'}{|\epsilon|}|A_{k}-s| - \sigma_{2}'(M-1)\exp(\sigma_{3}'|A_{k}-s|) \right)\\
\times \exp( -\frac{|A_{k}-s|}{|\epsilon t|} \cos( \mathrm{arg}(A_{k}-s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) )
\frac{ds}{|A_{k}-s|} \label{int_Pk2_w_j}
\end{multline}
From the choice of $A_{k}$ fulfilling (\ref{choice_a_k}), we observe that
\begin{equation}
\mathrm{arg}(A_{k}-s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) \in (-\frac{\pi}{2} + \eta_{k},
\frac{\pi}{2} - \eta_{k} ) \label{argument_ak_minus_s}
\end{equation}
for all $s \geq 0$, provided that $\epsilon \in \mathcal{E}_{HJ_{n}}^{k}$. Consequently, we can select some
$\delta_{1}>0$ with $\cos( \mathrm{arg}(A_{k}-s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) > \delta_{1}$. We choose
$\delta_{2}>0$ and take $t \in \mathcal{T}$ with $|t| \leq \delta_{1}/(\delta_{2} + \sigma_{1}')$. On the other hand, we may select
a constant $K_{A_{k}}>0$ (depending on $A_{k}$) for which
$$ |A_{k} - s| \geq K_{A_k}(|A_{k}| + s) $$
whenever $s \geq 0$. Subsequently, we get
\begin{multline*}
|\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{2}(t,\epsilon)| \leq I_{w_j}
\int_{0}^{+\infty} \exp \left( -\sigma_{2}'(M-1) \exp( \sigma_{3}'|A_{k}-s| ) \right)
\exp(-\frac{|A_{k}-s|}{|\epsilon|}\delta_{2}) ds \\
\leq I_{w_j} \int_{0}^{+\infty} \exp( -\frac{K_{A_k}\delta_{2}}{|\epsilon|} (|A_{k}| + s) ) ds =
\frac{I_{w_j}}{K_{A_k}\delta_{2}} |\epsilon| \exp( -\frac{K_{A_k}\delta_{2}}{|\epsilon|} |A_{k}| ).
\end{multline*}
As a consequence, $\varphi_{j,\mathcal{E}_{HJ_{n}}^{k}}^{2}(t,\epsilon)$ represents a bounded holomorphic function
on $(\mathcal{T} \cap D(0, \delta_{1}/(\delta_{2} + \sigma_{1}'))) \times \mathcal{E}_{HJ_{n}}^{k}$. Lemma 8 follows.
\end{proof}
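The proof only asserts the existence of the constant $K_{A_{k}}$. In fact, since $\mathrm{Re}(A_{k}) \leq 0$, one has $|A_{k}-s| \geq |\mathrm{Re}(A_{k})|+s$ and $|A_{k}-s| \geq |\mathrm{Im}(A_{k})|$, whence $|A_{k}-s| \geq \frac{1}{2}(|A_{k}|+s)$, so $K_{A_{k}}=1/2$ always works. The following sketch (illustration only) confirms this numerically on random samples:

```python
import random

# Numerical check of |A - s| >= (|A| + s)/2 for Re(A) <= 0 and s >= 0,
# i.e. the lower bound used in the proof with K_{A_k} = 1/2.
def check_lower_bound(trials=10000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        A = complex(-rng.uniform(0, 50), rng.uniform(-50, 50))  # Re(A) <= 0
        s = rng.uniform(0, 100)
        if abs(A - s) < 0.5 * (abs(A) + s):
            return False
    return True
```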
\begin{prop} We make the assumption that the real number $\xi$ introduced in (\ref{cond_SPCP_first}) satisfies the
following inequality
\begin{equation}
\xi \leq \min(\sigma_{3}',\varsigma_{3}'). \label{xi_larger_sigma}
\end{equation}
1) There exist some constants $I,\delta>0$ (independent of $\epsilon$) selected in a way that if one assumes
that
\begin{equation}
\sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\sigma}',H_{k},\epsilon)}
\frac{\delta^j}{j!} \leq I \ \ , \ \ \sum_{j=0}^{S-1-h}
||w_{j+h}(\tau,\epsilon)||_{(0,\underline{\varsigma}',J_{k},\epsilon)}
\frac{\delta^j}{j!} \leq I \label{norm_w_initial_small}
\end{equation}
for all $0 \leq h \leq S-1$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$, all
$k \in \llbracket -n,n \rrbracket$, then the Cauchy problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d})
with initial data given by (\ref{SPCP_first_i_d_k}) has a solution $u_{\mathcal{E}_{HJ_{n}}^{k}}(t,z,\epsilon)$ which turns
out to be bounded and holomorphic on a domain
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta\delta_{1}) \times \mathcal{E}_{HJ_{n}}^{k}$ for some fixed
radius $r_{\mathcal{T}}>0$ and $0 < \delta_{1} < 1$.
Furthermore,
$u_{\mathcal{E}_{HJ_{n}}^{k}}$ can be written as a special Laplace transform
\begin{equation}
u_{\mathcal{E}_{HJ_{n}}^{k}}(t,z,\epsilon) = \int_{P_{k}} w_{HJ_{n}}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{u_E_HJn_k_Laplace}
\end{equation}
where $w_{HJ_n}(\tau,z,\epsilon)$ defines a holomorphic function on
$\mathring{HJ}_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, continuous
on $HJ_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$ that fulfills the next
constraints. For any choice of two tuples $\underline{\sigma} = (\sigma_{1},\sigma_{2},\sigma_{3})$ and
$\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ with
\begin{equation}
\sigma_{1} > \sigma_{1}', 0 < \sigma_{2} < \sigma_{2}', \sigma_{3}=\sigma_{3}',
\varsigma_{2}> \varsigma_{2}',\varsigma_{3}=\varsigma_{3}' \label{relations_sigma_sigma_prim}
\end{equation}
there exist constants $C_{H_k}>0$ and
$C_{J_k}>0$ (independent of $\epsilon$) with
\begin{equation}
|w_{HJ_n}(\tau,z,\epsilon)| \leq C_{H_k}|\tau|
\exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| - \sigma_{2}(M-\zeta(b))\exp( \sigma_{3}|\tau| ) \right)
\label{bds_WHJn_Hk}
\end{equation}
for all $\tau \in H_{k}$, all $z \in D(0,\delta \delta_{1})$ and
\begin{equation}
|w_{HJ_n}(\tau,z,\epsilon)| \leq C_{J_k}|\tau|
\exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| + \varsigma_{2} \zeta(b) \exp( \varsigma_{3}|\tau| ) \right)
\label{bds_WHJn_Jk}
\end{equation}
for all $\tau \in J_{k}$, all $z \in D(0,\delta \delta_{1})$, provided that $\epsilon \in
\dot{D}(0,\epsilon_{0})$, for each $k \in \llbracket -n,n \rrbracket$.\\
2) Let $k \in \llbracket -n,n \rrbracket$ with $k \neq n$. Then, keeping $\epsilon_{0}$ and
$r_{\mathcal{T}}$ small enough, there exist constants $M_{k,1},M_{k,2}>0$ and $M_{k,3}>1$,
independent of $\epsilon$, such that
\begin{equation}
| u_{\mathcal{E}_{HJ_{n}}^{k+1}}(t,z,\epsilon) - u_{\mathcal{E}_{HJ_{n}}^{k}}(t,z,\epsilon) |
\leq M_{k,1} \exp( -\frac{M_{k,2}}{|\epsilon|} \mathrm{Log} \frac{M_{k,3}}{|\epsilon|} ) \label{log_flat_difference_uk_plus_1_minus_uk_HJn}
\end{equation}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, all $\epsilon \in \mathcal{E}_{HJ_{n}}^{k} \cap
\mathcal{E}_{HJ_{n}}^{k+1} \neq \emptyset$ and
all $z \in D(0,\delta \delta_{1})$.
\end{prop}
\begin{proof}
We consider the equation (\ref{1_aux_CP}) for the given initial data
\begin{equation}
(\partial_{z}^{j}w)(\tau,0,\epsilon) = w_{j}(\tau,\epsilon) \ \ , \ \ 0 \leq j \leq S-1 \label{1_aux_CP_i_d_admissible}
\end{equation}
where $w_{j}(\tau,\epsilon)$ are given above in order to construct the functions
$\varphi_{j,\mathcal{E}_{HJ_n}^{k}}(t,\epsilon)$ in (\ref{SPCP_first_i_d_k}).
In a first step, we check that the problem (\ref{1_aux_CP}), (\ref{1_aux_CP_i_d_admissible}) possesses a unique formal
solution
\begin{equation}
w_{HJ_n}(\tau,z,\epsilon) = \sum_{\beta \geq 0} w_{\beta}(\tau,\epsilon) \frac{z^{\beta}}{\beta !} \label{formal_wHJn}
\end{equation}
where $w_{\beta}(\tau,\epsilon)$ are holomorphic on $\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$,
continuous on $HJ_{n} \times \dot{D}(0,\epsilon_{0})$. Namely, if one expands
$c_{\underline{k}}(z,\epsilon) = \sum_{\beta \geq 0} c_{\underline{k},\beta}(\epsilon) z^{\beta}/\beta!$
as Taylor series at $z=0$, the formal series (\ref{formal_wHJn}) is solution of
(\ref{1_aux_CP}), (\ref{1_aux_CP_i_d_admissible}) if and only if the next recursion holds
\begin{equation}
w_{\beta + S}(\tau,\epsilon) = \sum_{\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}}
\frac{\epsilon^{-k_{0}}\tau^{k_0}}{P(\tau)} \exp( -k_{2} \tau)
\left( \sum_{\beta_{1} + \beta_{2} = \beta} \frac{c_{\underline{k},\beta_{1}}(\epsilon)}{\beta_{1}!}
\frac{w_{\beta_{2}+k_{1}}(\tau,\epsilon)}{\beta_{2}!} \beta! \right) \label{recursion_w_beta}
\end{equation}
for all $\beta \geq 0$. Since the initial data $w_{j}(\tau,\epsilon)$, for $0 \leq j \leq S-1$ are assumed to
define holomorphic functions on $\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous on
$HJ_{n} \times \dot{D}(0,\epsilon_{0})$, the recursion (\ref{recursion_w_beta}) implies in particular
that all $w_{n}(\tau,\epsilon)$ for $n \geq S$ are well defined and represent holomorphic functions on
$\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous on
$HJ_{n} \times \dot{D}(0,\epsilon_{0})$.
According to the assumption (\ref{cond_SPCP_first}) together with (\ref{xi_larger_sigma}) and the restriction on the size
of the initial data (\ref{norm_w_initial_small}), we notice that the requirements 1)a)b) and 2)a)b) in Proposition 9
are fulfilled. We deduce that\\
1) The formal solution $w_{HJ_n}(\tau,z,\epsilon)$ belongs to the Banach spaces
$SED_{(\underline{\sigma},H_{k},\epsilon,\delta)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$,
all $k \in \llbracket -n,n \rrbracket$, for any tuple
$\underline{\sigma} = (\sigma_{1},\sigma_{2},\sigma_{3})$ chosen as in
(\ref{relations_sigma_sigma_prim}), with an upper bound $\tilde{C}_{H_k}>0$ (independent of $\epsilon$) such that
\begin{equation}
||w_{HJ_n}(\tau,z,\epsilon)||_{(\underline{\sigma},H_{k},\epsilon,\delta)} \leq \tilde{C}_{H_k}, \label{norm_wHJn_Hk}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2) The formal series $w_{HJ_n}(\tau,z,\epsilon)$ belongs to the Banach spaces
$SEG_{(\underline{\varsigma},J_{k},\epsilon,\delta)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$,
all $k \in \llbracket -n,n \rrbracket$, for any tuple
$\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ chosen as in
(\ref{relations_sigma_sigma_prim}). Moreover, there exists a constant $\tilde{C}_{J_k}>0$ (independent of $\epsilon$) such that
\begin{equation}
||w_{HJ_n}(\tau,z,\epsilon)||_{(\underline{\varsigma},J_{k},\epsilon,\delta)} \leq \tilde{C}_{J_k}, \label{norm_wHJn_Jk}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Bearing in mind (\ref{norm_wHJn_Hk}) and (\ref{norm_wHJn_Jk}), the application of Proposition 1 and Proposition 5 1)
shows in particular that the formal series $w_{HJ_n}(\tau,z,\epsilon)$ actually defines a holomorphic function
on $\mathring{HJ}_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, continuous on
$HJ_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, for some $0 < \delta_{1} < 1$, which
moreover satisfies the estimates (\ref{bds_WHJn_Hk}) and (\ref{bds_WHJn_Jk}).
Following the same steps as in the proof of Lemma 8, one can show that for each $k \in \llbracket -n,n \rrbracket$, the
function $u_{\mathcal{E}_{HJ_n}^{k}}$ defined as a special Laplace transform
$$ u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon) = \int_{P_k} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} $$
represents a bounded holomorphic function on
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta_{1}\delta) \times \mathcal{E}_{HJ_{n}}^{k}$ for some fixed
radius $r_{\mathcal{T}}>0$ and $0 < \delta_{1} < 1$. Besides, by a direct computation, we can check that
$u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$
solves the problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d}) with initial data (\ref{SPCP_first_i_d_k})
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta_{1}\delta) \times \mathcal{E}_{HJ_{n}}^{k}$.\medskip
In a second part of the proof, we focus our attention on point 2). Take some $k \in \llbracket -n,n \rrbracket$ with
$k \neq n$. Let us choose two complex numbers
$$ h_{q} = -\varrho \mathrm{Log}( \frac{1}{\epsilon t} e^{i \chi_{q}}) $$
for $q=k,k+1$, where $0 < \varrho < 1$ and where $\chi_{q} \in \mathbb{R}$ are directions selected in a way that
\begin{equation}
i \varrho( \mathrm{arg}(t) + \mathrm{arg}(\epsilon) - \chi_{q} ) \in H_{q} \label{cond_chi_q}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$, all $t \in \mathcal{T}$.
Notice that such
directions $\chi_{q}$ always exist for some $0 < \varrho < 1$ small enough since, by definition, the aperture of
$\mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$ is strictly less than $\pi$ and the aperture of
$\mathcal{T}$ is close to 0.
By construction, we get that $h_{q}$ belongs to $H_{q}$ for $q=k,k+1$ since $h_{q}$ can be expressed as
$$ h_{q} = -\varrho \mathrm{Log}|\frac{1}{\epsilon t}| + i \varrho
(\mathrm{arg}(t) + \mathrm{arg}(\epsilon) - \chi_{q} ). $$
From the fact that $u \mapsto w_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} )/u$ is holomorphic on
the strip $\mathring{HJ}_{n}$, for any fixed $z \in D(0,\delta \delta_{1})$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$, by means of a path deformation
argument (by the classical Cauchy theorem, the integral of a holomorphic function along a closed path vanishes)
we can rewrite the difference $u_{\mathcal{E}_{HJ_{n}}^{k+1}} - u_{\mathcal{E}_{HJ_{n}}^{k}}$ as a sum of three integrals
\begin{multline}
u_{\mathcal{E}_{HJ_{n}}^{k+1}}(t,z,\epsilon) - u_{\mathcal{E}_{HJ_{n}}^{k}}(t,z,\epsilon) =
- \int_{L_{h_{k},\infty}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \\
+ \int_{L_{h_{k},h_{k+1}}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} + \int_{L_{h_{k+1},\infty}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{splitting_uk_plus_1_minus_uk}
\end{multline}
where $L_{h_{q},\infty} = \{ h_{q} - s / s \geq 0 \}$ for $q=k,k+1$ are horizontal halflines and
$L_{h_{k},h_{k+1}} = \{ (1-s)h_{k} + sh_{k+1} / s \in [0,1] \}$ is a segment joining $h_{k}$ and $h_{k+1}$. This situation is shown in Figure~\ref{fig2}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{path.pdf}
\caption{Integration path for the difference of solutions}
\label{fig2}
\end{figure}
We first provide estimates for
$$ I_{1} = \left| \int_{L_{h_{k},\infty}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
Since the path $L_{h_{k},\infty}$ is contained inside the strip $H_{k}$, in accordance with the bounds
(\ref{bds_WHJn_Hk}), we reach the estimates
\begin{multline}
I_{1} \leq C_{H_k}\int_{0}^{+\infty} |h_{k}-s| \exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |h_{k} - s|
- \sigma_{2}(M - \zeta(b)) \exp( \sigma_{3} |h_{k} - s| ) \right)\\
\times \exp \left( - \frac{|h_{k} - s|}{|\epsilon t|} \cos( \mathrm{arg}(h_{k} - s) - \mathrm{arg}(\epsilon)
- \mathrm{arg}(t) ) \right) \frac{ds}{|h_{k} - s|}
\end{multline}
Provided that $\epsilon_{0}>0$ is chosen small enough, $|\mathrm{Re}(h_{k})| = \varrho \mathrm{Log}(1/|\epsilon t|)$
becomes suitably large, which implies the following range
$$ \mathrm{arg}(h_{k} - s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) \in
(-\frac{\pi}{2} + \eta_{k}, \frac{\pi}{2} - \eta_{k}) $$
for some $\eta_{k}>0$ close to 0, since $\epsilon$ belongs to
$\mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$
and $t$ is inside $\mathcal{T}$, for all $s \geq 0$. Consequently, we can select some $\delta_{1}>0$ with
\begin{equation}
\cos( \mathrm{arg}(h_{k} - s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) > \delta_{1} \label{low_bds_cos_hk_minus_s}
\end{equation}
for all $s \geq 0$, $t \in \mathcal{T}$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$.
On the other hand, we can rewrite
\begin{multline*}
|h_{k} - s| = \left( ( \varrho \mathrm{Log}(\frac{1}{|\epsilon t|}) + s)^{2} + \varrho^{2}(
\mathrm{arg}(t) + \mathrm{arg}(\epsilon) - \chi_{k})^{2} \right)^{1/2}\\
= (\varrho \mathrm{Log}(\frac{1}{|\epsilon t|}) + s)
( 1 + \frac{\varrho^{2}(
\mathrm{arg}(t) + \mathrm{arg}(\epsilon) - \chi_{k})^{2}}{ ( \varrho \mathrm{Log}(\frac{1}{|\epsilon t|}) + s)^{2} })^{1/2}
\end{multline*}
provided that $|\epsilon t| < 1$ which holds if one assumes that $0 < \epsilon_{0} < 1$ and $0< r_{\mathcal{T}} < 1$.
For that reason, we get a constant $m_{k}>0$ (depending on $H_k$ and $\varrho$) such that
\begin{equation}
|h_{k} - s| \geq m_{k}( \varrho \mathrm{Log}( \frac{1}{|\epsilon t|} ) + s) \label{low_bds_hk_minus_s}
\end{equation}
for all $s \geq 0$, all $t \in \mathcal{T}$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$. Now, we select $\delta_{2}>0$ and
take $t \in \mathcal{T}$ with $|t| \leq \delta_{1}/ ( \sigma_{1} \zeta(b) + \delta_{2})$. Then, gathering
(\ref{low_bds_cos_hk_minus_s}) and (\ref{low_bds_hk_minus_s}) yields
\begin{multline}
I_{1} \leq C_{H_k}\int_{0}^{+\infty} \exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |h_{k} - s| -
\frac{|h_{k} - s|}{|\epsilon t|} \delta_{1} \right) ds \leq C_{H_k}\int_{0}^{+\infty}
\exp( -\delta_{2} \frac{|h_{k} - s|}{|\epsilon|} ) ds\\
\leq C_{H_k} \exp \left(-\delta_{2} m_{k} \frac{\varrho}{|\epsilon|} \mathrm{Log}( \frac{1}{|\epsilon t|} ) \right)
\int_{0}^{+\infty} \exp( -\delta_{2} m_{k} \frac{s}{|\epsilon|}) ds\\
\leq C_{H_k} \frac{\epsilon_{0}}{\delta_{2}m_{k}}
\exp \left(-\delta_{2} m_{k} \frac{\varrho}{|\epsilon|} \mathrm{Log}( \frac{1}{|\epsilon|r_{\mathcal{T}}} ) \right) \label{I1<=}
\end{multline}
whenever $t \in \mathcal{T} \cap D(0, \delta_{1}/( \sigma_{1} \zeta(b) + \delta_{2}))$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$.\medskip
Let
$$ I_{2} = \left| \int_{L_{h_{k+1},\infty}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
In a similar manner, we can obtain constants $\delta_{1},\delta_{2}>0$ and $m_{k+1}>0$ (depending on
$H_{k+1}$ and $\varrho$) with
\begin{equation}
I_{2} \leq C_{H_{k+1}} \frac{\epsilon_{0}}{\delta_{2}m_{k+1}}
\exp \left(-\delta_{2} m_{k+1} \frac{\varrho}{|\epsilon|} \mathrm{Log}( \frac{1}{|\epsilon|r_{\mathcal{T}}} ) \right) \label{I2<=}
\end{equation}
for all $t \in \mathcal{T} \cap D(0, \delta_{1}/( \sigma_{1} \zeta(b) + \delta_{2}))$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$.\medskip
In a final step, we need to show estimates for
$$ I_{3} = \left| \int_{L_{h_{k},h_{k+1}}} w_{HJ_n}(u,z,\epsilon)
\exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
We notice that the vertical segment $L_{h_{k},h_{k+1}}$ crosses the strips $H_{k},J_{k+1}$ and $H_{k+1}$ and belongs
to the union $H_{k} \cup J_{k+1} \cup H_{k+1}$. According to (\ref{bds_WHJn_Hk}) and (\ref{bds_WHJn_Jk}), we only have
the rough upper bounds
$$
|w_{HJ_{n}}(\tau,z,\epsilon)| \leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} ) |\tau|
\exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| + \varsigma_{2} \zeta(b) \exp( \varsigma_{3}|\tau|)
\right)
$$
for all $\tau \in H_{k} \cup J_{k+1} \cup H_{k+1}$, all $z \in D(0,\delta \delta_{1})$, all
$\epsilon \in \dot{D}(0,\epsilon_{0})$. We deduce that
\begin{multline}
I_{3} \leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} )
\int_{0}^{1} |(1-s)h_{k} + sh_{k+1}|\\
\exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |(1-s)h_{k} + sh_{k+1}|
+ \varsigma_{2} \zeta(b) \exp( \varsigma_{3} |(1-s)h_{k} + sh_{k+1}| ) \right)\\
\times \exp \left( - \frac{ |(1-s)h_{k} + sh_{k+1}| }{ |\epsilon t| }
\cos( \mathrm{arg}( (1-s)h_{k} + sh_{k+1} ) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) \right)\\
\times
\frac{|h_{k+1} - h_{k}|}{|(1-s)h_{k} + sh_{k+1}|} ds \label{I3<=first}
\end{multline}
Provided that $\epsilon_{0}>0$ is chosen small enough, the quantity
$|\mathrm{Re}((1-s)h_{k} + sh_{k+1})| = \varrho \mathrm{Log}(1 / |\epsilon t|)$ turns out to be large and leads to the following
range of arguments
$$ \mathrm{arg}( (1-s)h_{k} + sh_{k+1} ) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) \in
(-\frac{\pi}{2} + \eta_{k,k+1}, \frac{\pi}{2} - \eta_{k,k+1}) $$
for some $\eta_{k,k+1}>0$ close to 0, as
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$, for $s \in [0,1]$. Therefore,
one can find $\delta_{1}>0$ with
\begin{equation}
\cos( \mathrm{arg}((1-s)h_{k} + sh_{k+1}) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) > \delta_{1}
\label{low_bds_cos_hk_hk_plus_1}
\end{equation}
for all $t \in \mathcal{T}$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$, when $s \in [0,1]$.
Besides, we can compute the modulus
\begin{multline*}
|(1-s)h_{k} + sh_{k+1}| = \left( (\varrho \mathrm{Log}( \frac{1}{|\epsilon t|} ))^{2} +
\varrho^{2}( \mathrm{arg}(t) + \mathrm{arg}(\epsilon) - (1-s)\chi_{k} - s \chi_{k+1} )^{2} \right)^{1/2}\\
= \varrho \mathrm{Log}( \frac{1}{|\epsilon t|} )( 1 +
\frac{ ( \mathrm{arg}(t) + \mathrm{arg}(\epsilon) - (1-s)\chi_{k} - s \chi_{k+1} )^{2} }{
(\mathrm{Log}( \frac{1}{|\epsilon t|} ))^{2} } )^{1/2}
\end{multline*}
as long as $|\epsilon t|<1$, which occurs whenever $0<\epsilon_{0}<1$ and $0<r_{\mathcal{T}}<1$. Then, when
$\epsilon_{0}$ is taken small enough, we obtain two constants $m_{k,k+1}>0$ and $M_{k,k+1}>0$ with
\begin{equation}
\varrho m_{k,k+1} \mathrm{Log}( \frac{1}{|\epsilon t|} ) \leq
|(1-s)h_{k} + sh_{k+1}| \leq \varrho M_{k,k+1} \mathrm{Log}( \frac{1}{|\epsilon t|} ) \label{bds_hk_hk_plus_1}
\end{equation}
for all $s \in [0,1]$, when $t \in \mathcal{T}$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$. Moreover, we remark that
$|h_{k+1} - h_{k}| = \varrho |\chi_{k+1} - \chi_{k}|$. Bearing in mind (\ref{low_bds_cos_hk_hk_plus_1}) together with
(\ref{bds_hk_hk_plus_1}), we deduce from (\ref{I3<=first}) that the next inequality holds
\begin{multline*}
I_{3} \leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} ) \varrho |\chi_{k+1} - \chi_{k}|\\
\times \exp \left( \frac{\sigma_{1}}{|\epsilon|}\zeta(b) \varrho M_{k,k+1} \mathrm{Log}( \frac{1}{|\epsilon t|} )
+ \varsigma_{2} \zeta(b) \exp( \varsigma_{3} \varrho M_{k,k+1} \mathrm{Log}(\frac{1}{|\epsilon t|}) ) \right) \\
\times \exp \left( -\varrho m_{k,k+1} \frac{1}{|\epsilon t|} \mathrm{Log}( \frac{1}{|\epsilon t|}) \delta_{1} \right)
\end{multline*}
for any $t \in \mathcal{T}$ and $\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$.
We choose $0 < \varrho < 1$ in a way that $\varsigma_{3} \varrho M_{k,k+1} \leq 1$. Let $\psi(x) =
\varsigma_{2} \zeta(b) x^{\varsigma_{3}\varrho M_{k,k+1}} - \varrho m_{k,k+1} \delta_{1} x \mathrm{Log}(x)$. Then,
we can check that there exists $B>0$ (depending on $\zeta(b),\varrho,\varsigma_{2},\varsigma_{3},M_{k,k+1},m_{k,k+1},\delta_{1}$) such that
$$ \psi(x) \leq - \frac{ \varrho m_{k,k+1} \delta_{1} }{2} x \mathrm{Log}(x) + B $$
for all $x \geq 1$. We deduce that
\begin{multline*}
I_{3} \leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} ) \varrho |\chi_{k+1} - \chi_{k}|\\
\times \exp \left( \frac{\sigma_{1}}{|\epsilon|}\zeta(b) \varrho M_{k,k+1} \mathrm{Log}( \frac{1}{|\epsilon t|} )
- \frac{\varrho}{2} m_{k,k+1} \delta_{1} \frac{1}{|\epsilon t|} \mathrm{Log}( \frac{1}{|\epsilon t|} ) + B \right)
\end{multline*}
whenever $t \in \mathcal{T}$ and $\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$. We select
$\delta_{2}>0$ and take $t \in \mathcal{T}$ with the constraint
$|t| \leq d_{k,k+1}$ where
$$ d_{k,k+1} = \frac{\varrho m_{k,k+1} \delta_{1} / 2}{ \sigma_{1} \zeta(b) \varrho M_{k,k+1} + \delta_{2} }. $$
This last choice implies in particular that
\begin{multline}
I_{3} \leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} ) \varrho |\chi_{k+1} - \chi_{k}|
\exp \left( - \frac{\delta_{2}}{|\epsilon|} \mathrm{Log}( \frac{1}{|\epsilon t|} ) + B \right)\\
\leq \max( C_{H_k}, C_{J_{k+1}}, C_{H_{k+1}} ) \varrho |\chi_{k+1} - \chi_{k}| e^{B}
\exp \left( - \frac{\delta_{2}}{|\epsilon|} \mathrm{Log}( \frac{1}{|\epsilon|r_{\mathcal{T}}}) \right) \label{I3<=}
\end{multline}
provided that $\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$.
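The elementary upper bound on $\psi$ used above can be verified directly; as a short check, write
$a = \varsigma_{3} \varrho M_{k,k+1} \in (0,1]$ and $c = \varrho m_{k,k+1} \delta_{1}/2$, so that
$$ \psi(x) + c\, x\, \mathrm{Log}(x) = \varsigma_{2} \zeta(b)\, x^{a} - c\, x\, \mathrm{Log}(x)
\leq x \big( \varsigma_{2} \zeta(b) - c\, \mathrm{Log}(x) \big) $$
for all $x \geq 1$, since $x^{a} \leq x$ when $0 < a \leq 1$. The right-hand side is continuous on $[1,+\infty)$ and tends to
$-\infty$ as $x \to +\infty$, hence it is bounded above by some constant $B>0$, which yields
$\psi(x) \leq -c\, x\, \mathrm{Log}(x) + B$ for all $x \geq 1$.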
Finally, starting from the splitting (\ref{splitting_uk_plus_1_minus_uk}) and gathering the upper bounds for the three pieces of this
decomposition (\ref{I1<=}), (\ref{I2<=}) and (\ref{I3<=}), we obtain the anticipated estimates
(\ref{log_flat_difference_uk_plus_1_minus_uk_HJn}).
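As a final sanity check, the exponential factor in (\ref{I1<=}) is indeed of the form announced in
(\ref{log_flat_difference_uk_plus_1_minus_uk_HJn}), up to adjusting the constants:
$$ \exp\Big( -\delta_{2} m_{k} \frac{\varrho}{|\epsilon|} \mathrm{Log} \frac{1}{|\epsilon| r_{\mathcal{T}}} \Big)
= \exp\Big( -\frac{M_{k,2}}{|\epsilon|} \mathrm{Log} \frac{M_{k,3}}{|\epsilon|} \Big)
\quad \text{with } M_{k,2} = \delta_{2} m_{k} \varrho, \quad M_{k,3} = \frac{1}{r_{\mathcal{T}}}, $$
where $M_{k,3}>1$ since $0 < r_{\mathcal{T}} < 1$; the factors in (\ref{I2<=}) and (\ref{I3<=}) are treated in the same way.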
\end{proof}
\subsection{Construction of sectorial holomorphic solutions in the parameter $\epsilon$ with the help of Banach spaces
with exponential growth on sectors}
In the next definition, we introduce the notion of a $\sigma_{1}'-$admissible set, in a way similar to Definition 3.
\begin{defin} We consider an unbounded sector $S_{d}$ with bisecting direction $d \in \mathbb{R}$ with $S_{d} \subset \mathbb{C}_{+}$ and
$D(0,r)$ a disc centered at 0 with radius $r>0$ with the property that no root of $P(\tau)$ belongs to
$\bar{S}_{d} \cup \bar{D}(0,r)$. Let $w(\tau,\epsilon)$ be a holomorphic function on
$(S_{d} \cup D(0,r)) \times \dot{D}(0,\epsilon_{0})$, continuous on $(\bar{S}_{d} \cup \bar{D}(0,r)) \times
\dot{D}(0,\epsilon_{0})$. We assume that for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the function
$\tau \mapsto w(\tau,\epsilon)$ belongs to the Banach space $EG_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)}$ for given
$\sigma_{1}'>0$. Besides, we take for granted that some constant $I_{w}>0$, independent of $\epsilon$, exists with the bounds
\begin{equation}
||w(\tau,\epsilon)||_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)} \leq I_{w} \label{EG_norms_w_Iw}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
We denote by $\mathcal{E}_{S_{d}}$ an open sector centered at 0 within the disc $D(0,\epsilon_{0})$, and we let $\mathcal{T}$ be a
bounded open sector centered at 0 with bisecting direction $d=0$, suitably chosen in such a way that for all
$t \in \mathcal{T}$ and all $\epsilon \in \mathcal{E}_{S_d}$, there exists a direction $\gamma_{d}$ (depending on $t$, $\epsilon$) such that
$\exp( \sqrt{-1} \gamma_{d}) \in S_{d}$
with
\begin{equation}
\gamma_{d} - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) \in (-\frac{\pi}{2} + \eta, \frac{\pi}{2}- \eta) \label{relation_gamma_epsilon_t}
\end{equation}
for some $\eta > 0$ close to 0.
The data $(w(\tau,\epsilon), \mathcal{E}_{S_d}, \mathcal{T})$ are said to be $\sigma_{1}'-$admissible.
\end{defin}
For all $0 \leq j \leq S-1$ and all $0 \leq p \leq \iota - 1$, for some integer $\iota \geq 2$, we select directions $d_{p} \in \mathbb{R}$,
unbounded sectors $S_{d_p}$ and corresponding bounded sectors $\mathcal{E}_{S_{d_p}}$, $\mathcal{T}$ such that the data
$(w_{j}(\tau,\epsilon), \mathcal{E}_{S_{d_p}}, \mathcal{T})$ are $\sigma_{1}'-$admissible for some $\sigma_{1}'>0$. We assume moreover
that for each $0 \leq j \leq S-1$, $\tau \mapsto w_{j}(\tau,\epsilon)$ restricted to $S_{d_p}$ is an analytic continuation of a common
holomorphic function $\tau \mapsto w_{j}(\tau,\epsilon)$ on $D(0,r)$, for all $0 \leq p \leq \iota-1$. We adopt the convention that
$d_{p} < d_{p+1}$ and $S_{d_p} \cap S_{d_{p+1}} = \emptyset$ for all $0 \leq p \leq \iota-2$. As initial data
(\ref{SPCP_first_i_d}), we put
\begin{equation}
\varphi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon) = \int_{L_{\gamma_{d_p}}} w_{j}(u,\epsilon) \exp( - \frac{u}{\epsilon t} ) \frac{du}{u}
\label{Laplace_varphi_j_along_halfline}
\end{equation}
where the integration path $L_{\gamma_{d_p}} = \mathbb{R}_{+}\exp(\sqrt{-1} \gamma_{d_p})$ is a halfline in a direction $\gamma_{d_p}$
satisfying (\ref{relation_gamma_epsilon_t}).
\begin{lemma}
For all $0 \leq j \leq S-1$, $0 \leq p \leq \iota-1$, the Laplace integral $\varphi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon)$
determines a bounded holomorphic function on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times \mathcal{E}_{S_{d_p}}$ for some
suitable radius $r_{\mathcal{T}}>0$.
\end{lemma}
\begin{proof} According to (\ref{EG_norms_w_Iw}), each function $w_{j}(\tau,\epsilon)$ satisfies the upper bounds
\begin{equation}
|w_{j}(\tau,\epsilon)| \leq I_{w_j} |\tau| \exp \left( \frac{\sigma_{1}'}{|\epsilon|} |\tau| \right) \label{bds_w_j_varsigma}
\end{equation}
for some constant $I_{w_j}>0$, whenever $\tau \in \bar{S}_{d_p} \cup \bar{D}(0,r)$, $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Besides, due to (\ref{relation_gamma_epsilon_t}), we can grasp a constant $\delta_{1}>0$ with
\begin{equation}
\cos( \gamma_{d_p} - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) \geq \delta_{1} \label{cos >= delta1}
\end{equation}
for any $t \in \mathcal{T}$, $\epsilon \in \mathcal{E}_{S_{d_p}}$. We choose $\delta_{2}>0$ and take $t \in \mathcal{T}$ with
$|t| \leq \frac{\delta_{1}}{\delta_{2} + \sigma_{1}'}$. Then, collecting (\ref{bds_w_j_varsigma}) and (\ref{cos >= delta1}) allows us to
write
\begin{multline}
|\varphi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon)| \leq \int_{0}^{+\infty} I_{w_j} \rho \exp( \frac{\sigma_{1}'}{|\epsilon|} \rho)
\exp\left( -\frac{\rho}{|\epsilon t|} \cos( \gamma_{d_p} - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) \right) \frac{d \rho}{\rho}\\
\leq I_{w_j} \int_{0}^{+\infty} \exp( -\frac{\rho}{|\epsilon|} \delta_{2} ) d\rho = I_{w_j} \frac{|\epsilon|}{\delta_{2}}
\end{multline}
which implies in particular that $\varphi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon)$ is holomorphic and bounded on
$(\mathcal{T} \cap D(0, \frac{\delta_{1}}{\delta_{2} + \sigma_{1}'})) \times \mathcal{E}_{S_{d_p}}$.
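The middle step in the chain above rests on the choice of $|t|$; explicitly, with $|t| \leq \delta_{1}/(\delta_{2}+\sigma_{1}')$ one has
$$ \frac{\sigma_{1}'}{|\epsilon|} \rho - \frac{\rho}{|\epsilon t|} \delta_{1}
= \frac{\rho}{|\epsilon|} \Big( \sigma_{1}' - \frac{\delta_{1}}{|t|} \Big)
\leq \frac{\rho}{|\epsilon|} \big( \sigma_{1}' - (\delta_{2} + \sigma_{1}') \big) = -\delta_{2} \frac{\rho}{|\epsilon|}, $$
which, combined with (\ref{cos >= delta1}), justifies passing from the first to the second integral in the display above.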
\end{proof}
In the next proposition, we construct actual holomorphic solutions of the problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d})
as Laplace transforms along halflines.
\begin{prop} 1) There exist two constants $I,\delta>0$ (independent of $\epsilon$) such that if one takes for granted that
\begin{equation}
\sum_{j=0}^{S-1-h} ||w_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',S_{d_p} \cup D(0,r),\epsilon)} \frac{\delta^j}{j!} \leq I
\label{norm_Sdp_initial_wj}
\end{equation}
for all $0 \leq h \leq S-1$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$, all $0 \leq p \leq \iota-1$, then the
Cauchy problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d}) for initial conditions given by
(\ref{Laplace_varphi_j_along_halfline}) possesses a solution $u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ which represents a
bounded holomorphic function on a domain $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta_{1}\delta) \times
\mathcal{E}_{S_{d_p}}$, for suitable radius $r_{\mathcal{T}}>0$ and with $0 < \delta_{1} < 1$. Additionally,
$u_{\mathcal{E}_{S_{d_p}}}$ turns out to be a Laplace transform
\begin{equation}
u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) = \int_{L_{\gamma_{d_p}}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
\label{u_E_Sdp_Laplace}
\end{equation}
where $w_{S_{d_p}}(u,z,\epsilon)$ stands for a holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta \delta_{1})
\times \dot{D}(0,\epsilon_{0})$, continuous on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times D(0,\delta \delta_{1})
\times \dot{D}(0,\epsilon_{0})$ which obeys the following restriction: for any choice of $\sigma_{1} > \sigma_{1}'$, we can find
a constant $C_{S_{d_p}}>0$ (independent of $\epsilon$) with
\begin{equation}
|w_{S_{d_p}}(\tau,z,\epsilon)| \leq C_{S_{d_p}} |\tau| \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| )
\label{bds_w_Sdp}
\end{equation}
for all $\tau \in S_{d_p} \cup D(0,r)$, all $z \in D(0,\delta \delta_{1})$, whenever $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\noindent 2) Let $0 \leq p \leq \iota-2$. Provided that $r_{\mathcal{T}}>0$ is taken small enough, there exist two constants
$M_{p,1},M_{p,2}>0$ (independent of $\epsilon$) such that
\begin{equation}
|u_{\mathcal{E}_{S_{d_{p+1}}}}(t,z,\epsilon) - u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)| \leq M_{p,1}\exp( - \frac{M_{p,2}}{|\epsilon|} )
\label{difference_u_Sdp_exp_small}
\end{equation}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, all $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}} \neq \emptyset$
and all $z \in D(0,\delta \delta_{1})$.
\end{prop}
\begin{proof}
The first step follows the one performed in Proposition 10. Namely, we can check that the problem (\ref{1_aux_CP}) with initial data
\begin{equation}
(\partial_{z}^{j}w)(\tau,0,\epsilon) = w_{j}(\tau,\epsilon) \ \ , \ \ 0 \leq j \leq S-1 \label{1_aux_CP_i_d_Sd}
\end{equation}
where the $w_{j}(\tau,\epsilon)$ are the functions from the $\sigma_{1}'-$admissible sets appearing in the Laplace integrals
(\ref{Laplace_varphi_j_along_halfline}), possesses a unique formal solution
\begin{equation}
w_{S_{d_p}}(\tau,z,\epsilon) = \sum_{\beta \geq 0} w_{\beta}(\tau,\epsilon) \frac{z^{\beta}}{\beta !} \label{defin_w_S_dp}
\end{equation}
where $w_{\beta}(\tau,\epsilon)$ define holomorphic functions on $(S_{d_p} \cup D(0,r)) \times \dot{D}(0,\epsilon_{0})$,
continuous on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times \dot{D}(0,\epsilon_{0})$. Namely, the formal expansion
(\ref{defin_w_S_dp}) solves (\ref{1_aux_CP}) together with (\ref{1_aux_CP_i_d_Sd}) if and only if the recursion
(\ref{recursion_w_beta}) holds. As a result, it implies that all the coefficients $w_{n}(\tau,\epsilon)$ for $n \geq S$ represent
holomorphic functions on $(S_{d_p} \cup D(0,r)) \times \dot{D}(0,\epsilon_{0})$, continuous on
$(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times \dot{D}(0,\epsilon_{0})$ since this property already holds for the initial data
$w_{j}(\tau,\epsilon)$, $0 \leq j \leq S-1$, under our assumption (\ref{EG_norms_w_Iw}).
The assumption (\ref{cond_SPCP_first}) and the control on the norms of the initial data (\ref{norm_Sdp_initial_wj}) allow us to
check that the requirements 3)a)b) in Proposition 9 are fulfilled. In particular, the formal series $w_{S_{d_p}}(\tau,z,\epsilon)$ belongs
to the Banach space $EG_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$, for any real number $\sigma_{1}>\sigma_{1}'$, with a constant $\tilde{C}_{S_{d_p}}>0$
(independent of $\epsilon$) for which
$$ ||w_{S_{d_p}}(\tau,z,\epsilon)||_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)} \leq \tilde{C}_{S_{d_p}} $$
holds for all $\epsilon \in \dot{D}(0,\epsilon_{0})$. With the help of Proposition 5 2), we notice that the formal expansion
$w_{S_{d_p}}(\tau,z,\epsilon)$ turns out to be an actual holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta \delta_{1})
\times \dot{D}(0,\epsilon_{0})$, continuous on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times D(0,\delta \delta_{1})
\times \dot{D}(0,\epsilon_{0})$ for some $0 < \delta_{1} < 1$, that conforms to the bounds (\ref{bds_w_Sdp}).
By proceeding along the same lines of argument as in Lemma 9, one can see that the function $u_{\mathcal{E}_{S_{d_p}}}$ defined as the Laplace
transform
$$ u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) = \int_{L_{\gamma_{d_p}}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} $$
represents a bounded holomorphic function on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times
$\mathcal{E}_{S_{d_p}}$, for suitably small radius $r_{\mathcal{T}}>0$ and given $0 < \delta_{1} < 1$. Furthermore, by direct inspection,
one can check that $u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ solves the problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d}) for initial
conditions (\ref{Laplace_varphi_j_along_halfline}) on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times
\mathcal{E}_{S_{d_p}}$.
In the last part of the proof, we concentrate on point 2). Let $0 \leq p \leq \iota-2$. We start from the observation that the
maps $u \mapsto w_{S_{d_q}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} )/u$, for $q=p,p+1$, represent analytic continuations on the
sectors $S_{d_q}$ of a common analytic function defined on $D(0,r)$ (since
$w_{S_{d_p}}(u,z,\epsilon) = w_{S_{d_{p+1}}}(u,z,\epsilon)$ for $u \in D(0,r)$), for all fixed $z \in D(0,\delta \delta_{1})$ and
$\epsilon \in \mathcal{E}_{S_{d_p}} \cap \mathcal{E}_{S_{d_{p+1}}}$. Therefore, by carrying out a path deformation inside the domain
$S_{d_p} \cup S_{d_{p+1}} \cup D(0,r)$, we can recast the difference $u_{\mathcal{E}_{S_{d_{p+1}}}} - u_{\mathcal{E}_{S_{d_p}}}$ as a sum of
three path integrals
\begin{multline}
u_{\mathcal{E}_{S_{d_{p+1}}}}(t,z,\epsilon) - u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) =\\
-\int_{L_{\gamma_{d_p},r/2}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
+ \int_{C_{\gamma_{d_p},\gamma_{d_{p+1}},r/2}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \\
+ \int_{L_{\gamma_{d_{p+1}},r/2}} w_{S_{d_{p+1}}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{decomp_difference_u_Sdp}
\end{multline}
where $L_{\gamma_{d_q},r/2} = [r/2,+\infty)\exp( \sqrt{-1} \gamma_{d_q})$, for $q=p,p+1$, are unbounded segments and
$C_{\gamma_{d_p},\gamma_{d_{p+1}},r/2}$ stands for the arc of circle with radius $r/2$ joining the points
$\frac{r}{2}\exp(\sqrt{-1}\gamma_{d_p})$ and $\frac{r}{2}\exp(\sqrt{-1}\gamma_{d_{p+1}})$.
As an initial step, we provide estimates for
$$ I_{1} = \left| \int_{L_{\gamma_{d_p},r/2}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
Due to the bounds (\ref{bds_w_Sdp}), we check that
$$ I_{1} \leq \int_{r/2}^{+\infty} C_{S_{d_p}} \rho \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) \rho )
\exp( - \frac{\rho}{|\epsilon t|} \cos( \gamma_{d_p} - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) ) \frac{d\rho}{\rho} $$
for all $t \in \mathcal{T}$, $\epsilon \in \mathcal{E}_{S_{d_p}} \cap \mathcal{E}_{S_{d_{p+1}}}$. Besides, the lower bounds
(\ref{cos >= delta1}) hold for some constant
$\delta_{1}>0$ when $t \in \mathcal{T}$ and $\epsilon \in \mathcal{E}_{S_{d_p}} \cap \mathcal{E}_{S_{d_{p+1}}}$. Hence, if we select
$\delta_{2}>0$ and choose
$t \in \mathcal{T}$ with $|t| \leq \frac{\delta_{1}}{\delta_{2} + \sigma_{1} \zeta(b)}$, we get
\begin{equation}
I_{1} \leq C_{S_{d_p}} \int_{r/2}^{+\infty} \exp( -\frac{\rho}{|\epsilon|} \delta_{2} ) d\rho =
C_{S_{d_p}} \frac{|\epsilon|}{\delta_{2}} \exp( -\frac{r \delta_{2}}{2 |\epsilon|} ) \label{I1_Sdp}
\end{equation}
for all $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$. Now, let
$$ I_{2} = \left| \int_{L_{\gamma_{d_{p+1}},r/2}} w_{S_{d_{p+1}}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
With a comparable approach, we can obtain two constants $\delta_{1},\delta_{2}>0$ with
\begin{equation}
I_{2} \leq C_{S_{d_{p+1}}} \frac{|\epsilon|}{\delta_{2}} \exp( -\frac{r \delta_{2}}{2 |\epsilon|} ) \label{I2_Sdp}
\end{equation}
for $t \in \mathcal{T} \cap D(0,\frac{\delta_{1}}{\delta_{2} + \sigma_{1} \zeta(b)})$ and
$\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$.
In a closing step, we focus on
$$ I_{3} = \left| \int_{C_{\gamma_{d_p},\gamma_{d_{p+1}},r/2}} w_{S_{d_p}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
Again, according to (\ref{bds_w_Sdp}), we obtain
$$ I_{3} \leq C_{S_{d_p}} \int_{\gamma_{d_p}}^{\gamma_{d_{p+1}}} \frac{r}{2} \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) \frac{r}{2})
\exp( -\frac{r/2}{|\epsilon t|} \cos( \theta - \mathrm{arg}(t) -\mathrm{arg}(\epsilon) ) ) d\theta. $$
By construction, we also get a constant $\delta_{1}>0$ for which
$$ \cos( \theta - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) \geq \delta_{1} $$
when $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$, $t \in \mathcal{T}$ and $\theta \in (\gamma_{d_p},\gamma_{d_{p+1}})$.
As a consequence, take $\delta_{2}>0$ and select $t \in \mathcal{T}$ with
$|t| \leq \frac{\delta_{1}}{\sigma_{1} \zeta(b) + \delta_{2}}$. Then,
\begin{equation}
I_{3} \leq C_{S_{d_p}} (\gamma_{d_{p+1}} - \gamma_{d_{p}}) \frac{r}{2} \exp( -\frac{r \delta_{2}}{2 |\epsilon|} ) \label{I3_Sdp}
\end{equation}
for all $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$.
At last, departing from the decomposition (\ref{decomp_difference_u_Sdp}) and collecting the bounds (\ref{I1_Sdp}), (\ref{I2_Sdp}) and
(\ref{I3_Sdp}), we reach our expected estimates (\ref{difference_u_Sdp_exp_small}).
\end{proof}
\subsection{Construction of a finite set of holomorphic solutions when the parameter $\epsilon$ belongs to a good covering of the
origin in $\mathbb{C}^{\ast}$}
Let $n \geq 1$ and $\iota \geq 2$ be integers. We consider two collections of open bounded sectors
$\{ \mathcal{E}_{HJ_n}^{k} \}_{k \in \llbracket -n, n \rrbracket}$, $\{ \mathcal{E}_{S_{d_p}} \}_{0 \leq p \leq \iota-1}$ and
a bounded sector $\mathcal{T}$ with bisecting direction $d=0$ together with a family of functions $w_{j}(\tau,\epsilon)$,
$0 \leq j \leq S-1$ for which the data $(w_{j}(\tau,\epsilon), \mathcal{E}_{HJ_n}^{k}, \mathcal{T})$ are
$(\underline{\sigma}',\underline{\varsigma}')-$admissible in the sense of Definition 3 for some tuples
$\underline{\sigma}' = (\sigma_{1}',\sigma_{2}',\sigma_{3}')$
and $\underline{\varsigma}' = (\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ (where $\sigma_{1}'>0$, $\sigma_{j}',\varsigma_{j}'>0$ for $j=2,3$)
for $k \in \llbracket -n,n \rrbracket$
and $(w_{j}(\tau,\epsilon), \mathcal{E}_{S_{d_p}}, \mathcal{T})$ are $\sigma_{1}'-$admissible according to Definition 4 for
$0 \leq p \leq \iota-1$.\medskip
\noindent We make the following additional assumptions:\medskip
\noindent 1) For each $0 \leq j \leq S-1$, the map $\tau \mapsto w_{j}(\tau,\epsilon)$ restricted to $S_{d_p}$, for $0 \leq p \leq \iota-1$
and to $\mathring{HJ}_{n}$ is the analytic continuation of a common holomorphic function $\tau \mapsto w_{j}(\tau,\epsilon)$
on $D(0,r)$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Moreover, the radius $r$ is taken small enough such that
$D(0,r) \cap \{ z \in \mathbb{C} / \mathrm{Re}(z) \leq 0 \} \subset J_{0}$.\\
2) We assume that $d_{p} < d_{p+1}$ and $S_{d_p} \cap S_{d_{p+1}} = \emptyset$ for $0 \leq p \leq \iota-2$.\\
3) We take for granted that\\
3.1) $\mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1} \neq \emptyset$ for $-n \leq k \leq n-1$.\\
3.2) $\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}} \neq \emptyset$ for $0 \leq p \leq \iota-2$.\\
3.3) $\mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}} \neq \emptyset$ and
$\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}} \neq \emptyset$.\\
4) We ask that
$$ ( \bigcup_{k=-n}^{n} \mathcal{E}_{HJ_n}^{k} ) \cup ( \bigcup_{p=0}^{\iota-1} \mathcal{E}_{S_{d_p}} ) = \mathcal{U}
\setminus \{ 0 \} $$
where $\mathcal{U}$ stands for some neighborhood of 0 in $\mathbb{C}$.\\
5) Among the set of sectors $\underline{\mathcal{E}} = \{ \mathcal{E}_{HJ_n}^{k} \}_{k \in \llbracket -n, n \rrbracket} \bigcup
\{ \mathcal{E}_{S_{d_p}} \}_{0 \leq p \leq \iota-1}$, every tuple of three sectors has empty intersection.\medskip
In the literature, when the requirements 3), 4) and 5) hold, the set $\underline{\mathcal{E}}$ is called a good covering in
$\mathbb{C}^{\ast}$, see for instance \cite{ba1} or \cite{hssi}. An example of a good covering for $n=1$ and $\iota=2$ is displayed in Figure~\ref{fig3}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{goodcov.pdf}
\caption{Example of good covering, $n=1$ and $\iota=2$}
\label{fig3}
\end{figure}
We can state the first main result of our work.
\begin{theo}
Assume that the control on the initial data (\ref{norm_w_initial_small}) in Proposition 10 and
(\ref{norm_Sdp_initial_wj}) in Proposition 11 holds, together with the restrictions (\ref{cond_SPCP_first}) and
(\ref{xi_larger_sigma}). Then the following statements hold.
1) The Cauchy problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d}) with initial data given by (\ref{SPCP_first_i_d_k}) has a bounded holomorphic
solution $u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ on a domain
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{HJ_n}^{k}$ for some radius $r_{\mathcal{T}}>0$
taken small enough. Furthermore, $u_{\mathcal{E}_{HJ_n}^{k}}$ can be written as a special Laplace transform (\ref{u_E_HJn_k_Laplace}) of a
function $w_{HJ_{n}}(\tau,z,\epsilon)$ fulfilling the bounds (\ref{bds_WHJn_Hk}), (\ref{bds_WHJn_Jk}). Besides, the
logarithmic tameness constraints
(\ref{log_flat_difference_uk_plus_1_minus_uk_HJn}) hold for all consecutive sectors $\mathcal{E}_{HJ_n}^{k}$,
$\mathcal{E}_{HJ_n}^{k+1}$ for $-n \leq k \leq n-1$.
2) The Cauchy problem (\ref{SPCP_first}), (\ref{SPCP_first_i_d}) for initial conditions (\ref{Laplace_varphi_j_along_halfline}) admits
a solution $u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ which is bounded and holomorphic on
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{S_{d_p}}$ for some well chosen radius
$r_{\mathcal{T}}>0$. Moreover, $u_{\mathcal{E}_{S_{d_p}}}$ can be expressed through a Laplace transform (\ref{u_E_Sdp_Laplace})
of a function $w_{S_{d_p}}(\tau,z,\epsilon)$ that satisfies (\ref{bds_w_Sdp}). In addition, the flatness estimates
(\ref{difference_u_Sdp_exp_small}) hold for any neighboring sectors $\mathcal{E}_{S_{d_{p+1}}}$,
$\mathcal{E}_{S_{d_{p}}}$, $0 \leq p \leq \iota-2$.
3) Provided that $r_{\mathcal{T}}>0$ is small enough, there exist constants $M_{n,1},M_{n,2}>0$ (independent of $\epsilon$) with
\begin{equation}
| u_{\mathcal{E}_{HJ_n}^{-n}}(t,z,\epsilon) - u_{\mathcal{E}_{S_{d_0}}}(t,z,\epsilon) | \leq M_{n,1} \exp( -\frac{M_{n,2}}{|\epsilon|} )
\label{difference_u_HJn_Sd0}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$ and
\begin{equation}
| u_{\mathcal{E}_{HJ_n}^{n}}(t,z,\epsilon) - u_{\mathcal{E}_{S_{d_{\iota-1}}}}(t,z,\epsilon) | \leq M_{n,1} \exp( -\frac{M_{n,2}}{|\epsilon|} )
\label{difference_u_HJn_Sdiota}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$ whenever $t \in \mathcal{T} \cap D(0, r_{\mathcal{T}})$ and
$z \in D(0,\delta \delta_{1})$.
\end{theo}
\begin{proof}
The first two points 1) and 2) merely rephrase the statements already obtained in Propositions 10 and 11. It remains to show that the two
exponential bounds (\ref{difference_u_HJn_Sd0}) and (\ref{difference_u_HJn_Sdiota}) hold. We focus only on the first estimates
(\ref{difference_u_HJn_Sd0}), the second ones (\ref{difference_u_HJn_Sdiota}) being of the same nature.
By construction, according to our additional assumption 1) described above, the functions
$\tau \mapsto w_{HJ_n}(\tau,z,\epsilon)$ on $\mathring{HJ}_{n}$ and $\tau \mapsto w_{S_{d_0}}(\tau,z,\epsilon)$
on $S_{d_0}$ are the restrictions of a holomorphic function denoted $\tau \mapsto w_{HJ_{n},S_{d_0}}(\tau,z,\epsilon)$
on $\mathring{HJ}_{n} \cup D(0,r) \cup S_{d_0}$, for all $z \in D(0,\delta \delta_{1})$, $\epsilon \in \dot{D}(0,\epsilon_{0})$.
As a consequence, we can realize a path deformation within the domain $\mathring{HJ}_{n} \cup D(0,r) \cup S_{d_0}$ and break up the difference
$u_{\mathcal{E}_{HJ_n}^{-n}} - u_{\mathcal{E}_{S_{d_0}}}$ into a sum of four path integrals
\begin{multline}
u_{\mathcal{E}_{HJ_n}^{-n}}(t,z,\epsilon) - u_{\mathcal{E}_{S_{d_0}}}(t,z,\epsilon)
= -\int_{L_{\gamma_{d_0},r/2}} w_{S_{d_0}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u}\\
+ \int_{C_{\gamma_{d_0},P_{-n,1},r/2}} w_{S_{d_0}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
+ \int_{P_{-n,1,r/2}} w_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \\+
\int_{P_{-n,2}} w_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{difference_u_HJn_Sd0_decomposition}
\end{multline}
where $L_{\gamma_{d_0},r/2} = [r/2,+\infty) \exp( \sqrt{-1} \gamma_{d_0})$ is an unbounded segment,
$C_{\gamma_{d_0},P_{-n,1},r/2}$ represents an arc of circle with radius $r/2$ joining the two points
$(r/2)\exp( \sqrt{-1}\gamma_{d_0})$ and \\
$(r/2)\exp( \sqrt{-1} \mathrm{arg}(A_{-n}))$, $P_{-n,1,r/2}$ stands for the segment
linking $(r/2)\exp( \sqrt{-1} \mathrm{arg}(A_{-n}))$ and $A_{-n}$ and finally as introduced earlier $P_{-n,2}$ denotes the horizontal
line $\{ A_{-n} - s / s \geq 0 \}$. An illustrative example is shown in Figure~\ref{fig4}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{intpath.pdf}
\caption{Deformation of the integration path}
\label{fig4}
\end{figure}
Let
$$ J_{1} = \left| \int_{L_{\gamma_{d_0},r/2}} w_{S_{d_0}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|.$$
In accordance with the bounds (\ref{I1_Sdp}), we can select $\delta_{2}>0$ and find $\delta_{1}>0$ with a constant
$C_{S_{d_0}}>0$ (independent of $\epsilon$) for which
\begin{equation}
J_{1} \leq C_{S_{d_0}} \frac{|\epsilon|}{\delta_{2}} \exp( -\frac{r \delta_{2}}{2 |\epsilon|} ) \label{J1_bds}
\end{equation}
holds whenever $t \in \mathcal{T} \cap D(0, \frac{\delta_{1}}{\delta_{2} + \sigma_{1} \zeta(b)})$ and
$\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$.
Now, consider
$$ J_{2} = \left| \int_{C_{\gamma_{d_0},P_{-n,1},r/2}} w_{S_{d_0}}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
The function $w_{S_{d_0}}(\tau,z,\epsilon)$ satisfies both the bounds (\ref{bds_w_Sdp}) since
$C_{\gamma_{d_0},P_{-n,1},r/2} \subset D(0,r)$ and also (\ref{bds_WHJn_Jk}) when
$\tau \in C_{\gamma_{d_0},P_{-n,1},r/2} \cap J_{0}$. We deduce a constant $C_{J_{0},S_{d_0}}>0$ (independent of $\epsilon$) such that
$$ |w_{S_{d_0}}(\tau,z,\epsilon)| \leq C_{J_{0},S_{d_0}} |\tau| \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| ) $$
for all $\tau \in C_{\gamma_{d_0},P_{-n,1},r/2}$, $z \in D(0,\delta \delta_{1})$ and $\epsilon \in
\dot{D}(0,\epsilon_{0})$. Hence,
$$ J_{2} \leq C_{J_{0},S_{d_0}} \left| \int_{\mathrm{arg}(A_{-n})}^{\gamma_{d_0}} \frac{r}{2}
\exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) \frac{r}{2} )
\exp( -\frac{r/2}{|\epsilon t|} \cos( \theta - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) ) d\theta \right|.
$$
The sectors $\mathcal{E}_{HJ_n}^{-n}$ and $\mathcal{E}_{S_{d_0}}$ are suitably chosen in a way that
$\cos( \theta - \mathrm{arg}(t) - \mathrm{arg}(\epsilon) ) \geq \delta_{1}$ for some constant $\delta_{1}>0$, when
$\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$, for $t \in \mathcal{T}$ and
$\theta \in (\mathrm{arg}(A_{-n}),\gamma_{d_0})$. As a result,
\begin{equation}
J_{2} \leq C_{J_{0},S_{d_0}} |\gamma_{d_0} - \mathrm{arg}(A_{-n})| \frac{r}{2} \exp(- \frac{r \delta_{2}}{2 |\epsilon|} ) \label{J2_bds}
\end{equation}
when $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$, $t \in \mathcal{T} \cap
D(0, \frac{\delta_1}{\sigma_{1} \zeta(b) + \delta_{2}})$, for some fixed $\delta_{2}>0$.
We put
$$ J_{3} = \left| \int_{P_{-n,1,r/2}} w_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|.$$
Owing to the fact that the path $P_{-n,1,r/2}$ lies across the domains $H_{q},J_{q}$ for $-n \leq q \leq 0$, the bounds
(\ref{bds_WHJn_Hk}) and (\ref{bds_WHJn_Jk}) entail that
$$ |w_{HJ_n}(\tau,z,\epsilon)| \leq \max_{q \in \llbracket -n,0 \rrbracket}(C_{H_{q}},C_{J_{q}})
|\tau| \exp \left( \frac{\sigma_{1}}{|\epsilon|}\zeta(b) |\tau| + \varsigma_{2} \zeta(b) \exp( \varsigma_{3}|\tau| ) \right)
$$
for $\tau \in P_{-n,1,r/2}$, all $z \in D(0,\delta \delta_{1})$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Therefore,
\begin{multline*}
J_{3} \leq \int_{r/2}^{|A_{-n}|} \max_{q \in \llbracket -n,0 \rrbracket}(C_{H_{q}},C_{J_{q}})
\rho \exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) \rho + \varsigma_{2} \zeta(b) \exp( \varsigma_{3} \rho ) \right)\\
\times
\exp( -\frac{\rho}{|\epsilon t|} \cos( \mathrm{arg}(A_{-n}) - \mathrm{arg}(\epsilon t) ) ) \frac{d\rho}{\rho}.
\end{multline*}
Besides, according to (\ref{choice_a_k}), there exists some $\delta_{1}>0$ with
$\cos( \mathrm{arg}(A_{-n}) - \mathrm{arg}(\epsilon t) ) \geq \delta_{1}$ for
$\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$. Let $\delta_{2}>0$ and take $t \in \mathcal{T}$ with
$|t| \leq \frac{\delta_{1}}{\delta_{2} + \sigma_{1} \zeta(b) }$. We obtain
\begin{multline}
J_{3} \leq \max_{q \in \llbracket -n,0 \rrbracket}(C_{H_{q}},C_{J_{q}})
\int_{r/2}^{|A_{-n}|} \exp( \varsigma_{2} \zeta(b) \exp( \varsigma_{3} \rho ) ) \exp( -\frac{\rho}{|\epsilon|} \delta_{2} ) d\rho\\
\leq \max_{q \in \llbracket -n,0 \rrbracket}(C_{H_{q}},C_{J_{q}})
\exp( \varsigma_{2} \zeta(b) \exp( \varsigma_{3}|A_{-n}| ))
\frac{|\epsilon|}{\delta_{2}} \exp( -\frac{r}{2 |\epsilon|} \delta_{2} ) \label{J3_bds}
\end{multline}
provided that $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$.
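\noindent Let us indicate how the last inequality of (\ref{J3_bds}) is obtained: on the range $r/2 \leq \rho \leq |A_{-n}|$, the double exponential factor is bounded by its value at $\rho = |A_{-n}|$, namely
$\exp( \varsigma_{2} \zeta(b) \exp( \varsigma_{3} \rho ) ) \leq \exp( \varsigma_{2} \zeta(b) \exp( \varsigma_{3} |A_{-n}| ) )$, and the remaining integral is estimated through
$$ \int_{r/2}^{|A_{-n}|} \exp( -\frac{\rho}{|\epsilon|} \delta_{2} ) d\rho \leq
\int_{r/2}^{+\infty} \exp( -\frac{\rho}{|\epsilon|} \delta_{2} ) d\rho = \frac{|\epsilon|}{\delta_{2}} \exp( -\frac{r}{2|\epsilon|} \delta_{2} ). $$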
Ultimately, let
$$ J_{4} = \left| \int_{P_{-n,2}} w_{HJ_n}(u,z,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \right|. $$
Since the path $P_{-n,2}$ lies in the strip $H_{-n}$, we can use the estimates
(\ref{bds_WHJn_Hk}) in order to get
\begin{multline*}
J_{4} \leq \int_{0}^{+\infty} C_{H_{-n}} |A_{-n} - s|
\exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |A_{-n}-s| - \sigma_{2}(M - \zeta(b))\exp( \sigma_{3}|A_{-n} - s| ) \right)\\
\times \exp \left( - \frac{|A_{-n} - s|}{|\epsilon t|} \cos( \mathrm{arg}(A_{-n} - s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) \right)
\frac{ds}{|A_{-n} - s|}.
\end{multline*}
From the controlled variation of arguments (\ref{argument_ak_minus_s}), we can pick up some constant $\delta_{1}>0$ for which
$$ \cos( \mathrm{arg}(A_{-n} - s) - \mathrm{arg}(\epsilon) - \mathrm{arg}(t) ) > \delta_{1} $$
for $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$ and $t \in \mathcal{T}$. We take $\delta_{2}>0$ and restrict $t$ inside
$\mathcal{T}$ in a way that $|t| \leq \frac{\delta_{1}}{\delta_{2} + \sigma_{1}\zeta(b)}$. Besides, we can find a constant
$K_{A_{-n}}>0$ (depending on $A_{-n}$) such that
$$ |A_{-n} - s| \geq K_{A_{-n}} (|A_{-n}| + s) $$
for all $s \geq 0$. Henceforth, we obtain
\begin{multline}
J_{4} \leq C_{H_{-n}} \int_{0}^{+\infty} \exp \left( - \sigma_{2}(M - \zeta(b)) \exp( \sigma_{3}|A_{-n} - s| ) \right)
\exp( - \frac{|A_{-n} -s|}{|\epsilon|} \delta_{2} ) ds\\
\leq C_{H_{-n}} \int_{0}^{+\infty} \exp \left( -\frac{K_{A_{-n}} \delta_{2}}{|\epsilon|} (|A_{-n}| + s) \right) ds
= \frac{C_{H_{-n}}|\epsilon|}{K_{A_{-n}} \delta_{2}} \exp \left( -\frac{K_{A_{-n}} \delta_{2} |A_{-n}|}{|\epsilon|} \right)
\label{J4_bds}
\end{multline}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$.
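\noindent A possible justification of the lower bound $|A_{-n} - s| \geq K_{A_{-n}}(|A_{-n}| + s)$, under the assumption $\mathrm{Im}(A_{-n}) \neq 0$, runs as follows. For $s \geq 2|A_{-n}|$, one has
$$ |A_{-n} - s| \geq s - |A_{-n}| \geq \frac{1}{3}( |A_{-n}| + s ) $$
whereas for $0 \leq s \leq 2|A_{-n}|$, since $s$ is real,
$$ |A_{-n} - s| \geq |\mathrm{Im}(A_{-n})| \geq \frac{|\mathrm{Im}(A_{-n})|}{3|A_{-n}|}( |A_{-n}| + s ). $$
Hence the constant $K_{A_{-n}} = \min( \frac{1}{3}, \frac{|\mathrm{Im}(A_{-n})|}{3|A_{-n}|} )$ fits.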
In conclusion, bearing in mind the splitting (\ref{difference_u_HJn_Sd0_decomposition}) and collecting the upper bounds
(\ref{J1_bds}), (\ref{J2_bds}), (\ref{J3_bds}) and (\ref{J4_bds}) yields the foreseen estimates (\ref{difference_u_HJn_Sd0}).
\end{proof}
\section{A second auxiliary convolution Cauchy problem}
\subsection{Banach spaces of holomorphic functions with exponential growth on $L-$shaped domains}
We keep the same notations as in Section 3.1. We consider a closed horizontal strip $H$ as defined in (\ref{defin_strip_H}) with $a \neq 0$
which belongs to the set of strips $\{ H_{k} \}_{k \in \llbracket -n,n \rrbracket}$ described at the beginning of Subsection 3.1
and we single out a closed
rectangle $R_{a,b,\upsilon}$ defined as follows:\\
If $a>0$, then
\begin{equation}
R_{a,b,\upsilon} = \{ z \in \mathbb{C} / \upsilon \leq \mathrm{Re}(z) \leq 0, 0 \leq \mathrm{Im}(z) \leq b \} \label{R_ab_u_a_positive}
\end{equation}
and if $a < 0$
\begin{equation}
R_{a,b,\upsilon} = \{ z \in \mathbb{C} / \upsilon \leq \mathrm{Re}(z) \leq 0, a \leq \mathrm{Im}(z) \leq 0 \} \label{R_ab_u_a_negative}
\end{equation}
for some negative real number $\upsilon < 0$. We denote by $RH_{a,b,\upsilon}$ the $L-$shaped domain $H \cup R_{a,b,\upsilon}$. See Figure~\ref{fig51}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{lshaped1.pdf}
\includegraphics[width=0.4\textwidth]{lshaped2.pdf}
\caption{Examples of sets $RH_{a,b,\upsilon}=H\cup R_{a,b,\upsilon}$}
\label{fig51}
\end{figure}
\begin{defin}
Let $\sigma_{1}>0$ be a positive real number and $\beta \geq 0$ be an integer. Let $\epsilon \in \dot{D}(0,\epsilon_{0})$. We set
$EG_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}$ as the vector space of holomorphic functions $v(\tau)$ on the interior domain
$\mathring{RH}_{a,b,\upsilon}$, continuous on $RH_{a,b,\upsilon}$ such that the norm
$$ ||v(\tau)||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)} = \sup_{\tau \in RH_{a,b,\upsilon}}
\frac{|v(\tau)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right) $$
is finite. Let us take some positive real number $\delta>0$. We define
$EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ as the vector space of all formal series
$v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ with coefficients $v_{\beta}(\tau)$ inside
$EG_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}$ for all $\beta \geq 0$ and for which the norm
$$ ||v(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} = \sum_{\beta \geq 0}
||v_{\beta}(\tau)||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)} \frac{\delta^{\beta}}{\beta !}$$
is finite. It turns out that $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ endowed with the latter norm defines a Banach space.
\end{defin}
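As a basic illustration, the function $v(\tau) = \tau$ belongs to $EG_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}$ for every $\beta \geq 0$ with
$$ ||\tau||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)} = \sup_{\tau \in RH_{a,b,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta) |\tau| \right) = 1, $$
the supremum being reached at $\tau = 0$, which belongs to $R_{a,b,\upsilon}$ by (\ref{R_ab_u_a_positive}) and (\ref{R_ab_u_a_negative}).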
In the next proposition, we show that the formal series belonging to the Banach space discussed above represent holomorphic functions
that are convergent in the vicinity of 0 with respect to $z$ and with exponential growth on $RH_{a,b,\upsilon}$ regarding $\tau$. Its proof follows
the one of Proposition 1 in a straightforward manner.
\begin{prop} Let $v(\tau,z)$ be chosen in $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$. Take some $0 < \delta_{1} < 1$. Then, one can
get a constant $C_{4}>0$ (depending on $||v||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ and $\delta_{1}$) such that
\begin{equation}
|v(\tau,z)| \leq C_{4}|\tau| \exp \left( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| \right)
\end{equation}
for all $\tau \in RH_{a,b,\upsilon}$, all $z \in D(0,\delta_{1}\delta)$.
\end{prop}
In the sequel, by means of the next three propositions, we investigate the action of linear maps built as convolution products and
multiplication by bounded holomorphic functions on the Banach spaces defined above.
For all $\tau \in RH_{a,b,\upsilon}$, we denote by
$L_{0,\tau}$ the path formed by the union of the segments $[0,c_{RH}(\tau)] \cup [c_{RH}(\tau),\tau]$, where $c_{RH}(\tau)$ is chosen in a way that
\begin{equation}
L_{0,\tau} \subset RH_{a,b,\upsilon}, \ \ c_{RH}(\tau) \in R_{a,b,\upsilon}, \ \ |c_{RH}(\tau)| \leq |\tau| \label{defin_cRH}
\end{equation}
for all $\tau \in RH_{a,b,\upsilon}$.
\begin{prop} Let $\gamma_{0},\gamma_{1} \geq 0$ and $\gamma_{2} \geq 1$ be integers. We take for granted that
\begin{equation}
\gamma_{2} \geq b(\gamma_{0} + \gamma_{1} + 2) \label{cond_gamma012}
\end{equation}
holds. Then, for any $\epsilon$ given in $\dot{D}(0,\epsilon_{0})$, the map
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}} v(s,z) ds$
is a bounded linear operator from $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ into itself. Furthermore, we get a constant
$C_{5}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2}$, $\sigma_{1}$ and $b$) independent of $\epsilon$, such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}} v(s,z) ds ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}
\leq C_{5}|\epsilon|^{\gamma_{0}+\gamma_{1}+2} \delta^{\gamma_2} ||v(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}
\label{norm_conv_partialz_v_C5}
\end{equation}
for all $v(\tau,z) \in EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof}
Take $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ in $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$. In view of
Definition 5,
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}}
v(s,z) ds ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}\\
= \sum_{\beta \geq \gamma_{2}}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\delta^{\beta}/\beta ! \label{defin_norm_convolution_partial_z_v}
\end{multline}
\begin{lemma}
One can choose a constant $C_{5.1}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2}$ and $\sigma_{1}$) such that
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\\
\leq C_{5.1} |\epsilon|^{\gamma_{0} + \gamma_{1} + 2}(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
||v_{\beta - \gamma_{2}}(\tau) ||_{(\beta - \gamma_{2},\sigma_{1},RH_{a,b,\upsilon},\epsilon)} \label{norm_conv_v_beta_minus_gamma2}
\end{multline}
for all $\beta \geq \gamma_{2}$.
\end{lemma}
\begin{proof} By construction of $L_{0,\tau}$, we can split the integral in two parts
\begin{multline*}
\tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds =
\tau \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds \\
+
\tau \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds
\end{multline*}
We first provide estimates for
$$ L_{1} = || \tau \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}. $$
We carry out the next factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) v_{\beta - \gamma_{2}}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) ds \right|.
\end{multline*}
We deduce that
\begin{equation}
L_{1} \leq C_{5.1.1}(\beta,\epsilon) ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\end{equation}
where
\begin{multline*}
C_{5.1.1}(\beta,\epsilon) = \sup_{\tau \in RH_{a,b,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \int_{0}^{1} |\tau - c_{RH}(\tau)u|^{\gamma_0}
|c_{RH}(\tau)|^{\gamma_{1} + 2} u^{\gamma_{1}+1}\\
\times \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2}) |c_{RH}(\tau)u| \right) du.
\end{multline*}
As a consequence of the shape of $L_{0,\tau}$ through (\ref{defin_cRH}), according to the inequalities
(\ref{difference_s_b_r_b}), (\ref{A_1_bounds}) and taking into account the rough estimates
$|\tau - c_{RH}(\tau)u|^{\gamma_0} \leq 2^{\gamma_{0}}|\tau|^{\gamma_0}$
for $0 \leq u \leq 1$, we get
\begin{multline}
C_{5.1.1}(\beta,\epsilon) \leq 2^{\gamma_0} \sup_{\tau \in RH_{a,b,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) |\tau| \right)\\
\leq 2^{\gamma_0} \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}\frac{\gamma_2}{(\beta + 1)^{b}} x \right)\\
\leq
2^{\gamma_0} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{\gamma_{0}+\gamma_{1}+2}{\sigma_{1} \gamma_{2}} \right)^{\gamma_{0}+\gamma_{1}+2} \exp( -(\gamma_{0}+\gamma_{1}+2) )
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)} \label{C511_bds}
\end{multline}
for all $\beta \geq \gamma_{2}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
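\noindent In the last step, we made use of the classical optimization
$$ \sup_{x \geq 0} x^{m} \exp( -\lambda x ) = \left( \frac{m}{\lambda e} \right)^{m}, \ \ m,\lambda > 0, $$
the supremum being attained at $x = m/\lambda$, applied here with $m = \gamma_{0}+\gamma_{1}+2$ and
$\lambda = \frac{\sigma_{1} \gamma_{2}}{(\beta+1)^{b} |\epsilon|}$.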
In a second part, we seek bounds for
$$ L_{2} = ||\tau \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0}
s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}. $$
As above, we achieve the factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) v_{\beta - \gamma_{2}}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) ds \right|.
\end{multline*}
It follows that
\begin{equation}
L_{2} \leq C_{5.1.2}(\beta,\epsilon) ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\end{equation}
with
\begin{multline*}
C_{5.1.2}(\beta,\epsilon) = \sup_{\tau \in RH_{a,b,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right)
\int_{0}^{1} |\tau - c_{RH}(\tau)|^{\gamma_{0}+1}(1-u)^{\gamma_{0}}\\
\times |(1-u)c_{RH}(\tau) + u\tau|^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|(1-u)c_{RH}(\tau) + u\tau| \right) du.
\end{multline*}
By construction of the path $L_{0,\tau}$ by means of (\ref{defin_cRH}), bearing in mind
(\ref{difference_s_b_r_b}), (\ref{A_1_bounds}) and owing to the bounds
$|\tau - c_{RH}(\tau)|^{\gamma_{0}+1} \leq 2^{\gamma_{0}+1}|\tau|^{\gamma_{0}+1}$ with
$|(1-u)c_{RH}(\tau) + u\tau| \leq |\tau|$ for $0 \leq u \leq 1$, we obtain
\begin{multline}
C_{5.1.2}(\beta,\epsilon) \leq 2^{\gamma_{0}+1} \sup_{\tau \in RH_{a,b,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) |\tau| \right)\\
\leq 2^{\gamma_{0}+1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{\gamma_{0}+\gamma_{1}+2}{\sigma_{1} \gamma_{2}} \right)^{\gamma_{0}+\gamma_{1}+2} \exp( -(\gamma_{0}+\gamma_{1}+2) )
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
\end{multline}
for all $\beta \geq \gamma_{2}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Lemma 10 follows.
\end{proof}
Gathering the expansion (\ref{defin_norm_convolution_partial_z_v}) and the upper bounds
(\ref{norm_conv_v_beta_minus_gamma2}), we get
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}}
v(s,z) ds ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}\\
\leq \sum_{\beta \geq \gamma_{2}} C_{5.1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
\frac{(\beta-\gamma_{2})!}{\beta!} ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\delta^{\gamma_{2}} \frac{\delta^{\beta - \gamma_{2}}}{(\beta - \gamma_{2})!} \label{norm_conv_partialz_v_C51}
\end{multline}
Keeping in mind the assumption (\ref{cond_gamma012}), we obtain a constant $C_{5.2}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2}$ and $b$)
for which
\begin{equation}
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)} \frac{(\beta - \gamma_{2})!}{\beta !} \leq C_{5.2} \label{bds_beta_gamma012}
\end{equation}
holds for all $\beta \geq \gamma_{2}$. Combining (\ref{norm_conv_partialz_v_C51}) and (\ref{bds_beta_gamma012}) grants the result
(\ref{norm_conv_partialz_v_C5}).
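\noindent For the sake of completeness, we indicate one way to check (\ref{bds_beta_gamma012}). Since $\beta - j \geq \frac{\beta+1}{\gamma_{2}+1}$ for all $0 \leq j \leq \gamma_{2}-1$ whenever $\beta \geq \gamma_{2}$, we get
$\frac{\beta!}{(\beta - \gamma_{2})!} \geq \left( \frac{\beta+1}{\gamma_{2}+1} \right)^{\gamma_{2}}$, whence
$$ (\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)} \frac{(\beta - \gamma_{2})!}{\beta !} \leq
(\gamma_{2}+1)^{\gamma_{2}} (\beta+1)^{b(\gamma_{0}+\gamma_{1}+2) - \gamma_{2}} \leq (\gamma_{2}+1)^{\gamma_{2}} $$
owing to (\ref{cond_gamma012}).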
\end{proof}
\begin{prop}
Let $\gamma_{0},\gamma_{1} \geq 0$ be integers. Let $\sigma_{1},\sigma_{1}'>0$ be real numbers such that $\sigma_{1} > \sigma_{1}'$.
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear operator
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0}s^{\gamma_1}v(s,z) ds$ is bounded from
$(EG_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}, ||.||_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)})$ into
$(EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}, ||.||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)})$. In addition, we can select
a constant $\check{C}_{5}>0$ (depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) with
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0}s^{\gamma_1}v(s,z) ds ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq
\check{C}_{5} |\epsilon|^{\gamma_{0}+\gamma_{1}+2} ||v(\tau,z)||_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}
\end{equation}
for all $v(\tau,z) \in EG_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} Pick up some $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ in
$EG_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}$. Owing to Definition 5,
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1}
v(s,z) ds ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}\\
= \sum_{\beta \geq 0}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\delta^{\beta}/\beta ! \label{defin_norm_convolution_v_sigma1}
\end{multline}
\begin{lemma}
One can assign a constant $\check{C}_{5}>0$ (depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\leq \check{C}_{5} |\epsilon|^{\gamma_{0} + \gamma_{1} + 2}
||v_{\beta}(\tau) ||_{(\beta,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)} \label{norm_conv_v_beta_sigma1_sigma1_prim}
\end{equation}
for all $\beta \geq 0$.
\end{lemma}
\begin{proof} As above, we first cut the integral into two pieces
$$
\tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds =
\tau \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds
+
\tau \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds
$$
We first provide estimates for
$$ \check{L}_{1} = || \tau \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}. $$
We perform the following factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{0}^{c_{RH}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) v_{\beta}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) ds \right|
\end{multline*}
which leads to
\begin{equation}
\check{L}_{1} \leq \check{C}_{5.1}(\beta,\epsilon) ||v_{\beta}(\tau)||_{(\beta,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)}
\end{equation}
where
\begin{multline*}
\check{C}_{5.1}(\beta,\epsilon) = \sup_{\tau \in RH_{a,b,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \int_{0}^{1} |\tau - c_{RH}(\tau)u|^{\gamma_0}
|c_{RH}(\tau)|^{\gamma_{1} + 2} u^{\gamma_{1}+1}\\
\times \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta) |c_{RH}(\tau)u| \right) du.
\end{multline*}
Due to the constraints (\ref{defin_cRH}) and in view of the bounds (\ref{checkA1_bds}), we see that
\begin{multline}
\check{C}_{5.1}(\beta,\epsilon) \leq 2^{\gamma_0} \sup_{\tau \in RH_{a,b,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) |\tau| \right)\\
\leq
2^{\gamma_0} \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2} \exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) x \right)
\leq 2^{\gamma_0} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{(\gamma_{0}+\gamma_{1}+2)e^{-1}}{\sigma_{1} - \sigma_{1}'} \right)^{\gamma_{0}+\gamma_{1}+2}
\end{multline}
for all $\beta \geq 0$, $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Next, we turn to
$$ \check{L}_{2} = ||\tau \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0}
s^{\gamma_1} v_{\beta}(s) ds||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}. $$
As before, we accomplish a factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{c_{RH}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) v_{\beta}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) ds \right|
\end{multline*}
which entails
\begin{equation}
\check{L}_{2} \leq \check{C}_{5.2}(\beta,\epsilon) ||v_{\beta}(\tau)||_{(\beta,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)}
\end{equation}
with
\begin{multline*}
\check{C}_{5.2}(\beta,\epsilon) = \sup_{\tau \in RH_{a,b,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right)
\int_{0}^{1} |\tau - c_{RH}(\tau)|^{\gamma_{0}+1}(1-u)^{\gamma_{0}}\\
\times |(1-u)c_{RH}(\tau) + u\tau|^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|(1-u)c_{RH}(\tau) + u\tau| \right) du.
\end{multline*}
Owing to the restriction (\ref{defin_cRH}) and in view of the bounds (\ref{checkA1_bds}), we deduce
\begin{multline}
\check{C}_{5.2}(\beta,\epsilon) \leq 2^{\gamma_{0}+1} \sup_{\tau \in RH_{a,b,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) |\tau| \right)\\
\leq 2^{\gamma_{0}+1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{(\gamma_{0}+\gamma_{1}+2)e^{-1}}{\sigma_{1} - \sigma_{1}'} \right)^{\gamma_{0}+\gamma_{1}+2}
\end{multline}
provided that $\beta \geq 0$, $\epsilon \in \dot{D}(0,\epsilon_{0})$. Hence, Lemma 11 is verified.
\end{proof}
Finally, according to (\ref{defin_norm_convolution_v_sigma1}), we notice that Proposition 14 is a direct consequence of Lemma 11 above.
\end{proof}
The proof of the next proposition closely mirrors that of Proposition 4.
\begin{prop} Let us consider some holomorphic function $c(\tau,z,\epsilon)$ on $\mathring{RH}_{a,b,\upsilon} \times D(0,\rho) \times
D(0,\epsilon_{0})$, continuous on $RH_{a,b,\upsilon} \times D(0,\rho) \times D(0,\epsilon_{0})$, for a radius $\rho>0$, bounded therein
by a constant $M_{c}>0$. Fix some $0 < \delta < \rho$. Then, the linear operator
$v(\tau,z) \mapsto c(\tau,z,\epsilon)v(\tau,z)$ is bounded from
$(EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)},||.||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)})$ into itself,
provided that $\epsilon \in \dot{D}(0,\epsilon_{0})$. Additionally, there exists a constant $C_{6}>0$ (depending on
$M_{c},\delta,\rho$), independent of $\epsilon$, such that
\begin{equation}
||c(\tau,z,\epsilon)v(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq C_{6}
||v(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}
\end{equation}
for all $v \in EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$.
\end{prop}
\subsection{Banach spaces of holomorphic functions with super exponential growth on $L-$shaped domains}
We will refer to the notations of Sections 3.1 and 4.1 within this subsection. Namely, we fix a closed horizontal strip $J$ as defined
in (\ref{defin_strip_J}), chosen among the family $\{ J_{k} \}_{k \in \llbracket -n,n \rrbracket}$
built up at the onset of Subsection 3.1, together with a closed rectangle $R_{c,d,\upsilon}$ as displayed in
(\ref{R_ab_u_a_positive}) and (\ref{R_ab_u_a_negative}) for some $c \neq 0$ and some $\upsilon>0$. The set $RJ_{c,d,\upsilon}$ stands for the
$L-$shaped domain $J \cup R_{c,d,\upsilon}$. See Figure~\ref{fig52}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{lshaped3.pdf}
\includegraphics[width=0.4\textwidth]{lshaped4.pdf}
\caption{Examples of sets $RJ_{c,d,\upsilon}=J\cup R_{c,d,\upsilon}$}
\label{fig52}
\end{figure}
\begin{defin}
Let $\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ be a tuple of positive real numbers and let $\beta \geq 0$ be an
integer. For all $\epsilon \in \dot{D}(0,\epsilon_{0})$, we define
$SEG_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}$ as the vector space of holomorphic functions
$v(\tau)$ on $\mathring{RJ}_{c,d,\upsilon}$, continuous on $RJ_{c,d,\upsilon}$ for which
$$ ||v(\tau)||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)} =
\sup_{\tau \in RJ_{c,d,\upsilon}} \frac{|v(\tau)|}{|\tau|} \exp \left( -\frac{\sigma_1}{|\epsilon|} r_{b}(\beta) |\tau|
- \varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) \right)
$$
is finite. Let $\delta>0$ be some positive number. The set $SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ stands
for the vector space of all formal series $v(\tau,z) = \sum_{\beta \geq 0} v_{\beta}(\tau) z^{\beta}/\beta!$ with coefficients
$v_{\beta}(\tau)$ belonging to $SEG_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}$ and whose norm
$$ ||v(\tau,z)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)} = \sum_{\beta \geq 0}
||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)} \frac{\delta^{\beta}}{\beta !}
$$
is finite. The space $SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ equipped with this norm is a Banach space.
\end{defin}
The next statement can be checked in exactly the same manner as Proposition 5 1).
\begin{prop}
Let $v(\tau,z) \in SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$. Fix some $0 < \delta_{1} < 1$. Then, we get a constant
$C_{7}>0$ (depending on $||v||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ and $\delta_{1}$) fulfilling
\begin{equation}
|v(\tau,z)| \leq C_{7}|\tau| \exp \left( \frac{\sigma_1}{|\epsilon|}\zeta(b)|\tau| + \varsigma_{2}\zeta(b) \exp(\varsigma_{3}|\tau|) \right)
\end{equation}
for all $\tau \in RJ_{c,d,\upsilon}$, all $z \in D(0,\delta_{1}\delta)$.
\end{prop}
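Although the proof is omitted (it parallels that of Proposition 5 1)), we sketch the single estimate it rests on; only Definition 6 and the bound $r_{b}(\beta) \leq \zeta(b)$, valid for every integer $\beta \geq 0$, are used. For $\tau \in RJ_{c,d,\upsilon}$ and $|z| \leq \delta_{1}\delta$,

```latex
|v(\tau,z)| \leq \sum_{\beta \geq 0} ||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}
\, |\tau| \exp \left( \frac{\sigma_{1}}{|\epsilon|} r_{b}(\beta)|\tau|
+ \varsigma_{2} r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right) \frac{(\delta_{1}\delta)^{\beta}}{\beta !}
```

and since $r_{b}(\beta) \leq \zeta(b)$ and $0 < \delta_{1} < 1$, the right-hand side is bounded by $|\tau| \exp \left( \frac{\sigma_{1}}{|\epsilon|}\zeta(b)|\tau| + \varsigma_{2}\zeta(b)\exp(\varsigma_{3}|\tau|) \right) ||v||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$, so that one may take $C_{7} = ||v||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$.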
In the upcoming propositions, we analyze the same convolution maps and multiplication operators by bounded holomorphic functions as those
worked out in Propositions 13, 14 and 15, now acting on
the Banach spaces introduced in Definition 6. As in Section 4.1, $L_{0,\tau}$ stands for a path defined as the union
$[0,c_{RJ}(\tau)] \cup [c_{RJ}(\tau),\tau]$, where $c_{RJ}(\tau)$ is selected with the following properties:
\begin{equation}
L_{0,\tau} \subset RJ_{c,d,\upsilon}, \ \ c_{RJ}(\tau) \in R_{c,d,\upsilon}, \ \ |c_{RJ}(\tau)| \leq |\tau| \label{defin_cRJ}
\end{equation}
whenever $\tau \in RJ_{c,d,\upsilon}$.
\begin{prop}
Let $\gamma_{0},\gamma_{1} \geq 0$ and $\gamma_{2} \geq 1$ be integers. We assume that
\begin{equation}
\gamma_{2} \geq b(\gamma_{0} + \gamma_{1} + 2)
\end{equation}
holds. Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear operator
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_{0}} s^{\gamma_1} \partial_{z}^{-\gamma_{2}}v(s,z) ds$ is bounded from
$SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ into itself. In addition, one gets a constant $C_{8}>0$
(depending on $\gamma_{0},\gamma_{1},\gamma_{2},\sigma_{1}$ and $b$) independent of $\epsilon$, such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}}
v(s,z) ds ||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}
\leq C_{8}|\epsilon|^{\gamma_{0}+\gamma_{1}+2} \delta^{\gamma_2} ||v(\tau,z)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}
\label{norm_conv_partialz_v_C8}
\end{equation}
for all $v(\tau,z) \in SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof}
Only a brief outline of the proof is presented hereafter since it closely resembles that of Proposition 13. Namely, it boils
down to showing the next lemma.
\begin{lemma}
Take $v_{\beta - \gamma_{2}}(\tau) \in SEG_{(\beta - \gamma_{2},\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}$ for all $\beta \geq \gamma_{2}$.
One can select a constant $C_{8.1}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2},\sigma_{1}$) for which
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)} \\
\leq
C_{8.1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}
\end{multline}
\end{lemma}
\begin{proof} As before, we split the convolution product into two pieces
\begin{multline*}
\tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds =
\tau \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds \\
+
\tau \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds
\end{multline*}
We seek estimates for the first part
$$ LJ_{1} = || \tau \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}. $$
We perform a factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right)
|\tau|
\left| \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| - \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3} |\tau| ) \right)
\left| \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} \right. \\
\times
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| - \varsigma_{2}r_{b}(\beta - \gamma_{2})
\exp( \varsigma_{3}|s| ) \right) v_{\beta - \gamma_{2}}(s) \} \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| + \varsigma_{2}r_{b}(\beta - \gamma_{2})
\exp( \varsigma_{3}|s|) \right) ds \right|.
\end{multline*}
This induces
\begin{equation}
LJ_{1} \leq C_{8.1.1}(\beta,\epsilon) ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}
\end{equation}
with
\begin{multline*}
C_{8.1.1}(\beta,\epsilon) = \sup_{\tau \in RJ_{c,d,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| - \varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3} |\tau| ) \right)
\int_{0}^{1} |\tau - c_{RJ}(\tau)u|^{\gamma_0} \\
\times |c_{RJ}(\tau)|^{\gamma_{1} + 2} u^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2}) |c_{RJ}(\tau)u| +
\varsigma_{2}r_{b}(\beta - \gamma_{2})\exp( \varsigma_{3} |c_{RJ}(\tau) u| ) \right) du.
\end{multline*}
According to the properties (\ref{defin_cRJ}), we observe in particular that
\begin{multline}
-\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) + \varsigma_{2}r_{b}(\beta - \gamma_{2})\exp( \varsigma_{3}|c_{RJ}(\tau)|u )\\
\leq
\varsigma_{2}(r_{b}(\beta - \gamma_{2}) - r_{b}(\beta))\exp( \varsigma_{3}|\tau| ) \leq 0
\end{multline}
for all $\tau \in RJ_{c,d,\upsilon}$, all $0 \leq u \leq 1$. In addition, taking into account the bounds
(\ref{difference_s_b_r_b}), (\ref{A_1_bounds}), we get, in a similar way to (\ref{C511_bds}), that
\begin{multline*}
C_{8.1.1}(\beta,\epsilon) \leq 2^{\gamma_0} \sup_{\tau \in RJ_{c,d,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) |\tau| \right)\\
\leq 2^{\gamma_0} \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}\frac{\gamma_2}{(\beta + 1)^{b}} x \right)\\
\leq
2^{\gamma_0} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{\gamma_{0}+\gamma_{1}+2}{\sigma_{1} \gamma_{2}} \right)^{\gamma_{0}+\gamma_{1}+2} \exp( -(\gamma_{0}+\gamma_{1}+2) )
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
\end{multline*}
for all $\beta \geq \gamma_{2}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
In the last part, we focus on
$$ LJ_{2} = ||\tau \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0}
s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}. $$
As above, we perform a factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| )
\right) |\tau|
\left| \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| )
\right) \left| \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} \right.\\
\times
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| -\varsigma_{2}r_{b}(\beta-\gamma_{2})
\exp(\varsigma_{3}|s|) \right) v_{\beta - \gamma_{2}}(s) \} \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| + \varsigma_{2}r_{b}(\beta - \gamma_{2})
\exp(\varsigma_{3}|s|) \right) ds \right|.
\end{multline*}
It follows that
\begin{equation}
LJ_{2} \leq C_{8.1.2}(\beta,\epsilon) ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}
\end{equation}
with
\begin{multline*}
C_{8.1.2}(\beta,\epsilon) = \sup_{\tau \in RJ_{c,d,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp(\varsigma_{3}|\tau|) \right)
\int_{0}^{1} |\tau - c_{RJ}(\tau)|^{\gamma_{0}+1}(1-u)^{\gamma_{0}}\\
\times |(1-u)c_{RJ}(\tau) + u\tau|^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|(1-u)c_{RJ}(\tau) + u\tau| \right. \\
\left. +\varsigma_{2}r_{b}(\beta - \gamma_{2}) \exp( \varsigma_{3}|(1-u)c_{RJ}(\tau) + u\tau| ) \right) du.
\end{multline*}
In view of the properties (\ref{defin_cRJ}) of the path $L_{0,\tau}$, we notice that
\begin{multline}
-\varsigma_{2}r_{b}(\beta)\exp(\varsigma_{3}|\tau|)
+\varsigma_{2}r_{b}(\beta - \gamma_{2}) \exp( \varsigma_{3}|(1-u)c_{RJ}(\tau) + u\tau| )\\
\leq -\varsigma_{2}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) \exp( \varsigma_{3}|\tau|) \leq 0
\end{multline}
for all $\tau \in RJ_{c,d,\upsilon}$, all $0 \leq u \leq 1$. Keeping in mind (\ref{difference_s_b_r_b}), (\ref{A_1_bounds}), we obtain as
above
\begin{multline*}
C_{8.1.2}(\beta,\epsilon) \leq 2^{\gamma_{0}+1} \sup_{\tau \in RJ_{c,d,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) |\tau| \right)\\
\leq
2^{\gamma_{0}+1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{\gamma_{0}+\gamma_{1}+2}{\sigma_{1} \gamma_{2}} \right)^{\gamma_{0}+\gamma_{1}+2} \exp( -(\gamma_{0}+\gamma_{1}+2) )
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
\end{multline*}
for all $\beta \geq \gamma_{2}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$. Lemma 12 follows.
\end{proof}
\end{proof}
\begin{prop} Let $\gamma_{0}$ and $\gamma_{1}$ be non negative integers. Let us select two tuples of positive real numbers
$\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ and
$\underline{\varsigma}' = (\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ such that
\begin{equation}
\sigma_{1} > \sigma_{1}', \ \ \varsigma_{2} > \varsigma_{2}', \ \ \varsigma_{3} = \varsigma_{3}'.
\end{equation}
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the map
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_{0}} s^{\gamma_1} v(s,z) ds$ is a linear bounded operator from
$SEG_{(\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon,\delta)}$ into $SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$.
Besides, one can choose a constant $\check{C}_{8}>0$
(depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) independent of $\epsilon$, such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1}
v(s,z) ds ||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}
\leq \check{C}_{8}|\epsilon|^{\gamma_{0}+\gamma_{1}+2} ||v(\tau,z)||_{(\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon,\delta)}
\label{norm_conv_v_checkC8}
\end{equation}
for all $v(\tau,z) \in SEG_{(\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon,\delta)}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} As above, we only concentrate on the main part of the proof since it is very close to that of Proposition 14. More precisely,
we are reduced to proving the next lemma.
\begin{lemma} Let $v_{\beta}(\tau)$ belong to $SEG_{(\beta,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)}$.
One can select a constant $\check{C}_{8}>0$ (depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds ||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}
\leq
\check{C}_{8} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)}
\end{equation}
for all $\beta \geq 0$.
\end{lemma}
\begin{proof} We first split the integral into two pieces
$$
\tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds =
\tau \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds
+
\tau \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds
$$
We first seek bounds for
$$ \check{LJ}_{1} = || \tau \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds ||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}. $$
The next factorization holds
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right)
|\tau|
\left| \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| - \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3} |\tau| ) \right)
\left| \int_{0}^{c_{RJ}(\tau)} (\tau - s)^{\gamma_0} s^{\gamma_1} \right. \\
\times
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| - \varsigma_{2}'r_{b}(\beta)
\exp( \varsigma_{3}|s| ) \right) v_{\beta}(s) \} \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| + \varsigma_{2}'r_{b}(\beta)
\exp( \varsigma_{3}|s|) \right) ds \right|.
\end{multline*}
This induces
\begin{equation}
\check{LJ}_{1} \leq \check{C}_{8.1}(\beta,\epsilon) ||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)}
\end{equation}
where
\begin{multline*}
\check{C}_{8.1}(\beta,\epsilon) = \sup_{\tau \in RJ_{c,d,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| - \varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3} |\tau| ) \right)
\int_{0}^{1} |\tau - c_{RJ}(\tau)u|^{\gamma_0} \\
\times |c_{RJ}(\tau)|^{\gamma_{1} + 2} u^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta) |c_{RJ}(\tau)u| +
\varsigma_{2}'r_{b}(\beta)\exp( \varsigma_{3} |c_{RJ}(\tau) u| ) \right) du.
\end{multline*}
In accordance with the construction of the path $L_{0,\tau}$ described in (\ref{defin_cRJ}), we obtain
\begin{equation}
-\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) + \varsigma_{2}'r_{b}(\beta)\exp( \varsigma_{3}|c_{RJ}(\tau)|u )
\leq (\varsigma_{2}' - \varsigma_{2})r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \leq 0
\end{equation}
for all $\tau \in RJ_{c,d,\upsilon}$, all $0 \leq u \leq 1$.
Besides, taking into account the bounds (\ref{checkA1_bds}), we deduce
\begin{multline}
\check{C}_{8.1}(\beta,\epsilon) \leq 2^{\gamma_0} \sup_{\tau \in RJ_{c,d,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) |\tau| \right)\\
\leq
2^{\gamma_0} \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2} \exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) x \right)
\leq 2^{\gamma_0} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{(\gamma_{0}+\gamma_{1}+2)e^{-1}}{\sigma_{1} - \sigma_{1}'} \right)^{\gamma_{0}+\gamma_{1}+2}
\end{multline}
for all $\beta \geq 0$, $\epsilon \in \dot{D}(0,\epsilon_{0})$.
In the second part, we focus on
$$ \check{LJ}_{2} = ||\tau \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0}
s^{\gamma_1} v_{\beta}(s) ds||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)}. $$
Again we use a factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| )
\right) |\tau|
\left| \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| )
\right) \left| \int_{c_{RJ}(\tau)}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} \right.\\
\times
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| -\varsigma_{2}'r_{b}(\beta)
\exp(\varsigma_{3}|s|) \right) v_{\beta}(s) \} \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| + \varsigma_{2}'r_{b}(\beta)
\exp(\varsigma_{3}|s|) \right) ds \right|.
\end{multline*}
It follows that
\begin{equation}
\check{LJ}_{2} \leq \check{C}_{8.2}(\beta,\epsilon) ||v_{\beta}(\tau)||_{(\beta,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)}
\end{equation}
with
\begin{multline*}
\check{C}_{8.2}(\beta,\epsilon) = \sup_{\tau \in RJ_{c,d,\upsilon}}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| -\varsigma_{2}r_{b}(\beta)\exp(\varsigma_{3}|\tau|) \right)
\int_{0}^{1} |\tau - c_{RJ}(\tau)|^{\gamma_{0}+1}(1-u)^{\gamma_{0}}\\
\times |(1-u)c_{RJ}(\tau) + u\tau|^{\gamma_{1}+1}
\exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|(1-u)c_{RJ}(\tau) + u\tau| \right. \\
\left. +\varsigma_{2}'r_{b}(\beta) \exp( \varsigma_{3}|(1-u)c_{RJ}(\tau) + u\tau| ) \right) du.
\end{multline*}
The construction of $L_{0,\tau}$ through (\ref{defin_cRJ}) entails
\begin{multline}
-\varsigma_{2}r_{b}(\beta)\exp(\varsigma_{3}|\tau|)
+\varsigma_{2}'r_{b}(\beta) \exp( \varsigma_{3}|(1-u)c_{RJ}(\tau) + u\tau| )\\
\leq -(\varsigma_{2} - \varsigma_{2}')r_{b}(\beta) \exp( \varsigma_{3}|\tau|) \leq 0
\end{multline}
for all $\tau \in RJ_{c,d,\upsilon}$, all $0 \leq u \leq 1$.
According to the bounds (\ref{checkA1_bds}), we get
\begin{multline}
\check{C}_{8.2}(\beta,\epsilon) \leq 2^{\gamma_{0}+1} \sup_{\tau \in RJ_{c,d,\upsilon}} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) |\tau| \right)\\
\leq 2^{\gamma_{0}+1} |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{(\gamma_{0}+\gamma_{1}+2)e^{-1}}{\sigma_{1} - \sigma_{1}'} \right)^{\gamma_{0}+\gamma_{1}+2}
\end{multline}
for all $\beta \geq 0$, $\epsilon \in \dot{D}(0,\epsilon_{0})$. Lemma 13 follows.
\end{proof}
\end{proof}
The proof of the next proposition is a straightforward adaptation of the one given for Proposition 4 and is therefore omitted.
\begin{prop} Let us consider some holomorphic function $c(\tau,z,\epsilon)$ on $\mathring{RJ}_{c,d,\upsilon} \times D(0,\rho) \times
D(0,\epsilon_{0})$, continuous on $RJ_{c,d,\upsilon} \times D(0,\rho) \times D(0,\epsilon_{0})$, for a radius $\rho>0$, bounded therein
by a constant $M_{c}>0$. Fix some $0 < \delta < \rho$. Then, the linear operator
$v(\tau,z) \mapsto c(\tau,z,\epsilon)v(\tau,z)$ is bounded from
$SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ into itself,
provided that $\epsilon \in \dot{D}(0,\epsilon_{0})$. Additionally, a constant $C_{9}>0$ (depending on
$M_{c},\delta,\rho$) independent of $\epsilon$ exists in a way that
\begin{equation}
||c(\tau,z,\epsilon)v(\tau,z)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)} \leq C_{9}
||v(\tau,z)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}
\end{equation}
for all $v \in SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$.
\end{prop}
\subsection{Continuity bounds for linear convolution operators acting on the Banach spaces
$EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$}
We keep the notations of Section 3.2. In the next two propositions, we inspect linear maps built as convolution
products acting on the Banach spaces of functions with exponential growth on sectors introduced in Definition 2. In the sequel,
$S_{d}$ will denote one of the sectors $S_{d_p}$, $0 \leq p \leq \iota-1$, introduced just after Definition 4. For all
$\tau \in S_{d} \cup D(0,r)$, $L_{0,\tau}$ merely denotes the segment $[0,\tau]$, which obviously belongs to
$S_{d} \cup D(0,r)$.
\begin{prop}
Let $\gamma_{0},\gamma_{1} \geq 0$ and $\gamma_{2} \geq 1$ be integers. We assume that
\begin{equation}
\gamma_{2} \geq b(\gamma_{0} + \gamma_{1} + 2)
\end{equation}
holds. Then, for any given $\epsilon$ in $\dot{D}(0,\epsilon_{0})$, the map
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0}s^{\gamma_1} \partial_{z}^{-\gamma_2}v(s,z) ds$ represents a bounded linear
operator from $EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$ into itself. Moreover, there exists a constant
$C_{10}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2}$, $\sigma_{1}$ and $b$) independent of $\epsilon$, for which
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} \partial_{z}^{-\gamma_{2}}
v(s,z) ds ||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}\\
\leq C_{10}|\epsilon|^{\gamma_{0}+\gamma_{1}+2} \delta^{\gamma_2} ||v(\tau,z)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}
\label{norm_conv_partialz_v_C10}
\end{multline}
provided that $v(\tau,z) \in EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$ and $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} Since the proof mirrors the one presented for Proposition 13, we only focus on the next lemma.
\begin{lemma} Let $v_{\beta - \gamma_{2}}(\tau)$ belong to $EG_{(\beta - \gamma_{2},\sigma_{1},S_{d} \cup D(0,r),\epsilon)}$.
Then, one can select a constant $C_{10.1}>0$ (depending on $\gamma_{0},\gamma_{1},\gamma_{2}$ and $\sigma_{1}$) such that
\begin{multline}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\\
\leq C_{10.1} |\epsilon|^{\gamma_{0} + \gamma_{1} + 2}(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)}
||v_{\beta - \gamma_{2}}(\tau) ||_{(\beta - \gamma_{2},\sigma_{1},S_{d} \cup D(0,r),\epsilon)} \label{norm_conv_v_beta_minus_gamma2_C10.1}
\end{multline}
for all $\beta \geq \gamma_{2}$.
\end{lemma}
\begin{proof} We first perform a factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta - \gamma_{2}}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) v_{\beta - \gamma_{2}}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2})|s| \right) ds \right|.
\end{multline*}
We deduce that
\begin{equation}
|| \tau \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta - \gamma_{2}}(s) ds ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq C_{10.1}(\beta,\epsilon) ||v_{\beta - \gamma_{2}}(\tau)||_{(\beta - \gamma_{2},\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\end{equation}
where $C_{10.1}(\beta,\epsilon)$ fulfills the next bounds, with the help of (\ref{difference_s_b_r_b}), (\ref{A_1_bounds}),
\begin{multline}
C_{10.1}(\beta,\epsilon) = \sup_{\tau \in S_{d} \cup D(0,r)}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \int_{0}^{1} |\tau|^{\gamma_{0}+\gamma_{1}+2} (1-u)^{\gamma_0}
u^{\gamma_{1}+1}\\
\times \exp \left( \frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta - \gamma_{2}) |\tau| u \right) du\\
\leq \sup_{\tau \in S_{d} \cup D(0,r)} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}(r_{b}(\beta) - r_{b}(\beta - \gamma_{2})) |\tau| \right)\\
\leq \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}\frac{\gamma_2}{(\beta + 1)^{b}} x \right)\\
\leq |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{\gamma_{0}+\gamma_{1}+2}{\sigma_{1} \gamma_{2}} \right)^{\gamma_{0}+\gamma_{1}+2} \exp( -(\gamma_{0}+\gamma_{1}+2) )
(\beta + 1)^{b(\gamma_{0}+\gamma_{1}+2)} \label{C10.1_bds}
\end{multline}
for all $\beta \geq \gamma_{2}$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$. This yields Lemma 14.
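As an aside, the optimization carried out in (\ref{C10.1_bds}) can be double-checked numerically. The short script below is an illustration only (the function names and the grid are ours, not part of the paper); it compares a brute-force evaluation of $\sup_{x \geq 0} x^{k}e^{-\alpha x}$ with the closed form $(k/\alpha)^{k}e^{-k}$ used throughout this section.

```python
import math

def closed_form_sup(k, alpha):
    # sup_{x >= 0} x^k * exp(-alpha * x), attained at x = k / alpha
    return (k / alpha) ** k * math.exp(-k)

def brute_force_sup(k, alpha, n=100_000):
    # Evaluate x^k * exp(-alpha * x) on a fine grid covering [0, 4k/alpha],
    # an interval that contains the maximizer x = k / alpha
    x_star = k / alpha
    return max((4.0 * x_star * i / n) ** k * math.exp(-alpha * 4.0 * x_star * i / n)
               for i in range(1, n + 1))

# Example: k = gamma_0 + gamma_1 + 2 = 4 and some alpha = sigma_1 * gamma_2 / (|eps| (beta+1)^b)
print(closed_form_sup(4, 2.5))   # closed-form supremum
print(brute_force_sup(4, 2.5))   # grid value, never exceeding the closed form
```

The grid value agrees with the closed form to high accuracy and never exceeds it, consistent with the chain of inequalities above.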
\end{proof}
\end{proof}
\begin{prop} Let $\gamma_{0},\gamma_{1} \geq 0$ be integers. Let $\sigma_{1},\sigma_{1}'>0$ be real numbers
satisfying $\sigma_{1} > \sigma_{1}'$.
Then, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, the linear map
$v(\tau,z) \mapsto \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0}s^{\gamma_1}v(s,z) ds$ is a bounded operator from
$EG_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)}$ into
$EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$. Furthermore, one can find
a constant $\check{C}_{10}>0$ (depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0}s^{\gamma_1}v(s,z) ds ||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} \leq
\check{C}_{10} |\epsilon|^{\gamma_{0}+\gamma_{1}+2} ||v(\tau,z)||_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)}
\end{equation}
for all $v(\tau,z) \in EG_{(\sigma_{1}',S_{d} \cup D(0,r),\epsilon,\delta)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof}
The proof mimics the one of Proposition 14 and is based on the next lemma.
\begin{lemma}
One can find a constant $\check{C}_{10}>0$ (depending on $\gamma_{0},\gamma_{1},\sigma_{1}$ and $\sigma_{1}'$) such that
\begin{equation}
|| \tau \int_{L_{0,\tau}} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq \check{C}_{10} |\epsilon|^{\gamma_{0} + \gamma_{1} + 2}
||v_{\beta}(\tau) ||_{(\beta,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)} \label{norm_conv_v_beta_sigma1_sigma1_prim_checkC10}
\end{equation}
for all $\beta \geq 0$.
\end{lemma}
\begin{proof}
We apply the next factorization
\begin{multline*}
\frac{1}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) |\tau|
\left| \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
v_{\beta}(s) ds \right|\\
= \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \left| \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1}
\{ \frac{1}{|s|} \exp \left( - \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) v_{\beta}(s) \} \right. \\
\left. \times
|s| \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta)|s| \right) ds \right|.
\end{multline*}
This entails
\begin{equation}
|| \tau \int_{0}^{\tau} (\tau - s)^{\gamma_0} s^{\gamma_1} v_{\beta}(s) ds ||_{(\beta,\sigma_{1},S_{d} \cup D(0,r),\epsilon)}
\leq \check{C}_{10}(\beta,\epsilon) ||v_{\beta}(\tau)||_{(\beta,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)}
\end{equation}
where $\check{C}_{10}(\beta,\epsilon)$ is subject to the next bounds, in view of (\ref{checkA1_bds}),
\begin{multline}
\check{C}_{10}(\beta,\epsilon) = \sup_{\tau \in S_{d} \cup D(0,r)}
\exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right) \int_{0}^{1} |\tau|^{\gamma_{0}+\gamma_{1}+2} (1 - u)^{\gamma_0}
u^{\gamma_{1}+1}\\
\times \exp \left( \frac{\sigma_{1}'}{|\epsilon|}r_{b}(\beta) |\tau| u \right) du\\
\leq \sup_{\tau \in S_{d} \cup D(0,r)} |\tau|^{\gamma_{0}+\gamma_{1}+2}
\exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) |\tau| \right)\\
\leq \sup_{x \geq 0} x^{\gamma_{0}+\gamma_{1}+2} \exp \left( -\frac{\sigma_{1} - \sigma_{1}'}{|\epsilon|} r_{b}(\beta) x \right)
\leq |\epsilon|^{\gamma_{0}+\gamma_{1}+2}
\left( \frac{(\gamma_{0}+\gamma_{1}+2)e^{-1}}{\sigma_{1} - \sigma_{1}'} \right)^{\gamma_{0}+\gamma_{1}+2}
\end{multline}
for all $\beta \geq 0$, $\epsilon \in \dot{D}(0,\epsilon_{0})$. Lemma 15 follows.
\end{proof}
\end{proof}
\subsection{An accessory convolution problem with rational coefficients}
We set $\mathcal{B}$ as a finite subset of $\mathbb{N}^{3}$. For any $\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}$, we consider
a bounded holomorphic function $d_{\underline{l}}(z,\epsilon)$ on a polydisc $D(0,\rho) \times D(0,\epsilon_{0})$ for some radii
$\rho,\epsilon_{0}>0$.
Let $S_{\mathcal{B}} \geq 1$ be an integer and let $P_{\mathcal{B}}(\tau)$ be a polynomial (not identically equal to 0) with complex
coefficients which is either constant or has all its roots located
in the open right halfplane $\mathbb{C}_{+} = \{ z \in \mathbb{C} / \mathrm{Re}(z) > 0 \}$. We introduce the following notations. When
$\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}$, we put $d_{l_{0},l_{1}} = l_{0} - 2l_{1}$ and assume that $d_{l_{0},l_{1}} \geq 1$;
when $l_{1} \geq 2$, we also fix real numbers $A_{l_{1},p}$ for all $1 \leq p \leq l_{1}-1$. When $\tau \in \mathbb{C}$, the symbol
$L_{0,\tau}$ stands for a path in $\mathbb{C}$ joining $0$ and $\tau$ as constructed in the previous subsections.
We focus on the following convolution equation
\begin{multline}
\partial_{z}^{S_{\mathcal{B}}}v(\tau,z,\epsilon) = \sum_{\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \left \{ \frac{ \epsilon^{l_{1} - l_{0}} \tau}{\Gamma( d_{l_{0},l_{1}} )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_2}v(s,z,\epsilon) \frac{ds}{s} \right. \\
\left . +
\sum_{1 \leq p \leq l_{1}-1} A_{l_{1},p} \frac{ \epsilon^{l_{1} - l_{0}} \tau }{\Gamma( d_{l_{0},l_{1}} + (l_{1} -p) )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p} \partial_{z}^{l_2}v(s,z,\epsilon) \frac{ds}{s}
\right \} + w(\tau,z,\epsilon) \label{ACP_forcterm_w}
\end{multline}
where $w(\tau,z,\epsilon)$ stands for any of the solutions of the equation (\ref{1_aux_CP}) constructed in Propositions 10 and 11. We
use the convention that the
sum $\sum_{1 \leq p \leq l_{1}-1}$ is reduced to 0 when $l_{1}=1$.
In the next assertion, we build solutions to the convolution equation (\ref{ACP_forcterm_w}) within the three families of Banach spaces
described in Definitions 2, 5 and 6.
\begin{prop} 1) We require the following constraints.\\
a) There exists a real number $b>1$ such that for all $\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}$,
\begin{equation}
S_{\mathcal{B}} \geq b(l_{0} - l_{1}) + l_{2} \ \ , \ \ S_{\mathcal{B}} > l_{2} \ \ , \ \ l_{1} \geq 1 \label{cond_S_B_b_l}
\end{equation}
holds.\\
b) For all $0 \leq j \leq S_{\mathcal{B}} - 1$, let $\tau \mapsto v_{j}(\tau,\epsilon)$ be a function that belongs to the Banach space
$EG_{(0,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, for an $L$-shaped domain
$RH_{a,b,\upsilon}$ displayed at the onset of Subsection 4.1 and some real number $\sigma_{1}'>0$. Furthermore, we assume the existence of
positive real numbers $J,\delta>0$ for which
\begin{equation}
\sum_{j=0}^{S_{\mathcal{B}} - 1 -h} ||v_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)} \frac{\delta^{j}}{j!} \leq J
\label{i_d_v_j_less_J}
\end{equation}
for any $0 \leq h \leq S_{\mathcal{B}} - 1$, for $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Then, for any given $\sigma_{1} > \sigma_{1}'$ and for a suitable choice of constants $\Lambda>0$ and $0 < \delta < \rho$, the equation
(\ref{ACP_forcterm_w}), where the forcing term $w(\tau,z,\epsilon)$ is replaced by $w_{HJ_n}(\tau,z,\epsilon)$, together with the initial data
\begin{equation}
(\partial_{z}^{j}v)(\tau,0,\epsilon) = v_{j}(\tau,\epsilon) \ \ , \ \ 0 \leq j \leq S_{\mathcal{B}} - 1 \label{ACP_forcterm_w_i_d}
\end{equation}
has a unique solution $v(\tau,z,\epsilon)$ in the space $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$, which satisfies the bounds
\begin{equation}
||v(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq \delta^{S_{\mathcal{B}}}\Lambda + J
\label{norm_v_RHab_less_J}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2) We require the following restrictions to hold.\\
a) There exists a real number $b>1$ for which (\ref{cond_S_B_b_l}) occurs.\\
b) For all $0 \leq j \leq S_{\mathcal{B}}-1$, let $\tau \mapsto v_{j}(\tau,\epsilon)$ be a function that belongs to the Banach space
$SEG_{(0,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)}$, for any $\epsilon \in \dot{D}(0,\epsilon_{0})$, for some
$L$-shaped domain $RJ_{c,d,\upsilon}$ described at the beginning of Subsection 4.2 and for some tuple
$\underline{\varsigma}'=(\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ with $\sigma_{1}'>0$, $\varsigma_{2}'>0$ and $\varsigma_{3}'>0$. Moreover,
we can select real numbers $J,\delta>0$ with
$$
\sum_{j=0}^{S_{\mathcal{B}} - 1 -h} ||v_{j+h}(\tau,\epsilon)||_{(0,\underline{\varsigma}',RJ_{c,d,\upsilon},\epsilon)} \frac{\delta^{j}}{j!} \leq J
$$
for any $0 \leq h \leq S_{\mathcal{B}} - 1$, for $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Then, for any given tuple $\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ with $\sigma_{1} > \sigma_{1}'$,
$\varsigma_{2}>\varsigma_{2}'$ and $\varsigma_{3}=\varsigma_{3}'$, and for an appropriate choice of constants $\Lambda>0$ and
$0 < \delta < \rho$, the equation (\ref{ACP_forcterm_w}), where the forcing term $w(\tau,z,\epsilon)$ is replaced by
$w_{HJ_n}(\tau,z,\epsilon)$, together with the initial data (\ref{ACP_forcterm_w_i_d}) possesses a unique
solution $v(\tau,z,\epsilon)$ in the space $SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$ which satisfies the bounds
\begin{equation}
||v(\tau,z,\epsilon)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)} \leq \delta^{S_{\mathcal{B}}}\Lambda + J
\label{norm_v_RJcd_less_J}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
3) We make the following assumptions.\\
a) For a suitable real number $b>1$, the inequalities (\ref{cond_S_B_b_l}) hold.\\
b) For each $0 \leq j \leq S_{\mathcal{B}}-1$, we single out a function $\tau \mapsto v_{j}(\tau,\epsilon)$ belonging to the Banach space
$EG_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, where $S_{d}$ is one of the sectors
$S_{d_p}$, $0 \leq p \leq \iota-1$, displayed after Definition 4, for some real number $\sigma_{1}'>0$. Furthermore, we assume that no root
of $P_{\mathcal{B}}(\tau)$ is located in $\bar{S}_{d} \cup \bar{D}(0,r)$. We impose the existence of two real numbers $J,\delta>0$ such that
$$
\sum_{j=0}^{S_{\mathcal{B}} - 1 -h} ||v_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',S_{d} \cup D(0,r),\epsilon)} \frac{\delta^{j}}{j!} \leq J
$$
holds for any $0 \leq h \leq S_{\mathcal{B}} - 1$, for $\epsilon \in \dot{D}(0,\epsilon_{0})$.
Then, for any given $\sigma_{1}>\sigma_{1}'$ and for an appropriate choice of constants $\Lambda>0$ and $0 < \delta < \rho$, the equation
(\ref{ACP_forcterm_w}), where the forcing term $w(\tau,z,\epsilon)$ is replaced by $w_{S_d}(\tau,z,\epsilon)$, together with the
initial data (\ref{ACP_forcterm_w_i_d}) has a unique solution $v(\tau,z,\epsilon)$ in the space
$EG_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)}$ satisfying the bounds
\begin{equation}
||v(\tau,z,\epsilon)||_{(\sigma_{1},S_{d} \cup D(0,r),\epsilon,\delta)} \leq \delta^{S_{\mathcal{B}}}\Lambda + J
\label{norm_v_Sd_less_J}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{prop}
\begin{proof} We only detail the first point 1), since a similar argument applies to the second (resp. third) point by
merely replacing Propositions 13, 14 and 15 with Propositions 17, 18 and 19 (resp. 20, 21 and 8).
We keep the notations of Subsection 3.1 and begin with a lemma dealing with the forcing term
$w(\tau,z,\epsilon)$ of the equation (\ref{ACP_forcterm_w}).
\begin{lemma}
1) The formal series $w_{HJ_n}(\tau,z,\epsilon)$ built in (\ref{formal_wHJn}) belongs both to the spaces\\
$EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ and $SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$
for the tuples $\underline{\sigma},\underline{\varsigma}$ and $\delta$ considered in Proposition 10, for any choice of
$\upsilon<0$, provided that the sector
$H$ from $RH_{a,b,\upsilon}$ belongs to the set $\{ H_{k} \}_{k \in \llbracket -n,n \rrbracket}$ and
the sector $J$ from $RJ_{c,d,\upsilon}$ belongs to $\{ J_{k} \}_{k \in \llbracket -n,n \rrbracket}$. Moreover, there exist constants
$\tilde{C}_{RH_{a,b,\upsilon}}>0$ and $\tilde{C}_{RJ_{c,d,\upsilon}}>0$ for which
\begin{equation}
||w_{HJ_n}(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq \tilde{C}_{RH_{a,b,\upsilon}}, \ \
||w_{HJ_n}(\tau,z,\epsilon)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)} \leq \tilde{C}_{RJ_{c,d,\upsilon}}
\label{bds_wHJn_on_Lshaped}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2) The formal series $w_{S_{d_p}}(\tau,z,\epsilon)$ defined in (\ref{defin_w_S_dp}) belongs to the space
$EG_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)}$. Besides, there exists a constant $\tilde{C}_{S_{d_p}}>0$ with
\begin{equation}
||w_{S_{d_p}}(\tau,z,\epsilon)||_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)} \leq \tilde{C}_{S_{d_p}}
\end{equation}
whenever $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{lemma}
\begin{proof} We focus on the first point 1). According to (\ref{formal_wHJn}), the formal series
$w_{HJ_n}(\tau,z,\epsilon)$ has the following expansion
$w_{HJ_n}(\tau,z,\epsilon) = \sum_{\beta \geq 0} w_{\beta}(\tau,\epsilon) z^{\beta}/\beta!$ where
$w_{\beta}(\tau,\epsilon)$ stand for holomorphic functions on $\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous
on $HJ_{n} \times \dot{D}(0,\epsilon_{0})$, for all $\beta \geq 0$. Besides, the estimates (\ref{norm_wHJn_Hk}) and
(\ref{norm_wHJn_Jk}) hold.
We first prove that $w_{HJ_n}(\tau,z,\epsilon)$ belongs to $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$.
We need upper bounds for the quantity
$$ Rw_{a,b}(\beta,\epsilon) = \sup_{\tau \in R_{a,b,\upsilon}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta) |\tau| \right). $$
Since $R_{a,b,\upsilon} \subset HJ_{n} = \cup_{k \in \llbracket -n,n \rrbracket} H_{k} \cup J_{k}$, we get in particular the coarse bounds
\begin{multline}
Rw_{a,b}(\beta,\epsilon) \leq \sum_{k \in \llbracket -n,n \rrbracket} \sup_{\tau \in R_{a,b,\upsilon} \cap H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)\\
+ \sum_{k \in \llbracket -n,n \rrbracket} \sup_{\tau \in R_{a,b,\upsilon} \cap J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right). \label{Rw_defin}
\end{multline}
The sums above are taken over the integers $k$ for which $R_{a,b,\upsilon} \cap H_{k}$ and
$R_{a,b,\upsilon} \cap J_{k}$ are not empty. But, we observe that
\begin{multline}
\sup_{\tau \in R_{a,b,\upsilon} \cap H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)\\
\leq
\sup_{\tau \in H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| +
\sigma_{2}s_{b}(\beta)\exp( \sigma_{3} |\tau| ) \right) = ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\sigma},H_{k},\epsilon)} \label{Rw_first}
\end{multline}
and if one sets
$$ \mathcal{C}_{a,b,\upsilon,k} = \sup_{\tau \in R_{a,b,\upsilon} \cap J_{k}}
\exp \left( \varsigma_{2} \zeta(b) \exp( \varsigma_{3}|\tau| ) \right) $$
we see that
\begin{multline}
\sup_{\tau \in R_{a,b,\upsilon} \cap J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)
= \sup_{\tau \in R_{a,b,\upsilon} \cap J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)\\
\times
\exp( - \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) )
\times \exp( \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) ) \leq
\mathcal{C}_{a,b,\upsilon,k} \sup_{\tau \in J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)\\
\times
\exp( - \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) )
=\mathcal{C}_{a,b,\upsilon,k} ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},J_{k},\epsilon)}. \label{Rw_second}
\end{multline}
Hence, gathering (\ref{Rw_defin}) with (\ref{Rw_first}) and (\ref{Rw_second}) yields
\begin{equation}
Rw_{a,b}(\beta,\epsilon) \leq \sum_{k \in \llbracket -n,n \rrbracket} ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\sigma},H_{k},\epsilon)}
+ \sum_{k \in \llbracket -n,n \rrbracket} \mathcal{C}_{a,b,\upsilon,k}||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},J_{k},\epsilon)}
\label{Rw_third}
\end{equation}
Now, we notice that
\begin{multline}
||w_{\beta}(\tau,\epsilon)||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)} \leq
\sup_{\tau \in R_{a,b,\upsilon}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| \right)\\
+ \sup_{\tau \in H} \frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
+ \sigma_{2}s_{b}(\beta)\exp( \sigma_{3}|\tau| ) \right) = Rw_{a,b}(\beta,\epsilon) +
||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\sigma},H,\epsilon)} \label{norm_w_beta_RHabu}
\end{multline}
Finally, combining (\ref{Rw_third}) and (\ref{norm_w_beta_RHabu}) yields
\begin{equation}
||w_{HJ_{n}}(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq
\sum_{k \in \llbracket -n,n \rrbracket} \tilde{C}_{H_{k}} + \sum_{k \in \llbracket -n,n \rrbracket} \mathcal{C}_{a,b,\upsilon,k}
\tilde{C}_{J_{k}} + \tilde{C}_{H}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, bearing in mind the notations within the bounds (\ref{norm_wHJn_Hk}) and (\ref{norm_wHJn_Jk}).
In a second step, we show that $w_{HJ_n}(\tau,z,\epsilon)$ belongs to
$SEG_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)}$.
We search for upper bounds concerning
$$ RJw_{c,d}(\beta,\epsilon) = \sup_{\tau \in R_{c,d,\upsilon}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)
|\tau| - \varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3} |\tau| ) \right). $$
According to the inclusion $R_{c,d,\upsilon} \subset HJ_{n} = \cup_{k \in \llbracket -n,n \rrbracket} H_{k} \cup J_{k}$, we observe that
\begin{multline}
RJw_{c,d}(\beta,\epsilon) \leq \sum_{k \in \llbracket -n,n \rrbracket} \sup_{\tau \in R_{c,d,\upsilon} \cap H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| -
\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) \right)\\
+ \sum_{k \in \llbracket -n,n \rrbracket} \sup_{\tau \in R_{c,d,\upsilon} \cap J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| -
\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) \right). \label{RJw_defin}
\end{multline}
As above, the sums in the latter inequality are taken over the integers $k$ for which $R_{c,d,\upsilon} \cap H_{k}$ and
$R_{c,d,\upsilon} \cap J_{k}$ are not empty. Furthermore, we see that
\begin{multline}
\sup_{\tau \in R_{c,d,\upsilon} \cap H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| -
\varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3} |\tau| ) \right)\\
\leq
\sup_{\tau \in H_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| +
\sigma_{2}s_{b}(\beta)\exp( \sigma_{3} |\tau| ) \right) = ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\sigma},H_{k},\epsilon)} \label{RJw_first}
\end{multline}
and
\begin{multline}
\sup_{\tau \in R_{c,d,\upsilon} \cap J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau| -
\varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3} |\tau| ) \right)\\
\leq
\sup_{\tau \in J_{k}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
- \varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3} |\tau| ) \right)
= ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},J_{k},\epsilon)}. \label{RJw_second}
\end{multline}
As a result, collecting (\ref{RJw_defin}) with (\ref{RJw_first}) and (\ref{RJw_second}) leads to
\begin{equation}
RJw_{c,d}(\beta,\epsilon) \leq \sum_{k \in \llbracket -n,n \rrbracket} ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\sigma},H_{k},\epsilon)}
+ \sum_{k \in \llbracket -n,n \rrbracket} ||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},J_{k},\epsilon)}
\label{RJw_third}
\end{equation}
Besides, we remark that
\begin{multline}
||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon)} \leq
\sup_{\tau \in R_{c,d,\upsilon}}
\frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
-\varsigma_{2}r_{b}(\beta) \exp( \varsigma_{3}|\tau| ) \right)\\
+ \sup_{\tau \in J} \frac{|w_{\beta}(\tau,\epsilon)|}{|\tau|} \exp \left( -\frac{\sigma_{1}}{|\epsilon|}r_{b}(\beta)|\tau|
- \varsigma_{2}r_{b}(\beta)\exp( \varsigma_{3}|\tau| ) \right) = RJw_{c,d}(\beta,\epsilon) +
||w_{\beta}(\tau,\epsilon)||_{(\beta,\underline{\varsigma},J,\epsilon)} \label{norm_w_beta_RJcdu}
\end{multline}
Finally, combining (\ref{RJw_third}) and (\ref{norm_w_beta_RJcdu}) yields the bounds
\begin{equation}
||w_{HJ_{n}}(\tau,z,\epsilon)||_{(\underline{\varsigma},RJ_{c,d,\upsilon},\epsilon,\delta)} \leq
\sum_{k \in \llbracket -n,n \rrbracket} \tilde{C}_{H_{k}} + \sum_{k \in \llbracket -n,n \rrbracket}
\tilde{C}_{J_{k}} + \tilde{C}_{J}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, in accordance with the bounds (\ref{norm_wHJn_Hk}) and (\ref{norm_wHJn_Jk}).
The second point 2) has already been checked in the proof of Proposition 11.
\end{proof}
Let us introduce the function
$$ V_{S_{\mathcal{B}}}(\tau,z,\epsilon) = \sum_{j=0}^{S_{\mathcal{B}}-1} v_{j}(\tau,\epsilon) \frac{z^j}{j!} $$
with $v_{j}(\tau,\epsilon)$ given in 1)b) above. We define a map $B_{\epsilon}$ as follows
\begin{multline*}
B_{\epsilon}(H(\tau,z)) := \sum_{\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \left \{ \frac{ \epsilon^{l_{1} - l_{0}} \tau}{\Gamma( d_{l_{0},l_{1}} )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_{2}-S_{\mathcal{B}}}H(s,z) \frac{ds}{s} \right. \\
\left . +
\sum_{1 \leq p \leq l_{1}-1} A_{l_{1},p} \frac{ \epsilon^{l_{1} - l_{0}} \tau }{\Gamma( d_{l_{0},l_{1}} + (l_{1} -p) )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p} \partial_{z}^{l_{2}-S_{\mathcal{B}}}H(s,z) \frac{ds}{s}
\right \}\\
+ \sum_{\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \left \{ \frac{ \epsilon^{l_{1} - l_{0}} \tau}{\Gamma( d_{l_{0},l_{1}} )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_{2}}V_{S_{\mathcal{B}}}(s,z,\epsilon) \frac{ds}{s} \right. \\
\left . +
\sum_{1 \leq p \leq l_{1}-1} A_{l_{1},p} \frac{ \epsilon^{l_{1} - l_{0}} \tau }{\Gamma( d_{l_{0},l_{1}} + (l_{1} -p) )}
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p} \partial_{z}^{l_{2}}V_{S_{\mathcal{B}}}(s,z,\epsilon) \frac{ds}{s}
\right \}\\
+ w_{HJ_n}(\tau,z,\epsilon)
\end{multline*}
In the next lemma, we explain why $B_{\epsilon}$ induces a Lipschitz shrinking map on the space\\
$EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$, for any given $\sigma_{1} > \sigma_{1}'$.
\begin{lemma}
We take for granted that the restrictions (\ref{cond_S_B_b_l}) hold. Let us choose positive real numbers $J$ and $\delta>0$ satisfying
(\ref{i_d_v_j_less_J}). Then, if $\delta>0$ is close enough to 0,\\
a) We can select a constant $\Lambda>0$ for which
\begin{equation}
||B_{\epsilon}(H(\tau,z))||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq \Lambda \label{B_epsilon_ball_in_ball}
\end{equation}
for any $H(\tau,z) \in B(0,\Lambda)$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, where $B(0,\Lambda)$ stands for the closed
ball centered at 0 with radius $\Lambda>0$ in $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$.\\
b) The map $B_{\epsilon}$ is shrinking in the sense that
\begin{equation}
||B_{\epsilon}(H_{1}(\tau,z)) - B_{\epsilon}(H_{2}(\tau,z))||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq
\frac{1}{2} ||H_{1}(\tau,z) - H_{2}(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{B_epsilon_shrink}
\end{equation}
occurs whenever $H_{1},H_{2}$ belong to $B(0,\Lambda)$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.
\end{lemma}
\begin{proof}
According to the inequality $r_{b}(\beta) \geq r_{b}(0)$ for all $\beta \geq 0$, we observe that for all $0 \leq h \leq S_{\mathcal{B}}-1$
and $0 \leq j \leq S_{\mathcal{B}}-1-h$,
$$ ||v_{j+h}(\tau,\epsilon)||_{(j,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)} \leq ||v_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)} $$
holds. As a consequence, the function $\partial_{z}^{h}V_{S_{\mathcal{B}}}(\tau,z,\epsilon)$ belongs to
$EG_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}$ with the upper estimates
\begin{equation}
||\partial_{z}^{h}V_{S_{\mathcal{B}}}(\tau,z,\epsilon)||_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)}
\leq \sum_{j=0}^{S_{\mathcal{B}}-1-h} ||v_{j+h}(\tau,\epsilon)||_{(0,\sigma_{1}',RH_{a,b,\upsilon},\epsilon)} \frac{\delta^j}{j!} \leq J.
\label{norm_partial_z_h_V_SB_lessJ}
\end{equation}
We first concentrate our attention on the bounds (\ref{B_epsilon_ball_in_ball}). Let $H(\tau,z)$ be an element of
$EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ with
$||H(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq \Lambda$. Assume that $0 < \delta < \rho$. We set
$$ M_{\mathcal{B},\underline{l}} = \sup_{\tau \in RH_{a,b,\upsilon},\ \epsilon \in \dot{D}(0,\epsilon_{0}),\ z \in D(0,\rho)}
\left| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \right| $$
for all $\underline{l} \in \mathcal{B}$. Under the constraints (\ref{cond_S_B_b_l}) and owing to Propositions 13 and 15, we get constants
$C_{5}>0$ (depending on $\underline{l},S_{\mathcal{B}},\sigma_{1},b$) and $C_{6}>0$ (depending on
$M_{\mathcal{B},\underline{l}},\delta,\rho$) such that
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_{2}-S_{\mathcal{B}}}H(s,z) \frac{ds}{s}
||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}C_{5} \delta^{S_{\mathcal{B}}-l_{2}}
||H(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{norm_partial_z_H_1}
\end{multline}
and
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p} \partial_{z}^{l_{2}-S_{\mathcal{B}}}H(s,z) \frac{ds}{s}
||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}C_{5} \delta^{S_{\mathcal{B}}-l_{2}}
||H(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{norm_partial_z_H_2}
\end{multline}
for all $1 \leq p \leq l_{1}-1$. Besides, keeping in mind Propositions 14 and 15 together with
(\ref{norm_partial_z_h_V_SB_lessJ}), we obtain two constants $\check{C}_{5}>0$ (depending on $\underline{l},\sigma_{1},\sigma_{1}'$) and
$C_{6}>0$ (depending on $M_{\mathcal{B},\underline{l}},\delta,\rho$) for which
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_{2}}V_{S_{\mathcal{B}}}(s,z,\epsilon) \frac{ds}{s}
||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}\check{C}_{5}
||\partial_{z}^{l_2}V_{S_{\mathcal{B}}}(\tau,z,\epsilon)||_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)} \leq
C_{6}\check{C}_{5}J \label{norm_V_SB_1}
\end{multline}
together with
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p} \partial_{z}^{l_{2}}V_{S_{\mathcal{B}}}(s,z,\epsilon)
\frac{ds}{s} ||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}\check{C}_{5}
||\partial_{z}^{l_2}V_{S_{\mathcal{B}}}(\tau,z,\epsilon)||_{(\sigma_{1}',RH_{a,b,\upsilon},\epsilon,\delta)} \leq
C_{6}\check{C}_{5}J \label{norm_V_SB_2}
\end{multline}
for all $1 \leq p \leq l_{1}-1$.
At last, from Lemma 16 1), one can select a constant $\tilde{C}_{RH_{a,b,\upsilon}}>0$ for which the first inequality of
(\ref{bds_wHJn_on_Lshaped}) holds. We choose $\delta>0$ small enough and $\Lambda>0$ sufficiently large such that
\begin{multline}
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{C_{6}C_{5}\delta^{S_{\mathcal{B}}-l_{2}} }{\Gamma(d_{l_{0},l_{1}})} \Lambda + \sum_{1 \leq p \leq l_{1}-1}
|A_{l_{1},p}| \frac{C_{6}C_{5} \delta^{S_{\mathcal{B}}-l_{2}}}{\Gamma( d_{l_{0},l_{1}} + (l_{1}-p) )} \Lambda \\
+
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{C_{6}\check{C}_{5}}{\Gamma(d_{l_{0},l_{1}})}J + \sum_{1 \leq p \leq l_{1}-1} |A_{l_{1},p}|
\frac{C_{6}\check{C}_{5}}{\Gamma( d_{l_{0},l_{1}} + (l_{1}-p) )} J + \tilde{C}_{RH_{a,b,\upsilon}} \leq \Lambda
\label{constraints_delta_Lambda_data_l_B}
\end{multline}
holds. Finally, gathering (\ref{norm_partial_z_H_1}), (\ref{norm_partial_z_H_2}), (\ref{norm_V_SB_1}), (\ref{norm_V_SB_2})
and (\ref{constraints_delta_Lambda_data_l_B}) implies (\ref{B_epsilon_ball_in_ball}).
In a second step, we show that $B_{\epsilon}$ is a shrinking map on the ball $B(0,\Lambda)$. Namely, let $H_{1},H_{2}$ be taken in the
ball $B(0,\Lambda)$. The bounds (\ref{norm_partial_z_H_1}) and (\ref{norm_partial_z_H_2}) just established above entail
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} - 1} s^{l_1} \partial_{z}^{l_{2}-S_{\mathcal{B}}}(H_{1}(s,z) - H_{2}(s,z)) \frac{ds}{s}
||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}C_{5} \delta^{S_{\mathcal{B}}-l_{2}}
||H_{1}(\tau,z) - H_{2}(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{norm_partial_z_H_1_shrink}
\end{multline}
in a row with
\begin{multline}
|| \frac{d_{\underline{l}}(z,\epsilon)}{P_{\mathcal{B}}(\tau)} \epsilon^{l_{1} - l_{0}} \tau
\int_{L_{0,\tau}} (\tau - s)^{d_{l_{0},l_{1}} + (l_{1} - p) - 1} s^{p}
\partial_{z}^{l_{2}-S_{\mathcal{B}}}(H_{1}(s,z) - H_{2}(s,z)) \frac{ds}{s}
||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq C_{6}C_{5} \delta^{S_{\mathcal{B}}-l_{2}}
||H_{1}(\tau,z) - H_{2}(\tau,z)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{norm_partial_z_H_2_shrink}
\end{multline}
for all $1 \leq p \leq l_{1}-1$. We take $\delta>0$ small enough that
\begin{equation}
\sum_{\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}} \frac{C_{6}C_{5}}{\Gamma(d_{l_{0},l_{1}})} \delta^{S_{\mathcal{B}}-l_{2}}
+ \sum_{1 \leq p \leq l_{1}-1} |A_{l_{1},p}| \frac{C_{6}C_{5}}{\Gamma(d_{l_{0},l_{1}} + (l_{1}-p))} \delta^{S_{\mathcal{B}}-l_{2}}
\leq \frac{1}{2} \label{constraints_delta_data_l_B_shrink}
\end{equation}
As a result, we obtain (\ref{B_epsilon_shrink}).
In conclusion, we set $\delta>0$ and $\Lambda>0$ such that
(\ref{constraints_delta_Lambda_data_l_B}) and (\ref{constraints_delta_data_l_B_shrink}) are simultaneously fulfilled. Lemma 17 follows.
\end{proof}
Assume the restriction (\ref{cond_S_B_b_l}) holds. Take the constants $J,\Lambda$ and $\delta$ as in Lemma 17. The initial data
$v_{j}(\tau,\epsilon)$, $0 \leq j \leq S_{\mathcal{B}}-1$ and $\sigma_{1}'$ are chosen in a way that (\ref{i_d_v_j_less_J}) occurs.
In view of the points a) and b) of Lemma 17 and according to the classical contraction mapping theorem on complete metric spaces, the map
$B_{\epsilon}$ admits a unique fixed point, denoted $H(\tau,z,\epsilon)$ (which depends analytically on
$\epsilon \in \dot{D}(0,\epsilon_{0})$), inside the closed ball $B(0,\Lambda) \subset EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$, for all
$\epsilon \in \dot{D}(0,\epsilon_{0})$. In other words, $B_{\epsilon}(H(\tau,z,\epsilon)) = H(\tau,z,\epsilon)$ and
$||H(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \leq \Lambda$. As a consequence, the expression
$$ v(\tau,z,\epsilon) = \partial_{z}^{-S_{\mathcal{B}}}H(\tau,z,\epsilon) + V_{S_{\mathcal{B}}}(\tau,z,\epsilon) $$
fulfills the convolution equation (\ref{ACP_forcterm_w}) with initial data (\ref{ACP_forcterm_w_i_d}). In the last step, we explain
why $v(\tau,z,\epsilon)$ belongs to $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$. Indeed, if one expands
$H(\tau,z,\epsilon)$ into a formal series in $z$, $H(\tau,z,\epsilon) = \sum_{\beta \geq 0} H_{\beta}(\tau,\epsilon) z^{\beta}/\beta!$, one checks
that
$$ ||\partial_{z}^{-S_{\mathcal{B}}}H(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}
= \sum_{\beta \geq S_{\mathcal{B}}} ||H_{\beta - S_{\mathcal{B}}}(\tau,\epsilon)||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\delta^{\beta}/\beta! $$
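The operator $\partial_{z}^{-S_{\mathcal{B}}}$ is understood here as the $S_{\mathcal{B}}$-fold antiderivative in $z$ vanishing at $z=0$; since $\partial_{z}^{-1}(z^{\beta}/\beta!) = z^{\beta+1}/(\beta+1)!$, one checks that
\begin{equation*}
\partial_{z}^{-S_{\mathcal{B}}}H(\tau,z,\epsilon) = \sum_{\beta \geq 0} H_{\beta}(\tau,\epsilon)\, \frac{z^{\beta + S_{\mathcal{B}}}}{(\beta + S_{\mathcal{B}})!}
= \sum_{\beta \geq S_{\mathcal{B}}} H_{\beta - S_{\mathcal{B}}}(\tau,\epsilon)\, \frac{z^{\beta}}{\beta!},
\end{equation*}
from which the displayed identity for the norm follows coefficientwise.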
From $r_{b}(\beta) \geq r_{b}(\beta - S_{\mathcal{B}})$, we notice that
$$ ||H_{\beta - S_{\mathcal{B}}}(\tau,\epsilon) ||_{(\beta,\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\leq ||H_{\beta - S_{\mathcal{B}}}(\tau,\epsilon) ||_{(\beta - S_{\mathcal{B}},\sigma_{1},RH_{a,b,\upsilon},\epsilon)} $$
for all $\beta \geq S_{\mathcal{B}}$. Hence,
\begin{multline}
||\partial_{z}^{-S_{\mathcal{B}}}H(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \\
\leq \sum_{\beta \geq S_{\mathcal{B}}} \left( \frac{(\beta - S_{\mathcal{B}})!}{\beta!} \delta^{S_{\mathcal{B}}} \right)
||H_{\beta - S_{\mathcal{B}}}(\tau,\epsilon)||_{(\beta - S_{\mathcal{B}},\sigma_{1},RH_{a,b,\upsilon},\epsilon)}
\frac{\delta^{\beta - S_{\mathcal{B}}}}{(\beta - S_{\mathcal{B}})!} \\
\leq \delta^{S_{\mathcal{B}}}
||H(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)} \label{norm_partial_z_H_less_norm_H}
\end{multline}
Altogether, according to (\ref{norm_partial_z_h_V_SB_lessJ}) and (\ref{norm_partial_z_H_less_norm_H}), it follows that
$v(\tau,z,\epsilon)$ belongs to $EG_{(\sigma_{1},RH_{a,b,\upsilon},\epsilon,\delta)}$ with the upper bounds (\ref{norm_v_RHab_less_J}).
\end{proof}
\section{Sectorial analytic solutions in a complex parameter for a singularly perturbed differential Cauchy problem}
Let $\mathcal{B}$ be a finite set in $\mathbb{N}^{3}$. For all $\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}$, we set
$d_{\underline{l}}(z,\epsilon)$ as a bounded holomorphic function on a polydisc $D(0,\rho) \times D(0,\epsilon_{0})$ for given radii
$\rho,\epsilon_{0}>0$. Let $S_{\mathcal{B}} \geq 1$ be an integer and let $P_{\mathcal{B}}(\tau)$ be a polynomial (not identically equal to 0)
with complex coefficients which is either constant or whose complex roots lie in the open right halfplane $\mathbb{C}_{+}$ and
avoid all the closed sets
$\bar{S}_{d_p} \cup \bar{D}(0,r)$, for $0 \leq p \leq \iota-1$, where the sectors $S_{d_p}$ and the disc $D(0,r)$ are introduced just after Definition 4.
We focus on the following partial differential Cauchy problem
\begin{equation}
P_{\mathcal{B}}(\epsilon t^{2}\partial_{t}) \partial_{z}^{S_{\mathcal{B}}} y(t,z,\epsilon) =
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}} d_{\underline{l}}(z,\epsilon)
t^{l_0}\partial_{t}^{l_1}\partial_{z}^{l_{2}}y(t,z,\epsilon) + u(t,z,\epsilon) \label{SPCP_second}
\end{equation}
for given initial data
\begin{equation}
(\partial_{z}^{j}y)(t,0,\epsilon) = \psi_{j}(t,\epsilon) \label{SPCP_second_i_d}
\end{equation}
for $0 \leq j \leq S_{\mathcal{B}}-1$, where $u(t,z,\epsilon)$ belongs to the sets of solutions to the Cauchy problem
(\ref{SPCP_first}), (\ref{SPCP_first_i_d}) constructed in Section 3.3 and displayed as
$\{ u_{\mathcal{E}_{HJ_n}^{k}} \}_{k \in \llbracket -n,n \rrbracket}$ or
$\{ u_{\mathcal{E}_{S_{d_p}}} \}_{0 \leq p \leq \iota-1}$.
We require the following constraints on the set $\mathcal{B}$ to hold: there exists a real number $b>1$ such that
\begin{equation}
S_{\mathcal{B}} \geq b(l_{0} - l_{1}) + l_{2} \ \ , \ \ S_{\mathcal{B}} > l_{2} \ \ , \ \ l_{1} \geq 1
\label{SB_underline_l_constraints}
\end{equation}
holds for all $\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}$ and we assume the existence of an integer $d_{l_{0},l_{1}} \geq 1$ for which
\begin{equation}
l_{0} = 2l_{1} + d_{l_{0},l_{1}}, \label{d_l01_defin}
\end{equation}
for all $\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}$. With the help of (\ref{d_l01_defin}), according to the formula
(8.7) p. 3630 from \cite{taya}, one can expand the differential operators
\begin{equation}
t^{l_0}\partial_{t}^{l_1} = t^{d_{l_{0},l_{1}}}(t^{2l_{1}}\partial_{t}^{l_1})
= t^{d_{l_{0},l_{1}}} \left( (t^{2}\partial_{t})^{l_1} + \sum_{1 \leq p \leq l_{1}-1} A_{l_{1},p}
t^{(l_{1}-p)} (t^{2}\partial_{t})^{p} \right) \label{Tahara_expansion_diff_op}
\end{equation}
for suitable real numbers $A_{l_{1},p}$, $1 \leq p \leq l_{1}-1$, whenever $l_{1} \geq 1$ (with the convention that the
sum $\sum_{1 \leq p \leq l_{1}-1}$ is empty when $l_{1}=1$).\medskip
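As a sanity check (this verification is not part of the original argument), the expansion (\ref{Tahara_expansion_diff_op}) can be tested symbolically for small values of $l_{1}$. The coefficients $A_{2,1}=-2$, $A_{3,1}=6$ and $A_{3,2}=-6$ used below were computed by hand and are assumptions of this sketch, not values quoted from the text.

```python
# Symbolic sanity check (not part of the original argument) of the expansion
#   t^{2 l1} d_t^{l1} = (t^2 d_t)^{l1} + sum_{p=1}^{l1-1} A_{l1,p} t^{l1-p} (t^2 d_t)^p
# for l1 = 2, 3, applied to a generic smooth function f.
import sympy as sp

t = sp.symbols('t', positive=True)
f = sp.Function('f')

def euler_pow(expr, k):
    """Apply the operator t^2 d/dt exactly k times to expr."""
    for _ in range(k):
        expr = t**2 * sp.diff(expr, t)
    return expr

# Case l1 = 2: t^4 f''(t) = (t^2 d_t)^2 f - 2 t (t^2 d_t) f, i.e. A_{2,1} = -2
lhs = t**4 * sp.diff(f(t), t, 2)
rhs = euler_pow(f(t), 2) - 2 * t * euler_pow(f(t), 1)
assert sp.expand(lhs - rhs) == 0

# Case l1 = 3: t^6 f'''(t) = (t^2 d_t)^3 f - 6 t (t^2 d_t)^2 f + 6 t^2 (t^2 d_t) f
lhs3 = t**6 * sp.diff(f(t), t, 3)
rhs3 = euler_pow(f(t), 3) - 6 * t * euler_pow(f(t), 2) + 6 * t**2 * euler_pow(f(t), 1)
assert sp.expand(lhs3 - rhs3) == 0
print("expansion verified for l1 = 2 and l1 = 3")
```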
In the sequel, we explain how we build up the initial data $\psi_{j}(t,\epsilon)$, $0 \leq j \leq S_{\mathcal{B}}-1$. We take for granted
that all the constraints stated at the beginning of Subsection 3.3 hold.
We start from a family of functions $\tau \mapsto v_{j}(\tau,\epsilon)$, $0 \leq j \leq S_{\mathcal{B}}-1$, which are holomorphic on
the disc $D(0,r)$, on each sector $S_{d_p}$, $0 \leq p \leq \iota-1$, and on the interior of the domain $HJ_{n}$ defined at the beginning of
Section 3.1 for some integer $n \geq 1$, and which depend analytically on $\epsilon$ on $\dot{D}(0,\epsilon_{0})$. Furthermore, we require the
following additional properties.\medskip
\noindent a) For all $0 \leq j \leq S_{\mathcal{B}}-1$, all $k \in \llbracket -n,n \rrbracket$, the function $\tau \mapsto v_{j}(\tau,\epsilon)$
belongs to
the Banach spaces $EG_{(0,\sigma_{1}',RH_{a_{k},b_{k},\upsilon_{k}},\epsilon)}$ and
$SEG_{(0,\underline{\varsigma}',RJ_{c_{k},d_{k},\upsilon_{k}},\epsilon)}$ for all $\epsilon \in \dot{D}(0,\epsilon_{0})$,
where $\sigma_{1}'>0$ and the tuple $\underline{\varsigma}'=(\sigma_{1}',\varsigma_{2}',\varsigma_{3}')$ satisfies
$\varsigma_{2}'>0,\varsigma_{3}'>0$, the real numbers $a_{k},b_{k},c_{k},d_{k}$ are defined at the beginning of Subsection 3.1 and
$\upsilon_{k}>0$ is a real number chosen in such a way that $\upsilon_{k} < \mathrm{Re}(A_{k})$, where
$A_{k}$ is a point inside the strip $H_{k}$ defined through (\ref{SPCP_first_i_d_k}) and (\ref{choice_a_k}). Besides, for any
$0 \leq j \leq S_{\mathcal{B}}-1$, there exists a constant $J_{v_j}>0$ (independent of $\epsilon$) such that
\begin{equation}
||v_{j}(\tau,\epsilon)||_{(0,\sigma_{1}',RH_{a_{k},b_{k},\upsilon_{k}},\epsilon)} \leq J_{v_j} \ \ , \ \
||v_{j}(\tau,\epsilon)||_{(0,\underline{\varsigma}',RJ_{c_{k},d_{k},\upsilon_{k}},\epsilon)} \leq J_{v_j}
\label{normRHJ_vj_Ivj}
\end{equation}
for all $k \in \llbracket -n,n \rrbracket$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\medskip
\noindent b) For all $0 \leq j \leq S_{\mathcal{B}}-1$, all $0 \leq p \leq \iota-1$, the map $\tau \mapsto v_{j}(\tau,\epsilon)$
appertains to the Banach space $EG_{(0,\sigma_{1}',S_{d_p} \cup D(0,r),\epsilon)}$ for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, where
$\sigma_{1}'>0$. Furthermore, for each $0 \leq j \leq S_{\mathcal{B}}-1$, we have a constant $J_{v_j}>0$ (independent of $\epsilon$) for which
\begin{equation}
||v_{j}(\tau,\epsilon)||_{(0,\sigma_{1}',S_{d_p} \cup D(0,r),\epsilon)} \leq J_{v_j} \label{normSdp_vj_Ivj}
\end{equation}
for all $0 \leq p \leq \iota-1$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\medskip
\noindent 1) We construct a first set of initial data
\begin{equation}
\psi_{j,\mathcal{E}_{HJ_n}^{k}}(t,\epsilon) = \int_{P_k} v_{j}(u,\epsilon) \exp(-\frac{u}{\epsilon t}) \frac{du}{u}
\label{defin_psi_j_HJ_i_d}
\end{equation}
for all $k \in \llbracket -n,n \rrbracket$, where the integration path is the same as the one involved in (\ref{SPCP_first_i_d_k}). The same
proof as the one presented in Lemma 8 yields the following.
\begin{lemma}
The Laplace transform $\psi_{j,\mathcal{E}_{HJ_n}^{k}}(t,\epsilon)$ represents a bounded holomorphic function
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times \mathcal{E}_{HJ_n}^{k}$ for a suitable radius $r_{\mathcal{T}}>0$, where
$\mathcal{T}$ and $\mathcal{E}_{HJ_n}^{k}$ are bounded open sectors described in Definition 3.
\end{lemma}
2) For any $0 \leq j \leq S_{\mathcal{B}}-1$, we set up a second family of initial data
\begin{equation}
\psi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon) = \int_{L_{\gamma_{d_p}}} v_{j}(u,\epsilon) \exp( -\frac{u}{\epsilon t} ) \frac{du}{u}
\label{defin_psi_j_Sd_i_d}
\end{equation}
where the integration path is a halfline with direction $\gamma_{d_p}$ described in (\ref{relation_gamma_epsilon_t}) and
(\ref{Laplace_varphi_j_along_halfline}). Following similar lines of arguments as in Lemma 9, we obtain the following.
\begin{lemma}
The Laplace integral $\psi_{j,\mathcal{E}_{S_{d_p}}}(t,\epsilon)$ defines a bounded holomorphic function on
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times \mathcal{E}_{S_{d_p}}$ for a convenient radius $r_{\mathcal{T}}>0$, where
$\mathcal{T}$ and $\mathcal{E}_{S_{d_p}}$ are bounded open sectors displayed in Definition 4.
\end{lemma}
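The boundedness claimed in the two lemmas above can be illustrated by a toy numerical computation (not taken from the text: the choice $v(u)=u$, which has the linear growth $|v(u)| \leq C|u|$ permitted by the Banach space norms, and the positive real value $x$ standing in for the product $\epsilon t$ are hypothetical simplifications).

```python
# Toy numerical check: for v(u) = u, the Laplace integral along the half-line
# of direction 0 reads
#   psi(x) = int_0^infty v(u) exp(-u/x) du/u = int_0^infty exp(-u/x) du = x,
# where x > 0 stands in for epsilon*t. The integral hence stays bounded
# (and even vanishes) as epsilon*t -> 0, in line with the two lemmas above.
import math

def laplace_halfline(x, n=200000, cutoff=50.0):
    """Trapezoidal approximation of int_0^{cutoff*x} exp(-u/x) du."""
    h = cutoff * x / n
    total = 0.5 * (1.0 + math.exp(-cutoff))
    for i in range(1, n):
        total += math.exp(-(i * h) / x)
    return total * h

for x in (0.5, 0.1, 0.02):
    approx = laplace_halfline(x)
    assert abs(approx - x) < 1e-4 * x  # matches the exact value psi(x) = x
print("Laplace integral stays bounded as x -> 0")
```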
We are now in a position to state the second main result of our work.
\begin{theo}
Under all the restrictions made above since the beginning of Section 5, provided that the real number $\delta>0$ is chosen close enough to 0,
the following statements hold.\medskip
\noindent 1) 1.1) The Cauchy problem (\ref{SPCP_second}) where $u(t,z,\epsilon)$ stands for $u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ with initial
data (\ref{SPCP_second_i_d}) given by (\ref{defin_psi_j_HJ_i_d}) has a bounded holomorphic solution
$y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ on a domain
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{HJ_n}^{k}$ for some radius $r_{\mathcal{T}}>0$ chosen
close to 0 and $0 < \delta_{1} < 1$. Besides, $y_{\mathcal{E}_{HJ_n}^{k}}$ can be expressed through a special Laplace transform
\begin{equation}
y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon) = \int_{P_k} v_{HJ_n}(u,z,\epsilon) \exp(-\frac{u}{\epsilon t}) \frac{du}{u} \label{defin_Laplace_y_EHJnk}
\end{equation}
where $v_{HJ_n}(\tau,z,\epsilon)$ determines a holomorphic function on $\mathring{HJ}_{n} \times D(0,\delta \delta_{1}) \times
\dot{D}(0,\epsilon_{0})$, continuous on $HJ_{n} \times D(0,\delta \delta_{1}) \times
\dot{D}(0,\epsilon_{0})$, subject to the following restrictions. For any choice of $\sigma_{1}>0$ and a tuple
$\underline{\varsigma} = (\sigma_{1},\varsigma_{2},\varsigma_{3})$ with
\begin{equation}
\sigma_{1} > \sigma_{1}' \ \ , \ \ \varsigma_{2} > \varsigma_{2}' \ \ , \ \ \varsigma_{3} = \varsigma_{3}' \label{cond_sigma_varsigma_theo2}
\end{equation}
one obtains constants $C_{H_k}^{v}>0$ and $C_{J_k}^{v}>0$ (independent of $\epsilon$) with
\begin{equation}
|v_{HJ_n}(\tau,z,\epsilon)| \leq C_{H_k}^{v}|\tau| \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| ) \label{bds_vHJn_Hk}
\end{equation}
for all $\tau \in H_{k}$, all $z \in D(0,\delta \delta_{1})$ and
\begin{equation}
|v_{HJ_n}(\tau,z,\epsilon)| \leq C_{J_k}^{v}|\tau|
\exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| + \varsigma_{2}\zeta(b) \exp( \varsigma_{3}|\tau| ) ) \label{bds_vHJn_Jk}
\end{equation}
for all $\tau \in J_{k}$, all $z \in D(0,\delta \delta_{1})$, whenever $\epsilon \in \dot{D}(0,\epsilon_{0})$, for
all $k \in \llbracket -n,n \rrbracket$.\\
1.2) Let $k \in \llbracket -n,n \rrbracket$ with $k \neq n$. Then, there exist constants $M_{k,1},M_{k,2}>0$ and $M_{k,3}>1$
independent of $\epsilon$, such that
\begin{equation}
| y_{\mathcal{E}_{HJ_{n}}^{k+1}}(t,z,\epsilon) - y_{\mathcal{E}_{HJ_{n}}^{k}}(t,z,\epsilon) |
\leq M_{k,1} \exp( -\frac{M_{k,2}}{|\epsilon|} \mathrm{Log} \frac{M_{k,3}}{|\epsilon|} ) \label{log_flat_difference_yk_plus_1_minus_yk_HJn}
\end{equation}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, all $\epsilon \in \mathcal{E}_{HJ_{n}}^{k} \cap
\mathcal{E}_{HJ_{n}}^{k+1} \neq \emptyset$ and
all $z \in D(0,\delta \delta_{1})$.\medskip
\noindent 2) 2.1) The Cauchy problem (\ref{SPCP_second}) where $u(t,z,\epsilon)$ must be replaced by $u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ along
with initial
data (\ref{SPCP_second_i_d}) given by (\ref{defin_psi_j_Sd_i_d}) possesses a bounded holomorphic solution
$y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ on a domain
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{S_{d_p}}$ for some radius $r_{\mathcal{T}}>0$ chosen
small enough and $0 < \delta_{1} < 1$. Moreover, $y_{\mathcal{E}_{S_{d_p}}}$ can be expressed as a Laplace transform
\begin{equation}
y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) = \int_{L_{\gamma_{d_p}}} v_{S_{d_p}}(u,z,\epsilon) \exp(-\frac{u}{\epsilon t}) \frac{du}{u}
\label{defin_y_ESdp}
\end{equation}
where $v_{S_{d_p}}(\tau,z,\epsilon)$ represents a holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta \delta_{1}) \times
\dot{D}(0,\epsilon_{0})$, continuous on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times D(0,\delta \delta_{1}) \times
\dot{D}(0,\epsilon_{0})$, that satisfies the following requirement: for any choice of $\sigma_{1}>\sigma_{1}'$, one can select
a constant $C_{S_{d_p}}^{v}>0$ (independent of $\epsilon$) with
\begin{equation}
|v_{S_{d_p}}(\tau,z,\epsilon)| \leq C_{S_{d_p}}^{v}|\tau| \exp( \frac{\sigma_{1}}{|\epsilon|} \zeta(b) |\tau| ) \label{bds_vSdp}
\end{equation}
for all $\tau \in S_{d_p} \cup D(0,r)$, all $z \in D(0,\delta \delta_{1})$, all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2.2) Let $0 \leq p \leq \iota-2$. We can find two constants $M_{p,1},M_{p,2}>0$ independent of $\epsilon$, such that
\begin{equation}
| y_{\mathcal{E}_{S_{d_{p+1}}}}(t,z,\epsilon) - y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) |
\leq M_{p,1} \exp( -\frac{M_{p,2}}{|\epsilon|} ) \label{exp_flat_difference_yk_plus_1_minus_yk_Sdp}
\end{equation}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, all $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap
\mathcal{E}_{S_{d_p}} \neq \emptyset$ and
all $z \in D(0,\delta \delta_{1})$.\medskip
\noindent 3) The next additional bounds hold among the two families described above: there exist constants $M_{n,1},M_{n,2}>0$
(independent of $\epsilon$) with
\begin{equation}
| y_{\mathcal{E}_{HJ_n}^{-n}}(t,z,\epsilon) - y_{\mathcal{E}_{S_{d_0}}}(t,z,\epsilon) | \leq M_{n,1} \exp( -\frac{M_{n,2}}{|\epsilon|} )
\label{difference_y_HJn_Sd0}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$ and
\begin{equation}
| y_{\mathcal{E}_{HJ_n}^{n}}(t,z,\epsilon) - y_{\mathcal{E}_{S_{d_{\iota-1}}}}(t,z,\epsilon) | \leq M_{n,1} \exp( -\frac{M_{n,2}}{|\epsilon|} )
\label{difference_y_HJn_Sdiota}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$ whenever $t \in \mathcal{T} \cap D(0, r_{\mathcal{T}})$ and
$z \in D(0,\delta \delta_{1})$.
\end{theo}
\begin{proof} We consider the convolution equation (\ref{ACP_forcterm_w}) with the forcing term
$w(\tau,z,\epsilon) = w_{HJ_n}(\tau,z,\epsilon)$ for given initial data
\begin{equation}
(\partial_{z}^{j}v)(\tau,0,\epsilon) = v_{j}(\tau,\epsilon) \ \ , \ \ 0 \leq j \leq S_{\mathcal{B}}-1. \label{ACP_forcterm_w_iv_d_j}
\end{equation}
We claim that the problem (\ref{ACP_forcterm_w}) along with (\ref{ACP_forcterm_w_iv_d_j}) possesses a unique formal solution
\begin{equation}
v_{HJ_n}(\tau,z,\epsilon) = \sum_{\beta \geq 0} v_{\beta}(\tau,\epsilon) \frac{z^{\beta}}{\beta !} \label{defin_vHJn}
\end{equation}
where $v_{\beta}(\tau,\epsilon)$ are holomorphic on $\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous on
$HJ_{n} \times \dot{D}(0,\epsilon_{0})$. Indeed, if one expands
$d_{\underline{l}}(z,\epsilon) = \sum_{\beta \geq 0} d_{\underline{l},\beta}(\epsilon) z^{\beta}/\beta!$ as a Taylor series at $z=0$, the formal
series (\ref{defin_vHJn}) solves (\ref{ACP_forcterm_w}), (\ref{ACP_forcterm_w_iv_d_j}) if and only if the next recursion formula holds true
\begin{multline}
v_{\beta + S_{\mathcal{B}}}(\tau,\epsilon) = \sum_{\underline{l} = (l_{0},l_{1},l_{2}) \in \mathcal{B}}
\frac{\epsilon^{l_{1}-l_{0}}\tau}{\Gamma(d_{l_{0},l_{1}}) P_{\mathcal{B}}(\tau)}
\sum_{\beta_{1} + \beta_{2} = \beta} \frac{d_{\underline{l},\beta_{1}}(\epsilon)}{\beta_{1}!}\\
\times
\int_{L_{0,\tau}} (\tau-s)^{d_{l_{0},l_{1}}-1} s^{l_1} \frac{v_{\beta_{2}+l_{2}}(s,\epsilon)}{\beta_{2}!} \frac{ds}{s} \beta! +
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}} \sum_{1 \leq p \leq l_{1}-1} A_{l_{1},p}\\
\times
\frac{\epsilon^{l_{1} - l_{0}}\tau}{\Gamma(d_{l_{0},l_{1}} + (l_{1}-p))P_{\mathcal{B}}(\tau)}
\sum_{\beta_{1} + \beta_{2} = \beta} \frac{d_{\underline{l},\beta_{1}}(\epsilon)}{\beta_{1}!}
\int_{L_{0,\tau}} (\tau-s)^{d_{l_{0},l_{1}} + (l_{1}-p)-1} s^{p} \\
\times \frac{v_{\beta_{2}+l_{2}}(s,\epsilon)}{\beta_{2}!} \frac{ds}{s}\beta!
+ w_{\beta}(\tau,\epsilon) \label{recursion_v_beta}
\end{multline}
for all $\beta \geq 0$, where $w_{\beta}(\tau,\epsilon)$ are the Taylor coefficients of the forcing term $w_{HJ_n}(\tau,z,\epsilon)$
in the variable $z$ which solve the recursion (\ref{recursion_w_beta}). Since the initial data $v_{j}(\tau,\epsilon)$,
$0 \leq j \leq S_{\mathcal{B}}-1$ and all the functions $w_{\beta}(\tau,\epsilon)$, $\beta \geq 0$, define holomorphic functions
on $\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous on $HJ_{n} \times \dot{D}(0,\epsilon_{0})$, the recursion
(\ref{recursion_v_beta}) is well defined provided that $L_{0,\tau}$ stands for any path joining $0$ and $\tau$ that remains inside the
domain $HJ_{n}$. Furthermore, all $v_{n}(\tau,\epsilon)$ for $n \geq S_{\mathcal{B}}$ represent holomorphic functions on
$\mathring{HJ}_{n} \times \dot{D}(0,\epsilon_{0})$, continuous on $HJ_{n} \times \dot{D}(0,\epsilon_{0})$.
Bearing in mind all the assumptions set above since the beginning of Section 5, we observe in particular that the conditions
1)a)b) and 2)a)b) required in Proposition 22 are satisfied. Therefore, the next features hold:\\
1) The formal series $v_{HJ_n}(\tau,z,\epsilon)$ belongs to the Banach spaces
$EG_{(\sigma_{1},RH_{a_{k},b_{k},\upsilon_{k}},\epsilon,\delta)}$, for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, all
$k \in \llbracket -n,n \rrbracket$, for any $\sigma_{1} > \sigma_{1}'$, and one can select a constant $C_{H_k}^{v}>0$ for which
\begin{equation}
||v_{HJ_{n}}(\tau,z,\epsilon)||_{(\sigma_{1},RH_{a_{k},b_{k},\upsilon_{k}},\epsilon,\delta)} \leq C_{H_k}^{v} \label{norm_vHJn_RHk}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$.\\
2) The formal series $v_{HJ_n}(\tau,z,\epsilon)$ appertains to the Banach spaces
$SEG_{(\underline{\varsigma},RJ_{c_{k},d_{k},\upsilon_{k}},\epsilon,\delta)}$, whenever $\epsilon \in \dot{D}(0,\epsilon_{0})$
and $k \in \llbracket -n,n \rrbracket$, provided that the tuple $\underline{\varsigma}$ is chosen as in
(\ref{cond_sigma_varsigma_theo2}). Furthermore, one can get a constant $C_{J_k}^{v}>0$ with
\begin{equation}
||v_{HJ_{n}}(\tau,z,\epsilon)||_{(\underline{\varsigma},RJ_{c_{k},d_{k},\upsilon_{k}},\epsilon,\delta)} \leq C_{J_k}^{v} \label{norm_vHJn_RJk}
\end{equation}
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$. As a consequence of (\ref{norm_vHJn_RHk}), (\ref{norm_vHJn_RJk}), with the help of
Propositions 12 and 16, we deduce that $v_{HJ_n}(\tau,z,\epsilon)$ represents a holomorphic function on
$\mathring{HJ}_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, continuous on
$HJ_{n} \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$ for some $0 < \delta_{1} < 1$, that satisfies the bounds
(\ref{bds_vHJn_Hk}) and (\ref{bds_vHJn_Jk}). By application of a similar proof as in Lemma 8, one can show that
for each $k \in \llbracket -n,n \rrbracket$, the function $y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ defined as
(\ref{defin_Laplace_y_EHJnk}) represents a bounded holomorphic function on
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta_{1}\delta) \times \mathcal{E}_{HJ_n}^{k}$, for some fixed radius
$r_{\mathcal{T}}>0$ and $0 < \delta_{1} < 1$. In addition, following exactly the same reasoning as in Proposition 10 2), one can obtain
the estimates (\ref{log_flat_difference_yk_plus_1_minus_yk_HJn}).
It remains to show that $y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ actually solves the problem (\ref{SPCP_second}), (\ref{SPCP_second_i_d}).
In accordance with the expansion (\ref{Tahara_expansion_diff_op}), we are reduced to proving the following.
\begin{lemma} The next identity
\begin{multline}
t^{d_{l_{0},l_{1}}} (t^{2}\partial_{t})^{l_1} y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon) =
\frac{\epsilon^{-(d_{l_{0},l_{1}}+l_{1})}}{\Gamma(d_{l_{0},l_{1}})} \int_{P_{k}}
u \int_{L_{0,u}} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_{1}}\\
\times v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{expansion_diff_op_Laplace_y_HJnk}
\end{multline}
holds
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{HJ_n}^{k}$, all given positive integers
$d_{l_{0},l_{1}},l_{1} \geq 1$. We recall that the path $P_{k}$ is the union of a segment $P_{k,1}$ joining 0 and a prescribed
point $A_{k} \in H_{k}$ and of a horizontal halfline $P_{k,2} = \{ A_{k} - s / s \geq 0 \}$ and here $L_{0,u}$ stands for the union
$[0,c_{RH}(u)] \cup [c_{RH}(u),u]$ where $c_{RH}(u)$ is chosen in a way that
$$ L_{0,u} \subset RH_{a_{k},b_{k},\upsilon_{k}} \ \ , \ \ c_{RH}(u) \in R_{a_{k},b_{k},\upsilon_{k}} \ \ , \ \ |c_{RH}(u)| \leq |u| $$
for all $u \in P_{k} \subset RH_{a_{k},b_{k},\upsilon_{k}}$ (Notice that this last inclusion stems from the assumption
$\upsilon_{k} < \mathrm{Re}(A_{k})$).
\end{lemma}
\begin{proof} We first specify an appropriate choice for the points $c_{RH}(u)$ that will simplify the computations, namely\\
1) When $u$ belongs to $P_{k,1} \subset R_{a_{k},b_{k},\upsilon_{k}}$, then we select $c_{RH}(u)$ somewhere inside the segment $[0,u]$, in
that case $L_{0,u}=[0,u]$.\\
2) For $u \in P_{k,2}$, we choose $c_{RH}(u)=A_{k}$. Hence $L_{0,u}$ becomes the union of the segments $[0,A_{k}]$ and $[A_{k},u]$.
As a result, the right-hand side of the equality (\ref{expansion_diff_op_Laplace_y_HJnk}) can be written
\begin{multline*}
R = \frac{\epsilon^{-(d_{l_{0},l_{1}} + l_{1})}}{\Gamma(d_{l_{0},l_{1}})}
\left\{ \int_{P_{k,1}} ( \int_{[0,u]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} ) \exp( -\frac{u}{\epsilon t} )
\right. du\\
+ \int_{P_{k,2}} \left( \int_{[0,A_{k}]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} \right. \\
+ \left. \left. \int_{[A_{k},u]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} \right) \exp( -\frac{u}{\epsilon t} ) du
\right\}
\end{multline*}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{HJ_n}^{k}$. Now, with the help of the Fubini theorem and a
path deformation argument, we can express each piece of $R$ as some truncated Laplace transforms
of $v_{HJ_n}(\tau,z,\epsilon)$. Namely,
\begin{multline*}
\int_{P_{k,1}} ( \int_{[0,u]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} ) \exp( -\frac{u}{\epsilon t} ) du \\
= \int_{[0,A_{k}]} \left( \int_{[s,A_{k}]} (u-s)^{d_{l_{0},l_{1}}-1} \exp( - \frac{u}{\epsilon t} ) du \right) s^{l_1}v_{HJ_n}(s,z,\epsilon)
\frac{ds}{s} \\
= \int_{[0,A_{k}]} \left( \int_{[0,A_{k}-s]} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' \right)
s^{l_{1}} v_{HJ_n}(s,z,\epsilon) \exp( -\frac{s}{\epsilon t} ) \frac{ds}{s}
\end{multline*}
and
\begin{multline*}
\int_{P_{k,2}} \left( \int_{[0,A_{k}]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} \right)
\exp( -\frac{u}{\epsilon t} ) du \\
= \int_{[0,A_{k}]} \left( \int_{P_{k,2}} (u-s)^{d_{l_{0},l_{1}}-1} \exp( -\frac{u}{\epsilon t} ) du \right)
s^{l_1} v_{HJ_n}(s,z,\epsilon) \frac{ds}{s}\\
= \int_{[0,A_{k}]} \left( \int_{P_{k,2}-s} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' \right)
s^{l_1} v_{HJ_n}(s,z,\epsilon) \exp( -\frac{s}{\epsilon t} ) \frac{ds}{s}
\end{multline*}
where $P_{k,2}-s$ denotes the path $\{ A_{k}-h-s / h \geq 0 \}$, together with
\begin{multline*}
\int_{P_{k,2}} \left( \int_{[A_{k},u]} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_1}v_{HJ_n}(s,z,\epsilon) \frac{ds}{s} \right) \exp( -\frac{u}{\epsilon t} )
du \\
= \int_{P_{k,2}} \left( \int_{P_{s;2}} (u-s)^{d_{l_{0},l_{1}}-1} \exp( -\frac{u}{\epsilon t} ) du \right) s^{l_1}v_{HJ_n}(s,z,\epsilon)
\frac{ds}{s} \\
= \int_{P_{k,2}} \left( \int_{\mathbb{R}_{-}} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' \right)
s^{l_1}v_{HJ_n}(s,z,\epsilon) \exp( -\frac{s}{\epsilon t} ) \frac{ds}{s}
\end{multline*}
where $P_{s;2} = \{ s - h / h \geq 0 \}$ and $\mathbb{R}_{-}$ stands for the path $\{-h / h \geq 0 \}$,
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{HJ_n}^{k}$. On the other hand, a path deformation argument
and the very definition of the Gamma function yield
\begin{multline*}
\int_{[0,A_{k}-s]} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' +
\int_{P_{k,2}-s} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' \\
=
\int_{\mathbb{R}_{-}} (u')^{d_{l_{0},l_{1}}-1} \exp( -\frac{u'}{\epsilon t} ) du' = \Gamma(d_{l_{0},l_{1}}) (\epsilon t)^{d_{l_{0},l_{1}}}
\end{multline*}
for all $s \in [0,A_{k}]$, all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{HJ_n}^{k}$. By gathering
the above identities, we can rewrite the quantity $R$ as
\begin{equation}
R = t^{d_{l_{0},l_{1}}} \epsilon^{-l_{1}}\int_{P_k}s^{l_1}v_{HJ_n}(s,z,\epsilon) \exp( -\frac{s}{\epsilon t} ) \frac{ds}{s}
= t^{d_{l_{0},l_{1}}} (t^{2}\partial_{t})^{l_1}y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)
\end{equation}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{HJ_n}^{k}$. Lemma 20 follows.
\end{proof}
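For the reader's convenience, here is a short sketch of the change of variable behind the Gamma function identity invoked at the end of the proof above, writing $d = d_{l_{0},l_{1}}$ for short and assuming, as the convergence of the Laplace integrals along $P_{k}$ requires, that $\mathrm{Re}(1/(\epsilon t)) < 0$:

```latex
% Substitution u' = \epsilon t v in the last integral of the proof above.
% The image path \Lambda_{\epsilon t} = \{ -h/(\epsilon t) \ / \ h \geq 0 \} is a
% half-line along which \mathrm{Re}(v) \to +\infty since \mathrm{Re}(1/(\epsilon t)) < 0.
\int_{\mathbb{R}_{-}} (u')^{d-1} \exp\Big( -\frac{u'}{\epsilon t} \Big) du'
 = (\epsilon t)^{d} \int_{\Lambda_{\epsilon t}} v^{d-1} e^{-v} dv
 = (\epsilon t)^{d}\, \Gamma(d)
```

where the last equality follows from rotating the half-line $\Lambda_{\epsilon t}$ onto $\mathbb{R}_{+}$ by means of the Cauchy theorem and invoking the integral formula $\Gamma(d) = \int_{0}^{\infty} v^{d-1} e^{-v} dv$.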
In order to discuss the second point 2) of the statement, let us concentrate on the equation
(\ref{ACP_forcterm_w}) equipped with the forcing term $w(\tau,z,\epsilon) = w_{S_{d_p}}(\tau,z,\epsilon)$ for given initial
data (\ref{ACP_forcterm_w_iv_d_j}). We must check that the problem (\ref{ACP_forcterm_w}), (\ref{ACP_forcterm_w_iv_d_j}) has a unique
formal series solution
\begin{equation}
v_{S_{d_p}}(\tau,z,\epsilon) = \sum_{\beta \geq 0} v_{\beta}(\tau,\epsilon) \frac{z^{\beta}}{\beta!} \label{defin_vSdp}
\end{equation}
where $v_{\beta}(\tau,\epsilon)$ are holomorphic on $(S_{d_p} \cup D(0,r)) \times \dot{D}(0,\epsilon_{0})$, continuous
on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times \dot{D}(0,\epsilon_{0})$. Indeed, the formal expansion
(\ref{defin_vSdp}) solves (\ref{ACP_forcterm_w}), (\ref{ACP_forcterm_w_iv_d_j}) if and only if $v_{\beta}(\tau,\epsilon)$ fulfills the
recursion (\ref{recursion_v_beta}) for all $\beta \geq 0$, where $w_{\beta}(\tau,\epsilon)$ represent the Taylor coefficients of the
forcing term $w_{S_{d_{p}}}(\tau,z,\epsilon)$, which are determined by the recursion (\ref{recursion_w_beta}). As a consequence, all the
coefficients $v_{n}(\tau,\epsilon)$ for $n \geq S_{\mathcal{B}}$ define holomorphic functions on
$(S_{d_p} \cup D(0,r)) \times \dot{D}(0,\epsilon_{0})$, continuous on
$(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times \dot{D}(0,\epsilon_{0})$ in view of the fact that it is already the case for
$w_{\beta}(\tau,\epsilon)$, $\beta \geq 0$ and the initial conditions (\ref{ACP_forcterm_w_iv_d_j}).
In accordance with the whole set of requirements made since the onset of Section 5, we can see that the constraints 3)a)b) imposed
in Proposition 22 are obeyed. Hence, the formal series $v_{S_{d_p}}(\tau,z,\epsilon)$ belongs to the Banach spaces
$EG_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)}$ for all $\epsilon \in \dot{D}(0,\epsilon_{0})$, for any
$\sigma_{1} > \sigma_{1}'$, and one can find a constant $C_{S_{d_p}}^{v}>0$ for which
$$ ||v_{S_{d_p}}(\tau,z,\epsilon)||_{(\sigma_{1},S_{d_p} \cup D(0,r),\epsilon,\delta)} \leq C_{S_{d_p}}^{v} $$
for all $\epsilon \in \dot{D}(0,\epsilon_{0})$. As a byproduct, bearing in mind Proposition 5 2), $v_{S_{d_p}}(\tau,z,\epsilon)$ defines
a holomorphic function on $(S_{d_p} \cup D(0,r)) \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, continuous
on $(\bar{S}_{d_p} \cup \bar{D}(0,r)) \times D(0,\delta \delta_{1}) \times \dot{D}(0,\epsilon_{0})$, for some $0 < \delta_{1} < 1$, that
satisfies the bounds (\ref{bds_vSdp}). By application of the same arguments as in Lemma 9, one can prove that the function
$y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ defined as (\ref{defin_y_ESdp}) induces a bounded holomorphic function on
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{S_{d_p}}$. Moreover, an analogous reasoning
as the one in Proposition 11 2) leads to the bounds (\ref{exp_flat_difference_yk_plus_1_minus_yk_Sdp}).
Lastly, we check that $y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ indeed solves the problem (\ref{SPCP_second}), (\ref{SPCP_second_i_d}).
Bearing in mind the operator expansions (\ref{Tahara_expansion_diff_op}), this follows from the observation that the next identity holds
\begin{multline}
t^{d_{l_{0},l_{1}}} (t^{2}\partial_{t})^{l_1} y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) =
\frac{\epsilon^{-(d_{l_{0},l_{1}}+l_{1})}}{\Gamma(d_{l_{0},l_{1}})} \int_{L_{\gamma_{d_p}}}
u \int_{0}^{u} (u-s)^{d_{l_{0},l_{1}}-1}s^{l_{1}}\\
\times v_{S_{d_{p}}}(s,z,\epsilon) \frac{ds}{s} \exp( -\frac{u}{\epsilon t} ) \frac{du}{u} \label{expansion_diff_op_Laplace_y_Sdp}
\end{multline}
for all $t \in \mathcal{T} \cap D(0,r_{\mathcal{T}})$, $\epsilon \in \mathcal{E}_{S_{d_p}}$, all given positive integers
$d_{l_{0},l_{1}},l_{1} \geq 1$. Its proof remains a straightforward adaptation of the one of Lemma 20 and is therefore omitted.
Ultimately, it remains to establish the estimates (\ref{difference_y_HJn_Sd0}) and (\ref{difference_y_HJn_Sdiota}). Again, this follows
from path deformation methods which mirror the lines of arguments detailed in the proof of Theorem 1 3).
\end{proof}
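The two flatness scales appearing in Theorem 2 can be compared numerically (an illustration, not part of the proofs; the constants $M_{2}=1$ and $M_{3}=e$ below are arbitrary sample values with $M_{3}>1$, as required in the theorem). The differences of neighboring solutions decay like $\exp( -\frac{M_{2}}{|\epsilon|} \mathrm{Log} \frac{M_{3}}{|\epsilon|} )$ on the $HJ_{n}$-type intersections (item 1.2) and like $\exp( -\frac{M_{2}}{|\epsilon|} )$ on the $S_{d_p}$-type intersections (item 2.2); as soon as $M_{3}/|\epsilon| > e$, the log-flat bound is the smaller one, and the gap widens as $\epsilon \to 0$.

```python
# Numerical comparison of the log-flat scale exp(-(M2/eps) * log(M3/eps))
# (item 1.2 of the theorem) with the exponentially flat scale exp(-M2/eps)
# (item 2.2). Sample constants only.
import math

M2, M3 = 1.0, math.e  # arbitrary sample values with M3 > 1

def exp_flat(eps):
    return math.exp(-M2 / eps)

def log_flat(eps):
    return math.exp(-(M2 / eps) * math.log(M3 / eps))

prev_ratio = 1.0
for eps in (0.5, 0.1, 0.01):
    ratio = log_flat(eps) / exp_flat(eps)
    assert ratio < 1.0          # log-flat decay beats exponential flatness
    assert ratio < prev_ratio   # and the gap widens as eps shrinks
    prev_ratio = ratio
print("log-flat bound dominates; last ratio:", prev_ratio)
```

This discrepancy between the two decay rates announces the two levels 1 and $1^{+}$ studied in the next section.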
Since the forcing term $u(t,z,\epsilon)$ in the equation (\ref{SPCP_second}) in particular solves the Cauchy problem
(\ref{SPCP_first}), (\ref{SPCP_first_i_d}), we deduce that the functions $y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ and
$y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ themselves solve a Cauchy problem with holomorphic coefficients in the vicinity of the
origin in $\mathbb{C}^{3}$. Namely,
\begin{corol} Let us introduce the next differential and linear fractional operators
\begin{multline*}
\mathcal{P}_{1}(t,z,\epsilon,\{ m_{k,t,\epsilon} \}_{k \in I_{\mathcal{A}}},\partial_{t},\partial_{z}) =
P(\epsilon t^{2}\partial_{t})\partial_{z}^{S}
- \sum_{\underline{k} = (k_{0},k_{1},k_{2}) \in \mathcal{A}}
c_{\underline{k}}(z,\epsilon) m_{k_{2},t,\epsilon}(t^{2}\partial_{t})^{k_0} \partial_{z}^{k_1},\\
\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z}) =
P_{\mathcal{B}}(\epsilon t^{2}\partial_{t}) \partial_{z}^{S_{\mathcal{B}}} -
\sum_{\underline{l}=(l_{0},l_{1},l_{2}) \in \mathcal{B}} d_{\underline{l}}(z,\epsilon)
t^{l_0}\partial_{t}^{l_1}\partial_{z}^{l_{2}}
\end{multline*}
where $m_{k_{2},t,\epsilon}$ stands for the Moebius operator
$m_{k_{2},t,\epsilon}(u(t,z,\epsilon)) = u(\frac{t}{1 + k_{2} \epsilon t},z,\epsilon)$.
Then, the functions $y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$, for $k \in \llbracket -n,n \rrbracket$ and
$y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ for $0 \leq p \leq \iota-1$ are actual holomorphic solutions to the next
Cauchy problem
$$ \mathcal{P}_{1}(t,z,\epsilon,\{ m_{k,t,\epsilon} \}_{k \in I_{\mathcal{A}}},\partial_{t},\partial_{z})
\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z})y(t,z,\epsilon) = 0 $$
whose coefficients are holomorphic w.r.t $z$ and $\epsilon$ on a neighborhood of the origin and polynomial in $t$,
under the constraints
$$
\left\{ \begin{aligned}
(\partial_{z}^{j}y)(t,0,\epsilon) = \psi_{j}(t,\epsilon) \ \ , \ \ 0 \leq j \leq S_{\mathcal{B}}-1 \\
(\partial_{z}^{j}\mathcal{P}_{2}(t,z,\epsilon,\partial_{t},\partial_{z})y)(t,0,\epsilon) = \varphi_{j}(t,\epsilon) \ \ , \ \
0 \leq j \leq S-1.
\end{aligned} \right.
$$
\end{corol}
\section{Parametric Gevrey asymptotic expansions with two levels 1 and $1^{+}$ for the analytic solutions to the Cauchy problems displayed
in Sections 3 and 5}
\subsection{A version of the Ramis-Sibuya Theorem involving two levels}
Within this section we state a variant of a classical cohomological criterion in the framework of Gevrey asymptotics known as the
Ramis-Sibuya Theorem (see \cite{hssi}, Theorem XI-2-3), obtained by the first author in the work \cite{ma}. Besides, in view of the recent results
on so-called $\mathbb{M}-$summability for strongly regular sequences $\mathbb{M} = (M_{n})_{n \geq 0}$ obtained by the authors and
J. Sanz, we can provide sufficient conditions which give rise to the special situation that involves both 1 and $1^{+}$ summability.\medskip
\noindent We start with the definitions of Gevrey 1 and $1^{+}$ asymptotics.\medskip
\noindent Let $(\mathbb{F},||.||_{\mathbb{F}})$ be a Banach space over $\mathbb{C}$. The set $\mathbb{F}[[\epsilon]]$ stands for the
space of all formal series $\sum_{k \geq 0} a_{k} \epsilon^{k}$ with coefficients $a_{k}$ belonging to $\mathbb{F}$ for all integers $k \geq 0$.
Let $f : \mathcal{F} \rightarrow \mathbb{F}$ be a holomorphic function on a bounded open sector $\mathcal{F}$
centered at 0 and let $\hat{f}(\epsilon) = \sum_{k \geq 0} a_{k} \epsilon^{k} \in \mathbb{F}[[\epsilon]]$ be a formal series.
\begin{defin} The function $f$ is said to possess the formal series $\hat{f}$
as $1-$Gevrey asymptotic expansion if, for any closed proper subsector $\mathcal{W} \subset \mathcal{F}$ centered at 0, there exist
$C,M>0$ such that
\begin{equation}
||f(\epsilon) - \sum_{k=0}^{N-1} a_{k} \epsilon^{k}||_{\mathbb{F}} \leq CM^{N}(N/e)^{N}|\epsilon|^{N} \label{f_expansion_Gevrey_1}
\end{equation}
for all $N \geq 1$, all $\epsilon \in \mathcal{W}$. When the aperture of $\mathcal{F}$ is slightly larger than $\pi$, then according to
Watson's lemma (see \cite{ba2}, Proposition 11), $f$ is the unique holomorphic function on $\mathcal{F}$ satisfying
(\ref{f_expansion_Gevrey_1}). The function $f$ is then called the $1-$sum of $\hat{f}$ on $\mathcal{F}$ and can be reconstructed from $\hat{f}$ using
Borel/Laplace transforms as detailed in Chapter 3 of \cite{ba1}.
\end{defin}
\begin{defin} We say that $f$ has the formal series $\hat{f}$
as $1^{+}-$Gevrey asymptotic expansion if, for any closed proper subsector $\mathcal{W} \subset \mathcal{F}$ centered at 0,
there exist $C,M>0$ such that
\begin{equation}
||f(\epsilon) - \sum_{k=0}^{N-1} a_{k} \epsilon^{k}||_{\mathbb{F}} \leq CM^{N}(N/\mathrm{Log} N)^{N}|\epsilon|^{N} \label{f_expansion_Gevrey_1_plus}
\end{equation}
for all $N \geq 2$, all $\epsilon \in \mathcal{W}$. In particular, the formal series $\hat{f}$ is itself of $1^{+}-$Gevrey type, meaning that
there exist two constants $C',M'>0$ such that $||a_{k}||_{\mathbb{F}} \leq C'M'^{k}(k/\mathrm{Log} k)^{k}$ for all $k \geq 2$.
Provided that the aperture of $\mathcal{F}$ is slightly larger than $\pi$, Theorem 3.1 in \cite{lamasa} ensures the uniqueness of the analytic
function $f$ fulfilling the estimates (\ref{f_expansion_Gevrey_1_plus}) on $\mathcal{F}$
(see the remark below). In that case, $\hat{f}$ is named
$\mathbb{M}-$summable on $\mathcal{F}$ for the strongly regular sequence $\mathbb{M} = (M_{n})_{n \geq 0}$ where
$M_{n} = (\frac{n}{\mathrm{Log}(n+2)})^{n}$ and $f$ denotes the $\mathbb{M}-$sum of $\hat{f}$ on $\mathcal{F}$.
For brevity of notation, we will call it also $1^{+}-$sum. As explained in \cite{lamasa}, the $1^{+}-$sum $f$ can be recovered from
the formal expansions $\hat{f}$ with the help of an analog of a Borel/Laplace procedure. It is worthwhile noting that this notion
of $1^{+}-$summability has to be distinguished from the notion of $1^{+}-$summability introduced
in the papers of G. Immink whose sums are defined on domains which are not sectors, see \cite{im1},\cite{im2},\cite{im3}.
\end{defin}
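The two remainder scales in the definitions above can be compared by a small numerical experiment (an illustration, not a statement from the text): since $N/\mathrm{Log}\, N < N/e$ exactly when $\mathrm{Log}\, N > e$, the $1^{+}$ scale $(N/\mathrm{Log}\, N)^{N}$ becomes the smaller of the two past the crossover $N = e^{e} \approx 15.15$, so for large $N$ the $1^{+}$ estimates are the more stringent ones.

```python
# Comparison of the 1-Gevrey remainder scale (N/e)^N with the 1^+ scale
# (N/log N)^N. The crossover sits at N = e^e ~ 15.15.
import math

def scale_gevrey_1(N):
    return (N / math.e) ** N

def scale_1_plus(N):
    return (N / math.log(N)) ** N

crossover = math.exp(math.e)  # about 15.15
for N in (5, 10):        # below the crossover the 1^+ scale is larger
    assert scale_1_plus(N) > scale_gevrey_1(N)
for N in (20, 50, 100):  # above the crossover the 1^+ scale is smaller
    assert scale_1_plus(N) < scale_gevrey_1(N)
print("crossover near N =", crossover)
```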
{\bf Remark :} The strongly regular sequence $\mathbb{M}$ stated above is equivalent, in the sense that the functional spaces associated with them coincide, to $\mathbb{M}_{\alpha,\beta}=(n!^{\alpha}\prod_{m=0}^{n}\log^{\beta}(e+m))_{n\ge 0}$, for $\alpha=1,\beta=-1$. In this case, one has $\omega(\mathbb{M})=1$, meaning that uniqueness of the sum $f$ in (\ref{f_expansion_Gevrey_1_plus}) is guaranteed, for a prescribed asymptotic expansion, when departing from a sector of opening larger than $\pi$. The criterion leans on the divergence of a series of positive real numbers, see~\cite{korem}.
\medskip
We consider the set of sectors $ \underline{\mathcal{E}} = \{ \mathcal{E}_{HJ_n}^{k} \}_{k \in \llbracket -n,n \rrbracket}
\cup \{ \mathcal{E}_{S_{d_p}} \}_{0 \leq p \leq \iota - 1}$ constructed in Section 3.3 that fulfills the constraints 3), 4) and 5).
The set $\underline{\mathcal{E}}$ forms a so-called good covering in $\mathbb{C}^{\ast}$ as given in Definition 3 of \cite{ma}.
We rephrase the version of the Ramis-Sibuya theorem which entails both $1-$Gevrey and $1^{+}-$Gevrey asymptotics displayed in \cite{ma}
for the specific covering $\underline{\mathcal{E}}$ with additional information concerning $1$ and $1^{+}$ summability.\medskip
\begin{prop} Let $(\mathbb{F},||.||_{\mathbb{F}})$ be a Banach space over $\mathbb{C}$. For all $k \in \llbracket -n,n \rrbracket$
and $0 \leq p \leq \iota - 1$, let $G_{k}$ be a holomorphic function from $\mathcal{E}_{HJ_n}^{k}$ into $(\mathbb{F},||.||_{\mathbb{F}})$
and $\breve{G}_{p}$ be a holomorphic function from $\mathcal{E}_{S_{d_p}}$ into $(\mathbb{F},||.||_{\mathbb{F}})$.
\noindent We consider a cocycle $\underline{\Delta}(\epsilon)$ defined as the set of functions
$\breve{\Delta}_{p} = \breve{G}_{p+1}(\epsilon) - \breve{G}_{p}(\epsilon)$ for $0 \leq p \leq \iota-2$ when
$\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$, $\Delta_{k}(\epsilon) = G_{k}(\epsilon) - G_{k+1}(\epsilon)$
for $-n \leq k \leq n-1$ and $\epsilon \in \mathcal{E}_{HJ_n}^{k} \cap \mathcal{E}_{HJ_n}^{k+1}$ together with
$\Delta_{-n,0}(\epsilon) = \breve{G}_{0}(\epsilon) - G_{-n}(\epsilon)$ on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$
and $\Delta_{\iota-1,n}(\epsilon) = G_{n}(\epsilon) - \breve{G}_{\iota-1}(\epsilon)$ on
$\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$.
\noindent We make the following assumptions:\\
1) The functions $G_{k}$ and $\breve{G}_{p}$ are bounded as $\epsilon$ tends to 0 on their domains of definition.\\
2) For all $0 \leq p \leq \iota-2$, $\breve{\Delta}_{p}(\epsilon)$ and both $\Delta_{-n,0}(\epsilon)$, $\Delta_{\iota-1,n}(\epsilon)$ are
exponentially flat. This means that one can select constants $\breve{K}_{p},\breve{M}_{p}>0$ and $K_{-n,0},M_{-n,0}>0$ with
$K_{\iota-1,n},M_{\iota-1,n}>0$ such that
\begin{multline}
|| \breve{\Delta}_{p}(\epsilon) ||_{\mathbb{F}} \leq \breve{K}_{p}\exp( - \frac{\breve{M}_{p}}{|\epsilon|} ) \ \
\mbox{for $\epsilon \in \mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_{p}}}$ },\\
||\Delta_{-n,0}(\epsilon)||_{\mathbb{F}} \leq K_{-n,0}\exp( - \frac{M_{-n,0}}{|\epsilon|} ) \ \
\mbox{for $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$ },\\
||\Delta_{\iota-1,n}(\epsilon)||_{\mathbb{F}} \leq K_{\iota-1,n} \exp( -\frac{M_{\iota-1,n}}{|\epsilon|} ) \ \
\mbox{for $\epsilon \in \mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$.} \label{cond_Delta_cocycle_exp_flat}
\end{multline}
3) For $-n \leq k \leq n-1$, $\Delta_{k}(\epsilon)$ are super-exponentially flat on
$\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$. This means that one can choose constants $K_{k},M_{k}>0$ and $L_{k}>1$ such that
\begin{equation}
|| \Delta_{k}(\epsilon) ||_{\mathbb{F}} \leq K_{k} \exp( -\frac{M_k}{|\epsilon|} \mathrm{Log} \frac{L_k}{|\epsilon|} )
\label{cond_Delta_cocycle_log_exp_flat}
\end{equation}
for all $\epsilon \in \mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$.
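As a quick consistency check (not needed in the proof), the bound above is stronger than exponential flatness at any prescribed rate $A>0$, since $\mathrm{Log}(L_{k}/|\epsilon|) \to +\infty$ as $\epsilon \to 0$:

```latex
% For 0 < |\epsilon| \leq L_{k}e^{-A/M_{k}} one has \mathrm{Log}(L_{k}/|\epsilon|) \geq A/M_{k}, whence
K_{k} \exp\Big( -\frac{M_{k}}{|\epsilon|}\, \mathrm{Log}\frac{L_{k}}{|\epsilon|} \Big)
\;\leq\; K_{k} \exp\Big( -\frac{A}{|\epsilon|} \Big),
\qquad 0 < |\epsilon| \leq L_{k}\,e^{-A/M_{k}}.
```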
Then, there exist a convergent power series $a(\epsilon) \in \mathbb{F}\{ \epsilon \}$ near $\epsilon=0$ and two formal series
$\hat{G}^{1}(\epsilon),\hat{G}^{2}(\epsilon) \in \mathbb{F} [[ \epsilon ]]$ with the property that $G_{k}(\epsilon)$ and $\breve{G}_{p}(\epsilon)$
admit the next decomposition
\begin{equation}
G_{k}(\epsilon) = a(\epsilon) + G_{k}^{1}(\epsilon) + G_{k}^{2}(\epsilon) \ \ , \ \
\breve{G}_{p}(\epsilon) = a(\epsilon) + \breve{G}_{p}^{1}(\epsilon) + \breve{G}_{p}^{2}(\epsilon)
\end{equation}
for $k \in \llbracket -n,n \rrbracket$, $0 \leq p \leq \iota-1$, where $G_{k}^{1}(\epsilon)$ (resp. $G_{k}^{2}(\epsilon)$) are
holomorphic on $\mathcal{E}_{HJ_n}^{k}$ and have $\hat{G}^{1}(\epsilon)$ (resp. $\hat{G}^{2}(\epsilon)$) as $1-$Gevrey (resp.
$1^{+}-$Gevrey) asymptotic expansion on $\mathcal{E}_{HJ_n}^{k}$ and where
$\breve{G}_{p}^{1}$ (resp. $\breve{G}_{p}^{2}(\epsilon)$) are holomorphic on $\mathcal{E}_{S_{d_p}}$ and possess
$\hat{G}^{1}(\epsilon)$ (resp. $\hat{G}^{2}(\epsilon)$) as $1-$Gevrey (resp. $1^{+}-$Gevrey) asymptotic expansion
on $\mathcal{E}_{S_{d_p}}$. Besides, the functions $G_{-n}^{2}(\epsilon)$,$G_{n}^{2}(\epsilon)$ and $\breve{G}_{h}^{2}(\epsilon)$
for $0 \leq h \leq \iota-1$ turn out to be the restriction of a common holomorphic function denoted $G^{2}(\epsilon)$
on the large sector $\mathcal{E}_{HS} = \mathcal{E}_{HJ_n}^{-n} \cup \bigcup_{h=0}^{\iota-1} \mathcal{E}_{S_{d_h}} \cup \mathcal{E}_{HJ_n}^{n}$
which determines the $1^{+}-$sum of $\hat{G}^{2}(\epsilon)$ on $\mathcal{E}_{HS}$. Moreover, $\breve{G}_{p}^{1}(\epsilon)$ represents the $1-$sum of $\hat{G}^{1}(\epsilon)$ on $\mathcal{E}_{S_{d_p}}$ whenever the aperture of
$\mathcal{E}_{S_{d_p}}$ is strictly larger than $\pi$.
\end{prop}
\begin{proof} Since the notations used here differ from those in the result stated in \cite{ma}, and in order to explain the
part of the proposition concerning $1$ and $1^{+}$ summability, which was not addressed in our previous work \cite{ma}, we
present a sketch of the proof of the statement.
We consider a first cocycle $\underline{\Delta}^{1}(\epsilon)$ defined by the next family of functions
\begin{multline}
\breve{\Delta}_{p}^{1}(\epsilon) = \breve{\Delta}_{p}(\epsilon) \ \ \mbox{for $0 \leq p \leq \iota-2$ on
$\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$},\\
\Delta_{-n,0}^{1}(\epsilon) = \Delta_{-n,0}(\epsilon) \ \ \mbox{on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$},
\ \ \Delta_{\iota-1,n}^{1}(\epsilon) = \Delta_{\iota-1,n}(\epsilon) \ \ \mbox{on $\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$},\\
\Delta_{k}^{1}(\epsilon) = 0 \ \ \mbox{for $-n \leq k \leq n-1$ on $\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$},
\label{cocycle_1_delta}
\end{multline}
and a second cocycle $\underline{\Delta}^{2}(\epsilon)$ described by the forthcoming set of functions
\begin{multline}
\breve{\Delta}_{p}^{2}(\epsilon) = 0 \ \ \mbox{for $0 \leq p \leq \iota-2$ on
$\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$}, \\
\Delta_{-n,0}^{2}(\epsilon) = 0 \ \ \mbox{on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$}, \ \ \Delta_{\iota-1,n}^{2}(\epsilon) = 0
\ \ \mbox{on $\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$},\\
\Delta_{k}^{2}(\epsilon) = \Delta_{k}(\epsilon) \ \
\mbox{for $-n \leq k \leq n-1$ on $\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$}. \label{cocycle_2_delta}
\end{multline}
The next lemma restates Lemma 14 from \cite{ma}.
\begin{lemma} For all $k \in \llbracket -n,n \rrbracket$, all $0 \leq p \leq \iota-1$, there exist bounded holomorphic functions
$G_{k}^{1} : \mathcal{E}_{HJ_n}^{k} \rightarrow \mathbb{C}$ and $\breve{G}_{p}^{1}:\mathcal{E}_{S_{d_p}} \rightarrow \mathbb{C}$
that satisfy the property
\begin{multline}
\breve{\Delta}_{p}^{1}(\epsilon) = \breve{G}_{p+1}^{1}(\epsilon) - \breve{G}_{p}^{1}(\epsilon) \ \ \mbox{for $0 \leq p \leq \iota-2$ on
$\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$},\\
\Delta_{-n,0}^{1}(\epsilon) = \breve{G}_{0}^{1}(\epsilon) - G_{-n}^{1}(\epsilon)
\ \ \mbox{on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$}, \ \ \Delta_{\iota-1,n}^{1}(\epsilon) =
G_{n}^{1}(\epsilon) - \breve{G}_{\iota-1}^{1}(\epsilon) \ \ \mbox{on $\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$},\\
\Delta_{k}^{1}(\epsilon) = G_{k}^{1}(\epsilon) - G_{k+1}^{1}(\epsilon) \ \ \mbox{for $-n \leq k \leq n-1$ on
$\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$}. \label{cocycle_1_delta_split}
\end{multline}
Furthermore, one can get coefficients $\varphi_{m}^{1} \in \mathbb{F}$, for $m \geq 0$ such that\\
1) For all $k \in \llbracket -n,n \rrbracket$,
any closed proper subsector $\mathcal{W} \subset \mathcal{E}_{HJ_n}^{k}$, centered at 0, there exist constants $K_{k},M_{k}>0$ with
\begin{equation}
||G_{k}^{1}(\epsilon) - \sum_{m=0}^{N-1} \varphi_{m}^{1} \epsilon^{m} ||_{\mathbb{F}} \leq
K_{k}(M_{k})^{N}(\frac{N}{e})^{N} |\epsilon|^{N}
\end{equation}
for all $\epsilon \in \mathcal{W}$, all $N \geq 1$.\\
2) For $0 \leq p \leq \iota-1$, any closed proper subsector $\mathcal{W} \subset \mathcal{E}_{S_{d_p}}$, centered at 0,
one can find constants $K_{p},M_{p}>0$ with
\begin{equation}
||\breve{G}_{p}^{1}(\epsilon) - \sum_{m=0}^{N-1} \varphi_{m}^{1} \epsilon^{m} ||_{\mathbb{F}} \leq
K_{p}(M_{p})^{N}(\frac{N}{e})^{N} |\epsilon|^{N} \label{expansion_breveG_p1}
\end{equation}
for all $\epsilon \in \mathcal{W}$, all $N \geq 1$.
\end{lemma}
Likewise, the next lemma recapitulates Lemma 15 from \cite{ma}.
\begin{lemma} For all $k \in \llbracket -n,n \rrbracket$, all $0 \leq p \leq \iota-1$, one can find bounded holomorphic functions
$G_{k}^{2} : \mathcal{E}_{HJ_n}^{k} \rightarrow \mathbb{C}$ and $\breve{G}_{p}^{2}:\mathcal{E}_{S_{d_p}} \rightarrow \mathbb{C}$
that satisfy the following identities
\begin{multline}
\breve{\Delta}_{p}^{2}(\epsilon) = \breve{G}_{p+1}^{2}(\epsilon) - \breve{G}_{p}^{2}(\epsilon) \ \ \mbox{for $0 \leq p \leq \iota-2$ on
$\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$},\\
\Delta_{-n,0}^{2}(\epsilon) = \breve{G}_{0}^{2}(\epsilon) - G_{-n}^{2}(\epsilon)
\ \ \mbox{on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$}, \ \ \Delta_{\iota-1,n}^{2}(\epsilon) =
G_{n}^{2}(\epsilon) - \breve{G}_{\iota-1}^{2}(\epsilon) \ \ \mbox{on $\mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$},\\
\Delta_{k}^{2}(\epsilon) = G_{k}^{2}(\epsilon) - G_{k+1}^{2}(\epsilon) \ \ \mbox{for $-n \leq k \leq n-1$ on
$\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$}. \label{cocycle_2_delta_split}
\end{multline}
Moreover, one can obtain coefficients $\varphi_{m}^{2} \in \mathbb{F}$, for $m \geq 0$ such that\\
1) For all $k \in \llbracket -n,n \rrbracket$,
any closed proper subsector $\mathcal{W} \subset \mathcal{E}_{HJ_n}^{k}$, centered at 0, one can find constants $K_{k},M_{k}>0$ with
\begin{equation}
||G_{k}^{2}(\epsilon) - \sum_{m=0}^{N-1} \varphi_{m}^{2} \epsilon^{m} ||_{\mathbb{F}} \leq
K_{k}(M_{k})^{N}(\frac{N}{\mathrm{Log} N})^{N} |\epsilon|^{N} \label{expansion_G_k2}
\end{equation}
for all $\epsilon \in \mathcal{W}$, all $N \geq 2$.\\
2) For $0 \leq p \leq \iota-1$, any closed proper subsector $\mathcal{W} \subset \mathcal{E}_{S_{d_p}}$, centered at 0,
one can find constants $K_{p},M_{p}>0$ with
\begin{equation}
||\breve{G}_{p}^{2}(\epsilon) - \sum_{m=0}^{N-1} \varphi_{m}^{2} \epsilon^{m} ||_{\mathbb{F}} \leq
K_{p}(M_{p})^{N}(\frac{N}{\mathrm{Log} N})^{N} |\epsilon|^{N} \label{expansion_breveG_p2}
\end{equation}
for all $\epsilon \in \mathcal{W}$, all $N \geq 2$.
\end{lemma}
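For orientation, the shape of the super-exponential flatness in assumption 3) is the one naturally attached to the $1^{+}-$Gevrey remainder bounds (\ref{expansion_G_k2}): optimizing over $N$ (a heuristic computation, with constants untracked) yields

```latex
% Pick N = N(\epsilon) with M N |\epsilon| / \mathrm{Log}\,N = e^{-1}, so the N-th bound equals K e^{-N};
% solving N/\mathrm{Log}\,N = (eM|\epsilon|)^{-1} gives N \sim (eM|\epsilon|)^{-1}\mathrm{Log}(1/|\epsilon|), hence
\inf_{N \geq 2}\, K M^{N} \Big( \frac{N}{\mathrm{Log}\,N} \Big)^{N} |\epsilon|^{N}
\;\leq\; K' \exp\Big( -\frac{c}{|\epsilon|}\, \mathrm{Log}\frac{1}{|\epsilon|} \Big)
```

for some $c,K'>0$ and $|\epsilon|$ small enough.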
We introduce the bounded holomorphic functions
$$
a_{k}(\epsilon) = G_{k}(\epsilon) - G_{k}^{1}(\epsilon) - G_{k}^{2}(\epsilon) \ \ \mbox{for $\epsilon \in \mathcal{E}_{HJ_n}^{k}$}, \ \
\breve{a}_{p}(\epsilon) = \breve{G}_{p}(\epsilon) - \breve{G}_{p}^{1}(\epsilon) - \breve{G}_{p}^{2}(\epsilon) \ \
\mbox{for $\epsilon \in \mathcal{E}_{S_{d_p}}$}
$$
for $k \in \llbracket -n,n \rrbracket$ and $0 \leq p \leq \iota-1$. By construction, we notice that
\begin{multline*}
a_{k}(\epsilon) - a_{k+1}(\epsilon) = G_{k}(\epsilon) - G_{k}^{1}(\epsilon) - G_{k}^{2}(\epsilon) - G_{k+1}(\epsilon) +
G_{k+1}^{1}(\epsilon) + G_{k+1}^{2}(\epsilon)\\
= G_{k}(\epsilon) - G_{k+1}(\epsilon) - \Delta_{k}^{1}(\epsilon) - \Delta_{k}^{2}(\epsilon)
= G_{k}(\epsilon) - G_{k+1}(\epsilon) - \Delta_{k}(\epsilon) = 0
\end{multline*}
for $-n \leq k \leq n-1$ on $\mathcal{E}_{HJ_n}^{k+1} \cap \mathcal{E}_{HJ_n}^{k}$ together with
\begin{multline*}
\breve{a}_{p+1}(\epsilon) - \breve{a}_{p}(\epsilon) = \breve{G}_{p+1}(\epsilon) - \breve{G}_{p}(\epsilon)
- \breve{\Delta}_{p}^{1}(\epsilon) - \breve{\Delta}_{p}^{2}(\epsilon) = \breve{G}_{p+1}(\epsilon) - \breve{G}_{p}(\epsilon)
- \breve{\Delta}_{p}(\epsilon) = 0
\end{multline*}
for $0 \leq p \leq \iota-2$ on $\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_{p}}}$. Furthermore,
\begin{multline*}
\breve{a}_{0}(\epsilon) - a_{-n}(\epsilon) = \breve{G}_{0}(\epsilon) - \breve{G}_{0}^{1}(\epsilon) -
\breve{G}_{0}^{2}(\epsilon) - G_{-n}(\epsilon) + G_{-n}^{1}(\epsilon) + G_{-n}^{2}(\epsilon)\\
= \breve{G}_{0}(\epsilon) - G_{-n}(\epsilon) - \Delta_{-n,0}^{1}(\epsilon) - \Delta_{-n,0}^{2}(\epsilon) =
\breve{G}_{0}(\epsilon) - G_{-n}(\epsilon) - \Delta_{-n,0}(\epsilon) = 0
\end{multline*}
for $\epsilon \in \mathcal{E}_{HJ_n}^{-n} \cap \mathcal{E}_{S_{d_0}}$ and
\begin{multline*}
a_{n}(\epsilon) - \breve{a}_{\iota-1}(\epsilon) = G_{n}(\epsilon) - G_{n}^{1}(\epsilon) - G_{n}^{2}(\epsilon)
- \breve{G}_{\iota-1}(\epsilon) + \breve{G}_{\iota-1}^{1}(\epsilon) + \breve{G}_{\iota-1}^{2}(\epsilon)\\
= G_{n}(\epsilon) - \breve{G}_{\iota-1}(\epsilon) - \Delta_{\iota-1,n}^{1}(\epsilon) - \Delta_{\iota-1,n}^{2}(\epsilon)
= G_{n}(\epsilon) - \breve{G}_{\iota-1}(\epsilon) - \Delta_{\iota-1,n}(\epsilon) = 0
\end{multline*}
whenever $\epsilon \in \mathcal{E}_{HJ_n}^{n} \cap \mathcal{E}_{S_{d_{\iota-1}}}$.
As a result, the functions $a_{k}(\epsilon)$ on $\mathcal{E}_{HJ_n}^{k}$ and $\breve{a}_{p}(\epsilon)$ on
$\mathcal{E}_{S_{d_p}}$ are the restriction of a common holomorphic bounded function $a(\epsilon)$ on $D(0,\epsilon_{0}) \setminus \{ 0 \}$.
The origin is therefore a removable singularity and $a(\epsilon)$ defines a convergent power series on $D(0,\epsilon_{0})$.
As a consequence, one can write
$$
G_{k}(\epsilon) = a(\epsilon) + G_{k}^{1}(\epsilon) + G_{k}^{2}(\epsilon) \ \ \mbox{on $\mathcal{E}_{HJ_n}^{k}$}, \ \
\breve{G}_{p}(\epsilon) = a(\epsilon) + \breve{G}_{p}^{1}(\epsilon) + \breve{G}_{p}^{2}(\epsilon) \ \ \mbox{on
$\mathcal{E}_{S_{d_p}}$}
$$
for all $k \in \llbracket -n,n \rrbracket$, $0 \leq p \leq \iota-1$. Moreover, $G_{k}^{1}(\epsilon)$ (resp. $G_{k}^{2}(\epsilon)$)
have $\hat{G}^{1}(\epsilon) = \sum_{m \geq 0} \varphi_{m}^{1} \epsilon^{m}$ (resp.
$\hat{G}^{2}(\epsilon) = \sum_{m \geq 0} \varphi_{m}^{2} \epsilon^{m}$) as $1-$Gevrey (resp.
$1^{+}-$Gevrey) asymptotic expansion on $\mathcal{E}_{HJ_n}^{k}$ and
$\breve{G}_{p}^{1}$ (resp. $\breve{G}_{p}^{2}(\epsilon)$) possess
$\hat{G}^{1}(\epsilon)$ (resp. $\hat{G}^{2}(\epsilon)$) as $1-$Gevrey (resp. $1^{+}-$Gevrey) asymptotic expansion
on $\mathcal{E}_{S_{d_p}}$.
By the very definition of the cocycles $\underline{\Delta}^{1}(\epsilon)$ and $\underline{\Delta}^{2}(\epsilon)$ given by
(\ref{cocycle_1_delta}) and (\ref{cocycle_2_delta}), in accordance with the constraints
(\ref{cocycle_1_delta_split}) and (\ref{cocycle_2_delta_split}), we get in particular that
\begin{multline*}
G_{n}^{2}(\epsilon) = \breve{G}_{\iota-1}^{2}(\epsilon) \ \ \mbox{on $\mathcal{E}_{S_{d_{\iota-1}}} \cap \mathcal{E}_{HJ_n}^{n}$}, \ \
G_{-n}^{2}(\epsilon) = \breve{G}_{0}^{2}(\epsilon) \ \ \mbox{on $\mathcal{E}_{S_{d_0}} \cap \mathcal{E}_{HJ_n}^{-n}$},\\
\breve{G}_{p+1}^{2}(\epsilon) = \breve{G}_{p}^{2}(\epsilon) \ \ \mbox{on $\mathcal{E}_{S_{d_{p+1}}} \cap \mathcal{E}_{S_{d_p}}$}
\end{multline*}
for all $0 \leq p \leq \iota-2$. For that reason, we see that $G_{-n}^{2}(\epsilon)$,$G_{n}^{2}(\epsilon)$ and
$\breve{G}_{p}^{2}(\epsilon)$ are the restrictions of a common holomorphic function denoted $G^{2}(\epsilon)$
on the large sector
$\mathcal{E}_{HS} = \mathcal{E}_{HJ_n}^{-n} \cup \bigcup_{h=0}^{\iota-1} \mathcal{E}_{S_{d_h}} \cup \mathcal{E}_{HJ_n}^{n}$ with aperture larger
than $\pi$. In addition, from the expansions (\ref{expansion_G_k2}) and (\ref{expansion_breveG_p2}) we deduce that $G^{2}(\epsilon)$
defines the $1^{+}-$sum of $\hat{G}^{2}(\epsilon)$ on $\mathcal{E}_{HS}$. Finally, when the aperture of $\mathcal{E}_{S_{d_p}}$ is strictly larger than $\pi$, in view of the expansion (\ref{expansion_breveG_p1}) it turns out that $\breve{G}_{p}^{1}(\epsilon)$ defines the $1-$sum of $\hat{G}^{1}(\epsilon)$ on $\mathcal{E}_{S_{d_p}}$.
\end{proof}
\subsection{Existence of multiscale parametric Gevrey asymptotic expansions for the analytic solutions to the problems
(\ref{SPCP_first}), (\ref{SPCP_first_i_d}) and (\ref{SPCP_second}), (\ref{SPCP_second_i_d})}
We are now ready to state the third main result of this work, which reveals a fine structure of two Gevrey orders 1 and $1^{+}$ for the
solutions $u_{\mathcal{E}_{HJ_n}^{k}}$ and $u_{\mathcal{E}_{S_{d_p}}}$ (resp. $y_{\mathcal{E}_{HJ_n}^{k}}$ and $y_{\mathcal{E}_{S_{d_p}}}$)
regarding the parameter $\epsilon$.
\begin{theo} Let us assume that all the requirements asked in Theorem 1 (resp. Theorem 2) are fulfilled. Then, there exist\\
- A holomorphic function $a(t,z,\epsilon)$ (resp. $b(t,z,\epsilon)$) on the domain
$(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times D(0,\hat{\epsilon}_{0})$ for some
$0 < \hat{\epsilon}_{0} < \epsilon_{0}$,\\
- Two formal series
$$ \hat{u}^{j}(t,z,\epsilon) = \sum_{k \geq 0} u_{k}^{j}(t,z) \epsilon^{k} \in \mathbb{F}[[ \epsilon ]] \ \ , \ \ j=1,2 $$
(resp.
$$ \hat{y}^{j}(t,z,\epsilon) = \sum_{k \geq 0} y_{k}^{j}(t,z) \epsilon^{k} \in \mathbb{F}[[ \epsilon ]] \ \ , \ \ j=1,2) $$
whose coefficients $u_{k}^{j}(t,z)$ (resp. $y_{k}^{j}(t,z)$) belong to the Banach space
$\mathbb{F} = \mathcal{O}( (\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) )$ of bounded holomorphic functions
on the set $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1})$ endowed with the supremum norm,\\
which satisfy the following properties:\\
A) For each $k \in \llbracket -n,n \rrbracket$, the function $u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$ (resp.
$y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$) admits a decomposition
$$ u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon) = a(t,z,\epsilon) + u_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon) +
u_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon) $$
(resp.
$$ y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon) = b(t,z,\epsilon) + y_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon) +
y_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon) ) $$
where $u_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon)$) is bounded holomorphic
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{HJ_n}^{k}$ and possesses
$\hat{u}^{1}(t,z,\epsilon)$ (resp. $\hat{y}^{1}(t,z,\epsilon)$) as $1-$Gevrey asymptotic expansion on $\mathcal{E}_{HJ_n}^{k}$, meaning
that for any closed subsector $\mathcal{W} \subset \mathcal{E}_{HJ_n}^{k}$, there exist two constants $C,M>0$ with
$$ \sup_{t \in \mathcal{T} \cap D(0,r_{\mathcal{T}}), z \in D(0,\delta \delta_{1})}
|u_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon) - \sum_{k=0}^{N-1} u_{k}^{1}(t,z) \epsilon^{k}| \leq CM^{N}(\frac{N}{e})^{N} |\epsilon|^{N} $$
(resp.
$$ \sup_{t \in \mathcal{T} \cap D(0,r_{\mathcal{T}}), z \in D(0,\delta \delta_{1})}
|y_{\mathcal{E}_{HJ_n}^{k}}^{1}(t,z,\epsilon) - \sum_{k=0}^{N-1} y_{k}^{1}(t,z) \epsilon^{k}| \leq CM^{N}(\frac{N}{e})^{N} |\epsilon|^{N}) $$
for all $N \geq 1$, all $\epsilon \in \mathcal{W}$ and $u_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon)$ (resp.
$y_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon)$) is bounded holomorphic
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{HJ_n}^{k}$ and carries
$\hat{u}^{2}(t,z,\epsilon)$ (resp. $\hat{y}^{2}(t,z,\epsilon)$) as $1^{+}-$Gevrey asymptotic expansion on $\mathcal{E}_{HJ_n}^{k}$, in other words,
for any closed subsector $\mathcal{W} \subset \mathcal{E}_{HJ_n}^{k}$, one can get two constants $C,M>0$ with
$$ \sup_{t \in \mathcal{T} \cap D(0,r_{\mathcal{T}}), z \in D(0,\delta \delta_{1})}
|u_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon) - \sum_{k=0}^{N-1} u_{k}^{2}(t,z) \epsilon^{k}| \leq CM^{N}(\frac{N}{\mathrm{Log} N})^{N}
|\epsilon|^{N} $$
(resp.
$$ \sup_{t \in \mathcal{T} \cap D(0,r_{\mathcal{T}}), z \in D(0,\delta \delta_{1})}
|y_{\mathcal{E}_{HJ_n}^{k}}^{2}(t,z,\epsilon) - \sum_{k=0}^{N-1} y_{k}^{2}(t,z) \epsilon^{k}| \leq CM^{N}(\frac{N}{\mathrm{Log} N})^{N}
|\epsilon|^{N}) $$
for all $N \geq 2$, all $\epsilon \in \mathcal{W}$.\medskip
B) For each $0 \leq p \leq \iota - 1$, the function $u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ (resp.
$y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$) can be split into three pieces
$$ u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) = a(t,z,\epsilon) + u_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon) +
u_{\mathcal{E}_{S_{d_p}}}^{2}(t,z,\epsilon) $$
(resp.
$$ y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon) = b(t,z,\epsilon) + y_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon) +
y_{\mathcal{E}_{S_{d_p}}}^{2}(t,z,\epsilon) ) $$
where $u_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon)$) is bounded holomorphic
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{S_{d_p}}$ and has
$\hat{u}^{1}(t,z,\epsilon)$ (resp. $\hat{y}^{1}(t,z,\epsilon)$) as $1-$Gevrey asymptotic expansion on $\mathcal{E}_{S_{d_p}}$
and $u_{\mathcal{E}_{S_{d_p}}}^{2}(t,z,\epsilon)$ (resp.
$y_{\mathcal{E}_{S_{d_p}}}^{2}(t,z,\epsilon)$) is bounded holomorphic
on $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{S_{d_p}}$ and possesses
$\hat{u}^{2}(t,z,\epsilon)$ (resp. $\hat{y}^{2}(t,z,\epsilon)$) as $1^{+}-$Gevrey asymptotic expansion on $\mathcal{E}_{S_{d_p}}$.\medskip
Furthermore, the functions $u_{\mathcal{E}_{HJ_n}^{-n}}^{2}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{HJ_n}^{-n}}^{2}(t,z,\epsilon)$),
$u_{\mathcal{E}_{HJ_n}^{n}}^{2}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{HJ_n}^{n}}^{2}(t,z,\epsilon)$)
and all $u_{\mathcal{E}_{S_{d_h}}}^{2}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{S_{d_h}}}^{2}(t,z,\epsilon)$) for $0 \leq h \leq \iota-1$, are
the restrictions of a common holomorphic function $u^{2}(t,z,\epsilon)$ (resp. $y^{2}(t,z,\epsilon)$)
defined on the large domain $(\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) \times \mathcal{E}_{HS}$,
where $\mathcal{E}_{HS} = \mathcal{E}_{HJ_n}^{-n} \cup \bigcup_{h=0}^{\iota - 1} \mathcal{E}_{S_{d_h}} \cup \mathcal{E}_{HJ_n}^{n}$
which represents the $1^{+}-$sum of $\hat{u}^{2}(t,z,\epsilon)$ (resp. $\hat{y}^{2}(t,z,\epsilon)$) on $\mathcal{E}_{HS}$ w.r.t $\epsilon$.
Besides, $u_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon)$ (resp. $y_{\mathcal{E}_{S_{d_p}}}^{1}(t,z,\epsilon)$) is the
$1-$sum of $\hat{u}^{1}(t,z,\epsilon)$ (resp. $\hat{y}^{1}(t,z,\epsilon)$) on each $\mathcal{E}_{S_{d_p}}$ w.r.t $\epsilon$ whenever its aperture is strictly larger than $\pi$.
\end{theo}
\begin{proof}
For all $k \in \llbracket -n,n \rrbracket$, we define a holomorphic function $G_{k}$ by
$G_{k}(\epsilon) := (t,z) \mapsto u_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$
(resp. $G_{k}(\epsilon) := (t,z) \mapsto y_{\mathcal{E}_{HJ_n}^{k}}(t,z,\epsilon)$) which defines, by construction, a bounded and
holomorphic function from $\mathcal{E}_{HJ_n}^{k}$ into the Banach space $\mathbb{F} =
\mathcal{O}( (\mathcal{T} \cap D(0,r_{\mathcal{T}})) \times D(0,\delta \delta_{1}) )$ equipped with the supremum norm. For all
$0 \leq p \leq \iota-1$, we set up a holomorphic function $\breve{G}_{p}$ given by
$\breve{G}_{p}(\epsilon) := (t,z) \mapsto u_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$ (resp.
$\breve{G}_{p}(\epsilon) := (t,z) \mapsto y_{\mathcal{E}_{S_{d_p}}}(t,z,\epsilon)$) which yields a bounded holomorphic function
from $\mathcal{E}_{S_{d_p}}$ into $\mathbb{F}$. We deduce that Assumption 1) of Proposition 23 is satisfied.
Furthermore, according to the bounds
(\ref{difference_u_Sdp_exp_small}) together with (\ref{difference_u_HJn_Sd0}) and (\ref{difference_u_HJn_Sdiota}) concerning the
functions $u_{\mathcal{E}_{S_{d_p}}}$, $0 \leq p \leq \iota - 2$ and
$u_{\mathcal{E}_{HJ_{n}}^{-n}}$, $u_{\mathcal{E}_{HJ_{n}}^{n}}$, $u_{\mathcal{E}_{S_{d_{\iota-1}}}}$ (resp. to the bounds
(\ref{exp_flat_difference_yk_plus_1_minus_yk_Sdp}) in a row with (\ref{difference_y_HJn_Sd0}) and
(\ref{difference_y_HJn_Sdiota}) dealing with the functions $y_{\mathcal{E}_{S_{d_p}}}$, $0 \leq p \leq \iota-2$ and
$y_{\mathcal{E}_{HJ_{n}}^{-n}}$, $y_{\mathcal{E}_{HJ_{n}}^{n}}$, $y_{\mathcal{E}_{S_{d_{\iota-1}}}}$), we observe that the
bounds (\ref{cond_Delta_cocycle_exp_flat}) are fulfilled for the functions
$\breve{\Delta}_{p}(\epsilon) = \breve{G}_{p+1}(\epsilon) - \breve{G}_{p}(\epsilon)$, $0 \leq p \leq \iota-2$ and
$\Delta_{-n,0}(\epsilon) = \breve{G}_{0}(\epsilon) - G_{-n}(\epsilon)$,
$\Delta_{\iota-1,n}(\epsilon) = G_{n}(\epsilon) - \breve{G}_{\iota-1}(\epsilon)$. As a result, Assumption 2) of Proposition 23 holds.
At last, keeping in mind the estimates (\ref{log_flat_difference_uk_plus_1_minus_uk_HJn}) for the maps
$u_{\mathcal{E}_{HJ_n}^{k}}$, $k \in \llbracket -n,n \rrbracket$, $k \neq n$ (resp. the estimates
(\ref{log_flat_difference_yk_plus_1_minus_yk_HJn}) for the maps $y_{\mathcal{E}_{HJ_n}^{k}}$, $k \in \llbracket -n,n \rrbracket$, $k \neq n$),
we conclude that the upper bounds (\ref{cond_Delta_cocycle_log_exp_flat}) are justified for the functions
$\Delta_{k}(\epsilon) = G_{k}(\epsilon) - G_{k+1}(\epsilon)$, $-n \leq k \leq n-1$. Hence, Assumption 3) of Proposition 23 holds true.
Accordingly, Proposition 23 yields the existence of\\
- A convergent series $(t,z) \mapsto a(t,z,\epsilon) := a(\epsilon)$ (resp. $(t,z) \mapsto b(t,z,\epsilon) := a(\epsilon)$) belonging
to $\mathbb{F}\{ \epsilon \}$,\\
- Two formal series $(t,z) \mapsto \hat{u}^{j}(t,z,\epsilon) := \hat{G}^{j}(\epsilon)$
(resp. $(t,z) \mapsto \hat{y}^{j}(t,z,\epsilon) := \hat{G}^{j}(\epsilon)$) in $\mathbb{F}[[\epsilon]]$, $j=1,2$,\\
- $\mathbb{F}-$valued holomorphic functions $(t,z) \mapsto u_{\mathcal{E}_{HJ_n}^{k}}^{j}(t,z,\epsilon) := G_{k}^{j}(\epsilon)$
(resp. $(t,z) \mapsto y_{\mathcal{E}_{HJ_n}^{k}}^{j}(t,z,\epsilon) := G_{k}^{j}(\epsilon)$) on $\mathcal{E}_{HJ_n}^{k}$, for all
$k \in \llbracket -n,n \rrbracket$, $j=1,2$,\\
- $\mathbb{F}-$valued holomorphic functions $(t,z) \mapsto u_{\mathcal{E}_{S_{d_p}}}^{j}(t,z,\epsilon) := \breve{G}_{p}^{j}(\epsilon)$
(resp. $(t,z) \mapsto y_{\mathcal{E}_{S_{d_p}}}^{j}(t,z,\epsilon) := \breve{G}_{p}^{j}(\epsilon)$) on $\mathcal{E}_{S_{d_p}}$, for all
$0 \leq p \leq \iota-1$, $j=1,2$,\\
that fulfill the statement of Theorem 3.
\end{proof}
\section{Introduction and Summary}
The relevant degrees of freedom in QCD at large distances are still poorly understood.
A perturbative approach in this domain is rendered impossible by the lack of convergence and by the existence of multiple solutions to the gauge fixing condition (Gribov copies) \cite{Gribov:1977wm}.
Quasi-classical solutions are also poorly defined at strong coupling, and may not dominate the amplitudes in the presence of large quantum corrections.
A possible way of making QCD treatable at large distances is to use an effective theory encoding the known symmetries of the Lagrangian. At very large distances, the effective degrees of freedom are pions and the corresponding effective theory is based on spontaneously broken chiral invariance. Another symmetry present in QCD in the chiral limit
is scale invariance -- indeed, in the limit of massless quarks the Lagrangian does not possess
any dimensionful parameters and is therefore invariant with respect to rescaling of coordinates,
$x_{\mu} \to \lambda x_{\mu}$. In the real world, this invariance is lost -- the hadrons possess finite masses and sizes, so the scale symmetry is broken. It is therefore tempting to formulate an effective theory of broken scale invariance in terms of the corresponding Goldstone boson -- the dilaton. Since the scale transformation does not affect any quantum numbers, the corresponding particle has to be a scalar -- a scalar glueball or perhaps, in the presence of light quarks, a $\sigma$ or $f_0$ meson.
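The classical invariance is immediate at the level of the Yang-Mills action: under $x \to \lambda x$ the gauge field transforms with its canonical dimension one, and all powers of $\lambda$ cancel (a standard check, quoted for orientation):

```latex
A^{a}_{\mu}(x) \to \lambda^{-1} A^{a}_{\mu}(x/\lambda)
\;\Rightarrow\;
F^{a}_{\mu\nu}(x) \to \lambda^{-2} F^{a}_{\mu\nu}(x/\lambda),
\qquad
S = -\frac{1}{4}\int d^{4}x\, F^{a}_{\mu\nu}F^{a\,\mu\nu}
\;\to\;
-\frac{1}{4}\int \lambda^{4}\, d^{4}(x/\lambda)\, \lambda^{-4} F^{a}_{\mu\nu}F^{a\,\mu\nu} = S,
```

where the non-abelian term $g f^{abc}A^{b}_{\mu}A^{c}_{\nu}$ in $F^{a}_{\mu\nu}$ scales correctly precisely because the coupling $g$ is dimensionless.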
There is a well-known problem with this approach however: unlike chiral symmetry, the scale invariance is broken not spontaneously, but explicitly \cite{scale1,scale2}. Indeed the quantum effects and the regularization required to treat them bring in a dimensionful constant -- $\Lambda_{\rm QCD}$ \cite{Gross:1973id}. As a result, there is no limit in which the dilaton can become massless. Nevertheless, the corresponding effective theory of a scalar field
can still be formulated \cite{schechter,Migdal:1982jp}-- it is fixed unambiguously by the Ward identities of broken scale invariance.
However how useful this theory can be depends on how softly the scale symmetry is broken, and how
massive the resulting dilaton is. In gluodynamics, the dilaton is identified with the scalar glueball. The mass of the scalar glueball according to the lattice calculations is quite large, $M_G \simeq 1.5 \div 1.7$ GeV \cite{Chen:2005mg}. While this {\it is} the lightest glueball, its mass is not significantly smaller than the masses of glueballs with other quantum numbers \cite{Chen:2005mg}, so there is no reason to expect that the physics at large distances will be dominated by the dynamics of scalar glueballs. Moreover, the corresponding Compton wavelength $\sim M_G^{-1}$ is much shorter than the confinement radius $\sim \Lambda_{\rm QCD}^{-1}$, so
it is likely that the dynamics described by the effective theory of glueballs is entangled with the gluon dynamics.
This calls for an extension of the effective theory to include both scalar fields and gluons, and such an extension has been carried out in Refs.\cite{klt,Kharzeev:2004ct}.
The resulting theory involves the scalar dilaton field $\chi$ interacting with the gluon field characterized by $F_{\mu\nu}^a$.
\medskip
In this paper we will explore the dynamics of this model further; let us briefly summarize our findings.
In the absence of color fields, the minimum of the effective potential for the scalar field $\chi$ is at $\chi = 0$, and the
Lagrangian describes the theory of self-interacting scalar glueballs. However as the strength
of the color field increases, the minimum at $\chi = 0$ disappears and
a negative expectation value of $(F_{\mu\nu}^a)^2$ develops, corresponding to the formation of a confining chromo-electric flux tube. At the same time the potential for the field $\chi$ vanishes, and the dilaton excitation becomes massless.
While the resulting theory of gauge bosons interacting with a scalar field may look similar to a
Higgs model, there is a big difference -- the presence of dilatons does not break the gauge symmetry. Instead, the propagator of gluons in Coulomb gauge acquires an infrared divergent piece $\sim 1/k^4$
that corresponds to linear confinement in coordinate space. Such a propagator has been shown
to emerge once the multiple solutions of gauge fixing condition (Gribov copies) are eliminated \cite{Gribov:1977wm}; a confining Coulomb propagator was shown to be a necessary condition for confinement \cite{Zwanziger:2002sh} (for a review, see e.g. \cite{Dokshitzer:2004ie}). The presence of confinement in the model of \cite{klt,Kharzeev:2004ct} has been previously investigated by a different method in Ref. \cite{Gaete:2007zn}.
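The link between a $1/k^{4}$ infrared behavior of the Coulomb propagator and a linearly rising potential is the standard three-dimensional Fourier transform (a quick check; the $r$-independent, regularization-dependent constant is dropped):

```latex
\int \frac{d^{3}k}{(2\pi)^{3}}\, \frac{e^{i\vec{k}\cdot\vec{r}}}{k^{4}}
= -\,\frac{r}{8\pi} + \mathrm{const},
\qquad \mbox{since} \quad
-\nabla^{2}\Big(-\frac{r}{8\pi}\Big) = \frac{1}{4\pi r}, \quad
-\nabla^{2}\,\frac{1}{4\pi r} = \delta^{(3)}(\vec{r}),
```

so a $\sigma/k^{4}$ piece in the propagator translates into a potential growing linearly with the separation, $V(r) \propto \sigma r$.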
The effective theory of \cite{klt,Kharzeev:2004ct} allows a dual formulation as a classical Yang--Mills theory on a curved conformal space-time background. Qualitatively, a geometrical interpretation of confinement has been considered in Refs \cite{Kharzeev:2005iz,Castorina:2007eb}.
\medskip
There are many similarities between this and the previously proposed approaches: the MIT bag model \cite{Chodos:1974je},
the non-topological soliton model of Friedberg and Lee \cite{Friedberg:1976eg}, the Kogut--Susskind model \cite{Kogut:1974sn}, the color dielectric model \cite{Pirner:1991im}, the gauge theory with a dilaton formulated in \cite{Dick:1997ju}, and most of all with the ``perturbative confinement'' program of 't Hooft \cite{'t Hooft:2002wz}. Indeed, as we will explain below, the proposed approach corresponds to the {\it infrared} renormalization of QCD. Once this
infrared renormalization is performed, the renormalized gluon propagator possesses the property of confinement. The advantage of the approach presented in this paper is that the structure of the effective theory is completely determined by broken scale invariance of QCD.
In the picture developed in this paper the dominant degree of freedom at intermediate distances
(distances shorter than the radius of confinement but longer than needed for perturbation theory to apply)
is the massless scalar excitation composed from gluon fields. We will call it the ``scalaron" to distinguish it from
the massive glueball, or dilaton state emerging at long distances. The existence of ``scalarons" inside hadrons would have very interesting implications -- for example, since scalarons have zero spin,
the total spin carried by gluons at intermediate distances inside hadrons should be equal to zero.
In the physics of dense QCD matter, we expect that close to the deconfinement transition scalarons should play an important dynamical role, possibly inducing a large bulk viscosity in the system in accord with \cite{Kharzeev:2007wb,Karsch:2007jc,Meyer:2007dy}.
\medskip
The paper is organized as follows. In section \ref{scaleinv} we introduce the effective Lagrangian of broken scale invariance. In section \ref{sec:conf} we discuss its properties and show that it
leads to confinement. The structure of the confining gluon propagator is studied in more detail in
section \ref{sec:el}. In section \ref{sec:thooft} we show that our effective theory can be viewed as a particular realization of 't Hooft's ``perturbative confinement" program. Section \ref{sec:sc}
contains a discussion of strong coupling behavior in the infrared region, and of the energy density stored in the flux tube. Finally, the discussion of the results is given in section \ref{sec:disc}.
\section{Scale invariance of QCD and the effective theory}\label{scaleinv}
The invariance with respect to the scale transformation $x_\mu \to \lambda x_\mu$ is a property of the QCD Lagrangian in the chiral limit. Noether's theorem requires the existence of the corresponding conserved dilatation current $s_{\mu}$: $\partial^{\mu} s_{\mu} = 0$.
Since the divergence of dilatation current in field theory is equal to the trace of the energy-momentum tensor $\partial^{\mu} s_{\mu} = \theta^{\mu}_{\mu}$, in conformally invariant theories $\theta^{\mu}_{\mu} =0$.
However, quantum effects break conformal invariance \cite{scale1,scale2}:
\begin{equation}\label{trace0}
\partial^\mu s_\mu=\theta^\mu_{\mu}=\sum_qm_q\,\bar q\,q+\frac{\beta(g)}{2g}
F^{a\mu\nu}F^a_{\mu\nu}\,,
\end{equation}
where $\beta(g)$ is the QCD $\beta$-function, which governs the behavior of the running coupling
\begin{equation}
\mu \frac{d g(\mu)}{d \mu} = \beta (g)\,. \label{rg}
\end{equation}
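For later estimates it is convenient to recall the standard one-loop expression for the $\beta$-function, since the combination $|\beta(g)|/2g$ appears repeatedly in what follows:

```latex
\beta(g) \,=\, -\,\frac{b\,g^3}{16\pi^2}\,+\,\mathcal{O}(g^5)\,,\qquad
b \,=\, \frac{11}{3}\,N_c\,-\,\frac{2}{3}\,n_f\,,\qquad
\frac{|\beta(g)|}{2g}\,=\,\frac{b\,g^2}{32\pi^2}\,=\,\frac{b\,\alpha_s}{8\pi}\,.
```

For pure $SU(3)$ gluodynamics $b=11$.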
At low energy, an effective Lagrangian can be constructed which accounts for the broken conformal invariance. The simplest possible effective low energy Lagrangian involves a single scalar particle -- the \emph{dilaton} \cite{Migdal:1982jp,schechter}. The form of the Lagrangian is uniquely determined by
the low energy theorems of QCD \cite{Novikov:1981xj}.
An elegant way to derive this effective Lagrangian has been suggested by Migdal and Shifman in \cite{Migdal:1982jp}.
They noted that since gluodynamics is conformally invariant only in four dimensions, the anomalous contribution to the divergence of the dilatation current appears -- in the dimensional regularization scheme -- as a residual term in the 4D limit. As demonstrated in \cite{Migdal:1982jp}, if we formally couple gluodynamics to a conformal background gravity -- described by a single scalar field $h(x)$ -- such a theory is conformally invariant in any number of dimensions D. This trick allows one to account for all symmetries of the low energy theory. A subsequent Legendre transformation to the conjugate field $\chi(x)$, with a potential that has a classical minimum at $\chi=0$, then yields an effective low energy Lagrangian satisfying all necessary constraints.
This derivation has been generalized in \cite{klt,Kharzeev:2004ct} by including the gluon fields to describe the transition region between the short and long distances. The corresponding Einstein-Hilbert action reads
\begin{equation}\label{act}
S\,=\,\int d^4x\,\sqrt{-\mathrm{ g}}\left( \frac{1}{8\,\pi\, G}\, R
\,-\frac{1}{4} \,
\mathrm{g}^{\mu\nu}\, \mathrm{g}^{\lambda\sigma}\, F^a_{\mu\lambda}\,
F^a_{\nu \sigma}\,-\,
e^{2h}\,\theta_\mu^\mu
\right),
\end{equation}
where the background metric is given by $\mathrm{g}_{\mu\nu}(x)\,=\,e^{h(x)}\, \delta_{\mu\nu}$, $R$ is the Ricci scalar and $G$ is a dimensionful constant analogous to Newton's gravitational constant. Upon substituting the one-loop expression for $\theta_\mu^\mu$ in $SU(3)$ from \eq{trace0} and performing the Legendre transformation we derive the following
effective Lagrangian
\begin{equation}\label{LAGR}
\mathcal{L} \,=\, \frac{|\epsilon_\mathrm{v}|}{m^2}\,\frac{1}{2}\,e^{\chi/2}\, (\partial_{\mu}\chi)^2\,
+\,\left(|\epsilon_\mathrm{v}|\,+c\,\frac{1}{4} \,(F^a_{\mu\nu})^2 \right) e^\chi\,(1-\chi)\, -\,\frac{1}{4}\,(F^a_{\mu\nu})^2\,;
\end{equation}
the energy density of the vacuum $|\epsilon_\mathrm{v}|$ and the mass of the dilaton $m$ are the parameters of the theory. It is constructed in such a way that at $\chi=1$ (corresponding to some semi-hard momentum scale $M_0$) the terms containing the effective field $\chi$ cancel, implying that the dynamics of the color fields is perturbative. This expresses the fact that $M_0$ is the scale at which the effective theory defined by \eq{LAGR} has to be matched onto pQCD. The effective theory \eq{LAGR} is non-renormalizable and requires the introduction of a cutoff $M_0(m,|\epsilon_\mathrm{v}|)$ at some short distance.
In \eq{LAGR} we used the notation $c=|\beta(g_0)|/2g_0$ with $g_0\equiv g(M_0)$. Note that $c\ll 1$ for any phenomenologically reasonable $g_0$.
In the absence of color fields, the dilaton potential has a minimum at $\chi =0$ corresponding to
the physical vacuum with the energy density $V(\chi =0) = - |\epsilon_\mathrm{v}|$. The position of the minimum ($\chi =0$) does not change
when the color fields are either predominantly magnetic (corresponding to $(F^a_{\mu\nu})^2 = 2\ (B^{a 2} - E^{a 2}) > 0$) or electric but sufficiently weak so that
\begin{equation}
|\epsilon_\mathrm{v}|\,+c\,\frac{1}{4} \,(F^a_{\mu\nu})^2 > 0.
\end{equation}
However, the presence of a sufficiently strong color electric field with $(F^a_{\mu\nu})^2 = 2\ (B^{a 2} - E^{a 2}) < 0$ such that $|(F^a_{\mu\nu})^2| > 4 |\epsilon_\mathrm{v}| / c\ $ flips the sign of the dilaton potential and the extremum at $\chi=0$ becomes a maximum rather than a minimum. As we will discuss below, this transition corresponds to the formation of color electric flux tube. Since at this point the dilaton potential in \eq{LAGR} vanishes, the formation of the flux tube is accompanied by the emergence of a massless dilaton excitation -- the ``scalaron".
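This sign flip can be verified in one line. Writing the potential part of \eq{LAGR} as $V(\chi)=-K\,e^\chi(1-\chi)$ with $K\equiv|\epsilon_\mathrm{v}|+c\,(F^a_{\mu\nu})^2/4$, one finds

```latex
V'(\chi)\,=\,K\,\chi\,e^\chi\,,\qquad V''(0)\,=\,K\,,
```

so the extremum at $\chi=0$ is a minimum for $K>0$ and becomes a maximum once the chromo-electric field makes $K<0$.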
\section{Confinement by chromo-electric flux tubes}\label{sec:conf}
Consider the Lagrangian \eq{LAGR} which defines our effective low energy
theory \cite{klt,Kharzeev:2004ct}. It is valid at long distances $r\ge 1/M_0\equiv r_0$, where $M_0$ is a scale corresponding to
$\chi=1$. At this scale our effective theory is to be matched onto the pQCD; indeed at $\chi = 1$ our Lagrangian \eq{LAGR} becomes simply the Lagrangian of gluodynamics. The equation of motion of the dilaton field is
\begin{equation}\label{eqmot}
\frac{|\epsilon_\mathrm{v}|}{m^2}\,\partial_\mu\left(
e^{\chi/2}\,\partial_\mu\chi\right)
\,-\,
\frac{|\epsilon_\mathrm{v}|}{4m^2}\,e^{\chi/2}\left(\partial_\mu\chi\right)^2\, +\,
\chi\, e^\chi\,|\epsilon_\mathrm{v}|\,
+\,
\chi\, e^\chi\,c\,\frac{1}{4}\,(F_{\mu\nu}^a)^2\,
=\,0.
\end{equation}
The trace of energy-momentum tensor can be calculated directly from \eq{LAGR} using
\begin{equation}\label{trace}
\theta_\mu^\mu\,=\,g^{\mu\nu}\,\left(
2\,\frac{\partial\mathcal{L}}{\partial g^{\mu\nu}}\,-\,
g^{\mu\nu}\,\mathcal{L}\right)
\,+\,
\frac{8|\epsilon_\mathrm{v}|}{m^2}\,\partial_\mu^2\, e^{\chi/2},
\end{equation}
where the last term on the right hand side is the total
derivative. Using the equation of motion of the dilaton field \eq{eqmot}
one arrives at
\begin{equation}\label{sp}
\theta_\mu^\mu\,=\,-\,4\,|\epsilon_\mathrm{v}|\, e^\chi\,-\,\chi\, e^\chi\,c\,(F_{\mu\nu}^a)^2\,.
\end{equation}
Since the vacuum at large distances corresponds to $\chi=0$, the color fluctuations represented by the second term in the r.h.s.\ of \eq{sp} decouple -- the properties of the physical vacuum are determined by the fluctuations of the dilaton field.
At small dilaton momenta its kinetic term is much smaller than the rest of the terms in \eq{LAGR}. It catches up only at distances $r\sim 1/m$. Therefore, if $r_0\gg 1/m$ the kinetic term remains small in the entire region of validity of the effective theory $r\ge r_0$. We will verify later that this is indeed a valid assumption. Thus, we drop the kinetic term in \eq{LAGR} and get
\begin{subequations}
\begin{eqnarray}
\mathcal{L} &\approx& \left(|\epsilon_\mathrm{v}|\,+\,c\,\frac{1}{4} (F_{\mu\nu}^a)^2\right) e^\chi\,(1-\chi)\, -\,\frac{1}{4}\,(F^a_{\mu\nu})^2\,\\
&=& |\epsilon_\mathrm{v}| \,e^\chi\,(1-\chi)- \frac{1}{4}\left[-e^\chi\,(1-\chi)\,c\, +1\right]\,(F^a_{\mu\nu})^2\label{01};
\end{eqnarray}
\end{subequations}
this Lagrangian implies that the equation of motion for the dilaton is a constraint on the gluon field.
Extremizing the Lagrangian \eq{01} with respect to $\chi$ yields
\begin{equation}\label{min.cond}
\chi\, e^\chi\,\left(|\epsilon_\mathrm{v}| +c\,\frac{1}{4}\,(F^a_{\mu\nu})^2\right)|_\mathrm{min}\,=\,0\,.
\end{equation}
One solution to \eq{min.cond} is the physical vacuum at $\chi = 0$. However, once the chromo-electric gluon field $ (F^a_{\mu\nu})^2 < 0$ becomes sufficiently strong, \eq{min.cond} possesses a solution for any $\chi\neq 0$ if
\begin{equation}\label{vac.min}
(F^a_{\mu\nu})^2|_\mathrm{min}=2(\vec B^{a 2}-\vec E^{a 2})|_\mathrm{min}= -4|\epsilon_\mathrm{v}|\,c^{-1}\,.
\end{equation}
Assuming that the color field is created by static sources and neglecting the chromo-magnetic component of the field, the magnitude of the chromo-electric field from \eq{vac.min} is given by
\begin{equation}\label{chrom.el}
(\vec E^{a}_\mathrm{vac})^2 = \frac{2|\epsilon_\mathrm{v}|}{c}.
\end{equation}
Unlike in pQCD, where $(F^a_{\mu\nu})^2|_\mathrm{min}=0$, the minimum of the effective theory corresponds to a finite chromo-electric field $E^a$.
If the color field is weaker than \eq{chrom.el}, it is expelled from the vacuum, the ground state is at $\chi=0$, and the dynamics
is described by interacting excitations above this vacuum -- the scalar fields $\chi$.
\medskip
Since the scalar field $\chi$ coupled to the chromo-electric field in the effective action \eq{LAGR} is color-singlet, the color dynamics at large distances becomes frozen. We thus will employ
the quasi-Abelian gauge \cite{'tHooft:1981ht} $A_\mu^a=\phi(r) \,\delta^{a1}\,\delta^{\mu0}$, with $r^2=x_ix^i$, $i=1,2,3$. In this gauge \eq{vac.min} reads
\begin{equation}\label{eto.vac}
(\vec\nabla \phi_c(r))^2\,=\,2|\epsilon_\mathrm{v}| \,c^{-1}\,\equiv\, \vec E_\mathrm{vac}^2
\end{equation}
with the solution $\phi_c(r)= |\vec E_\mathrm{vac}|\,r$ describing the classical background color field at large distances.
Consider now the Coulomb potential induced in the presence of this background. We can define a potential renormalized at large distances by subtracting the background potential $\phi_c(r)$.
This procedure corresponds to the \emph{infrared} renormalization.
The Laplace equation for the renormalized Coulomb potential reads
\begin{equation}\label{laplace}
\vec\nabla ^2[\phi^\mathrm{ren}(r) - \phi_c(r)]\,=\, g\,\delta(\vec r)\,.
\end{equation}
It is solved by the Cornell potential
\begin{equation}\label{renorm.pot}
\phi^\mathrm{ren}(r)\,=\, -\frac{g}{4\pi r}\,+\, |\vec E_\mathrm{vac}|\,r\,,
\end{equation}
where we used the identity $ \vec\nabla^2 r^{-1}=-4\pi \delta(\vec r) $.
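As a check, using $\vec\nabla^2 r=2/r$ one finds

```latex
\vec\nabla^2\phi^\mathrm{ren}(r)\,=\,g\,\delta(\vec r)\,+\,\frac{2\,|\vec E_\mathrm{vac}|}{r}\,,
\qquad
\vec\nabla^2\phi_c(r)\,=\,\frac{2\,|\vec E_\mathrm{vac}|}{r}\,,
```

so the difference indeed satisfies \eq{laplace}.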
We have found that the effective theory \eq{LAGR} is confining.
\medskip
Let us emphasize again that Eq.~\eq{vac.min} requires the dilaton mass to vanish when the color flux tube is formed. At large distances $r\gg r_0$ the quantum corrections build up to produce a significant dilaton mass which corresponds to the scalar glueball mass. The appearance of the massless scalar particle -- the ``scalaron" -- is directly related to the condition \eq{vac.min} necessary for the confinement to occur. Therefore, the dilaton becoming a true massless Goldstone boson of broken scale symmetry signals the onset of confinement.
\section{Matching to Perturbation Theory: The Effective Lagrangian at $\chi \simeq 1$.}\label{sec:el}
As we already discussed above, in the presence of strong chromo-electric fields exceeding \eq{chrom.el} the dilaton potential in \eq{LAGR} changes sign, and the extremum at $\chi=0$ turns from a minimum into a maximum. The vacuum then shifts to the maximal value allowed by
our effective theory, $\chi = 1$. At $\chi = 1$ the effective Lagrangian \eq{LAGR} becomes the Lagrangian of gluodynamics. Since the effective dilaton potential $V(\chi)$ vanishes at $\chi = 1$, the energy density in this case is determined solely by the energy density of the gluon field, and has no contribution from the dilaton condensate. Likewise, in the expectation value of the trace of the energy-momentum tensor \eq{sp} at $\chi = 1$ the dilaton contribution and the contribution of the background flux field cancel each other. Therefore, to discuss the quantum dynamics at short distances corresponding to the vicinity of $\chi =1$ we have to expand the dilaton field around this value, and at the same time to expand the gluon field around the classical flux tube background fixed by \eq{chrom.el}.
To accomplish this, we introduce a new field $\phi(x)$ through
\begin{equation}
\chi (x)= 1-v\,\phi(x)\,,\quad v\equiv\sqrt{\frac{m^2}{|\epsilon_\mathrm{v}|\, e^{1/2}}}\,,
\end{equation}
and decompose the gluon field as $A_\mu^a\to \mathcal{A}^a_\mu+A_\mu^a$, where $A^a_\mu$ is a classical field satisfying \eq{eto.vac} and $\mathcal{A}^a_\mu$ is a quantum fluctuation. The field strength decomposes as follows:
\begin{equation}\label{LCHI1}
F_{\mu\nu}^a\to F_{\mu\nu}^a + D_\mu \mathcal{A}_\nu^a-D_\nu \mathcal{A}_\mu^a+g^2f^{abc}\mathcal{A}_\mu^b\mathcal{A}_\nu^c\,,
\end{equation}
where now $F_{\mu\nu}$ is the field strength of the classical field and $D_\mu=\partial_\mu-igA_\mu^at^a$ is the covariant derivative. Expanding \eq{LAGR} in powers of $v\,\phi(x)$ to the second order we derive
\begin{eqnarray}\label{LCHI11}
\mathcal{L} &=& \frac{1}{2}(\partial_\mu\phi)^2-\frac{1}{4}(F_{\mu\nu}^a + D_\mu \mathcal{A}_\nu^a-D_\nu \mathcal{A}_\mu^a+g^2f^{abc}\mathcal{A}_\mu^b\mathcal{A}_\nu^c)^2\nonumber\\
&&+e(v\,\phi-v^2\phi^2)\frac{1}{4}\bigg[2F_{\mu\nu}^a(D_\mu \mathcal{A}_\nu^a-D_\nu \mathcal{A}_\mu^a+g^2f^{abc}\mathcal{A}_\mu^b\mathcal{A}_\nu^c)\nonumber\\
&&
+(D_\mu \mathcal{A}_\nu^a-D_\nu \mathcal{A}_\mu^a+g^2f^{abc}\mathcal{A}_\mu^b\mathcal{A}_\nu^c)^2 \bigg]
\end{eqnarray}
\begin{figure}[ht]
\includegraphics[width=14cm]{feyn_diag.eps}
\caption{Interactions in \eq{LCHI11} to the order $g^0$. The helix line represents a gluon, the dashed line the classical color field, and the solid line the dilaton.}
\label{fig:frules}
\end{figure}
Feynman diagrams to the order $g^0$ are displayed in \fig{fig:frules}. The corresponding rules read:
\begin{subequations}
\begin{eqnarray}
V_a&=& 2\,c\,e\,v^2\,[(p_1\cdot p_2)g_{\mu\nu}-p_{1\mu}p_{2\nu}]\,\delta^{ab}\,,\\
V_b&=& -\,c\,e\,v\,[(p_1\cdot p_2)g_{\mu\nu}-p_{1\mu}p_{2\nu}]\,\delta^{ab}\,,\\
V_c &=& 2\,c\,e\,v^2\,[A^a_\mu(p_2)(p_1\cdot p_2)-p_{2\mu}(p_1\cdot A^a(p_2))]\,,\\
V_d &=& -c\,e\,v\,[A^a_\mu(p_2)(p_1\cdot p_2)-p_{2\mu}(p_1\cdot A^a(p_2))]\,\,,
\end{eqnarray}
\end{subequations}
where $p$'s are gluon momenta as shown in \fig{fig:frules}.
The effective Lagrangian \eq{LCHI11} can be used to compute non-perturbative corrections
to QCD amplitudes.
\section{The gluon propagator}
For phenomenological applications it is convenient to determine the leading non-perturbative correction to the gluon propagator.
Note that the renormalized potential \eq{renorm.pot} satisfies the following equation
\begin{equation}
\vec\nabla^4\phi^\mathrm{ren}(\vec r)\,+\, 8\pi\,|\vec E_\mathrm{vac}| \,\delta(\vec r)\,=\, g\,\vec\nabla^2\,\delta(\vec r) ,
\end{equation}
which we Fourier transform into
\begin{equation}
\phi^\mathrm{ren}(\vec k)\,=\, -\frac{g}{\vec k^2}\,-\,\frac{8\pi\,|\vec E_\mathrm{vac}| }{\vec k^4}\,.
\end{equation}
Therefore, the required expression for the renormalized gluon propagator in the Coulomb gauge is
\begin{equation}\label{prop}
D(\vec k^2) =\frac{1}{\vec k^4}\left(-\vec k^2-\mu^2\right)
\end{equation}
where we denoted
\begin{equation}\label{mass}
\mu^2=\frac{8\pi\,|\vec E_\mathrm{vac}|}{g}\,.
\end{equation}
The gluon propagator with a $\sim 1/\vec k^4$ behavior in the infrared has been advocated before \cite{Gribov:1977wm,Zwanziger:2002sh}. In particular, it has been argued \cite{Shirkov:1997wi,Chetyrkin:1998yr} that such a behavior provides a way of extending the reach of perturbative QCD into the semi-hard region.
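The coordinate-space content of the $1/\vec k^4$ term can be made explicit through the regularized Fourier transform (the $r$-independent divergent constant drops out of potential differences)

```latex
\int\frac{d^3k}{(2\pi)^3}\;e^{i\vec k\cdot\vec r}\,\frac{1}{\vec k^4}
\,=\,-\,\frac{r}{8\pi}\,+\,\mathrm{const}\,,
```

so the $-8\pi|\vec E_\mathrm{vac}|/\vec k^4$ piece of $\phi^\mathrm{ren}(\vec k)$ transforms back into the linear term $|\vec E_\mathrm{vac}|\,r$ of \eq{renorm.pot}.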
\section{Strong coupling in the infrared}\label{sec:sc}
We wish now to examine the behavior of the strong coupling at long distances. Let us concentrate on the color sector of the effective Lagrangian \eq{LAGR} and introduce the electric susceptibility of QCD vacuum
\begin{equation}\label{eps}
\epsilon(\chi)\equiv Z(\chi)= 1-c\,e^\chi\, (1-\chi)\,.
\end{equation}
If we now rescale the gluon field potentials as $\bar A^a_\mu = gA^a_\mu$ and $\bar F^2 = g^2F^2$, the Lagrangian \eq{LAGR} can be written as
\begin{equation}\label{resc}
\mathcal{L} \,=\, \frac{|\epsilon_\mathrm{v}|}{m^2}\,\frac{1}{2}\,e^{\chi/2}\, (\partial_{\mu}\chi)^2\,
+\,|\epsilon_\mathrm{v}|\,e^\chi\,(1-\chi)\,-\frac{1}{4}\,\frac{\epsilon(\chi)}{g^2}\,(\bar F^a_{\mu\nu})^2\,.
\end{equation}
The form of \eq{resc} implies the behavior of the renormalized strong coupling in the presence of dilaton background:
\begin{equation}\label{run}
\alpha_s(\chi(r))=\frac{\alpha_s(M_0)}{\epsilon(\chi)}=\frac{\alpha_s(M_0)}{1-\, c\,e^\chi\, (1-\chi)}\,.
\end{equation}
At $\chi =1$ the coupling is $\alpha_s(M_0)$, which is consistent with our procedure of matching onto perturbation theory at scale $M_0$ (and the corresponding distance $r_0 = M_0^{-1}$).
Note that in the physical vacuum $\chi=0$ the coupling constant becomes
\begin{equation}\label{run.infty}
\alpha_s(\chi\to 0)=\frac{\alpha_s(M_0)}{1-c}\ ;
\end{equation}
since $c \ll 1$, the effective coupling in our effective theory remains quite small at large distances, and the leading quantum corrections arise from the interactions with the dilaton fields.
\medskip
To determine the distance $r_0$ at which matching to the pQCD takes place, we write down the Yang-Mills equations in the presence of a point-like source \cite{Pagels:1978dd}:
\begin{equation}\label{YM}
D_\nu^{ab}\left(\epsilon(\chi)\, \bar F_{\nu\mu}^b\right)\,=\, j_\mu^a\,,\quad
D_\mu^{ab}=\delta^{ab}\partial_\mu+f^{bac}\bar A_\mu^c\,,
\end{equation}
where $\bar A^a_\mu = gA^a_\mu$ and $\bar F^2 = g^2F^2$. In the quasi-Abelian gauge we get for the radial component of the electric field of a point source
\begin{equation}\label{mod.coul}
\bar E=\frac{1}{4\pi r^2\, \epsilon(\chi)}
\end{equation}
To find $r_0$ where our effective theory matches onto the pQCD we note that the classical minimum corresponds to $E=E_\mathrm{vac}$ (see \eq{eto.vac})
and set $\chi=1$, that is $\epsilon(\chi)=1$. We obtain
\begin{equation}\label{r0det}
r_0=\sqrt{\frac{g}{4\pi E_\mathrm{vac}}}\,.
\end{equation}
\section{The structure of the flux tube}
We can now determine the profile of the
energy density stored in the gluon--dilaton configuration as a function of coordinate $r$. For the energy density we obtain from the Lagrangian \eq{LAGR}:
\begin{eqnarray}
\theta^{00}(x)&=&
\frac{|\epsilon_\mathrm{v}|}{2m^2}\,[(\partial_0\chi)^2+(\partial_i\chi)^2]\,e^{\chi/2}
-\,|\epsilon_\mathrm{v}|\,e^\chi\,(1\,-\,\chi)\,
\nonumber\\
&&
+\,\left(-F^{a0\lambda}F^{a0}_{\quad\lambda}\,+
\,\frac{1}{4}\,\,
(F_{\lambda\sigma}^a)^2\right)\,\left(
1\,-\,c\, e^\chi\,(1\,-\,\chi)\right),
\end{eqnarray}
where $i=1,2,3$. For a constant dilaton field, and in the Coulomb gauge we work in, we have
\begin{equation}\label{w1}
\theta^{00}=-|\epsilon_\mathrm{v}|\,e^\chi\,(1\,-\,\chi)+\frac{1}{2}\bar E^2\,\frac{1}{g^2}\,\left( 1-c\, e^\chi\,(1-\chi)\right)\,.
\end{equation}
Setting $E=E_\mathrm{vac}$ in \eq{mod.coul} we derive near $r=r_0$ (where the Coulomb law \eq{mod.coul}, rather than the string potential, holds)
\begin{equation}\label{w2}
\theta^{00}(r)=-|\epsilon_\mathrm{v}|\, c^{-1}+ 2\,|\epsilon_\mathrm{v}| \,\frac{1}{4\pi r^2 \bar E_0^2}\,c^{-1}=
-|\epsilon_\mathrm{v}|\left(1 - \frac{3r_0^2}{ r^2 }\right)\,, \quad r\approx r_0\,.
\end{equation}
After the subtraction of the vacuum term in \eq{w2} the energy density decreases as $\sim 1/r^2$, implying that the total energy stored in the infrared gluon configuration grows linearly with distance. This is yet another way to see the formation of the flux tube at (rather short) distances $r\approx r_0$. We see also that the energy density changes sign at $r=\sqrt{3}r_0$, which can be identified with the radius of the flux tube, while $|\epsilon_\mathrm{v}|$ is the difference in the energy density inside and outside of the tube, analogous to the bag constant.
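Explicitly, integrating the subtracted energy density of \eq{w2} over a sphere gives

```latex
\int_{r_0}^{r}\frac{3\,|\epsilon_\mathrm{v}|\,r_0^2}{r'^2}\;4\pi r'^2\,dr'
\,=\,12\pi\,|\epsilon_\mathrm{v}|\,r_0^2\,\left(r-r_0\right)\,,
```

which indeed grows linearly with $r$, as expected for a flux tube configuration.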
\section{Relation to 't~Hooft's ``perturbative confinement"} \label{sec:thooft}
The Lagrangian \eq{01} is a particular realization of 't~Hooft's ``perturbative confinement" \cite{'t Hooft:2002wz} approach. It is instructive to review the arguments given in \cite{'t Hooft:2002wz}. The most general renormalized Lagrangian $\mathcal{L}=f(F_{\mu\nu}^2)$ can be written in the form
\begin{equation}\label{h.lagr}
\mathcal{L}(A,\phi)=-\frac{1}{4}\,Z(\phi)\, F_{\mu\nu}^2\, -\,V(\phi)+J_\mu A_\mu\,,
\end{equation}
where $\phi$ is a scalar field with self-interaction $V(\phi)$ and $J_\mu$ is an external source. We assume the quasi-Abelian gauge, thus dropping the color indices. The requirement of confinement imposes restrictions on the functions $V(\phi)$ and $Z(\phi)$. To derive these conditions we introduce the electric displacement field $\vec D$
\begin{equation}
\vec D=Z(\phi)\vec E\,.
\end{equation}
Then extremizing \eq{h.lagr} with respect to $\phi$ yields
\begin{equation}\label{h.extr}
\frac{1}{2}D^2=-\frac{\partial V}{\partial (1/Z)}
\end{equation}
Consider now the energy density $U(D,\phi)$ stored in a particular configuration of $D$ and $\phi$ fields
\begin{equation}\label{h.energy}
U=\vec D\cdot \partial_0 \vec A-\mathcal{L}=\frac{1}{2}\frac{D^2}{Z(\phi)}+V(\phi)+J_0A_0\,.
\end{equation}
A variation of the configuration results in the change
\begin{equation}\label{h.dU}
dU=\frac{1}{2}D^2\,d\frac{1}{Z}\,+\, \frac{1}{Z}\,d\left(\frac{1}{2}D^2\right)\,+
\, dV =\frac{1}{Z}\,d\left(\frac{1}{2}D^2\right)\,.
\end{equation}
Now, confinement occurs when the most energetically favorable configuration corresponds to the linearly rising potential
\begin{equation}\label{h.conf}
U(\vec D)=\rho\, |\vec D|
\end{equation}
at least at small values of $|\vec D|$, i.e. in the transition region between perturbation theory and confinement. In \eq{h.conf}, $\rho$ can be readily identified as the string tension. In the region where \eq{h.conf} holds we have, using \eq{h.dU},
\begin{equation}\label{h.z}
\frac{1}{Z}=\frac{dU}{d(D^2/2)}\approx \frac{d\rho}{d(D^2/2)}=\frac{\rho}{D}\,.
\end{equation}
With the help of \eq{h.extr} we derive
\begin{equation}
V=-\int \,\frac{1}{2}\, D^2\, d(1/Z)\,\approx\, \frac{1}{2}\rho\, D=\frac{1}{2}\rho^2 Z\,.
\end{equation}
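The intermediate step in this integral uses \eq{h.z}: with $1/Z=\rho/D$ one has $d(1/Z)=-(\rho/D^2)\,dD$, so that

```latex
-\int\frac{1}{2}\,D^2\,d\!\left(\frac{1}{Z}\right)
\,=\,\int\frac{1}{2}\,D^2\,\frac{\rho}{D^2}\,dD
\,=\,\frac{1}{2}\,\rho\,D\,.
```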
Let us now rewrite \eq{01} in a suggestive form
\begin{equation}
\mathcal{L}
= -V(\chi) - Z(\chi) \frac{1}{4}\,(F^a_{\mu\nu})^2\,.
\end{equation}
where the following notations were introduced
\begin{equation}
Z(\chi)=-e^\chi\,(1-\chi) \,c\,+1\,,\quad V(\chi)=-|\epsilon_\mathrm{v}| \,e^\chi\,(1-\chi)\,.
\end{equation}
\begin{figure}[ht]
\includegraphics[width=10cm]{vz1.eps}
\caption{A particular realization of the 't Hooft's perturbative confinement \cite{'t Hooft:2002wz} as explained in the text.}
\label{fig:vz}
\end{figure}
The resulting values of $V$ and $\partial{V}/{\partial Z}$ are displayed in \fig{fig:vz} for $\frac{\beta(g)}{2g}=1$.
At large values of $D$ the energy density must take the perturbative form $U=D^2/2$. Therefore, \eq{h.z} implies that $Z\to 1$, which in turn corresponds to $\chi=1$.
\section{Numerical estimates}
Let us check whether the relations following from our effective theory make sense phenomenologically. To do this, we have to choose numerical values for the parameters entering the Lagrangian \eq{LAGR}: the dilaton mass $m$ and the vacuum energy density $|\epsilon_\mathrm{v}|$. In addition, since the effective theory \eq{LAGR} is non-renormalizable, we also have to specify the value of the matching scale $M_0$. This value will also determine the value of the strong coupling $\alpha_s(M_0)$, thus defining the constant $c$. Since we have not introduced light quarks so far, our estimates will be applicable to pure gluodynamics.
In gluodynamics, the dilaton has to be identified with the scalar glueball; lattice QCD gives a mass of $m = 1.5 \div 1.7$ GeV, and we pick the value $m = 1.6$ GeV.
In accord with our previous work we choose $M_0 = 2 \ {\rm GeV}$ for the scale $M_0$ at which the perturbation theory starts to apply. The corresponding value of the QCD coupling is $\alpha_s(M_0) \simeq 0.35$, and $c=|\beta(g_0)|/2g_0 \simeq b\ \alpha_s(M_0) / ( 8 \pi) \simeq 0.15$; we used $b=11$ as appropriate for gluodynamics with $N_c = 3$.
The value of the vacuum energy density is somewhat uncertain; in gluodynamics it is related to the gluon condensate by the relation $|\epsilon_\mathrm{v}| = 11/32 \langle (\alpha_s/\pi) F^2 \rangle$. The original value of the gluon condensate is $\langle (\alpha_s/\pi) F^2 \rangle = 0.012\ {\rm GeV}^4$ \cite{Shifman:1978by}. The latest analysis \cite{Ioffe:2005ym} (including in particular an updated knowledge of $\alpha_s$) yields a significantly smaller value
$\langle (\alpha_s/\pi) F^2 \rangle = 0.005\pm0.004\ {\rm GeV}^4$ leading to $|\epsilon_\mathrm{v}| = 0.0017\pm0.0014\ {\rm GeV}^4$. In this situation we will work backwards and pick the
value of $|\epsilon_\mathrm{v}|$ using the relation \eq{chrom.el}. Eq. \eq{chrom.el} relates the value of the vacuum chromo-electric field $E_\mathrm{vac}$ (which according to \eq{renorm.pot} plays the role of string tension in our approach) to the values of $|\epsilon_\mathrm{v}|$ and $c$. Choosing $E_\mathrm{vac} \simeq 800 \ {\rm MeV/fm}$ for the string tension and the value $c = 0.15$ inferred above we get for the vacuum energy density from \eq{chrom.el}
\begin{equation}
|\epsilon_\mathrm{v}| \simeq 0.0019 \ {\rm GeV}^4,
\end{equation}
a value consistent with the analysis \cite{Ioffe:2005ym}. The radius of the flux tube can now be found from
\eq{w2} and \eq{r0det}; numerically, we find $r_\mathrm{tube} \simeq 0.2$ fm, a reasonable value.
Admittedly, there is a significant uncertainty in the values of the parameters $|\epsilon_\mathrm{v}|$ and $M_0$ of the effective Lagrangian \eq{LAGR}. However, reasonable choices of these parameters yield phenomenologically sound values of the string tension, the radius of the flux tube and the value of the effective coupling in the infrared.
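As a cross-check of the arithmetic in this section, the chain of estimates can be reproduced numerically (a simple sketch; the inputs $\alpha_s(M_0)=0.35$, $b=11$ and $E_\mathrm{vac}=800\,$MeV/fm are the values quoted above, and $\hbar c \simeq 0.1973\,$GeV$\,$fm is used for unit conversion):

```python
import math

# Inputs quoted in the text
hbarc   = 0.1973                    # GeV*fm, for unit conversion
alpha_s = 0.35                      # alpha_s(M_0) at M_0 = 2 GeV
b       = 11.0                      # one-loop coefficient, pure SU(3)

c = b * alpha_s / (8 * math.pi)     # c = |beta(g_0)|/2g_0, approximately 0.15

E_vac = 0.800 * hbarc               # 800 MeV/fm expressed in GeV^2
eps_v = 0.5 * c * E_vac**2          # vacuum energy density from Eq. (chrom.el)

g  = math.sqrt(4 * math.pi * alpha_s)
r0 = math.sqrt(g / (4 * math.pi * E_vac))   # matching distance, Eq. (r0det), in GeV^-1
r0_fm = r0 * hbarc                  # ... converted to fm

alpha_ir = alpha_s / (1 - c)        # infrared coupling, Eq. (run.infty)

print(c, eps_v, r0_fm, alpha_ir)
```

The output reproduces $c\simeq0.15$, $|\epsilon_\mathrm{v}|\simeq0.0019\,$GeV$^4$, $r_0\simeq0.2\,$fm and an infrared coupling $\alpha_s\simeq0.41$, consistent with the numbers quoted in the text.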
\section{Discussion}\label{sec:disc}
An effective theory of the broken scale symmetry of QCD given by the Lagrangian \eq{LAGR}, which we have advocated here and elsewhere \cite{klt,Kharzeev:2004ct}, possesses a number of remarkable properties: it yields confinement, and links the formation of the color flux tube to the emergence of a massless scalar excitation -- the ``scalaron".
Let us describe the relevant degrees of freedom in this theory at different distances in simple physical terms.
At large distances, the gluons are bound into massive scalar glueballs. At shorter distances
(inside the hadron) the quasi-Abelian color flux tube is found, and the dominant degrees of freedom are
the massless dilaton excitations, the scalarons. At still shorter distances we encounter gluons with non-perturbative interactions induced by the coupling to scalarons. Finally, at very short distances we match onto the usual QCD perturbation theory.
The basic idea of modifying the dynamics of gluon fields at large distances is of course not new, and several approaches of that kind were mentioned in the Introduction. The distinctive feature of the approach advocated in this paper is that our effective Lagrangian is fixed entirely by scale anomaly. Knowing the structure of the effective Lagrangian allowed us to establish that it possesses the property of confinement, and to link the formation of color confining flux tube to the emergence of massless scalar excitation.
In our opinion the effective theory \eq{LAGR} provides a way for systematically computing non-perturbative corrections to various amplitudes; we think it is worthwhile to pursue the phenomenological applications both at zero and finite temperature. We have already discussed the IR behavior of the strong coupling in this framework \cite{Kharzeev:2004ct} and the effect of soft gluon emission on the structure of the leading Regge singularity at high energies, i.e. the Pomeron \cite{klt}. This approach has an interesting implication also for the spin structure of the nucleon -- since the gluons at semi-hard scales are bound in this picture into scalar (spin-singlet) scalaron states, there should be no contribution from gluons to the spin of the hadron at semi-hard scales
where the perturbative evolution is initiated. This is perhaps consistent with the preliminary results
on the fraction of the proton's spin carried by gluons from RHIC and elsewhere.
It has already been demonstrated that the dilaton excitations near the critical temperature are responsible for the anomalous bulk viscosity \cite{Kharzeev:2007wb}. Likewise, they determine the behavior of the trace of the energy-momentum tensor at temperatures above $T_c$ and may give an important contribution to the parton energy loss. In the deconfined phase color charges are screened at distances of the order of the inverse Debye mass, while the massless dilatons mediate the long-range strong force, with a possible important influence on the global dynamics of the quark-gluon plasma.
We leave the development of these applications for the future.
\acknowledgments
The work of D.K. was supported by the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
K.T. is supported in part by the U.S. Department of Energy Grant No. DE-FG02-87ER40371; he would also like to thank
RIKEN, BNL and the U.S. Department of Energy (Contract
No. DE-AC02-98CH10886) for providing the facilities essential for the
completion of this work. This research of E.L. was supported
in part by the Israel Science Foundation, founded by the Israeli Academy of Science
and Humanities and by BSF grant $\#$ 20004019.
\section*{S1. Landauer+DFT calculations}
We model the QPC as a set of three electrostatically coupled strictly two-dimensional systems: the electrodes, the donor layer and the 2DEG. The structure of the simulation closely models the one used in the experiment, with small modifications in simulated charge density to more closely replicate actual gate voltages used in the experiment.
There are two sets of surface electrodes. The dark yellow ones in the inset to Fig.~2b of the main article, separated by $300\,$nm and biased to $-285\,$mV, define a wide quantum wire in the 2DEG. The light yellow ones are gate electrodes defining the QPC with a lithographic width and length of $200\,$nm and $500\,$nm, respectively.
A uniformly doped fully ionized donor layer is $55\,$nm below the surface and provides an electron density of $2.5\times10^{11}\,$cm$^{-2}$. The 2DEG (brown in the inset to Fig.~2b of the main article) is $110\,$nm below the surface. Away from the QPC, the two-dimensional regions of 2DEG are modeled by wide quantum wires, making the system quasi-one-dimensional with six open channels per wire. Far away from the QPC the width of the 2DEG is $\sim200\,$nm and the electron density at the center of a wire is $\sim10^{11}\,$cm$^{-2}$.
Due to screening, the potentials and the electron densities on surface and in the 2DEG are independent of the distance from the QPC far away from the QPC. Therefore, we separated the system into two semi-infinite uniform quantum wires and a $1600\,$nm long central region, containing the QPC at its center.
We treated the 2DEG quantum-mechanically within the density functional theory (DFT), using the local spin-density approximation with the Tanatar-Ceperley exchange-correlation functional \cite{Tanatar89}. The presence of the semiconductor was taken into account through the use of the GaAs dielectric constant $\kappa = 12.9$ and an effective mass of 0.067 times the bare electron mass.
In each iteration of the DFT self-consistency loop, we first obtained the Kohn-Sham scattering states. We projected the 2D Kohn-Sham Hamiltonian to the lowest 10 transverse modes at each point along the QPC axis and then solved the resulting quasi-1D scattering problem by matching a wavefunction in the central region to asymptotic plane and evanescent waves in the uniform quantum wires.
Using the Kohn-Sham scattering states we calculated the 2DEG density using the Fermi functions with in general different chemical potentials and temperatures for electrons incoming from the source and from the drain electrodes. We applied the bias voltage symmetrically to the source and to the drain.
We then distributed the remaining electrons on the surface in such a way that the electrostatic potential there assumed the required values. For the exposed surface we used the ``pinned'' surface boundary condition \cite{Davies1995}. In this step we performed the calculation on a wide strip of width $800\,$nm centered at the QPC axis, with periodic boundary conditions, which enabled us to use the fast Fourier transform method. With the resulting charge distribution we calculated a better approximation to the electrostatic potential in the 2DEG to be used in the next iteration of the DFT self-consistency loop.
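The Landauer step of this procedure can be illustrated with a toy numerical sketch. The snippet below is not the self-consistent DFT calculation described above: the transmission function is a hypothetical Lorentzian resonance (the parameters \texttt{e\_res} and \texttt{gamma} are chosen only so that $dt/dE$ takes both signs), while the current is the Landauer integral evaluated, as in the heated-drain simulations, with equal chemical potentials but different reservoir temperatures. It shows how the sign of the thermally driven current follows the sign of $dt/dE$ at the chemical potential.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fermi(energy, mu, temp):
    """Fermi-Dirac occupation (energy and mu in eV, temp in K)."""
    x = np.clip((energy - mu) / (K_B * temp), -500.0, 500.0)
    return 1.0 / (1.0 + np.exp(x))

def transmission(energy, e_res=0.0, gamma=5e-5):
    """Hypothetical Lorentzian resonance; NOT the self-consistent QPC
    transmission, just a shape on which dt/dE takes both signs."""
    return gamma**2 / ((energy - e_res)**2 + gamma**2)

def thermocurrent(mu, t_source, t_drain):
    """Landauer current (in units of 2e/h, times eV) for equal source
    and drain chemical potentials but different temperatures."""
    energy = np.linspace(-2e-3, 2e-3, 80001)  # eV grid around the resonance
    d_e = energy[1] - energy[0]
    integrand = transmission(energy) * (
        fermi(energy, mu, t_source) - fermi(energy, mu, t_drain)
    )
    return integrand.sum() * d_e

# Heat the drain (150 mK) relative to the source (100 mK).  With mu on the
# rising side of the resonance (dt/dE > 0) the net current has one sign;
# on the falling side (dt/dE < 0) it reverses.
i_rising = thermocurrent(-5e-5, 0.100, 0.150)
i_falling = thermocurrent(+5e-5, 0.100, 0.150)
```

By the symmetry of the Lorentzian the two currents are equal and opposite; in the real device the energy dependence of the transmission is set by the self-consistent barrier instead.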
\section*{S2. Device fabrication}
The mesoscopic circuits measured in this work were defined using electrostatic CrAu gates on the surface of a GaAs/AlGaAs heterostructure. The 2DEG was $110\,$nm below the surface, with electron density $n_s = 1.11 \times 10^{11}\,$cm$^{-2}$ and mobility $\mu = 4.44 \times 10^6\,$cm$^2$/Vs measured at $T = 1.5\,$K. The heating channels had a lithographic width $1\,\mu$m and length $100\,\mu$m, with QPCs midway through the channel. The precise QPC geometry can be seen in Fig.~1 of the main text.
\section*{S3. Further insights from Fig. 2d}
We examine several locations in Fig.~2d in closer detail, to see how features in this image result from temperature-induced screening. The drain is on the right (source on the left), so heating the drain causes electrons to flow to the left (right) when $dt/d\varepsilon$ is positive (negative). \textbf{We define positive contributions to $\tilde S$ to correspond to more current flow to the left when $T_\mathrm{d}$ increases.} Schematics indicate how the bottom of 1D subbands vary with position along the QPC. Energy is highest at the saddle point (in the middle of the QPC) and falls off towards source or drain leads, to the left or right respectively.
\hspace{-0.2in}\includegraphics[width=0.96\textwidth]{Fig2dSI.pdf}
\clearpage
\section*{S4. Further discussion about point X$_2$}
Here we present evidence that the extra leg in the thermopower at point X$_2$ in Fig.~3b of the main article is due to the presence of a localized state near the top of the QPC barrier, which makes the potential there very sensitive to heating-induced charging of the localized state at the drain chemical potential.
Note in Fig.~\ref{xcnoxc}b that this extra leg is not present in the $\Delta\mu$,$\Delta T$-DFT simulation if the exchange-correlation term is not taken into account, i.e. in the Hartree approximation. The sensitivity of the potential at the top of the barrier to charging due to heating of the drain is presented in Figs.~\ref{ldos}a,b for simulations with and without the exchange-correlation term, respectively. Starting with both source and drain at the same temperature (the potentials for this case are plotted with thick red lines), we heat the drain, which causes charging (thin red lines) near the point where the drain chemical potential intersects the QPC potential. Electrons in the vicinity screen this additional charge, resulting in a different QPC barrier (thick orange lines) and different distributions of charge (thin orange lines). Because the potential barriers grow higher, the transmission functions become smaller (cyan lines), leading to a positive thermopower, as explained in the main article.
Notice that the potential at the top of the barrier is much more sensitive to heat-induced charging when the exchange-correlation term is taken into account. To see why this is so, consider Figs.~\ref{ldos}c,d showing the local total and spin densities of states, respectively, calculated at point X$_2$ using the spin density functional theory (SDFT). Contrary to the unpolarized DFT calculations used in the main article and in Fig.~\ref{ldos}a, SDFT allows spin polarization to take place, which is favorable in the regions where the density is low, i.e. near the points where a chemical potential intersects the QPC potential. To the left of point X$_2$, i.e., at more negative gate voltages, the QPC barrier rises above both chemical potentials and there are two localized states, one on each side of the barrier. At point X$_2$, the barrier has lowered sufficiently that the localized state pinned to the source chemical potential has spread over the top of the barrier (Figs.~\ref{ldos}c,d). Increasing the gate voltage further, the barrier drops far below the source chemical potential and this localized state is replaced by a smooth unpolarized charge density. The spreading of the localized state across the top of the barrier and then its eventual disappearance occur over a narrow interval of gate voltages. Therefore, near X$_2$ the state of the QPC is very sensitive to additional charge appearing in the vicinity. The formation of localized states at low densities can be traced to the exchange interaction. In the Hartree approximation, the evolution in this gate voltage interval is much smoother and, therefore, such sensitivity is not observed.
\begin{figure}[h]
\includegraphics[width=0.8\textwidth]{FigS1.png}
\caption{$\Delta\mu$,$\Delta T$-DFT thermopower simulations (a) with (same as Fig.~3b in the main article) and (b) without exchange-correlation terms. X$_2$ marks the points at $V_\mathrm{g}=-439\,$mV and $V_\textrm{off}=-0.8\,$mV. \label{xcnoxc}}
\end{figure}
\begin{figure}[h]
\includegraphics[width=1\textwidth]{FigS2.png}
\caption{(a) LDOS at point X$_2$ in Fig.~\ref{xcnoxc}a. The thick red and orange lines are the first subband QPC potentials at $T_\mathrm{s}=T_\mathrm{d}=100\,$mK and at $T_\mathrm{s}=100\,$mK, $T_\mathrm{d}=150\,$mK (the difference to the red line is multiplied by 100), respectively. The long-dashed white lines are the source and drain chemical potentials. The thin red and orange lines show the charging (in $10^{-4}\,$nm$^{-1}$, offset vertically by -1) induced by heating the drain in $\Delta\mu$-DFT and $\Delta\mu$,$\Delta T$-DFT simulations, respectively. The cyan line shows the drop of the transmission function due to the temperature-induced screening (multiplied by 500 and offset horizontally by -100). (b) The same at point X$_2$ in Fig.~\ref{xcnoxc}b. (c)~LDOS and first subband spin-up and spin-down QPC potentials (red and blue lines, respectively) in the LSDA simulation with exchange-correlation terms. (d) Local spin density of states (LSDOS) corresponding to panel (c). \label{ldos}}
\end{figure}
\clearpage
\par
\section{Introduction}
The traditional concept of globular clusters (GCs) as simple stellar populations, where all stars share the same age and abundances within some small tolerance, is now a view of the past, as it has become clear that (nearly) all GCs host significant abundance spreads within them. While all GCs show the same basic pattern -- populations enriched in He, N, \& Na and populations depleted in O \& C -- the specifics of each cluster are unique. It is the manifestations of these distinctive chemical anomalies that cause the impressively complex colour-magnitude diagrams (CMDs) that have been uncovered with precision Hubble Space Telescope (HST) photometry, especially when viewed in the UV and near-UV. These star-to-star abundance variations within clusters are known as ``multiple populations'' (MPs).
\begin{marginnote}[]
\entry{GCs}{Globular Clusters}
\entry{MPs}{Multiple Populations}
\end{marginnote}
The past decade has seen an impressive amount of observational work on the topic, with ground based spectroscopic surveys of thousands of stars within samples of GCs tracing the detailed abundance patterns \citep[e.g.,][]{Carretta:09UVES}, and space-based photometry providing unprecedented views of the number and make-up of the different populations within the GCs \citep[e.g.][]{Piotto:15UVsurvey}. In addition to these observational advances, a number of scenarios for the origin of MPs have been put forward, which have begun providing testable predictions. Alongside the co-formation/evolution of GC populations in galaxies, the origin of MPs is one of the major unsolved problems in GC and stellar populations research.
The goal of this review is to provide an overview of the present state of observations of MPs along with a critical comparison against theoretical models that have been put forward for their origin. We focus the majority of our attention on results obtained since the last {\em Annual Review} on the topic \citep{Gratton:04} and refer the interested reader to that comprehensive review for the historical developments and status of the field up until that time. Additionally, there have been a number of more recent excellent reviews on the topic, notably \citet{Gratton:12Rev} and \citet{Charbonnel:16}. The field has been growing at a rapid rate, with hundreds of relevant papers published each year, and as such, we are unable to reference all work in the field. Instead we use typical examples to illustrate broader points, and attempt to synthesise all results into a coherent status update of the field.
While many of the previous reviews have concentrated on the chemistry of MPs, we explore that as only one line of evidence, and also consider global properties and correlations, relation to field stars and the physical properties of both young and old massive clusters.
We define MPs as the presence of star-to-star variations in chemical abundances, not expected from stellar evolutionary processes. In particular, as will be reviewed below, this means variations in light elements such as He, C, N, and O that can cause complexities in CMDs, as well as Na, Al and in some cases Mg. This can be contrasted with observations of some young ($<2$~Gyr) clusters which show unexpected features in their CMDs (e.g., extended main sequence turn-offs or split main sequences) which are not due to abundance variations but are rather driven by stellar evolutionary processes (i.e., rotationally induced stellar structure changes).
Finally, a note about terminology. Stars within GCs that show enhancements in He, N, Na and are depleted in O and C have various labels in the literature, e.g., ``2nd generation stars'', ``2nd population'', ``enriched stars''\footnote{We use the term ``enriched'' in the ``chemical enrichment'' sense, meaning that the material appears to be processed through nuclear reactions in stars. We note that some elements are in fact depleted (e.g., O, C).}. We chose to use the latter options, as ``2nd generation'' implies a genetic link to a first population. While such a link is possible, it is by no means established (in fact evidence is currently pointing away from this interpretation), hence the use of more neutral terminology is more natural as the origin of MPs is still unknown. However, when referring to models that explicitly invoke multiple generations of stars, we will use the ``generation'' label for clarity. Also, we will use ``correlation'' to refer to a positive correlation between two or more elements, and ``anti-correlation'' for a negative correlation between abundances.
\section{Observations of Abundance Variations and Colour Magnitude Diagrams}
\subsection{Abundance variations}
MPs with distinctive light element abundance patterns are widely observed in old and massive clusters. Abundance spreads are only rarely associated with star-to-star Fe and heavy element variations, implying that some unique chemical enrichment mechanism, operating only in cluster environments, is responsible for the observed chemistry. The suggestion that the light element anomalies arise from nuclear processing within massive stars from a previous generation born within GCs still remains the only theory that has been {\em quantitatively} investigated. Nonetheless, such a hypothesis suffers from several drawbacks and can only account for some of the relevant observations. In the following we will review the status of observations and critically discuss their interpretation in the framework of MPs.
\subsubsection{Light element abundance spreads}\label{SEC:CN}
The presence of chemical inhomogeneities among bright giants in clusters was revealed by pivotal studies in the early seventies \citep[e.g.][]{Osborn:71}. Stars at the same magnitude along the RGB were found to display variations in the strengths of CH, CN, and NH blue absorption features, due to underlying star-to-star variations in C and N abundances \citep{Bell:1980}\footnote{
In a first approximation, the CH absorption at 4300~\AA~can be regarded as a C sensitive diagnostic, while the CN and NH band-strengths (at 3839 and 3360\AA~respectively) are proxies for N.}. Most of the studied GCs display either a bimodal or multimodal CN distribution \citep[e.g.][]{Norris:87}. The molecular CN (NH) and CH bands were found also to be anti-correlated, with CN-strong stars also characterised by weak CH absorption and vice versa; i.e. N is found to anti-correlate with C.
\begin{marginnote}[]
\entry{Primordial star (1P)}{star having the same abundances as the field at the same metallicity [Fe/H].}
\entry{Enriched stars (2P)}{star showing enhanced N, Na, and Al and depleted C and O abundances
with respect to field at the same metallicity [Fe/H].}
\end{marginnote}
While extremely common in clusters, stars characterised by enhanced N and depleted C are rarely found in the field and not present in open clusters \citep[OCs; e.g.][]{MacLean:15,Martell:11}. However, GCs also contain stars that are characterised by the same abundance pattern observed in field stars of the same metallicity. This has led to the notion that GCs are made up of MPs, one with field-like composition, and a second with ``anomalous chemistry'' unique to GCs. In the following, we will refer to the stars with peculiar chemical composition as enriched or 2P (second population) and the stars having field-like abundances as primordial or 1P (first population). We consider enriched or 2P and primordial or 1P as synonyms and we use the expressions interchangeably throughout this review.
Evolutionary mixing was originally proposed as the main cause of the C and N inhomogeneities, as normal stellar evolution may contribute to the observed N-C anti-correlation in evolved RGB stars \citep[e.g.,][]{Deni:90}. However, such an {\em evolutionary} scenario was soon challenged by observations \citep[e.g.][]{Gratton:04}, as mixing theories cannot explain the abundance anomalies seen among non-evolved or scarcely evolved MS and MSTO stars \citep[e.g.][]{Cannon:98,Briley:04}, which are characterised by negligible outer convective zones. Even if sufficient mixing could be achieved during MS evolution, it would also result in changes in helium abundances and extended stellar lifetimes; e.g., mixing would broaden the MSTO region in the CMD, contrary to what is observed (in ancient GCs).
\begin{marginnote}[]
\entry{RGB}{Red Giant Branch}
\entry{HB}{Horizontal Branch}
\entry{AGB}{Asymptotic Giant Branch}
\entry{SGB}{Sub-Giant Branch}
\entry{MS}{Main Sequence}
\entry{MSTO}{Main Sequence Turn-off}
\end{marginnote}
When higher-resolution spectra allowed for direct spectroscopic measurements of Na and O (through atomic lines) in
stars where N and C abundances were available, it was found that the N overabundance (C depletion) was associated to enhanced Na (O depletion); i.e. N-Na and C-O are positively correlated \citep[e.g.][]{Sneden:92}. Also, while the individual abundances of C, N, O show large spreads, the sum C+N+O is generally observed to be constant \citep[e.g.][see also~\S~\ref{sec:SCLUSTERS}]{Dickens:91}.
Anti-correlated Na and O ranges were found in nearly all the studied clusters, along with variations in Al and (possibly) Mg, anti-correlated with each other \citep[e.g.][]{Gratton:04,Gratton:12Rev}.
While O can potentially be depleted in the interiors of low mass stars through the CNO-cycle reactions, variations in the abundances of heavier elements like Na, Al, and Mg cannot be produced by fusion reactions within low-mass stars. This is because their temperatures are too low for the p-capture reactions to operate through the NeNa- and MgAl-chains (e.g., \citealp{Prantzos:07}; Prantzos, Iliadis \& Charbonnel 2017). Hence, the abundance anomalies are not produced in the course of the evolution of the stars we are currently observing but were produced elsewhere, potentially within the interiors of more massive stars. See Fig.~\ref{fig:CUBI} (right panels) for some of the typical (anti-)correlations associated with MPs.
\begin{marginnote}[]
\entry{FRMS}{Fast Rotating Massive Star}
\entry{VMS}{Very Massive Star ($\geq$5000~\hbox{M$_\odot$})}
\end{marginnote}
How this material would then find its way into the low mass stars observed today is still an open question, as is the exact source of the material. Most models to date have adopted a scenario where material from a first generation of stars pollutes the intra cluster medium out of which subsequent generations of stars were born. Several candidate 1P polluters -- either intermediate mass asymptotic giant branch stars (AGBs; $3-9$~\hbox{M$_\odot$}), fast rotating massive (FRMSs, $>15$~\hbox{M$_\odot$}), or very massive (VMSs; $\geq$5000~\hbox{M$_\odot$}) stars -- have been proposed because they are
sites of hot CNO and NeNa processing and we will discuss them in \S~\ref{SEC:NUCLEARREACTION}.
A (weak) Si-Mg anti-correlation was observed in a small number of massive and/or metal-poor GCs \citep[e.g. NGC~6752, NGC~2808, M~15;][]{Yong:05,Carretta:09UVES}, implying proton burning occurring in even hotter environments ($\geq$75 MK) than needed for the CNO and NeNa processing.
The presence of anti-correlated CNONaAl abundances has been demonstrated to be nearly universal among old and massive clusters and has even been suggested to be {\em the} feature distinguishing {\em genuine} globulars from other stellar associations \citep[e.g., OCs or dwarf galaxies;][]{Carretta:10GLOB}. If stars with high N, Na, or Al abundances are found in the field, they are usually considered to have originated from GCs (unless they are part of a binary system). Spectroscopic studies have estimated that at least 3\% of the local field metal-poor star population was born in GCs \citep[e.g.,][]{Carretta:10GLOB,Martell:11}, under the assumption that all 2P stars must form in GCs.
The shape and the extension of the light element anti-correlations (i.e. their extrema, substructure and their multi-modality) vary from cluster to cluster, with some clusters showing a well extended Na-O anti-correlation and others for which both Na and O abundances span very short ranges \citep[e.g.][]{Carretta:09GIR,Carretta:09UVES}.
In a few cases, the Na-O distribution is clumpy, with the presence of one or more gaps
\citep[e.g.][]{Marino:08M4, Lind:11,Carretta:15N2808}. Such quantised distributions may be common, but measurements with very small uncertainties are needed to corroborate this. However, such a multimodality of the blue CN band is nearly universal in metal-rich clusters ([Fe/H] $\geq$ --1.7 dex) where errors on CN measurements are small enough to reveal discrete distributions \citep[][]{Norris:87}.
The light element variations span similar intervals in different evolutionary phases \citep[e.g.][]{Gratton:12HBN1851}.
Observations showing that unevolved stars on the MS and evolved RGB stars span the same ranges of chemical anomalies demonstrate that such light element variations cannot be due to accretion of processed material onto already formed stars, as the anti-correlations would be strongly diluted by mixing as the stars evolve \citep[e.g.][]{Gratton:04}. Also, the ratio between 1P and 2P stars along the AGB appears to be consistent with
the corresponding ratio found on the RGB and the observed HB morphology \citep[e.g.][]{Cassisi:14,Lapenna:16,Lardo:17}.
An Al-Mg anti-correlation is not observed in all the GCs where the Na-O and N-C variations are detected. There are clusters that are characterised by a single Al abundance, while others show wide Al ranges \citep[][]{Carretta:09UVES,Meszaros:15}. The majority of the MW GC stars for which Mg abundances are available have typical Mg abundances in the range 0.2 $\leq$ [Mg/Fe] $\leq$ 0.5 dex; implying a very short (if any) Al-Mg anti-correlation. Only a few Galactic GCs have been found to host stars that are significantly deficient in Mg ([Mg/Fe]$\leq$ 0.0 dex - \citealp[e.g.][]{Carretta:142808,Mucciarelli:12N2419}).
The extent of the Al-Mg anti-correlation correlates with both cluster mass and metallicity, as massive and metal-poor clusters tend to have larger Al-Mg anti-correlations \citep[e.g.][]{Carretta:09UVES,Carretta:09GIR,Pancino:17}.
While the N-C, Na-O (and in some cases the Al-Mg) anti-correlations and photometric spreads along the RGBs (see~\S~\ref{SEC:PHOTOMETRY}) are distinctive signatures present in (nearly) all ancient GCs, the cluster-to-cluster differences are large in terms of the extreme values, substructure and multi-modality. The evidence that each surveyed GC has its own specific pattern of MPs calls for a high degree of variety (or stochasticity) that must be taken into account when proposing MP formation mechanisms \citep[e.g.,][]{BastianHe}.
To date, there have only been a few stars in a handful of GCs that have been fully characterised in terms of their chemistry \citep[i.e. the full set of varying elements: C, N, O, Na, Al, Mg, He, s-process, etc; e.g.][]{Smith:15}. Instead, different surveys have focussed on different elements, and often even different stars within the same GCs. This is an obvious avenue for future studies, to characterise the exact chemical fingerprint of 1P and enriched 2P stars. We refer the interested reader to the compilation of \citet[][]{Roediger:14} for abundances for a number of elements for stars in GCs.
\begin{figure}[!b]
\centering
\begin{minipage}{.4\textwidth}
\centering
\includegraphics[width=1\linewidth]{n6752.pdf}
\end{minipage}%
\begin{minipage}{0.6\textwidth}
\centering
\includegraphics[width=1.\linewidth]{anticorr.pdf}
\end{minipage}
\caption{NGC~6752 C$_{\rm U,B,I}$ vs. V CMD is shown in the {\bf left panel}. Photometry has been kindly provided by Peter Stetson. Spectroscopic targets from \citet{Yong:05,Yong:08} are also plotted. Colours correspond to different chemical compositions, with green, red, and black symbols having high, moderate, and primordial Na content. Stars with different light-element composition, which are well mixed along the RGB in optical colours, occupy distinct sequences in the C$_{\rm U,B,I}$ vs. V CMD. The same stars are plotted in the {\bf middle} and {\bf right panels} to show light abundance variations.}
\label{fig:CUBI}
\end{figure}
\subsubsection{He variations: Main Sequence splitting, Horizontal Branch morphology, direct measurements}\label{SEC:HE}
If the CNONaAlMg star-to-star variations are built in the stellar interiors through CNO cycling
and p-capture processes at high temperatures, we may also expect He variations (as it is the main product of H burning).
The observational data suggest that N and Na variations are always correlated with some (variable) He enhancement. However, this result is mostly based on indirect evidence, as only a handful of studies have provided direct He abundance determinations\footnote{Direct measurements of elements like He, O, Na, and Al require high-resolution spectra and are thus limited to the brighter stars in GCs. Conversely, both N and C are generally measured in fainter stars at the base of the RGB, because evolutionary mixing as the star evolves along the RGB can blur the MP chemical signature. Hence abundance determinations for the whole set of elements which vary in GCs are available only for a few stars in a handful of clusters.}.
He enhancement can be inferred from:
{\em (a)} {\em direct} measurements of He abundances, {\em (b)} splits or spreads of the MS in optical CMDs, and {\em (c)} the HB morphology of the clusters. In what follows we refer to the He mass fraction as Y and denote variations in He as
$\Delta$Y = Y $-$ Y$_{\rm p}$, where Y$_{\rm p}$ = 0.244 is the primordial He mass fraction \citep[e.g.,][]{cassisi03}.
Direct Y measurements are difficult to obtain. Temperatures T $>$ 8500~K are necessary to detect the He photospheric transitions in the optical. However, hot HB stars -- where the He line might appear because of their high temperatures -- are also affected by diffusion and preferential settling of elements \citep{Behr:03}.
As a result, Y can only be measured in stars with temperatures between $\sim$8500 - 11500~K, which are hot enough to show the He line but still cooler than the Grundahl-jump \citep[e.g.;][]{moehler14}, the temperature limit above which the original surface abundances are changed by diffusion \citep{Grundahl:99}. Nonetheless, Y measurements from the photospheric He I line at 5875~\AA~in HB stars have been obtained for some GCs, and variations have been reported, with typical spreads of $\Delta$Y = 0.02--0.05 \citep[see][for a summary]{MucciarelliHe}. He-rich stars have also been shown to be Na-rich and they are systematically located towards the blue regions of the HB \citep[][]{Villanova:09}.
For FGK-type stars, no photospheric lines exist and He can only be measured from the
purely chromospheric He I absorption line at 10830~\AA.
Studies based on this near-infrared transition confirm that He enrichment generally correlates with [Na/Fe] and [Al/Fe].
\citet{Dupree:13} directly measured He abundances from the 10830~\AA~line in two giants in $\omega$~Cen. They estimate a He abundance of Y $\leq$ 0.22 (below the Big Bang nucleosynthesis value) for the 1P star and Y = 0.39 - 0.44 for the 2P one, with the He-rich star also enhanced in Al.
Similarly, \citet{Pasquini:11} performed a differential analysis between two giant stars of NGC 2808 with different Na abundances. They estimated that the 2P star is more He enriched than the Na-poor one by $\Delta$Y =0.17.
While the direct spectroscopic evidence of He-enhancement is somewhat sparse, several
photometric studies provided evidence that such He variations are in place \citep[e.g.,][]{Maeder:06,Anderson:0947Tuc,Bellini:13,Nardiello:15}.
Photometric estimates of $\Delta$Y can be derived by assuming that the observed colour dispersions at a given magnitude on the MS in optical colours (i.e. V--I) are due primarily to He spreads.
Measuring $\Delta$Y spreads from MS isochrone fitting presently appears to be the most reliable method to infer He dispersions (see \citealt{Cassisi:17} and \S~\ref{SEC:PHOTOMETRY}), and recent results from HST photometry reveal that the observed He spreads $\Delta$Y strongly correlate with present-day cluster mass and luminosity, with more massive clusters having larger He spreads \citep[e.g.][which will be discussed in detail in \S~\ref{sec:age_mass}]{Milone:15M62}.
In $\omega$~Cen, the presence of a split MS \citep[e.g.][]{Bellini:10}
has been interpreted in terms of a large variation in the abundance of helium \citep[$\Delta$Y $\sim$ 0.15; e.g.][]{King:12}. The observation that the bluer MS is also $\simeq$ 0.3 dex more metal-rich than the redder MS further supports the existence of such large He enhancement, as canonical stellar models would predict the bluer MS to be more metal-poor than the red one and only a high He value can explain the colour difference between the two MSs \citep{Piotto:05}.
Large He variations are also observed in clusters with homogeneous iron content, as in NGC~2808 where three distinct MSs can be clearly identified in optical CMDs \citep[e.g.][]{Piotto:07}. Given the lack of an iron spread \citep[e.g.][]{Carretta:06}, the MS split is interpreted as being due to three groups of stars with different He \citep{Milone2808He}, which are likely linked to the multimodal HB structure \citep{Dantona:05,Dalessandro:10} and the three chemically distinct groups observed along the RGB \citep{Carretta:142808}. In NGC~2808, such He variations are also correlated with light element abundance spreads, in the sense that stars with 1P composition are associated with the red MS with primordial He content, while stars with high N, Na, and Al are located on the He-rich, blue MS \citep{Bragaglia:10b}.
Variations of He between 1P and 2P stellar groups may also affect the colour and luminosity of the RGB bump, as shown by \citet{Bragaglia:10a}.
Variations in the abundance of He can have a significant impact on the HB morphology \citep{Rood:73,Dantona:02}. This is because He-rich stars evolve faster than those with primordial He and thus, at a given age, He-rich stars at the MSTO are less massive \citep[e.g.,][]{Chantereau:16}. Hence, if both He-rich and He-poor stars experience the same mass loss during RGB evolution, they should end up on the HB with different masses, i.e., different colours (see also \citealp{Norris6752}). Indeed, the HB morphology of several clusters has been modelled in terms of variable He \citep[e.g.][]{Caloi:07,Cassisi:09,Dantona:10,Dalessandro:13,DiCriscienzo:15}.
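The mass bookkeeping behind this argument can be made concrete with a deliberately schematic sketch. All numbers below (turn-off masses, the RGB mass loss, the He-core mass) are illustrative placeholders rather than values from stellar models; the point is only that a lower turn-off mass plus identical mass loss leaves the He-rich population with a thinner hydrogen envelope on the HB, and hence a hotter (bluer) position.

```python
# All values are schematic placeholders (solar masses), NOT model outputs.
m_turnoff = {"Y=0.25": 0.85, "Y=0.33": 0.75}  # He-rich stars evolve faster,
                                              # so at a fixed old age their
                                              # turn-off mass is lower.
delta_m_rgb = 0.20   # assumed identical mass loss on the RGB
m_core = 0.50        # assumed common He-core mass at the He flash

m_hb = {pop: m - delta_m_rgb for pop, m in m_turnoff.items()}
m_env = {pop: m - m_core for pop, m in m_hb.items()}

# Thinner H envelope -> hotter (bluer) HB location for the He-rich stars.
assert m_env["Y=0.33"] < m_env["Y=0.25"]
```

With these placeholder numbers the He-rich stars reach the HB with roughly a third of the envelope mass of the He-poor ones, which is the sense (though not the magnitude) of the effect modelled in the works cited above.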
Since He affects the HB morphology both in terms of temperature (due to mass-loss) and luminosity (because of the different contribution to the luminosity of the H-burning shell), variations in colour (i.e., temperature) along the HB are largely degenerate with mass-loss and age. Interestingly, the presence of He-enhanced populations along the blue part of the HB can be inferred without making assumptions about the RGB mass loss when a combination of optical and far-UV magnitudes is used \citep[e.g.][]{Dalessandro:10,Dalessandro:13}.
Further spectroscopic evidence (not including the measurement of He abundances) strengthens the connection between the HB morphology and the chemical composition \citep[e.g.,][]{Gratton:14,Lovisi:12,Schaeuble:15}. For example, the extension of the Na-O anti-correlation correlates with the maximum temperature of stars along the HB, indicating that the same physical mechanism responsible for the extreme Na enhancement and O depletion is also responsible for the morphology of the blue tail at the end of the HB sequence \citep[][]{Carretta:10GLOB}. This correlation is interpreted as evidence that the HB morphology is determined not only by age and metallicity but also by the He abundance, as Na-rich stars are also He-rich \citep[e.g.][]{Gratton:10HB}. More massive clusters also tend to have HBs that are more extended towards higher temperatures \citep[][]{RecioBlanco:06}. This in turn would again suggest that very massive GCs show larger extents of processing, i.e. very low O and high Na (see \S~\ref{sec:age_mass}).
\subsubsection{Lithium variations among GC stars}\label{SEC:LI}
Lithium traces mixing processes, as it is rapidly destroyed in p-captures at temperatures exceeding $\sim$ 2.5 MK. Thus, if high values of N, Na, and Al are produced through hot H-burning, 2P stars should be depleted in Li.
Some studies have revealed an anti-correlation between Na and Li, as expected \citep{PasquiniLi,Lind:09Li,Dorazi:15Li}. However, importantly, other works have not found evidence for Li variations among stars with 1P and 2P composition \citep[e.g.][]{Mucciarelli:11Li}. Since Li is destroyed at relatively low temperatures (i.e., well below the temperatures at which Na is formed), any material that is enriched in Na should be Li free. In order to explain the presence of some Li in 2P stars, it has been suggested that the polluters' ejecta (i.e. Li free, Na and N rich) must be mixed with unprocessed material, i.e. gas which has always been kept cooler than $\sim$ 2.5 MK \citep[][]{Prantzos:06}. Such models are known as ``dilution models'', see \S~\ref{sec:models}, \S~\ref{sec:dilution}, and Fig.~\ref{fig:dilution}.
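The dilution argument can be made quantitative with a simple two-component mixing formula. The sketch below is a minimal, generic illustration of such a dilution model; the normalised abundance values are purely hypothetical and not taken from any specific yield calculation.

```python
# Schematic two-component dilution model: 2P material is a mix of a mass
# fraction f of pristine gas with (1 - f) of processed polluter ejecta,
# which are Li free but only partially depleted in O.
def dilute(f_pristine, x_pristine, x_ejecta):
    """Abundance (by mass fraction) of the mixed, 2P material."""
    return f_pristine * x_pristine + (1.0 - f_pristine) * x_ejecta

# Illustrative, hypothetical normalised abundances:
li_pristine, li_ejecta = 1.0, 0.0  # ejecta are Li free
o_pristine, o_ejecta = 1.0, 0.2    # ejecta depleted in O by a factor of 5

for f in (1.0, 0.5, 0.2):
    li = dilute(f, li_pristine, li_ejecta)
    o = dilute(f, o_pristine, o_ejecta)
    print(f"f_pristine={f:.1f}: Li={li:.2f}, O={o:.2f}")
```

Because the ejecta are Li free while only partially depleted in O, the fractional Li depletion of the mix always exceeds its fractional O depletion, which is the essence of the constraint discussed below.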
AGBs can potentially produce Li through the \citet{Cameron:71} mechanism at the beginning of hot bottom burning \citep[HBB; e.g.,][]{Ventura:02}. However, the finding of exactly the same Li abundance (or barely different) between 1P and 2P stars indicates that if AGB stars were responsible for the observed anomalies, they must have been able to {\em (a)} produce the same amount of Li previously destroyed by nuclear burning and {\em (b)} give yields close to the values of primordial nucleosynthesis.
This concurrence certainly requires a high degree of fine-tuning and thus this explanation is unsatisfactory. On the other hand, both massive and very massive star models require mixing with pristine material to account for the presence of lithium in 2P stars because their ejecta are Li free. Thus, the maximum depletion of oxygen in the final enriched composition cannot exceed the depletion of Li \citep[][]{Salaris:14}\footnote{As the processed material is expected to be Li-free, whereas it is only depleted in O.}, contrary to what is observed \citep[][]{Shen:10}.
As a matter of fact, all the proposed scenarios have major problems in reproducing the Li content observed in clusters, where small (or no) variations of Li are found associated with large variations of other light elements.
\subsubsection{Mg \& K}\label{SEC:K}
Mg does not show significant star-to-star dispersion in all but a handful of GCs (\S~\ref{SEC:CN}). In only two clusters (namely NGC~2419 and, to a lesser extent, NGC~2808), low Mg abundances are also correlated with extreme K enhancements \citep[e.g.][]{Mucciarelli:12N2419,Carretta:15N2808}, whereas star-to-star scatter in K is not generally observed for the bulk of GCs \citep[][]{Takeda:09}. The K overabundance of Mg-poor stars can be produced, under some assumptions, by AGBs \citep[e.g.][]{Ventura:12}. However, both Na and Al are destroyed at the typical temperatures at which K is produced, e.g. Na and K are anti-correlated in stellar ejecta \citep[][]{Prantzos:17}. Thus, the simultaneous Na and K enrichment seen in NGC~2419 and NGC~2808 cannot be explained if the observed Na and K inhomogeneities are produced by the same stellar source. As NGC~2808 and NGC~2419 are unusual in terms of their K-abundance patterns, it is not clear if this is a promising window into the MP phenomenon, or instead a set of pathological cases that confuse the issue.
\subsubsection{Multiple Populations in Extragalactic Environments}
MPs have also been found outside our Galaxy. Star-to-star abundance variations in N, Mg, Na, and Al were reported in extragalactic GCs by \citet{Mucciarelli:09}, who studied three ancient GCs in the LMC \citep[see also][for earlier studies]{Letarte:06,Johnson:06}. They found that these three clusters followed the same Na-O and Al-Mg anti-correlation trends as seen in Galactic GCs. \citet{Hollyhead:17} measured the N and C abundances of stars in the $\sim8$~Gyr SMC cluster Lindsay~1, based on low resolution spectroscopy of cluster members.
Using HST imaging in filters that are sensitive to C, N and O variations (see Fig.~\ref{fig:SPECTRA}), \citet{Larsen:15Fornax} determined the presence of MPs in four GCs in the Fornax dwarf spheroidal galaxy; they have also been detected in three 6 - 8 Gyr clusters in the SMC \citep{FlorianSMC}; as well as in the only {\em classical} GC in the SMC
\citep[][]{Dalessandro121,Florian:121}.
\begin{marginnote}[]
\entry{SMC}{Small Magellanic Cloud}
\entry{LMC}{Large Magellanic Cloud}
\end{marginnote}
There are a number of GCs within the Milky Way that likely originate from accreted dwarf galaxies. These include GCs associated with the Sagittarius dwarf galaxy, for example M54 (perhaps the nucleus of the galaxy, see \S~\ref{sec:omega}), Terzan 7 \& 8, Pal~12, \& Arp~2. M54 certainly shows MPs \citep{Carretta:10M54}, while the situation is less clear for Terzan~7 and 8 and Pal~12 due to the small samples of stars observed in each \citep[e.g.][]{Cohen:04}.
In addition to resolved star studies, integrated light studies have also found strong evidence for MPs to be present in extragalactic clusters by looking for GCs that are strongly enriched in N or Na. These include many ancient GCs in M31 \citep{Schiavon:13,Colucci:14,Sakari:15} and the lone GC associated with the WLM dwarf galaxy \citep{Larsen:14WLM}.
There have also been attempts to search for MPs in extragalactic environments through integrated light photometry in the UV. If (large) He spreads are present within the clusters, an extreme HB may develop, causing significantly more UV emission than if all stars have the nominal He abundance. Such a UV excess has been observed in some massive extragalactic GCs in M87, M31 and M81 \citep[e.g.,][]{Sohn:2006,Mayya:13,Peacock:17}.
Based on these studies, along with those of Galactic GCs, it appears that one of the main properties of MPs is their near ubiquity in ancient and massive GCs \citep[c.f.,][]{Renzini:15}. However, as will be discussed in \S~\ref{sec:age_mass} this near ubiquity does not appear to apply to the young and intermediate age ($\lesssim2$~Gyr) massive clusters in the LMC/SMC.
\subsection{Multiple Populations as Seen Through CMDs}\label{SEC:PHOTOMETRY}
The peculiar MP chemical composition can also be seen through accurate photometry \citep[e.g.][]{Hartwick:72}.
Imaging allows us to discriminate efficiently between 1P and 2P sub-populations through photometry in samples composed of many thousands of stars, while simultaneously covering a wider region of the sky (a result that is difficult to achieve with the most advanced spectroscopic facilities, even for nearby clusters). The relative number ratios between 1P/2P stars can be inferred and the radial distribution of the two groups can be investigated in detail by taking advantage of the large number statistics secured through photometry \citep[e.g.,][]{Lardo:11,Lee:17}.
Nonetheless, wide-field photometric observations covering the full extension of the clusters (i.e., out to the tidal radius) are available only for a subset of clusters \citep[][]{Dalessandro:14,Massari:16}, even though a large amount of archival data is publicly available.
HST offers very high precision and accuracy to effectively sort different sub-populations \citep[][]{Piotto:15UVsurvey,Soto:17,Milone:17}. The {\em HST UV Legacy Survey of Galactic Globular Clusters} (PI G. Piotto) has had a major impact on the field, allowing for the exploration of MPs and the link with their host cluster in unprecedented precision. However, space-based observations have only a limited spatial coverage\footnote{Moreover, different regions of the clusters are included in the HST FoV, depending on the specific properties of the cluster itself, i.e. core/half-light radii and heliocentric distance.}.
The less dense outer parts of clusters (where the two-body relaxation timescale is longer and mixing less efficient) can retain imprints of different initial configurations of MPs, such as differences in the relative spatial distributions or kinematics of the sub-populations; their study therefore allows us to gain crucial insights into the dynamics at play during the formation of the different sub-populations.
\subsubsection{Causes for the Complex CMDs and Filter Dependence}
Splits or spreads in cluster CMDs have been used to identify MPs and constrain their properties. The cause of these splits
depends on the colour (or colour combination) used to image clusters and on the specific evolutionary stage considered. Briefly, filters encompassing wavelengths shorter than $\sim$ 4000\AA~are very sensitive to individual variations of C, N, O in the outer layers of stars with cooler atmospheres. Conversely, star-to-star variations in He (as well as the CNO sum) impact primarily the stellar structure. As such, they affect mainly optical bands although they have some influence on the UV.
\citet{Salaris:06} first considered the effect of He and light element variations on photometry. They conclude that in the Johnson-Cousins B, V, and I filters only an extreme helium enhancement (Y$\geq$ 0.35) leads to an appreciable colour change of stars with 2P composition as compared to standard 1P stars. A prominent splitting of the MS and the MSTO is produced by relatively large He enhancements, while colour variations due to He variations are less pronounced on the RGB in optical colours. The CNONa anti-correlations do not affect the evolutionary properties of stars, hence the position of stellar models in the theoretical H-R diagram, when the C+N+O sum is kept constant \citep[][]{Sbordone:11}. On the other hand, the observed splitting of the SGB into brighter and fainter sequences in some clusters in optical filters can be interpreted as the result of a change in the C+N+O sum \citep[][]{Cassisi:08,Piotto:12}. Moreover, 1P and 2P stars also have slightly different luminosities at the RGB bump and they occupy different regions on the HB when clusters are imaged with optical BVI filters \citep[e.g.,][]{Bragaglia:10a}.
\begin{figure}
\includegraphics[width=0.74\columnwidth]{spectra.pdf}
\caption{Normalised synthetic spectra of RGB stars with 1P and 2P composition are plotted in the top panel. A number of molecular absorption bands that vary significantly between the two spectra, are also labelled.
The flux ratio between the two spectra is shown in the bottom panel, along with some WFC3/UVIS filters used in photometric studies to pinpoint the presence and properties of MPs
(bottom panel, from left to right: F336W (U), F343N, F438W (B), F555W (V), F814W (I), where we also list the approximate Johnson-Cousins filter equivalent in parenthesis). After \citet{Sbordone:11}.}
\label{fig:SPECTRA}
\end{figure}
Larger colour spreads (from the MS up to the RGB, where the effect tends to be larger) are expected in CMDs including near ultraviolet filters, even while leaving the C+N+O sum unchanged \citep[see][for a comprehensive discussion]{Petrinferni:09}. C, N, and O individual variations are critical, while He enhancement works in the opposite direction of CNONa spreads.
This property appears to be shared by any filter encompassing the wavelength range 3000 $\leq$ $\lambda$ $\leq$~4000\AA, where most of the NH and CN absorptions are located. In Fig.~\ref{fig:SPECTRA} we show synthetic spectra of RGB stars with typical 1P (black) and 2P (red) chemical abundances, and highlight molecular bands that differ between the spectra. Additionally, in the bottom panel we show the flux ratio between the two spectra and the throughput curves of selected HST filters. Due to these spectral differences the colour spread observed in specific colour combinations including near-UV filters has been shown to be very sensitive to light element abundances \citep[e.g.][]{Marino:08M4}. Several combinations of colours have also been introduced to best disentangle the different subpopulations.
For example, \citet{Monelli:13} found that all of the 23 clusters in their sample analysed with ground-based photometry show broadened or multimodal RGBs in the C$_{\rm U,B,I}$ = (U -- B) -- (B -- I) vs. V CMDs, where the different branches of the RGBs are tightly linked to their light element content (see Figs.~\ref{fig:CUBI} \& \ref{fig:SPECTRA}). \citet{Florian:121,FlorianSMC} imaged a number of clusters in the LMC in the colour index C$_{{\rm F336W,F438W,F343N}}$ = (F336W -- F438W) -- (F438W -- F343N) to pinpoint the presence of MPs with different C and N abundances, finding evidence for MPs for all observed clusters older than $\sim$ 6 Gyr \citep[see also][]{Hollyhead:17}.
In Fig.~\ref{fig:CUBI} we show an example case of NGC~6752. In the left panel we show the C$_{\rm U,B,I}$ CMD showing the split/spread RGB of the cluster in this filter combination. Additionally, we show the position of stars on the RGB, labelled in terms of their chemical abundances (right panels). Hence, the position of a star in CMDs, in specific filter combinations, can be used to trace the chemical composition of the stars.
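These pseudo-colour indices are simple linear combinations of magnitudes, so a star's position along them can be computed directly. The sketch below illustrates the two index definitions quoted above; the magnitudes used are hypothetical.

```python
# Pseudo-colour indices used to separate MPs; both are linear combinations
# of magnitudes, e.g. C_{U,B,I} = (U - B) - (B - I) = U - 2B + I.
def c_ubi(u, b, i):
    return (u - b) - (b - i)

def c_f336_f438_f343(f336w, f438w, f343n):
    # C_{F336W,F438W,F343N} = (F336W - F438W) - (F438W - F343N)
    return (f336w - f438w) - (f438w - f343n)

# Hypothetical magnitudes for two RGB stars of similar luminosity: an
# N-rich (2P) star is fainter in U (stronger NH/CN absorption), which
# shifts it along the index relative to a 1P star at the same V and I.
print(c_ubi(17.8, 16.9, 15.4), c_ubi(18.0, 16.9, 15.4))
```

Because N-sensitive absorption enters the index twice with opposite signs in U and I, stars of different N content separate cleanly even when their broadband optical colours are nearly identical.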
\citet{Milone:17} used a similar colour index to constrain the presence and properties of MPs in 57 Galactic old clusters
using the large database from the HST Large Program {\em The HST Legacy Survey of Galactic Globular Clusters: Shedding UV Light on Their Populations and Formation} \citep[see][]{Piotto:15UVsurvey,Soto:17}. UV observations taken in the F275W, F336W and F438W filters with WFC3/UVIS further complement optical HST observations from the {\em ACS Survey of Galactic Globular Clusters} \citep[e.g.][]{Sarajedini:07}. The defined C$_{{\rm F275W,F336W,F435W}}$ = (F275W -- F336W) -- (F336W -- F435W) colour combination allows one to clearly identify photometric splits/spreads caused by variations in individual elements, namely C, N, and O (see Fig.~\ref{fig:SPECTRA}). Also, the combination of UV CMDs with optical photometry allows the He enhancement ($\Delta$Y) of the different subpopulations to be seen.
A pseudo colour-colour diagram (or chromosome map; see Fig~\ref{fig:uv_hst_legacy}) has also been introduced to
identify different subpopulation from the HST UV survey photometry by highlighting subtle chemical differences (in light elements and He)
between them \citep[e.g.][]{Milone2808He}. Briefly, two fiducial lines are drawn to fit the blue and red envelopes of the RGB sequence in the
F814W vs. C$_{{\rm F275W,F336W,F435W}}$ and F814W vs. (F275W-F814W) CMDs.
The red and blue fiducial lines are then used to verticalise the RGB sequence, such that each fiducial translates into a vertical line. A pseudo colour-colour plot can then be made of the position of each RGB star in the verticalised colours, $\Delta ^{N} _{C\rm {F275W,F336W,F438W}}$ and $\Delta ^{N}_{{\rm F275W,F814W}}$. An example of such a diagram can be seen in Fig~\ref{fig:uv_hst_legacy} for NGC~2808 stars, which reveals the presence of at least six sub-populations with distinct chemistry.
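The verticalisation step can be sketched as follows. This is a schematic illustration of the procedure described above, not the exact pipeline of \citet{Milone:17}; the linear fiducials and adopted RGB width are hypothetical.

```python
# Schematic "verticalisation" of an RGB sequence: each star's colour is
# re-expressed relative to the red and blue fiducial lines evaluated at
# the star's magnitude, so both fiducials map onto vertical lines.
def verticalise(colour, mag, blue_fid, red_fid, width):
    blue, red = blue_fid(mag), red_fid(mag)
    return width * (colour - red) / (red - blue)

# Hypothetical linear fiducials for an F814W vs. (F275W - F814W) CMD:
blue_fid = lambda m: 1.0 + 0.10 * m   # blue envelope of the RGB
red_fid = lambda m: 1.5 + 0.10 * m    # red envelope of the RGB
W = 0.5                               # adopted RGB width in this colour

# A star on the red fiducial maps to 0, one on the blue fiducial maps to
# -W; the chromosome map plots this quantity in two colour systems.
delta_red = verticalise(red_fid(15.0), 15.0, blue_fid, red_fid, W)
delta_blue = verticalise(blue_fid(15.0), 15.0, blue_fid, red_fid, W)
print(delta_red, delta_blue)
```

Repeating this in both the C$_{{\rm F275W,F336W,F435W}}$ and (F275W -- F814W) planes, and plotting the two verticalised quantities against each other, yields the pseudo colour-colour diagram.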
With such diagrams, \citet{Milone:17} were able to efficiently distinguish the 1P and 2P populations for most clusters, although some clusters did display a continuous distribution (see Fig.~\ref{fig:uv_hst_legacy} for the division). These distinctions were confirmed through comparison with the results of ground based spectroscopic studies, i.e. 1P stars identified photometrically corresponded to stars with the field abundance patterns of Na and O.
\begin{marginnote}[]
\entry{YMC}{Young Massive Cluster - a.k.a. young GC}
\end{marginnote}
With the precision of HST photometry, relatively tight constraints can be placed on any age difference between the populations. Using the UV-Legacy survey data, \citet{Nardiello:15} selected stars from the 1P and 2P populations based on UV images in the Galactic GC NGC~6352. The authors then estimated the age of each population independently, using optical CMDs (V-I vs. I) centred on the MSTO of each population. The optical colours are not strongly affected by MPs (although He variations can affect optical colours, as can non-constant C+N+O sums), hence any differences would be attributed primarily to age differences (if He variations are taken into account, which the authors did). In this case, the age difference was found to be $10 \pm 110$~Myr. When all sources of uncertainty are included (including the [$\alpha$/Fe] ratio), the authors find that the two populations are coeval with an upper limit of $300$~Myr between them. This is consistent with a similar upper age limit found by \citet{Marino:12} for M22. Tighter age constraints can be obtained from younger clusters that show MPs (YMCs; see \S~\ref{sec:ymcs}).
\subsubsection{A spread amongst 1P stars?}
An unexpected result of the \citet{Milone:17} study was that the 1P population displayed a significant spread in some clusters (although no spread was seen in Na and O for these stars) while being quite compact in other clusters.
Based on the data provided in \citet{Milone:17}, it appears that $\sim70$\% of the GCs in that sample display a significant spread in their 1P stars. While this appears to be common, many clusters do not show an extended 1P, and it is not clear at present what (if any) cluster property controls the spread in the 1P stars.
Preliminary computations (Lardo et al., in preparation) reveal that for intermediate- and low- metallicities the $\Delta C(\rm {F275W,F336W,F438W)}$ colour spread essentially traces N (e.g., stars are sorted in order of increasing N abundance from bottom to top in the chromosome map of Fig.~\ref{fig:uv_hst_legacy}). Conversely, the $\Delta ({\rm F275W,F814W})$ colour spread is sensitive to He enhancement of the different subpopulations (e.g. in order of increasing He content, from right to left; see right-hand panel of Fig.~\ref{fig:uv_hst_legacy}). The spread in 1P stars is seen predominantly in the $F275W-F814W$ colour (UV - I) suggesting that He variations are present within the 1P, which would be very surprising given the lack of Na, N or O variations within this population.
This in turn suggests that some stars with little or no N-spread, show significant enhancement in their He values, which is in conflict with basic nucleosynthesis. Hence something else, other than the recycled by-products of stellar nucleosynthesis, has caused the He variations within 1P stars. This appears to be a particularly promising avenue for future study.
\begin{figure}[!tb]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{CMD.pdf}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{fig_review.pdf}
\end{minipage}
\caption{{\bf Left panel:} An HST UV-optical CMD of the central regions of NGC~2808 \citep[data are from][]{Piotto:15UVsurvey}. Note the distinct multiple RGBs and the highly structured HB. This complexity is due to light element abundance variations (He, C, N and O) between cluster stars. {\bf Right panel:} A ``chromosome map'' of NGC~2808 (after \citealt{Milone:17}) for RGB stars (i.e. relative positions of the stars on the RGB in different filter combinations that are sensitive to different abundance variations) where at least six distinct populations can be inferred. Here the x-axis is mainly sensitive to variations in He while the y-axis is dominated by variations in N (and C and O to a lesser extent). Based on the definition of \citet{Milone:17}, stars above the dashed line are considered 2P, while stars below the same line are 1P. Note that both the 1P and 2P consist of 3 extended sub-populations.
}
\label{fig:uv_hst_legacy}
\end{figure}
\subsection{Are there single population GCs?}
Nearly all GCs analysed at high resolution, with the exception of Ter~7, Pal~12, Pal~3, and Rup~106, show Na and O variations. Ter~7 and Pal~12 are low mass members of the Sagittarius dwarf galaxy, and
high-resolution abundances exist only for a handful of cluster members \citep[$\leq$ 5 stars;][]{Cohen:04, grazina:04,Sbordone:07}. The same holds for Rup~106, a slightly more massive (5 $\times$10$^{4}\hbox{M$_\odot$}$) cluster with a probable extragalactic origin \citep[9 stars;][]{Villanova:13Rup106} and Pal~3, a distant GC in the outer halo, where the available data (2 stars) can neither confirm nor refute the presence of a Na-O
anti-correlation \citep{Koch:09}. Increasing the sample of stars studied in these low mass clusters is essential to determine if there is a lower GC mass limit where MPs are present \citep[e.g.,][]{Dalessandro:14}. In this respect searching for MPs through photometric methods can be problematic in these clusters as the low number of RGB stars often makes it difficult to identify MPs there, unless the populations are well separated (i.e., have large N or He variations).
In this respect, the case of the SMC old cluster NGC~121 studied by \citet{Dalessandro121}, is quite illustrative.
The authors derived Na and O abundances for five RGB stars and found no intrinsic scatter in either element.
However, they detected two RGB sequences in their UV images, meaning that MPs are present. 2P stars were missed in their spectroscopic sample as it was biased (as most spectroscopic samples are) towards the outer regions of the cluster, where the fraction of 2P stars is often lower than in the central regions.
Two other old GCs have been claimed not to host MPs based on either ground based photometry or low resolution spectroscopy, E~3 \citep[][]{Salinas:15} and IC~4499 \citep[][]{Walker:11}, although followup HST photometry has detected MPs in IC~4499 (Dalessandro et al. in prep.). Additional high-resolution studies designed to measure the abundance of the relevant light elements (e.g. Na, O, etc) for a representative number of stars in such clusters are needed to draw firm conclusions on the presence of MPs.
As will be discussed in \S~\ref{sec:age_mass}, a number of high mass ($\sim10^5$~\hbox{M$_\odot$}) clusters younger than $\sim2$~Gyr have been studied, and so far none have been found to host MPs (e.g., Mucciarelli et al.~2008; 2014; Martocchia et al.~2017).
\begin{figure}[!htb]
\centering
\centering
\includegraphics[width=0.82\linewidth]{milone17.pdf}
\caption{Based on results from the HST UV Legacy Survey we show a summary of how MP properties vary with the present day GC mass \citep[after][]{Milone:17}. $\Delta W_{\rm F275W,F336W,F438W}$
and $\Delta W_{\rm F275W - F814W}$ are the widths of the RGB in the two colours (or colour combinations) corrected for the effect of metallicity, to a first approximation a measure of the amount of N- and He-enrichment (respectively) present in the cluster (i.e. the difference between the most enriched stars and the primordial stars). f$_{\rm{enriched}}$ is the fraction of 2P stars relative to the total number of stars, as measured on the RGB. In the {\bf bottom panels} we show f$_{\rm{enriched}}$ vs. cluster mass and $\Delta W_{\rm F275W,F336W,F438W}$. The solid (red) lines in each panel give the best linear fit to the data, and the probability of no correlation between the points (P) is shown in each panel. Self-enrichment scenarios (for standard nucleosynthetic stellar sources) all predict an anti-correlation between f$_{\rm{enriched}}$ and $\Delta W_{\rm F275W,F336W,F438W}$, opposite to the observed trend. All data are from \citet[][]{Milone:17}.}
\label{fig:correlations_with_mass}
\end{figure}
\subsection{Global properties and correlations}
\subsubsection{Spatial Distributions, Dynamics, and Binary Properties of the Different Populations}
\label{sec:spatial_distributions}
In many cases different stellar sub-populations seem to not share the same radial distribution. Across a range of cluster-centric distance, most studies have found that 2P stars are systematically more concentrated in the innermost region than 1P stars \citep[e.g.][]{Lardo:11,Simioni:16}. Only a few exceptions to this general trend have been reported, with stars with primordial composition being more centrally concentrated than 2P giants \citep[][]{Larsen:15m15,Vanderbeke:15,Lim:16} or 1P and 2P stars having the same radial distribution \citep[e.g.][]{Dalessandro:14,Miholics:15}.
Hints that 2P stars have lower velocity dispersion \citep[e.g.][]{Bellazzini:12,Kucinskas:14} and more radially anisotropic velocity distribution \citep[][]{Richer:13,Bellini:15} have also been reported. The binary properties of 1P and 2P stars may also be different, with 2P stars showing a lower binary fraction \citep[][]{Dorazi:10,Lucatello:15}.
\subsubsection{Observed Population Ratios}
While there are radial trends in the 2P/1P ratios, in most cases large samples of stars are required to demonstrate this statistically. Overall, 2P stars make up the majority of stars in most GCs, and the fraction of 2P stars is seen to be a strong function of cluster mass, with more massive clusters having larger fractions of 2P stars (e.g., Milone et al.~2017 - see Fig.~\ref{fig:correlations_with_mass}). \citet{BastianLardo:15}, using mainly spectroscopic results from the literature which are biased towards the outer regions of clusters, did not find any trends between the enriched fractions (\hbox{f$_{\rm enriched}$} $=N_{\rm 2P}/N_{\rm tot}$) and metallicity or galactocentric distance\footnote{They also did not find any correlations between \hbox{f$_{\rm enriched}$}\ and cluster mass, but found an average value of $\hbox{f$_{\rm enriched}$}=0.68$ which agrees well with the average from HST photometry, although why they did not find a trend with mass is not entirely clear.}. This has been confirmed with HST photometry \citep{Milone:17}. Hence, the MP phenomenon is not directly linked to the environment in which the cluster forms (e.g., within dwarf galaxies or the bulge of the Galaxy).
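The enriched fraction itself is a simple counting statistic. The sketch below adds a basic binomial error estimate; real studies must additionally correct for radial coverage and photometric completeness, and the star counts used here are hypothetical.

```python
import math

# f_enriched = N_2P / N_tot, with a simple binomial uncertainty.
def enriched_fraction(n_2p, n_tot):
    f = n_2p / n_tot
    err = math.sqrt(f * (1.0 - f) / n_tot)
    return f, err

f, err = enriched_fraction(680, 1000)  # hypothetical RGB star counts
print(f"f_enriched = {f:.2f} +/- {err:.3f}")
```

Even for a sample of a thousand RGB stars the statistical uncertainty is at the percent level, so the scatter in observed \hbox{f$_{\rm enriched}$}\ values between clusters of similar mass is dominated by systematics (sample selection and radial coverage) rather than counting noise.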
The trend between population ratios and mass is a key constraint on scenarios for the origin of MPs which will be discussed in \S~\ref{sec:trends_cluster_properties}.
\subsection{The Role of Cluster Age and Mass}
\label{sec:age_mass}
It is still not clear precisely which properties of the clusters determine whether MPs will be present within the cluster. However, with the release of large and homogeneous surveys we can begin searching for correlations between cluster properties (e.g. age, mass, location) and the presence/absence of MPs as well as their extent, in order to glean clues as to the mechanisms responsible for MPs. In Fig.~\ref{fig:MASSAGE} we show a collection of clusters from the literature where MPs have been searched for, in the age-mass plane and the [Fe/H]-concentration (mass/R$_{\rm h}$) plane.
\begin{figure}[!tb]
\centering
\begin{minipage}{.7\textwidth}
\centering
\includegraphics[width=1\linewidth]{age_mass.pdf}
\end{minipage}
\begin{minipage}{0.7\textwidth}
\centering
\includegraphics[width=1\linewidth]{concentration_fe.pdf}
\end{minipage}
\caption{A summary of results from the literature on whether MPs are present within clusters. Circles denote clusters where MPs have been unambiguously detected, triangles show where they have not been detected (with large enough samples to suggest a true absence) and squares show ambiguous cases (mainly due to small samples or potentially large observational uncertainties). Some particularly interesting cases are labelled. An age of 15 Gyr has been assigned to clusters for which no age determination has been found in the literature. Whether or not a cluster hosts MPs depends on its mass (or density) as well as its age. The data come from the compilation of \citet{Krause:16} with additional points added from more recent works discussed in this review.}
\label{fig:MASSAGE}
\end{figure}
\subsubsection{Cluster Mass}
As it became apparent that (nearly) all of the ancient GCs host MPs and that (so far) none of the open clusters do, it was suggested that cluster mass may play a key role \citep[e.g.][]{Carretta:10GLOB}. The general argument is that if clusters host a deep enough gravitational potential well, they may be able to retain the stellar ejecta of a first generation of stars and form a second generation with that enriched material. This is generally based on an escape velocity argument although often overlooks the role of energetic stellar sources, like high/low mass x-ray binaries or ionising white dwarfs \citep[e.g.][]{Dercole:08,Krause:12,McDonald:15}.
Cluster mass does appear to be an important parameter for GCs, playing a role in determining whether MPs are present, but also in the properties (i.e. how severe the abundance variations are) of the MPs. The first hints for this came from \citet{Carretta:10GLOB} who used their large sample of stars in 19 GCs to search for correlations between the extent of the Na-O anti-correlation (as measured through the interquartile distribution) and various cluster properties. The strongest relation found was with cluster mass, with higher mass clusters showing larger Na-O abundance spreads. This is difficult to reconcile with standard stellar evolution, as the stellar ejecta released into the cluster should not depend on cluster properties. For models that invoke dilution, this would require that lower mass clusters undergo more dilution (whereas lower mass clusters would be expected to accrete less gas from their surroundings) or that higher mass clusters retained a larger fraction of the processed material. Since models already assume that GCs retain 100\% of the material processed through the enriching source (e.g., FRMSs, AGBs, IBs, etc.), this would further exacerbate the mass budget problem (see \S~\ref{sec:mass_budget}).
Similarly, \citet{Milone:15M62} found that the He spread ($\Delta Y$) within Galactic GCs is much larger in higher mass clusters. Although this was only based on nine GCs, it will be directly tested with a much larger sample from the UV Legacy Survey of GCs \citep{Piotto:15UVsurvey}. In Fig.~\ref{fig:correlations_with_mass} we show the results from \citet{Milone:17} for the width of the RGB in the $(F275W-F814W)$ CMD (corrected for metallicity effects) which is a proxy for He spread (Lardo et al. in prep.). This confirms and extends the trend reported by \citet{Milone:15M62}.
One of the major results from the UV GC Legacy Survey has been the discovery of a strong correlation between cluster mass and the fraction of enriched stars (\hbox{f$_{\rm enriched}$}) within the cluster \citep{Milone:17}. Here, \hbox{f$_{\rm enriched}$}\ is found in the $\Delta_{\rm{F275W,F814W}}$ vs. $\Delta_{\rm{C F275W,F336W,F438W}}$ colour-colour plot (see Fig.~\ref{fig:uv_hst_legacy}). The authors note that in some cases the 1P population appears to be made up of multiple groups, hence \hbox{f$_{\rm enriched}$}\ may be a lower limit. In Fig.~\ref{fig:correlations_with_mass} we show some of the main results from \citet{Milone:17}, namely how N-spread ($\Delta_{\rm{C F275W,F336W,F438W}}$), He-spread ($\Delta_{\rm{F275W,F814W}}$), and f$_{\rm enriched}$ vary as a function of mass (after removing the trends with metallicity).
High mass clusters (e.g., NGC~2808, 47~Tuc with M$_{\rm cluster} \sim10^6$~\hbox{M$_\odot$}) have \hbox{f$_{\rm enriched}$}$\approx 0.8$, while clusters with masses near $10^5$~\hbox{M$_\odot$}\ have \hbox{f$_{\rm enriched}$}$\sim0.4-0.5$. Note that the enriched population still makes up a substantial fraction of the stars even in low mass clusters. It is not just the fraction of enriched stars that varies with cluster mass, but also the extent of the enrichment (i.e. larger abundance spreads in higher mass clusters). This is in agreement with earlier work based on spectroscopic samples \citep[][]{Carretta:10GLOB}. The implications of these results will be discussed in \S~\ref{sec:trends_cluster_properties}.
There have also been studies focused on old open clusters, which typically have masses much lower than GCs, e.g., Ber~39 \citep{Bragaglia:12}. To date, MPs have not been found in open clusters with masses as high as $2\times10^4$~\hbox{M$_\odot$}\ and ages as old as $\sim$ 6 - 9~Gyr. Comparison of 6-8~Gyr clusters in the SMC with masses of $\sim10^5$~\hbox{M$_\odot$}\ \citep{Hollyhead:17,FlorianSMC} with their lower-mass open cluster counterparts (e.g., Berkeley 39) hints that mass may indeed play a role (see Fig.~\ref{fig:MASSAGE}). The SMC clusters do host MPs, while the open clusters do not. Although, of course, the formation environment may also have been different.
Recent studies have also targeted low mass ancient GCs, such as NGC~6362 ($M\sim5\times10^4$~\hbox{M$_\odot$}, \citealt{Dalessandro:14}) or E3 ($1.4\times10^4$~\hbox{M$_\odot$}\ - \citealt{Salinas:15}), with mixed results. NGC~6362 does host MPs, but based on its orbit and observed stellar mass function, it is likely that it has lost a significant amount of mass during its evolution \citep[e.g.,][]{Kruijssen09}. E3, on the other hand, does not appear to host MPs, based on low-resolution CN spectra. The very extended (R$_{\rm h}\sim25$~pc) outer halo cluster Palomar 14, with a mass of only $\sim10^4$~\hbox{M$_\odot$}, does appear to host MPs \citep{Caliscan:12}. The current record holder for the lowest current stellar mass cluster that still hosts MPs is NGC~6535, with a few$\times10^3$~\hbox{M$_\odot$}\ (\citealp{Milone:17}; Carretta et al. 2018).
A summary of the role of mass (and concentration) in whether a cluster hosts MPs or not is shown in Fig.~\ref{fig:MASSAGE}. There is overlap between ancient GCs that do host MPs and younger clusters that do not. However, the data are consistent with a lower initial mass limit of $\sim10^5$~\hbox{M$_\odot$}\ above which MPs can develop (at least for clusters older than $\sim2$~Gyr, see next section).
\subsubsection{Cluster age and metallicity}
As discussed above, nearly all of the ancient GCs that have been studied in the necessary detail host MPs. However, stellar clusters with masses and densities comparable to, or even significantly above, those of the ancient GCs formed after the peak epoch of GC formation ($z=2-5$), and continue to form up to the present day. Hence, an obvious question is whether these clusters also host MPs, and if so, whether they can be used to test the formation scenarios that have been put forward (see \S~\ref{sec:models}).
There have been a number of studies to search for MPs in massive clusters with ages $<8$~Gyr (see \citealt{Krause:16} for a recent review). With only a handful of exceptions (discussed above) it appears that all massive clusters older than $\sim6$~Gyr host MPs \citep{Hollyhead:17,FlorianSMC} while all clusters younger than $\sim$2~Gyr do not \citep[e.g.,][]{Mucciarelli:08,Mucciarelli:14LMC,Martocchia:17N419}, even with mass being held constant (at $\sim10^5$~\hbox{M$_\odot$}; see Figure~\ref{fig:MASSAGE}).
The $\sim6$~Gyr clusters NGC~339, NGC~416 and Kron~3, all located in the SMC, show clear evidence for MPs \citep[][]{FlorianSMC}. This age corresponds to a formation epoch of $z_{\rm form} = 0.65$, arguing against a cosmological origin of the phenomenon (i.e. special properties of the early universe that contributed to the formation of MPs). Unexpectedly, however, NGC~419, another massive cluster in the SMC with a similar mass of $\sim2\times10^5$~\hbox{M$_\odot$}\ but an age of only $\sim1.7$~Gyr, does not host MPs, based on HST photometry \citep{Martocchia:17N419}. The youngest cluster found so far to host MPs is NGC~1978, at an age of $\sim2$~Gyr \citep{Martocchia:18a}, suggesting that MPs (at least on the RGB) develop in an extremely narrow age range (or alternatively stopped being able to form in the LMC/SMC) between $\sim1.7-2$~Gyr\footnote{We note, however, that in the $2-8$~Gyr clusters, MPs have only been identified through N-variations. High-resolution studies to also estimate Na and O in these stars would be a welcome contribution.}. This is shown in Fig.~\ref{fig:MASSAGE}, where clusters like NGC~1783 and NGC~1978 lie on opposite sides of this dividing line in age, although with nearly identical masses. However, there are also older clusters like Ber~39 (a Galactic OC) that do not host MPs, suggesting that mass (and potentially formation environment) plays a strong role as well.
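The quoted formation redshifts follow from converting a cluster age into a cosmic time. A minimal sketch for a flat $\Lambda$CDM cosmology, using the analytic age-redshift relation; the H$_0$ and $\Omega_{\rm m}$ values below are assumed round numbers, not necessarily those adopted in the original studies:

```python
import math

# Analytic age-redshift relation for a flat Lambda-CDM universe
# (matter + cosmological constant only). H0 and Omega_m are assumed
# illustrative values.
H0 = 70.0 / 978.0          # Hubble constant in 1/Gyr (70 km/s/Mpc)
OM, OL = 0.3, 0.7          # matter and dark-energy density parameters

def age_at_z(z):
    """Age of the universe at redshift z, in Gyr."""
    return (2.0 / (3.0 * H0 * math.sqrt(OL))) * \
        math.asinh(math.sqrt(OL / OM) * (1.0 + z) ** -1.5)

def z_form(cluster_age_gyr):
    """Formation redshift of a cluster of a given age (bisection solve)."""
    target = age_at_z(0.0) - cluster_age_gyr   # cosmic time at formation
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if age_at_z(mid) > target:   # age too large -> need higher z
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(z_form(6.0), 2))   # ~0.65: the ~6 Gyr SMC clusters
print(round(z_form(2.0), 2))   # ~0.16: close to the quoted z = 0.17
```

With these round parameters the $\sim6$~Gyr clusters indeed map to $z_{\rm form}\approx0.65$, consistent with the value quoted in the text.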
There have also been a number of studies that have searched directly for abundance spreads in young/intermediate age massive clusters, based on high-resolution spectroscopy \citep[e.g.,][]{Mucciarelli:08,Mucciarelli:14LMC} of individual stars. No solid evidence for abundance spreads has been found so far for any cluster younger than $\sim2$~Gyr.
A number of studies have attempted to search for abundance anomalies through integrated light spectral studies
\citep[e.g.,][]{Colucci:12,CabreraZiri:16,Lardo:17Antennae}. These are mainly focussed on finding high mean levels of elements that typically vary due to MPs, namely [Na/Fe] or [Al/Fe]. As with the resolved studies, to date there have not been clear indications for abundance spreads in the young or intermediate age clusters ($<2$~Gyr), although the ancient GCs do show the expected trends in integrated light.
Finally, Fig.~\ref{fig:MASSAGE} also shows the results from the literature on whether a cluster hosts MPs in [Fe/H] vs. concentration (mass/radius) space. There is overlap in both [Fe/H] and concentration where clusters do/do not host MPs. Systematic searches for MPs in diffuse GCs may lead to significant new insights.
\begin{summary}[Observational Summary of Multiple Populations]
\begin{enumerate}
\item MPs, as seen in light element abundance spreads (C, N, O, Na, Al, He and sometimes Mg), are nearly ubiquitous in old massive GCs, independent of their formation environment (formed within the Galaxy or elsewhere) or metallicity.
\item MPs can be defined through clear correlations and anti-correlations between light elements, the main ones being a Na-O anti-correlation, a N-C anti-correlation, a Na-N correlation, and correlations of N and Na with He. In some clusters Li is correlated with O (and hence anti-correlated with Na), although Li measurements are relatively scarce.
\item In most clusters [Fe/H] is constant between the populations and the sum of C+N+O is also typically constant within the measurement uncertainties (although there are more clusters with C+N+O spreads than those with [Fe/H] spreads).
\item Observed abundance trends are qualitatively consistent with those expected from the yields of hot hydrogen burning (increase in He, N, Na, sometimes Al; decrease in C, O, sometimes Mg), however no nucleosynthetic source provides a quantitative match to the data simultaneously.
\item It is the spreads in He, C, N, and O (mainly) that cause the complexity observed in high precision CMDs for the majority of clusters (i.e. not age spreads or Fe spreads).
\item The fraction of enriched stars (ranging from $40-90$\% in the ancient GCs), the extent of the anti-correlations, and the He spread within the clusters are all a strong function of the cluster mass, increasing with increasing mass. Hence, the cluster properties appear to play a strong role in the formation of MPs. 2P stars make up the majority of stars in most GCs today, meaning that a substantial amount of processed material is required to form them. This leads to the ``mass-budget problem'' which will be discussed in \S~\ref{sec:mass_budget}.
\item It appears that the abundance patterns are discrete, when high precision measurements are possible, with many clusters showing the presence of $>3-4$ sub-populations.
\item The majority of clusters in the HST UV Legacy Survey ($\sim70\%$) show a spread in their 1P stars, in addition to the spread in the 2P stars. Preliminary modelling suggests that this is mainly due to He variations in 1P stars that are not accompanied by variations in other light elements (e.g., N, Na, O).
\item In most clusters studied to date the enriched population of stars is either more centrally concentrated than the primordial population or, if the cluster is dynamically relaxed, the two populations share the same distribution. However, in a handful of cases the situation is reversed, with the 1P stars more centrally concentrated than the 2P stars.
\item MPs have been detected in clusters as young as $\sim2$~Gyr, which corresponds to a formation redshift of $z=0.17$, well past the peak epoch of GC formation ($z=2-5$). Surprisingly, MPs have not been found in massive clusters with ages less than $2$~Gyr.
\item MPs are found in the full range of GC metallicities, from [Fe/H]$\sim-2.5$ to near solar metallicity.
\end{enumerate}
\end{summary}
\section{Nucleosynthesis and Multiple Populations}\label{SEC:NUCLEARREACTION}
All elements whose abundances show considerable scatter in GC stars (i.e. C, N, O, Na, Mg and Al) may participate in hydrostatic hydrogen burning. As a consequence, the anti-correlated abundance ranges of C, N, O, Na, and Al observed in GCs have been interpreted as the result of hydrogen burning through the CNO-cycle and the NeNa- and MgAl-chains \citep[e.g.][]{Langer:93}.
In the CNO-cycle, H is converted into He, and the individual abundances of C, N, and O are altered whereas their net sum remains constant (as required by observations, see~\S~\ref{SEC:CN}).
The CNO-cycle is activated at T $\sim$ 20 MK, while the NeNa chain requires temperatures around $\sim$ 40 MK. Na reaches its equilibrium value at $\sim$50 MK and decreases at higher temperatures. At still higher temperatures (T$\simeq$ 70 MK) Al can be produced by p-captures on Mg isotopes \citep[e.g.][]{Denisenkov:89,Prantzos:07}.
Three stellar types have been proposed as candidate polluters, because they reach extreme temperatures within their interiors (see also \S~\ref{SEC:LI} and \S~\ref{sec:SCLUSTERS} for additional constraints from elements other than CNO, Na, Al, and Mg). The possible donors of 2P processed material are: intermediate mass ($\sim$ 3-8 $\hbox{M$_\odot$}$) AGB stars experiencing HBB \citep[e.g.][]{Dantona:16}; massive stars $\geq$ 15 $\hbox{M$_\odot$}$ \citep[][]{Krause:13,deMink:09}\footnote{This happens in the cores of massive stars, so additional processes are necessary to bring the material to the surface. In the case of single stars, rotational mixing has been suggested, the so-called FRMSs. Interactions between massive stars in binary systems can also bring processed material to the surface.}; and VMS \citep[$\sim$ 10$^{4}$ $\hbox{M$_\odot$}$;][]{Denissenkov:14}. Scenarios invoking mixed contributions from different polluters have also been proposed \citep[e.g.,][]{Sills:10,Bastian:13}.
As we discuss the characteristics of each of the proposed stellar sources, as well as the scenarios developed around them, we will keep track of their successes and failures in reproducing key observations. This will be done in Fig.~\ref{fig:truth_table}. When a model matches an observation a green check is used, while a green check with an asterisk notes that the model may be consistent with observations under reasonable assumptions. Red crosses indicate when a model is in direct conflict with an observation, and a red cross with an asterisk shows where a model may match an observation but only with a high degree of fine tuning, or where solving that problem would violate another constraint.
\begin{figure}[!tb]
\includegraphics[width=0.95\columnwidth]{truth_table_v2.pdf}
\caption{A graphical summary of the comparison between predictions for the proposed models and observations (a.k.a. ``Truth Table"). A (red) cross shows a direct contradiction; a (red) cross with an asterisk shows a contradiction that may be avoided with relatively extreme fine tuning, or if the solution to that problem would violate another constraint; a (green) checkmark denotes where the prediction of a model is consistent with observations; a (green) checkmark with an asterisk indicates a situation where the model can be brought into agreement with observations with a (potentially) reasonable assumption (i.e. some degree of fine tuning is necessary); and finally a ``?" indicates where a model has not been developed enough to make a reliable prediction. As can be seen, no model does particularly well when compared to observations.}
\label{fig:truth_table}
\end{figure}
\subsection{Potential Sources of the Enriched Material and Constraints from the Observed Variations}
Several observational constraints can naturally be reproduced within the proposed self-enrichment scenarios. Yet, a number of ad hoc assumptions must be made to explain other MP properties. For the sake of clarity, in what follows we briefly introduce and discuss candidate stellar polluters for GC self-enrichment \citep[see also][]{Renzini:15,Charbonnel:16}.
\subsubsection{Massive Stars}
Massive ($\geq$ 15 $\hbox{M$_\odot$}$) MS stars reach the high temperatures required to manufacture the observed CNONaAl pattern very early in their MS evolution \citep[e.g.,][]{Maeder:06}. The fast rotation required by MP models allows for the transport of nuclides from the convective core to the radiative envelope, while mass is lost through {\em (1)} a slow outflowing equatorial disc produced by a mechanical wind when the MS star rotates close to critical velocity, and {\em (2)} a fast radiatively driven wind in the direction unhampered by the disc. The enriched second generation stars are then predicted to form within this outflowing equatorial disc (i.e., a decretion disc).
\begin{marginnote}[]
\entry{Decretion disc}{A disc made up of lost material around the equator of a rapidly rotating star}
\end{marginnote}
\begin{itemize}
\item The N-C and Na-O anti-correlated pattern is quickly established in massive star interiors, although the details of the chemical enrichment depend on the adopted reaction rates.
The FRMS are also able to process some Mg, which results
in a production of Al, at the expense of $^{24}$Mg. However,
this requires that the nuclear reaction rates for proton capture on
$^{24}$Mg are increased by three orders of magnitude \citep[e.g.][]{Decressin:07a}. Using nominal reaction rates,
FRMS would produce a positive Al-Mg correlation, which
contradicts the observed anti-correlation.
Finally, the temperatures reached in massive star interiors are not high enough to build the Si-Mg anti-correlation observed in a subset of clusters (\S~\ref{SEC:CN}), nor variations in elements heavier than Al.
\item Na and Al directly correlate with He, as observed (\S~\ref{SEC:HE}). However, the predicted He enhancement is significantly higher than the value allowed by observations \citep[see \S~\ref{SEC:HE}; e.g.][]{BastianHe,Chantereau:16}. Since the NeNa reaction is very efficient, a large fraction of the material in the massive star core does have the correct Na pattern before an extreme He enhancement is produced early in the life of the star. Thus, it is possible to reproduce the observed $\Delta$Y if some mechanism is able to increase the mass loss at critical rotation and halt self-pollution before large amounts of He are injected into 2P stars (i.e. if the core material can be accessed earlier than models predict). This would, however, introduce a high degree of fine tuning.
\item Discs where 2P stars are forming must detach at a certain stellar mass/age (which varies from star to star depending on its initial mass and metallicity) to avoid pollution by He-burning products, i.e. a strong increase of C and O not allowed by observations.
\item Massive star ejecta are also Li free, so one must invoke some degree of dilution with unprocessed material to reproduce observations (see \S~\ref{SEC:LI}).
\item Rotating massive stars would coexist with the supernovae from single stars as well as with other massive stars. Hence, it is not clear how their discs can survive in the crowded central GC regions \citep[e.g.][]{Renzini:15}.
\item 2P abundances would necessarily have a continuous distribution. The photometric and spectroscopic discreteness observed in some clusters cannot be readily reproduced by massive stars \citep{Krause:16}.
\end{itemize}
\subsubsection{Very Massive Stars (VMS)}
\citet{Denissenkov:14} envisioned a scenario where the most massive stars in the young cluster sink to the centre as a result of dynamical friction. Shortly after they reach the centre, the massive stars undergo multiple collisions with each other in a runaway process eventually forming a very massive star. VMSs with masses $\sim$ 10$^{4}$ $\hbox{M$_\odot$}$ are predicted to be fully convective with luminosities close to the Eddington limit, allowing for a significant mass loss. Below are some important constraints on VMS as the polluting stars.
\begin{itemize}
\item By the end of their MS lifetimes, VMSs are expected to reach very high He fractions, which would contradict the observed limits of $\Delta$Y in GCs today (\S~\ref{SEC:HE}). Hence, in order to stop the overproduction of He, it has been suggested that a VMS fragments (soon after it formed), when only a small fraction of H has been transformed into He. Thus, hot H-burning should occur only for a limited amount of time during the MS evolution of a VMS to reproduce the observed $\Delta$Y distribution, e.g. until Y has increased to Y$\sim$ 0.4.
\item While the observed anti-correlations and the Mg isotopic ratios are nicely reproduced (contrary to the case of AGBs and FRMSs), VMS nucleosynthesis cannot account for the observed Li (\S~\ref{SEC:LI}). Therefore, dilution is also required in this model.
\item Only stars with masses in the range 2 $\times$ 10$^{3}$ - 10$^{4}$ \hbox{M$_\odot$}~have central temperatures that can produce the observed GC light element anomalies up to Mg (e.g., Prantzos, Iliadis \& Charbonnel 2017).
\item VMSs have not been observed and their existence is still highly speculative. Moreover, due to the relativistic conditions required to model them, which in general have not been included in stellar evolutionary codes, their evolutionary and nucleosynthetic yields are highly uncertain.
\end{itemize}
\begin{marginnote}[]
\entry{SDU}{Second Dredge-up}
\entry{TDU}{Third Dredge-up}
\entry{HBB}{Hot Bottom Burning}
\end{marginnote}
\subsubsection{AGB Stars}
Processed material with some of the observed 2P chemical composition can be provided by intermediate-mass ($\sim$5 - 6.5 $\hbox{M$_\odot$}$) AGB stars through a complicated interplay of nucleosynthesis and mixing episodes, namely the SDU, the TDU, and HBB \citep[e.g.][]{Karakas:14}. Contributions by lower mass AGBs should be avoided, because AGBs less massive than $\sim$ 3.5 $\hbox{M$_\odot$}$ would release ejecta with enhanced C+N+O content into the cluster\footnote{Surface C+N+O enhancements are also predicted for rotating AGB stars more massive than 4 \hbox{M$_\odot$} \citep{Decressin:09}, contrary to observations of the majority of GCs.}.
During the SDU the convective envelope extends into the H-exhausted region and mixes to the surface mostly He and N from CNO cycling. Ashes from He burning nucleosynthesis (mostly C and O, as well as Na and Mg) are eventually transported from the interior to the surface by the TDU, leading to an increase of the total C+N+O in the ejecta. Following each TDU episode, the H-burning shell is re-ignited until the next instability of the He-burning shell develops. This exchange of power between H- and He-burning shells, along with the associated TDU episodes, occurs many times during the AGB phase, and the overall changes in the surface abundances of AGB stars caused by TDU episodes strongly depend on mass, metallicity, mass loss, etc.
Intermediate-mass stars also have envelopes that can reach very high temperatures (up to $\sim$ 100 MK, with the maximum temperature reached being a function of the AGB mass), high enough to activate hot H-burning. This process is known as HBB. As a result, the envelope is exposed to regions where hot H-burning takes place, until the temperature at the base of the convective envelope drops below $\sim$20 MK (because of the mass loss which removes the envelope), at which point HBB is no longer supported.
\begin{textbox}[!hb]
\section{Comparison of AGB Model Yields}\label{BOX:AGB}
The chemical evolution of AGB star models greatly depends on the adopted input physics.
Different treatments for convection and mass loss recipes lead to variations of the HBB or TDU efficiency (among others) in the AGB models, indirectly changing the chemical yields. As a result, {\em ``the predictive power of AGB models is still undermined by many uncertainties''} \citep[][]{Ventura:05}.
Models based on the mixing-length theory (MLT) with low convective efficiency fail to reproduce most of the observed chemical anomalies \citep[e.g. ][]{Fenner:04,Doherty:14}. In particular, they predict HBB temperatures that are too low to allow for efficient ON processing, i.e. AGBs produce too much Na and do not provide large O depletion. Also, Mg and Al are positively correlated in the yields. 2P stars would also show an increase in the total CNO, which contradicts observations \citep[][]{Ivans:99}.
Full spectrum of turbulence (FST) models are, compared to the MLT case, more consistent with MP observations. The FST model for turbulent convection results in a large convective efficiency, which translates into very strong HBB \citep[e.g.,][]{Ventura:05,Ventura:05a}. Higher temperatures are reached at the base of the convective envelope, and stars evolve to higher luminosities with respect to the MLT case. As a consequence of the high luminosity and larger mass loss, they undergo a limited number of thermal pulses, so that the impact of TDU in changing the surface composition is limited. However, the lack of TDU in the FST models also limits the amount of Na that can be produced in AGB stars with M$\geq$ 5 $\hbox{M$_\odot$}$, which reach temperatures so high that sodium is destroyed, providing a negative sodium yield. The theoretical yields may be reconciled with the observations only if we assume that the (uncertain) cross section of the main channel of sodium destruction is a factor of $\sim$2-5 lower than the recommended values \citep[][]{Ventura:06,Dantona:16}. Finally, in the FST case, the magnesium isotopic ratios are expected to exceed (by far) unity in the more massive stellar models (M $\geq$ 4 \hbox{M$_\odot$}), in contrast to what is observed \citep[][]{Yong:03}.
This problem is shared by the MLT model.
\end{textbox}
A summary of the ability of AGB stars to match observed MP abundances is as follows:
\begin{itemize}
\item Pollution from AGBs qualitatively reproduces some of the light element variations observed in 2P stars. However, it is not possible (without some modifications to the main physical inputs and relevant cross sections) to obtain simultaneous O depletion and Na enrichment while keeping the C+N+O sum constant in AGB yields, as required by observations \citep[e.g.][]{Sneden:00coll,Charbonnel:16,Marigo:17}.
Indeed, the composition of the material ejected by AGBs through winds critically depends on which mechanism (either TDU or HBB) dominates. The net effect of TDU is the mixing of He-burning products to the surface, in particular C, Ne and O. HBB destroys O and produces Na by p-captures on the dredged-up Ne (note that the surface Na abundance first increases during the SDU). At the temperatures required to destroy Mg ($\sim$ 100 MK), Na is destroyed again \citep[e.g.][]{Denissenkov:03}. Thus, without the Ne dredged up by TDU and converted into Na by HBB, low values of O in the ejecta would necessarily lead to low Na at very high temperatures \citep[e.g.][]{Denissenkov:03}. Na production can be increased by invoking an efficient TDU to effectively replenish Na via dredged-up Ne. However, this would lead to an increase of the overall CNO sum that is not allowed by observations. Alternatively, Na destruction can be lowered by tweaking reaction rates \citep[][]{Renzini:15,Dantona:16}.
\item The observed Li distribution is not reproduced, and dilution with a large amount of material characterised by the same GC pristine composition (same initial abundances and Fe) is needed \citep[e.g.][see \S~\ref{sec:dilution}]{Dercole:16}. Dilution is also required in order to obtain the observed anti-correlations (e.g., Na-O). As the cluster is $>30$~Myr old before AGB stars evolve, it is not clear where this material would come from (see \S~\ref{sec:origin_of_dilution}). The need to dilute AGB ejecta with unprocessed material also requires that material from the first massive stars exploding as SNe be removed from the cluster, e.g. in order to avoid variable pollution from Fe-rich material resulting in [Fe/H] spreads.
\item He-rich material is mixed to the surface via the SDU, whereas the TDU and HBB are responsible for changes in the light elements. Thus, He, Na and Al should not be strictly correlated in AGB yields \citep[e.g.][]{Charbonnel:16}\footnote{Even if some initial Na enrichment during the SDU is expected, Na production due to the burning of dredged-up Ne also contributes to the resulting Na abundance. Thus, an obvious correlation between Na and He is not expected {\em a priori}.}. The He content of the ejecta is predicted to increase with stellar mass, and can reach values up to Y$\sim$ 0.38 in super-AGB stars \citep[e.g.,][]{Ventura:13}, less than that observed in some GCs.
\item Since the temperatures reached during HBB are related to the envelope opacity and thus to the overall metallicity of clusters, the AGB model would naturally explain why the products of extreme nucleosynthesis (Mg depletion and Si and K production) are observed only in metal-poor clusters. However, it is not clear why many metal-poor clusters do not show these trends. The HBB temperature may be high enough to alter Si and K abundances in the most massive AGB models \citep[e.g.,][]{Ventura:12}, however at such temperatures Na would be destroyed, i.e. 2P stars would have low Na abundances \citep[][]{Charbonnel:16}.
\item Low-mass AGBs could potentially be responsible for the star-to-star variations in C+N+O and s-process elements observed in a handful of clusters (\S~\ref{SEC:PECULIAR_GCS}). However, they cannot produce the light-element variations themselves (because of the competition between TDU and HBB).
\end{itemize}
\section{Theories for the Origin of Multiple Populations}
\label{sec:models}
\subsection{AGB Scenario}
\label{sec:models_agb}
AGB stars were suggested as the source of the polluted material early on in the development of this field \citep[e.g.,][]{Cottrell:81}. The ``AGB Scenario" is arguably the model that has received the most attention in the literature, and many aspects of the model have been included in other scenarios, even those that use different enrichment sources. Hence, we begin by discussing this model.
\subsubsection{Basic Scenario}
The model envisions the formation of a massive cluster with a single age and abundance pattern (i.e., an SSP), representing a first generation (FG) of stars. The feedback from high mass stars and the associated SNe clears any remaining gas from within the cluster, hence all enriched material from the high mass stars and SNe is lost from the cluster (this is required to avoid Fe-spreads). After $\sim30$~Myr, stars from the FG begin to evolve through the AGB phase of stellar evolution, and the winds of these stars, due to their low velocity ($\sim10-30$~km/s - \citealt{Loup:93}), are not able to escape the cluster, so a reservoir of polluted gas begins to form in the cluster. This material cools and sinks towards the cluster centre, and once a critical density is reached a second generation (SG) of stars begins to form out of this material \citep[e.g.,][]{Dercole:08,Bekki:17}. Early versions of the model had the second generation forming more or less continuously until star formation was truncated by the onset of `rapid' Type-Ia SNe, which would clear the cluster of any remaining gas at an assumed age of $\sim100$~Myr. After the sub-populations within GCs were found to be largely discrete (e.g., M4 - \citealt{Marino:08M4}), the model was refined by invoking multiple discrete bursts between the onset of AGB stars ($\sim30$~Myr after the formation of the FG) and the time when Type-Ia SNe began \citep[e.g.,][]{Dercole:16}.
\begin{marginnote}[]
\entry{First generation stars (1G)}{In models of MP formation, stars of the first generation}
\entry{Second generation stars (2G)}{In models of MP formation, stars of the second generation that show the anomalous chemistry.}
\end{marginnote}
\begin{figure}[!tb]
\includegraphics[width=0.75\columnwidth]{dilution_araa.pdf}
\caption{An illustration of a dilution model. The yields of suggested polluter stars are shown: AGB yields (from \citealt[][]{Dercole:10}) are shown with red squares for different masses (although note that other AGB yields do not show significant Na enhancement - \citealt[][]{Doherty:14}); typical high mass star ($\sim20$~\hbox{M$_\odot$}) yields are given with a blue upside-down triangle (from \citealt[][]{deMink:09}) and very massive star ($\sim5\times10^4$~\hbox{M$_\odot$}) yields are shown (off to the left of the panel - from \citealt{Denissenkov:14}). Dilution models use these yields and then dilute them with gas that has the initial chemical composition (i.e. that of the 1P stars). This leads to dilution tracks where the 2P stars are located. All suggested pollution mechanisms require dilution (to various degrees) to explain the observed chemical abundances (i.e. He and Li). Also shown are data from NGC~104 from the compilation of \citet[][]{Roediger:14}.}
\label{fig:dilution}
\end{figure}
It is worth noting that all AGB models to date do not produce a Na-O anti-correlation, but rather a correlation. In order to reproduce the observed anti-correlations, this scenario requires the (re)accretion of large amounts of pristine material (i.e. material that shares the same abundances as the FG stars) from the surroundings, i.e., dilution of the AGB yields with material that matches the initial chemical composition of 1P stars is required. In Fig.~\ref{fig:dilution} we show the basic idea of a dilution model. Combining the yields from the polluting stars (e.g., AGB stars) with material that matches the 1P stars, dilution tracks can be created to explain the run of chemical abundances observed within clusters, where a 2P star's position is governed by the relative amount of processed material (i.e. AGB yields) and diluting material (1P chemistry) used to form the star.
This accreted material is then mixed with the AGB ejecta and forms SG stars (this mixing is the dilution referred to above); hence the SG of stars would have Na-O abundances ranging from the pure yields of AGB stars to those of the FG. An additional problem for the yields of AGB stars is that in the mass range of $\sim4-9$~\hbox{M$_\odot$}\footnote{AGB stars of lower masses are generally disregarded as contributing to the formation of the SG as they do not conserve the C+N+O sum, which contradicts observations \citep[e.g.,][]{Ivans:99}} some models provide the Na-enrichment and O-depletion required to match observations \citep[e.g.,][]{Ventura:09}, whereas other calculations have found that AGB stars are not able to produce the required Na-enrichment \citep[][see \S~\ref{BOX:AGB}]{Doherty:14}. Additionally, the latter models find that the C+N+O sum is not kept constant at any mass for AGB stars, in conflict with the observed properties of MPs in most clusters.
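The dilution tracks of Fig.~\ref{fig:dilution} amount to two-component mixing that is linear in number density (and therefore curved in the logarithmic bracket notation). A minimal sketch, with hypothetical yield and pristine abundances chosen only to illustrate the shape of a track:

```python
import math

# Two-component dilution: a 2P star forms from a fraction f of processed
# AGB ejecta mixed with (1 - f) of pristine gas with the 1P composition.
# Mixing is linear in number densities, so bracket abundances must be
# converted out of log space first. All abundance values are hypothetical.

def dilute(bracket_yield, bracket_pristine, f):
    """[X/Fe] of gas made of f parts ejecta and (1 - f) parts pristine gas."""
    mix = f * 10.0 ** bracket_yield + (1.0 - f) * 10.0 ** bracket_pristine
    return math.log10(mix)

NA_YIELD, NA_PRISTINE = 0.8, 0.0    # hypothetical [Na/Fe] of ejecta / 1P gas
O_YIELD, O_PRISTINE = -0.8, 0.4     # hypothetical [O/Fe] of ejecta / 1P gas

# f = 0 recovers the 1P composition, f = 1 the pure ejecta; intermediate
# values trace a curved Na-O anti-correlation between the two endpoints.
track = [(dilute(O_YIELD, O_PRISTINE, f), dilute(NA_YIELD, NA_PRISTINE, f))
         for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

A 2P star's location along such a track is set by the single parameter $f$, which is why one diluted source produces a one-dimensional sequence rather than a scatter.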
\begin{marginnote}[]
\entry{IMF}{Initial Mass Function of Stars}
\end{marginnote}
An important aspect of this - and most other - models, is that they can only produce a small fraction of the total cluster mass in 2G stars. This is due to the stellar IMF of the 1G of stars, which only has a small fraction of its total mass in stars in a specific mass range that can produce material to pollute/enrich the 2G of stars (i.e. $f_{\rm enriched,initial} \sim0.02-0.1$). In order to obtain the observed fractions of primordial and enriched stars (\hbox{f$_{\rm enriched}$}$=0.4-0.8$) the model needs to assume that GCs lose substantial fractions of their initial population of stars (1G stars), often up to 95\% of their initial masses while retaining all/most of the 2G stars\footnote{Heavy mass loss is also required by the FRMS scenario \citep[e.g.;][]{Schaerer:11}.}. This will be further discussed in \S~\ref{sec:internal_mass_budget}.
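The scale of the required mass loss follows from simple bookkeeping: if the 2G mass is a fraction $e$ of the 1G initial mass ($e\sim0.02-0.1$ from the IMF) and a fraction $r$ of the 1G is retained, then \hbox{f$_{\rm enriched}$}$= e/(e+r)$. A minimal sketch with illustrative numbers:

```python
# Mass-budget sketch: assume a fraction EJECTA of the 1G initial mass is
# returned as suitably processed material and converted into 2G stars with
# 100% efficiency (an optimistic, illustrative assumption). How much of the
# 1G must then be lost to reach an observed enriched fraction f_enr?
EJECTA = 0.05   # illustrative; ~0.02-0.1 depending on IMF and polluter masses

def required_1g_loss(f_enr, ejecta=EJECTA):
    """Fraction of 1G stars that must be lost, from f_enr = e / (e + r)."""
    retained = ejecta * (1.0 - f_enr) / f_enr   # retained 1G mass fraction r
    return 1.0 - retained

for f in (0.4, 0.7, 0.8):
    print(f, round(required_1g_loss(f), 3))
# Even f_enr = 0.4 needs > 90% of the 1G to be lost, and f_enr = 0.8
# pushes the required loss towards ~99%.
```

Under these illustrative assumptions, reproducing the observed enriched fractions requires the clusters to shed the bulk of their 1G, which is the arithmetic behind the quoted $\sim95\%$ mass-loss requirement.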
In the model envisioned by \citet{Dercole:08} the gas coming off of AGB stars is able to rapidly cool, mix with material (possibly accreted) with the same chemical abundance pattern as the 1G stars, fall to the centre of the cluster, and subsequently form a 2G of stars. However, it is not clear whether such material would be able to cool and remain in the cluster. For example, if the heating from a population of X-ray binaries is included in the simulation, the gas is unable to cool, and instead flows out of the cluster. \citet{Conroy:11} have shown that the Lyman-Werner photon flux of stars of the 1G is high enough to prevent the gas from cooling and sinking to the cluster centre until an age of $200-300$~Myr, delaying the formation of a 2G of stars for a much longer period of time. Such a time delay would be a severe problem for the AGB scenario, as even under optimistic model yields, the C+N+O sum would not be conserved for AGB stars at this mass. \citet{Conroy:11} have also shown that, due to the cluster's motion within the galaxy, Bondi-Hoyle accretion onto the cluster is expected to be very inefficient, and the authors suggest that clusters retain a relatively large fraction of their initial gas mass ($\sim10$\%) by sweeping up the interstellar medium (ISM), so that the cluster has the necessary primordial gas for dilution. This, again, ignores the role of heating from X-ray binaries and other mechanisms not included in standard simple stellar population models, whereas if such sources are included clusters would be expected to be gas free, which is a substantial problem for this model (see \S~\ref{sec:ymcs}). It is also not clear that the material accreted from the surrounding galaxy would match the abundances of the 1P stars to the precision imposed by the lack of Fe spreads in most clusters.
One of the features of AGB stars that makes them promising candidates to supply the enriched material is that they can burn H at higher temperatures than main sequence massive stars; the exact range depends on the metallicity and mass of the AGB star (see e.g., Fig.~8 in \citealt{Prantzos:07}). This allows them to activate the Al-Mg burning chain, and hence to deplete Mg and increase Al. As discussed in \S~\ref{SEC:CN}, a minority of clusters show significant Mg spreads, and most other potential polluting stars have difficulty producing these spreads without adjusting the nuclear cross-sections in an ad-hoc manner. By including dilution, the basic AGB model (for some model yields) is able to quantitatively match the observed Na-O anti-correlation within GCs and qualitatively the increase in He. On the other hand, it does not predict the correct abundance pattern of Li (as material processed through AGB stars should, to first order, be Li free) without invoking and fine-tuning a specific mechanism to produce Li (see \S~\ref{SEC:LI}).
In summary, the basic AGB model, while conceptually simple, has a number of shortcomings that subsequent works have attempted to address. This will be explicitly addressed in \S~\ref{sec:comparisons}.
\subsubsection{Alternative Versions}
In order to avoid the problems associated with dilution (i.e. accreting the material and the associated timing constraints), \citet[][see also \citealt{Renzini:15}]{Renzini:13} suggest that the yields of AGB stars may be very different from those predicted by current theoretical models. Due to the many parameters involved in estimating the yields of AGB stars (see \S~\ref{BOX:AGB}), there is significant freedom when adopting AGB yields. The authors speculate that perhaps the true yields of AGB stars result in a Na-O anti-correlation, so that no dilution would be necessary, although without dilution it would be very difficult to match the Li abundance patterns. Further work is needed to search the full range of potential parameter space of AGB model yields, but work so far suggests that AGB stars are not able to produce a Na-O anti-correlation \citep[e.g.,][]{Marigo:17}. Moreover, if no dilution took place, it would add an additional factor of $\sim2$ to the already strict mass budget problem (which is discussed in detail in \S~\ref{sec:mass_budget}).
It has also been suggested that ancient GCs may have formed embedded in larger dark matter halos, allowing them to hold onto a larger fraction of the material ejected from evolving stars \citep[e.g.,][]{Bekki:07, Trenti:15}. If large/extended dark matter haloes were necessary to form MPs, then we would expect that only the oldest ($z_{\rm form} > 6$) GCs would be able to host MPs, as at lower redshift it would be increasingly unlikely to find a gas-rich dark matter halo that has not undergone significant star-formation (where Fe spreads would be expected). The discovery of MPs in clusters younger than $8$~Gyr ($z_{\rm form} < 1$) argues against this type of scenario \citep{Hollyhead:17, FlorianSMC}.
\subsection{Fast Rotating Massive Stars (FRMS) and Interacting Binaries (IBs)}
Massive stars also undergo hot hydrogen burning in their cores during the MS, and as such are also potential candidates to provide the enriched material needed to form MPs. However, as this happens deep within the stars, it is difficult to bring the enriched material up to the stellar surface where it can be released into the GC intra-cluster medium. Rapidly rotating massive stars can overcome this problem through rotationally induced mixing, which in extreme cases can cause the stars to be (nearly) fully mixed.
\citet{Decressin:07a} and \citet{Decressin:07b} developed a scenario using FRMS as the enrichment source. This scenario is similar to the AGB scenario, using the enriched material from a FG of stars to form a SG, but operates when the cluster is much younger ($<10-20$~Myr). As in the AGB scenario, the ejecta of FRMS must also be diluted to match the observed abundance patterns (typical yields and dilution are shown in Fig.~\ref{fig:dilution}). However, since the cluster is still young, there is no need to bring in material from outside the cluster, as it is assumed that the cluster has retained a relatively large fraction of the gas/dust left over from the formation of the FG. The winds of the FRMS then mix with the leftover gas and form a SG of stars. The FRMS scenario suffers from the same mass budget problem discussed for the AGB scenario \citep[e.g.,][]{Schaerer:11}.
FRMS naturally produce a Na-O anti-correlation and the enriched material can also be strongly enhanced in He, which helps explain clusters with large He spreads like NGC~2808. However, the high He yields may be a problem for more typical clusters with small He spreads \citep[e.g.,][]{Chantereau:16}. FRMS are not able to activate the Al-Mg chain before the end of the MS, so are not able to explain the observed Mg spreads in some clusters without ad-hoc changes to the nuclear cross sections.
\citet{Krause:13} further developed the FRMS scenario by exploring cases where a young GC may not be able to expel the left-over gas from the formation of a 1G of stars, even with SNe, allowing the cluster to remain embedded in its natal GMC for $\sim20$~Myr. The authors suggest that the decretion discs (i.e. equatorial discs forming from material that is thrown off the critically rotating star) might also accrete material from the gas rich intracluster medium, which would solve the dilution requirements.
\begin{marginnote}[]
\entry{GMC}{Giant Molecular Cloud}
\end{marginnote}
\citet{Charbonnel:14} presented a variant of the FRMS scenario designed to solve the mass budget problem (see \S~\ref{sec:mass_budget}). Here, the first generation of stars forms with a top-heavy stellar IMF (i.e., only stars that would not be alive today) and the second generation would consist mainly of low mass stars. In this model, stars with ``primordial composition'' (i.e. 1P stars) would actually be second generation stars that formed primarily from material left over from the first generation. Such a model can be tested through carbon isotopic ratios of MS stars.
Another way to release enriched material from the cores of massive stars into the intracluster medium is through binary interactions. \citet{deMink:09} modelled a binary interaction between a 20 and 15~\hbox{M$_\odot$}\ star and investigated the yields of the expelled material. They found that the 20~\hbox{M$_\odot$}\ star shed about 10~\hbox{M$_\odot$}\ worth of material due to the interaction, and that the yields matched the observed trends in GCs (i.e. Na-enriched, O-depleted, etc). While the overall trends and correlations of the yields should apply to most massive stars, the exact yields depend on a number of parameters, e.g., the time of interaction (i.e. stellar evolutionary state), total mass of the stars and the mass ratio of the stars. Hence, interacting binaries have the benefit of potentially explaining the observed variations from cluster to cluster, but have difficulty matching the discreteness of abundance ratios found in many sub-populations.
A potential problem for scenarios that operate in the first few Myr of a cluster's life is that after 3-8~Myr (depending on the cutoff mass for SNe), core collapse SNe begin to explode. The retention of just a small amount of this material would result in Fe spreads that are in conflict with observations \citep{Renzini:08}. Hence, such models may need to restrict the enrichment process to the epoch before the first core collapse SNe.
\citet{Szecsi:18} proposed a variant on this scenario, where a 2G of stars form in shells around high mass ($150-600$~\hbox{M$_\odot$}) red supergiant stars. This scenario suffers from similar problems as the FRMS scenario (in terms of abundances, discreteness, and mass budget), but also is only expected to operate at low metallicity. Since MPs are found in GCs of all metallicities ($-0.3 > [Fe/H] > -2.5$) this scenario could only apply to a subset of the known GCs.
\subsection{Early Disc Accretion Scenario}
\label{sec:EDAS}
\citet{Bastian:13} suggested an alternative model for MPs that did not invoke multiple epochs of star-formation. Instead, it was driven largely by the constraints posed by YMCs (see \S~\ref{sec:ymcs}). The model used the enriched ejecta from high-mass interacting binary stars \citep{deMink:09}, as well as from FRMS within the cluster, to pollute low mass stars that formed at the same time as the high mass stars. The authors suggested that low-mass ($<2$~\hbox{M$_\odot$}) stars may retain their protoplanetary discs for $\sim10$~Myr, which would sweep up the enriched material as they passed through the cluster core (the authors also assumed that the cluster is mass-segregated from a very early age, so that the high mass stars are concentrated in the cluster centre). The enriched material swept up by the discs would then eventually be accreted onto the host star.
While this scenario matches most observations of YMCs, it has a number of shortcomings as well (see \S~\ref{sec:comparisons}). In particular, it requires that the accreting stars are fully convective (in order to mix the accreted material throughout the star) which in turn means that the accretion timescales are extremely short (1-3~Myrs - \citealt{Salaris:14,Dantona15:pms}). This minimises the time that the mechanism could potentially work which effectively limits the amount of processed material that can be supplied and accreted.
\citet{Wijnen:16} ran hydrodynamical simulations to test this scenario, placing a realistic protoplanetary disc in a ``wind'' of material (i.e. the ejecta of interacting binary stars, where the ``wind'' refers to the disc moving through the intracluster ISM). They found that while the disc did indeed accrete material from the ISM, the accreted material had little or no angular momentum, which caused the disc to rapidly accrete onto the star and disappear. Without the disc no further accretion would be possible. The authors found that this happened on a rapid timescale, $\sim10^4$~years, much shorter than the $10^7$~years required for the scenario to work.
\subsection{Turbulent Separation of Elements During GC Formation}
\citet{Hopkins:14} also put forward a potential origin of MPs that did not invoke multiple generations of star-formation within GCs. In his scenario, MPs would be the result of cloud physics during the earliest phases of GC formation. In extremely turbulent environments, like those in progenitor clouds of GCs, large dust grains can become aerodynamic and begin to move separately from the gas and small dust grains. Large resonant fluctuations in the dust can then develop. Within these overdense regions, dust will be over-represented, so any stars that form within such regions will be enhanced in the elements associated with large dust grains. On the other hand, the gas and small dust grains (like Fe grains) will be more uniformly distributed. In principle, this mechanism provides a natural and powerful way to separate elements in the early phases of GC formation. Since this mechanism depends on the level of turbulence, it would predict larger abundance spreads in more massive proto-GC clouds, consistent with observations.
However, as noted by the author, Na and O normally occur on the same dust grains, so such fluctuations would predict a Na-O spread, but as a correlation instead of the anti-correlation seen in GCs. Also, He is not affected by dust, so an additional mechanism would need to be invoked to explain the inferred He spreads in GCs. Finally, any enhancement of an element in some stars would necessarily lead to a depletion of that element in other stars. We would then expect, starting from the field star abundance composition, more or less symmetrical spreads around the field star abundance. Observations, however, show scatter in a single direction away from where halo field stars lie (at a given metallicity).
\subsection{Reverse Population Order for GC Formation Scenarios}
In order to alleviate the mass-budget problem (which will be discussed in \S~\ref{sec:comparisons}), some authors have tentatively investigated formation models where the abundances of forming stars move from 2P to 1P as star formation within the cluster proceeds (e.g., \citealp{Marcolini:09}, Pancino et al., in preparation).
The scenario outlined by \citet{Marcolini:09} envisions GC formation from gas enriched locally by a single Type Ia SN and AGB yields, superimposed on an ambient medium pre-enriched by low-metallicity Type II SNe. The star formation of the proto-GC takes place only inside this region, and stars born within the inner volume will be depleted in O and Mg (because of the single SN Ia) and enhanced in N, Na and Al abundances (due to AGB pollution). Outside this volume lies a region with the same composition as the proto-halo gas at the epoch of GC formation. After a new generation of stars is born, the associated SNe II begin to pollute and expand the inner volume, while mixing with the lower metallicity material from the external shell, i.e. gas with pristine composition.
Hence, the [Fe/H] and the CNO sum remain constant during cluster evolution and the N-C and Na-O anti-correlations can be reproduced. The Al-Mg anti-correlation can only be reproduced assuming that AGBs produce more Al than predicted by models \citep[by a factor of $\sim$10-50; e.g.][]{Karakas:07}.
In a following paper, the authors focus on other elements and achieve some success in reproducing the observed trends \citep{Sanchez:12}. Nonetheless, the dynamical feasibility of the scenario has not been probed with hydrodynamical simulations, and severe assumptions need to be made on the Fe content of the ISM at the epoch of formation, as well as on the size of the inner region where the inhomogeneous pollution by the SN Ia and AGBs is confined \citep[e.g.][]{Sanchez:12}. More importantly, this class of models requires very peculiar stellar configurations that are not expected at the present epoch \citep[e.g.][]{Conroy:12}.
\subsection{Extended Cluster Formation Event}
\citet{Elmegreen:17} have further explored a model put forward by \citet{Prantzos:06} that invokes the special conditions of galaxies or GMCs at high redshift (namely high density, high turbulence and high pressure environments) to foster the formation of MPs before the first SNe occur ($<3$~Myr). Here, a first generation SSP is born in the core of a massive, dense and turbulent GMC. Due to the high stellar densities, high mass stars have their envelopes stripped very rapidly (and rotating massive stars lose large parts of their envelopes through decretion discs), and these stripped layers (as discussed above) are expected to show many of the observed abundance anomalies. This material mixes with that left over from the formation of the FG and forms subsequent generations. Low mass FG stars are assumed to be ejected through two mechanisms: the first is binary dynamics, and the second is the rapidly varying gravitational potential of the cloud core/cluster as the gas within it (which dominates the potential) is moved by stellar feedback. It remains to be seen whether the required high FG mass loss rates (and low SG mass loss rates) are feasible.
\citet{Wunsch:17}, following on \cite{Tenorio:05}, have suggested that the winds released from massive stars can become so dense in a massive and dense young cluster, that they enter a catastrophic cooling regime and can collapse into the cluster centre. Here, the material may mix with left over primordial material (i.e. dilute) and form a second generation of stars. Hence, this is another mechanism (rather than stellar interactions) that can potentially make enriched material from massive stars available for further epochs of star-formation within a cluster. This also suffers from the mass budget problem and would require large fractions of 1G stars to be lost. \citet{Lochhaas:17} develop this model further in terms of chemistry and show that the model is not able to simultaneously account for the increasing enriched fraction and increasing chemical spread with increasing cluster mass (see \S~\ref{sec:trends_cluster_properties}).
As these scenarios invoke massive stars, we will include them in our comparisons with observations (in particular the abundance trends) alongside other scenarios that invoke massive stars (\S~\ref{sec:comparisons}).
A key aspect of this scenario is that it happens (and terminates) before the first SNe occur within the proto-GC, in order to avoid Fe spreads (similar to the FRMS scenario). One potential problem with the scenario is that it takes high-mass stars some time to increase their He content through nuclear burning, whereas this model starts using stripped material from the massive stars at $t=0$. This may be acceptable for standard clusters with small He spreads (e.g., NGC~104), but it may be difficult to reproduce clusters like NGC~2808, which hosts a large He spread.
Finally, for the limited models available of interacting binaries and fast rotating massive stars, it is not clear that they will be able to provide the stochasticity (i.e., the specific abundance pattern - extrema, discrete sub-populations - for each GC) required to match the observations. \citet{Elmegreen:17} suggest that sub-clumps may form within the proto-GC, and each sub-clump would have its own chemistry due to the exact chain of stellar interactions. However, these sub-clumps would each be expected to be $>10^4$~\hbox{M$_\odot$}, where the stellar IMF is fully sampled, hence stochastic effects would be expected to be minimised. The \citet{Wunsch:17} scenario suffers from the same problem.
\subsection{Very Massive Stars Due to Runaway Collisions}
\label{sec:vms}
\citet{Gieles:18} have developed a model for MPs that adopts VMSs ($>10^3$~\hbox{M$_\odot$}) as the origin of the processed material. In this model, the proto-cluster undergoes adiabatic contraction due to gas accretion, increasing the stellar density and subsequently the stellar collision rate. A runaway collision process can form a VMS, which releases the products of hot hydrogen burning through its stellar wind into the intra-cluster environment \citep{Denissenkov:14}. This processed material mixes with pristine gas (i.e. gas with the same abundance pattern as the initial proto-cluster) and forms further generations of stars until the VMS burns out, or potentially explodes due to instabilities within the star. Because the VMS can be continuously rejuvenated through stellar collisions, the amount of processed material ejected by the star can be several times the maximum mass of the star. While this process leads to multiple generations of stars within the cluster, the expected age spread would be less than $\sim3$~Myr.
One major advantage of this model is that it predicts a super-linear scaling between the mass of the VMS and the mass (or density) of the cluster. This naturally produces the observed trend of increasing fractions of enriched stars (and potentially also the increasing spreads in N, Na, etc.) as a function of GC mass. This kind of model also does not violate the constraints from YMCs, and much of the expected abundance pattern also appears to match observations.
One of the major drawbacks of the model is that VMSs are still only theoretical, although the authors perform numerical simulations showing that under certain conditions (relevant for GC formation) runaway collisions are likely to take place, even when considering two-body relaxation and the strong stellar mass loss of the massive object due to its stellar wind. This same process is expected to also be at work in clusters today, if they reach the required stellar densities. Hence, it is not clear if the model can explain why NGC~1978 ($\sim2$~Gyr) hosts MPs while NGC~419 ($\sim1.5$~Gyr) does not, given their similar masses and radii.
\section{Comparing Predictions to Observations}
\label{sec:comparisons}
\subsection{Chemical Abundance Patterns}
\label{sec:chemical_abundance_patterns}
\subsubsection{The Need for Dilution}
\label{sec:dilution}
As discussed in \S~\ref{sec:models}, the suggested stellar sources for the origin of the polluted material have difficulties in reproducing some of the observed abundance trends. For example, AGB model yields suggest that Na and O should be correlated, not anti-correlated. Also, wherever nuclear processing of C, N, O, and Na takes place, the resulting material is expected to be Li free, whereas observations show that Li is constant or only slightly varying in GCs from star-to-star. In order to address these problems, most models have adopted some form of dilution, i.e., that the enriched material produced by 1P stars is mixed with material that matches the chemistry of the 1P stars (referred to as ``primordial'' material).
Here we discuss the predictions of dilution in comparison with observations. A basic illustration of a dilution model is given in Fig.~\ref{fig:dilution}.
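In its simplest form (a sketch in our own notation, not the specific parameterisation of any one model), the composition of 2P stars formed from a mass fraction $f$ of processed ejecta mixed with a fraction $1-f$ of primordial material is, for an element X,
\begin{equation}
\mathrm{[X/H]}_{\rm 2P} = \log_{10}\left[\, f\,10^{\mathrm{[X/H]}_{\rm ejecta}} + (1-f)\,10^{\mathrm{[X/H]}_{\rm pristine}} \,\right],
\end{equation}
so that varying $f$ from 0 to 1 traces a curve in abundance space between the primordial composition and the pure ejecta, as illustrated in Fig.~\ref{fig:dilution}.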
\subsubsection{Li Variations}
Without dilution we would expect all 2G stars to be effectively Li free, as any material subjected to hot hydrogen burning will have its Li rapidly destroyed. Observations show, however, that in some GCs Li is constant between 1P and 2P stars, or that it is depressed in 2P stars, i.e. anti-correlated with Na and correlated with O (see \S~\ref{SEC:LI}). The amount of Li would then reflect the amount of diluting material included in the formation of 2P stars. This assumes that Li is not produced by other processes. In principle, AGB stars can produce some amount of Li through the Cameron-Fowler mechanism \citep{Cameron:71}, but this requires extreme fine-tuning to match the observed Li variations/constancy (see \S~\ref{SEC:LI}).
\citet{Salaris:14} have pointed out difficulties in such dilution models. Essentially, since the enriched material is expected to be Li free while only depleted in O, the spread in Li should always be larger than the spread in O. However, for at least one cluster, NGC~6752, the spread in Li is smaller than the spread in O. The Li spread (in relation to Na, O and other light elements) needs to be studied in other clusters, but if these results are confirmed this poses a major problem for all models that use high mass stars (i.e. $>15$~\hbox{M$_\odot$}) as well as AGBs.
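The essence of this argument can be sketched as follows (assuming Li-free, O-depleted ejecta mixed with a fraction $1-f$ of pristine material; our notation): the 2P Li abundance is $(1-f)$ times the pristine value, while the 2P O abundance is $f\,\mathrm{O}_{\rm ejecta} + (1-f)\,\mathrm{O}_{\rm pristine}$, so
\begin{equation}
\Delta\mathrm{[Li]} = -\log_{10}(1-f) \;\geq\; -\log_{10}\!\left[(1-f) + f\,\frac{\mathrm{O}_{\rm ejecta}}{\mathrm{O}_{\rm pristine}}\right] = \Delta\mathrm{[O]},
\end{equation}
with equality only for completely O-free ejecta. Hence, under pure dilution, the Li depletion should always be at least as large as the O depletion.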
There are tentative hints that the amount of Li variation is larger in higher mass clusters, similar to what is observed for Na, O, He and N. Indeed, in high-mass and metal-poor clusters, stars characterised by extreme composition (very high Na and Al enhancement) are also Li-poor \citep[e.g., NGC~1904, NGC~2808, NGC~6752, M~5, NGC~6397; see][and references therein]{Dorazi:15Li}. The presence of a fraction of 2P stars with depleted Li abundance is surprising, because 2P stars with an intermediate degree of chemical variations share the same Li abundance as 1P stars. If the light element anomalies are produced by nucleosynthesis in the interior of stars, this finding implies that some mechanism (either dilution or Li production by AGBs) should operate to restore the Li abundance to its initial value in 2P stars with intermediate composition, without changing Li in extreme 2P stars. Again, such an interpretation requires extreme fine-tuning.
\subsubsection{Quantitative Abundance Trends and the Need for Stochasticity}
While many studies have compared the observed abundance distributions of specific clusters to the yields of potential polluter stars, few have carried out a more general analysis including multiple elements and comparisons between clusters. \citet{BastianHe} studied a sample of eight Galactic GCs that all had measurements of their Na-O anti-correlations as well as spreads in He based on HST imaging. With the exception of NGC~2808, the authors conclude that the observed distributions (Na, O, He) were not in agreement with the predicted yields of AGBs, FRMS, interacting binaries, or very massive stars, even when dilution with primordial material was taken into account. Specifically, based on the extent of the Na-O anti-correlations, large He spreads ($\Delta Y > 0.1$) would be expected in all cases, whereas in most cases $\Delta Y_{\rm obs} < 0.05$.
\citet{BastianHe} also considered ``empirical yields'', i.e. adopting the observed Na-O anti-correlation and He spreads observed for a given cluster, and comparing that to the other GCs in the sample. Surprisingly, even when using the ``empirical yields'' a satisfactory fit for the other clusters could not be reached (even when controlling for metallicity). The conclusion is that whatever the polluting source, it needs to produce a high degree of cluster-to-cluster variations in order to explain the observations. Dilution of a fixed set of yields does not help in explaining the full set of observations. This argues against the stellar sources normally considered (i.e., AGBs or massive stars) being the origin of the enriched material, as none can provide the necessary cluster-to-cluster variation. However, the multi-modal abundance patterns within GCs suggest that for a given GC, the yield/dilution combination is quite uniform (i.e. taking on only a handful of values within the cluster).
It is beyond the scope of this review to quantitatively compare the yields of each proposed source with those observed for each element, especially considering that most works to date have focussed on only one or two elements at a time (i.e. not testing whether the yields and required dilution that match a given element are able to match another). We refer the interested reader to e.g. \citet{Dantona:16,Prantzos:17}.
\subsubsection{The Origin of the Diluting Material}
\label{sec:origin_of_dilution}
For models that adopt massive stars as the origin of the enriched material ($>15$~\hbox{M$_\odot$}), it is assumed that a large reservoir of primordial material is left over within the cluster from the formation of the 1G of stars.
However, for models that invoke pollution from AGB stars, the origin of the diluted material is more difficult to explain. Once core-collapse SNe from the massive stars in the 1G begin to explode, all material left over from the formation of the 1G is expected to be removed to large distances (i.e., unbound from the cluster). Hence, the primordial material must then be (re)accreted from the surroundings. This material must also avoid being contaminated with the material (e.g., Fe) from the SNe, or else Fe spreads would be expected in all clusters \citep[e.g.,][]{Renzini:15}.
\citet{Conroy:11} suggested that this material can be accreted from the host galaxy as the clusters orbit through the interstellar medium (ISM). While accretion due to gravitational focussing is not efficient for the majority of cases, the authors found that if a reservoir of gas already exists within the cluster ($\sim10$\% of the stellar mass) it can sweep up material and the reservoir can grow. However, \citet{Dercole:11} have shown that this near-constant accretion of new material, when coupled with the adopted AGB yields, will not reproduce the observed abundance distributions. For the AGB model to work, the timing of the dilution needs to be very specific, with nothing being accreted (i.e. no diluting material present) when the most massive AGB stars are shedding their material, and an ever increasing amount of material being accreted after that, until the process is terminated, potentially by the onset of Type Ia SNe.
\citet{Dercole:16} have further developed the basic AGB scenario by placing the young GC inside a disc galaxy. In the model, the SNe from the 1G of stars blow a hole in the surrounding ISM and eventually the expelled material is lost to the host galaxy. They adopt the same basic scenario as \citet{Dercole:08}, in which the young cluster can retain the ejecta of AGBs and this material can cool and form a 2G of stars within the cluster. Eventually, the SNe-blown bubble begins to close (as SNe become less frequent) and material from the galaxy fills the hole, some of which is then accreted back onto the cluster. This scenario requires the surrounding material (out to 100s of pc) to be chemically identical to that of the FG stars within the cluster. Additionally, this model does not take into account the motion of the cluster within the host galaxy, in particular the high velocity dispersion expected in young galaxies \citep[c.f.,][]{Kruijssen:15}, hence it is not clear that the gas would be accreted onto the cluster. Note that massive clusters ($>10^6$~\hbox{M$_\odot$}) in galaxy mergers today do not appear to be able to efficiently accrete material from their surroundings \citep[][]{Longmore:15,CabreraZiri:15alma}.
\subsubsection{Al-Mg Anti-correlation}
Interestingly, the presence of anti-correlated Al and Mg abundances among cluster stars is one of the strongest arguments against the FRMS scenario, as the temperature required to efficiently destroy $^{24}$Mg is reached in the core of massive stars only at the very end of their main sequence evolution \citep[][]{Decressin:07a}. As a consequence, a large increase (by a factor of 1000) of the $^{24}$Mg(p,$\gamma$) reaction rate around 50~MK with respect to the nominal values is required to build the Al-Mg anti-correlation in the stellar core, and even in that case Mg depletion would be associated with a strong He enrichment \citep[up to Y$\sim$0.8 after dilution with unprocessed material, see][]{Chantereau:16}. Pollution from AGBs would in principle reproduce the observations more naturally, because both the depletion of Mg and the production of Al are sensitive to AGB metallicity, in the sense that more extended Al and Mg variations are expected at low metallicity, as observed \citep[][]{Ventura:16}. However, the resulting (anti-)correlations between Na, Mg, Al, and Si are greatly dependent on the mixing with He-burning material, e.g. because of the competition between TDU and HHB (see \S~\ref{SEC:NUCLEARREACTION}). Finally, the observed dependence of Mg depletion and Al production on metallicity can be explained in the VMS scenario if the mass loss leads to the formation of smaller very massive stars at higher metallicities \citep[e.g.][]{Vink:11}.
\subsection{Discrete vs. Continuous Abundance Spreads}
In the majority of GCs, 1P and 2P stars are observed to be distributed continuously in the Na-O plane. However, a number of studies revealed that the O, Na, and Al abundances of different sub-populations are clustered around certain values \citep[e.g.][]{Marino:08M4, Lind:11,Carretta:142808,Carretta:15N2808}. Nonetheless, the evidence of multimodality from high resolution spectra is still sparse. On the contrary, C and N (and CN band strength) multimodality is almost universal among clusters with intermediate to high metallicity (e.g. \citealp{Norris:87})\footnote{The bands of bi-metallic molecules like CN are weak in metal-poor GCs, because their strength has a quadratic dependence on the metallicity.}. HST photometry, in particular when including UV filters, also shows largely discrete RGBs and MSs in some cases (see \S~\ref{SEC:PHOTOMETRY} and Fig.~\ref{fig:uv_hst_legacy}). These findings indicate that the spectroscopic Na-O distributions may also be made up of discrete groups of stars, but that errors have blurred the distinction between the groups, causing the distribution to appear continuous \citep[e.g.][]{Carretta:13AL,Carretta:15N2808}.
The observed discreteness between two (or more) subpopulations would disfavour formation scenarios based on accretion onto pre-existing stars (e.g., the EDA scenario) or on 2P stars being born within the discs of fast rotating massive stars (e.g., the FRMS scenario). Such processes would result in a continuous range of abundance variations rather than the discrete distributions demanded by the observations.
\subsection{Radial Distributions, Velocity Dispersions, and Binarity}
The evidence of a more centrally concentrated 2P (see \S~\ref{sec:spatial_distributions}) is in qualitative agreement with most of the proposed scenarios. Also, the higher incidence of binaries with 1P composition \citep[][]{Dorazi:10,Lucatello:15} would again be consistent with a 2P preferentially found towards cluster inner regions. For example, in the \citet{Dercole:08} scenario the AGB ejecta form a cooling flow and rapidly collect towards the cluster centre, forming a concentrated 2P. The system thus starts with the 2P more centrally concentrated and, as the cluster evolves, the 1P and 2P stars mix. The long-term dynamical evolution of the different sub-populations with initial spatial segregation allows for efficient mixing in the innermost regions, where the local two-body relaxation time scale is shorter, potentially erasing any initial differences between subpopulations on a relaxation timescale \citep[e.g.][]{Vesperini:13}.
However, there are a handful of exceptions to this general behaviour, with the 2P stars {\em less} centrally concentrated than 1P stars \citep[][]{Larsen:15m15,Vanderbeke:15,Lim:16}. While differences in mass between the 1P and 2P stars due to He variations offer a potential explanation, the required He spreads are much larger than can be accommodated by the observations \citep[][]{Larsen:15m15}.
Different formation models may leave unique kinematic imprints that would allow one to distinguish between various scenarios \citep[i.e. different subpopulations showing different flattening; e.g.][]{Mastrobuono:13}. In this regard, the differential rotation of subpopulations provides precious insights, as such an observational property may survive the long term dynamical evolution of old GCs and would allow us to distinguish
different formation scenarios \citep[e.g.][]{Bellazzini:12,Vincent:15,Cordero:17}.
\subsection{Mass Budget Problem}
\label{sec:mass_budget}
A difficulty of all the proposed self-enrichment scenarios, realised early on, is that since the enriched populations within GCs are equal to, or larger than, the primordial population (i.e., \hbox{f$_{\rm enriched}$}$>0.5$), there simply would not be enough material processed through 1P stars to explain the number of 2P stars if standard stellar IMFs are adopted \citep[e.g.,][]{Prantzos:06}. This is known as the ``mass budget problem''. For example, for a standard IMF, only $\sim7$\% of the stellar mass in the 1G is in stars with masses between $5-9$~\hbox{M$_\odot$} (i.e. stars that pass through the AGB phase often associated with the AGB scenario). Low mass ($<0.8$~\hbox{M$_\odot$}) stars, on the other hand, make up $\sim40$\% of the initial mass fraction. For a typical GC, 2P stars represent $\sim67$\% of cluster stars, while 1P stars make up the remaining $\sim33$\%. Assuming that 100\% of the mass of every AGB star gets used to make 2P stars (an extreme assumption) and that the 2P has a standard IMF, AGB stars can only account for $4-5$\% of the population of 2P stars. If we assume that, on average, 50\% of the mass of each 2P star comes from diluting material, then AGB stars can account for $8-10$\% of the 2P stars.
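The arithmetic above can be written out explicitly. The sketch below is a back-of-the-envelope estimate using the rounded fractions quoted in the text (the exact values depend on the adopted IMF, so the result should only be read to within a factor of a few):

```python
# Back-of-the-envelope AGB mass budget, using the rounded fractions quoted
# in the text (a full IMF integration would change these at the tens-of-percent level).
f_agb = 0.07             # fraction of 1P initial mass in 5-9 Msun stars (future AGBs)
f_low = 0.40             # fraction of initial mass in long-lived (<0.8 Msun) stars
f_2p, f_1p = 0.67, 0.33  # typical present-day population fractions

m_1p_now = f_low                      # surviving low-mass 1P stars (1P initial mass = 1)
m_2p_needed = m_1p_now * f_2p / f_1p  # low-mass 2P mass implied by the observed ratio
# Extreme assumption: 100% of each AGB star's mass is recycled into 2P stars,
# which themselves follow a standard IMF (so only f_low survives to the present).
m_2p_made = f_agb * f_low

frac_pure = m_2p_made / m_2p_needed   # pure AGB ejecta
frac_diluted = 2.0 * frac_pure        # 50% of each 2P star is pristine (diluting) gas
print(f"AGB ejecta account for ~{frac_pure:.0%} of the required 2P stars")
print(f"...rising to ~{frac_diluted:.0%} with 50% dilution")
```

With these rounded inputs the pure-ejecta case comes out at a few per cent, consistent (to rounding) with the $4-5$\% quoted above.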
The commonly invoked solutions to this problem have been 1) to restrict, ad hoc, the mass range of 2G stars to $<0.8$~\hbox{M$_\odot$}, i.e. the mass range observed in GCs today (giving a factor of $\sim2.5$, i.e. accounting for $\sim20$\% of the needed mass), and 2) to assume that the number of 1G stars was much larger when the cluster formed, and that $\sim90-95$\% of them have been lost during the evolution of the cluster. These lost stars would then populate the field of the host galaxy.
For the AGB scenario, \citet{Dercole:08} and \citet{Conroy:12} estimate that GCs must have been at least $10-20$ times more massive than observed today; in the FRMS scenario the required factor is $8-25$. \citet{CabreraZiri:15alma} discuss this problem in detail and conclude that under more realistic assumptions the problem may be a factor of $2-3$ worse than previously suggested (i.e., requiring clusters to have been $\sim30-60$ times more massive at birth than presently). This is a basic prediction of these scenarios that can be directly tested observationally.
\subsubsection{Internal Mass Budget Problem}
\label{sec:internal_mass_budget}
We use the term ``internal mass budget problem'' to refer to the relative numbers of enriched and primordial stars within GCs. For a standard stellar IMF, only a small fraction ($2-8$\%) of the 1G mass is processed through a given stellar type (e.g., AGBs, FRMS, IBs) and released into the intracluster medium, even for optimistic yields. However, the present day observed \hbox{f$_{\rm enriched}$}~for clusters is $40-90$\% \citep[e.g.,][]{Milone:17}, and a significant amount of processed mass is needed for each of the enriched stars. The standard solution to this problem is to assume that GCs were $10-100$ times more massive at birth than they are currently, and that, since the 2G stars are thought to be born more centrally concentrated, a large fraction of the 1G stars were lost during their evolution.
\citet{Vesperini:10} have simulated the evolution of such a cluster in a Galactic-like potential and found that, in principle, with the right selection of parameters, such extreme mass loss can be reproduced with numerical models. However, in order to obtain such extreme mass loss, the authors needed to assume that GCs began their lives tidally limited and mass segregated, so that they expand due to stellar mass loss and lose stars to the galaxy over their tidal boundaries. The clusters would then start their lives with effective radii of 10s to 100s of pc (depending on the strength of the tidal field at birth), although it has not been demonstrated that such clusters would resemble the observed Galactic GCs after $\sim10$~Gyr of evolution. Present day GCs and YMCs have much smaller effective radii, with means around $\sim3$~pc \citep{Harris:96,Larsen:04}. Additionally, it is not clear that such a mechanism would work in environments with weaker tidal fields (that display similar \hbox{f$_{\rm enriched}$}\ as Galactic GCs) like that of GCs in the LMC/SMC or Fornax dwarf galaxy.
\citet{BastianLardo:15} and \citet{Milone:17} both looked at the \hbox{f$_{\rm enriched}$}\ as a function of the Galactocentric distance. If large fractions of 1P stars are lost due to the tidal field, even in the case of tidally limited and mass-segregated initial cluster conditions, there would be a strong expected relation between \hbox{f$_{\rm enriched}$}\ and the Galactocentric radius \citep[see][]{BastianLardo:15}. However, \hbox{f$_{\rm enriched}$}\ was not found to depend on the Galactocentric distance (or orbit), in contradiction with predictions from scenarios that invoke heavy mass loss. \citet{Milone:17} have found that \hbox{f$_{\rm enriched}$}\ is a strong function of present day GC mass, with higher mass GCs having larger \hbox{f$_{\rm enriched}$}\ (see Fig.~\ref{fig:correlations_with_mass}). This trend is opposite to that which would be expected if GCs underwent large amounts of mass loss. Higher mass clusters are expected to lose a lower fraction of their mass during their evolution, hence they should have enriched fractions closer to the primordial value.
\citet{Kruijssen:15} estimated the mass lost from GCs forming and evolving in a cosmological context, and found that massive GCs (with initial masses $>5\times10^5$~\hbox{M$_\odot$}) are only expected to lose a relatively small fraction of their initial masses (i.e., potentially being a factor of $\sim2-4$ more massive than currently seen). This is largely in agreement with non-MP driven estimates of mass loss from Galactic GCs \citep[e.g.,][]{Kruijssen09} and constraints from the form of the lower mass function in clusters (which is sensitive to mass loss; e.g., \citealt{Webb:15}).
\subsubsection{External Mass Budget Problem}
\label{sec:external_mass_budget}
We use the term ``external mass budget problem'' to refer to the number of primordial stars in GCs relative to that of the host galaxy. This is linked to the internal mass budget if one adopts models where large fractions of 1P stars are lost to the field. In principle, one should find an excess of 1P stars in the halo that came from GCs, at the position of the donor GCs in phase space (i.e. position, velocity and/or metallicity; e.g., \citealt{Schaerer:11}).
The number of GCs found (per unit galaxy mass or luminosity) is known to be high in some dwarf galaxies \citep[e.g.,][]{Larsen:12}. It becomes even higher at low metallicity (e.g., [Fe/H] $< -$1 dex) when GCs and field stars of the same metallicity are compared \citep[e.g.,][]{Harris:02}. \citet{Larsen:12} have exploited this observation to place some of the strictest constraints on the origin of MPs to date. The authors counted the number of 1P stars in GCs in the Fornax Dwarf galaxy below [Fe/H]=--2 dex and compared that to the number of stars observed in the field in the same metallicity range. They found that GC stars made up $\sim20-25$\% of the stars in this metallicity range. Even if all stars in this metallicity range formed in clusters, this would mean that these GCs could have only been a factor of 4 or 5 more massive than they currently are, in contradiction with the requirements of models requiring large mass loss. \citet{Larsen:14WLM} have extended this kind of study to the dwarf galaxies WLM and IKN and found similar results, showing that this is a common phenomenon and not linked to the specific evolutionary history of the dwarf galaxy host \footnote{In fact, the high specific frequencies observed in many dwarf galaxies \citep[e.g.,][]{Harris:13} argues against these heavy mass loss scenarios, assuming that GCs in dwarfs also host MPs (i.e. that MPs are ubiquitous).}.
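The counting argument of \citet{Larsen:12} is simple enough to write down directly. The sketch below uses the $\sim20-25$\% fraction quoted above and is purely illustrative:

```python
# Larsen et al. (2012)-style external mass-budget constraint for Fornax.
# f_in_gc is the fraction of metal-poor ([Fe/H] < -2 dex) stars found in GCs
# today, taken from the ~20-25% range quoted in the text.
for f_in_gc in (0.20, 0.25):
    # Even if every metal-poor star formed in a cluster, the lost 1P stars can
    # at most account for the field component, so the initial cluster mass was
    # at most 1/f_in_gc times the present one.
    max_boost = 1.0 / f_in_gc
    print(f"f_in_gc = {f_in_gc:.0%} -> initial mass at most {max_boost:.1f}x present")
```

This reproduces the quoted conclusion that the Fornax GCs could have been at most a factor of $4-5$ more massive at birth, well below the factor of $10-100$ required by heavy mass-loss scenarios.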
\citet{Khalaj:16} have suggested that, in the context of the FRMS scenario, the expulsion of gas (left over from the formation of the 1G and 2G stars) from the young GC could unbind large fractions of stars from the cluster at high velocity. If the stars leave with a large enough velocity they could potentially leave the young galaxy altogether. Note that this solution would not be applicable to the AGB scenario, as the cluster would already be gas free when the AGB stars begin to evolve. While possible, observations of YMCs today do not support the idea that gas expulsion leads to large mass loss within clusters \citep[c.f.,][]{Longmore:14}.
\subsection{Trends with Cluster Properties}
\label{sec:trends_cluster_properties}
As discussed in \S~\ref{sec:age_mass} the present day mass of a GC is directly linked to 1) the fraction of enriched stars present and 2) the extent of the abundance spreads in N, Na, O and He (see Fig.~\ref{fig:correlations_with_mass}), with higher mass clusters having larger enriched fractions and larger spreads. Assuming that the yields of the polluting source (e.g., AGB, FRMS, VMS, etc) are not dependent on the GC properties, this is difficult to explain in the classic scenarios as stellar yields (for a fully sampled IMF) should provide a constant amount of enriched material per unit stellar mass.
The increasing fraction of enriched stars at higher masses is contrary to the expectations of scenarios that invoke heavy mass loss to obtain large (present day) fractions of enriched stars (e.g., AGB or FRMS scenarios - see \S~\ref{sec:models} \& \S~\ref{sec:trends_cluster_properties}) as it would require higher mass clusters to lose larger fractions of their mass (i.e. large numbers of 1P stars), opposite to that expected from basic dynamical considerations \citep[e.g.,][]{Kruijssen:15}. Additionally, if GCs did lose large fractions of their initial masses, it would be extremely difficult to maintain these strong correlations with cluster mass (e.g., \citealt{Schiavon:13}). It is also unexpected that higher mass clusters should show larger abundance spreads. While in principle they may hold onto more of the processed material, they also should accrete/retain more primordial material (i.e., diluting material). Additionally, models already assume that all of the processed material is used in the formation of 2P stars (i.e., 100\% star-formation efficiency of the processed material).
While it is difficult to reconcile models to these observations, it is worth noting that no model put forward to date is able to account for both the fraction of enriched stars as well as the extent of the variations as a function of cluster mass at the same time. This is because, for the polluting sources suggested, the amount of enriched material produced is fixed per unit mass. The model can use that enriched material to either create larger abundance spreads (i.e., putting more of it in 2P stars) or use it to create more 2P stars (i.e. increasing f$_{\rm enriched}$), not both. The conclusion reached is that the enrichment mechanism must depend on the mass (or density) of the host cluster.
\subsection{Constraints from Young Massive Clusters (YMCs)}
\label{sec:ymcs}
One of the major discoveries made by the HST was that stellar clusters with masses and densities rivalling (and in some cases, greatly exceeding) GCs are still forming in the local Universe \citep[c.f.,][]{PortegiesZwart:10}. These YMCs are commonly referred to as proto-GCs as they have similar properties to that expected for the present day GCs when they were young \citep[e.g.,][]{Kruijssen:14}. Due to their proximity and relative brightness, we can use YMCs to test the scenarios for the formation of GCs and the MPs within them. The properties of YMCs themselves are discussed in \S~\ref{sec:ymcs_gcs}.
\begin{marginnote}[]
\entry{YMC}{Young Massive Cluster - a.k.a. young GC}
\end{marginnote}
While MPs have not been found to date within YMCs with ages of $<2$~Gyr \citep[e.g.,][]{CabreraZiri:16}, they can still provide useful constraints on the origin of MPs, as most theories put forward so far do not invoke any special physics present only in the early Universe. Most theories simply invoke the gravitational potential of the young GC as being deep enough to hold onto expelled stellar ejecta. Hence, even if YMCs are not the equivalent of proto-GCs, they can still be used to directly test predictions of the proposed scenarios.
\subsubsection{Constraints on Age Spreads within YMCs}
One of the key predictions of the AGB scenario is that clusters which are massive enough should be able to retain the ejecta of AGB stars and form subsequent stellar generations. \citet{Larsen:11} studied the resolved CMDs of seven massive ($10^5 - 10^6$~\hbox{M$_\odot$}) young clusters in nearby galaxies, and while there were features in the CMDs that were not well described by a standard isochrone, age spreads (of the order of 10s of Myr) were also inconsistent with the observations. In one case, NGC~1313-379, an age spread could not be reliably ruled out.
Following on the work of \citet{Peacock:13}, \citet{Bastian:13ymcs} searched for evidence of ongoing star-formation within a sample of $\sim140$ YMCs with ages between $10$~Myr and $1$~Gyr and masses between $10^4 - 10^8$~\hbox{M$_\odot$}. They searched for emission lines (i.e., H$\beta$ and O{\sc [iii]}) from the unresolved clusters and O-stars in the CMDs of the resolved clusters. No clusters were found with evidence of ongoing star-formation. \citet{CabreraZiri:14,CabreraZiri:16w3} took this analysis a step further by estimating the star-formation histories of two massive ($>10^7$~\hbox{M$_\odot$}) clusters in galactic merger remnants, NGC~34 (S1 - $\sim100$~Myr) and NGC~7252 (W3 - $\sim500$~Myr), using high S/N integrated optical spectra. In both cases, the clusters were best fit by a SSP (i.e., no evidence of a secondary starburst was found).
At an age of $\sim2$~Gyr, NGC~1978 is the youngest cluster that shows evidence for MPs \citep[][]{Martocchia:18a}. Due to its youth, it can be used to place tight constraints on age differences between the subpopulations. \citet[][]{Martocchia:18b} were able to identify two populations on the SGB of the cluster with UV-photometry and then compared the positions of the stars in each population in an optical CMD. In optical colours, the position of the stars along the SGB (essentially the vertical placement of the stars) is sensitive mainly to age (and not chemical anomalies). The authors found an age difference of $1\pm20$~Myr between the populations, i.e. that they were coeval.
Taken together the constraints on age spreads in YMCs suggest that they are less than $10$ or $20$~Myr. This does not directly constrain the FRMS, VMS, or EDA scenarios, but does place severe restrictions on scenarios that adopt AGBs as the polluters, as the first AGB stars do not evolve until $30$~Myr after the 1G forms.
\subsubsection{Constraints on Gas and Dust Reservoirs within YMCs}
In order for a massive cluster to form a second generation of stars, it must be able to retain a significant amount of gas within it for an extended period.
\citet{Longmore:15} used the predictions of the \citet{Dercole:08} model for multiple star forming events in the context of the AGB scenario, to show that the clusters should show extreme extinction in their inner regions, effectively being invisible in the inner $\sim3$~pc. He notes that no such massive ``ring'' clusters have been observed and that many massive ($>10^6$~\hbox{M$_\odot$}) clusters have been found with little or no extinction in the age range where the \citet{Dercole:08} model predicts that the 2nd generation should be forming. \citet{CabreraZiri:15alma} used deep ALMA observations of three massive ($>10^6$~\hbox{M$_\odot$}) clusters in the Antennae merging galaxies with ages between 50 and 200~Myr to search for any gas within them. Depending on the adopted conversion factor between the observed CO luminosity and total gas/dust mass, the authors could place upper limits of $1-10$\% of the stellar mass on any gas present within the clusters.
Finally, \citet{Bastian:14frms} and \citet{Hollyhead:15} have studied a sample of young clusters ($<10-20$~Myr) with masses between $\sim10^4$ and $\sim10^7$~\hbox{M$_\odot$}\ to see how long clusters remain embedded in their natal gas cloud. In contradiction to the predictions of the FRMS scenario of \citet{Krause:13}, who suggested that massive clusters should remain embedded for $\sim20$~Myr, observations showed that independent of mass (in the range studied) clusters were gas free within the first $2-4$~Myr of their lives, probably before the first SNe (for metallicities from $1/5^{\rm th}$ solar to solar). \citet{Whitmore:02} and \citet{Reines:08} studied the nearby starburst galaxies, NGC~4038/39 and NGC~4449, respectively, comparing radio continuum measurements with optical HST colours/magnitudes. Both works conclude that young massive clusters are largely gas free by an age of $7$~Myr, often considerably earlier.
We can conclude from these works that clusters are very efficient at removing (or consuming) any gas within them, from very young (few Myr) to very old ages ($>1$~Gyr). This applies to very massive clusters, even if simple escape velocity arguments would suggest that they should be able to retain any gas within them. For young clusters the Lyman-Werner flux within the cluster is expected to be very high \citep[e.g.,][]{Conroy:11} which will not allow the gas to cool sufficiently to collapse to the cluster centre, and the presence of X-ray binaries and other energetic sources (e.g., white dwarfs) and/or ongoing SNe appear to keep the cluster gas free throughout its lifetime. Hence, models that invoke the potential well of clusters to hold onto enriched gas are not supported by observations.
\subsubsection{Globular Clusters in Formation}
A major advance in the field may come with the launch of the James Webb Space Telescope (JWST), as, in specific circumstances, it will allow us to peer into galaxies at the epoch of GC formation (i.e., $z>2$). Some initial steps in this direction have already been taken by observing highly lensed galaxies at $z>3$ and their young GC populations. \citet{Vanzella:17} studied a sample of compact GC-like objects in five highly lensed galaxies including rest-frame UV/optical photometry and spectroscopy from HST and VLT/MUSE. Two of their objects, ID11 ($z=3.1169$) and GC1 ($z=6.145$) are particularly interesting due to their young ages ($<10$~Myr), small effective radii ($\lesssim50$~pc) and stellar masses ($1-20\times10^6$~\hbox{M$_\odot$}), which are expected for young GCs. The estimated properties are also similar to those of YMCs forming in nearby galaxies, supporting the idea that YMCs are indeed the equivalent of young GCs.
It is difficult to draw conclusions from such a small sample, but future work on lensed samples (as well as JWST samples) offer a chance to study the population statistics of young GCs. If clusters are $10-30\times$ more massive when they form than they are currently, JWST would be expected to observe many clusters in excess of $0.5-1\times10^7$~\hbox{M$_\odot$}\ \citep[e.g.,][]{Renzini:17}. Alternatively, in models for the evolution of GCs based on the observed properties of YMCs and the conditions expected to be experienced by the clusters throughout their lives (i.e., models not tuned to achieve severe mass-loss), only a handful of massive ($>0.5-1\times10^7$~\hbox{M$_\odot$}) clusters would be expected in each host galaxy \citep[e.g.,][]{Kruijssen:15}.
\begin{summary}[Summary Points of the Comparison Between the Predictions and Observations]
\begin{enumerate}
\item The observed positive correlations between f$_{\rm enriched}$, the extent of the abundance spreads, and cluster mass are directly at odds with scenarios that invoke large amounts of cluster mass loss in order to go from a cluster dominated by primordial stars to a cluster dominated by enriched stars.
\item This argues that the observed fractions are imprinted at birth, which essentially rules out all standard nucleosynthetic sources.
\item Quantitative comparison between the observed ranges of Na, O and He spreads with the predicted yields of suggested polluter stars shows that none (or any combination thereof) can match the observations. While each cluster is unique in the details of its chemistry (requiring stochasticity in its formation) most clusters have He spreads that are much too small for the observed Na and O spreads.
\item Li is a problem for all scenarios, as it should be highly depleted in all material that is enriched in Na and He (and depleted in O), whereas observations do not show depletion to the predicted amounts (even including dilution).
\item YMCs, with properties similar to those expected for young GCs, are still forming today. Studies have not found evidence for multiple star-forming epochs within the clusters, nor large gas/dust reservoirs needed to form further generations of stars. This is in tension with most proposed scenarios for the origin of MPs.
\item We graphically summarise the comparison between models and predictions in Fig.~\ref{fig:truth_table}.
\end{enumerate}
\end{summary}
\section{Peculiar Clusters: Fe Spreads, CNO, and s-process Variations}\label{SEC:PECULIAR_GCS}
While large variations in light element abundances are almost universal among old and massive clusters, the abundances of heavier $\alpha$ (Si, Ca, Ti), Fe-peak (Fe, Ni) and n-capture (Sr, Ba, La, Eu) elements within GCs vary little from star to star.
\subsection{Clusters with Multimodal Metallicity Distributions: $\omega$~Cen, M54 and Terzan~5}
\label{sec:omega}
Understanding the formation and evolution of $\omega$~Cen, the most massive cluster in the Galaxy, represents a challenge for all the MP scenarios.
The presence of a wide metallicity range \citep[--2.2 $\leq \lbrack $Fe/H$ \rbrack \leq$--0.6 dex; e.g.][]{Johnson:10} in its stars demands that it was massive enough to retain SN ejecta at very high velocity (or to accrete gas from its surroundings for long periods), allowing for multiple bursts of star formation, with each generation becoming progressively enriched in Fe. This possibly indicates that $\omega$~Cen constitutes the remnant of a tidally disrupted dwarf galaxy \citep[e.g.,][]{Bekki:03}.
Although the observational scenario appears far more complex than for normal GCs, $\omega$~Cen also displays the key chemical signatures of MPs. Each metallicity subpopulation in $\omega$~Cen shows its own Na-O anti-correlation (with the possible exception of the most metal-poor stars), with the more metal-rich, He-rich stars \citep[Y $\geq$ 0.35;][]{Joo:13} showing a Na-O correlation \citep{Marino:11Omega}.
The Na-O anti-correlation is also more extended towards higher metallicity, and the fraction of stars with high and intermediate Na also increases with metallicity \citep{Marino:11Omega}. This is difficult to explain within the AGB scenario framework, as the cooling flow from massive, metal-rich AGB stars would need to be delayed and further enriched by core-collapse supernovae to account for the more extended Na-O anti-correlation towards higher metallicity \citep[][]{Dantona:11Omega}.
An increase in the CNO sum and in the s-process elements with [Fe/H] is also observed \citep{Marino:11Omega,Johnson:10}.
Low-mass AGB stars (M $<$ 3 $\hbox{M$_\odot$}$) are observationally confirmed sites for s-process production, but they evolve on timescales longer (on the order of a Gyr) than the lifetimes of higher-mass AGB stars invoked to be responsible for the Na-O anti-correlation ($\sim100-200$ Myr, in the AGB scenario, see \S~\ref{sec:models_agb}).
Also, while AGB stars with masses $\lesssim$ 3 $\hbox{M$_\odot$}$ can produce Na, enhance the C+N+O content, and produce s-process elements, they cannot deplete O or produce He. The most recent age estimates report a maximum relative age spread of only $\sim$500 Myr among $\omega$~Cen populations \citep[][]{Tailo:16age}.
Therefore low-mass AGBs, which evolve on longer timescales, cannot be responsible for the C+N+O and s-process pattern observed in $\omega$~Cen.
M~54 is the nearest extragalactic GC we can observe and the second most massive GC in the halo.
Even though the M~54 metallicity distribution has a significantly smaller dispersion than that of $\omega$~Cen \citep[e.g.][]{Carretta:10M54}, both clusters have been proposed to represent snapshots of nuclear star clusters in different stages of evolution. In the case of M~54, the associated dwarf galaxy, i.e. Sagittarius, is still visible, whereas the parent system once hosting $\omega$~Cen has been disrupted.
Both metallicity groups in M~54 display their own Na-O anti-correlation,
with the metal-poor group showing a less extended Na-O anti-correlation with respect to the metal-rich stars, as observed for $\omega$~Cen \citep[][]{Carretta:10M54}.
Terzan~5 is a massive ($\sim$ 10$^{6}$ $\hbox{M$_\odot$}$) stellar system located in the bulge of the Galaxy. The two distinct red clumps in its CMD \citep{Ferraro:09} have been linked to stellar populations with different metallicity \citep[although see][for an alternative explanation]{Lee:15}. Indeed, a large and multimodal metallicity distribution (--0.8 $\leq$ [Fe/H] $\leq$ +0.3 dex) has been reported \citep{Massari:14}, however there is no consensus on the presence of light element spreads in Terzan~5 \citep[e.g.][]{Origlia:11,Schiavon:17Ter5}. The $\alpha$-element abundance pattern of the metallicity sub-populations mirrors what is observed for field stars in the Bulge, with $\alpha$ enhancement up to about solar metallicities and a decreasing [$\alpha$/Fe] toward the solar ratio at super-solar [Fe/H] \citep{Origlia:11}. The presence of two distinct MSTOs suggests that the dominant sub-solar metallicity components developed $\sim$12 Gyr ago, while the super-solar groups formed only $\sim$ 4.5 Gyr ago after a prolonged period of quiescence \citep[e.g.][]{Ferraro:16}. This finding has led to the suggestion that Terzan~5 may constitute the remnant core of a dwarf galaxy, or perhaps even a surviving fragment of the formation of the original bulge.
\subsection{Clusters with Small Unimodal Fe Spreads and s-process Bimodality}\label{sec:SCLUSTERS}
GCs characterised by a dispersion in their s-process elements (e.g., M~22, NGC~1851, M~2, NGC~362, M~19, NGC~5286) have received growing attention in recent years.
The observed s-process bimodal distribution is associated
with a split SGB in optical colours \citep[e.g.][]{Piotto:12} and, when C, N, and O abundances for unevolved stars are available, to variations in the net C+N+O content \citep[e.g.][]{YongCNO}. Each s-process group displays its own Na-O anti-correlation, with the average Na abundance positively correlated with s-process enrichment \citep[][]{Marino:11M22,Yong:14M2}. Finally, s-rich stars are possibly slightly enhanced in Fe \citep[e.g.,][]{DaCosta:15,Lim:17}.
Since the presence of an [Fe/H] spread constrains the potential well in which a stellar system formed, a dispersion in [Fe/H] implies that the system was able to retain SN ejecta and host multiple star formation events\footnote{The average [Fe/H] dispersions for MW GCs appear to be significantly smaller than the spectroscopic [Fe/H] spreads of $\sim$0.3 dex or more of dwarf galaxies, as no GC less luminous than M$_{\rm V}$ = --10 shows a substantial ($\geq$0.1 dex) [Fe/H] dispersion \citep[][]{Willman:12}.}. Indeed, it has been speculated that they
represent the nuclear remnants of a tidally disrupted dwarf galaxy \citep[e.g.][]{DaCosta:15,Marino:15N5286}.
This leads to the idea that GCs with small Fe variations would have contributed a significant fraction of stars to the construction of the Galactic Halo, along with their host galaxies.
However, the presence of such small intrinsic Fe variations in a number of GCs is still debated, as they
can be artificially introduced by the method used to derive atmospheric parameters of stars (\citealp{MucciarelliM22,LardoM2}; but see also \citealp{Lee:16}).
For example, very little star-to-star Fe variation is found when metallicities are derived from Fe II lines and the surface gravities from photometry. Conversely, when gravities are derived by imposing ionisation equilibrium between Fe I and Fe II, the [Fe/H] distribution is broad. Yet, the stellar gravities required to match [FeI/H] and [FeII/H] would lead to unphysical stellar masses for giants \citep[e.g.][]{MucciarelliM22}. Interestingly, different FeI and FeII metallicity distributions are only observed in clusters which also show s-process and light-element variations. While the cause of the observed discrepancy between Fe abundances as inferred from Fe I and Fe II has not yet been determined, this finding suggests caution when measuring abundances using the classical spectroscopic approach on clusters with s-process variations.
Finally, a discrepancy between Fe abundances measured from Fe I and Fe II lines, observed for RGB stars with different s-process content in a few clusters, is also seen in the AGB phase of GCs with no intrinsic variations in heavy elements, where Fe I lines provide systematically lower abundances than for RGB stars \citep[e.g.][Wang et al., submitted]{Lapenna:15M62}. Currently, there is no explanation for this effect.
\subsection{The Blue Tilt in Cluster Populations}
\label{sec:blue_tilt}
Observations of GC populations, especially around massive early type galaxies (ETGs) which contain thousands of such clusters, have shown that the metal poor population of clusters (i.e. the blue GCs) displays an average trend of becoming redder (more metal rich) as a function of increasing brightness \citep[e.g.,][]{Harris:09bluetilt}. The origin of this ``blue tilt" is still uncertain, but a popular explanation for the phenomenon is that more massive clusters are able to retain not just the stellar ejecta (i.e., see \S~\ref{sec:models}) but also the SNe ejecta from a first generation of stars, and subsequently form a more metal rich second generation. The average metallicity of the cluster would then increase with each successive generation \citep[see][]{Strader:08,Bailin:09}. One problem with such scenarios is that it is unclear how a cluster could retain the ejecta from SNe.
An alternative explanation, which can also account for the fact that the blue tilt is not observed in all GC populations, is that it is due to how the metal poor GC population is assembled, namely through the accretion of relatively low mass metal poor dwarf galaxies and their GC populations. As lower mass dwarf galaxies have lower ISM pressures than their higher-mass counterparts, they are expected to form fewer high-mass clusters \citep[e.g.,][]{Kruijssen:15}. Massive GCs will preferentially come from higher mass dwarf galaxies, which in turn are more likely to be metal rich. This will result in a (statistical) upper envelope in the mass-metallicity plane for GC populations, skewing the mean metallicity to higher values for high cluster masses (Usher et al. in prep). Such a scenario can be tested with the next generation of galaxy formation simulations that include GC formation and evolution \citep[e.g.,][]{Pfeffer:18}.
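The statistical upper-envelope argument can be illustrated with a toy Monte Carlo sketch. Every number below (the galaxy mass range, the linear mass-metallicity relation, the $M^{-2}$ cluster mass function, and the truncation mass scaling with host mass) is an illustrative assumption rather than a value from the literature; the point is only that a host-mass-dependent upper cut-off skews the mean metallicity of the most massive clusters upward:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not values from the review): dwarf-galaxy
# masses drawn log-uniformly, a linear mass-metallicity relation, and a
# dN/dM ~ M^-2 cluster mass function truncated at a mass that scales
# with the host galaxy mass.
n_gal, n_cl, m_min = 2000, 20, 1e3
m_gal = 10.0 ** rng.uniform(7.0, 10.0, n_gal)   # galaxy stellar mass [Msun]
feh_gal = -1.7 + 0.3 * (np.log10(m_gal) - 7.0)  # toy mass-metallicity relation
m_trunc = 1e-3 * m_gal                          # cluster truncation mass

# Inverse-CDF sampling of the truncated M^-2 power law for each galaxy
u = rng.random((n_gal, n_cl))
mt = m_trunc[:, None]
m_cl = 1.0 / (1.0 / m_min - u * (1.0 / m_min - 1.0 / mt))
feh_cl = np.broadcast_to(feh_gal[:, None], m_cl.shape)
m_cl, feh_cl = m_cl.ravel(), feh_cl.ravel()

# Massive clusters can only form in massive (hence metal-rich) hosts, so
# the mean metallicity rises with cluster mass: a statistical blue tilt.
low = feh_cl[m_cl < 1e4].mean()
high = feh_cl[m_cl > 1e5].mean()
print(f"mean [Fe/H], clusters below 1e4 Msun: {low:.2f}")
print(f"mean [Fe/H], clusters above 1e5 Msun: {high:.2f}")
```

Because clusters above the high-mass cut can only form in the more massive (and hence more metal rich) dwarfs, the mean [Fe/H] of the high-mass bin exceeds that of the low-mass bin, qualitatively reproducing a blue tilt without any self-enrichment.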
\section{Young Massive Clusters and Their Relation to Globular Clusters}
\label{sec:ymcs_gcs}
While historically GCs were treated as objects that exclusively formed in the early Universe, it is now clear that objects with properties very similar to those expected of young GCs are still forming today. Some of these YMCs have masses and densities well in excess of present day GCs, and their ages range from clusters forming today to $\sim6-8$~Gyr. While such clusters do exist in the Galaxy (with masses up to $\sim10^5$~\hbox{M$_\odot$}), they are difficult to study due to the often extreme (differential) extinction and crowding in the Galactic plane. However, we are fortunate that our nearest extragalactic companions, the LMC and SMC, host large populations of such clusters. They are near enough that we can resolve them into their individual stars, especially with HST, and in some cases can obtain high-resolution spectra of individual stars.
A major result in the field in the past decade has been the finding that many of these clusters are not well represented by a single stellar isochrone, but instead show features such as dual MSs and extended MSTOs among other unexpected features. The hope has been that these features are related to the MPs observed in the ancient GCs, and that they could then be used to pinpoint the physical mechanisms responsible for MPs.
\subsection{Extended Main Sequence Turn-offs in Young and Intermediate Age Clusters}
\label{sec:eMSTO}
The high precision photometry achievable with the Advanced Camera for Surveys (ACS) on HST allowed the construction of CMDs of massive young and intermediate age clusters in the LMC/SMC in unparalleled detail. As is often the case, this increase in detail led to unexpected features that could not be explained within a traditional framework. In this case, it was the discovery of eMSTOs in the intermediate age clusters ($1-2$~Gyr) in the LMC/SMC that could not be explained by photometric uncertainties or stellar binarity. This was first reported by \citet{Bertelli:03} and \citet{Mackey:07} and shown to be a general characteristic in subsequent works \citep[e.g.,][]{Mackey:08,Milone:09,Piatti:14}.
\begin{marginnote}[]
\entry{eMSTO}{Extended Main Sequence Turn-off}
\end{marginnote}
The initial explanation for the eMSTOs was that the clusters were formed in an extended star-forming event, lasting $200-700$~Myr \citep[e.g.,][]{Milone:09,Goudfrooij:14}. Due to this possibility, many works have attempted to link the observations of the eMSTO clusters with those of the ancient GCs hosting MPs \citep[e.g.,][]{Goudfrooij:14}. However, subsequent work has shown that the eMSTO phenomenon is unlikely to be caused by an actual age spread within the clusters (see \S~\ref{sec:ymcs}). Further studies have found that YMCs with ages between $20-300$~Myr also show eMSTOs, and that the inferred age spread is directly proportional to the age of the cluster \citep{Niederhofer:15rotation}. Additionally, studies focussed on other regions of the CMDs that should also be affected by age spreads have found results that are inconsistent with the age-spread interpretation \citep[e.g.,][]{Li:16}. Finally, at $\sim2$~Gyr, NGC~1978 does not show an eMSTO \citep[][]{Martocchia:18b} despite its relatively high mass.
This points instead toward a stellar evolutionary effect. One such effect is stellar rotation, first proposed by \citet{Bastian:09rotation} and subsequently studied in more detail by \citet{Brandt:15} using the Geneva stellar evolutionary models that include rotation. Such models do well in predicting the relation between the inferred age spread and the age of the cluster, as well as the lack of eMSTOs in clusters with ages above $\sim2$~Gyr due to magnetic braking of the stars.
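The proportionality between the inferred age spread and the cluster age can be seen with a back-of-the-envelope sketch. Assuming a power-law main-sequence lifetime $t \approx 10\,\mathrm{Gyr}\,(M/\mathrm{M}_\odot)^{-3}$ and letting rotation mimic a fixed fractional shift $\delta$ in the effective turn-off mass (both illustrative assumptions, not the Geneva models themselves), the apparent age spread is $\Delta t = [(1-\delta)^{-3} - (1+\delta)^{-3}]\,t$, i.e. directly proportional to $t$:

```python
# Toy scaling (an assumption, not the Geneva models): main-sequence
# lifetime t ~ 10 Gyr * (M/Msun)^-3, with rotation mimicking a fixed
# fractional shift delta in the effective turn-off mass.
DELTA = 0.05  # illustrative 5% effective mass shift from rotation

def turnoff_mass(t_gyr):
    """Mass leaving the main sequence at age t (Gyr), from t = 10 * M^-3."""
    return (10.0 / t_gyr) ** (1.0 / 3.0)

def apparent_age_spread(t_gyr, delta=DELTA):
    """Age spread inferred if the turn-off is smeared by +/- delta in mass."""
    m = turnoff_mass(t_gyr)
    t_young = 10.0 * (m * (1.0 + delta)) ** -3.0  # brighter, "younger" edge
    t_old = 10.0 * (m * (1.0 - delta)) ** -3.0    # fainter, "older" edge
    return t_old - t_young

for t in (0.1, 0.3, 1.0, 2.0):  # cluster ages in Gyr
    dt = apparent_age_spread(t)
    print(f"age {t:4.1f} Gyr -> apparent spread {1e3 * dt:5.0f} Myr "
          f"({dt / t:.2f} of the age)")
```

For $\delta = 0.05$ the apparent spread is a constant $\sim30$\% of the cluster age, which is of the same order as the $200-700$~Myr spreads inferred for $1-2$~Gyr clusters.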
Finally, recent high-resolution studies of A and F ($1-2.5$\hbox{M$_\odot$}) stars have found evidence for light-element abundance (Na, O, Mg) spreads in rapidly rotating stars in open clusters \citep{Pancino:18}. The origin of these variations (and their link to GCs) is still unknown, but rotational mixing and diffusion are possible causes.
It is striking that the eMSTO phenomenon disappears at (nearly) the same age at which MPs on the RGB begin to be seen \citep{Martocchia:18a,Martocchia:18b}. How/whether these two phenomena are related is a rich avenue for future work.
\subsection{Split Main Sequences}
Another surprising feature found in resolved CMDs of YMCs in the LMC/SMC is that many of them, when viewed in blue/UV filters, display bi-modal (i.e. split) MSs \citep{Milone:15n1856}. At first glance, this appears similar to the split MSs in ancient GCs, which are due to light-element abundance spreads (e.g., He, C, N, O spreads). However, \citet{Milone:15n1856} investigated possible causes of the splits, creating stellar models that included light-element abundance spreads, iron spreads, C+N+O spreads and also age spreads. They concluded that none of the models was able to explain the split MS observed in clusters like NGC~1856 ($\sim300$~Myr, $\sim10^5$~\hbox{M$_\odot$}).
\citet{Dantona:15} used the SYCLIST stellar models \citep{Georgy:14} that include rotation (including inclination effects) to model NGC~1856, and showed that rotation could explain the observed MS split if the stellar rotation distribution was bi-modal, with a minor peak at $\omega<0.3$ and a dominant peak at $\omega\sim0.9$. It is interesting to note that in all the YMCs in the LMC studied to date with split MSs, the red MS (corresponding to the rapid rotators) is generally the dominant population (between 42\% and 75\% - e.g., \citealt{Milone:16n1755,Milone:17n1866}). These stars would be rotating much faster than typically found in the field or in lower mass open clusters \citep{McSwain:05}.
Such an extreme rotational distribution should lead to observationally detectable signatures, as a large population of rapid rotators should have a high rate of Be stars, i.e. stars near the critical rotation limit with partially ionised decretion discs. \citet{Bastian:17} looked for such a population of Be stars and indeed found a much higher fraction in the $\sim100$~Myr cluster NGC~1850 and the $\sim300$~Myr cluster NGC~1856. In both clusters, the authors found Be fractions between $30-60$\% near the MSTO, much higher than found in the field or in lower mass clusters. These observations confirmed the high fraction of rapid rotators in YMCs, lending support to the idea that the split MS is caused by a bi-modal rotational distribution.
However, further observations to measure the actual rotational distribution in YMCs are required to directly test this scenario. Preliminary results appear to confirm the bi-modal rotational distribution, with a large fraction of rapidly rotating stars \citep{Dupree:17}. If true, the conclusion would be that stars forming in dense/massive clusters retain a signature of their origin, namely their rapid rotation rates, although why stars born in clusters would preferentially be born with high rotation rates is currently unknown.
\subsection{Chemical Anomalies in YMCs?}
While YMCs have provided strong tests for the theories of the formation of MPs, it is not yet clear whether they host such abundance anomalies. As discussed in \S~\ref{sec:age_mass}, initial spectroscopic studies of a limited number of stars in massive young and intermediate age clusters in the LMC did not find evidence of MPs \citep{Mucciarelli:08,Mucciarelli:14LMC}. This has been confirmed through photometric studies based on large samples \citep{Martocchia:17N419,Martocchia:18a}.
\begin{marginnote}[]
\entry{RSG}{Red Supergiant Star}
\end{marginnote}
The young and intermediate age LMC and SMC clusters are quite massive relative to their open cluster counterparts in the Galaxy; however, as discussed in \S~\ref{sec:ymcs}, YMCs with much higher masses (by factors of 10 to 1000) are known to exist. Unfortunately, the distances to these extragalactic objects generally make it impossible to obtain high precision photometry or spectroscopy for individual stars. Hence, some studies have attempted to search for the spectroscopic fingerprint of MPs in integrated light. \citet{CabreraZiri:16} and \citet{Lardo:17Antennae} have exploited the fact that YMCs are dominated by the light of RSGs at young ages (in the near-IR), and that RSGs all have similar temperatures, meaning that their integrated light can be studied as that of a single RSG. If MPs were present in these massive YMCs, we would expect their Al and Na abundances to be higher than those of field RSGs at the same Fe-abundance. These authors studied four clusters with masses between $5-20 \times10^5$~\hbox{M$_\odot$}\ and searched for evidence of Al enhancement, but none was found in any of the clusters, despite their high masses. This RSG-focussed technique is sensitive to chemical anomalies in stars above $\sim15$~\hbox{M$_\odot$} \citep[e.g.,][]{Davies:08}, although integrated light spectroscopy can in principle be used to search for MPs at any age, with proper modelling of the underlying stellar populations (e.g., Hernandez et al.~2017).
One potential caveat to note about the previous studies is that they are not comparing like-with-like, at least in terms of stellar mass. All studies of young and intermediate age clusters have focussed on the evolved portions of the CMD (e.g., the RGB), which at 200~Myr or 2~Gyr corresponds to a stellar mass of $\sim3.6$~\hbox{M$_\odot$}\ and $\sim1.5$~\hbox{M$_\odot$}, respectively (at $[Fe/H]=-0.7$). At ages of $6$ and $10$~Gyr the stellar mass on the RGB is $\sim1.0$~\hbox{M$_\odot$}\ and $\sim0.9$~\hbox{M$_\odot$}, respectively. While the main sequence of the LMC/SMC young/intermediate age massive clusters is out of range for spectroscopy with existing instruments, there is potential to use HST to obtain N-sensitive photometry to compare the same mass range in young and ancient clusters (i.e., $< 0.8$~\hbox{M$_\odot$}). Additionally, future instruments like JWST or the E-ELT may provide important insights at lower stellar masses.
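The quoted masses can be roughly recovered from a simple power-law lifetime scaling, $t \approx 10\,\mathrm{Gyr}\,(M/\mathrm{M}_\odot)^{-3}$ (an illustrative approximation, not the isochrone calculation behind the numbers in the text); it reproduces the quoted values to within $\sim20$\%:

```python
# Illustrative main-sequence lifetime scaling t ~ 10 Gyr * (M/Msun)^-3;
# the quoted masses in the text come from proper isochrones, not this toy.
def ms_turnoff_mass(t_gyr):
    """Mass of stars leaving the main sequence at age t (Gyr)."""
    return (10.0 / t_gyr) ** (1.0 / 3.0)

for t_gyr, quoted in [(0.2, 3.6), (2.0, 1.5), (6.0, 1.0), (10.0, 0.9)]:
    m = ms_turnoff_mass(t_gyr)
    print(f"{t_gyr:5.1f} Gyr: scaling gives {m:.1f} Msun "
          f"(text quotes ~{quoted} Msun)")
```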
\section{Multiple Populations on Galaxy Scales}
\label{sec:mps_galaxy_scales}
Dwarf galaxies have stellar masses ranging from the GC mass scale up to a few $\times10^9$~\hbox{M$_\odot$}. In many cases, their stellar populations are not too dissimilar from those of certain GCs (like $\omega$-Cen and M54), with modest metallicity spreads and a dominant old stellar population (see~\S~\ref{sec:omega}). It is normally assumed that MPs are not present among the field stars of dwarfs, due to 1) the assumption that MPs are restricted to GCs and 2) the low fraction ($\sim3$\%) of 2P stars in the field of the MW halo \citep[e.g.,][]{Martell:11}, which is thought to come, at least partially, from accreted satellite dwarf galaxies. We can infer a lack of a large population of stars with large $\Delta(Y)$ values within local dwarf galaxies, based on the morphology of the HB. The HBs of dwarf galaxies lack the ``extreme'' stars seen in GCs with large Y spreads (e.g., NGC~2808). For example, detailed modelling of the HB of the Carina dwarf galaxy did not yield evidence of Y spreads within the populations (although age and Fe spreads were identified - \citealt{Savino:15}). Additionally, \citet{Norris:17} searched for MPs in the Carina dwarf galaxy using 63 RGB stars (looking for an Na-O spread) and only found stars with typical abundance patterns, i.e. 1P stars.
Stepping further afield, \citet{Strader:13} studied a very massive ($\sim2\times10^8$~\hbox{M$_\odot$}) and dense (R$_{\rm h}=24$~pc) ultra-compact dwarf galaxy around the Virgo elliptical galaxy, M60 (M60-UCD1). The authors find evidence for the object to be enriched in N ([N/Fe]$=+0.61$) and Na ([Na/Fe]$=+0.42$), hence it likely hosts MPs, with a large population of highly enriched 2P stars.
While studies of MPs and chemical anomalies have largely focussed on massive and dense star clusters, there is growing evidence that they may be present outside clusters, making up a significant fraction of the stars in certain parts of galaxies. \citet{Schiavon:17bulge} discovered a large population of N-rich stars, which display correlations between [N/Fe] and [Al/Fe], as well as being anti-correlated with [C/Fe], i.e. they display the same chemical anomalies as stars in GCs. The authors focussed on the low metallicity regime and found that for $[Fe/H] < -1$, the chemically anomalous stars make up $\sim7$\% of the stars of the bulge/inner-halo. Extrapolating their results to the full bulge/inner halo, they estimate that the mass of enriched stars is a few times $10^8$~\hbox{M$_\odot$}, which is a factor of $\sim8$ more than the mass of the entire Galactic GC system. This fact, and the lack of correspondence between the enriched star and GC population metallicity distributions, suggests that the discovered enriched stars in the bulge/inner-halo did not originate from dissolved GCs.
If true, this would suggest that MPs may not be a product of only GCs, but may instead be a general feature of certain stellar populations. While currently still inconclusive, there is tantalising evidence that MPs may be present in other dense and old stellar populations. For example, the mean [N/Fe] and [Na/Fe] abundances of ETGs increase with increasing velocity dispersion \citep[e.g.,][]{Schiavon:07,Conroy:14} which could imply that the fraction of enriched stars is an increasing function of velocity dispersion. Recently, \citet{vanDokkum:17} have used high S/N spatially resolved spectra of massive ETGs and find that the mean [Na/Fe] abundance increases towards the galaxy centres while [O/Fe] decreases, again suggesting that MPs may be present in the centres of such systems. While high velocity dispersion within ETGs is also positively correlated with high [Mg/Fe] \citep[e.g.][]{Walcher:15}, \citet{vanDokkum:17} found that relative to the outskirts of the galaxies, [Mg/Fe] was depressed in the central regions. Hence, the centres of ETGs appear to show many of the trends seen in MPs.
Another potential link between MPs and the massive ETGs is through the UV-upturn \citep[e.g.,][]{Oconnell:99}. The origin of the UV-upturn is still under debate, but the presence of a large number of extreme HB stars is one of the leading contenders. As seen in Galactic GCs, like NGC~2808, the presence of a large He spread amongst cluster stars is correlated with an extreme population of HB stars (metallicity also affects the fraction of stars that pass through an extreme HB phase). Hence, if ETGs do host MPs, it would imply that the UV-upturn is caused by large He spreads, which would be correlated with large Na and N-spreads \citep{Chantereau:18}.
Further work is needed to explicitly test if MPs are present within ETGs and if so, in what fractions. However, if MPs are found to make up a significant fraction of ETG stars, it would have a dramatic effect on our understanding of MPs and their origin. It may imply, for example, that we need to explore non-cluster focussed scenarios for the origin of MPs.
\section{Future Directions}
Throughout this review we have attempted to highlight topics that are particularly uncertain and which new theoretical and observational studies are likely to lead to important advances. Here we briefly summarise some of the directions that we feel are likely to be the most fruitful in the next few years.
\begin{itemize}
\item While observations of evolved stars in YMCs have not revealed the presence of MPs, it is not clear whether MPs are entirely absent or merely restricted in the stellar mass range where they appear. The unexpected transition at $\sim2$~Gyr, below which MPs are not found in evolved stars and above which they are, suggests that MPs may be present in many YMCs, but only in low mass stars (i.e., lower-mass main sequence stars).
\item In order to identify the cluster parameter(s) that control whether MPs are present (age, mass, density, metallicity, etc), the parameter space of clusters should be further sampled. Looking at low-density GCs in the outer Galactic halo, or those that have been accreted, could be particularly fruitful. Also, extending the age range of clusters under study may place stricter limits on the appearance of MPs.
\item Further work quantifying how the properties of MPs within clusters depend on the cluster properties would be very beneficial. Is cluster mass or density the controlling factor for the fraction of enriched stars or the degree of abundance spreads within clusters?
\item To date, only a handful of GC stars have been fully characterised in terms of their abundances (He, C, N, O, Na, Al, Mg, etc). Systematic studies of the precise way all these elements are related, and of the variety between clusters, may help pinpoint the origin of MPs. Dissecting the (pseudo)colour-colour diagrams of the HST UV GC survey may offer an efficient means to search for many of these correlations. What causes the spread in the 1P stars in the pseudo-colour diagrams in some clusters and not in others? Detailed modelling of the colour spreads is needed to characterise the abundance variations in a large sample of GCs (as well as confirmation through spectroscopic follow-up).
If spectroscopy confirms that the colour spread among 1P stars is due to He variations (associated with small-or-no C-N-Na-O variations), alternative physical mechanisms for the origin of MPs --other than stellar nucleosynthesis-- will need to be investigated.
\item As discussed in \S~\ref{sec:mps_galaxy_scales} there is tentative evidence that MPs may not be restricted to GCs but may be present in other environments as well (dwarf galaxies, bulge/inner halos of galaxies and ETGs). Studies confirming or refuting this may result in a major breakthrough in the field.
\item Recent theoretical studies have largely focussed on developing existing scenarios, exploring ways in which the models can be changed in order to provide a better match to observations. We argue that the present observations do not support the traditional theories of self-enrichment through the formation of multiple generations of stars. Hence, new theories for the origin of MPs (e.g., non-standard stellar evolution, very massive stars, etc) should be encouraged and developed to test against the wealth of observational data now in hand.
\item One property of stars that affects stellar evolution, and which is dependent on environment, is stellar rotation. Stars in dense/massive young clusters rotate significantly faster than those in the field or in lower mass open clusters. Additionally, the age boundary for whether MPs are present in evolved stars ($2-2.5$~Gyr) coincides with the turn-off mass ($\sim1.5-1.6$~\hbox{M$_\odot$}) below which MSTO and RGB stars would have been magnetically braked (i.e., at this age clusters no longer show extended main sequence turn-offs). Could MPs be caused by a non-standard stellar evolutionary effect linked to rotation?
\end{itemize}
\section*{DISCLOSURE STATEMENT}
The authors are not aware of any affiliations, memberships, funding, or financial holdings that
might be perceived as affecting the objectivity of this review.
\section*{ACKNOWLEDGMENTS}
We are grateful to Soeren Larsen, Ivan Cabrera-Ziri, Corinne Charbonnel, Maurizio Salaris, Mark Gieles, Emanuele Dalessandro, Alessio Mucciarelli, Chris Usher, William Chantereau, Elena Pancino, Santi Cassisi, Francesca D'Antona, Henry Lamers, Eugenio Carretta, and Angela Bragaglia for helpful discussions and detailed comments on earlier versions of the manuscript. Additionally, we thank the editor, Sandy Faber, for suggestions that significantly improved the manuscript. NB acknowledges financial support from the Royal Society (University Research Fellowship) and the European Research Council (ERC-CoG-646928, Multi-Pop). CL thanks the Swiss National Science Foundation for supporting this research through the Ambizione grant number PZ00P2\_168065.
\bibliographystyle{ar-style2}